Preprint PUBDB-2024-01644

Large Language Models for Human-Machine Collaborative Particle Accelerator Tuning through Natural Language


2024



Report No.: arXiv:2405.08888

Abstract: Autonomous tuning of particle accelerators is an active and challenging field of research with the goal of enabling novel accelerator technologies for cutting-edge, high-impact applications such as physics discovery, cancer research and material sciences. A key challenge with autonomous accelerator tuning remains that the most capable algorithms require an expert in optimisation, machine learning or a similar field to implement the algorithm for every new tuning task. In this work, we propose the use of large language models (LLMs) to tune particle accelerators. We demonstrate on a proof-of-principle example the ability of LLMs to successfully and autonomously tune a particle accelerator subsystem based on nothing more than a natural language prompt from the operator, and compare the performance of our LLM-based solution to state-of-the-art optimisation algorithms, such as Bayesian optimisation (BO) and reinforcement learning-trained optimisation (RLO). In doing so, we also show how LLMs can perform numerical optimisation of a highly non-linear real-world objective function. Ultimately, this work represents yet another complex task that LLMs are capable of solving and promises to help accelerate the deployment of autonomous tuning algorithms to the day-to-day operations of particle accelerators.
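The tuning loop described in the abstract (operator states a goal in natural language; the model iteratively proposes new machine settings and observes measurements) could be sketched roughly as follows. This is purely illustrative and not the paper's implementation: the objective, the settings vector, and the `llm_propose` placeholder (here a random perturbation standing in for an actual LLM call with the prompt and measurement history) are all assumptions.

```python
import random

def objective(magnet_settings):
    # Toy stand-in for a beam quality measurement: higher is better.
    # (Hypothetical; the paper tunes a real accelerator subsystem.)
    target = [0.3, -0.7, 0.1]
    return -sum((s - t) ** 2 for s, t in zip(magnet_settings, target))

def llm_propose(history):
    # Placeholder for the LLM call. In the setting the abstract describes,
    # the prompt would carry the operator's natural-language goal plus past
    # (settings, measurement) pairs, and the model would reply with new
    # settings. Here a random perturbation of the best point so far stands in.
    best_settings, _ = max(history, key=lambda h: h[1])
    return [s + random.gauss(0, 0.1) for s in best_settings]

def tune(n_steps=50):
    # Start from a neutral setting and iterate: propose, measure, record.
    start = [0.0, 0.0, 0.0]
    history = [(start, objective(start))]
    for _ in range(n_steps):
        proposal = llm_propose(history)
        history.append((proposal, objective(proposal)))
    # Return the best settings seen and their measured objective value.
    return max(history, key=lambda h: h[1])

best_settings, best_score = tune()
```

Since the returned optimum is taken over a history that includes the starting point, the loop can never end up worse than where it began; the interesting question the paper studies is how an LLM's proposals compare to BO and RLO on a real, non-linear objective.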


Note: 22 pages, 5 figures

Contributing Institute(s):
  1. Strahlkontrollen (MSK)
Research Program(s):
  1. 621 - Accelerator Research and Development (POF4-621)
  2. InternLabs-0011 - HIR3X - Helmholtz International Laboratory on Reliability, Repetition, Results at the most advanced X-ray Sources (2020_InternLabs-0011)
Experiment(s):
  1. Accelerator Research Experiment at SINBAD

Appears in the scientific report 2024
Database coverage:
Creative Commons Attribution CC BY 4.0 ; OpenAccess ; Published

The record appears in these collections:
Private Collections > DESY > M > MSK
Document types > Reports > Preprints
Public records
Publications database
OpenAccess


Linked articles:

Journal Article
Large language models for human-machine collaborative particle accelerator tuning through natural language
Science Advances 11(1), eadr4173 () [10.1126/sciadv.adr4173], OpenAccess, fulltext via arXiv.org


 Record created 2024-05-06, last modified 2024-11-24