Journal Article | PUBDB-2024-06977
2025
American Association for the Advancement of Science, Washington, DC [et al.]
Please use a persistent ID in citations: doi:10.1126/sciadv.adr4173, doi:10.3204/PUBDB-2024-06977
Abstract: Autonomous tuning of particle accelerators is an active and challenging research field with the goal of enabling advanced accelerator technologies and cutting-edge high-impact applications, such as physics discovery, cancer research, and material sciences. A remaining challenge in autonomous accelerator tuning is that the most capable algorithms require experts in optimization and machine learning to implement them for every new tuning task. Here, we propose the use of large language models (LLMs) to tune particle accelerators. We demonstrate on a proof-of-principle example the ability of LLMs to tune an accelerator subsystem based on only a natural language prompt from the operator, and compare their performance to that of state-of-the-art optimization algorithms, such as Bayesian optimization and reinforcement learning–trained optimization. In doing so, we also show how LLMs can perform numerical optimization of a nonlinear real-world objective. Ultimately, this work represents another complex task that LLMs can solve and promises to help accelerate the deployment of autonomous tuning algorithms to day-to-day particle accelerator operations.
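The abstract describes an LLM-in-the-loop tuning procedure: the operator states the tuning task in natural language, the model proposes accelerator settings, and measured objective values are fed back to it. Below is a minimal, runnable Python sketch of such a loop, not the paper's implementation; the `query_llm` stub, the toy two-magnet objective, and the JSON reply format are all assumptions made for illustration. In the paper's setting, the stub would be a chat-completion request to a hosted model and the objective an actual beam measurement.

```python
import json
import random


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call. Here it returns a random
    probe point as JSON so the loop runs without any API key; a real
    setup would send the prompt to a chat-completion endpoint."""
    settings = [random.uniform(-1.0, 1.0) for _ in range(2)]
    return json.dumps({"settings": settings})


def measure_objective(settings):
    """Toy nonlinear objective standing in for a beam-quality measurement
    (e.g., beam size on a diagnostic screen); smaller is better."""
    x, y = settings
    return (x - 0.3) ** 2 + 2.0 * (y + 0.1) ** 2


def tune(task_description: str, n_steps: int = 20):
    """Iteratively ask the LLM for new settings, measure the objective,
    and keep the best settings seen so far."""
    history = []
    best = (None, float("inf"))
    for _ in range(n_steps):
        prompt = (
            f"{task_description}\n"
            f"Previous settings and measured objective values: {history}\n"
            'Reply with JSON: {"settings": [q1, q2]} to try next.'
        )
        reply = json.loads(query_llm(prompt))
        settings = reply["settings"]
        value = measure_objective(settings)
        history.append((settings, value))
        if value < best[1]:
            best = (settings, value)
    return best


if __name__ == "__main__":
    settings, value = tune(
        "Tune the two quadrupole magnets to minimise the measured beam size."
    )
    print(f"Best settings {settings} with objective {value:.4f}")
```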
Preprint: Large Language Models for Human-Machine Collaborative Particle Accelerator Tuning through Natural Language [doi:10.3204/PUBDB-2024-01644]
Files: Fulltext by arXiv.org