TY  - JOUR
AU  - Kaiser, Jan
AU  - Xu, Chenran
AU  - Eichler, Annika
AU  - Santamaria Garcia, Andrea
AU  - Stein, Oliver
AU  - Bründermann, Erik
AU  - Kuropka, Willi
AU  - Dinter, Hannes
AU  - Mayet, Frank
AU  - Vinatier, Thomas
AU  - Burkart, Florian
AU  - Schlarb, Holger
TI  - Reinforcement learning-trained optimisers and Bayesian optimisation for online particle accelerator tuning
JO  - Scientific Reports
VL  - 14
IS  - 1
SN  - 2045-2322
CY  - London
PB  - Macmillan Publishers Limited, part of Springer Nature
M1  - PUBDB-2023-03590
SP  - 15733
PY  - 2024
AB  - Online tuning of particle accelerators is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods like Bayesian optimisation (BO) hold great promise in improving plant performance and reducing tuning times. At the same time, reinforcement learning (RL) is a capable method of learning intelligent controllers, and recent work shows that RL can also be used to train domain-specialised optimisers in so-called reinforcement learning-trained optimisation (RLO). In parallel efforts, both algorithms have found successful adoption in particle accelerator tuning. Here we present a comparative case study, assessing the performance of both algorithms while providing a nuanced analysis of the merits and the practical challenges involved in deploying them to real-world facilities. Our results will help practitioners choose a suitable learning-based tuning algorithm for their tuning tasks, accelerating the adoption of autonomous tuning algorithms, ultimately improving the availability of particle accelerators and pushing their operational limits.
LB  - PUB:(DE-HGF)16
C6  - pmid:38977749
UR  - <Go to ISI>://WOS:001271178000014
DO  - 10.1038/s41598-024-66263-y
UR  - https://bib-pubdb1.desy.de/record/585442
ER  -