%0 Journal Article
%A Kaiser, Jan
%A Xu, Chenran
%A Eichler, Annika
%A Santamaria Garcia, Andrea
%A Stein, Oliver
%A Bruendermann, Erik
%A Kuropka, Willi
%A Dinter, Hannes
%A Mayet, Frank
%A Vinatier, Thomas
%A Burkart, Florian
%A Schlarb, Holger
%T Reinforcement learning-trained optimisers and Bayesian optimisation for online particle accelerator tuning
%J Scientific Reports
%V 14
%N 1
%@ 2045-2322
%C [London]
%I Macmillan Publishers Limited, part of Springer Nature
%M PUBDB-2023-03590
%P 15733
%D 2024
%X Online tuning of particle accelerators is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods like Bayesian optimisation (BO) hold great promise in improving plant performance and reducing tuning times. At the same time, reinforcement learning (RL) is a capable method of learning intelligent controllers, and recent work shows that RL can also be used to train domain-specialised optimisers in so-called reinforcement learning-trained optimisation (RLO). In parallel efforts, both algorithms have found successful adoption in particle accelerator tuning. Here we present a comparative case study, assessing the performance of both algorithms while providing a nuanced analysis of the merits and the practical challenges involved in deploying them to real-world facilities. Our results will help practitioners choose a suitable learning-based tuning algorithm for their tuning tasks, accelerating the adoption of autonomous tuning algorithms, ultimately improving the availability of particle accelerators and pushing their operational limits.
%F PUB:(DE-HGF)16
%9 Journal Article
%$ pmid:38977749
%U <Go to ISI>://WOS:001271178000014
%R 10.1038/s41598-024-66263-y
%U https://bib-pubdb1.desy.de/record/585442