Deep Meta Reinforcement Learning for Rapid Adaptation In Linear Markov Decision Processes: Applications to CERN’s AWAKE Project
Contribution to a conference proceedings / Contribution to a book | PUBDB-2024-01836
2024
Springer Nature Switzerland
Cham
ISBN: 978-3-031-65992-8, 978-3-031-65993-5 (electronic)
Please use a persistent id in citations: doi:10.1007/978-3-031-65993-5_21
Abstract: Real-world applications of reinforcement learning (RL) face challenges such as the need for numerous interactions and achieving stable training under dynamic conditions. Meta-RL emerges as a solution, particularly in environments where simulations cannot perfectly mimic real-world conditions. This study demonstrates Meta-RL’s potential in CERN’s AWAKE project, focusing on control of the electron line. By incorporating Model-Agnostic Meta-Learning (MAML), we show how Meta-RL enables rapid adaptation to environmental changes with minimal interaction steps. Our findings indicate Meta-RL’s efficacy in managing Partially Observable Markov Decision Processes (POMDPs) with evolving hidden parameters, underlining its significance in the high-dimensional control challenges prevalent in particle physics experiments and beyond.
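The core mechanism the abstract refers to, MAML's inner adaptation step and outer meta-update, can be sketched on a toy problem. The following is a minimal illustration, not the authors' implementation: each "task" is a 1-D regression with a hidden slope parameter (a stand-in for the evolving hidden parameters of the POMDPs mentioned above), the model is a single scalar weight, and the gradients are computed analytically so the second-order term of MAML is explicit.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """A hidden-parameter task: predict y = a*x with unknown slope a."""
    return rng.uniform(-2.0, 2.0)

def loss_and_grad(w, a, xs):
    """MSE of the linear model y_hat = w*x on points xs, and dL/dw."""
    m = np.mean(xs ** 2)
    loss = (w - a) ** 2 * m
    grad = 2.0 * (w - a) * m
    return loss, grad

alpha, beta = 0.05, 0.01   # inner (adaptation) and outer (meta) step sizes
w = 0.0                    # meta-parameter to be learned

for step in range(2000):
    a = make_task()
    xs_support = rng.normal(size=10)   # data for the inner adaptation step
    xs_query = rng.normal(size=10)     # data for the meta-objective

    # Inner loop: one gradient step of adaptation on the support set.
    _, g_in = loss_and_grad(w, a, xs_support)
    w_adapted = w - alpha * g_in

    # Outer loop: differentiate the query loss through the inner step.
    # For this linear model, d(w_adapted)/dw = 1 - 2*alpha*mean(xs_support**2).
    _, g_q = loss_and_grad(w_adapted, a, xs_query)
    dw_adapted_dw = 1.0 - 2.0 * alpha * np.mean(xs_support ** 2)
    w -= beta * g_q * dw_adapted_dw
```

After meta-training, a single inner-loop step on a handful of samples from a new task reduces its loss, which is the "rapid adaptation with minimal interaction steps" property the abstract describes; in the paper's setting the regression loss would be replaced by an RL objective over trajectories.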
Keyword(s): Engineering (LCSH) ; Computational intelligence (LCSH) ; Artificial intelligence (LCSH)