09: Computing and Data Science for Accelerator Systems
Paper Title Page
MOPA01 Realistic CAD-Based Geometries for Arbitrary Magnets with Beam Delivery Simulation (BDSIM) 55
 
  • E. Ramoisiaux, R. Dantinne, E. Gnacadja, C. Hernalsteens, S. Musibau, B. Ndihokubwayo, N. Pauly, R. Tesse, M. Vanwelde
    ULB, Bruxelles, Belgium
  • S.T. Boogert, L.J. Nevay, W. Shields
    Royal Holloway, University of London, Surrey, United Kingdom
  • C. Hernalsteens
    CERN, Meyrin, Switzerland
 
  Monte Carlo simulations are required to evaluate beam losses and secondary radiation accurately in particle accelerators and beamlines. Detailed CAD geometries are critical to account for a realistic distribution of material masses but increase the model complexity and often lead to code duplication. Beam Delivery Simulation (BDSIM) and the Python package pyg4ometry enable handling such accelerator models within a single, simplified workflow to run complete simulations of primary and secondary particle tracking and interactions with matter using Geant4 routines. Additional capabilities have been developed to model arbitrary bent magnets by associating externally modeled geometries to the magnet poles, yoke, and beampipe. Individual field descriptions can be associated with the yoke and vacuum pipe separately to provide fine-grained control of the magnet model. The implementation of these new features is described in detail and applied to the modeling of the CERN Proton Synchrotron (PS) combined function magnets.  
Poster MOPA01 [0.781 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-MOPA01  
About • Received ※ 02 August 2022 — Revised ※ 07 August 2022 — Accepted ※ 09 August 2022 — Issue date ※ 16 September 2022
 
MOPA08 Beamline Optimization Methods for High Intensity Muon Beams at PSI 63
 
  • E.V. Valetov
    PSI, Villigen PSI, Switzerland
 
  Funding: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 884104 (PSI-FELLOW-III-3i).
We perform beamline design optimization for the High Intensity Muon Beams (HIMB) project at the Paul Scherrer Institute (PSI), which will deliver muon beams at the unprecedented rate of 1·10¹⁰ muons/s to next-generation intensity-frontier particle physics and material science experiments. To optimize the design and operational parameters for maximum beamline transmission, we use the asynchronous Bayesian optimization package DeepHyper and a custom build of G4beamline with variance reduction and measured cross sections. We minimize the beam spot sizes at the final foci using a COSY INFINITY model with differential-algebraic system knobs, reducing the respective transfer map elements with the Levenberg-Marquardt and simulated annealing optimizers. In a G4beamline model of HIMB's MUH2 beamline, we obtained a transmission of 1.34·10¹⁰ muons/s into the experimental area.
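A minimal, self-contained sketch of the Gaussian-process Bayesian optimization loop that packages such as DeepHyper automate, shown here with plain scikit-learn on a toy quadratic "transmission" and four illustrative knobs (none of this is the HIMB/G4beamline model):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def transmission(x):
    """Toy stand-in for an expensive G4beamline run (higher is better)."""
    return -np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

bounds = np.array([[-1.0, 1.0]] * 4)                      # 4 illustrative beamline knobs
X = rng.uniform(bounds[:, 0], bounds[:, 1], (8, 4))       # initial random designs
y = np.array([transmission(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(40):                                       # sequential BO iterations
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], (2000, 4))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, transmission(x_next))

print("best settings:", X[np.argmax(y)], "best transmission:", y.max())
```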
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-MOPA08  
About • Received ※ 02 August 2022 — Revised ※ 08 August 2022 — Accepted ※ 11 August 2022 — Issue date ※ 23 August 2022
 
MOPA13 Design of a Surrogate Model for MUED at BNL Using VSim, Elegant and HPC 72
 
  • S.I. Sosa Guitron, S. Biedron, T.B. Bolin
    UNM-ECE, Albuquerque, USA
  • S. Biedron
    Element Aero, Chicago, USA
  • S. Biedron
    UNM-ME, Albuquerque, New Mexico, USA
 
  Funding: U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Program of Electron and Scanning Probe Microscopes, award number DE-SC0021365.
The MeV Ultrafast Electron Diffraction (MUED) instrument at Brookhaven National Laboratory is a unique capability for material science. As part of a plan to make MUED a high-throughput user facility, we are exploring instrumentation developments based on Machine Learning (ML). We are developing a surrogate model of MUED that can be used to support control tasks. The surrogate model will be based on beam simulations that are benchmarked against experimental observations. We use VSim to model the beam dynamics of the radio-frequency gun and Elegant to transport the beam through the rest of the beamline. We also use High Performance Computing resources from the Argonne Leadership Computing Facility to generate the data for the surrogate model from the original simulations as well as to train the ML model.
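As an illustration of the simulation-driven surrogate described above, the sketch below fits a small neural-network regressor mapping machine settings to beam observables on synthetic data (the setting names, ranges, and quadratic response are assumptions, not MUED physics):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "simulation campaign": settings -> (emittance, bunch length)
n = 2000
settings = rng.uniform(-1, 1, (n, 3))          # e.g. gun phase, solenoid, charge (assumed)
emit = 1.0 + settings[:, 0] ** 2 + 0.5 * settings[:, 1] * settings[:, 2]
blen = 0.3 + 0.2 * np.abs(settings[:, 2]) + 0.1 * settings[:, 0]
targets = np.column_stack([emit, blen]) + 0.01 * rng.normal(size=(n, 2))

X_tr, X_te, y_tr, y_te = train_test_split(settings, targets, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(X_tr, y_tr)

print("held-out R^2:", surrogate.score(X_te, y_te))
print("prediction at nominal settings:", surrogate.predict([[0.0, 0.0, 0.0]]))
```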
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-MOPA13  
About • Received ※ 01 August 2022 — Revised ※ 09 August 2022 — Accepted ※ 11 August 2022 — Issue date ※ 21 August 2022
 
MOPA18 Residual Dose and Environmental Monitoring for the Fermilab Main Injector Tunnel Using the Data Acquisition Logging Engine (DALE) 87
 
  • N. Chelidze, R. Ainsworth, B.C. Brown, D. Capista, K.J. Hazelwood, D.K. Morris, M.J. Murphy
    Fermilab, Batavia, Illinois, USA
 
  Funding: Fermi National Accelerator Laboratory
The Recycler and the Main Injector are part of the Fermilab accelerator complex used to deliver proton beam to the different experiments. It is very important to control and minimize losses in both machines during operation, to reduce personnel dose from residual activation and to preserve component lifetime. To minimize losses, we need to identify the loss points and adjust the components accordingly. The Data Acquisition Logging Engine (DALE) platform has been developed within the Main Injector department and upgraded throughout the years. DALE is used to survey the entire enclosure for residual dose rates and environmental readings when unrestricted access to the enclosure is possible. Currently DALE has two radiation meters, which are aligned along each machine, so loss points can be identified for both at the same time. DALE attaches to the enclosure carts and is continuously in motion, monitoring dose rates and other environmental readings. In this paper we will describe how DALE is used to provide radiation maps of the residual dose rates in the enclosure. We will also compare the loss points with the beam loss monitor data.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-MOPA18  
About • Received ※ 02 August 2022 — Revised ※ 05 August 2022 — Accepted ※ 11 August 2022 — Issue date ※ 21 September 2022
 
MOPA44 Utilizing Python to Prepare the VENUS Ion Source for Machine Learning 151
 
  • A. Kireeff, L. Phair, M.J. Regis, M. Salathe, D.S. Todd
    LBNL, Berkeley, California, USA
 
  Funding: This work was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Contract No. DE-AC02-05CH11231.
The fully superconducting electron cyclotron resonance (ECR) ion source VENUS is one of the world’s two highest-performing ECR ion sources, and a copy of this source will soon be used to produce ion beams at FRIB. The tuning and optimization of ECR ion sources is time consuming, and there are few detailed theoretical models to guide this work. To aid in this process, we are working toward utilizing machine learning to both efficiently optimize VENUS and reliably maintain its stability for long campaigns. We have created a Python library to interface with the programmable logic controller (PLC) in order to operate VENUS and collect and store source and beam data. We will discuss the design and safety considerations that went into creating this library, the implementation of the library, and some of the capabilities it enables.
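A skeletal sketch of the safety-first wrapper pattern such a library might use; the tag names, limits, and the plc_read/plc_write backend are hypothetical placeholders, not the actual VENUS PLC interface:

```python
import time

# Hypothetical low-level backend; a real library would talk to the PLC here.
_plc_memory = {"injection_power_w": 200.0, "puller_voltage_kv": 20.0}
def plc_read(tag):           return _plc_memory[tag]
def plc_write(tag, value):   _plc_memory[tag] = value

class SourceChannel:
    """One settable source parameter with software limits and rate limiting."""
    def __init__(self, tag, low, high, max_step):
        self.tag, self.low, self.high, self.max_step = tag, low, high, max_step

    def read(self):
        return plc_read(self.tag)

    def write(self, value):
        if not (self.low <= value <= self.high):
            raise ValueError(f"{self.tag}: {value} outside [{self.low}, {self.high}]")
        current = self.read()
        # Approach the requested value in limited steps so the source is never jerked.
        while abs(value - current) > self.max_step:
            current += self.max_step if value > current else -self.max_step
            plc_write(self.tag, current)
            time.sleep(0.1)
        plc_write(self.tag, value)

rf_power = SourceChannel("injection_power_w", low=0.0, high=1500.0, max_step=25.0)
rf_power.write(350.0)
print("RF power now:", rf_power.read(), "W")
```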
 
Poster MOPA44 [0.862 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-MOPA44  
About • Received ※ 17 July 2022 — Revised ※ 27 July 2022 — Accepted ※ 05 August 2022 — Issue date ※ 16 August 2022
 
MOPA55 Facilitating Machine Learning Collaborations Between Labs, Universities, and Industry 164
 
  • J.P. Edelen, D.T. Abell, D.L. Bruhwiler, S.J. Coleman, N.M. Cook, A. Diaw, J.A. Einstein-Curtis, C.C. Hall, M.C. Kilpatrick, B. Nash, I.V. Pogorelov
    RadiaSoft LLC, Boulder, Colorado, USA
  • K.A. Brown
    BNL, Upton, New York, USA
  • S. Calder
    ORNL RAD, Oak Ridge, Tennessee, USA
  • A.L. Edelen, B.D. O’Shea, R.J. Roussel
    SLAC, Menlo Park, California, USA
  • C.M. Hoffmann
    ORNL, Oak Ridge, Tennessee, USA
  • E.-C. Huang
    LANL, Los Alamos, New Mexico, USA
  • P. Piot
    Northern Illinois University, DeKalb, Illinois, USA
  • C. Tennant
    JLab, Newport News, Virginia, USA
 
It is clear from numerous recent community reports, papers, and proposals that machine learning is of tremendous interest for particle accelerator applications. The quickly evolving landscape continues to grow in both the breadth and depth of applications, including physics modeling, anomaly detection, controls, diagnostics, and analysis. Consequently, laboratories, universities, and companies across the globe have established dedicated machine learning (ML) and data-science efforts aiming to make use of these new state-of-the-art tools. The current funding environment in the U.S. is structured in a way that supports specific application spaces rather than larger collaboration on community software. Here, we discuss the existing collaboration bottlenecks and how a shift in the funding environment, and in the way we develop collaborative tools, can help fuel the next wave of ML advancements for particle accelerators.
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-MOPA55  
About • Received ※ 10 August 2022 — Revised ※ 11 August 2022 — Accepted ※ 22 August 2022 — Issue date ※ 01 September 2022
 
MOPA57 Online Models for X-Ray Beamlines 170
 
  • B. Nash, D.T. Abell, M.V. Keilman, P. Moeller, I.V. Pogorelov
    RadiaSoft LLC, Boulder, Colorado, USA
  • Y. Du, A. Giles, J. Lynch, T. Morris, M.S. Rakitin, A. Walter
    BNL, Upton, New York, USA
  • N.B. Goldring
    STATE33 Inc., Portland, Oregon, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science, under Award Number DE-SC0020593
X-ray beamlines transport synchrotron radiation from the magnetic source to the sample at a synchrotron light source. Alignment of elements such as mirrors and gratings is often done manually and can be quite time-consuming. Photon beam models are not commonly used during operations in the way that particle beam models are used, to great benefit, in accelerators. Linear and non-linear optics, including the effects of coherence, may be computed from source properties and augmented with measurements. In collaboration with NSLS-II, we are developing software tools and methods to include a model of the x-ray beam as it propagates to the sample. We are integrating the Bluesky beamline control toolkit with the Sirepo interface to several x-ray optics codes. Further, we are developing a simplified linear optics approach based on a Gauss-Schell model and linear canonical transforms, as well as machine learning models that work directly from diagnostics data. We present progress on applying these ideas to NSLS-II beamlines and give a future outlook on this rather large and open domain for technological development.
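A minimal numerical sketch of the linear-optics idea behind such online models: the second moments of a Gaussian Schell-like beam propagate through a beamline as Σ → MΣMᵀ for a linear transfer matrix M (the drift and lens parameters below are arbitrary assumptions, not an NSLS-II beamline):

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix of a drift of length L (one transverse plane)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):
    """2x2 transfer matrix of a thin focusing element with focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Initial second-moment (covariance) matrix: <x^2>, <x x'>, <x'^2>
sigma0 = np.array([[(50e-6) ** 2, 0.0],
                   [0.0, (10e-6) ** 2]])   # 50 um source size, 10 urad divergence (assumed)

# Toy beamline: 10 m drift, focusing mirror modeled as a thin lens, 5 m drift to sample
M = drift(5.0) @ thin_lens(3.3) @ drift(10.0)

sigma_sample = M @ sigma0 @ M.T
print("rms size at sample [um]:", 1e6 * np.sqrt(sigma_sample[0, 0]))
print("rms divergence at sample [urad]:", 1e6 * np.sqrt(sigma_sample[1, 1]))
```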
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-MOPA57  
About • Received ※ 27 July 2022 — Revised ※ 02 August 2022 — Accepted ※ 07 August 2022 — Issue date ※ 11 August 2022
 
MOPA89 RHIC Electron Beam Cooling Analysis Using Principal Component and Autoencoder Analysis 260
 
  • A.D. Tran, Y. Hao
    FRIB, East Lansing, Michigan, USA
  • X. Gu
    BNL, Upton, New York, USA
 
  Funding: Work supported by the US Department of Energy under contract No. DE-AC02-98CH10886.
Principal component analysis and autoencoder analysis were used to analyze the experimental data of RHIC operation with low-energy RHIC electron cooling (LEReC). This unsupervised learning uses electron beam settings and observables recorded during operation. Both analyses were used to gauge the dimensional reducibility of the data and to understand which features are important to beam cooling.
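A brief sketch of the first step of such an analysis, gauging dimensional reducibility with PCA on standardized operational data (the channel count and synthetic correlations are assumptions, not LEReC data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic "operations log": 5000 time samples of 20 correlated channels
# (e.g. electron beam settings and cooling observables), driven by 3 latent factors.
latent = rng.normal(size=(5000, 3))
mixing = rng.normal(size=(3, 20))
data = latent @ mixing + 0.1 * rng.normal(size=(5000, 20))

X = StandardScaler().fit_transform(data)
pca = PCA().fit(X)

cumvar = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cumvar, 0.95) + 1)
print("components needed for 95% of the variance:", n_components)
print("leading component loadings:", np.round(pca.components_[0], 2))
```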
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-MOPA89  
About • Received ※ 02 August 2022 — Revised ※ 05 August 2022 — Accepted ※ 06 August 2022 — Issue date ※ 12 August 2022
 
TUXE1
The Importance of Data, High-Performance Computing, and Artificial Intelligence/Machine Learning  
 
  • C.M. Sweeney
    LANL, Los Alamos, New Mexico, USA
  • A.L. Edelen
    SLAC, Menlo Park, California, USA
  • D.E. Martin
    ANL, Lemont, Illinois, USA
 
  As existing accelerator facilities are upgraded and new facilities come online, data volumes and velocity are increasing even with shorter data collection times. High-performance computing (HPC) systems doing simulation, data analytics and artificial intelligence/machine learning (AI/ML) are playing a major role in pre-experiment planning, design of experiments, real-time beam line and experiment analysis and control, and post-run data processing. Simulation and AI incorporated into experimental data analysis workflows are making efficient use of expensive facilities and accelerating scientific discoveries. HPC is experiencing its own growth, with exascale computers and AI acceleration coming online at several supercomputer centers. AI/ML is in the midst of rapid growth of techniques and expansion into new application areas. This session will focus on current and emerging technologies in HPC, experimental workflows, and AI/ML techniques to help you incorporate them into your own research. Dr. D. Martin will provide "HPC Overview" followed by "Workflows" by Dr. C. Sweeney. "AI and ML" by Dr. A. Edelen will be followed by community discussions and questions from the audience.  
Slides TUXE1 [11.946 MB]
 
TUXE2
High Performance Computing - DOE Facilities, Direction and Applications  
 
  • D.E. Martin
    ANL, Lemont, Illinois, USA
 
  Funding: This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
High-performance computing (HPC) systems doing simulation, data analytics and machine learning are playing a major role in accelerator physics, including pre-experiment planning, design of experiments, real-time beam line analysis and control, and post-run data processing. DOE high performance computer user facilities are evolving with new architectures, faster systems and a focus on integrating with experimental facilities.
 
Slides TUXE2 [16.438 MB]
 
TUXE3
Artificial Intelligence and Machine Learning for Particle Accelerators  
 
  • A.L. Edelen
    SLAC, Menlo Park, California, USA
 
  As existing accelerator facilities are upgraded and new facilities come online, data volumes and velocity are increasing even with shorter data collection times. High-performance computing (HPC) systems doing simulation, data analytics and artificial intelligence/machine learning (AI/ML) are playing a major role in pre-experiment planning, design of experiments, real-time beam line and experiment analysis and control, and post-run data processing. Simulation and AI incorporated into experimental data analysis workflows are making efficient use of expensive facilities and accelerating scientific discoveries. HPC is experiencing its own growth, with exascale computers and AI acceleration coming online at several supercomputer centers. AI/ML is in the midst of rapid growth of techniques and expansion into new application areas. This session will focus on current and emerging technologies in HPC, experimental workflows, and AI/ML techniques to help you incorporate them into your own research. Dr. D. Martin will provide "HPC Overview" followed by "Workflows" by Dr. C. Sweeney. "AI and ML" by Dr. A. Edelen will be followed by community discussions and questions from the audience.  
Slides TUXE3 [17.252 MB]
 
TUYE1 Coulomb Crystals in Storage Rings for Quantum Information Science 296
 
  • K.A. Brown
    BNL, Upton, New York, USA
  • A. Aslam, S. Biedron, T.B. Bolin, C. Gonzalez-Zacarias, S.I. Sosa Guitron
    UNM-ECE, Albuquerque, USA
  • B. Huang
    SBU, Stony Brook, USA
  • T.G. Robertazzi
    Stony Brook University, Stony Brook, New York, USA
 
Quantum information science is a growing field that promises to take computing into a new age of higher performance and larger scale, as well as to solve problems that classical computers are incapable of solving. The outstanding issue in practical quantum computing today is scaling up the system while maintaining interconnectivity of the qubits and low error rates in qubit operations, so that error correction and fault-tolerant operations can be implemented. Trapped-ion qubits offer long coherence times that allow error correction. However, error correction algorithms require large numbers of qubits to work properly. We can potentially create many thousands (or more) of qubits with long coherence states in a storage ring, for example a circular radio-frequency quadrupole, which acts as a large circular ion trap and could enable larger-scale quantum computing. Such a Storage Ring Quantum Computer (SRQC) would be a scalable and fault-tolerant quantum information system, composed of qubits with very long coherence lifetimes.
Slides TUYE1 [8.834 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUYE1  
About • Received ※ 17 July 2022 — Revised ※ 02 August 2022 — Accepted ※ 08 August 2022 — Issue date ※ 11 August 2022
 
TUYE2 Next Generation Computational Tools for the Modeling and Design of Particle Accelerators at Exascale 302
 
  • A. Huebl, R. Lehé, C.E. Mitchell, J. Qiang, R.D. Ryne, R.T. Sandberg, J.-L. Vay
    LBNL, Berkeley, USA
 
  Funding: Work supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. DOE SC and the NNSA, resources of NERSC, and by LBNL LDRD under DOE Contract DE-AC02-05CH11231.
Particle accelerators are among the largest, most complex devices. To meet the challenges of increasing energy, intensity, accuracy, compactness, complexity and efficiency, increasingly sophisticated computational tools are required for their design and optimization. It is key that contemporary software take advantage of the latest advances in computer hardware and scientific software engineering practices, delivering speed, reproducibility and feature composability for the aforementioned challenges. A new open source software stack is being developed at the heart of the Beam pLasma Accelerator Simulation Toolkit (BLAST) by LBNL and collaborators, providing new particle-in-cell modeling codes capable of exploiting the power of GPUs on Exascale supercomputers. Combined with advanced numerical techniques, such as mesh-refinement, and intrinsic support for machine learning, these codes are primed to provide ultrafast to ultraprecise modeling for future accelerator design and operations.
[1] J.-L. Vay, A. Huebl, et al, Phys. Plasmas 28, 023105 (2021)
[2] J.-L. Vay, A. Huebl, et al, J. Instr. 16, T10003 (2021)
[3] A. Myers, et al (incl. A. Huebl), Parallel Comput. 108, 102833 (2021)
 
Slides TUYE2 [9.399 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUYE2  
About • Received ※ 13 July 2022 — Revised ※ 02 August 2022 — Accepted ※ 08 August 2022 — Issue date ※ 11 August 2022
 
TUYE3 An Open-Source Based Data Management and Processing Framework on a Central Server for Scientific Experimental Data 307
 
  • A. Liu, J.R. Callahan, S. Poddar, W. Si
    Euclid TechLabs, Solon, Ohio, USA
  • J. Gao
    AJS Smartech LLC, Naperville, TX, USA
 
  Funding: This work is supported by the US DOE SBIR program under contract number DE-SC0021512.
The ever-expanding size of accelerator operation and experimental data, including those generated by electron microscopes and beamline facilities, renders most proprietary software inefficient at managing data. The Findability, Accessibility, Interoperability, and Reuse (FAIR) principles for digital assets require a convenient platform on which users can share and manage data. An open-source data framework for storing raw data and metadata, hosting databases, and providing a platform for data processing and visualization is highly desirable. In this paper, we present an open-source, infrastructure-independent data management software framework, named Euclid-NexusLIMS, to archive, register, record, visualize, and process experimental data. The software was initially targeted at electron microscopes but can be widely applied to all scientific experimental data.
 
Slides TUYE3 [5.891 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUYE3  
About • Received ※ 04 August 2022 — Revised ※ 07 August 2022 — Accepted ※ 08 August 2022 — Issue date ※ 24 August 2022
 
TUYE4 Machine Learning for Anomaly Detection and Classification in Particle Accelerators 311
 
  • I. Lobach, M. Borland, K.C. Harkay, N. Kuklev, A. Sannibale, Y. Sun
    ANL, Lemont, Illinois, USA
 
  Funding: The work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.
We explore the possibility of using a Machine Learning (ML) algorithm to identify the source of occasional poor performance of the Particle Accumulator Ring (PAR) and the Linac-To-PAR (LTP) transport line, which are parts of the injector complex of the Advanced Photon Source (APS) at Argonne National Lab. The cause of reduced injection or extraction efficiencies may be as simple as one parameter being out of range. Still, it may take an expert considerable time to notice it, whereas a well-trained ML model can point at it instantly. In addition, a machine expert might not be immediately available when a problem occurs. Therefore, we began by focusing on such single-parameter anomalies. The training data were generated by creating controlled perturbations of several parameters of PAR and LTP one-by-one, while continuously logging all available process variables. Then, several ML classifiers were trained to recognize certain signatures in the logged data and link them to the sources of poor machine performance. Possible applications of autoencoders and variational autoencoders for unsupervised anomaly detection and for anomaly clustering were considered as well.
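A minimal sketch of the supervised part of this approach: a classifier is trained on logged snapshots taken while individual parameters were deliberately perturbed, then used to diagnose which parameter is responsible (the parameter names and signatures below are synthetic assumptions, not APS data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)

perturbed = ["none", "kicker_timing", "rf_phase", "quad_current", "septum_voltage"]
n_pv = 40                       # number of logged process variables per snapshot

X, y = [], []
for label, name in enumerate(perturbed):
    snaps = rng.normal(0.0, 1.0, (400, n_pv))
    if name != "none":
        snaps[:, label * 3] += 3.0          # each perturbation leaves its own signature
        snaps[:, label * 3 + 1] -= 2.0
    X.append(snaps)
    y.append(np.full(400, label))
X, y = np.vstack(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
print("diagnosis of one snapshot:", perturbed[clf.predict(X_te[:1])[0]])
```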
 
Slides TUYE4 [9.534 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUYE4  
About • Received ※ 03 August 2022 — Revised ※ 07 August 2022 — Accepted ※ 08 August 2022 — Issue date ※ 28 August 2022
 
TUYE5
Multiobjective Optimization of the LCLS-II Photoinjector  
 
  • N.R. Neveu
    SLAC, Menlo Park, California, USA
  • T.H. Chang, S.T.P. Hudson, J.M. Larson
    ANL, Lemont, Illinois, USA
  • P.L. Franz
    Stanford University, Stanford, California, USA
 
Genetic algorithms and particle swarm optimization are currently the most widely used optimization algorithms in the accelerator physics community. These algorithms can take many evaluations to find optimal solutions for one machine prototype. In this work, the LCLS-II photoinjector is optimized with three optimization algorithms: a genetic algorithm, a surrogate-based algorithm, and a multi-start scalarization method. All three algorithms were able to optimize the photoinjector, and the trade-offs of each method are discussed here.
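A compact sketch of the multi-start scalarization idea named above: two competing toy objectives, stand-ins for quantities such as emittance and bunch length rather than the LCLS-II model, are combined into a weighted sum and minimized from several random starts to trace an approximate Pareto front:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def f1(x):       # toy objective 1 (e.g. "emittance")
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):       # toy objective 2 (e.g. "bunch length")
    return x[0] ** 2 + (x[1] - 1.0) ** 2

pareto = []
for w in np.linspace(0.05, 0.95, 10):            # scalarization weights
    best = None
    for _ in range(5):                           # multi-start to avoid local minima
        x0 = rng.uniform(-2, 2, size=2)
        res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    pareto.append((f1(best.x), f2(best.x)))

for p in pareto:
    print("f1 = %.3f, f2 = %.3f" % p)
```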
Slides TUYE5 [3.919 MB]
 
TUYE6
High-Fidelity Simulations and Machine Learning for Accelerator Design and Optimization  
 
  • D. Winklehner
    MIT, Cambridge, Massachusetts, USA
  • A. Adelmann
    PSI, Villigen PSI, Switzerland
 
  Funding: NSF #1505858, NSF #162606, NSF #1626069
Computation has become a critically important tool for particle accelerator design and optimization. Thanks to massively parallel codes running on high-performance clusters, we can now accurately predict emergent properties of particle ensembles and non-linear collective effects, and use machine learning (ML) for analysis and to create "virtual twins" of accelerator systems. Here, we will present the IsoDAR experiment in neutrino physics as an example. For it, we have developed a compact and cost-effective cyclotron-based driver to produce very high-intensity beams. The system will be able to deliver cw proton currents of 10 mA on target in the energy regime around 60 MeV, a factor of 10 higher than commercially available machines deliver. This increase in current is possible due to longitudinal-radial coupling through space charge, an effect dubbed "vortex motion". We will discuss the high-fidelity OPAL simulations performed to model this effect in the IsoDAR cyclotron and to predict beam losses due to halo formation. We will present uncertainty quantification for this design, and we will show our study of optimizing the IsoDAR injector RFQ using ML.
 
Slides TUYE6 [2.414 MB]
 
TUZG1
Magnets to ML to Light Sources: Designing from the Browser with Sirepo  
 
  • J.P. Edelen, B. Nash
    RadiaSoft LLC, Boulder, Colorado, USA
  • R.J. Roussel
    SLAC, Menlo Park, California, USA
 
Join us for an afternoon intensive with the Sirepo CAE & Design platform! Experts will present their work and lead tutorials using Sirepo's apps and tools for magnet design, ML techniques in control systems, and X-ray beamline modeling. Bring your own laptop and pre-register to use the free gateway at Sirepo.com for practical exercises. Dr. Edelen will provide a tutorial on how to design dipoles and undulators using parameterized magnets. Dr. Roussel will demonstrate combining the classical Preisach model of hysteresis with ML techniques to efficiently create non-parametric, high-fidelity models of arbitrary systems exhibiting hysteresis, and will show how these joint hysteresis-beam models let users overcome the optimization performance limitations that arise when hysteresis effects are ignored. Dr. Nash will review the capabilities of Shadow and SRW within Sirepo, discuss additional optics tools in the interface including brightness and linear optics, demonstrate translation between the two codes, and show how to continue your simulation work in a Python-based Jupyter notebook.
 
TUPA14 Fast First-Order Spin Propagation for Spin Matching and Polarization Optimization with Bmad 369
 
  • J.M. Asimow, G.H. Hoffstaetter, D. Sagan, M.G. Signorelli
    Cornell University (CLASSE), Cornell Laboratory for Accelerator-Based Sciences and Education, Ithaca, New York, USA
 
Accurate spin tracking is essential for the simulation and propagation of polarized beams, in which a majority of the particles' spins point in the same direction. Bmad, an open-source library for the simulation of charged particle dynamics, traditionally tracks spin by integrating through each element of a lattice. While exceptionally accurate, this method has the drawback of being slow; at best, the runtime is proportional to the length of the element. By solving the spin transport equation for simple magnet elements, Bmad can reduce this algorithm to constant runtime while maintaining high accuracy. This method, known as "Sprint," enables quicker spin matching and prototyping of lattice designs with Bmad.
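As a simplified illustration of constant-runtime first-order spin transport, not Bmad's Sprint implementation, the sketch below applies the closed-form spin rotation of an ideal flat dipole, a·γ·θ about the bend axis, to a spin vector (the particle species, energy, and bend angle are illustrative assumptions):

```python
import numpy as np

a_e = 0.00115965218          # electron anomalous magnetic moment
gamma = 35000.0              # illustrative Lorentz factor (roughly 18 GeV electrons)
theta_bend = 2 * np.pi / 768 # illustrative bending angle of one dipole

def dipole_spin_map(spin, bend_angle):
    """First-order spin transport through an ideal flat dipole.

    In the frame following the design orbit, the spin rotates about the
    vertical (bend) axis by a*gamma*bend_angle -- a closed-form result,
    so the map costs the same regardless of element length."""
    phi = a_e * gamma * bend_angle
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return R @ spin

spin = np.array([0.0, 0.0, 1.0])     # longitudinal spin at the element entrance
print("spin after one dipole:", dipole_spin_map(spin, theta_bend))
```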
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUPA14  
About • Received ※ 30 July 2022 — Revised ※ 09 August 2022 — Accepted ※ 10 August 2022 — Issue date ※ 24 August 2022
 
TUPA29 Machine Learning for Predicting Power Supply Trips in Storage Rings 413
 
  • I. Lobach, M. Borland, G.I. Fystro, A. Sannibale, Y. Sun
    ANL, Lemont, Illinois, USA
  • A. Diaw, J.P. Edelen
    RadiaSoft LLC, Boulder, Colorado, USA
 
  Funding: The work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.
In the Advanced Photon Source (APS) storage ring at Argonne National Laboratory, trips in the magnet power supplies (PSs) lead to a complete electron beam loss a few times a year. This results in unexpected interruptions of the users' experiments. In this contribution, we investigate the historical data from the last two decades to find precursors of the PS trips that could provide advance notice of future trips and allow some preventive action by the ring operator or by the PS maintenance team. Various unsupervised anomaly detection models can be trained on the vast amounts of available reference data from the beamtime periods that ended with an intentional beam dump. We find that such models can sometimes detect trip precursors in PS currents and voltages, and in the temperatures of magnets, capacitors, and transistors (components of the PSs).
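A minimal sketch of the unsupervised approach described here: an isolation forest is trained on "healthy" reference signals and scored on recent data (the channels and the injected temperature drift are synthetic assumptions, not APS logger data):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Reference data: PS current, voltage, and transistor temperature under normal running
normal = np.column_stack([
    100.0 + 0.05 * rng.normal(size=20000),   # current [A]
    12.0 + 0.02 * rng.normal(size=20000),    # voltage [V]
    45.0 + 0.50 * rng.normal(size=20000),    # temperature [C]
])

# New data with a slow temperature drift as a stand-in for a trip precursor
recent = normal[:2000].copy()
recent[:, 2] += np.linspace(0.0, 5.0, 2000)

scaler = StandardScaler().fit(normal)
model = IsolationForest(contamination=0.001, random_state=0).fit(scaler.transform(normal))

scores = model.score_samples(scaler.transform(recent))   # lower = more anomalous
print("most anomalous samples (indices):", np.argsort(scores)[:5])
print("fraction flagged:", np.mean(model.predict(scaler.transform(recent)) == -1))
```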
 
Poster TUPA29 [2.116 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUPA29  
About • Received ※ 03 August 2022 — Revised ※ 07 August 2022 — Accepted ※ 10 August 2022 — Issue date ※ 18 August 2022
 
TUPA55 Progress Toward Improving Accelerator Performance and Automating Operations with Advanced Analysis Software 465
 
  • J.E. Koglin, J.E. Coleman, M. McKerns, D. Ronquillo, A. Scheinker
    LANL, Los Alamos, New Mexico, USA
 
  Funding: Research presented in this conference paper was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers XXG2, XX8R and XXB6.
The penetrating radiography provided by the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility is a key capability in executing a core mission of Los Alamos National Laboratory (LANL). A new suite of software is being developed in the Python programming language to support operations of the two DARHT linear induction accelerators (LIAs). Historical data, built as HDF5 data structures over more than a decade of operations, are being used to develop automated failure and anomaly detection software and to train machine learning models that assist in beam tuning. Adaptive machine learning (AML) methods that incorporate physics-based models are being designed to use non-invasive diagnostic measurements to address the challenge of time variation in accelerator performance and target density evolution. AML methods are also being developed for experiments that use invasive diagnostics to understand the accelerator behavior at key locations, the results of which will be fed back into the accelerator models. The status and future outlook for these developments will be reported, including how Jupyter notebooks are being used to rapidly deploy these advances as highly interactive web applications.
 
Poster TUPA55 [1.919 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUPA55  
About • Received ※ 15 July 2022 — Revised ※ 08 August 2022 — Accepted ※ 10 August 2022 — Issue date ※ 12 August 2022
 
TUPA83 Derivative-Free Optimization of Multipole Fits to Experimental Wakefield Data 523
 
  • N. Majernik, G. Andonian, W.J. Lynn, J.B. Rosenzweig
    UCLA, Los Angeles, California, USA
  • P. Piot, T. Xu
    Northern Illinois University, DeKalb, Illinois, USA
 
  Funding: Department of Energy DE-SC0017648.
A method to deduce the transverse self-wakefields acting on a beam, based only on screen images, is introduced. By employing derivative-free optimization, the relatively high-dimensional parameter space can be efficiently explored to determine the multipole components up to the desired order. This technique complements simulations, which are able to directly infer the wakefield composition. It is applied to representative simulation results as a benchmark and also applied to experimental data on skew wake observations from dielectric slab structures.
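A small sketch of the derivative-free fitting idea: assumed multipole coefficients of a transverse kick are recovered from noisy synthetic deflection data by minimizing a least-squares cost with the gradient-free Nelder-Mead method (this stands in for, and is not, the paper's screen-image analysis):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Assumed "true" multipole content of a transverse kick (dipole, quad, sextupole-like),
# in arbitrary normalized units versus the normalized transverse offset x.
true_c = np.array([0.3, 1.2, -0.7])

x = rng.uniform(-1.0, 1.0, 200)                       # normalized offsets
def kick(x, c):
    return c[0] + c[1] * x + c[2] * x ** 2            # multipole expansion to 2nd order

measured = kick(x, true_c) + 0.02 * rng.normal(size=x.size)

def cost(c):                                          # least-squares mismatch
    return np.sum((kick(x, c) - measured) ** 2)

res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 10000})
print("recovered coefficients:", np.round(res.x, 3))
print("assumed true values:   ", true_c)
```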
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUPA83  
About • Received ※ 02 August 2022 — Revised ※ 21 August 2022 — Accepted ※ 26 August 2022 — Issue date ※ 11 September 2022
 
TUPA84 Reconstructing Beam Parameters from Betatron Radiation Through Machine Learning and Maximum Likelihood Estimation 527
 
  • S. Zhang, N. Majernik, B. Naranjo, J.B. Rosenzweig, M. Yadav
    UCLA, Los Angeles, California, USA
  • Ö. Apsimon, C.P. Welsch, M. Yadav
    The University of Liverpool, Liverpool, United Kingdom
 
  Funding: US Department of Energy, Division of High Energy Physics, under Contract No. DE-SC0009914.
The dense drive beam used in plasma wakefield acceleration generates a linear focusing force that causes electrons inside the witness beam to undergo betatron oscillations, giving rise to betatron radiation. Because information about the properties of the beam is encoded in the betatron radiation, measurements of the radiation such as those recorded by the UCLA-built Compton spectrometer can be used to reconstruct beam parameters. Two possible methods of extracting information about beam parameters from measurements of radiation are machine learning (ML), which is increasingly being implemented for different fields of beam diagnostics, and a statistical technique known as maximum likelihood estimation (MLE). We assess the ability of both machine learning and MLE methods to accurately extract beam parameters from measurements of betatron radiation.
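A toy illustration of the MLE side of this comparison: a single "beam parameter" controlling the shape of a binned photon spectrum is recovered by minimizing the Poisson negative log-likelihood (the exponential spectral model is an assumption for illustration, not the betatron-radiation model or the Compton spectrometer response):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)

edges = np.linspace(0.0, 10.0, 41)                 # photon "energy" bin edges (arb. units)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def expected_counts(theta, total=5000.0):
    """Toy spectral model: exponentially falling spectrum with scale theta."""
    shape = np.exp(-centers / theta)
    return total * width * shape / np.sum(width * shape)

true_theta = 2.5
counts = rng.poisson(expected_counts(true_theta))  # simulated measurement

def neg_log_likelihood(theta):
    mu = expected_counts(theta)
    return np.sum(mu - counts * np.log(mu))        # Poisson NLL up to a constant

fit = minimize_scalar(neg_log_likelihood, bounds=(0.5, 10.0), method="bounded")
print("true theta:", true_theta, " MLE estimate:", round(fit.x, 3))
```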
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-TUPA84  
About • Received ※ 02 August 2022 — Revised ※ 07 August 2022 — Accepted ※ 10 August 2022 — Issue date ※ 05 October 2022
 
WEXF1
Accelerator Computation: Fast, Cheap, and Easy  
 
  • J.R. Cary
    Tech-X, Boulder, Colorado, USA
 
  Computation has seen dramatically increased speed over the past decade, with the rise of device (GPU) computing. To make this increased computing capability more widely available, there is a need to move it to the cloud, where elastic and cheap resources are available. To make this capability more accessible, there is a need for easily used codes, usable after minimal training. This talk presents an approach to all three of these goals, showing that accelerator computations can be made fast, cheap, and easy.  
Slides WEXF1 [6.705 MB]
 
WEPA09 A Parallel Automatic Simulation Tool for Cavity Shape Optimization 634
 
  • L. Ge, Z. Li, C.-K. Ng, L. Xiao
    SLAC, Menlo Park, California, USA
  • M. Beall, B.R. Downie, O. Klaas
    Simmetrix Inc., Clifton Park, USA
 
  Funding: U.S. Department of Energy under contract No. DE-SC0018715.
We present a parallel automatic shape optimization workflow for designing accelerator cavities. The newly developed 3D parallel optimization tool Opt3P based on discrete adjoint methods is used to determine the optimal accelerator cavity shape with the desired spectral response. Initial and updated models, meshes, and design velocities of design parameters for defining the cavity shape are generated with Simmetrix tools for mesh generation (MeshSim), geometry modification and query (GeomSim), and user interface tools (SimModeler). Two shape optimization examples using this automatic simulation workflow will be presented here. One is the TESLA cavity with higher-order-mode (HOM) couplers and the other is a superconducting rf (SRF) gun. The objective for the TESLA cavity is to minimize HOM damping factors and for the SRF gun to minimize the surface electric and magnetic fields while maintaining its operating mode frequency at a prescribed value. The results demonstrate that the automatic simulation tool allows an efficient shape optimization procedure with minimal manual operations. All simulations were performed on the NERSC supercomputer Cori system for solution speedup.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-WEPA09  
About • Received ※ 03 August 2022 — Revised ※ 05 August 2022 — Accepted ※ 09 August 2022 — Issue date ※ 08 October 2022
 
WEPA23 SRF Cavity Instability Detection with Machine Learning at CEBAF 669
 
  • D.L. Turner, R. Bachimanchi, A. Carpenter, J. Latshaw, C. Tennant, L.S. Vidyaratne
    JLab, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
During the operation of CEBAF, one or more unstable superconducting radio-frequency (SRF) cavities often cause beam loss trips while the unstable cavities themselves do not necessarily trip off. Identifying an unstable cavity out of the hundreds of cavities installed at CEBAF is difficult and time-consuming. The present RF controls for the legacy cavities report at only 1 Hz, which is too slow to detect fast transient instabilities. A fast data acquisition system for the legacy SRF cavities is being developed which samples and reports at 5 kHz to allow for detection of transients. A prototype chassis has been installed and tested in CEBAF. An autoencoder based machine learning model is being developed to identify anomalous SRF cavity behavior. The model is presently being trained on the slow (1 Hz) data that is currently available, and a separate model will be developed and trained using the fast (5 kHz) DAQ data once it becomes available. This paper will discuss the present status of the new fast data acquisition system and results of testing the prototype chassis. This paper will also detail the initial performance metrics of the autoencoder model.
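A compact sketch of the autoencoder idea, using a small scikit-learn network rather than the authors' model: the network learns to reconstruct normal cavity signals, and a large reconstruction error on new data flags anomalous behavior (the signal shapes and threshold rule are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Normal "cavity signals": 64-sample windows of a noisy damped waveform
t = np.linspace(0.0, 1.0, 64)
def window(jitter=0.0):
    return np.exp(-3 * t) * np.cos(2 * np.pi * (5 + jitter) * t) + 0.02 * rng.normal(size=64)

X_train = np.array([window() for _ in range(2000)])
scaler = StandardScaler().fit(X_train)

# Autoencoder: inputs are reconstructed through an 8-neuron bottleneck
ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), max_iter=1000, random_state=0)
ae.fit(scaler.transform(X_train), scaler.transform(X_train))

def recon_error(x):
    z = scaler.transform(x.reshape(1, -1))
    return float(np.mean((ae.predict(z) - z) ** 2))

threshold = np.percentile([recon_error(x) for x in X_train[:200]], 99)
anomalous = window(jitter=2.0)                       # frequency shift = "unstable cavity"
print("normal error: %.4f  anomalous error: %.4f  threshold: %.4f"
      % (recon_error(window()), recon_error(anomalous), threshold))
```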
 
Poster WEPA23 [1.859 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-WEPA23  
About • Received ※ 01 August 2022 — Revised ※ 04 August 2022 — Accepted ※ 09 August 2022 — Issue date ※ 24 August 2022
 
WEPA24 pyJSPEC - A Python Module for IBS and Electron Cooling Simulation 672
 
  • H. Zhang, S.V. Benson, M.W. Bruker, Y. Zhang
    JLab, Newport News, Virginia, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177.
Intrabeam scattering (IBS) is an important collective effect that can degrade the quality of a high-intensity beam, and electron cooling is a method to mitigate the IBS effect. JSPEC (JLab Simulation Package on Electron Cooling) is an open-source C++ program developed at Jefferson Lab, which simulates the evolution of the ion beam under the IBS and/or the electron cooling effect. pyJSPEC, a Python wrapper of the C++ code for the Python 3.x environment, has recently been developed and released. It allows users to run JSPEC simulations in a Python environment. It also makes it possible for JSPEC to work together with other accelerator and beam modeling programs, as well as with the rich ecosystem of Python tools for data visualization, optimization, machine learning, and more. In this paper, we will introduce the features of pyJSPEC and demonstrate how to use it with sample code and numerical results.
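pyJSPEC's actual API is not reproduced here; as a generic illustration of the kind of evolution such a simulation produces, the sketch below integrates a toy emittance equation in which an IBS-like growth term weakens with growing emittance while cooling damps it at an assumed constant rate:

```python
import numpy as np

# Toy model for illustration only: the IBS-driven growth term weakens as the
# emittance grows, while electron cooling damps the emittance at a fixed rate.
g_ibs = 2.0e-9          # m rad / s, assumed IBS growth strength at the reference emittance
eps_ref = 1.0e-6        # m rad, reference emittance for the IBS term
rate_cool = 1.0 / 300.0 # 1/s, assumed cooling rate

eps = 2.0e-6            # initial rms emittance [m rad]
dt, t_end = 0.5, 3600.0

for step in range(int(t_end / dt)):
    d_eps = (g_ibs * eps_ref / eps - rate_cool * eps) * dt
    eps += d_eps

eps_eq = np.sqrt(g_ibs * eps_ref / rate_cool)
print("emittance after 1 h [m rad]: %.3e" % eps)
print("analytic equilibrium of this toy model [m rad]: %.3e" % eps_eq)
```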
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-WEPA24  
About • Received ※ 02 August 2022 — Revised ※ 08 August 2022 — Accepted ※ 11 August 2022 — Issue date ※ 26 August 2022
 
WEPA29 Real-Time Cavity Fault Prediction in CEBAF Using Deep Learning 687
 
  • M. Rahman, K.M. Iftekharuddin
    ODU, Norfolk, Virginia, USA
  • A. Carpenter, T.S. McGuckin, C. Tennant, L.S. Vidyaratne
    JLab, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
Data-driven prediction of future faults is a major research area for many industrial applications. In this work, we present a new procedure of real-time fault prediction for superconducting radio-frequency (SRF) cavities at the Continuous Electron Beam Accelerator Facility (CEBAF) using deep learning. CEBAF has been afflicted by frequent downtime caused by SRF cavity faults. We perform fault prediction using pre-fault RF signals from C100-type cryomodules. Using the pre-fault signal information, the new algorithm predicts the type of cavity fault before the actual onset. The early prediction may enable potential mitigation strategies to prevent the fault. In our work, we apply a two-stage fault prediction pipeline. In the first stage, a model distinguishes between faulty and normal signals using a U-Net deep learning architecture. In the second stage of the network, signals flagged as faulty by the first model are classified into one of seven fault types based on learned signatures in the data. Initial results show that our model can successfully predict most fault types 200 ms before onset. We will discuss reasons for poor model performance on specific fault types.
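A schematic sketch of a two-stage pipeline of this kind, with simple scikit-learn models on synthetic feature windows standing in for the authors' U-Net-based networks: stage one flags faulty windows, stage two classifies only the flagged ones:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Synthetic pre-fault "RF signal" feature windows: 3000 normal, 700 faulty of 7 types
X_norm = rng.normal(0.0, 1.0, (3000, 32))
fault_type = rng.integers(0, 7, 700)
X_fault = rng.normal(0.0, 1.0, (700, 32))
X_fault[np.arange(700), fault_type] += 4.0          # each fault type shifts one feature

X = np.vstack([X_norm, X_fault])
is_fault = np.r_[np.zeros(3000, int), np.ones(700, int)]

# Stage 1: binary faulty-vs-normal detector
stage1 = LogisticRegression(max_iter=2000).fit(X, is_fault)

# Stage 2: fault-type classifier trained only on faulty windows
stage2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_fault, fault_type)

def predict(window):
    if stage1.predict(window.reshape(1, -1))[0] == 0:
        return "normal"
    return "fault type %d" % stage2.predict(window.reshape(1, -1))[0]

print(predict(X_norm[0]), "|", predict(X_fault[0]), "(true type %d)" % fault_type[0])
```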
 
Poster WEPA29 [1.339 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-WEPA29  
About • Received ※ 02 August 2022 — Revised ※ 07 August 2022 — Accepted ※ 11 August 2022 — Issue date ※ 17 August 2022
 
WEPA38 Progress on Machine Learning for the SNS High Voltage Converter Modulators 715
 
  • M.I. Radaideh, S.M. Cousineau, D. Lu
    ORNL, Oak Ridge, Tennessee, USA
  • T.J. Britton, K. Rajput, M. Schram, L.S. Vidyaratne
    JLab, Newport News, Virginia, USA
  • G.C. Pappas, J.D. Walden
    ORNL RAD, Oak Ridge, Tennessee, USA
 
  The High-Voltage Converter Modulators (HVCM) used to power the klystrons in the Spallation Neutron Source (SNS) linac were selected as one area to explore machine learning due to reliability issues in the past and the availability of large sets of archived waveforms. Progress in the past two years has resulted in generating a significant amount of simulated and measured data for training neural network models such as recurrent neural networks, convolutional neural networks, and variational autoencoders. Applications in anomaly detection, fault classification, and prognostics of capacitor degradation were pursued in collaboration with the Jefferson Laboratory, and early promising results were achieved. This paper will discuss the progress to date and present results from these efforts.  
Poster WEPA38 [1.320 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-WEPA38  
About • Received ※ 25 July 2022 — Revised ※ 08 August 2022 — Accepted ※ 11 August 2022 — Issue date ※ 03 October 2022
 
WEPA40 The L-CAPE Project at FNAL 719
 
  • M. Jain, V.C. Amatya, G.U. Panapitiya, J.F. Strube
    PNNL, Richland, Washington, USA
  • B.F. Harrison, K.J. Hazelwood, W. Pellico, B.A. Schupbach, K. Seiya, J.M. St. John
    Fermilab, Batavia, Illinois, USA
 
The controls system at FNAL records data asynchronously from several thousand Linac devices at their respective cadences, ranging from 15 Hz down to once per minute. In case of downtime, current operations are mostly reactive, investigating the cause of an outage and labeling it after the fact. However, as the Linac is one of the most upstream systems of the FNAL accelerator complex, foreknowledge of an impending downtime as well as its duration could prompt downstream systems to go into standby, potentially leading to energy savings. The goals of the Linac Condition Anomaly Prediction of Emergence (L-CAPE) project that started in late 2020 are (1) to apply data-analytic methods to improve the information that is available to operators in the control room, and (2) to use machine learning to automate the labeling of outage types as they occur and to discover patterns in the data that could lead to the prediction of outages. We present an overview of the challenges in dealing with time-series data from 2000+ devices, our approach to developing an ML-based automated outage-labeling system, and the status of augmenting operations by identifying the devices that are most predictive of an outage.
Poster WEPA40 [1.870 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-WEPA40  
About • Received ※ 03 August 2022 — Revised ※ 12 August 2022 — Accepted ※ 17 August 2022 — Issue date ※ 31 August 2022
 
WEPA70 Tensor Decomposition for the Compression and Analysis of 10 kHz BPM Data 792
 
  • J. Choi, Y. Hidaka, Y. Hu, G.M. Wang
    BNL, Upton, New York, USA
 
  Funding: This work is supported in part by the U.S. Department of Energy (DOE) under contract No. DE-SC0012704.
In the NSLS-II storage ring during user operation, fast-acquisition (FA) 10-kHz BPM data are collected, and their spectral properties are analyzed. Various periodograms and spectral peaks are provided every minute, and they are very useful in identifying any changes in the orbit. Unfortunately, because of the large amount of data, only a few derived numbers are continually archived for later study, and the full raw data are saved only by hand when needed. We are developing methods utilizing tensor decomposition techniques to save and analyze the FA data; this paper reports the current status of this project.
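A simplified sketch of the compression idea, using mode-0 matricization plus a truncated SVD rather than a full CP/Tucker decomposition, and synthetic data in place of real FA streams: correlated BPM readings compress well because a few modes capture most of the signal:

```python
import numpy as np

rng = np.random.default_rng(9)

n_bpm, n_blocks, n_samples = 180, 10, 2000         # BPMs x time blocks x samples per block
t = np.arange(n_samples) / 1.0e4                   # 10 kHz sampling

# Synthetic FA data: a few common orbit modes seen by all BPMs, plus noise
modes = np.sin(2 * np.pi * np.outer([12.0, 60.0, 180.0], t))          # 3 temporal modes
patterns = rng.normal(size=(n_bpm, 3))                                # BPM response patterns
data = np.einsum("bk,kt->bt", patterns, modes)[:, None, :] \
       * rng.uniform(0.8, 1.2, size=(1, n_blocks, 1)) \
       + 0.01 * rng.normal(size=(n_bpm, n_blocks, n_samples))

# Mode-0 matricization: (BPM) x (block * sample), then truncated SVD
unfolded = data.reshape(n_bpm, -1)
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
rank = 3
approx = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

stored = U[:, :rank].size + rank + Vt[:rank].size
print("compression ratio: %.1fx" % (unfolded.size / stored))
print("relative reconstruction error: %.3e"
      % (np.linalg.norm(unfolded - approx) / np.linalg.norm(unfolded)))
```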
 
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-WEPA70  
About • Received ※ 02 August 2022 — Revised ※ 08 August 2022 — Accepted ※ 10 August 2022 — Issue date ※ 28 September 2022
 
THXE1
Accelerators for Quantum Technologies  
 
  • A.S. Romanenko
    Fermilab, Batavia, Illinois, USA
  • K.A. Brown
    BNL, Upton, New York, USA
  • S. Sosa
    ODU, Norfolk, Virginia, USA
 
  Dr. A. Romanenko will present "SRF-based accelerator technologies for Quantum" followed by "Large Ion Traps for Quantum Information Systems" presented by Dr. K. Brown. "Examples of AI/ML enabled by HPCs in design applied to a QIS" presentation by Dr. S. Sosa will be followed by panel discussion and audience questions.  
Slides THXE1 [7.433 MB]
 
THXE2
Accelerators for Quantum Technologies  
 
  • K.A. Brown
    BNL, Upton, New York, USA
 
  Dr. A. Romanenko will present "SRF-based accelerator technologies for Quantum" followed by "Large Ion Traps for Quantum Information Systems" presented by Dr. K. Brown. "Examples of AI/ML enabled by HPCs in design applied to a QIS" presentation by Dr. S. Sosa will be followed by panel discussion and audience questions.  
Slides THXE2 [6.768 MB]
 
THXE3
Accelerators for Quantum Technologies  
 
  • S. Sosa
    ODU, Norfolk, Virginia, USA
 
  Dr. A. Romanenko will present "SRF-based accelerator technologies for Quantum" followed by "Large Ion Traps for Quantum Information Systems" presented by Dr. K. Brown. "Examples of AI/ML enabled by HPCs in design applied to a QIS" presentation by Dr. S. Sosa will be followed by panel discussion and audience questions.  
Slides THXE3 [1.958 MB]
 
FRXE1 Bayesian Algorithms for Practical Accelerator Control and Adaptive Machine Learning for Time-Varying Systems 921
 
  • A. Scheinker
    LANL, Los Alamos, New Mexico, USA
  • R.J. Roussel
    SLAC, Menlo Park, California, USA
 
Particle accelerators are complicated machines with thousands of coupled, time-varying components. The electromagnetic fields of accelerator devices such as magnets and RF cavities drift and are uncertain due to external disturbances, vibrations, temperature changes, and hysteresis. Accelerated charged particle beams are complex objects with 6D phase space dynamics governed by collective effects such as space charge forces and coherent synchrotron radiation, and their initial phase space distributions change in unexpected and difficult-to-measure ways. This two-part tutorial presents recent developments in Bayesian methods and adaptive machine learning (ML) techniques for accelerators. Part 1: We introduce Bayesian control algorithms and describe how they can be customized to solve practical accelerator-specific problems, including online characterization and optimization. Part 2: We give an overview of adaptive ML (AML), which combines adaptive model-independent feedback with physics-informed ML architectures to make ML tools robust to time variation (distribution shift) and to enable their use beyond the span of the training data without relying on re-training.
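A toy sketch of model-independent adaptive feedback in the spirit of Part 2: a single-parameter dither-and-demodulate loop tracks a slowly drifting optimum of a noisy cost (the cost function, gains, and drift are assumptions, not the tutorial's specific algorithms):

```python
import numpy as np

rng = np.random.default_rng(10)

def cost(setting, optimum):
    """Unknown, noisy, time-varying cost (e.g. negative beam signal)."""
    return (setting - optimum) ** 2 + 0.01 * rng.normal()

setting = 0.0                       # initial actuator value
amp, omega, gain = 0.05, 1.7, 2.0   # dither amplitude, frequency, feedback gain
avg = 0.0                           # running estimate of the mean cost

for k in range(4000):
    optimum = 1.0 + 0.3 * np.sin(2 * np.pi * k / 4000)    # slowly drifting optimum
    dither = amp * np.sin(omega * k)
    c = cost(setting + dither, optimum)
    avg += 0.05 * (c - avg)                               # crude high-pass via moving average
    # Demodulation: the in-phase component of the cost approximates the local gradient
    setting -= gain * amp * np.sin(omega * k) * (c - avg)

print("final setting: %.3f  (drifting optimum ended at %.3f)" % (setting, optimum))
```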
Slides FRXE1 [34.283 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-NAPAC2022-FRXE1  
About • Received ※ 08 August 2022 — Revised ※ 10 August 2022 — Accepted ※ 12 August 2022 — Issue date ※ 27 September 2022
 
FRXE2
Bayesian Algorithms for Practical Accelerator Control and Adaptive Machine Learning for Time-Varying Systems  
 
  • R.J. Roussel
    SLAC, Menlo Park, California, USA
 
Particle accelerators are complicated machines with thousands of coupled, time-varying components. The electromagnetic fields of accelerator devices such as magnets and RF cavities drift and are uncertain due to external disturbances, vibrations, temperature changes, and hysteresis. Accelerated charged particle beams are complex objects with 6D phase space dynamics governed by collective effects such as space charge forces and coherent synchrotron radiation, and their initial phase space distributions change in unexpected and difficult-to-measure ways. This two-part tutorial presents recent developments in Bayesian methods and adaptive machine learning (ML) techniques for accelerators. Part 1: We introduce Bayesian control algorithms and describe how they can be customized to solve practical accelerator-specific problems, including online characterization and optimization. Part 2: We give an overview of adaptive ML (AML), which combines adaptive model-independent feedback with physics-informed ML architectures to make ML tools robust to time variation (distribution shift) and to enable their use beyond the span of the training data without relying on re-training.