The purpose of this workshop is to exchange information on the latest scientific results and computational methods in lattice QCD simulation and related fields. The program will consist of invited talks and poster sessions. The number of participants is limited to 50.
This workshop is held IN PERSON only at R-CCS, Kobe, Japan.
Information for Poster Presenters:
All presenters:
Please upload your presentation slides/poster to the Contributions page.
In this presentation, I will discuss applications of machine learning techniques in lattice QCD. Compared with standard machine-learning applications such as image processing, lattice QCD involves physical symmetries and quantum-statistical features. I mainly explain the gauge covariant neural network, which is capable of respecting these symmetries and features, and its applications in the context of exact simulations. I will also briefly mention the LatticeQCD.jl library, which enables lattice QCD simulations using the gauge covariant neural network with LLVM compiling technology.
We demonstrate that a state-of-the-art multi-grid preconditioner can be learned efficiently by gauge-equivariant neural networks. We show that the models require minimal re-training on different gauge configurations of the same gauge ensemble and to a large extent remain efficient under modest modifications of ensemble parameters. We also demonstrate that important paradigms such as communication avoidance are straightforward to implement in this framework.
We present results for the axial charge and root-mean-square (RMS) radii of the nucleon obtained from 2+1 flavor lattice QCD at the physical point with a large spatial extent of about 10 fm. Our calculations are performed with the PACS10 gauge configurations generated by the PACS Collaboration with the six-step stout-smeared $O(a)$ improved Wilson-clover quark action and Iwasaki gauge action at $\beta$ = 1.82 and 2.00, corresponding to lattice spacings of 0.085 fm and 0.063 fm, respectively. We first evaluate the value of $g_A/g_V$, which requires no renormalization and thus directly yields the renormalized axial charge. Moreover, we also calculate the nucleon elastic form factors and determine three kinds of isovector RMS radii, namely the electric, magnetic, and axial ones, at the two lattice spacings. We finally discuss the discretization uncertainties on the renormalized axial charge and the isovector RMS radii towards the continuum limit.
In this talk, I will discuss the method of the heavy-quark operator product expansion (HOPE) in lattice-QCD computations for parton physics. The extraction of the Mellin moments of the pion light-cone distribution amplitude is employed as an illustration of this approach. I will present numerical results of the second and the fourth moments (the latter being exploratory).
We report on JLQCD's studies on B meson semileptonic decays.
We perform a non-perturbative lattice calculation of the decay rates for inclusive semi-leptonic decays of charmed mesons. In view of the long-standing tension in the determination of the CKM matrix elements $|V_{ub}|$ and $|V_{cb}|$ from exclusive and inclusive processes, the use of lattice QCD has recently been extended towards the description of inclusive decays. Since the determination of hadronic input parameters from QCD-based methods requires independent tests, we focus on the charm sector, which not only offers experimental data but also well-determined CKM parameters.
We carry out a pilot lattice simulation of the $D_s \rightarrow X_s \ell\nu$ decay and explore improvements of existing techniques. Our simulation employs Möbius domain-wall charm and strange quarks whose masses are tuned to be approximately physical, and we cover the whole kinematical region. We report on our progress in analyzing different sources of systematic effects, especially the contribution from the extrapolation of the kernel function chosen for the Chebyshev approximation.
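The idea of approximating a smearing kernel by Chebyshev polynomials can be illustrated with a small sketch. The sigmoid kernel, its width, and the number of terms below are hypothetical stand-ins, not the values used in the analysis; the point is only that a smooth kernel on $[-1,1]$ is captured to high accuracy by a modest number of Chebyshev terms.

```python
import numpy as np

def cheb_coeffs(f, n):
    """Chebyshev coefficients of f on [-1, 1] from values at the Chebyshev nodes."""
    j = np.arange(n)
    theta = np.pi * (j + 0.5) / n
    fx = f(np.cos(theta))                       # f sampled at Chebyshev nodes
    c = (2.0 / n) * np.array([np.sum(fx * np.cos(k * theta)) for k in range(n)])
    c[0] /= 2.0                                 # standard halving of the k=0 term
    return c

# Hypothetical smeared kernel: a smooth stand-in for a sharp step at omega0
sigma, omega0 = 0.2, 0.0
kernel = lambda x: 1.0 / (1.0 + np.exp((omega0 - x) / sigma))

c = cheb_coeffs(kernel, 40)
x = np.linspace(-1.0, 1.0, 1001)
approx = np.polynomial.chebyshev.chebval(x, c)
max_err = np.max(np.abs(approx - kernel(x)))    # geometric convergence for smooth kernels
```

The smoother the kernel (larger `sigma`), the faster the coefficients decay, which is why the choice of kernel smearing enters the systematic-error budget.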
The type IIB matrix model, also known as the IKKT model, is a promising candidate for the non-perturbative formulation of string theory. Its Lorentzian version, in which the indices are contracted using the Lorentzian metric, has a sign problem stemming from the factor $e^{iS}$ in the partition function (where $S$ is the action). It has turned out that, under the Wick rotation as it stands, the Lorentzian version is equivalent to the Euclidean version, in which the SO(10) rotational symmetry is spontaneously broken to SO(3). This leads us to add a Lorentz-invariant mass term to the Lorentzian version of the type IIB matrix model. The model we study still involves a sign problem, and we perform numerical simulations based on the complex Langevin method, a stochastic process for complexified variables. We discuss the possibility of the emergence of a (3+1)-dimensional expanding universe.
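The complex Langevin idea can be seen in a one-variable toy model (entirely illustrative, unrelated to the matrix model itself): for a complex Gaussian action $S = \sigma x^2/2$ with $\mathrm{Re}\,\sigma > 0$, the complexified Langevin process is known to converge to the exact result $\langle x^2\rangle = 1/\sigma$. The parameters below are hypothetical.

```python
import numpy as np

# Complex Langevin for a toy complex action S(x) = sigma x^2 / 2, sigma = 1 + i.
# The variable x is complexified to z; the noise stays real.
rng = np.random.default_rng(0)
sigma = 1.0 + 1.0j
dt, n_steps, n_therm = 0.01, 400_000, 10_000

z = 0.0 + 0.0j
acc = 0.0 + 0.0j
for step in range(n_steps):
    drift = -sigma * z                 # drift term = -dS/dz
    z = z + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    if step >= n_therm:
        acc += z * z
mean_z2 = acc / (n_steps - n_therm)    # should approach 1/sigma = 0.5 - 0.5i
```

The imaginary part of $\langle z^2\rangle$ is reproduced even though the sampled drift pushes $z$ off the real axis, which is exactly how the method evades the sign problem in convergent cases.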
We consider fermion systems on a square lattice with a mass term having a curved domain-wall. It is shown that massless and chiral edge states appear on the wall. In the cases of $S^1$ and $S^2$ domain-walls embedded into flat cubic lattices, we find that these edge modes feel gravity through the induced Spin or Spin$^c$ connections. The gravitational effect is encoded in the Dirac eigenvalue spectrum as a gap from zero. In the standard continuum extrapolation of the square lattice, we find good agreement with the analytic prediction of the continuum theory. We also discuss how to couple the system to the gauge field and how to detect its nontrivial anomaly inflow between the bulk and edge.
We present the results of nucleon structure studies measured in 2+1 flavor QCD with physical light quarks with large spatial extents of about 10 and 5 fm. Our calculations are performed on 2+1 flavor gauge configurations generated by the PACS Collaboration with the stout-smeared O(a) improved Wilson fermions and Iwasaki gauge action at beta=1.82, corresponding to a lattice spacing of 0.085 fm. This poster mainly focuses on the nucleon isovector scalar and tensor couplings. In particular, the tensor coupling is the first Mellin moment of the transversity parton distribution and is itself related to the quark EDM.
What is the origin of the glueball masses? This is a fundamental question. In pure Yang-Mills theory, there is no mass scale at the classical level, while the breaking of scale invariance is induced by quantum effects. This is the trace anomaly, which is associated with the non-vanishing trace of the energy-momentum tensor (EMT) operator. In this context, the origin of the glueball masses can be attributed to the trace anomaly. Our purpose is to quantify how much the trace anomaly contributes to the glueball masses by using lattice simulations. Once one has the renormalized EMT operator $T_{\mu\nu}$, the hadron matrix element of $T_{00}$ directly provides the hadron mass. Therefore, it is natural to consider the mass decomposition in terms of the trace and traceless parts of the EMT operator. However, it is hard to construct the renormalized EMT operator on the lattice, where the loss of translational invariance is inevitable due to the discretization of space-time. To overcome this problem, H. Suzuki proposed that the Yang-Mills gradient flow approach can be utilized to construct the renormalized EMT operator from the flowed fields. In this talk, we directly measure the glueball matrix element of $T_{00}$ calculated by the gradient flow method, and then evaluate the contribution of the trace anomaly to the scalar glueball mass.
Investigation of QCD thermodynamics for $N_f$=2+1 along the lines of constant physics with Möbius domain wall fermions is underway. At our coarsest lattice, $N_t$=12, reweighting to overlap fermions is not successful. To use domain wall fermions with a residual mass larger than the average physical $ud$ quark mass, careful treatment of the residual chiral symmetry breaking is necessary. One example is the chiral condensate, where a UV power divergence associated with the residual chiral symmetry breaking emerges with a coefficient not known a priori. In this presentation we first introduce the setup of the computations and then discuss methodologies to overcome potential problems towards the continuum limit in this setup.
We have been developing the general-purpose lattice QCD code set Bridge++ [1], whose new version contains optimizations for A64FX systems such as the supercomputer Fugaku. In this presentation, we show the benchmark results of Bridge++ on Fugaku.
The bottleneck of LQCD applications is solving linear equations, Dx = b, where the fermion matrix D is a large sparse matrix and its operation is a stencil computation on a four-dimensional space-time lattice. We apply iterative algorithms to solve this equation. Therefore, the performance of the D multiplication is crucially important. The shape of the matrix D is not unique, and Bridge++ has implementations of several types of D that are widely used in LQCD simulations. The benchmark covers the performance of the following types of D: Wilson, Clover, Staggered, Domainwall, and their site even-odd preconditioned versions. In the implementation, we adopt the so-called Array of Structure of Array (AoSoA) data structure to use the SIMD feature of A64FX, and the lattice site degrees of freedom are vectorized. We use 2-dimensional tiling of the lattice sites for the SIMD vectorization. The kernel codes are written using the Arm C-Language Extension (ACLE). The communication to exchange the boundary data is overlapped with the bulk computations. More details of the implementation for Fugaku are found in [2] and [3].
As mixed precision schemes are often used in the iterative solvers, we implement the fermion matrix D in both double and single precision. The performance of the D multiplication is around 400 GFlops/node in single precision. We observe very good weak scaling up to 512 nodes, the largest benchmark we tried. We also observe good weak scaling of the iterative BiCGStab (or CG) solvers.
[1] Lattice QCD code Bridge++, https://bridge.kek.jp/lattice-code/.
[2] Tatsumi Aoyama, Issaku Kanamori, Kazuyuki Kanaya, Hideo Matsufuru and Yusuke Namekawa, PoS LATTICE2022 (2023) 284, https://doi.org/10.22323/1.430.0284.
[3] Issaku Kanamori, Keigo Nitadori and Hideo Matsufuru, to appear in International Conference on High Performance Computing in Asia-Pacific Region Workshops (HPCASIA-WORKSHOP 2023), February 27-March 2, 2023, Singapore. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3581576.3581610
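The mixed-precision idea used in such solvers can be sketched with iterative refinement: an outer loop in double precision wrapped around a cheap single-precision inner solve. The toy matrix and direct inner solve below are hypothetical stand-ins for the fermion matrix D and a single-precision Krylov solver; the point is that the double-precision residual update recovers full accuracy.

```python
import numpy as np

# Mixed-precision iterative refinement on a well-conditioned SPD toy matrix.
rng = np.random.default_rng(1)
n = 64
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)        # stand-in for the fermion matrix D
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(10):
    r = b - A @ x                                       # residual in double precision
    A32, r32 = A.astype(np.float32), r.astype(np.float32)
    d = np.linalg.solve(A32, r32).astype(np.float64)    # cheap single-precision solve
    x += d                                              # correction in double precision
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Each outer iteration shrinks the error by roughly the condition number times single-precision machine epsilon, so a handful of cheap inner solves reaches double-precision accuracy.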
One of the motivations for studying QCD thermodynamics is to understand the chiral symmetry restoration at finite temperature. Lattice QCD (LQCD) calculations with chiral fermions at finite temperature can nowadays be carried out on modern supercomputers. M\"{o}bius domain wall fermions in 5-d represent one realization of chiral fermions, with slight chiral symmetry breaking due to the finite size of the fifth dimension. Therefore, we refer to them as ``almost'' chiral fermions.
In this poster, we will examine the use of ``almost'' chiral fermions to evaluate the effectiveness and usefulness of mass reweighting in the light quark sector on finite temperature lattices. In the ``almost'' chiral fermion case, one needs to perform the configuration generation twice. The first step involves identifying the small amount of chiral symmetry breaking ($m_{res}$), and the second step involves correcting the input quark masses by subtracting the $m_{res}$ effect. The mass reweighting method allows observables generated at one mass value to be reweighted to other mass values, thus eliminating the need to generate new configurations with corrected input masses. We will use mass reweighting on ensembles generated by the JLQCD collaboration that utilize 5-d M\"{o}bius domain wall fermions.
We will use the Bridge++ 2.0 code base on the Fugaku supercomputer to calculate reweighting factors with practical parameters and demonstrate when the method succeeds and when it fails. This is important because only a limited number of configurations is available. Additionally, we will apply the reweighting method to real calculations and present the observables before and after reweighting.
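The reweighting logic can be sketched in a toy model (hypothetical, not the lattice setup): samples drawn at a "mass" $m$ in the weight $e^{-m x^2/2}$ are reweighted to a corrected mass $m'$ with factors $w(x) = e^{-(m'-m)x^2/2}$, instead of regenerating the ensemble.

```python
import numpy as np

# Toy mass reweighting: ensemble generated at mass m, observable evaluated at m'.
rng = np.random.default_rng(2)
m, m_corrected = 1.0, 1.2
x = rng.standard_normal(200_000) / np.sqrt(m)     # "configurations" at mass m

w = np.exp(-(m_corrected - m) * x**2 / 2.0)       # reweighting factors
x2_reweighted = np.sum(w * x**2) / np.sum(w)      # <x^2> at the corrected mass
exact = 1.0 / m_corrected
```

When the two masses differ too much, the weights fluctuate wildly and the effective sample size collapses, which is the failure mode one must diagnose with a limited number of configurations.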
The similarity between the Yang-Mills gradient flow and stout smearing was first pointed out by M. Lüscher in 2010, and a rigorous proof of the equivalence in the zero limit of the lattice spacing and the smearing parameter was recently given by K. Sakai and S. Sasaki.
However, it is not obvious that the two remain equivalent within some numerical precision at finite parameters; we therefore verify the equivalence by comparing the energy density $\langle E\rangle$ measured in numerical simulations.
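For orientation, the correspondence can be stated schematically (an illustrative summary with conventions as in the continuum-flow literature, not the authors' derivation). The Wilson flow evolves the links as $\partial_t V_\mu(x,t) = -g_0^2\,[\partial_{x,\mu} S_W(V)]\, V_\mu(x,t)$ with $V_\mu(x,0)=U_\mu(x)$, so a single Euler step of size $\epsilon$ gives $V_\mu(x,\epsilon) = \exp\{-\epsilon\, g_0^2\, \partial_{x,\mu} S_W(U)\}\, U_\mu(x) + O(\epsilon^2)$. This has the same form as one stout-smearing step $U'_\mu(x) = \exp\{\rho\, Q_\mu(x)\}\, U_\mu(x)$ with $\rho = \epsilon$, since $Q_\mu$ is, up to normalization and sign conventions, the traceless anti-Hermitian projection of the staple sum entering $-\partial_{x,\mu} S_W$. The nontrivial question addressed here is how well this identification survives at finite $\epsilon$, $\rho$, and lattice spacing.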
We apply the tensor renormalization group method to the (1+1)-dimensional SU(2) principal chiral model at finite chemical potential, using the Gauss-Legendre quadrature to discretize the SU(2) Lie group. The internal energy at vanishing chemical potential $\mu = 0$ shows good consistency with the predictions of the strong and weak coupling expansions. This indicates the effectiveness of the Gauss-Legendre quadrature for the partitioning of the SU(2) Lie group. In the finite density region with $\mu\neq0$ at strong coupling, we observe the Silver Blaze phenomenon for the number density.
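A minimal sanity check of Gauss-Legendre quadrature on SU(2) (a simplified illustration, not the partitioning used in the talk): for a class function $f(\mathrm{tr}\,U)$, the Haar integral reduces to the one-dimensional form $\int dU\, f(\mathrm{tr}\,U) = \tfrac{2}{\pi}\int_0^\pi \sin^2\!\chi\, f(2\cos\chi)\, d\chi$, which the quadrature handles to machine precision.

```python
import numpy as np

# Gauss-Legendre quadrature for SU(2) Haar integrals of class functions.
nodes, weights = np.polynomial.legendre.leggauss(32)   # nodes on [-1, 1]
chi = 0.5 * np.pi * (nodes + 1.0)                      # map to [0, pi]
w = 0.5 * np.pi * weights

def haar_class(f):
    """Haar integral of a class function f(tr U) over SU(2)."""
    return (2.0 / np.pi) * np.sum(w * np.sin(chi)**2 * f(2.0 * np.cos(chi)))

i1 = haar_class(lambda t: t)        # \int dU tr U     = 0
i2 = haar_class(lambda t: t * t)    # \int dU |tr U|^2 = 1 (character orthogonality)
```

Reproducing the character orthogonality relations is the natural test that a chosen discretization of the group manifold is faithful.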
In this work, we investigate the CP(1) model using the tensor renormalization group technique, which does not suffer from the sign problem. The phase structure of the CP(1) model with a theta term is an interesting topic, since it could be related to the well-known Haldane conjecture. We apply the recent tensor renormalization technique to the CP(1) model and show that the model has no second-order transition at $\theta = \pi$ for $\beta$ up to 1.1. We also provide a detailed discussion of the systematic errors in our calculation.
We investigate the finite-temperature QCD phase transition with three degenerate quark flavors using M\"{o}bius domain wall fermions. To explore the order of the phase transition in the lower-left corner of the Columbia plot and, if possible, to locate the critical endpoint, we performed simulations at temperatures around 181 and 121 MeV with lattice spacing $a=0.1361(20)$~fm, corresponding to temporal lattice extents $N_{\tau}=8,12$, with varying quark mass for two different volumes with aspect ratios $N_{\sigma}/N_{\tau}$ ranging from 2 to 3. By analyzing the volume and mass dependence of the chiral condensate, the disconnected chiral susceptibility, and the Binder cumulant, we find that there is a crossover at $m_q^{\mathrm{\overline {MS}}}(2\, \mathrm{GeV}) \sim 44\, \mathrm{MeV}$ for $\mathrm{T_{pc}}\sim$ 181 MeV. At temperature 121 MeV, the Binder cumulant suggests a crossover at $m_q^{\mathrm{\overline {MS}}}(2\, \mathrm{GeV}) \sim 3.7\, \mathrm{MeV}$,
although a study of volume dependence would be important to confirm this.
Understanding the nature of correlated quantum many-body systems is the main purpose of modern condensed matter physics. The current boom in quantum computing techniques offers a new way to treat these challenging systems: the quantum simulation approach. Using a quantum computer, which is itself a controllable quantum many-body system, we can simulate other correlated quantum systems in which we are interested. However, since we are currently in the noisy intermediate-scale quantum (NISQ) era, specific algorithms need to be developed to make maximal use of current noisy quantum devices (NISQ devices). Here, I will introduce one of the main algorithms for NISQ devices, the variational quantum eigensolver (VQE), and its application to the study of correlated quantum many-body systems.
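The VQE loop can be illustrated with a deliberately tiny toy (hypothetical, not from the talk): a two-site Heisenberg model $H = XX + YY + ZZ$, a one-parameter ansatz circuit simulated classically, and a crude grid search standing in for the classical optimizer. The exact ground state is the singlet with energy $-3$.

```python
import numpy as np

# Minimal VQE sketch for the two-site Heisenberg model H = XX + YY + ZZ.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)

def ansatz(theta):
    """|psi(theta)> = CNOT (Ry(theta) x I) |01>, a one-parameter trial state."""
    ry = np.array([[np.cos(theta/2), -np.sin(theta/2)],
                   [np.sin(theta/2),  np.cos(theta/2)]], dtype=complex)
    cnot = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=complex)
    psi0 = np.array([0, 1, 0, 0], dtype=complex)   # |01>
    return cnot @ np.kron(ry, np.eye(2)) @ psi0

def energy(theta):
    psi = ansatz(theta)
    return np.real(psi.conj() @ H @ psi)           # <psi|H|psi>

thetas = np.linspace(-np.pi, np.pi, 2001)          # classical outer loop (grid search)
e_min = min(energy(t) for t in thetas)
exact = np.min(np.linalg.eigvalsh(H))              # exact ground-state energy
```

On NISQ hardware the energy evaluation is replaced by measurements of the Pauli terms, and the classical optimizer iterates over the circuit parameters.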
The renormalization group (RG) $\beta$ function describes the running of the renormalized coupling and connects the ultraviolet and infrared regimes of quantum field theories. Using different gradient flow schemes, we define renormalized couplings and determine the RG $\beta$ function using a more traditional step-scaling method as well as the concept of the continuous $\beta$ function, which showcases a direct relation between gradient flow and RG flow.
We present results for SU(3) gauge systems with different numbers of flavors in the fundamental representation and discuss the advantages of the continuous $\beta$ function. In addition, we point out future applications.
In the early days of QCD, the axial U(1) anomaly was considered to trigger the breaking of the SU(2)_L x SU(2)_R symmetry through topological excitations of gluon fields. However, it has been a challenge for lattice QCD to quantify the effect. In this work, we simulate QCD at high temperatures with a chirally symmetric lattice Dirac operator. The exact chiral symmetry enables us to separate the contribution of the axial U(1) breaking from others among the susceptibilities in the scalar and pseudoscalar channels. Our result in two-flavor QCD indicates that the connected and disconnected chiral susceptibilities, which are conventionally used as probes for SU(2)_L x SU(2)_R breaking, are dominated by the axial U(1) anomaly at temperatures greater than 165 MeV.
I will give an overview of the development directions of Grid on current and future US exascale computers.
I will also give an overview of the USQCD SciDAC-5 algorithm project to develop multiscale algorithms to exploit these.
A simulation framework named “braket” for gate-based quantum circuits with qubits has been developed for massively parallel HPC systems using the state-vector method. On the supercomputer Fugaku, simulation of a 40-qubit circuit is achieved using 1,024 nodes or fewer, and if the full set of nodes is available, we will reach 48 qubits in double precision and 51 qubits in byte precision. The simulation time per gate is less than one second, though it grows for circuits with more than about 40 qubits. As an application, a quantum variational algorithm is tested for a quantum Heisenberg chain with 40 spins, which treats 41 40-qubit circuits and evaluations of the system Hamiltonian between the circuits; therefore a quantum-mechanical state with a total of 41 x 40 = 1640 qubits is simulated exactly, up to numerical accuracy.
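The state-vector method can be sketched generically (braket's internals are not described here; this is a minimal illustration of the technique): the $n$-qubit state is an array of $2^n$ amplitudes, reshaped into $n$ axes so that a gate acts by tensor contraction on the corresponding axes.

```python
import numpy as np

# Minimal state-vector simulator: gates act by contracting qubit axes.
def apply_1q(state, gate, q, n):
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))   # new qubit axis lands at axis 0
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_2q(state, gate, q1, q2, n):
    psi = state.reshape([2] * n)
    g = gate.reshape(2, 2, 2, 2)                     # [out1, out2, in1, in2]
    psi = np.tensordot(g, psi, axes=([2, 3], [q1, q2]))
    return np.moveaxis(psi, [0, 1], [q1, q2]).reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2.0)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=float)

n = 2
state = np.zeros(2**n); state[0] = 1.0               # |00>
state = apply_1q(state, H, 0, n)
state = apply_2q(state, CNOT, 0, 1, n)               # Bell state (|00> + |11>)/sqrt(2)
```

In an HPC setting the $2^n$ amplitudes are distributed over nodes, and gates on high-order qubits require the inter-node communication that dominates at large qubit counts.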
The numerical sign problem is one of the major obstacles to first-principles calculations in a variety of important systems. Typical examples include finite-density QCD, some condensed matter systems such as strongly correlated electron systems and frustrated spin systems, and the real-time dynamics of quantum fields. Until very recently, individual methods were developed for each target system, but over the past decade there has been a movement to find a versatile solution to the sign problem. In this talk, I first explain the essence of the sign problem and outline some of the approaches proposed in line with this movement. I then focus on methods based on the Lefschetz thimble, and argue that the "Worldvolume Hybrid Monte Carlo method" [Fukuma and Matsumoto, arXiv:2012.08468] is a promising method due to its reliability and versatility.
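The essence of the sign problem fits in a few lines (a standard textbook-style illustration, hypothetical parameters): reweighting the complex phase $e^{i\mu x}$ against a positive Gaussian weight gives an average phase $e^{-\mu^2/2}$, exponentially small in $\mu^2$, the analogue of the exponential cost in volume.

```python
import numpy as np

# Average phase <e^{i mu x}> under a Gaussian weight: the sign problem in one line.
x = np.linspace(-12.0, 12.0, 200_001)
rho = np.exp(-x**2 / 2.0)                 # positive "quenched" weight

def avg_phase(mu):
    return np.sum(rho * np.exp(1j * mu * x)) / np.sum(rho)

phases = [abs(avg_phase(mu)) for mu in (1.0, 2.0, 4.0)]
# exact: the characteristic function of the normal distribution, exp(-mu^2 / 2)
```

Once the average phase is of the order of the statistical error, reweighting fails, which is why deformation-based approaches such as Lefschetz thimbles are pursued instead.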
Critical slowing down is one of the major difficulties in lattice QCD. Recently, it has become an urgent problem in the field as precision goals rise and smaller lattice spacings are demanded. As a promising approach towards conquering this problem, we study the idea of the trivializing map proposed by Lüscher. In particular, we study the properties of the map at large $\beta$ using a toy model, and discuss how conventional approximations are insufficient to realize the trivialization. We also consider possible strategies to circumvent the obstacles.
The tensor renormalization group (TRG) approach is a variant of the real-space renormalization group that evaluates the path integral defined on a thermodynamically large lattice without resorting to any probabilistic interpretation of the given Boltzmann weight. Moreover, since the TRG can directly deal with Grassmann variables, the approach can be formulated in the same manner for systems with bosons, fermions, or both. These advantages of the TRG approach have been confirmed by earlier studies of various lattice theories, which suggest that the TRG potentially enables us to investigate parameter regimes that are difficult to access with standard stochastic numerical methods such as the Monte Carlo simulation.
In this talk, we explain our recent applications of the TRG approach to several (3+1)-dimensional lattice field theories, demonstrating its efficiency as a tool for investigating higher-dimensional theories, and discuss future perspectives.
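The starting point of any TRG calculation, the tensor-network representation of the partition function, can be illustrated with the 2D Ising model (a minimal sketch; the coarse-graining step and the (3+1)-dimensional applications of the talk are not shown). Each bond weight $e^{\beta s s'}$ is split as $(WW^T)[s,s']$, giving a four-index site tensor whose contraction reproduces $Z$; here this is verified against brute force on a 2x2 torus, where every bond is doubled by periodicity.

```python
import numpy as np
from itertools import product

# Tensor-network representation of the 2D Ising partition function.
beta = 0.4
c, s = np.cosh(beta), np.sinh(beta)
# Split the bond weight exp(beta s s') = (W W^T)[s, s']
W = np.array([[np.sqrt(c),  np.sqrt(s)],
              [np.sqrt(c), -np.sqrt(s)]])
# Site tensor T[u, l, d, r] = sum_s W[s,u] W[s,l] W[s,d] W[s,r]
T = np.einsum('si,sj,sk,sl->ijkl', W, W, W, W)

# Contract four tensors on a 2x2 periodic lattice (each bond appears twice)
Z_tn = np.einsum('pjni,riqj,nmpk,qkrm->', T, T, T, T)

# Brute force over the 2^4 spin configurations with the same doubled bonds
Z_bf = 0.0
for sa, sb, sc_, sd in product([1, -1], repeat=4):
    E = 2*sa*sb + 2*sc_*sd + 2*sa*sc_ + 2*sb*sd
    Z_bf += np.exp(beta * E)
```

TRG proper then coarse-grains this network by repeated truncated decompositions of T, which is what makes thermodynamically large lattices, and higher dimensions, accessible.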
We obtain the equation of state (EoS) for two-color QCD at low temperature and high density from lattice Monte Carlo simulations. We find that the velocity of sound exceeds the relativistic limit ($c_s^2/c^2=1/3$) after the BEC-BCS crossover in the superfluid phase. Such an excess of the sound velocity has not previously been observed in any lattice calculation for QCD-like theories. This finding might be relevant to the EoS of neutron star matter revealed by recent measurements of neutron star masses and radii.
We are generating the 2+1 flavor PACS10 configurations, whose physical volumes are more than (10 fm)$^4$ at the physical point, using the Iwasaki gauge action and the $N_f=2+1$ stout-smeared nonperturbatively $O(a)$-improved Wilson quark action at three lattice spacings. We present our results for several physical quantities calculated from the PACS10 configurations, such as the pseudoscalar decay constants and kaon semileptonic form factors.
The stabilised Wilson fermion (SWF) framework combines numerical enhancements and a new discretisation scheme for Wilson-Clover fermions. In this presentation I discuss the components of the framework and give an overview of the status of the application of SWF in two cases: Traditional lattice QCD simulations, i.e. with spatial lengths less than 6 fm, and simulations with large spatial volumes, so-called master-field simulations.
The former is being addressed by the newly formed open lattice initiative (OpenLat), and recent work shows some benefits of the SWF, for example in terms of discretisation effects. The latter also requires a rethinking of measurement strategies alongside the generation of such large lattices. Both are challenging tasks and build upon the concept of stochastic locality. I highlight some thoughts on how this can be exploited and show recent numerical results.
We first review theoretical aspects of the HAL QCD method, by comparing its pros and cons with the finite volume method. We then present the latest investigations in the HAL QCD method. In particular, we report on dibaryons and exotics at the almost physical pion mass.