We report progress for the NIMROD Project for the second quarter of FY2002, April - June, 2002. Progress is reported in the following areas:
Code Development
Coupling to the AZTEC parallel linear solver software
Regularity conditions for 3-D matrices
Energetic particle effects
Chapman-Enskog-like (CEL) heat flux closure
Applications
Neo-classical tearing modes
High-β disruption in DIII-D
Spherical torus startup
Pulsed poloidal current drive (PPCD) in RFPs
1/1 internal kink
Vertical displacement events (VDEs) in ITER
Reconnection studies
Code Management
Code repository updates
Post processing/Data processing
Details are given in the following sections.
1. CODE DEVELOPMENT
1.1 Coupling to the AZTEC parallel linear solver software
To give NIMROD the capability to solve nonsymmetric and non-Hermitian systems of equations, we are upgrading subroutines, written by Steve Plimpton for the bilinear- and linear-element version of NIMROD, that call the AZTEC parallel linear system software from Sandia National Laboratories. This effort will increase the number of numerical methods available for treating two-fluid effects, advection, and other physics, and it will allow us to try preconditioning techniques that are not available in the NIMROD-native software. The intermediate goal of reproducing the functionality of the earlier version was accomplished early in this quarter, after most of the matrix and data structure modifications for the new version of NIMROD were completed during the previous quarter. With block-diagonal preconditioning, bilinear basis functions, and modifications to the NIMROD solver to use the 2-norm, the norms of the residual from the AZTEC and NIMROD-native conjugate gradient routines match at each iteration in several test problems. Performance on the NERSC IBM SP3 is better with AZTEC for this preconditioner than with the NIMROD-native solver. This finding is the opposite of what was obtained for earlier versions of the two codes; the change is likely due to improved optimization in AZTEC and to the performance penalty incurred in the new version of NIMROD for supporting general Lagrange-type basis functions.
Code development to use AZTEC with high-order basis functions is progressing. We have created a list of necessary improvements in the routine that allocates AZTEC data structures and in the routine that maps matrix elements from their NIMROD storage structures to AZTEC storage. This mapping is quite involved for high-order basis functions, since the NIMROD implementation divides the basis-function nodes into four types (grid vertex-centered, horizontal side-centered, vertical side-centered, and cell-interior), and matrix structures have up to 16 sets of matrix elements for the different basis-type combinations, each with different column-offset ranges. All of this must be translated to the unstructured AZTEC format, and block-boundary data must be converted so that none of the nodes are shared between processors, as is done for the NIMROD-native solver. We anticipate completing this development within the final quarter of the fiscal year and look forward to meaningful performance comparisons on research-grade simulations with the new version of NIMROD. [Michael Kissick, Hao Tian, and Carl Sovinec]
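To make the index translation concrete, the following sketch (purely illustrative; the function name, ordering, and degree-p node counts are assumptions, not the actual NIMROD or AZTEC data structures) shows one way the four node types of a single nx-by-ny block of degree-p Lagrange elements could be flattened into the single global index that an unstructured solver format requires:

    # Hypothetical flattening of the four NIMROD node types into one 0-based global
    # index per scalar unknown, as an unstructured format such as AZTEC's needs.
    def global_index(nx, ny, p, node_type, ix, iy, k=0):
        """Map (node type, cell/edge indices ix, iy, local index k) to a flat index."""
        n_vert = (nx + 1) * (ny + 1)           # grid-vertex nodes (shared, counted once)
        n_hside = nx * (ny + 1) * (p - 1)      # horizontal side-centered nodes
        n_vside = (nx + 1) * ny * (p - 1)      # vertical side-centered nodes
        if node_type == 'vertex':
            return iy * (nx + 1) + ix
        if node_type == 'hside':
            return n_vert + (iy * nx + ix) * (p - 1) + k
        if node_type == 'vside':
            return n_vert + n_hside + (iy * (nx + 1) + ix) * (p - 1) + k
        if node_type == 'interior':
            return n_vert + n_hside + n_vside + (iy * nx + ix) * (p - 1)**2 + k
        raise ValueError(node_type)

In this picture, each of the 16 basis-type combinations in a NIMROD matrix structure would contribute entries whose row and column indices come from two such calls, one for the test-function node and one for the trial-function node.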
1.2 Regularity conditions for 3-D matrices
Regularity conditions for the n=1 Fourier component of a vector A at R=0 have been added to the NIMROD 3D conjugate gradient routine. Mathematically, the treatment is identical to that used for 2D n=1 linear systems, but the implementation is necessarily different with the matrix-free algorithm used for 3D systems of equations. Here, the change of variables and the linear combinations of the corresponding matrix rows are carried out by manipulating the direction vector and the matrix/direction-vector product vector, without changing the finite element calculations that lead to the matrix-free product. [Carl Sovinec]
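A minimal sketch of the idea, assuming a generic symmetric operator and an explicit change-of-variables matrix T (in NIMROD the equivalent combinations are applied directly to the vectors rather than through a stored matrix, so this is an illustration only):

    # Illustrative only: conjugate gradients on the transformed system (T^T A T) y = T^T b,
    # where A is available only through a matrix-free product and T encodes the change of
    # variables / row combinations that impose the regularity condition.
    import numpy as np

    def cg_with_transform(apply_A, b, T, tol=1e-10, maxit=200):
        y = np.zeros(T.shape[1])
        r = T.T @ (b - apply_A(T @ y))
        d = r.copy()
        rr = r @ r
        for _ in range(maxit):
            Ad = T.T @ apply_A(T @ d)   # transform the product, not the FE kernel
            alpha = rr / (d @ Ad)
            y += alpha * d
            r -= alpha * Ad
            rr_new = r @ r
            if np.sqrt(rr_new) < tol:
                break
            d = r + (rr_new / rr) * d
            rr = rr_new
        return T @ y                    # solution satisfying the constraint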
1.3 Energetic particles
Work is underway to model energetic particle effects on macroscopic MHD instabilities and their ensuing nonlinear saturation. Progress to date includes an efficient, massively parallel implementation of particle dynamics, including particle search, sorting, and memory management algorithms for an unstructured finite element grid. A significant reduction in memory overhead has been achieved, allowing efficient handling of tens of millions of particles on the T3E. Preliminary timings for the particles show efficient scaling.
Computational Parameters: 32x64 grid, 32 processors, 10 timesteps
10^3 particles/processor     seconds     % of total NIMROD calculation
            10                  3.5                   7.1
            20                  7.4                  13.5
            40                 16.5                  22.9
            80                 40.0                  31.7
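A simple sketch of the sorting step mentioned above (illustrative only; the array names are invented and the NIMROD implementation details differ) is to order particles by their owning element so that field interpolation sweeps contiguous memory:

    # Group particles by the finite element found by the search step, so that
    # gather/scatter of the fields touches each element's particles contiguously.
    import numpy as np

    def sort_by_element(element_of_particle, nelem):
        order = np.argsort(element_of_particle, kind='stable')
        counts = np.bincount(element_of_particle, minlength=nelem)
        starts = np.concatenate(([0], np.cumsum(counts)))
        return order, starts   # particles of element e: order[starts[e]:starts[e+1]]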
The equations for particle motion and for evolution of the perturbed distribution function have been formulated in generalized flux coordinates and implemented in NIMROD. Currently, self-consistent pressure coupling is being tested and debugged using the SciDAC CEMM internal kink benchmark case. Initial runs show prompt loss of energetic particles due to large drift orbits, with approximately one-third of the particles lost in two transit times (see Fig. 1). Fig. 1 shows the number of particles vs. time, with dt = 5x10^-8 s and tau_A = 1x10^-7 s. Test runs show peaking of the hot-particle pressure around the q = 1 surface, as shown in Fig. 2.
FIG. 1. Number of energetic particles as a function of time, illustrating a prompt loss of approximately one third of the particles in two transit times due to large drift orbits.
FIG. 2. Distribution of hot particle pressure near the q = 1 surface.
1.4 Chapman-Enskog-like closures
The CEL heat flow closure has been implemented in the electron temperature equation. Simulations show that it is possible to evolve Te stably using a diffusive, semi-implicit operator in parallel with an explicit treatment of the large CEL heat flow. Tests of the efficiency of the heat flow calculation are underway with the help of two graduate students. Further development plans include a viscous stress tensor calculation to be performed at the same time as the heat flow, which would equip NIMROD with a complete closure scheme for parallel ion and electron dynamics. It is expected that this will be completed in time to show preliminary results at the APS meeting. [E. Held]
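A one-dimensional sketch of the split advance described above, assuming Dirichlet boundary values and an invented semi-implicit diffusivity kappa_si (this is not the NIMROD discretization, only an illustration of the stabilization idea):

    # Advance Te with the large CEL heat flow treated explicitly while a diffusive
    # semi-implicit operator (I - dt*kappa_si*d2/dx2) provides numerical stability.
    import numpy as np

    def advance_Te(Te, q_cel, dt, dx, kappa_si):
        n = Te.size
        rhs = Te.copy()
        # explicit divergence of the CEL heat flow at interior points
        rhs[1:-1] -= dt * (q_cel[2:] - q_cel[:-2]) / (2.0 * dx)
        # implicit diffusion matrix, end values held fixed
        A = np.eye(n)
        c = dt * kappa_si / dx**2
        for i in range(1, n - 1):
            A[i, i - 1] -= c
            A[i, i] += 2.0 * c
            A[i, i + 1] -= c
        return np.linalg.solve(A, rhs)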
2. APPLICATIONS
2.1 Neo-classical tearing modes
Studies of the generation and growth of neo-classical islands in DIII-D shot 86144 at 2250 ms have continued. The discharge has been found to be unstable to both n = 1 and n = 2 resistive MHD modes. The kinetic energy in these modes as a function of time from a resistive MHD calculation is shown in Figure 3a. Note that the n = 2 mode makes a transition from being linearly stable to being nonlinearly driven by mode coupling from the n = 1 mode. The corresponding magnetic islands are shown in Figure 3b. The size of the driven 3/2 island is insufficient to account for the experimental observations, indicating that resistive MHD alone cannot explain the discharge properties.
FIG. 3. a) Kinetic energy versus time during growth and saturation of the n = 1 and n = 2 modes. Note the transition of the n = 2 mode from linearly stable to nonlinearly driven. b) 1/1, 3/2, and 2/1 magnetic islands in the early nonlinear stage. Note the small amplitude of the 3/2 island.
The calculation has been repeated using the poloidal flow damping neo-classical closure at several different values of the Lundquist number S. The results are shown in Figure 4, where we plot the width of the driven 3/2 magnetic island as a function of time. Note that the width of the driven 3/2 island exceeds the visco-resistive layer width only for the largest value of S. The saturated width is still somewhat less than is observed in the experiment.
FIG. 4. Width of the driven 3/2 magnetic island as a function of normalized time for three different values of S using the poloidal flow damping closure.
Future progress on this problem will likely be minimal due to changes in personnel. [T. Gianakon, D. Schnack]
2.2 High-β disruption in DIII-D
Studies of a high-β disruption in DIII-D shot 87009 have continued. A heating term has been added to the energy equation to simulate energy deposition by the neutral beam, as in the experiment. The calculation is initiated with a stable equilibrium that is slowly heated through the ideal MHD marginal point. The growth and saturation of the perturbed magnetic energy of the mode are shown in Figure 5a. Note the super-exponential growth; theory predicts growth as exp(t^3/2). In Figure 5b we plot the initial growth of the log of the perturbation versus (t - t0)^3/2 for two different values of the heating rate. The results agree well with theoretical predictions and with the experiment.
FIG. 5. a) Growth of the log of the perturbed magnetic energy. Note the super-exponential growth. b) The log of the magnetic energy plotted for two different heating rates. The deduced growth is in good agreement with both theory and experiment.
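The super-exponential time dependence can be seen from a simple estimate (a standard argument sketched here, not taken from the report): heating at a constant rate carries the plasma linearly past the ideal marginal point, so the growth rate scales as the square root of the distance above marginality,

    \gamma(t) \simeq \hat{\gamma} \sqrt{(t - t_0)/\tau_h},

and the perturbation amplitude therefore grows as

    A(t) \propto \exp\left[ \int_{t_0}^{t} \gamma \, dt' \right]
         = \exp\left[ \tfrac{2}{3} \, \hat{\gamma} \, \tau_h^{-1/2} (t - t_0)^{3/2} \right],

which is why the log of the perturbation should be linear when plotted against (t - t0)^3/2, as in Fig. 5b. Here t_0, \hat{\gamma}, and \tau_h are illustrative symbols, not values from the report.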
The discharge exhibits a stochastic core during the late nonlinear stages of the evolution. This is consistent with disruptive behavior. Calculations with anisotropic thermal conduction are underway.
2.3 Spherical torus start-up
Magnetohydrodynamic activity arising during Ohmic startup of spherical tori (ST) has a significant impact on shot duration and plasma-β, due to geometric limitations on the Volt-seconds available for the inductive flux swing. Driving past a non-disruptive instability consumes a much larger fraction of the Ohmic current drive than in a conventional tokamak. We are therefore pursuing a numerical study of this activity in the Pegasus device at the University of Wisconsin-Madison. In preparation for three-dimensional studies, most of the simulations run so far have considered symmetric dynamics alone. This preparatory phase has included developing realistic external poloidal fields, including a time-dependent component that models vertical field diffusing through the chamber wall. With temperature-dependent resistivity and anisotropic thermal conduction, the simulations start from an initial configuration with a poloidal field null, like the experiment, to aid and control current formation (see Figure 6). While the temperature-dependent resistivity is important for obtaining realistic current profiles, it also makes the result very sensitive to the thermal diffusion coefficients and particle number density. With information provided by the Pegasus experimental group, the simulations have become sufficiently realistic that three-dimensional studies will be meaningful. [Carl Sovinec]
FIG. 6. Poloidal magnetic flux distribution in a simulation of Ohmic ST startup at a) initial vacuum state, and b) 3 ms into current rise producing 70 kA out of the targeted 150 kA.
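The temperature dependence referred to above is presumably Spitzer-like (the report does not state the exact form used); for example,

    \eta(T_e) = \eta_0 \left( T_e / T_0 \right)^{-3/2},

so the cold plasma outside the forming current channel is far more resistive than the channel itself. Because the temperature profile is set by the balance among Ohmic heating, thermal diffusion, and the particle density, small changes in those inputs shift where current can penetrate, which is the sensitivity noted above.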
2.4 Pulsed poloidal current drive in RFPs
We are using NIMROD to simulate the reversed-field pinch (RFP) configuration with pulsed poloidal current drive (PPCD), which has produced very significant improvements in energy confinement in experiments. Numerical boundary conditions for the magnetic evolution have been modified to model the application of a poloidal electric field, and a series of nonlinear resistive MHD simulations at finite plasma-β with PPCD has been completed. The computed profile evolution compares favorably to measurements of edge current density changes [B. E. Chapman, et al., Phys. Plasmas 7, 3491 (2000)] and to profile evolution according to equilibrium fits [D. L. Brower, et al., Phys. Rev. Lett. 88, 185005 (2002)] for the Madison Symmetric Torus (MST). At this point, the simulations are considered qualitative modeling, due to the low Lundquist number used. However, detailed analysis of the electric-field profile and magnetic fluctuation spectrum evolution is providing new insight into how PPCD modifies the profile, while efforts to perform the simulations at more realistic parameters continue. Traces of the computed energy-confinement time resulting from different levels of applied poloidal electric field are shown in Figure 7. In addition to understanding the profile evolution, the continued simulation effort will study the role of the pressure gradient in driving fluctuations, in particular the isolated m=0 modes, during the improved-confinement conditions resulting from PPCD. [Jim Reynolds and Carl Sovinec]
FIG. 7. Simulated confinement time for PPCD with applied Epol magnitudes varying from 1/8 to 1/3 of the applied toroidal electric field (labels 1-4). A comparable simulation without PPCD is also shown (0).
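For context (our paraphrase, not equations taken from the report): applying a poloidal electric field E_pol at the wall r = a is, through Faraday's law, equivalent to ramping the toroidal flux inside that radius,

    2\pi a \, E_{pol}(a) = -\frac{d\Phi_{tor}}{dt},

so PPCD supplies parallel current drive directly at the edge and reduces the need for fluctuation-induced dynamo activity to sustain the edge current.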
2.5 1/1 internal kink mode
Frank Waelbroeck (UT-Austin) visited SAIC to learn to run NIMROD and to collaborate with Scott Kruger on investigating the effects of geometry on the 1/1 internal kink. This work is motivated by interesting DIII-D results by Ed Lazarus and theoretical work by Chris Gimblett. Currently, a target equilibrium has been created, and runs have begun. [S. Kruger]
2.6 Vertical displacement events (VDEs) in ITER
Roberto Paccagnella (Padua) visited SAIC to investigate using NIMROD to study VDEs. During his visit, it was decided that the current interface to the VACUUM code by Morrell Chance is inadequate because it does not include the "secular terms" in the Green's function. The gridding of the ITER vacuum chamber was also completed during the visit (see Figure 8). The EQDSK files currently in use are inadequate for computation because they do not satisfy the equilibrium condition accurately enough numerically (see the problems near the axis in the figure). Once Chance finishes the necessary development of the vacuum code and reasonable EQDSK files are obtained, NIMROD will immediately be able to begin simulations of ITER VDEs, with the ultimate goal of simulating three-dimensional halo current generation. [S. Kruger]
FIG. 8. Sample equilibrium and grid for the study of VDEs in ITER.
2.7 Reconnection studies
NIMROD is being used to study some interesting results from Zoran Mikic and Roberto Lionello (SAIC). In studies of merging flux tubes in two dimensions, different behavior is observed when the ratio of private flux to public flux is varied, including the possibility of fast reconnection. Because of the initial configuration, the time required to simulate complete reconnection is much longer than in previous studies, which makes the simulations challenging. Preliminary results may be seen at:
http://www.nimrodteam.org/graphics/
These simulations are continuing slowly in the low-priority queue on seaborg at NERSC. [S. Kruger]
3. CODE MANAGEMENT
3.1 Code repository updates
The current stable version of NIMROD is 3.0.2. Preparations are underway to release a stable version 3.1, which includes many changes. The primary changes (and their authors) are:
- Temperature is used as a fundamental variable instead of pressure (CRS)
- Density evolution equation (CRS)
- Resistive wall in slab, cylindrical, and toroidal geometry (TAG)
- Three-dimensional resistivity, especially needed for the "vacuum region" (SEK and CRS)
- Numerous changes to the preprocessor (SEK and TAG)
- Interface to the MDS+ database (SEK)
Work is currently underway to merge the changes from Kruger and Gianakon into the main CVS repository. [S. Kruger]
3.2 Post processing/data processing
Scott Klasky (PPPL) has given UW and SAIC beta versions of his data visualization package built on AVS/Express. Although still in a somewhat rough state, it is extremely well written and addresses almost everything on the NIMROD data visualization wish list. SAIC will provide Klasky with a list of needed improvements to guide further work on the interface.
Allen Sanderson (University of Utah) visited SAIC and GA to install the SCIRun data visualization package. User feedback is also needed to give guidance on which items most need improvement.
The interface to MDS+ is currently not being used because the MDS+ server cannot handle the large datasets that NIMROD produces. Solutions to this problem have been attempted, but so far without success. Because the current interface to the graphics packages is through MDS+, this is creating a bottleneck for data visualization. Dave Schissel (GA) is investigating installing an MDS+ server at NERSC to alleviate some of these issues. An interface from NIMROD data to HDF5 is also planned to speed the adoption of the data visualization packages. [S. Kruger]
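As a sketch of what the planned HDF5 path could look like (hypothetical; the file layout, dataset names, and use of the h5py package are our assumptions, not an existing NIMROD interface):

    # Write one NIMROD-style dump: grid coordinates plus named field arrays.
    import h5py
    import numpy as np

    def write_dump(filename, R, Z, fields, step, time):
        with h5py.File(filename, "w") as f:
            f.attrs["step"] = step
            f.attrs["time"] = time
            f.create_dataset("grid/R", data=R)
            f.create_dataset("grid/Z", data=Z)
            for name, data in fields.items():
                # compress the large field arrays that overwhelmed the MDS+ server
                f.create_dataset("fields/" + name, data=data, compression="gzip")

    # toy example on a small 2-D grid
    R, Z = np.meshgrid(np.linspace(1.0, 2.0, 33), np.linspace(-0.5, 0.5, 33))
    write_dump("nimrod_dump.h5", R, Z, {"Te": np.ones_like(R)}, step=0, time=0.0)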