NIMROD Progress Report
4^{th} Quarter FY 2000
November 3, 2000
1. Overview
We report progress for the NIMROD Project for the period July-September, 2000. Progress during this period includes:
1. Improved preconditioning algorithms for generalized elements have increased efficiency;
2. NIMROD has been run with both static non-uniform density, and with evolution of the axisymmetric component of the density profile;
3. Two papers reporting the application of NIMROD to spheromak dynamics have been accepted for publication in Physical Review Letters and Physics of Plasmas;
4. Benchmark calculations with M3D under the auspices of the Macroscopic Modeling Project are continuing;
5. NIMROD is ready to interface with Morrell Chance's VACUUM code. We await his attention to this project;
6. Conditions for accurate calculations with highly anisotropic heat flux have been quantified;
7. Neo-classical tearing modes have been calculated with the "pressure and flow" closure;
8. Progress has been made in incorporating energetic particles into NIMROD;
9. Nonlinear vacuum calculations have shown numerical instability;
10. A new visualization tool, nimX, has been developed;
11. NIMROD output has been remotely interfaced with the MDS+ database. NIMROD data are thus treated on the same footing as D3D experimental data.
Progress in some areas continues to be slow due to lack of manpower.
2. Status
The present version of the NIMROD code is 2.3.3. It is publicly available from http://nimrod.saic.com/download/home.html. The status of this version of the NIMROD code is as follows:
1. The physics kernel contains the following physical models:
2. The following grids have been implemented:
3. Interfaces with several commonly used equilibrium file formats are available.
The following development projects are underway:
1. Beta-testing of the high-order finite elements;
2. An algorithm for a vacuum region and a moving separatrix is being implemented;
3. The implementation of kinetic and energetic particle effects by particle pushing using the gyro-kinetic model is continuing;
4. A resistive wall boundary condition is being implemented.
The following physics studies are in progress:
1. Linear stability studies for the UCLA Electric Tokamak (ET);
2. Nonlinear growth and saturation of the neo-classical tearing mode in D3D;
3. High-β disruption in D3D;
4. Relaxation in spheromaks;
5. Two-fluid effects in RFPs and FRCs;
6. High-S campaign.
3. Highlights
Matrix preconditioning
The development of generalized Lagrange basis functions in NIMROD has led to new data structures holding coefficients along the sides and within the interiors of finite elements. When a matrix equation is formed in NIMROD, the unknowns in element interiors are eliminated prior to the call to the iterative solver; this elimination (static condensation) has proven critical for performance. However, the side data may not be eliminated without producing a dense matrix, due to the off-diagonal connections to both adjacent cells.
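The interior elimination described above is the standard static-condensation (Schur complement) construction. A minimal NumPy sketch of the idea (function names are illustrative, not NIMROD routines):

```python
import numpy as np

def condense(A, b, interior):
    """Statically condense interior unknowns from A x = b.

    Partition A into boundary (b) and interior (i) blocks and return
    the Schur-complement system acting on boundary unknowns only:
        S = A_bb - A_bi A_ii^{-1} A_ib,   g = b_b - A_bi A_ii^{-1} b_i.
    """
    n = A.shape[0]
    mask = np.zeros(n, dtype=bool)
    mask[interior] = True
    Aii = A[np.ix_(mask, mask)]
    Aib = A[np.ix_(mask, ~mask)]
    Abi = A[np.ix_(~mask, mask)]
    Abb = A[np.ix_(~mask, ~mask)]
    X = np.linalg.solve(Aii, Aib)       # A_ii^{-1} A_ib
    y = np.linalg.solve(Aii, b[mask])   # A_ii^{-1} b_i
    S = Abb - Abi @ X                   # dense only within the element
    g = b[~mask] - Abi @ y
    return S, g, X, y

def recover_interior(X, y, xb):
    """Back-substitute: x_i = A_ii^{-1} (b_i - A_ib x_b)."""
    return y - X @ xb
```

The iterative solver then operates on the smaller system S x_b = g, and the interior values are recovered afterward by back-substitution.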
The first version of the line-Jacobi preconditioner that was extended for the new data structures uses only connections in one logical coordinate direction for each of the two sets of 1D solves. This has proven very successful for bilinear finite elements due to its robustness to large condition numbers and our parallel implementation. However, this approach exhibits a large increase in the number of iterations in going from bilinear to biquadratic finite elements.
To improve performance, we have tried expanding the definition of a "line" to include all data within a line of elements. The off-diagonal connections inverted during preconditioning then include some connections in the direction perpendicular to that of the "line." While this reduces the number of iterations substantially (factors of 2 are common), the added work yields little reduction in the CPU time required for serial computations (0-20%). The gain may be better for parallel computations, where reducing the iteration count reduces the amount of communication. [Carl Sovinec]
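For illustration, a line-Jacobi preconditioner of the original kind, which retains only the couplings along one logical direction in each of two sets of tridiagonal solves, can be sketched for a model 5-point Laplacian (a simplified stand-in for NIMROD's matrices; the names and the averaging of the two sweeps are assumptions):

```python
import numpy as np

def tridiag_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal, d: rhs."""
    n = len(d)
    cp = np.zeros(n); dp = np.zeros(n)
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n); x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def line_jacobi(r, nx, ny):
    """One application of a line-Jacobi preconditioner for the 2D
    5-point Laplacian (diagonal 4, off-diagonals -1): solve tridiagonal
    systems along x-lines, then along y-lines, and average the results."""
    R = r.reshape(ny, nx)
    zx = np.zeros_like(R); zy = np.zeros_like(R)
    for j in range(ny):          # keep couplings along x only
        zx[j] = tridiag_solve(-np.ones(nx), 4*np.ones(nx), -np.ones(nx), R[j])
    for i in range(nx):          # keep couplings along y only
        zy[:, i] = tridiag_solve(-np.ones(ny), 4*np.ones(ny), -np.ones(ny), R[:, i])
    return 0.5 * (zx + zy).ravel()
```

Each 1D solve is O(n), so the preconditioner is cheap; the perpendicular couplings it neglects are what drive up iteration counts as element order increases.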
Nonuniform density
The two-fluid benchmark for the Computational Initiative prompted a study of how NIMROD performs with a nonuniform density profile. It was found that a fast-growing but smooth artifact arises in the solutions unless the time steps are small enough for an explicit computation. The semi-implicit advance for velocity has been in the form
where F represents the linear force operator. In this form, the integration by parts required for the finite element method leads to a term proportional to the density gradient. This term has not been included, since it destroys the symmetry of the resulting matrix. Its absence effectively puts an artificial source term in the equation, resulting in the artifact mentioned above. Rewriting the numerical formulation to have kinetic and potential energies separated in different terms
eliminates the artifact and the need for a density gradient term, in a natural way. The successful computations of the benchmark problem with density gradient described in this report use this form for the velocity advance. However, this form also suggests that matrix coupling between different Fourier components, resulting from toroidal variations in density, may be unavoidable when a density evolution equation is incorporated. [Carl Sovinec]
Spheromak simulations
Two manuscripts reporting NIMROD spheromak simulation results have been accepted for publication in refereed journals. They are:
The two papers will represent the first published physics results from the NIMROD code. [Carl Sovinec]
DIII-D-relevant tearing mode benchmark
One of the two tearing mode benchmarks for the Computational Initiative uses an analytic equilibrium (provided by Ming Chu) with a profile and wall that are shaped to represent typical equilibria in the DIII-D tokamak. We have used this benchmark as an opportunity to test bicubic and biquartic basis functions in realistic conditions. Linear growth rates and eigenfunctions have been computed at S values of 10^{4} and 10^{6} over a range of magnetic Prandtl numbers. Figure 1 shows results on growth rate and error for different meshes and basis functions. Similar problems run with bilinear elements have required hundreds of elements in each direction, so the 2-3% deviation from the most resolved case gives a good indication of how higher-order elements improve accuracy. Figure 2 plots the radial component of the velocity eigenfunction at S=10^{6} against the root of the poloidal flux function, and Fig. 3 shows the same data plotted vs. the radial cell index to show how grid packing has been used to resolve the singular surfaces.
Fig. 1. Linear results on growth rate and divergence error for the DIII-D-relevant tearing mode at S=10^{6} with different (radial x poloidal) meshes and different finite element basis functions.
Fig. 2. Flux surface normal component of the velocity eigenfunction at S=10^{6} plotted vs. the square root of the poloidal flux function.
Fig. 3. Flux surface normal component of the velocity eigenfunction (same as Fig. 2) at S=10^{6} plotted vs. the radial mesh index (normalized to unity at the separatrix).
A nonlinear simulation at S=10^{4} has also been run from this equilibrium using bicubic elements. The (2,1) mode saturates, inducing visible island chains at both the q=2 and q=3 rational surfaces, as shown in Fig. 4. Less packing of the grid was used in this simulation than in the linear simulations to anticipate spreading of the singular behavior in the nonlinear stage. [Carl Sovinec]
Fig. 4. Poincaré surface of section from the nonlinear S=10^{4} simulation showing saturated magnetic islands at the q=2 and q=3 surfaces.
Two-fluid Benchmark
Fig. 5. Comparison of MHD growth rates from M3D and NIMROD versus the inverse of the magnetic Prandtl number for the linear geometry benchmark.
Work over the last quarter has focused on the linear geometry simulation for the Computational Initiative benchmark. A comparison of the growth rates from NIMROD and M3D is shown in Fig. 5, where only MHD terms have been used in Ohm's law. There is generally good agreement between the two codes. The Prandtl number was not available for the M3D case. Since the August NIMROD meeting, density gradients have been added to NIMROD, and this increased the growth rates slightly. Work is now proceeding to compare with the two-fluid terms included. This will require adding some of the gyroviscous stress tensor to NIMROD. [Rick Nebel]
Resistive Wall
The algorithm has been updated to work with the generalized finite element basis functions. It has also been modified to use the fact that the normal component of the normal gradient can be expressed as a surface gradient, since the divergence of the magnetic field vanishes. A spline of the magnetic field in the resistive wall is now made to evaluate the surface derivatives of the tangential components. All the necessary machinery is ready to interface with Morrell Chance's VACUUM code. Progress is slowed pending his attention to this problem. [Tom Gianakon]
Anisotropic Thermal Diffusion
An important lesson on the magnitude of semi-implicit operators was learned over the last three months. Accurate cross-field diffusion requires that the cross-field diffusivity be larger than the coefficient of the isotropic semi-implicit operator. Failure to satisfy this criterion effectively slows, and can stop, cross-field diffusion in a static magnetic field. This is illustrated in Fig. 6, where the anisotropic pressure equation is evolved with the same static magnetic island perturbation. The initial condition on the pressure is below its equilibrium value at time t = 0, and the simulations are all run for the same period of time. When the criterion is violated in this test, the peak pressure effectively does not change from its initial condition. In simulations involving a time-evolving magnetic field, violating the criterion may also have the peculiar effect of producing unphysical cross-field transport.
Fig. 6. Results from numerical tests of semi-implicit anisotropic diffusion; the largest reported value is the correct result. Mesh size and polynomial degree are indicated in the legend.
The above criterion is an accuracy statement and does not guarantee that the pressure advance is numerically stable. A test case with a time-evolving magnetic field has been run successfully, but the pressure equation can nevertheless become numerically unstable. Subcycling the pressure equation may help to achieve accuracy and stability simultaneously; it has been implemented in an ad hoc manner but remains to be debugged. [Tom Gianakon]
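The subcycling idea mentioned above can be illustrated generically with a simple 1D explicit diffusion model rather than NIMROD's semi-implicit pressure advance (the model and all names here are assumptions for illustration):

```python
import numpy as np

def diffuse_subcycled(T, chi, dx, dt_mhd):
    """Advance dT/dt = chi * d2T/dx2 over one (large) MHD time step by
    subcycling: take N explicit substeps, each within the explicit
    stability limit dt <= dx**2 / (2*chi).  Fixed (Dirichlet) boundaries."""
    dt_max = 0.5 * dx**2 / chi
    nsub = max(1, int(np.ceil(dt_mhd / dt_max)))  # substeps per MHD step
    dt = dt_mhd / nsub
    T = T.copy()
    for _ in range(nsub):
        T[1:-1] += dt * chi * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    return T
```

The outer (MHD) step is unchanged; only the stiff diffusion term is advanced with many small, individually stable substeps.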
Neoclassical Tearing Modes (NTM)
Successful simulation of anisotropic pressure diffusion has led to progress toward simulation of NTMs. Six approximations have been implemented for the bootstrap current, but only two produce a perturbed bootstrap current without numerical problems. They are the perturbed-pressure variants of the poloidal flow-damping model, with either a self-consistent or an analytic damping coefficient. Figures 7 and 8 show the perturbed bootstrap current for two values of the electron poloidal flow-damping rate. The larger rate is unphysically large and produces a bootstrap current that is larger than the equilibrium current, but it does produce an island (indicated in Fig. 9). This particular case has an unrealistically large resistivity value, which forces use of a large flow-damping rate to ensure that the electron stress tensor can compete with the resistive term in Ohm's law. [Tom Gianakon]
Fig. 7. Toroidal current density profiles from the simulation with neoclassical effects and the larger flow-damping rate.
Fig. 8. Toroidal current density profiles from the simulation with neoclassical effects and the smaller flow-damping rate.
Fig. 9. Comparison of mode evolution for the two different flow damping rates.
Energetic Particles
The effects of energetic particles are important for modeling realistic fusion plasmas. Nonlinear saturation mechanisms via wave-particle interactions strongly affect MHD modes, e.g. Fishbone modes and TAE modes. We are in the process of incorporating energetic particle effects into NIMROD, a nonlinear MHD simulation code.
The use of finite elements in NIMROD allows for general geometries, including divertor regions and complex vacuum chambers. However, it presents a challenge for the implementation of conventional Particle-In-Cell (PIC) methods: the grid is highly nonuniform and/or unstructured, which makes the particle gather and scatter nontrivial, and each particle must be tracked as it moves from one finite element to the next.
The kinetic particles are coupled through the pressure tensor in the MHD momentum equation. It is assumed that the density of hot particles is small compared to that of the bulk plasma, but that the hot-particle pressure is comparable to the bulk pressure. We are modeling the hot-particle contribution using a δf formulation.
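As a hedged illustration of a δf pressure moment, the hot-particle pressure can be accumulated from marker weights with linear shape functions on a 1D grid (schematic only; NIMROD's finite element deposition is more involved, and the normalization here is arbitrary):

```python
import numpy as np

def deposit_pressure(xp, vp, w, xg):
    """Deposit a delta-f hot-particle pressure moment onto a 1D grid.

    Each marker carries position xp, velocity vp, and delta-f weight w;
    p(x) ~ sum_i w_i m v_i^2 S(x - x_i), with linear (cloud-in-cell)
    shape functions S and schematic units (m = 1)."""
    dx = xg[1] - xg[0]
    p = np.zeros_like(xg)
    for x, v, wt in zip(xp, vp, w):
        s = (x - xg[0]) / dx          # fractional grid position
        j = int(np.floor(s))
        f = s - j
        if 0 <= j < len(xg) - 1:
            p[j]     += (1 - f) * wt * v**2   # share to left node
            p[j + 1] += f * wt * v**2         # share to right node
    return p / dx                             # per unit length
```

The same gather/scatter machinery, with finite element shape functions in place of the linear weights, supplies the hot-particle pressure tensor to the momentum equation.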
Fig. 10. A typical NIMROD grid for computation.
Fig. 11. Orbits of energetic particles traced in a static magnetic field on the NIMROD grid shown in Figure 10.
Fig. 12. Parallel and perpendicular particle energies demonstrating energy conservation.
Fig. 13. Radial particle drift in an applied electric field.
Each finite element is mapped to a logical coordinate space, (R,Z) → (p,q), where (R,Z) are the real-space NIMROD coordinates and (p,q) are logical coordinates that each run over [-1,1]. The vertices of the finite element are accordingly mapped to the logical coordinates {(-1,-1),(1,-1),(1,1),(-1,1)}, and internal points must be mapped in a consistent fashion. Unfortunately, the mapping requires an iterative solve, which costs extra computation. Currently a Newton method is employed to solve for the logical coordinate of each particle residing within a finite element. In the process of calculating the logical coordinate, the Jacobian of the transformation must be calculated; this is also useful in the evaluation of the gradient terms. We may also use the result of the mapping to logical space in the search algorithm.
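A minimal sketch of the Newton inversion for a bilinear element, assuming the standard vertex ordering given above (routine names are illustrative, not NIMROD's):

```python
import numpy as np

def to_logical(R, Z, Rv, Zv, tol=1e-12, maxit=20):
    """Invert the bilinear map from logical (p,q) in [-1,1]^2 to (R,Z)
    by Newton iteration.  Rv, Zv hold the four vertex coordinates in the
    order (-1,-1), (1,-1), (1,1), (-1,1).  Returns (p, q, converged)."""
    p = q = 0.0                       # start from the element center
    for _ in range(maxit):
        # bilinear shape functions and their derivatives at (p,q)
        N = 0.25 * np.array([(1-p)*(1-q), (1+p)*(1-q),
                             (1+p)*(1+q), (1-p)*(1+q)])
        dNdp = 0.25 * np.array([-(1-q), (1-q), (1+q), -(1+q)])
        dNdq = 0.25 * np.array([-(1-p), -(1+p), (1+p), (1-p)])
        rR = N @ Rv - R               # residual in real space
        rZ = N @ Zv - Z
        # Jacobian of the (R,Z) map with respect to (p,q)
        J = np.array([[dNdp @ Rv, dNdq @ Rv],
                      [dNdp @ Zv, dNdq @ Zv]])
        dp, dq = np.linalg.solve(J, [-rR, -rZ])
        p += dp; q += dq
        if abs(dp) + abs(dq) < tol:
            return p, q, True
    return p, q, False
```

The Jacobian J assembled here is the same transformation Jacobian needed for the gradient terms, so the cost of the inversion is partly amortized.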
As mentioned, the search algorithm for particles in a nonuniform, unstructured grid is more complicated than in a conventional PIC simulation, where a simple division and cast to INTEGER (or some similar means) is all that is required to identify the grid cell containing a particle. In the NIMROD grid, the result of the iterative solve for the logical coordinate is used to test whether a particle is in a given cell: the particle is in the finite element if both (p,q) lie in [-1,1]. Otherwise, the particle is not in that finite element and a neighboring finite element must be searched, which is itself an iterative process. For the present implementation, the search and calculation of the correct logical coordinate take a few microseconds.
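The neighbor-to-neighbor search can be sketched as a walk that uses the sign of the out-of-range logical coordinate to pick the next element to test (the data structures here are hypothetical simplifications, not NIMROD's):

```python
def locate(R, Z, elements, start, invert):
    """Walk from a starting element toward the element containing (R,Z).

    elements[k] holds (Rv, Zv, nbrs), where nbrs maps 'left'/'right'/
    'down'/'up' to neighbor indices (None at the domain boundary);
    invert(R, Z, Rv, Zv) returns (p, q, converged) for that element.
    Returns (element index, p, q), or None if the particle left the grid."""
    k = start
    for _ in range(len(elements)):            # bounded walk
        Rv, Zv, nbrs = elements[k]
        p, q, ok = invert(R, Z, Rv, Zv)
        if ok and -1 <= p <= 1 and -1 <= q <= 1:
            return k, p, q                    # found the host element
        # the out-of-range coordinate picks the direction to step
        if p > 1:
            nxt = nbrs['right']
        elif p < -1:
            nxt = nbrs['left']
        elif q > 1:
            nxt = nbrs['up']
        else:
            nxt = nbrs['down']
        if nxt is None:
            return None                       # particle left the domain
        k = nxt
    return None
```

Because particles move only a fraction of a cell per step, the walk almost always terminates after testing one or two neighbors.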
To test the search algorithm and performance, a reduced equation of motion is used. The electric field is set to zero in the drift kinetic equations and an axisymmetric magnetic field is assumed. We test to see that the particles trace out the magnetic field lines and that they execute the proper bounce motion. We follow the parallel and perpendicular energies of the particles to assure energy conservation. The results, along with the NIMROD grid employed for the tests, are shown in Figures 10-12. For a reasonable number of iterations, the energy conservation is seen to be better than a few parts in 10,000.
The second test is to include an electric field and compare the observed drift velocity of the banana orbits to the expected values. The results of this test are shown in Figure 13.
We are now in the process of depositing particles onto the grid to calculate the pressure contribution from the particles. In addition, faster means of calculating the fields on the finite element grid are being looked into. [Scott Parker, Charlson Kim]
Continuity Equation
The density equation has been added to the set of equations of the physics kernel. So far only the n = 0 component of the density is taken into account in the nonlinear coupling of the integrand routines, although the density is defined and advanced as a 3D field.
The upgraded code is being applied to simulations of plasma flow in a magnetic nozzle and in Field Reversed Configurations for plasma propulsion studies (Variable Specific Impulse Magnetoplasma Rocket, VASIMR [please see: http://spaceflight.nasa.gov/mars/technology/propulsion/aspl, and A. Tarditi, Bull. APS, v. 45 (7), p. 130 (2000)]), as well as to the UCLA Electric Tokamak. Near-term developments are the introduction of "open" boundary conditions and the full 3D treatment of the density equation.
First tests have been conducted both with and without a magnetic field by considering an initial Gaussian-shaped density profile in cylindrical geometry (in the r-z plane). A simple case with a profile drifting along the cylindrical z-axis in a nozzle-like magnetic field was also tested in preparation for more realistic runs with accelerated plasma flows (see Figure 14).
Other minor upgrades were introduced to facilitate these tests.
Fig. 14. A plasma density profile evolving from an initial half-Gaussian distribution in R, with a drift velocity and a uniform magnetic field along Z. Plot produced with the Transform graphics package.
Vacuum Implementation
Extension of the cylindrical benchmark into the nonlinear regime has begun. The sharp boundary of the concentration variable (see previous quarterly reports for discussion of the algorithm) causes numerical problems as soon as the run is made nonlinear. The beginning of this numerical instability is seen in Figure 15, which is a plot of the radial velocity profile after 15 time steps. This numerical instability is well known in the computational fluid dynamics community working on two-phase flow problems (see, for example, W.J. Rider and D.B. Kothe, JCP 141 (1998) 112 and M. Sussman and E.G. Puckett, JCP 145 (2000) 301). The computational techniques employed by the CFD community are being investigated for applicability to the MHD vacuum problem, and tests are ongoing for our nonlinear benchmark.
A linear benchmark in toroidal geometry has been identified in collaboration with Ming Chu at General Atomics. The case, based on DIII-D shot number 97441, is classically tearing-mode unstable, but the stability of the tearing mode depends critically upon the existence of the vacuum region. The initial benchmark uses a conformal wall instead of the realistic wall boundary to facilitate comparison with the PEST-III calculations performed by Ming Chu. A summary of the chosen equilibrium is shown in Figure 16. [Scott Kruger]
Fig. 15. The radial velocity profile after only 15 time steps already shows the beginning of the numerical instability which eventually terminates the simulation when running nonlinearly.
Fig. 16. Summary of the equilibrium chosen for the toroidal vacuum benchmark. The case is DIII-D Shot #97441 at 1650 ms, which is delta-prime unstable in the presence of a vacuum region (as determined by PEST-III calculations) but is stable with a conducting wall at the surface.
Visualization
We have developed nimX, a code for visually exploring NIMROD data sets. With nimX, researchers can easily generate three-dimensional field lines, isosurfaces, sections, and volumes. nimX is written as an Open-DX network. Open-DX is a free visualization program created by IBM and maintained by the open-source community. DX not only provides the algorithms for generating the field lines, volume renderings, and the like; it also has built-in interaction capabilities for rotating and translating the data, and it allows all of the relevant parameters for the code to be managed from a graphical control panel.
nimX is freely available to the NIMROD community. It can be downloaded at http://samjones.colorado.edu/research/nimX. Examples of nimX output are shown in Figures 17-21.
Fig. 17. nimX interactive interface and control panel.
Fig. 18. Sample output from nimX showing field lines and toroidal field intensity.
Fig. 19. Magnetic field lines in spheromak gun geometry.
Fig. 20. A single field line and an isosurface of current density for a spheromak simulation.
Fig. 21. Field line shown in Figure 20 with the isosurface removed.
We have also developed a low-cost (under $15k) graphics engine using a high-end Linux machine running Open-DX: an 800 MHz Pentium III with 512 MB RAM, a 32 MB Matrox G400 dual-head graphics card, two Viewsonic XGA video projectors, a lenticular screen, and polarized film and glasses. This gives our NIMROD customers 3D stereographic visualization capability at a modest cost. A typical satisfied user is shown at the left. [Scott Parker and Charlson Kim]
Post-processing/Database/MDS+
In collaboration with Jeff Schacter at General Atomics, the NIMROD post-processor has been modified to write directly to an MDS+ database remotely. This makes it easier to compare computational results with experiment, capitalizes on the extensive visualization tools developed for the experimentalists, and facilitates data sharing among NIMROD team members through centralized data storage. As an example of what has been accomplished, Fig. 22 shows a screenshot of ReviewPlus being run at SAIC, accessing experimental data from DIII-D and a NIMROD simulation of the same shot. The data were stored on the DIII-D atlas shot server by running the NIMROD post-processor, nimplot, at SAIC. Further work includes porting the MDS+ modules to the new NIMROD 3.0 architecture, evaluating the visualization tools provided by the DIII-D group, and investigating the storage requirements for NIMROD users.
Fig. 22. Review+ screen showing D3D and NIMROD data.
A collaboration with Scott Klasky of PPPL on advanced data visualization has begun. Modifications to nimplot have been made to facilitate generation of the data formats required by advanced visualization tools. The goal is to provide powerful data visualization tools to the average NIMROD user. These modifications also need to be ported to the new NIMROD 3.0 version and debugged for incorporation into the code repository. [Scott Kruger]