LOS ALAMOS

		       September 23-25, 1997

Tuesday 9/23/97

Carl Sovinec presiding

Attendees: Carl, Rick, Ming, Alexander, Alfonso, Alan G., Rostom, Scott

I. Status of Physics Kernel

* Code Issues

Standards: Should we change to F90 free format?  Alexander has a Perl
code that could do the conversion.  One proposal: existing modules stay
in fixed format, new modules can be written in free format.  Votes:
allow new modules in free format...YES.  Reformat old modules...NO.
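For flavor, here is a minimal sketch of what such a converter does, written in Python rather than Perl (Alexander's actual tool is surely more complete); it handles only the two most basic fixed-format features, comment markers in column 1 and continuation marks in column 6:

```python
def fixed_to_free(lines):
    """Very rough fixed-to-free-format conversion (illustrative only).

    Two rules: 'c'/'C'/'*' in column 1 becomes a '!' comment, and a
    continuation character in column 6 becomes a trailing '&' on the
    previous line.  Real fixed-format code has many more wrinkles.
    """
    out = []
    for line in lines:
        if line[:1] in ('c', 'C', '*'):
            out.append('!' + line[1:])
        elif len(line) > 5 and line[5] not in (' ', '0') and line[:5].strip() == '':
            if out:
                out[-1] = out[-1].rstrip() + ' &'   # mark previous line continued
            out.append('      ' + line[6:].lstrip())
        else:
            out.append(line)
    return out
```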

NIMROD 2.0.5 has some cosmetic changes.  Current
density is now computed locally (on the fly) at the Gaussian quadrature
points from derivatives of the magnetic field.  A new routine to do
this has been added.  This also gives more consistency to the
dispersion relation.  (The mass matrix machinery for J has been
retained with an eye toward computing the v-dot-grad-J terms in the
two-fluid Ohm's law.)
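Computing J on the fly from derivatives of B amounts to evaluating J = curl(B)/mu0.  A minimal Python sketch on a uniform grid, with central differences standing in for the finite-element derivatives NIMROD evaluates at the quadrature points (the grid and spacings are illustrative, not NIMROD's):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space

def current_density(B, dx, dy, dz):
    """Compute J = curl(B)/mu0 from B sampled on a uniform grid.

    B has shape (3, nx, ny, nz).  np.gradient gives second-order
    central differences in the interior.
    """
    Bx, By, Bz = B
    dBz_dy = np.gradient(Bz, dy, axis=1)
    dBy_dz = np.gradient(By, dz, axis=2)
    dBx_dz = np.gradient(Bx, dz, axis=2)
    dBz_dx = np.gradient(Bz, dx, axis=0)
    dBy_dx = np.gradient(By, dx, axis=0)
    dBx_dy = np.gradient(Bx, dy, axis=1)
    return np.stack([dBz_dy - dBy_dz,
                     dBx_dz - dBz_dx,
                     dBy_dx - dBx_dy]) / MU0
```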

*****(Dalton to send copy of review presentation to each

Rim region: Tom G. has written a code for NIMSET using
a triangulation package from the web to triangulate the vacuum region. 
In principle the code could be run using these as tblocks.  This
should be incorporated into V2.0.5, and the updated version called V2.1.
Alan G. suggests we have rblocks in the vacuum to the maximum extent
possible.  This would have a major impact on efficiency and accuracy. 
("Phooey to you!" - Alan G.)

Parallel decomposition of tblocks: ???

Do we want to find a commercial triangulation code (like Tom has done)
and make the users use that?  The first step is for Carl to implement
Tom's changes and see how easy or hard the free gridding package is to
use.  Then we can make a decision on whether to seek a commercial
package.

FLUXGRID needs to be able to build a grid on open field lines, or one
not based on flux surfaces at all, e.g., general geometry.

Gridding of a divertor separatrix is still a research project.

* Validation Cases:

Draft plan to be reviewed and revised.

***Carl went over the Validation Plan and made notes on the vugraphs. 
He'll give them to Dalton.***

We're doing pretty well on the validation, but documentation of what's
been done is a problem.  

All tests should have convergence in time and space: plots of gamma
vs. dt and dx.  Ideal modes are done without dissipation.  With
dissipation, show that the ideal result is recovered in the limit of
dissipation going to zero.  For resistive modes, make sure S is large
enough to compare with theory.
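One way to do the dt-convergence bookkeeping: assume gamma(dt) = gamma0 + c*dt**p for the scheme's expected order p, fit the measured growth rates, and report the extrapolated gamma0.  A Python sketch (the fit form and the order are assumptions here, not statements about NIMROD's time advance):

```python
import numpy as np

def extrapolate_gamma(dts, gammas, order=2):
    """Extrapolate measured growth rates to dt -> 0.

    Least-squares fit of gamma(dt) = gamma0 + c*dt**order; 'order' is
    the assumed temporal convergence rate.  Returns gamma0.
    """
    dts = np.asarray(dts, dtype=float)
    A = np.vstack([np.ones_like(dts), dts**order]).T
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(gammas, dtype=float), rcond=None)
    return coeffs[0]
```

The same idea applies to gamma vs. dx for spatial convergence, and to the dissipation-going-to-zero limit for the ideal-mode checks.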

Building a validation library (Alfonso).  Brief
paragraph describing the problem (README) + all necessary input files +
output from the case.  Output parameters vs. input parameters in a
table of numbers. has been volunteered as a
repository.  This will be available on the web.  Alfonso will put out a
sample of how he envisions the library to be structured, including
naming conventions.

The private web site will be moved from NERSC to 
Scott K. will be the webmaster.

* Div-B


Diagnostic is now keff**2 = Int(dV*abs(divB)**2)/Int(dV*abs(B)**2).  If
keff**2 << a**(-2) (i.e., keff*a << 1), divB is not a problem.  Since
the bug fixes, keff looks good for all the test runs.
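The diagnostic itself is easy to reproduce outside the code.  A minimal Python version on a uniform grid (the grid and field layout are illustrative stand-ins for NIMROD's finite-element representation):

```python
import numpy as np

def keff_squared(B, dx, dy, dz):
    """div-B diagnostic: keff**2 = Int(dV*|div B|**2) / Int(dV*|B|**2).

    B has shape (3, nx, ny, nz) on a uniform grid; with uniform cells
    the volume element cancels in the ratio.
    """
    Bx, By, Bz = B
    divB = (np.gradient(Bx, dx, axis=0)
            + np.gradient(By, dy, axis=1)
            + np.gradient(Bz, dz, axis=2))
    return np.sum(np.abs(divB)**2) / np.sum(np.abs(B)**2)
```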

2. Preprocessing

* Reading eigenfunctions

Alan and Ming have worked together to automate the reading of
eigenfunctions and equilibria from GATO.  These were demonstrated by
Alan G.  The grid is packed near the rational surfaces.  This was done
by GATO.  

***It looks like there's a problem with the GATO eigenfunction near the
axis.  It has f = 0, instead of f' = 0.***

The test case was run for about 100 growth times.

The problem may be that we are getting xi-dot-grad-psi/abs(grad-psi),
but GATO computes x = xi-dot-grad-psi.  The former is ill-behaved near
the axis.  Alan is now asking to get the raw GATO variables.  Ming will
do this.

3. Graphics

* Popova progress

Contour plots using multiple rblocks, multiple quantities, multiple
timesteps, and curvilinear coordinates have been implemented and were
demonstrated.

Her next task is contours on tblocks.  After that, menus!

* Recent advances (Alan G. and Dan Laney)

POINCARE has been written to plot surfaces of section with both DX and
Xdraw.  The surface is generated by computing a mapping by integrating
many points around once in phi, and then using the resulting mapping to
produce the surface of section by iteration.
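The iterate-the-one-turn-map idea can be sketched with a toy field model (the q profile and perturbation below are invented for illustration; POINCARE integrates the actual NIMROD fields):

```python
import numpy as np

def one_turn_map(r, theta, q=lambda r: 1.5 + r**2, eps=1e-3, nsteps=200):
    """Advance a field-line point once around in phi.

    Toy field: dtheta/dphi = 1/q(r), dr/dphi = eps*sin(theta) as a
    small perturbation.  Midpoint (RK2) integration over one transit.
    """
    dphi = 2.0 * np.pi / nsteps
    for _ in range(nsteps):
        rm = r + 0.5 * dphi * eps * np.sin(theta)   # midpoint predictor
        tm = theta + 0.5 * dphi / q(r)
        r += dphi * eps * np.sin(tm)                # full step with midpoint values
        theta += dphi / q(rm)
    return r, theta % (2.0 * np.pi)

def poincare(r0, theta0, nturns=500):
    """Iterate the one-turn map to build up a surface of section."""
    pts, r, th = [], r0, theta0
    for _ in range(nturns):
        r, th = one_turn_map(r, th)
        pts.append((r, th))
    return np.array(pts)
```

Plotting the second column against the first gives the puncture plot; DX or Xdraw play that role for the real code.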

ISO-q surfaces have been generated and displayed with DX.

Dan Laney will continue to work with us on this problem.

4. GUI

* Upgrade to Tcl 8.0 (we're running v7.6).

Faster version.  Tom recommends upgrading.  The consensus is yes, do
the upgrade.  Tom can do it, but the upgrade will be done at his
discretion (Scott K. will help).  We all understand they have other
things to do.

5. Documentation Status

* Documentation should be on the web.  The documentation should consist of:

User's manual:

	Installation and compile

	Code structure (block diagram)

	How to run

	Input description

	Simple test cases

	Tips on setting up a case

	Graphics - Xdraw and DX

Also needed:

	Physics Description

	Algorithm description

The documentation web page will be maintained on

Scott K. has made a good start.  SAIC will take responsibility for
maintaining this.  Will coordinate with Scott K.

6. CVS (Scott K.)

Manage code changes during parallel development.

Scott K. gave us a presentation on how to use CVS.

Carl should be the librarian.

Carl will study and give us a full report.

7. The view from Germantown (Rostom)

FY98 Budgets:

	Pres: $225M

	Senate: $240M

	House: $225M including $7.5M for NERSC (a new addition, so a real 


Conference committee meeting "as we speak".  Our budgets will depend on
whether we get the $7.5M back or not.  However, FY99 may be bad.

The review was excellent and is standing NIMROD in good stead in the
theory community.

Wednesday, 9/24/97

Same crew in attendance

8. Milestones

We went over the APS and Sherwood milestones that are on the Web.

Tom G. will be in Wisconsin for 3 weeks near the end of October.  Curt
should look into getting him funding to travel to the APS.

Near term code development requirements (APS):

	- v-dot-grad-V for nonlinear problems

Mid-term code development requirements (Sherwood):

	- gridding of vacuum region

	- resolution of separatrix

	- advection of resistivity in three dimensions

****Rostom asks "Where will you be in Spring (April) 1999?"  We will
write a "vision statement" for DOE by APS meeting.****

9. Linear Solver

Alfonso described his investigations of AZTEC and ISIS.  Both packages
require pre-storing the matrix.  First cut is to run a nonperiodic,
1-block problem (to run on a workstation).  ISIS has been installed on
the T3E.  Alfonso is having problems with AZTEC.  The plan is to make a
version of NIMROD that calls AZTEC and ISIS, and compare it directly
with the present version of NIMROD.

Carl has been working on optimizing the solver in NIMROD.  The F77 and
F90 versions have been successfully benchmarked.  There is at least no
penalty in speed for using F90.  Carl has looked into ILU
preconditioning.  This has been put into NIMROD as an input option. 
Loops must be optimized.  F90 array syntax may not be optimal. 
Optimization is not likely to be problem dependent, but may be machine-
architecture dependent.  Optimization has been done for workstations,
not vector machines, because MP processors are likely to be like
workstation processors.  On the workstation (Octane) with a single
block, ILU wins (by about a factor of 3) over diagonal preconditioning.
On the C90 the vectorization of the diagonal preconditioner makes
diagonal win out by a lot.  For multiple-block problems, direct starts
to win out at about 7 blocks (on the workstation for a fixed problem
size).

The message: there's no one right thing to do in every
situation.  C90: keep the number of blocks down and use diagonal.  On
the workstation, use either ILU1 or ILU0, with ILU0 requiring less
memory.
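The ILU-versus-diagonal tradeoff is easy to reproduce in miniature.  A Python/SciPy sketch using a 1-D Laplacian as a stand-in matrix (the matrix, size, and tolerances are assumptions, not NIMROD data):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cg_iterations(A, b, M=None):
    """Run conjugate gradients and count iterations via the callback."""
    count = [0]
    def cb(xk):
        count[0] += 1
    x, info = spla.cg(A, b, M=M, callback=cb)
    assert info == 0, "CG failed to converge"
    return count[0], x

# A 1-D Laplacian stands in for a NIMROD matrix (purely illustrative).
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

# Diagonal (Jacobi) preconditioner: the inverse of diag(A).
M_diag = sp.diags(1.0 / A.diagonal())

# Incomplete-LU preconditioner wrapped as a LinearOperator.
ilu = spla.spilu(A.tocsc())
M_ilu = spla.LinearOperator(A.shape, ilu.solve)
```

On this toy problem the ILU-preconditioned solve converges in far fewer iterations than the diagonal one; the actual speedup factor is problem and machine dependent, as the timings above show.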

****Alan wants reports detailing the problem size when runs on the LANL
Octane run out of memory.****

Steve Ashby has identified a potential person to work on NIMROD.  Curt
says that $47K is available.  Dalton will talk to Steve and Curt to
make sure this goes forward.

10. Parallel Computing Issues

Steve and Carl will work together to add parallel capability to ILU
preconditioning.

Redo Carl's performance tests on the T3E.

File format problems for IEEE64 standard still exist on the T3E.  Alan
will continue to follow this problem.  We may use ASCII files instead
of binary.

We need to do scaling to 512 processors on the T3E (as per request from
Jim McGraw).

Steve will follow up on his MICS funding and let us know if any action
is needed.

Working Sessions

Wednesday afternoon, 9/24, and Thursday morning, 9/25, were devoted to
informal technical discussions.  The following occurred:

1. Review of running NIMROD

Carl guided us (with Ming at the controls) through installing NIMROD,
setting up a case, and running it.  All was done through the GUI.  The
case was the finite-beta internal kink mode for DIII-D.  We modified
the case to use a random kick as an initial condition instead of an
eigenfunction.  The case executed overnight but had problems because of
either lack of dissipation (there is no viscosity or resistivity) or
not enough div-B dissipation.

2. Formulation of vacuum region for external

A lively and fruitful discussion was held about how to represent the
"vacuum" region (outside the separatrix).  It was decided that the
initial plan will be: Solve the same fluid equations everywhere.  Track
the interface by advecting a marker variable (< 0 => vacuum; > 0 =>
plasma).  This tells which cells contain plasma and which contain
vacuum.  Add a drag term to the equation of motion.  The drag
coefficient will be zero in the plasma and large in the vacuum.  The
viscosity will be moderate in the plasma and very small (or zero) in
the vacuum.  (This will minimize momentum coupling between the "vacuum
fluid" and the core plasma.)  The resistivity will be small
(moderate??) in the plasma and very large in the vacuum.  It may or may
not be required to make the vacuum a low-density region.  If it is
required, the continuity equation will have to be solved.  We also need
to optimize the gridding of the vacuum region and the separatrix.
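The marker-to-coefficients step of the plan can be sketched as a simple lookup (all numerical values below are placeholders, not agreed-upon choices):

```python
import numpy as np

def region_coefficients(marker, nu_plasma=1e-2, nu_vac=0.0,
                        drag_plasma=0.0, drag_vac=1e4,
                        eta_plasma=1e-6, eta_vac=1.0):
    """Set transport coefficients from the advected marker variable.

    marker < 0 flags vacuum cells, marker > 0 flags plasma, as in the
    scheme discussed.  Values are illustrative placeholders.
    """
    plasma = marker > 0.0
    visc = np.where(plasma, nu_plasma, nu_vac)      # moderate in plasma, ~0 in vacuum
    drag = np.where(plasma, drag_plasma, drag_vac)  # zero in plasma, large in vacuum
    eta = np.where(plasma, eta_plasma, eta_vac)     # small in plasma, large in vacuum
    return visc, drag, eta
```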

3. Advection

We reviewed how to add advection (v-dot-grad-v) to the equation of
motion.  Led by Carl, we worked it out on the board.
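For reference, one simple discrete form of the v-dot-grad-v term in 1-D is first-order upwinding (an illustration only; it is not necessarily the form worked out on the board):

```python
import numpy as np

def v_grad_v_1d(v, dx):
    """First-order upwind estimate of the advection term v*dv/dx in 1-D.

    Chooses the backward difference where v > 0 and the forward
    difference otherwise (periodic wrap via np.roll at the ends).
    """
    dv_fwd = (np.roll(v, -1) - v) / dx   # forward difference
    dv_bwd = (v - np.roll(v, 1)) / dx    # backward difference
    return v * np.where(v > 0, dv_bwd, dv_fwd)
```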

A Final Point

There was discussion about lumping NIMROD with the Numerical Tokamak
for the purpose of obtaining time on ASCI computers as a grand
challenge project.  In the end, this idea was soundly defeated by the
team.  NIMROD will NOT be lumped with the Numerical Tokamak in any way!