SATURDAY, NOV. 14, 1998, 12:00 NOON - 5:30 PM
SUNDAY, NOV. 15, 1998, 8:30 AM - 5:30 PM
MAGNOLIA ROOM, RADISSON HOTEL, NEW ORLEANS, LA
*** THEMES OF THE MEETING ***
WHERE ARE WE? WHERE ARE WE GOING? HOW DO WE GET THERE?
AGENDA (REVISION 4)
Saturday, Nov. 14, 12:00 NOON
Attending: Bolton, Tarditi, Gianakon, Kruger, Aydemir, Callen, Popova, Sovinec, Nebel, Glasser, Schnack
I. WHERE ARE WE?
1. Status
A. Physics Kernel
i. Physics Model
b. R = 0
Carl has made real progress in implementing the R=0 boundary condition. It seems to work well and is in a separate CVS branch. This required changing variables from R*B_phi to just B_phi. Carl wants people to run some toroidal problems with that version to see whether the change affects any previous results. MHD works both linearly and nonlinearly. There may be some problems with the 2-fluid effects, but those may come from the Hall SI operator.
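As an aside on why the change of variables matters (this reasoning is not in the minutes; it is the standard pole condition for a vector field in cylindrical coordinates): for the e^{in phi} toroidal harmonic, regularity at the axis requires, schematically,

\[
B_Z \sim R^{|n|}, \qquad B_R \pm i\,B_\phi \sim R^{|n \mp 1|},
\]

so B_phi itself carries the distinction between regular and singular behavior at R = 0, while R*B_phi vanishes there for every harmonic and obscures it.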
Alfonso and Alan will work to get an FRC equilibrium into fluxgrid so the tilting mode can be benchmarked.
c. Electron and ion pressure (two-fluid physics)
Both electron and ion pressures are in the code, done by Carl and Alfonso. Actually, the code solves for the total and electron pressures.
Rick reported some nonlinear 2-fluid results. These have been run to saturation. Stable results required increasing the pressure SI coefficient.
d. Vacuum region
e. Resistive wall
The team will evaluate Mike Hughes' proposal for vacuum and resistive walls.
Perhaps he should be concentrating on a physics problem instead of on code modification.
Scott also expressed an interest in implementing the vacuum region.
There was a discussion of the appropriate way to spend the potential resources and the appropriate way to get these problems done. Carl said he could do these problems in a few weeks. We need to decide on allocating these resources.
f. Hot ion kinetic effects
No progress here. We need a plan.
g. ???
Nothing here.
ii. Performance
Putting the equilibrium current into the SI operator has allowed true decoupling of the time step from the fastest time scale in the problem!
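For readers outside the team, a hedged sketch of the idea (the minutes do not write out the operator, and this is a schematic form rather than the exact NIMROD discretization): a semi-implicit advance inserts a self-adjoint operator L_SI into the velocity update,

\[
\frac{\rho}{\Delta t}\,\Delta V \;-\; \Delta t\, L_{\rm SI}(\Delta V) \;=\; F(V^n, B^n, p^n),
\qquad \Delta V \equiv V^{n+1} - V^n,
\]

so the fast waves are treated implicitly and no longer set the CFL limit; building the equilibrium current into L_SI is what removes the remaining coupling to the fastest time scale.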
a. Linear solver
- G-mode and coarse grid preconditioning
SI in V works better than SI in B. There is a correct g-mode, but there is also a numerical mode that eventually dominates. At low S the g-mode is overstable, which is unphysical.
The coarse grid shows performance improvements (a factor of 2) in PRECON, but the results have been "less than I had hoped" in NIMROD.
"There are no more bug."....AHG
b. Parallel scaling
Carl has demonstrated run-time scaling of N**(-0.8) with processor count N ("ideal" is 1/N). We need scalar optimization.
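To make the N**(-0.8) figure concrete, a quick illustration (the numbers are illustrative, not measurements from the meeting):

    # Speedup and efficiency implied by run time t ~ N**(-0.8),
    # where N is the processor count; ideal scaling would be t ~ 1/N.
    for n in (8, 64, 512):
        speedup = n ** 0.8        # relative to a single processor
        efficiency = speedup / n  # fraction of the ideal speedup N
        print(f"N={n:4d}  speedup={speedup:6.1f}  efficiency={efficiency:.2f}")

At 64 processors this gives a speedup of about 28 rather than 64, i.e., roughly 44% parallel efficiency, which is part of why scalar (single-processor) optimization matters too.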
c. Scalar optimization
We need to be more clever about use of cache, etc. CRAY has information about T3E optimization on its web pages. Carl has tried rearranging the index ordering in arrays; with the offset indices moved to the first two slots, the matrix-vector multiply runs twice as fast. The general philosophy is to make the most use of contiguous data. We need resources to do this in a systematic manner.
Dalton should check with NERSC to see if they can help us with this.
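NIMROD itself is Fortran, but the contiguous-data principle described above is easy to demonstrate in a few lines of NumPy (a hypothetical illustration, not NIMROD code): the same reduction runs much faster when the inner operation sweeps the contiguous index, which is the effect Carl saw when he moved the offset indices to the first two slots.

    import time
    import numpy as np

    n = 4000
    a = np.random.rand(n, n)  # C order: the last index is contiguous in memory

    def sum_contiguous(m):
        # Each slice m[i, :] is a contiguous run of memory: cache friendly.
        return np.array([m[i, :].sum() for i in range(m.shape[0])])

    def sum_strided(m):
        # Same arithmetic, but each slice m[:, j] strides across rows.
        return np.array([m[:, j].sum() for j in range(m.shape[1])])

    for f in (sum_contiguous, sum_strided):
        t0 = time.perf_counter()
        f(a)
        print(f"{f.__name__}: {time.perf_counter() - t0:.3f} s")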
d. Higher order elements
We're "thinking about it"?
iii. Validation
a. Sheared slab problem
Problems reported at Madison were cured by making DIVBD smaller (the same as the electrical diffusivity).
The choice of DIVBD may be the most crucial input parameter.
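For context, a hedged gloss on the parameter (assuming DIVBD is the diffusivity of the divergence-cleaning term, which the comparison to the electrical diffusivity suggests): the magnetic advance carries an error-diffusion term of the schematic form

\[
\frac{\partial \mathbf{B}}{\partial t} \;=\; -\nabla\times\mathbf{E} \;+\; \kappa_{\rm divb}\,\nabla(\nabla\cdot\mathbf{B}),
\]

so setting kappa_divb near eta/mu_0 diffuses the div(B) error at the same rate the field itself diffuses, rather than faster.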
b. Need for comparison with M3D kink-ballooning problem?
Do we want to redo their problems? Lutjens has also done this problem. We probably should do it, too. Tom will contact Lutjens to get the input.
This might be a good problem for Mike Hughes (among other things).
Is there a problem that NIMROD can do and M3D can't? (g-mode?????)
B. Pre- and Post-processing
i. Graphics and animation
a. Demonstration of Xdraw
Nina Popova gave a demonstration.
**********END OF SATURDAY SESSION**************
Sunday, Nov. 15
Attending: Kruger, Gianakon, Aydemir, Leboeuf, Chu, Sovinec, Nebel, Glasser, Tarditi, Schnack, Popova, Wright, Callen, Bolton
Alan raised the possibility of introducing "F-blocks" to enable Fourier representation in the poloidal direction. This may make the matrices smaller and the code faster.
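One plausible reading of the F-block idea (an assumption; the minutes give no detail): inside such a block the poloidal finite elements would be replaced by a Fourier series,

\[
u(r,\theta,\phi) \;\approx\; \sum_{m} u_m(r)\, e^{i m \theta} \quad \text{for each toroidal harmonic } n,
\]

so a block's 2D matrices collapse toward sets of 1D radial matrices, which is presumably where the smaller-matrices, faster-code expectation comes from.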
b. Suggestions for further Xdraw improvements
Menus? Yes. Nina to choose the best machine-independent method.
Compare different runs on same windows.
Data scaling (e.g., semi-log).
Varying data format.
Import a color table.
Change plot type without need to change .bin file.
(Check on Island View??? for editing PostScript files.)
Nina will post the proposals on the web and we'll prioritize them for her.
ii. Poincaré plots???
Tom's package needs to be incorporated into the NIMROD installation.
C. GUI
Being maintained as features are added. The remote operation may need to be revisited.
D. Documentation
2. Public Release
This is a competitive time. If we put the code out to the public and there are problems, then our "colleagues" may take advantage of that.
There should be a "nimrod users" e-mail group. Tom and Scott will be on the users' group list.
Scott will take responsibility for maintaining the web pages, but not until after he graduates.
Are we ready??
Limited release to users who request it. A "license" a la the UCLA release will be on the web; when it is signed, the user gets a password to the developers' web page.
- Is the code stable?
Carl needs to update the default values.
- Improvements to "user-friendliness" needed?
See above. GUI is up to date.
- Customer support?
e-mail group for users, and maybe an FAQ.
- Sufficient documentation?
Need a description of each input variable and a "tutorial" on how to run the code. Need to make sure the input.f material on the web is kept up to date.
- Web pages ready?
Scott will maintain the web pages.
- How to announce?
Put it on the OFE web page.
Saturday, Nov. 14, 6:30 PM - ??? Team Bonding
Sunday, Nov. 15, 8:30 AM
II. WHERE ARE WE GOING (6-12 months)?
1. Discussion of SS* process
A. Status of Initiative
B. Selection Criteria, FYI:
1) Readiness to move to terascale computing.
Yes. May need a specific problem that illustrates the need and the capability.
a) The state of the underlying science as well as the understanding of scalable numerical methods for the area will be evaluated to ensure that the area is prepared to make early use of terascale computers.
b) Readiness of the associated scientific community to take advantage of SSI-scale computing resources.
Yes.
2) Benefits to Agency Missions. The problem is significant to the missions of the DOE.
Fusion is great!!?? We need the OFES mission statement.
3) Benefits to Science and Technology. Applications that stand to gain the most significance in the shortest period of time.
Justification for building 10-Tflop machines. Help make the argument that we need it in FY2000. We need the answers in the 2000-2005 time frame.
4) Solutions to National Problems. A fundamental science or engineering problem that has potential economic and/or societal impact and that can be advanced by applying high-performance computing resources.
Redundant, but emphasize the "complex systems" part. A cutting-edge international tool. This is the US contribution to the international fusion program.
5) Sense of Urgency. The importance to the nation of initiating this project now, rather than in 2-3 years.
We must lead internationally.
6) Management and Collaborative Potential. Probable advances in enabling software or hardware technologies developed by the proposed Application that benefit other Applications will be treated favorably, as will Applications which use advanced software development frameworks.
Linear solvers.
C. Selection Process
We don't know.
D. The Competition
We don't know whether we're to compete or collaborate with our "colleagues". Will the competition be open or closed?
2. How does NIMROD stand?
See above!
A. Physics model
B. Performance
C. Demonstrated Applications
D. Are we ready for 10 Tflop computing?
3. Public Release?
4. Applications to Alternates
Sunday, Nov. 15, 1:30 PM
III. HOW DO WE GET THERE?
1. SS* Strategy
A. Physics model
Thermal conduction to enable neoclassical calculations. Reproduce neo-FAR results. Tom will work on both of these.
Three near-term problems:
1. Secondary island generation from sawtooth crash. Tom G.
2. "shot 87009". We need external kink to do this, but now Alfonso is trying with a different equilibrium that is internal kink unstable. We need the vacuum region to do this problem. Carl will make this a priority code development issue.
3. Spheromak, FRC, RFP (Alfonso/Carl, Alfonso, Carl)
The only new "physics" to be added are the vacuum region and the 2-fluid current advection, plus some "things to be cleaned up".
B. Performance
Coarse-grid preconditioning (Alan). F-blocks (Alan). Automation of the choice of DIVBD. Scalar optimization (approach NERSC on this!).
C. Applications
See above.
2. Alternatives?
3. Formulation of Plan
See above.
4. Manpower Allocation
Who's going to do what??
See above.
IV. OTHER MATTERS
1. New team members
Mike Hughes to focus on problems with the external kink with a resistive wall and feedback.
2. New beta-testers (i.e., Columbia)
OK. (A member of the user group.)
3. T3E allocation/utilization
We should use ALL of our 1st quarter allocation. No feedback on the "new" NERSC queues.
4. Impact of LANL firewall
We may need to use a server at another site to assure access (NERSC? SAIC? UW?) Alan will keep everyone posted.
5. How are we functioning?
We should have a meeting in the middle of January in Austin (starting Tuesday, Jan. 12). Ahmet will host and tell us about all the NIMROD applications at UT.
Teleconference: 1 PM EST on Thursday Dec. 17. Curt will set it up and Dalton will send an agenda.
6. Meeting and teleconference (?) schedule
See above.
V. ADJOURN