The Focus Fusion Society Forums › Lawrenceville Plasma Physics Experiment (LPPX) › Computational resources available from the DOE

Viewing 15 posts - 1 through 15 (of 16 total)
  • Author
    Posts
  • #494
    ailabs
    Participant

    Might be helpful to apply for the INCITE program to speed up simulation work:

    DOE Awards 265 Million Processor-Hours To Science Projects
    from the yay-i-get-to-compute dept.
    posted by Zonk on Monday January 21, @02:21 (Supercomputing)
    http://science.slashdot.org/article.pl?sid=08/01/20/1737203

    Weather Storm writes “DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program supports computationally intensive, large-scale research projects at a governmental level. They recently awarded 265 million processor-hours to 55 scientific projects, the largest amount of supercomputing resource awards donated in the
    DOE’s history and three times that of last year’s award. The winners were chosen based on their potential breakthroughs in the areas of science and engineering research, and the suitability of the project for using supercomputers.

    This year’s INCITE applications ranged from developing nanomaterials to advancing the nation’s basic understanding of physics and chemistry, and from designing quieter cars to improving commercial aircraft design. The next round of the INCITE competition will be announced this summer. Expansion of the DOE Office of Science’s computational capabilities should approximately quadruple the 2009 INCITE award allocations to close to a billion processor hours.”

    Link: http://www.technologynewsdaily.com/node/9009

    #2937
    Brian H
    Participant

    ailabs wrote: Might be helpful to apply for the INCITE program to speed up simulation work: DOE Awards 265 Million Processor-Hours To Science Projects […]

    Eric has posted the following:
    “Changes to the software will allow a roughly five-fold speed-up, allowing us to run about 80 ns per month.

    This is still slow, as the full run-down and compression lasts 400 ns, so LPP is actively seeking additional processing power to speed the simulations. Currently we are using a 2.8 GHz Intel Xeon
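A quick sanity check on the arithmetic implied by Eric's figures (using only the numbers quoted above):

```python
# Back-of-envelope estimate of remaining wall-clock time,
# taken from the figures in Eric's post.
sim_rate_ns_per_month = 80   # ns of plasma time simulated per month
full_run_ns = 400            # full run-down and compression duration

months_needed = full_run_ns / sim_rate_ns_per_month
print(months_needed)  # -> 5.0
```

So even after the five-fold speed-up, a single complete run still takes on the order of five months on the current hardware, which is why extra processing power matters so much here.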

    #2938
    ailabs
    Participant

    I don’t see why it would not apply, based on the proposed allocation here: http://www.science.doe.gov/ascr/INCITE/

    #2939
    Brian H
    Participant

    ailabs wrote: I don’t see why it would not apply, based on the proposed allocation here: http://www.science.doe.gov/ascr/INCITE/

    “Specifically, 80% of the leadership-class Cray computers at ORNL and the IBM Blue Gene resources at ANL are allocated through the INCITE program. In addition to the leadership-class resources at ORNL and ANL, 10% of the National Energy Research Scientific Computing Center (NERSC) high-performance computing resources at LBNL and 5% of the Hewlett-Packard MPP system at PNNL will be made available to INCITE.”

    Yes, I see! Eric, you might like to check that out. Sounds like a few hours would do wonders for your schedule!

    #3002
    texaslabrat
    Participant

    As I mentioned to Dr. Lerner in email, I personally think the project would be best served by investing some coding resources to add support for either ATI (now AMD) or Nvidia GPU-assisted computation. Especially now that both camps can handle double-precision computation in hardware. I’m sure the folding@home folks at Stanford would assist in sharing code and know-how if requested. The amount of floating-point power available in today’s video cards (and, by extension, the purpose-made “stream processing” cards built from them) completely dwarfs what is available on general-purpose CPUs. IBM’s recent supercomputing entry (“Roadrunner”) uses beefed-up Cell processors in conjunction with AMD CPUs for the same reason. Just a thought 🙂

    #3009
    Lerner
    Participant

    You’re right and we are starting to work on this now. Details will be announced soon.

    #3014
    Brian H
    Participant

    Lerner wrote: You’re right and we are starting to work on this now. Details will be announced soon.

    Right about which one? The DoE system, or the GPU strategy?

    #3021
    Lerner
    Participant

    Sorry to be cryptic. You’re right about the GPU one. I don’t think DOE will give us any right now.

    #3026
    Brian H
    Participant

    Lerner wrote: Sorry to be cryptic. You’re right about the GPU one. I don’t think DOE will give us any right now.

    Does the DoE have a grudge against you? Their mandate seems clear enough. You could, if you had the spare time, probably apply some public pressure …

    #3028
    Alex Pollard
    Participant

    texaslabrat wrote: Especially now that both camps can handle double-precision computation in hardware.

    Lerner wrote: You’re right and we are starting to work on this now. Details will be announced soon.

    This is excellent news; single-precision GPUs are wholly inadequate for scientific purposes.

    #3035
    Henning
    Participant

    The new GeForce GTX 280 and GTX 260 processors support double precision: see the NVIDIA CUDA Compute Unified Device Architecture Programming Guide (Appendix A and B).

    But isn’t single precision enough here? Errors are just quantum fluctuations… :smirk:
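For anyone curious why single precision worries people in long simulations: small updates simply vanish once the running total is large enough. A minimal stand-in demonstration (nothing to do with LPP's actual code):

```python
import numpy as np

# In float32, machine epsilon is ~1.2e-7, so an increment below about
# 1e-7 of the running total is rounded away entirely:
total32 = np.float32(1.0)
assert total32 + np.float32(1e-8) == total32   # the update is silently lost

# float64 resolves increments down to ~2.2e-16 of the total:
total64 = np.float64(1.0)
assert total64 + 1e-8 != total64               # the update is retained

# Sequential accumulation also drifts far more in single precision:
a32 = np.cumsum(np.full(10**6, 1e-4, dtype=np.float32))[-1]
a64 = np.cumsum(np.full(10**6, 1e-4, dtype=np.float64))[-1]
print(abs(a32 - 100.0), abs(a64 - 100.0))      # float32 error is much larger
```

Over hundreds of thousands of time steps, that kind of lost-update error compounds, which is why hardware double precision on GPUs is a big deal for this class of problem.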

    #3053
    ailabs
    Participant

    This just in from SlashDot:

    Cool/Weird Stuff To Do On a Cluster?
    from the whose-merest-operational-parameters-i-am-not-worthy-to-c… dept.
    posted by kdawson on Tuesday June 24, @19:09 (Supercomputing)
    http://tech.slashdot.org/article.pl?sid=08/06/24/221246

    Gori writes “I’m a researcher at a university. Our group mainly does Agent Based Modeling of interdisciplinary problems (think massive simulations where technology, policy, and economics meet). Recently, we managed to get a bunch of money for a High Performance Cluster to run our stuff on. The code is mostly written in Java. Our IT support people are very capable of setting up a stable cluster that will run Java perfectly. But where’s the fun in that? What I’m trying to figure out are other, more far-out and interesting things to do with this machine

    #3054
    Brian H
    Participant

    What I’m trying to figure out are other, more far-out and interesting things to do with this machine

    #3061
    Breakable
    Keymaster

    Eric mentioned that his simulation cannot be parallelised.
    I am guessing INCITE can only be used via a lot of parallel computer cores.
    So there is no speed-up from running a serial task there, because you could use just one computer core.
    It’s better to run on a GPU which has more serial power.
    Of course, don’t forget that most GPUs are also implemented using several cores in parallel,
    so you should choose one that has the fastest core, instead of the most teraflops.

    #3063
    texaslabrat
    Participant

    Breakable wrote: Eric mentioned that his simulation cannot be parallelised.
    I am guessing INCITE can only be used via a lot of parallel computer cores.
    So there is no speed-up from running a serial task there, because you could use just one computer core.
    It’s better to run on a GPU which has more serial power.
    Of course, don’t forget that most GPUs are also implemented using several cores in parallel,
    so you should choose one that has the fastest core, instead of the most teraflops.

    While I don’t want to speak for Dr. Lerner, I think you’ve misinterpreted his statement. He said nothing about the problem not being highly parallel; rather, he mentioned that it is the type of problem that doesn’t easily lend itself to being farmed out to large numbers of unlinked processors, as in the folding@home or distributed.net projects (which have the luxury of divvying up work to be completed asynchronously, since the results from one data set mostly do not depend on another). That, in itself, says nothing about the parallel nature of the problem, but rather that the steps involved in the calculation are closely related to one another and can’t be done asynchronously.

    And that makes perfect sense. I would imagine that the calculations are trying to keep track of a bunch of particles and force lines, and every step changes the environment for all of them. Thus, this is the quintessential parallel problem, insofar as all the calculations must truly be parallel in a strict sense: all the forces, particle vectors, etc. must be calculated and the results placed into a multi-dimensional array before moving on to the next time slice, which will use that array as the initial conditions for the next turn of the crank. It would seem that the highly parallel nature of GPU architecture would be very well suited, and TFLOPS very much matters 😉

    That said, ease of programming is probably the biggest variable. I’ve heard that CUDA is much more “programmer-friendly” than ATI’s CTM API, but I have no first-hand experience programming either of them.
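To make the lock-step structure concrete, here is a toy sketch (my own illustration, not LPP's code): all forces are computed from the current state array before any particle is advanced, so every per-particle update within a time slice is independent and could run in parallel, but the time slices themselves must proceed in strict sequence.

```python
import numpy as np

def step(pos, vel, dt=1e-3):
    """Advance every particle one time slice in lock-step.

    Toy repulsive inverse-square interaction with unit masses.
    All forces are evaluated from the *same* pre-step positions,
    then every particle is updated at once -- synchronous, not
    asynchronous like folding@home-style work units.
    """
    # displacement vectors between all pairs: diff[i, j] = pos[i] - pos[j]
    diff = pos[:, None, :] - pos[None, :, :]
    # squared distances; adding the identity keeps the i == j terms finite
    dist2 = (diff ** 2).sum(axis=-1) + np.eye(len(pos))
    # net force on each particle (the self-term contributes zero)
    forces = (diff / dist2[..., None] ** 1.5).sum(axis=1)
    # every update reads only pre-step state: trivially parallel per slice
    vel = vel + dt * forces
    pos = pos + dt * vel
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.normal(size=(50, 3))   # 50 particles in 3D
vel = np.zeros((50, 3))
for _ in range(100):             # time slices must run in order
    pos, vel = step(pos, vel)
```

The inner all-pairs force evaluation is exactly the kind of dense, data-parallel work a GPU eats for breakfast, while the outer loop stays serial, which is consistent with what Dr. Lerner described.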

    I look forward to learning more details whenever Dr. Lerner has an opportunity to write something up.
