    #777
    Breakable
    Keymaster

    I got the idea here
    http://www.talk-polywell.org/bb/viewtopic.php?t=2126&postdays=0&postorder=asc&start=15

    The idea is that we can simulate FF at some level using rubber-band modeling.
    Results:
    http://imgur.com/GL8onl&od7qN;
    What can be learned:
    The plasmoid is probably less orderly than the FF animation leads us to believe.

    Edit
    Further question: how orderly is the ion/electron beam?

    #6194
    jamesr
    Participant

    Although twisting a rubber band and seeing it kink may help figuratively in picturing what is going on, it can be misleading to think that the behaviour of plasmas is that similar.

    A rubber band has a fixed topology – it cannot split into separate strands or merge back together. One of the key aspects of plasma instabilities is precisely that ability to filament, form ‘magnetic islands’ and reconnect. The transfer of energy to the plasma from the magnetic field and vice versa is intimately linked to the reconnection rate & changing topology of the field.
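
    For reference, the standard resistive-MHD induction equation (textbook form, assuming uniform resistivity η) makes this explicit: the first term just carries field lines along with the flow, while the resistive term is what lets them slip, tear and reconnect:

        \[
          \frac{\partial \mathbf{B}}{\partial t}
            = \nabla \times (\mathbf{u} \times \mathbf{B})
            + \frac{\eta}{\mu_0}\,\nabla^{2}\mathbf{B}
        \]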

    It is really only through simple idealized simulations like these that you can get a feel for what is going on:
    two-stream instability
    Double tearing mode plus a shear instability
    3-D evolution of Kelvin-Helmholtz instability
    or through full device models like this:
    DIII-D tokamak simulation

    What we need is for the plasma focus device simulations that Eric Lerner and others are working on to mature to the level of some of the codes the tokamak or inertial confinement fusion communities now have at their disposal. This is what is going to take lots of funding in the years ahead.

    The 1D & 2D plasma focus codes around now may be sufficient to characterize the first few phases of a DPF pulse, and separate codes could start modeling the properties believed to exist in the small plasmoid, but currently no single program can model a DPF as a whole.

    A fully 3D program to model plasma focus devices that can cope with the vastly different time and space scales involved (even if written as separate modules for each phase) would do wonders not only for the science but also from a PR point of view. With convincing 3D graphics based on a sound underlying model, it will be a lot easier to convince people than with the simple artist's impressions we have at the moment.

    #6195
    Breakable
    Keymaster

    Thank you, this certainly sheds some light on the situation.

    #6196
    Aeronaut
    Participant

    I agree, James. Cool graphics can sell all sorts of things. I take it Sing Lee’s DPF simulator is 1D? Any guesstimate of how many person-months it might take to upgrade it to 2D, and then on to 3D?

    #6198
    jamesr
    Participant

    As part of my PhD, I am writing a 3D model at the moment for simulating the turbulence at the edge of tokamaks to run on our 960-core parallel HPC cluster. All the time though, in the back of my mind, I keep thinking about how I can make it generic enough to cope with the conditions in a DPF.

    Most simulations work either by simplifying the ion & electron motion (averaging out the fast gyro motion of the particles around the magnetic field lines), or by treating the plasma as a single conducting fluid with a resistivity that varies with density & temperature. These approaches assume there is only one ion species (typically a perfect hydrogen plasma), so the numbers of ions & electrons are the same.
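
    As a rough illustration of that single-fluid closure (a toy sketch, not taken from any particular code; the 5.2e-5 coefficient is the standard approximate parallel Spitzer value with Te in eV):

        # approximate parallel Spitzer resistivity, eta ~ Z*lnLambda/Te^(3/2) (Te in eV);
        # hotter plasma conducts far better, which is why the resistivity has to be
        # recomputed from the local temperature (and, via the Coulomb logarithm, density)
        # as the simulation evolves
        def spitzer_resistivity(Te_eV, Z=1.0, ln_lambda=10.0):
            """Resistivity in ohm-metres; ln_lambda is the Coulomb logarithm."""
            return 5.2e-5 * Z * ln_lambda / Te_eV**1.5

        print(spitzer_resistivity(10.0))     # ~1.6e-5 ohm-m, cool edge plasma
        print(spitzer_resistivity(1000.0))   # ~1.6e-8 ohm-m, hot core: ~1000x less resistive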

    Hopefully, when I get a bit further, my model will treat the electrons and the different ion species as separate fluids, coupled together by the E & B fields and the odd collision. So in theory, going from my edge plasma with some impurities to a hydrogen-boron plasma should not be too much extra work.
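
    For reference, the per-species momentum balance a multi-fluid model of this kind evolves looks roughly like this (standard textbook form, viscosity neglected):

        \[
          m_s n_s \left( \frac{\partial \mathbf{u}_s}{\partial t}
                         + \mathbf{u}_s \cdot \nabla \mathbf{u}_s \right)
            = q_s n_s \left( \mathbf{E} + \mathbf{u}_s \times \mathbf{B} \right)
            - \nabla p_s + \mathbf{R}_s
        \]

    with one such equation for the electrons and for each ion species, all coupled through the shared E & B fields, and with R_s the collisional friction between species ("the odd collision").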

    One thing it won’t do, though, is handle the bremsstrahlung & other radiation effects key to a DPF being viable – that’s a whole new topic.

    #6199
    Aeronaut
    Participant

    Cool deal. What would the outlook look like if you were to begin with each core thinking it was measuring and projecting the weather for 960 towns, villages, crossroads, etc? Iow, we have a much smaller plasma, so we can map it more precisely.

    #6200
    jamesr
    Participant

    Aeronaut wrote: Cool deal. What would the outlook look like if you were to begin with each core thinking it was measuring and projecting the weather for 960 towns, villages, crossroads, etc? Iow, we have a much smaller plasma, so we can map it more precisely.

    The smaller plasma does not make it simpler. The small scale and high density just mean the grid size & time-step for the simulation are smaller – although not quite as bad as the situation in inertial confinement fusion.

    Our little cluster is tiny really – I can only model 3D grids of a few hundred points on each side, maybe a billion gridpoints for a few tens of thousands of timesteps. To model the focus of a DPF device would take much more.
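
    To put a rough number on that (a back-of-envelope sketch: 1000 points per side gives the "billion gridpoints", and the eight double-precision fields per point are assumed):

        # back-of-envelope memory estimate for a ~10^9-point grid
        nx = ny = nz = 1000          # ~a billion gridpoints in total
        nfields = 8                  # e.g. density, pressure, 3 velocity + 3 B components (assumed)
        bytes_per_value = 8          # double precision
        total_gb = nx * ny * nz * nfields * bytes_per_value / 1e9
        print(f"~{total_gb:.0f} GB just to hold one copy of the fields")   # ~64 GB

    And that is before any work arrays, diagnostics or checkpoints, so the data has to be spread across many nodes.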

    My code, like most parallel codes of this sort, works by domain decomposition – each processor handles a small local region, say 64x64x64 grid points, then exchanges the field values, densities, velocities etc. at each edge with the relevant neighbouring processor each timestep. This should in theory scale well – that is, it makes efficient use of all the processors as you run it across more & more of them.
    Then I may be able to run it on the UK’s larger academic supercomputer, HECToR, which has 22,656 CPU cores and is currently No. 20 in the Top500.
    The Met Office’s weather modelling computer is down at No. 89 on the list. All of these are still small in comparison to the DOE/NNSA computing facilities at Los Alamos or Oak Ridge.
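
    A toy sketch of the domain-decomposition / halo-exchange pattern described above (a minimal example using mpi4py; the 1-D slab and the diffusion update are just stand-ins for a real 3-D field solver):

        # run with e.g.: mpiexec -n 4 python halo_demo.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        nlocal = 64                         # interior cells owned by this rank
        u = np.zeros(nlocal + 2)            # +2 ghost (halo) cells, one at each end
        u[1:-1] = rank                      # dummy initial data

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        for step in range(100):
            # pass my rightmost interior cell into the right neighbour's left ghost...
            comm.Sendrecv(sendbuf=u[nlocal:nlocal + 1], dest=right,
                          recvbuf=u[0:1], source=left)
            # ...and my leftmost interior cell into the left neighbour's right ghost
            comm.Sendrecv(sendbuf=u[1:2], dest=left,
                          recvbuf=u[nlocal + 1:nlocal + 2], source=right)
            # local update on interior cells only (toy diffusion step)
            u[1:-1] += 0.1 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

    Each rank only ever talks to its immediate neighbours, which is why the pattern keeps making efficient use of the processors as you add more of them.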

    If FF-1 achieves its goals, then it is this scale of computer that we will want to run detailed simulations on. However well the experiments go, these days any nuclear device must be simulated before it can be manufactured, to prove the physics is well understood, and dosage/shielding calculations need to be done to design the required containment (however small the dose may be).

    #6202
    Aeronaut
    Participant

    jamesr wrote: Our little cluster is tiny really … To model the focus of a DPF device would take much more.

    Wow. Didn’t realize how many processors would be needed, or how big the arrays could be. Are you assuming a combination of top-down and bottom-up prediction as the variables are developed? How long do you guesstimate a simushot would take on an array the Met Office’s size? And, for comparison, on ORNL’s array? Dang, I’m full of questions today!

    Excellent point about the simulations, and experimental confirmations of them, being required for manufacturing. I should have seen that coming. And that’s just for straight pB-11 fill gases. But I imagine it’s going to need simulations for almost any gas if it’s going to be widely distributed.

    #6214
    Phil’s Dad
    Participant

    Aeronaut wrote: Wow. Didn’t realize how many processors would be needed, or how big the arrays could be.

    There was an experiment in the UK to get the public to run climate models on their home PCs. The big picture was split into chunks they could cope with – probably just a handful of grid points/cells – which they ran for a week. They e-mailed their results to other members of the public to crunch, and so on, until it was boiled down to something a single machine could handle.

    In the US you would have a bigger catchment, about 60 million processors? Makes the Cray’s 1/4 million look modest. What would that do for you?

    #6215
    Phil’s Dad
    Participant
    #6218
    jamesr
    Participant

    Phil’s Dad wrote:
    There was an experiment in the UK to get the public to run climate models on their home PCs. The big picture was split into chunks they could cope with – probably just a handful of grid points/cells – which they ran for a week. They e-mailed their results to other members of the public to crunch, and so on, until it was boiled down to something a single machine could handle.

    Actually, that project used a different method. When you’re running on separate home PCs you can’t exchange data very well, as the bandwidth is too low and the latency way too high. This is also true for all the other distributed computing projects like SETI@home or Folding@Home. Each machine needs a separate task that is not time-sensitive or dependent on any other processor.

    They gave each PC a whole Earth to model, but with a fairly coarse grid. They are, after all, interested in climate, not local weather. Each PC was given slightly different starting conditions and response parameters. For example, in one scenario the oceans may absorb a little more CO2, in another the rate of deforestation may be a little lower, etc. By running the climate model forward from each of these scenarios you learn how sensitive it is to variations in each parameter and how hard it can be pushed. So if you can estimate the errors on your starting assumptions, you can extrapolate forward to give the error on the prediction a hundred years from now.
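
    In other words it is an embarrassingly parallel parameter ensemble rather than one big coupled run. A toy sketch of the idea (the model and parameter ranges are made up purely for illustration):

        import random

        def coarse_climate_model(co2_uptake, deforestation_rate):
            """Stand-in for a full model run; returns a fake 'warming by 2100' in degrees C."""
            return 2.0 + 1.5 * (1.0 - co2_uptake) + 0.8 * deforestation_rate

        results = []
        for pc in range(1000):                          # one independent task per volunteer PC
            co2_uptake = random.uniform(0.4, 0.6)       # perturbed ocean CO2 uptake fraction
            deforestation = random.uniform(0.0, 0.2)    # perturbed deforestation rate
            results.append(coarse_climate_model(co2_uptake, deforestation))

        # the spread of the ensemble shows how sensitive the prediction is to the assumptions
        print(min(results), max(results))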
