While it is always good to test ideas with a simple model like the ideal gas law to get rough ballpark figures, you should also be aware of the conditions under which it is a good approximation and when it breaks down (and it does break down at any fusion-capable densities, temperatures and timescales).
In plasma physics it is common to order each term in the equations by a characteristic time and length scale (or characteristic speed). So for example the conservation of energy gives you the rate of change of pressure, which in turn gives you the characteristic speed(s) of a pressure fluctuation, which for a normal gas is just the speed of sound. In a plasma, however, there is not just one solution to the equation; there are many.
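As a rough back-of-the-envelope sketch of that characteristic-speed idea (the numbers below are illustrative textbook values of my own choosing, not from any particular experiment):

```python
import math

def sound_speed(gamma, p, rho):
    """Adiabatic sound speed c_s = sqrt(gamma * p / rho)."""
    return math.sqrt(gamma * p / rho)

# Ordinary gas: air at roughly standard conditions -> ~340 m/s
c_air = sound_speed(1.4, 101325.0, 1.225)

# One of the plasma analogues: the ion acoustic speed,
# c_s ~ sqrt(k*Te / m_i), here for hydrogen with Te = 10 eV
k_eV = 1.602e-19   # J per eV
m_p = 1.673e-27    # proton mass, kg
c_ion_acoustic = math.sqrt(10.0 * k_eV / m_p)   # ~3e4 m/s
```

Even this crude comparison shows why a plasma supports several characteristic speeds: the ion acoustic speed depends on which species' temperature and mass you plug in, and there are other branches (Alfvén, magnetosonic) besides.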
If the local fluid velocities are small in comparison to the speed of sound then the adiabatic term is the only one you need. However, as the temperature & pressure gradients increase, the distribution of individual ion/electron velocities departs further from Maxwellian and you need to take into account other terms such as the heat flux.
At that point, if you want to stay with the fluid model, you can either keep adding equations for the higher-order moments of the velocity distribution function, with a closure at a higher order, or move to a full kinetic model and use a PIC (particle-in-cell) approach.
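To make the "moments with a closure" idea concrete, here is a minimal numerical sketch in normalised units of my own choosing: density, bulk velocity, pressure and heat flux are just successive velocity moments of f(v). For a Maxwellian the heat flux closes at zero, so a fluid model truncated at the pressure equation is fine; for a skewed (non-Maxwellian) distribution it no longer vanishes.

```python
import numpy as np

# 1-D velocity grid in arbitrary normalised units
v = np.linspace(-8.0, 8.0, 4001)
dv = v[1] - v[0]

def moments(f, m=1.0):
    """Density, bulk velocity, pressure and heat flux as velocity moments."""
    n = (f * dv).sum()                    # 0th moment: density
    u = (f * v * dv).sum() / n            # 1st moment: bulk velocity
    w = v - u
    p = m * (f * w**2 * dv).sum()         # 2nd central moment: pressure
    q = 0.5 * m * (f * w**3 * dv).sum()   # 3rd central moment: heat flux
    return n, u, p, q

f_max = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)   # Maxwellian
_, _, _, q_max = moments(f_max)                  # heat flux ~ 0

f_skew = f_max * (1 + 0.3 * v)                   # crude skewed example
_, _, _, q_skew = moments(f_skew)                # heat flux no longer ~ 0
```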
If you’re interested in looking at the maths the full fluid equations are described here http://farside.ph.utexas.edu/teaching/plasma/lectures/node32.html
Congratulations to all the new official board members! Together, with a big cheer for Rezwan 🙂
I know it's listed on the main website, but I thought I'd list them again here with links to their main and forum profiles.
Although some are obvious, I suspect many members & visitors are not aware of who's who when it comes to tying up real names with userid aliases.
Arif Basha – Arif
Henning Burdack – Henning
Ben Ferris – benf
Ignas Galvelis – Breakable
T. Joseph Nkansah-Mahaney – ProtoJ
Linda Schade – LindaSchade
John Shellito – JShell
For all those that weren't in the meeting earlier to hear them in person, may I ask that the new board, and those who haven't posted on here before, introduce themselves to the rest of the forum.
It’s good that stars burn their fuel so slowly. This comes from the fact that the fusion cross-section for normal proton-proton fusion is around 20 orders of magnitude smaller than for D-T (since it has to rely on the weak nuclear force, rather than the strong force).
Trying to contain D-T fusion is more like trying to contain a Supernova!
Building a tokamak fusion power plant is not a scientific problem, but an engineering one.
The science works, and it says that for a stable plasma with manageable temperature & density gradients at the edge, the device needs to be very big. ITER is just about the smallest size that can achieve Q>1.
It should not surprise us that the first 'candle' of fusion needs to operate on a scale larger than our everyday human interactions. It's still a lot smaller than the Sun.
Yes, I do have access via my institution.
This has nothing to do with cavitation, acoustics, shocks, etc.; pressure & temperature are meaningless on this energy/time-scale.
The plasma ‘wake’ is the interaction with the huge EM fields of the laser pulse – not a fluid-like response, you have to think in terms of kinetics of individual ions/electrons.
Laser wakefield acceleration is nothing new, see: http://en.wikipedia.org/wiki/Plasma_acceleration
Similarly there have been improvements in Free Electron Lasers in the last 10 years or so.
This experiment was not about the energy of the accelerated electrons (700MeV in this case, compared to >1GeV in others), or the intensity (number of photons) of the resulting X-ray beam.
It was that the spectrum of the betatron excited beam of high energy photons peaked around 150keV rather than the ~10keV typical of previous experiments. Thus the X-rays are now in the ‘gamma-ray’ portion of the EM-spectrum rather than the relatively soft X-rays. (NB. traditionally the gamma terminology is reserved for photons emitted by nuclei rather than electrons)
Although it talked about gamma rays the Science Daily write-up failed to mention the key figure of 150keV, and instead went on about the intensity – which it then quoted incorrectly.
High intensity is needed, but it is the combination of high photon energy and short pulse duration that is key for imaging proteins etc., and that makes this potentially a big step forward towards having a lab-sized machine rather than the huge synchrotron accelerators.
As I posted on FB
Apart from failing to mention the photon energy range, the article makes an obvious error in quoting:
“The peak brilliance of the gamma rays was measured to be greater than 1023 photons per second, per square milliradian, per square millimetre, per 0.1% bandwidth.”
Whereas the Nature paper abstract actually says
“10^8 gamma-ray photons, with spectra peaking between 20 and 150keV, and a peak brilliance >10^23photons s^−1mrad^−2mm^−2 per 0.1% bandwidth, are measured for 700MeV beams, with 10^7 photons emitted between 1 and 7MeV”
Only 20 orders of magnitude out!
It should have been obvious to any science journalist that 1023 photons per second is not very intense. Maybe it should be written with a superscript, or if they can't manage that, then at least in 10^23 or 1e23 notation.
Is it too much to ask that proof reading should involve checking the numbers as well as the spelling, grammar and punctuation?
I think delt0r and I were trying to be polite about the idea.
Confining a charged ion beam in a large aspect ratio ring is one thing. But once you introduce a second beam at a different velocity it will be unstable.
http://en.wikipedia.org/wiki/Two-stream_instability
Here is an animation – note the y axis is velocity, so you start with the beams being two lines at different y positions. Then through wave-particle interactions it goes unstable and you quickly get a mix in velocity space (this doesn’t require any large angle collisions).
http://www.youtube.com/watch?v=ZzJFvYVVWPw
This, with collisions, will settle down to a Maxwellian thermal equilibrium.
Also, if your two beams had a deltaE of 30keV this would be the approximate temperature they relax to. Given your ridiculously high estimate of density of 10^23m^-3 this would equate to a pressure of 4.7*10^8 Pa or 4700 atmospheres. I’ll leave you to figure out the magnetic field you need to contain that at a beta of less than 1.
Even if the pressure was highly anisotropic so the radial part was only a few percent of the total, the required toroidal field would be impossible to create on that scale.
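For what it's worth, here is the arithmetic I was alluding to, sketched out using the density and temperature figures above (beta = 1 means the plasma pressure equals the magnetic pressure B^2/2mu0):

```python
import math

mu0 = 4e-7 * math.pi      # vacuum permeability, H/m
k_eV = 1.602e-19          # J per eV

n_i = 1e23                # the density quoted above, m^-3
T_eV = 30e3               # ~30 keV relaxed temperature

p = n_i * T_eV * k_eV             # ~4.8e8 Pa, i.e. ~4700 atm
B_beta1 = math.sqrt(2 * mu0 * p)  # field needed for beta = 1: ~35 T

# Even if only ~5% of the pressure acts radially:
B_aniso = math.sqrt(2 * mu0 * 0.05 * p)   # still ~8 T
```

Around 35 T for beta = 1, and still roughly 8 T even with the generous anisotropy assumption, over the whole ring.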
A couple of points…
Although you mention briefly ion scattering, I don’t think you appreciate the stability (or lack thereof) of the conditions you are trying to achieve.
You will get wave-particle interactions coupling to all sorts of kink and ballooning modes. The plasma instabilities will grow on a much faster time-scale than can be controlled, and the plasma will hit the walls within a microsecond.
A linear design would have to be hundreds of kilometres long, and a ‘cyclic’ design is essentially a Tokamak, ie governed by all the same stability criteria of q-profile, Troyon limit etc.
Also the acceleration of ion/electron beams to high energies is not efficient enough – you will never get the Q number you claim.
I also note from the report: http://www.gpo.gov/fdsys/pkg/CRPT-112srpt75/pdf/CRPT-112srpt75.pdf
Some other interesting points…
Page 84:
Plutonium-238 Production Restart Project.—The Committee provides no funding for the Plutonium-238 Production Restart project.
ie. No new space missions will be able to have radioisotope thermoelectric generators (RTGs) as used on missions like Voyager and Cassini
Page 96:
The Committee recommends no funding for the nuclear waste disposal program.
The Committee reiterates its support for the $8,000,000,000 in loan guarantee authority authorized in Public Law 110–161 for Advanced Fossil Energy Projects.
Page 99:
The Committee recommends $1,804,882,000 for directed stockpile work.
Page 101:
Science Campaign.—The Committee recommends $347,055,000. Within these funds, at least $44,000,000 shall be used for plutonium and other physics experiments at Sandia’s Z facility. The Committee commends Sandia National Laboratory for successfully and safely performing two plutonium experiments at the refurbished Z facility. The Committee understands that these experiments yielded fundamentally new and surprising data about the behavior of plutonium at high pressure and this new data has been one of the most valuable contributions to the stockpile stewardship program. The Committee continues to strongly support the weapons physics activities at Sandia’s Z facility that are critical to sustaining a safe, secure, and effective nuclear stockpile.
I agree with Tulse. NIF is a military project. If they do succeed in getting ignition there is growing pressure from within the laser fusion community that, at that point, the civilian research should be split off and run separately. Only then will the competitiveness of LIFE be able to be compared on a level playing field with MCF or ICCs.
I suspect that, as an energy source, laser-initiated fusion will only be viable for the foreseeable future as part of a fission-fusion hybrid design. With that roadmap for them in mind, the ICC community, such as Focus Fusion, needs to plan its campaign accordingly, emphasising the obvious advantages.
Also, given there will be a substantial new build of fission plants around the world over the next 20 years, it should not be an either/or with regard to funding DPF vs hybrid designs, since hybrids have real potential for transmuting waste, even if their electricity output will be more expensive per kWh.
I’d love to help, but I think I’m in the same boat as Reece. My PhD is at the stage where I really need to be concentrating on that. Also I’ve volunteered before for other things and never delivered so I don’t want to let you down again, Eric.
However, having said that, when the time comes I'm eager to make a contribution. My fluid plasma code is written in C, with final analysis/plotting done mostly in Matlab, but I have used Java before.
Henning is correct in that the language skill is less important than the maths/stats regarding the numerical methods to extract information from the data, carrying through errors to give robust figures suitable for scientific interpretation and publication.
As for data storage models I assume you’re building something akin to the Integrated Data Access Management (IDAM) system used on MAST and JET.
http://www.efda-itm.eu/~coelho/efda/EFDA/ProtectedArea/protected_docs/ITMTFCUlhamMeeting/ITM IDAM2.ppt
http://linkinghub.elsevier.com/retrieve/pii/S0920379607005595
It is basically a system whereby the data from each shot is fed automatically from all the diagnostics into a central SQL database. The raw datasets can be accessed via a simple set of APIs for C,Java,Matlab etc. Any analysis code then can, by including the relevant header, extract raw data for any shot, returning the analysed data to the database.
Other higher level codes can then take the processed data from many diagnostics and combine them to perform more complex analysis. The final stage being comparing analysed data from many shots to uncover trends, scaling etc.
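As a toy illustration of that shot/signal-keyed storage pattern (this is NOT the real IDAM API; the table layout and function names here are entirely made up for the sketch):

```python
import sqlite3

def store_signal(db, shot, signal, data):
    """Write one diagnostic's raw trace for a shot into the central DB."""
    db.execute(
        "INSERT INTO signals (shot, name, data) VALUES (?, ?, ?)",
        (shot, signal, ",".join(map(str, data))),
    )

def get_signal(db, shot, signal):
    """Any analysis code pulls data back by (shot, signal) key."""
    row = db.execute(
        "SELECT data FROM signals WHERE shot = ? AND name = ?",
        (shot, signal),
    ).fetchone()
    return [float(x) for x in row[0].split(",")]

# In-memory stand-in for the central database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE signals (shot INTEGER, name TEXT, data TEXT)")

store_signal(db, 18409, "neutron_yield", [1.0, 2.5, 3.2])
trace = get_signal(db, 18409, "neutron_yield")
```

The point is only the access pattern: every diagnostic writes against the same (shot, signal) keys, so any downstream analysis code can retrieve and combine data without knowing which instrument produced it.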
I don’t pretend to know anything about his “Curvature Cosmology”, however I would point out that although the journal he published in claims to be peer reviewed, a quick google on it seems to cast doubt on the professionalism of the operation.
From a reddit thread:
Here is an account from someone who was asked to act as a reviewer for an article:
http://www.sentientdevelopments.com/2009/09/explanation-for-lifes-origins-that.html
I know controversial theories are less likely to get published in mainstream journals like IOP's "Journal of Cosmology and Astroparticle Physics" or Elsevier's "Physics Letters B", but I think you'd have to be at least a bit guarded that the peer review process will not have been as robust in a 'journal' like this.
Henning wrote: Simulation isn’t as advanced as you may think. There are particle-in-cell simulations but with a density much too low for our purposes. Then there is a parametric calculation that tells you what you might expect with different diameters and pressures and so on.
As third solution there is the LPP simulation, which simulates the plasma as a fluid (with extra quirks), because you cannot handle individual particles anymore at this density. It only simulates a single plasma filament. And it isn’t finished yet.
None of them include the shape of the DPF, except being a cylinder, or a sub-part of it.
There are plenty of PIC codes that can handle the modest densities of a DPF plasmoid (ie around solid density). Inertial confinement simulations have to cope with densities 1000 times higher.
No one simulation would be able to cope with the full range of scales involved. I would think at least three simulations would be needed, each feeding into the next. First a cylindrical resistive MHD code with ionisation, coupled to the electrodes & circuit response. Then a two-fluid code to handle the filamentation and pinch. Followed by a PIC code for the plasmoid formation.
Once you get all that working for a simple deuterium plasma, only then can you move on to adding the p+B11 ion species to the fluid stages, and including the strong magnetic field effect and radiative cooling in the PIC code.
I agree simulation of plasmas in general is not very advanced. Based on the current state-of-the-art codes for tokamaks & inertial fusion, and how long they have taken to develop, I'd say it would take at least 10 years' work by several reasonable-size research groups for DPF models to get to even a comparable level of confidence.
I always like to bring up the comparison with steam engines – they were developed by people such as Watt in the 1770s and improved by Trevithick in 1801, well before there was a good understanding of thermodynamics by the likes of Carnot in 1824, and certainly well before there was any way of modelling or theoretically calculating the efficiency of an engine design.
By experimenting we may crack fusion well before we know what’s going on at the microscopic scale. The trouble is that the scientific community has got used to building ever larger experiments on the scale of LHC, NIF, ITER, JWST etc. where the money men want hard evidence that projects on that scale will work before they part with the cash. We need more of the trial & error approach.
Other shot numbers in the pdf are much higher, such as #18409, so even if they didn't start from 1 they must have done a fair few.
I suspect however good the theories and computer models get, in the end trying out lots of ideas through mutation & selection will be a faster way of improving the design.
Something like this: http://www.youtube.com/watch?v=END3r3ehcDw
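In code, the mutation & selection loop is almost trivially simple. Here's a minimal (1+1)-style sketch with a toy objective function standing in for measured device performance (purely illustrative, nothing to do with any real DPF figure of merit):

```python
import random

random.seed(1)

def fitness(design):
    """Toy objective: peaks at design = [3, 3, 3]. A stand-in for
    whatever you'd actually measure (yield, efficiency, ...)."""
    return -sum((x - 3.0) ** 2 for x in design)

def mutate(design, step=0.5):
    """Random perturbation of each design parameter."""
    return [x + random.gauss(0, step) for x in design]

# Mutation & selection: keep the better of parent vs mutant each generation.
best = [0.0, 0.0, 0.0]
for _ in range(500):
    child = mutate(best)
    if fitness(child) > fitness(best):
        best = child

# best drifts toward the optimum without the loop ever needing a model
# of *why* one design beats another
```

That last comment is the attraction: selection only needs a comparison between two builds, not an understanding of the underlying plasma physics.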