dash wrote:
Indeed… however, if the predictions of ultra-low cost per unit and per kWh hold true… you can bet that millions of them will be built, and then much more overall waste heat will be produced as we consume orders of magnitude more power in our society due to the near-free cost of it. :) That old supply-and-demand curve thing.
No one would ever need more than a megawatt.
Just as no one would ever need more than 640K bytes of memory in their computer.
-Dave
ROFL absolutely!
Brian H wrote:
That's why I suggested doing the math. Get the cost of the "add-on", and you will discover that the price per kWh is HUGE compared to ¼¢.
Wouldn’t waste heat be a problem at some point? If 20% of the energy comes out as useful work, and 80% is lost as heat (I’m just making up numbers), as civilization expands, won’t that waste heat become a serious issue?
Nobody (including the Gore camp) has made any noise about all the 'thermal pollution' that our current power production (and transportation) technology creates. And for good reason… it really doesn't matter given the amount of solar thermal input we receive by comparison. The only time such thermal pollution seems to make any impact is when it is dumped into a localized, heat-sensitive area such as a river, where temperature changes can cause fish kills or contribute to algae blooms.
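To put numbers on that claim, here's a back-of-envelope comparison in Python. The solar constant and Earth radius are standard figures; the ~16 TW for total human primary power use is an approximate round number assumed for illustration, not something from this thread:

```python
import math

# Rough, order-of-magnitude comparison of humanity's total heat
# release against the solar power Earth intercepts.
SOLAR_CONSTANT = 1361.0        # W/m^2 at top of atmosphere
EARTH_RADIUS = 6.371e6         # m
HUMAN_PRIMARY_POWER = 1.6e13   # W (~16 TW, approximate global figure)

# Earth intercepts sunlight over its cross-sectional disc, pi * R^2.
solar_input = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2
print(f"Solar input:   {solar_input:.2e} W")   # ~1.7e17 W
print(f"Human release: {HUMAN_PRIMARY_POWER:.2e} W")
print(f"Human heat is ~1 part in {solar_input / HUMAN_PRIMARY_POWER:,.0f}")
```

At roughly one part in ten thousand, humanity's heat release really is lost in the noise on a global scale, which is the point being made above.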
But yeah, if you had a couple of gigawatts of waste heat flux concentrated in the area of a city block… living on said city block might get a little uncomfortable in the middle of summer when there was no cool breeze to help out. :) Fortunately, such corner-case scenarios are easily avoided through just a tiny amount of planning by spreading things out a tad… that's just one of many reasons you don't find multi-gigawatt power plants in the middle of commercial and residential areas.
Actually, FF comes off a lot better than that. Its inherent efficiency means that it produces much less “waste” heat than any other way of generating the same amount of power. Therefore, every GW produced by FF actually REDUCES the net heat “pollution”.
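A quick sketch of that arithmetic: waste heat per unit of delivered electricity falls steeply with conversion efficiency. The ~33% figure is typical of conventional steam-cycle plants; the 80% direct-conversion efficiency for FF is purely an assumed illustration, not a published LPP number:

```python
# Waste heat released per GW of delivered electricity:
# waste = P_electric * (1/eta - 1), where eta is conversion efficiency.
def waste_heat_gw(p_electric_gw: float, efficiency: float) -> float:
    return p_electric_gw * (1.0 / efficiency - 1.0)

for name, eta in [("steam-cycle plant", 0.33),
                  ("FF direct conversion (assumed)", 0.80)]:
    print(f"{name}: {waste_heat_gw(1.0, eta):.2f} GW waste per GW electric")
# -> ~2.0 GW of waste heat vs. ~0.25 GW for the same electrical output.
```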
Phil's Dad wrote: In my instinctive desire to please all the people all the time, I have been thinking about how to deal with the "waste" heat in a way that would satisfy both Brian H and Aeronaut.
Then I remembered that University of Utah physicist Orest Symko demonstrated a couple of years ago how to convert heat via sound into electricity. Symko foresaw using the devices to generate electricity from heat that is currently released from nuclear power plant cooling towers.
It's only an example of the principle, which is to convert the heat directly into electricity rather than resorting to a steam engine. Can't hurt the energy I/O balance. What other methods are out there to convert heat into electricity in one or two inexpensive steps?
Stirling engines are another method. Organic Rankine cycle turbines are another option for cases where the temperature of the waste heat isn't high enough to make good-quality dry steam suitable for a traditional Rankine steam turbine. Thermoelectric technology is constantly improving, and might be a contender for low/zero-maintenance electrical generation from waste heat in a few years.
With all that said, however, in many regions using the heat directly in industrial applications as well as residential ones (space heating, hot water for showering/cleaning, etc.) is probably just as valuable as trying to make electricity out of it.
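One reason direct use of the heat often wins is the Carnot ceiling on low-grade sources. A minimal sketch, assuming a 90 °C source and a 25 °C sink (illustrative temperatures, not anyone's published figures):

```python
# Carnot limit on converting heat to work: eta_max = 1 - Tc/Th (kelvin).
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

t_hot = 90.0 + 273.15    # assumed low-grade waste-heat source
t_cold = 25.0 + 273.15   # assumed ambient sink
print(f"Carnot limit: {carnot_efficiency(t_hot, t_cold):.1%}")  # ~17.9%
# A real ORC, Stirling, or thermoacoustic device might capture only a
# fraction of that, while direct space/water heating uses nearly all of it.
```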
Ah, OK… it's a matter of a massive increase in efficiency from one process to the other. That'll work, and for all of our sakes I hope your calcs are right! :)
As a related aside… how are the simulations going? Were you able to get together the software for GPU-based computation? I'm sure my fellow computer geeks would appreciate a quick article sometime on the technology behind the simulations (if/when you can spare a moment). :)
Lerner wrote: The new LPP machine will have a maximum electric input of close to 100 kJ. Our calculations indicate that this is what will be needed to get Q>1. A generator would be similar in energy, but would pulse much more rapidly, about 500 times a second, compared with about 6 per hour for the experimental device.
DianaHitech, which was located in NJ, unfortunately went out of business and the device no longer exists. It was never fully built, so I believe the peak current obtained was about 0.9 MA. We've had several discussions with Dr. Brzosko, and the cross arrangement of our device was inspired by his.
By the way, in a generator, some of the electric discharge energy that is not actually absorbed by the plasma can be recaptured by well-designed circuitry, so the energy that needs to be captured from the plasma for break-even is less than 100 kJ.
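As a quick aside on scale, taking the quoted figures at face value (100 kJ per pulse, 500 pulses/s for a generator versus roughly 6 pulses/hr for the experiment), the implied average circulating power works out as follows; this is just arithmetic on the numbers above, not additional LPP data:

```python
# Average electrical input implied by energy-per-pulse x pulse rate.
PULSE_ENERGY_J = 100e3   # ~100 kJ per pulse, from the post above

for label, pulses_per_second in [("experimental device", 6 / 3600.0),
                                 ("generator", 500.0)]:
    avg_power_w = PULSE_ENERGY_J * pulses_per_second
    print(f"{label}: {avg_power_w:,.0f} W average input")
# -> ~167 W for the lab device vs. ~50 MW circulating in a generator,
#    which is why recapturing unabsorbed discharge energy matters so much.
```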
First, thanks for the direct response… but it really doesn't answer the concerns regarding the huge discrepancies between your numbers and Dr. Brzosko's.
That "some of the electric discharge energy" you mention seems like a LOT when you are talking about the difference between DianaHitech's calculated 4 GJ and your 100 kJ. Unless there's a volumetric/mass scale issue, as I alluded to in my prior post, it seems that *someone's* calcs are off by a factor of tens of thousands, or some vital background information has been omitted.
[stream of consciousness]
Perhaps the issue could be cleared up if the "W" in those graphs were better defined. Is that the energy that must be input by the electric arc each pulse, or is it the net total after some energy has been released from fusion events or other input not considered in your calculations? How could it conceivably even build to 4 GJ if it has to be delivered electrically in a matter of nanoseconds? Why do they seem to need 4 GJ in *their* field while you only need 100 kJ? Is that 4 GJ perhaps built up over a much longer period of time with "pre-heating" prior to initiating the electrically driven plasma pulses, and thus assumed to be a sort of prior condition in your calcs versus a closed-system calc in theirs?
[/stream]
Thanks in advance for contributing to my nascent education in plasma physics. :)
I think you have to consider the context here. Those numbers were extrapolated from the tests by Brzosko and team with that particular machine. The energy needed to induce a fusion event is proportional to the volume of the reaction chamber (and ultimately the mass contained inside the plasmoid, if you want to be technical): heating/compressing a large mass of gas to a given temperature and density requires more energy than heating/compressing a smaller mass to the same conditions. I think this is where the numbers diverge from LPP's, i.e. I believe the LPP DPF assumes that a much smaller mass will be heated per pulse than was demonstrated with Brzosko's setup. That's just a semi-educated guess on my part, however… and I would very much be interested in hearing an "official" explanation. :) A rough sense of the scale involved is sketched below.
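Just to make the size of the gap explicit, the ratio between the two quoted energies is easy to compute; the mass-scaling interpretation in the comment is the semi-educated guess above, not an official explanation:

```python
# The two energy figures quoted in this thread.
BRZOSKO_ENERGY_J = 4e9   # 4 GJ (DianaHitech's calculated requirement)
LPP_ENERGY_J = 100e3     # 100 kJ (LPP's stated maximum electric input)

ratio = BRZOSKO_ENERGY_J / LPP_ENERGY_J
print(f"Energy ratio: {ratio:,.0f}x")   # 40,000x
# If input energy scales with the plasma mass heated per pulse, LPP's
# design would have to heat roughly 40,000x less mass per shot.
```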
Breakable wrote: Eric mentioned that his simulation cannot be parallelised. I am guessing INCITE can only be implemented on a lot of parallel computer cores, so there is no speed-up from running a serial task there, because you could use just one computer core. It's better to run on a GPU, which has more serial power. Of course, don't forget that most GPUs are also implemented using several cores in parallel, so you should choose one that has the fastest core, instead of the most teraflops.
While I don't want to speak for Dr. Lerner, I think you've misinterpreted his statement. He said nothing about the problem not being highly parallel; rather, he mentioned that it is the type of problem that doesn't easily lend itself to being farmed out to large numbers of unlinked processors, as in the folding@home or distributed.net projects (which have the luxury of divvying up work to be completed asynchronously, since the results from one data set mostly do not depend on another). That, in itself, says nothing about the parallel nature of the problem, only that the steps involved in the calculation are closely related to one another and can't be done asynchronously.

And that makes perfect sense: I would imagine the calculations are trying to keep track of a bunch of particles and force lines, and every step changes the environment for all of them. Thus, this is the quintessential parallel problem insofar as all the calculations must truly be parallel in a strict sense: all the forces, particle vectors, etc. must be calculated and the results placed into a multi-dimensional array before moving on to the next time slice, which will use that array as the initial conditions for the next turn of the crank. It would seem that the highly parallel nature of GPU architecture would be very well suited, and TFLOPS very much matters. :) That said, ease of programming is probably the biggest variable… I've heard that CUDA is much more "programmer-friendly" than ATi's CTM API, but I have no first-hand experience programming either of them.
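A minimal toy sketch of that structure in Python/NumPy: the per-particle updates within a timestep are perfectly data-parallel, but each step must fully complete before the next begins, which is exactly what rules out @home-style asynchronous distribution. The force law and all names here are made up for illustration; this is not LPP's simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.standard_normal((1000, 3))   # toy particle positions
vel = np.zeros((1000, 3))              # toy particle velocities
dt = 1e-3

for step in range(100):
    # Data-parallel phase: every particle's force can be computed
    # concurrently (on a GPU, one thread per particle).
    forces = -pos                      # toy linear restoring force
    vel += forces * dt
    pos += vel * dt
    # Implicit barrier: step n+1 reads the arrays step n just wrote,
    # so no worker may run ahead; the steps are serial even though
    # the work inside each step is massively parallel.
```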
I look forward to learning more details whenever Dr. Lerner has an opportunity to write something up.
Brian H wrote:
You evidently didn’t do your reading assignment.
Your data sources are compromised, and the projections flawed. Read the paper. :coolmad:
Back at ya. My data sources are people living in the Arctic who see the permafrost melting and ice sheets disappearing, plus NASA's own gravimetric satellite data (recently published in Discover magazine, if you care to look at it). I would offer that your single reference paper is flawed in that respect. But if you want to believe that everything's hunky-dory, you're certainly entitled to your opinion (facts notwithstanding).
Brian H wrote:
The best solution, of course, is to simply stop spewing greenhouse gases into the atmosphere and let the earth heal on its own through natural mechanisms (assuming we haven't reached a tipping point in climate change already).
Well, this will happen in the end no matter what people do, even if no one is present to observe it.
I think you Sad Sacks need some bucking-up. Read this: The Past and Future of Climate Change, and then report back! There will be a short quiz. :)
I'm not worried about "saving the earth"; I'm worried about "saving the humans". Climate change is already quite evident in the arctic regions, and recent events in Antarctica with the edge "skirt" of the ice shelf are alarming. Gravimetric readings show a large movement of mass away from the poles, indicating melting ice (likely a major culprit in making temperature readings seem relatively stable, as the heat of fusion of ice absorbs part of the excess global thermal load, for now). Satellite measurements indicate an accelerating rise in sea levels, when a decline would have been expected once the freshwater reservoirs built over the last 50 years are taken into account. Yes, the earth will cycle back to "normal" at some point, but I would prefer that it start along that path before we're all dead from drowning, starving, or some other misfortune of our own making. I don't consider that viewpoint alarmist, but rather a reasonable request. :)
Breakable wrote: Seems to be quite extensive.
One issue I did not see addressed is that sequestering CO2 in the ocean would make it more acidic.
A very valid concern… and one that is avoided in the ocean-seeding method, since the CO2 is locked away in the form of lime, versus the "pure" CO2 sequestration plans that pump the gas directly into deep ocean waters. The best solution, of course, is to simply stop spewing greenhouse gases into the atmosphere and let the earth heal on its own through natural mechanisms (assuming we haven't reached a tipping point in climate change already).
Breakable wrote: Won't seeding oceans with iron generate an algae bloom? That would produce the opposite effect. I don't remember the exact process, but basically it creates oxygen-deprived zones due to decomposition, I think.
http://en.wikipedia.org/wiki/Dead_zone_(ecology)
If you look at the 3rd link I provided, it goes through a lot of the pros/cons of such an endeavor. The short answer is that natural blooms created by wind-borne iron fertilization via dust storms cover areas 10x as large as the proposed seeding areas… and no large-scale "dead zones" have ever been reported after such an event. Obviously, this is something that must be taken into consideration, but with proper scale and planning, I don't think it's an issue, based on previous experimental and observational results.
As I mentioned to Dr. Lerner in email… I personally think the project would be best served by investing some coding resources to add support for either ATI (now AMD) or Nvidia GPU-assisted computation, especially now that both camps can handle double-precision computation in hardware. I'm sure the folding@home folks at Stanford would assist with sharing code and know-how if asked. The amount of floating-point power available in today's video cards (and, by extension, the purpose-made "stream processing" cards built from them) completely dwarfs what is available on general-purpose CPUs. IBM's recent supercomputing entry ("Roadrunner") uses beefed-up Cell processors in conjunction with AMD CPUs for the same reason. Just a thought. :)
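For the curious, the kind of operation stream processors eat for breakfast is a wide, independent per-element computation like SAXPY. Here plain NumPy on a CPU stands in for what a GPU kernel would do across thousands of cores (an illustrative sketch only, not project code):

```python
import numpy as np

n = 10_000_000
a = 2.5
x = np.random.rand(n)
y = np.random.rand(n)

# One independent multiply-add per element: no element depends on any
# other, so the work maps perfectly onto wide data-parallel hardware.
y = a * x + y
```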
Heh… more than enough… I feel all tingly inside now :p But seriously, you have some valid concerns and I wonder about the same things. I think we'll all feel a lot better once the patent info goes public and we can "peek under the hood" to see what the master plan is all about.
Yes, but high-energy X-rays will also destroy the crystalline structure of any normal p-n junction semiconductor used for photovoltaics, so it won't last long enough to be worth it. I'm very hopeful for this technology; I would just like to know more about how it will work and defeat problems like X-ray conversion, implying I…