Phil’s Dad wrote:
There was an experiment in the UK to get the public to run climate models on their home PCs. The big picture was split into chunks they could cope with – probably just a handful of grid points/cells – which they ran for a week. They e-mailed their results to other members of the public to crunch and so on until it was boiled down to something a single machine could handle.
Actually that project used a different method. When you’re running on separate home PCs you can’t exchange data very well, as the bandwidth is too low and the latency far too high. The same is true for the other distributed computing projects like SETI@home or Folding@home. Each machine needs a separate task that is not time sensitive or dependent on any other processor.
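A minimal sketch of that "embarrassingly parallel" pattern, using local processes to stand in for home PCs (the `run_work_unit` function and the work-unit layout are illustrative assumptions, not the project's actual code): each unit carries everything it needs and never talks to another worker, so network latency doesn't matter.

```python
from multiprocessing import Pool

def run_work_unit(work_unit):
    """Each work unit is self-contained: it carries all the inputs it needs
    and never communicates with another worker while it runs."""
    unit_id, parameters = work_unit
    # Stand-in for a week-long model run; a real project would crunch here.
    result = sum(p * p for p in parameters)
    return unit_id, result

if __name__ == "__main__":
    # The server's only job is to hand out units and collect the results later.
    work_units = [(i, [i * 0.1, i * 0.2, i * 0.3]) for i in range(8)]
    with Pool(processes=4) as pool:
        for unit_id, result in pool.map(run_work_unit, work_units):
            print(f"unit {unit_id}: {result:.3f}")
```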
They gave each PC a whole Earth to model, but with a fairly coarse grid. They are, after all, interested in climate, not local weather. Each PC was given slightly different starting conditions and response parameters. For example, in one scenario the oceans may absorb a little more CO2; in another, the rate of deforestation may be a little lower, and so on. By running the climate model forward from each of these scenarios you learn how sensitive it is to variations in each parameter and how hard it can be pushed. So if you can estimate the errors on your starting assumptions, you can extrapolate forward to give you the error on the prediction a hundred years from now.
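A toy sketch of that perturbed-parameter ensemble idea (the "climate model" here is a made-up two-parameter formula, not anything the project actually ran, and the parameter names and uncertainties are assumptions for illustration): perturb the uncertain inputs, run each member forward, and read the spread of the results as the error on the prediction.

```python
import random

def run_toy_climate_model(ocean_uptake, deforestation_rate, years=100):
    """Crude stand-in for a climate model: warming driven by net CO2 build-up.
    Only meant to show the ensemble bookkeeping, not real physics."""
    co2 = 400.0          # ppm, rough present-day starting point
    warming = 0.0        # degrees C above the starting state
    for _ in range(years):
        emissions = 2.5 + deforestation_rate      # ppm per year added
        absorbed = ocean_uptake * co2 * 0.004     # ppm per year removed
        co2 += emissions - absorbed
        warming = 3.0 * (co2 / 400.0 - 1.0)       # assumed sensitivity
    return warming

# Build the ensemble: each "PC" gets slightly perturbed parameters.
random.seed(1)
ensemble = []
for _ in range(1000):
    ocean_uptake = random.gauss(1.0, 0.1)         # assumed ~10% uncertainty
    deforestation = random.gauss(0.5, 0.1)
    ensemble.append(run_toy_climate_model(ocean_uptake, deforestation))

ensemble.sort()
print(f"median warming:   {ensemble[len(ensemble) // 2]:.2f} C")
print(f"5th-95th range:   {ensemble[50]:.2f} to {ensemble[950]:.2f} C")
```

The width of that 5th–95th range is the point of the exercise: it tells you how much the uncertainty in your starting assumptions grows into uncertainty in the hundred-year prediction.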