Thanks for posting that. Deutsch has a very interesting take on things. Initially I was convinced by his talk.
But now I believe there's a problem with his position: it is easy to vary 😉 His position is weak because of the words "easy" and "hard", which are completely subjective notions. The central issue is whether you want a binary notion of truth or a continuous one. Deutsch says the empiricists were wrong and then goes on to tell us why he is (more) right. But that's not entirely fair, because he's actually setting out to do something quite different from what empiricism had in mind. Empirical science only purports to be a method for ultimately falsifying theories (theories can never be proven, only falsified). That claim still stands today. What Deutsch is trying to do here, not very successfully in my opinion, is to compare two theories without falsifying either. That is simply a different thing. It implies a continuous notion of truth, i.e. "my theory is more true than yours", and as such it lies outside the scope of empiricism, which deals in absolutes: a theory remains possible until it is falsified.
That is not to say it isn't interesting to try to order theories by "goodness of explanation". But if you want to do this you need to follow the scientific method and come up with something formal, expressible in the language of mathematics. Some would claim this has actually been done. Conceptually it was already proposed by the 14th-century Franciscan friar William of Ockham (http://en.wikipedia.org/wiki/Occam’s_razor). And you might say that, mathematically, it was worked out in the work of the Russian mathematician Andrey Kolmogorov (http://en.wikipedia.org/wiki/Kolmogorov_complexity).
The impact of Kolmogorov's ideas on algorithmic complexity on the scientific endeavour can be summarized as follows (and I'm quoting one of my own professors here):
Science is, essentially, a form of data compression.
This is to be taken quite literally: if you want to know whether some sequence of numbers is generated by a physical process (with some degree of predictability in it), you can approach the problem as though you were writing a data compression algorithm. The trick is to find a good model for the underlying physical process. As soon as you manage this, you can probably get very good compression on your dataset (by encoding the dataset as a description of your model, the initial conditions, and a small number of corrective bits to account for the noise). When you succeed in doing this (and your model also compresses new datasets generated by the same process), it means you have found a "good" explanation for your data, but this time in a truly quantitative sense (give me another model and we simply compare, on the same dataset, how well each model compresses it). In sum, this makes the Kolmogorov/Occam theory a more scientific explanation of what science does than the Deutsch theory 😉
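This two-part "model plus corrective bits" idea can be sketched in a few lines. The example below is mine, not from the discussion: the generating process, the linear "model" y = 3t, and the use of `zlib` as a crude stand-in for a real description-length computation are all purely illustrative assumptions.

```python
import random
import zlib

random.seed(42)

# A sequence from a (hypothetical) predictable physical process: y = 3t plus
# a tiny amount of noise -- versus a sequence of pure noise in the same range.
process = [3 * t + random.choice([-1, 0, 1]) for t in range(500)]
noise = [random.randrange(0, 1500) for _ in range(500)]

def compressed_size(values):
    # Encode each value as 2 signed bytes and compress; a rough proxy for
    # the length of a description of the data.
    data = b"".join(v.to_bytes(2, "big", signed=True) for v in values)
    return len(zlib.compress(data, 9))

# Two-part code for the process data: state the model (y = 3t, a handful of
# bytes) and then encode only the residuals -- the "corrective bits".
residuals = [y - 3 * t for t, y in enumerate(process)]

print("raw process:", compressed_size(process))
print("model + residuals:", compressed_size(residuals))
print("pure noise:", compressed_size(noise))
```

The residuals live in {-1, 0, 1} and compress to almost nothing, so the model-based description is far shorter than compressing the raw sequence, while the pure-noise sequence barely compresses at all. Comparing two candidate models then amounts to comparing their total description lengths on the same dataset, which is the quantitative "goodness of explanation" being described above.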
So if Occam’s razor can be reformulated for use by physicists as:
When faced with a choice between two competing theories describing the same physical phenomenon we should always prefer the simpler, more concise one.
Then there should also exist a reformulation/specialization of this adage for use by engineers:
When faced with a choice between two competing devices for obtaining the same physical goal we should always prefer the simpler, more efficient one.
I’m still working on a further specialization for use by plasma physicists pursuing fusion energy… 😉
Yes, and data selection is therefore contentious. Outliers generate new "viewpoints" which are often alien and uncomfortable to received scientific opinion. So new instrumentation often leads to new theories.
I think some physicists pray to St. Occam every Sunday to help slay the Standard Model’s particle zoo. Some even go stringy! 😉