Theory is Dead: Long Live Theory
This “death of theory” has been argued before, and it periodically resurfaces in empiricist-leaning circles that hold all you really need is data. The problem is that, as Kant showed when he reconciled rationalism with empiricism, theory and data are inextricably linked. Contrary to what some die-hard empiricists might argue, we do not believe “1 + 1 = 2” simply because we’ve seen it enough. We invent notions and concepts to think symbolically, and in that realm we set up definitions which we use to prove “1 + 1 = 2” inside the system we’re discussing. We then tie our theories to facts on the ground to make predictions.
Indeed, I’d say the reason AI originally failed in the 1980s is its overreliance on theory to the diminishment of actual data. Purely symbolic systems can only go so far, and only go so fast, especially in discerning very noisy rules like those of spoken language. And now it appears we’re swinging too far the other way – we’ll never need symbolic or formal reasoning systems again, it may be argued; we’ll just throw enough data at the problem. This obviously rubs many of those actually in the machine learning field the wrong way, as it’s precisely those formal methods that allow them to code (type checking, compilers) and prove (mathematics, theorem provers) that their models work within certain parameters.
The argument Cringely seems to be implying is that theory has just been a crutch for us mere humans, and that if we only had enough data, we wouldn’t need it. However, theory does many things, and even in the era of big data, it will continue to do them:
1. It is our only path to actual truth.
One can prove theorems based on assumptions. All empiricism ultimately falls to Hume’s problem of induction.
2. Theory can succinctly describe, in a single equation, many terabytes of data.
Big data means lots of data – but the above fact remains true no matter how cheap data gets. We can always do more when data is married with theory; each unit of processing power will always be more useful when mixing the two than when simply trying to neural-network the whole problem.
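The compression point can be made concrete with a toy sketch (the law, the noise level, and all the numbers here are hypothetical): a hundred thousand noisy observations of a linear relationship collapse into just two fitted coefficients.

```python
import numpy as np

# Hypothetical illustration: data generated by an underlying "law"
# y = 3x + 2, plus measurement noise. The raw points could in principle
# be terabytes; the theory that describes them is two numbers.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100_000)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, size=x.size)

# Least-squares fit of a line: recovers the slope and intercept.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to 3.0 and 2.0
```

Anyone holding those two coefficients (plus a noise estimate) can reconstruct the statistical content of the entire data set without ever storing it.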
3. Theory can communicate ideas.
Linked to the above in terms of compression of data into a single equation, our symbolic language also makes it easy to communicate ideas to one another. This will be true of big data as well – moving around simple equations will always be cheaper and faster than moving around the entirety of data sets on which they are based.
4. Theory can make predictions out of sample.
Big data’s predictions out of sample are always, at best, guesses. Educated guesses, but guesses. And they are, in turn, guesses resting on a few fundamental theoretical assumptions, such as minimizing some measure of error. If we ever run into inputs that are not in our data set, or, alternatively, if we want to backsolve for our inputs given required outputs, this is always easier when we have theories to supplement our data. Regression analysis, for instance, allows for far more theoretical interpretation of results than a cackle of random forests. When we have regression coefficients, we can make many more predictions about our data set using far fewer facts.
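As a sketch of that last point (the model and every number below are hypothetical): once a regression hands us coefficients, we can predict at inputs the data never covered, and we can invert the model to backsolve for the input a required output demands – neither of which a lookup over the raw data can do.

```python
import numpy as np

# Hypothetical data: observed inputs only cover the range [0, 10].
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=1_000)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, size=x.size)

# Fit the theory: a line with two coefficients.
slope, intercept = np.polyfit(x, y, deg=1)

# Out-of-sample prediction: x = 50 never appears in the data,
# but the equation extends there trivially.
y_at_50 = slope * 50 + intercept  # roughly 152

# Backsolving: which input yields a required output of 32?
# A table of raw data cannot answer this; an equation can be inverted.
x_for_32 = (32 - intercept) / slope  # roughly 10
```

A pile of decision trees could also be queried at x = 50, but its answer would be pinned to the edge of the training range; the fitted equation carries the trend forward, for better or worse, exactly as the theory dictates.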
Big data is already changing things, and many of Cringely’s predictions are true. But to say that big data is going to ‘automate science’ away betrays a serious misunderstanding of what the theoretical side of science does, and of how theory serves us.