
The False Philosophy Plaguing AI

Erik J. Larson and The Myth of Artificial Intelligence

Source: Frank Chamaki via Unsplash.

The field of Artificial Intelligence (AI) is no stranger to prophecy. At the Ford Distinguished Lectures in 1960, the economist Herbert Simon declared that within 20 years machines would be capable of performing any task achievable by humans. In 1961, Claude Shannon – the founder of information theory – predicted that science-fiction-style robots would emerge within 15 years. The mathematician I.J. Good conceived of a runaway "intelligence explosion," a process whereby smarter-than-human machines iteratively improve their own intelligence. Writing in 1965, Good predicted that the explosion would arrive before the end of the twentieth century. In 1993, Vernor Vinge dubbed the onset of this explosion "the singularity" and stated that it would arrive within 30 years. Ray Kurzweil later declared a law of history, The Law of Accelerating Returns, which predicts the singularity’s arrival by 2045. More recently, Elon Musk has claimed that superintelligence is less than five years away, and academics from Stephen Hawking to Nick Bostrom have warned us of the dangers of rogue AI.

The hype is not limited to a handful of public figures. Every few years, surveys ask researchers working in the AI field to predict when we’ll achieve artificial general intelligence (AGI) – machines as general-purpose as, and at least as intelligent as, humans. Median estimates from these surveys give a 10% chance of AGI sometime in the 2020s, and a one-in-two chance of AGI between 2035 and 2050. Leading researchers in the field have also made startling predictions. The CEO of OpenAI writes that in the coming decades, computers "will do almost everything, including making new scientific discoveries that will expand our concept of ‘everything’," and a co-founder of Google DeepMind [predicts](https://future.fandom.com/wiki/Scenario:_Shane_Legg) that "Human level AI will be passed in the mid 2020’s."

These predictions have consequences. Some have called the arrival of AGI an existential threat, wondering whether we should halt technological progress to avert catastrophe. Others are pouring millions of dollars of philanthropic funding into averting AI disaster. The Machine Intelligence Research Institute, for example, has received millions in funding for "ensuring smarter-than-human artificial intelligence has a positive impact."

The arguments for the imminent arrival of human-level AI typically appeal to the progress we’ve seen to date in Machine Learning and assume that it will inevitably lead to superintelligence. In other words, make the current models bigger, give them more data, and voilà: AGI. Other arguments simply cite the aforementioned expert surveys as evidence in and of themselves. In his book _The Precipice_, for instance, Toby Ord argues that AGI constitutes an existential threat to humanity (he gives it a 1 in 10 chance of destroying humanity in the next 100 years). Discussing how it will be created, he first cites the number of academic papers published on AI and attendance at AI conferences (both of which have skyrocketed in recent years), and then writes:

[T]he expert community, on average, doesn’t think of AGI as an impossible dream, so much as something that is plausible within a decade and more likely than not within a century. So let’s take this as our starting point in assessing the risks, and consider what would transpire were AGI created. (p. 142)

What makes these researchers so confident that current approaches to AI are on the right track? Or that problem solving in narrow domains differs only in degree, not in kind, from truly general-purpose intelligence? Melanie Mitchell, a professor at the Santa Fe Institute, recently identified the foremost fallacy in AI research: the idea that progress on narrow AI – well-defined tasks in structured environments, such as predicting tumours or playing chess – advances us towards AGI. Quoting Hubert Dreyfus, she notes that this is akin to claiming that monkeys climbing trees is a first step towards landing on the moon. There are no arguments supporting this fallacy, only extrapolations of current trends. But there are arguments against it.

Enter Erik J. Larson, a machine learning engineer arguing against AI orthodoxy. In _The Myth of Artificial Intelligence_, Larson joins the small set of voices protesting that the field of AI is pursuing a path that cannot lead to generalized intelligence. He argues that the current approach is not only based on a fundamental misunderstanding of knowledge creation, but actively impedes progress – both in AI and in other disciplines.

Larson points out that current machine learning models are built on the principle of induction: inferring patterns from specific observations or, more generally, acquiring knowledge from experience. This partially explains the current focus on "big data" – the more observations, the better the model. We feed an algorithm thousands of labelled pictures of cats, or have it play millions of games of chess, and it identifies which relationships among the inputs yield the best prediction accuracy. Some models are faster than others, or more sophisticated in their pattern recognition, but at bottom they’re all doing the same thing: statistical generalization from observations.
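To make that recipe concrete, here is a minimal sketch of supervised learning as induction. It uses scikit-learn’s LogisticRegression as a stand-in for any statistical learner, and the features and labels are invented purely for illustration:

```python
# A minimal sketch of induction in machine learning: a model infers a
# decision rule purely from labelled observations (illustrative data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical observations: two numeric features per example, binary labels.
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression()
model.fit(X_train, y_train)  # statistical generalization from the examples seen

# The model can only exploit regularities present in its training data;
# it has no way of conjecturing an explanation for why those regularities hold.
print(model.predict(np.array([[0.7, 0.3]])))
```

Swap in a deep network and millions of images and the scale changes, but the underlying move – generalizing from observed examples – does not.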

This inductive approach is useful for building tools for specific tasks on well-defined inputs: analyzing satellite imagery, recommending movies, and detecting cancerous cells, for example. But induction is incapable of the general-purpose knowledge creation exemplified by the human mind. Humans develop general theories about the world, often about things of which we’ve had no direct experience. Whereas induction implies that you can only know what you observe, many of our best ideas don’t come from experience. Indeed, if they did, we could never solve novel problems or create novel things. Instead, we explain the interiors of stars, bacteria, and electric fields; we create computers, build cities, and change nature – feats of human creativity and explanation, not mere statistical correlation and prediction. Discussing Copernicus, Larson writes:

Only by first ignoring all the data or reconceptualizing it could Copernicus reject the geocentric model and infer a radical new structure to the solar system. (And note that this raises a question: How would "big data" have helped? The data was all fit to the wrong model.)

In fact, most of science involves the search for theories which explain the observed by the unobserved. We explain apples falling with gravitational fields, mountains with continental drift, disease transmission with germs. Meanwhile, current AI systems are constrained by what they observe, entirely unable to theorize about the unknown.


The confusion caused by induction is nothing new. David Hume was the first to point out its conspicuous logical flaw: no finite number of observations can justify a general principle. No matter how many green leaves we witness, we can never conclude that all leaves are green. This caused an uproar among those who believed that induction was the foundation of human knowledge. Bertrand Russell, for instance, lamented that if Hume’s problem could not be resolved then "there is no intellectual difference between sanity and insanity."

Two philosophers of the 20th century resolved the issue by noticing that humans do not create knowledge by induction: Charles Sanders Peirce and Sir Karl Popper. Both noticed that our knowledge relies on guessing and checking, on unjustified and unjustifiable leaps of intuition, on trial and error. We guess the general, and use observations to refute our guesses. We creatively conjecture how aspects of the world work (that the speed of light is constant, that peanut butter tastes good with green beans), and use criticism and observation to disabuse us of those ideas that are false. We see a brown leaf in the fall and conclude that not all leaves are green.

Bold ideas, unjustified anticipations, and speculative thought, are our only means for interpreting nature: our only organon, our only instrument, for grasping her. And we must hazard them to win our prize. Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the scientific game.

– Karl Popper, _The Logic of Scientific Discovery_

Peirce called the method of guessing and checking "abduction" (although the term is now used in a variety of ways). And we have no good theory of abduction. To have one, we would have to better understand human creativity. In other words, we need a philosophical and scientific revolution that explains abduction before we can possibly generate true artificial intelligence. As long as we keep relying on induction, AI programs will forever be prediction machines, hopelessly limited by the data they are fed.

Larson traces the narrow, puzzle-solving focus of AI back to the founder of the field, Alan Turing:

Turing’s great genius was to clear away theoretical obstacles and objections to the possibility of engineering an autonomous machine, but in so doing he narrowed the scope and definition of intelligence itself. It is no wonder, then, that AI began producing narrow problem-solving applications, and it is still doing so to this day.

Early in his career, Turing took the problem of creativity seriously. He called it "intuition" and wondered how it might be programmed into machines. Gradually, however, Turing came to identify intelligence with rote problem-solving for well-defined problems. And this is precisely the domain in which machines excelled – and still excel today.

The focus on induction is not only hampering progress towards true artificial intelligence; it is beginning to taint other areas of science as well. Larson singles out neuroscience in particular, arguing that many researchers have forgotten the role that theories play in advancing our knowledge, and are hoping that a true understanding of the human brain will be born of simply mapping it more accurately. However, suppose such a map is developed – then what? The map is useful only if we have a theory to test.

To narrow the focus of scientific inquiry to questions of mechanical problem solving and information processing is to forget that the primary role of science is the search for good explanations. We want to know why, not simply make predictions. Data can corroborate or falsify our theories, but the theories give importance to the data, not vice versa. Theories don’t magically emerge from ever-larger datasets. They are creatively conjectured.

_The Myth of Artificial Intelligence_ comes at a time when it is intellectually fashionable to denigrate the human capacity for creativity and knowledge creation. We are continually reminded that we are biased, irrational, prone to misinformation, stubborn, and unreasonable. Larson helps remind us that humans are special. We are capable of leaps of intuition that develop vaccines, of creative insights that better our circumstances, of deep explanations that make sense of the universe. And all without terabytes of training data.

