
Induction

In my mind, the Scotsman David Hume was correct when he argued that our minds often draw conclusions from relatively limited experiences, conclusions that appear correct but are actually far from certain. The mind is an incredibly interesting correlation engine, and I have been observing it as an amateur neuroscientist for quite some time. In my professional capacity, I often deal with the problem of overfitting, and I can say with a great deal of certainty that human beings can be far worse at overfitting than most of the machine learning algorithms I have encountered.
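To make that overfitting claim concrete, here is a minimal sketch (Python with numpy; the line-plus-noise data is made up for illustration) of a model memorizing its observations: a degree-9 polynomial through ten noisy points fits the training data almost perfectly, yet the plain line generalizes better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is a simple line; the ten observations add noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(scale=0.2, size=x_train.size)

# Held-out points, evaluated against the noise-free truth.
x_test = np.linspace(0.0, 1.0, 100)
y_test = 2.0 * x_test

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")
```

The degree-9 fit drives its training error to nearly zero by chasing the noise, which is exactly the behavior I see people reproduce in their own heads with far less data.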

In inductive reasoning, one makes a series of observations and infers a new claim from them. In essence, much of what passes for predictive analytics these days is based on this approach. In his day, Hume objected to the notion that the past predicts the future.

Philosopher Bertrand Russell gave us a very credible example of the problems with induction in his book The Problems of Philosophy. His argument runs roughly as follows:

Domestic animals expect food when they see a person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading where information is hidden from the observer. The person who has fed the turkey in America every day throughout its life will come to the poor creature with a different intent than feeding in the weeks before Thanksgiving. At that moment, when the poor bird loses its head, it becomes obvious to the observer that a more nuanced view of the uniformity of nature would have been useful to the turkey, for it might have chosen to fly the coop, as it were, that week.

My concern with science (or, more accurately, what passes for science these days) has to do with the general approach we are taking in the realm of big data. We are out to prove to ourselves that we can reasonably anticipate future events, with the hope of mastering supply chains, the needs of people, and the like. In essence, we are still living under the tyranny of Frederick Winslow Taylor's quixotic quest for standardization.

Back in the 1890s, Taylor set out to apply statistical methods in the age of the factory to bring forth a science of work. In The Principles of Scientific Management he declared that "In the past the man has been first; in the future the system must be first." Leaving aside the obviously illiberal nature of the statement, it is pretty clear that he felt he could systematically eliminate inefficiency from business by leveraging statistical methods and averages. Essentially, Taylor felt that individuals could be evaluated, sorted, and managed by comparing them within a statistical distribution of their population.

Good luck with that.

Standardization is, to this day, implemented in modern enterprises in a form largely unchanged from Taylor’s earliest proposals. Why? Well, in part, because it works (until it doesn’t).

Now we are engaged in the Big Data revolution (along with AI and robotics) in our seemingly endless quest for greater efficiencies. It is true that we have made great strides at prediction; I am actively working on predictive analytics that benefit my customers all the time. The computational power we can leverage is quite amazing. The sources of data are seemingly unbounded.

Yet I cannot shake the notion that these methods will never let me predict the future completely. I wonder if the executive management teams of the Fortune 500 likewise understand the concern of unknown unknowns. In the end, we run into the problem of induction.

You see, I can use my vast quantities of data to disprove something, but I can never prove something. I can use historical data to show that a testable statement is false, but I can never prove a testable statement true. I can simply declare that such a statement is true absent any evidence to the contrary (evidence which may come in future samples).

Like that bucket of white golf balls, I can never say the bucket is 100% white based on a sampling strategy where I have not observed every data point (every ball in the bucket). When I am done, the best I can say is that in my sample, there were only white golf balls. I cannot say all the golf balls are white. Samples can be greatly insufficient; things may change; we cannot know the future with certainty from historical information.
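A minimal sketch of that trap, with an assumed bucket contrived for illustration: the observer samples 50 balls, will very likely see only white, and yet the generalization "all the balls are white" is false by construction.

```python
import random

random.seed(7)

# Assumed setup: unknown to the observer, one orange ball
# hides among 999 white ones.
bucket = ["white"] * 999 + ["orange"]
random.shuffle(bucket)

# Observe 50 balls, not all 1,000.
sample = random.sample(bucket, 50)
sample_all_white = all(ball == "white" for ball in sample)
bucket_all_white = all(ball == "white" for ball in bucket)

# The honest claim is about the sample; the inductive leap is about the bucket.
print(f"Every sampled ball was white:  {sample_all_white}")   # very likely True
print(f"Every ball in bucket is white: {bucket_all_white}")   # False by construction
```

No amount of white draws proves the universal claim; one orange ball in a future draw falsifies it.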

Kahneman and Tversky brought this out in their work on the representativeness heuristic. In essence, people use shortcuts to arrive at decisions and conclusions. Their famous Linda problem is an excellent example:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Rank the following possibilities by their likelihood:

Linda is a teacher in elementary school.
Linda works in a bookstore and takes yoga classes.
Linda is active in the feminist movement.
Linda is a psychiatric social worker.
Linda is a member of the League of Women Voters.
Linda is a bank teller.
Linda is an insurance salesperson.
Linda is a bank teller and is active in the feminist movement.

Adapted from: Kahneman, Daniel. Thinking, Fast and Slow (pp. 156–157). Farrar, Straus and Giroux. Kindle Edition.

Aside from the obvious datedness of the examples, it is still easy to guess the near-perfect consensus of judgments. Most people think Linda is a very good fit for an active feminist, a fairly good fit for someone who works in a bookstore and takes yoga classes, and a very poor fit for a bank teller or an insurance salesperson. But here is the crux: does Linda look more like a bank teller, or more like a bank teller who is active in the feminist movement?

Most people will agree that Linda fits the idea of a “feminist bank teller” better than she fits our stereotype of bank tellers. The stereotypical bank teller is not a feminist activist, and adding that detail to the description makes for a more coherent story. The story is what triggers our brains. Our brains don’t bother to do the math once the story makes sense. Enter the venerable Venn diagram.

It should be visually clear that the probability that Linda is a feminist bank teller cannot exceed the probability of her being a bank teller. Whenever you specify a possible event in greater detail, you can only lower its probability: the conjunction of two events can never be more probable than either event on its own. This is the fundamental conflict between our intuition (via representativeness) and the mathematics of probability.
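The arithmetic is short enough to spell out. The numbers below are pure assumptions for illustration, not survey data; the point is only the structure of the conjunction rule.

```python
# Illustrative numbers only -- these probabilities are assumptions, not data.
p_teller = 0.05                  # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.30   # assumed P(active feminist | bank teller)

# Conjunction rule: P(A and B) = P(A) * P(B | A), and P(B | A) <= 1,
# so the conjunction can never exceed the single event.
p_feminist_teller = p_teller * p_feminist_given_teller

assert p_feminist_teller <= p_teller
print(f"P(bank teller)          = {p_teller:.3f}")
print(f"P(feminist bank teller) = {p_feminist_teller:.3f}")
```

However coherent the feminist detail makes the story, it multiplies in a factor of at most one.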

As Kahneman and Tversky (1971) wrote: "We submit that people view a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics." This concretely demonstrates the consequence of the inductive fallacy: overconfidence in the ability to infer general properties from observed facts, or, as they put it, "undue confidence in early trends."
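To see why that confidence is undue, here is a quick simulation (a fair coin, with sample sizes and a 70/30 "looks biased" threshold chosen for illustration) of how often small samples look wildly unrepresentative of the population that produced them.

```python
import random

random.seed(1)
TRIALS = 10_000

# Assumed setup: a perfectly fair coin.
for n in (5, 20, 100):
    lopsided = 0
    for _ in range(TRIALS):
        heads = sum(random.random() < 0.5 for _ in range(n))
        # A sample that is 70/30 or worse "looks like" a biased coin.
        if heads >= 0.7 * n or heads <= 0.3 * n:
            lopsided += 1
    print(f"n = {n:3d}: {lopsided / TRIALS:.1%} of samples look biased")
```

Roughly a third of five-flip samples "look biased" even though the coin is fair; the early trend is noise, not signal.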

As many (notably Hume and Popper) have observed, verification is not really possible in science. Verificationism is probably one of the most dangerous trends in human discovery right now. When it comes to scientific discovery, I think we are much better off recognizing the limits of induction and attempting to build systems that take a more human approach: one size fits one.

To me, this highlights the fundamental issue with continuing down Taylor's road, leveraging AI and ML to improve efficiencies. In the end, you will have to constrain human behavior to limit the range of inputs in order to get the predictions right (for all models miss something, and the way we currently improve them is through assumptions and constraints). The pressure will grow on the masters of the big data universe to deliver. In the end, the messiness and asymmetry of the real world is going to push them to make it easier on the machines by curbing the variance in the world.

Beware the dangers (and understand the limitations) of the inductive path, lest the machines treat us like turkeys in the weeks before Thanksgiving ...
