Reality is complex. We, as humans, seek to better understand it. Why? Because knowing how the world works can inform better standards of living, can satisfy socioeconomic interests, and, last but not least, because it’s fun. There’s a thrill to the hunt for answers to our most difficult questions. What pulls us towards the ground? What causes a rich harvest? What makes us sick? How is it that we are here in the first place?

But reality is chaotic, and we’ve had quite some trouble grasping its infinite complexity. Therefore, in pursuit of a better understanding, we set out to build models: simplified descriptions of how the world works. Models describing the movements of the stars. Models describing the unfolding of the weather. Models describing the underlying components of matter. These models are far from perfect, but they manage to provide a decently accurate picture of the real thing. We embraced models because they are manageable: they consist of mathematical equations, numerical tables, clear taxonomies, fundamental principles, and so on. We know how to deal with those.

How did we build them? How did we keep improving them? By making sure that the models provide a description of reality that is as close as possible to the ground truth. A good model predicts that the stars will move as they actually do, that the weather will unfold as it actually will, and so on. Scientists used their intuition to fine-tune models so that they came closer and closer to perfectly describing the world.
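To make that concrete, here is a minimal sketch of what tuning a model against the ground truth amounts to. The free-fall scenario, the data, and the parameter grid below are all invented for illustration: we compare the model’s predictions with observations and keep the parameter value that fits reality best.

```python
import numpy as np

# Toy "ground truth": observed positions of a falling object over time.
t = np.linspace(0, 2, 20)            # seconds
observed = 0.5 * 9.81 * t**2         # meters (idealized observations)

def model(t, g):
    """A simple model of free fall with a single tunable parameter g."""
    return 0.5 * g * t**2

def error(g):
    """Mean squared mismatch between the model and the ground truth."""
    return np.mean((model(t, g) - observed) ** 2)

# Try candidate values of g and keep the one that best matches reality.
candidates = np.linspace(5, 15, 101)
best_g = min(candidates, key=error)
print(f"best-fitting g = {best_g:.2f} m/s^2")  # 9.80, nearest the true 9.81
```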

It worked great. Putting people on the surface of the moon, connecting people over the internet, curing countless diseases. However, we are still clueless about many things, and our models remain a long way from being perfect descriptions of reality.

A Curious Situation

Let me now guide you through a different, but ultimately converging, train of thought. There are many fields in which we managed to build agents that outperform humans at various tasks, albeit under specific conditions. For example: object detection software is better at detecting objects than humans, speech recognition software is better at transcribing speech than humans, and computer vision software is better at diagnosing disease from medical images than humans. How do these agents work? We probably used the scientific method to iteratively propose more and more accurate models of, for example, human speech, right? Wrong. We don’t know how these agents work. We didn’t provide them with a unified theory of human speech. We don’t understand the theories they seem to use. Many people find that confusing, even unsettling.

But if we didn’t incorporate our well-earned domain-specific knowledge into these agents, how do they manage not only to match human performance, but often to surpass it? The answer is that we made them able to learn by themselves. We taught them how to fish, instead of giving them the fish. We created models that are able to optimize themselves, without domain-specific human guidance. They fine-tune themselves based on empirical data, based on the ground truth of reality. It is an ongoing effort grounded in linear algebra, statistics, calculus, engineering, and other fields, but we’re getting there, slowly but surely.
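As a minimal sketch of that learn-by-yourself loop, consider the toy example below. The data and the linear relationship are made up for illustration; the point is that the model starts with zero knowledge of the relationship, and a generic update rule (gradient descent on the prediction error) nudges its parameters toward whatever the empirical data dictates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical data: inputs x and noisy outcomes y. The underlying rule
# (y = 3x + 1 plus noise) is never told to the model.
x = rng.uniform(-1, 1, 200)
y = 3 * x + 1 + rng.normal(0, 0.1, 200)

# A generic model with no domain knowledge: predict y as w*x + b.
w, b = 0.0, 0.0
learning_rate = 0.1

for _ in range(500):
    predictions = w * x + b
    err = predictions - y
    # Gradient of the mean squared error with respect to w and b.
    w -= learning_rate * 2 * np.mean(err * x)
    b -= learning_rate * 2 * np.mean(err)

print(f"learned w = {w:.2f}, b = {b:.2f}")  # close to the hidden 3 and 1
```

No theory of the phenomenon is built in; only the meta-method of reducing the mismatch with the data is.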

That’s pretty neat, isn’t it? We managed to build models that can eventually incorporate very powerful descriptions of reality. What’s the problem? The problem is that we don’t know how they work. We can apply them in spectacular industrial feats, but we don’t know how they do it. We understand how the optimization process works in general, because we built it, but we can’t grasp its result, the final model. There’s no linguist on the planet who understands the full complexity of a speech recognition model after optimization. But, at the same time, we are able to wield its power in practical applications. An interesting situation we find ourselves in.

The Bitter Lesson

An important approach to Artificial Intelligence is based on the creation of models of the human mind. Rich Sutton, a distinguished AI researcher who has been active in the field for several decades, expressed an intriguing observation in a short essay called The Bitter Lesson. In it, he describes the limited success of models whose knowledge was explicitly articulated by humans, compared to models which started with zero domain-specific knowledge and learned by themselves:

The actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

However, until recently, scientific models were created by scientists themselves, with no intermediate entity. Now, in a growing number of fields, there are general models which outperform any human-made alternative, and we didn’t even have to tell them how to do it. What gives? How can we reconcile the respectable track record of human-made models with the outstanding performance of self-improving models? To answer that question, we may have to rethink how we do science.

A New Way?

In the traditional research cycle, scientists start by proposing a research question and a hypothesis, a tentative answer to the initial question. Then, they design a method of testing whether the hypothesis really is the answer to the question. They conduct the tests, analyze the results, and finish with a bit of reflection on the study.

Usually, the hypothesis describes a specific way in which things presumably are, a way in which reality is. Hypotheses may sound like: “Students who eat breakfast will perform better on a math exam than students who do not eat breakfast”, “Drinking sugary drinks daily leads to obesity”, or “Roses watered with liquid Vitamin B grow faster than roses watered with liquid Vitamin E”. The problem is that, from the very beginning of a study, the proposed models are shaped by the researchers themselves. But researchers are only human, and so the models later developed inherit human assumptions and limitations. We have seen that this is not the only way of building powerful models.
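In the traditional cycle, such a hypothesis is then tested against data. A minimal sketch of the breakfast example, with entirely synthetic scores (all numbers invented for illustration), might use a two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical exam scores; synthetic data, for illustration only.
breakfast = rng.normal(75, 10, 50)       # students who ate breakfast
no_breakfast = rng.normal(70, 10, 50)    # students who did not

# The t-test asks: is the observed difference in means plausible
# under the null hypothesis that breakfast has no effect?
t_stat, p_value = stats.ttest_ind(breakfast, no_breakfast)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Note the order of operations: the human-proposed hypothesis comes first, and the data only gets to confirm or reject it.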

How about this: instead of proposing a hypothesis in the very first stage of the study, the researchers limit themselves to just posing the research question. Then, they throw in a general model that is expressive enough to capture the complex dynamics of reality in the context of the problem at hand. Finally, they extract the hypothesis from the optimized model. The model takes on a big chunk of the researcher’s responsibilities, but the latter doesn’t lose her job, because she can now move her efforts to interpreting the optimized model, rather than building it from the ground up. The student becomes the master.
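Here is a sketch of what that could look like, under heavy simplifying assumptions (synthetic data, invented factor names, and a random forest standing in for the “general model”): the researcher poses only the question of which factors drive exam scores, fits an expressive model, and then reads a candidate hypothesis out of it.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic study data: four candidate factors, relationships unknown
# to the researcher. The hidden ground truth rewards the first two.
factors = ["breakfast", "sleep", "screen_time", "shoe_size"]
X = rng.uniform(0, 1, (500, 4))
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 500)

# Step 1: no hypothesis, just a model expressive enough for the problem.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Step 2: extract a hypothesis from the optimized model.
ranked = sorted(zip(factors, model.feature_importances_),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")
# High importance for "breakfast" and "sleep" suggests a hypothesis
# worth articulating and testing in a follow-up study.
```

The hypothesis now comes out of the optimized model rather than out of the researcher’s head; her work shifts to interpreting what the model found.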

Implications

It is a humbling idea, moving past the paradigm of humans incrementally improving their models of reality through the scientific method. It means accepting that reality may be a bit too complex for our minds to comprehensively grasp. It means recognizing that algorithms can incorporate a better understanding of reality into models than we can. It sure is a bitter lesson, but it may just be a powerful new way of driving human progress. Instead of trying to simplify the complexity of reality, we fight fire with fire: we throw complex general models at it, models which can get it right even if we don’t understand them.

Is it worth it? It largely depends on what you think the ultimate purpose of science is. If you think that the goal of conducting research is to eventually apply it, you should be excited that we can now use these general models in practical applications. The end justifies the means; just skip the human scientists. If you think that the goal of science is for humanity to gain a deeper understanding of reality, then there’s a bumpy road ahead, because interpreting such complex models is a challenge in itself. If you think that science is just for fun, brace yourselves, because this paradigm shift will surely be one hell of a ride.