
A 2,000-Year-Old Talmud Story and the Big Question About AI and the Human Right to Choose


Nearly 2,000 years before humans even conceived of artificial intelligence, there was a fundamental debate about decision-making, ethics, and the meaning of being human. Surprisingly, that debate, recorded in the Talmud, the central text of Jewish law, quite accurately reflects what humanity is grappling with today, as tech companies talk about building a "superintelligence" capable of making decisions for humans or helping them solve their problems.

At first glance, the big question of AI seems to be technical: how to make AI smarter, stronger, safer, and more aligned with human values. But upon closer examination, we realize the core issue lies not in algorithms or data, but in a very old philosophical question: if there is an entity that knows better than us what is “right,” “good,” and “should be done,” should it decide for us?
 


According to the Talmud, Rabbi Eliezer and Rabbi Yehoshua were eminent scholars of Jewish law. While Eliezer was known for his erudition, his conservatism, and his loyalty to tradition, Yehoshua placed his faith in communal consensus. The two men fiercely debated a question of ritual law: whether a type of earthenware oven, known as the "oven of Akhnai," was ritually clean or unclean, that is, pure or impure according to Jewish law.

The Talmud is the central text of Jewish law.



Rabbi Eliezer was convinced he was right, and to prove it he invoked miracle after miracle: a carob tree uprooting itself, a stream flowing backward, the walls of the academy beginning to collapse. When none of this convinced the other scholars, he resorted to his final tactic: appealing to a voice from heaven to confirm that he was right.

And miraculously, a voice from heaven did descend, declaring Rabbi Eliezer right. But instead of bowing in acceptance, Rabbi Yehoshua stood up and uttered a phrase that has since become famous: "The Torah is not in heaven." His meaning was clear: laws, morals, and ways of life are not to be decided by some transcendent power, however right that power may be. They must be decided by human beings, through debate, consensus, and shared responsibility.

People have long debated humanity's right to choose and to determine its own path.

Ultimately, the majority of scholars sided against Rabbi Eliezer. The story even has a beautiful coda: when asked how God reacted to humanity's defiance, the answer is that He smiled and said, "My children have defeated Me." The allegorical message behind the story is powerful: keeping the power of decision-making in human hands is not a mistake, but essential to human existence.
 


Fast forward 2,000 years, and humanity no longer debates voices from the heavens; instead, it talks about an "AI god," a superintelligence far surpassing humans, capable of solving any problem, from physics and economics to politics and warfare. Sam Altman has spoken of a "magic intelligence in the sky," an intelligence delivered from the cloud, and of "near-limitless intelligence" that humanity could tap to solve its problems. What Altman and the technology companies are aiming for is not just a smarter chatbot, but an entity capable of assisting with, and even making, crucial decisions on behalf of humanity.

AGI is one of the major goals technology companies are pursuing: an intelligence able to help humans solve their problems, or, put another way, to make decisions for them.



From here, the issue of "alignment" becomes the core problem: how can we make AI always do what humans want? But this question obscures a deeper one: even if we could create an AI that is perfectly "good," "ethical," and "altruistic," would it be a good idea to let it decide for us?

Like the voice from heaven in the Talmud, a super-intelligent AI could always be "right" logically, predictively, and even ethically. But if every important decision is left to it, what is the role of humans?
 

Here, modern AI thinkers diverge in very different directions. Eliezer Yudkowsky, often regarded as the archetypal "AI doomer," believes that aligning a superintelligence is possible in principle. For him, it is an extremely difficult technical problem, but ultimately still a technical one. If it were solved, he would be willing to let that superintelligence run society, even making life-or-death decisions, based on what he calls "coherent extrapolated volition": the collective will of humanity if we were all more knowledgeable and more consistent.


On the other hand, many philosophers and researchers warn that this is precisely the danger. Ruth Chang points out that many important ethical choices are hard choices: there is no single right answer. Choosing between motherhood and religious life, between freedom and safety, between sacrifice and compromise: such choices cannot be measured by a common yardstick. Their value lies not in reaching the "right" outcome, but in the fact that a person puts something of themselves into the choice.

AGI, or superintelligence, is philosophically controversial because it may strip humans of their most fundamental rights: to experience life and to determine it for themselves.


Joe Edelman of the Meaning Alignment Institute agrees that a good AI should know how to say "I don't know." But he also acknowledges the dilemma: if AI stays silent on every important decision, what good is it? And if it does not stay silent, what is it taking away?

Yoshua Bengio, one of the world's most influential AI scientists, stands remarkably close to the ancient Rabbi Yehoshua whose name he echoes. He emphasizes that human value comes not only from reason, but also from emotion, empathy, and lived experience. Even a "god-like" intelligence cannot, and should not, decide for us what is worth living for.
 


Even if we set aside the risk of a misaligned AI causing catastrophe, there is another, less-discussed risk: an existential one. If every important decision is optimized for us, if every conflict of values is "resolved" by a superior intelligence, then human judgment, perception, and choice will gradually atrophy.

John Hick calls this "epistemic distance": the distance necessary for human moral development. If God always intervened, humans would never stumble and grow. An AI that always knows the answer could likewise cause humans to lose themselves.

The story in the Talmud doesn't teach that humans are always right, but that humans need the right to be wrong. Debating, disagreeing, choosing, and taking responsibility are how humanity creates meaning in life.

AI can be a powerful tool, even an excellent mentor. But the moment we let it become "a voice from above," the moment we stop choosing and simply follow, then no matter how good AI becomes, we will have lost what makes us human: the right to self-determination.