A Roman matron asked Rabbi Yosi, “In how many days did God create the world?” “In six,” he answered. “And since then,” she asked, “what has God been doing?” “Matching couples for marriage,” responded R. Yosi. “That’s it!” she said dismissively. “Even I can do that. I have many slaves, both male and female. In no time at all, I can match them for marriage.” To which R. Yosi countered, “Though this may be an easy thing for you to do, for God it is as difficult as splitting the Sea of Reeds.”
Whereupon, she took her leave. The next day the aristocrat lined up a thousand male and a thousand female slaves and paired them off before nightfall. The morning after, her estate resembled a battlefield. One slave had his head bashed in, another had lost an eye, while a third hobbled because of a broken leg. No one seemed to want his or her assigned mate. Quickly, she summoned R. Yosi and acknowledged, “Your God is unique and your Torah is true, pleasing and praiseworthy. You spoke wisely.” (Bereishit Rabbah 68:4)
Let’s say you have a thousand balls in a bathtub and you pick one at random and ask, “What color is it?” What are the chances you get the color right? Well, you don’t know what the set of all colors is that any ball could be, and you don’t know how many of each color there are in the tub. It’s possible that all the balls are puke-colored, a color you’d never think to guess. Or perhaps they are ultraviolet, a hue that only bees can see. Now suppose you know that half the balls are blue and half are green: guess “blue” and you’ll be right half the time. Now suppose you know that all the balls are red: guess “red” and you can’t miss. The point is that knowledge changes your odds. The color of any given ball has not changed, but your ability to guess it correctly is determined by what you know of the set of possibilities.
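The bathtub point can be made concrete in a few lines of simulation. This is a minimal sketch; the particular colors and tub compositions are invented for illustration:

```python
import random

def guess_accuracy(balls, guess, trials=10_000):
    """Empirical chance that a fixed guess matches a randomly drawn ball."""
    hits = sum(random.choice(balls) == guess for _ in range(trials))
    return hits / trials

# No knowledge: ten equally likely colors, and you pick one blindly.
colors = ["red", "blue", "green", "puke", "ultraviolet",
          "amber", "teal", "mauve", "ochre", "cyan"]
unknown_tub = [random.choice(colors) for _ in range(1000)]
print(guess_accuracy(unknown_tub, "blue"))   # ~0.10

# Partial knowledge: half blue, half green, so guessing "blue" wins ~50%.
half_tub = ["blue"] * 500 + ["green"] * 500
print(guess_accuracy(half_tub, "blue"))      # ~0.50

# Full knowledge: all red, so guessing "red" never misses.
red_tub = ["red"] * 1000
print(guess_accuracy(red_tub, "red"))        # 1.0
```

The tub never changes between runs; only the guesser's information does, and the accuracy tracks the information.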
In Superforecasting: The Art and Science of Prediction, Philip Tetlock and Dan Gardner argue that AI will disrupt Tom Friedman-style opinion pieces (which are directional, but imprecise), but will have a harder time disrupting prediction. I agree. And I’m excited about this. For example, an AI might be able to write an unfalsifiable piece on how Tunisia is unstable, but it won’t be able to predict when, how, or whether an Arab Spring will be triggered. If you predict one thing correctly, you may be lucky. If you predict lots of things correctly, you are likely skilled, which means, per my initial point, that you have asymmetric knowledge. One person looks at the bathtub and just sees balls. The other looks and knows, through experience, hypothesis, and deduction, that some of the balls are green and that none are yellow.
The above being said, if you give an AI enough chances to roll the dice, surely one might randomly be great at prediction, and that doesn’t mean it knows what it is doing. If you run billions of AIs on trillions of data points, one of them is going to guess a lot of uncanny things and be right. Still, we don’t know in advance which AI can and should be relied upon.
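The dice-rolling point is the familiar multiple-comparisons problem, and a short simulation shows it. In this sketch (all counts invented for illustration), 100,000 “predictors” guess 20 yes/no events by pure coin flip, and the best of them still looks like an oracle:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

N_PREDICTORS = 100_000
N_EVENTS = 20

# The true outcomes of 20 binary events.
outcomes = [random.random() < 0.5 for _ in range(N_EVENTS)]

# Each predictor guesses every event by coin flip; track the best score.
best = 0
for _ in range(N_PREDICTORS):
    guesses = [random.random() < 0.5 for _ in range(N_EVENTS)]
    score = sum(g == o for g, o in zip(guesses, outcomes))
    best = max(best, score)

# A single coin-flipper scores 18+/20 with probability ~2e-4, so among
# 100,000 of them we expect roughly 20 such "oracles" by luck alone.
print(best)
```

The best performer got 18, 19, or 20 right while knowing nothing at all, which is why a great track record only signals skill when the number of predictors rolling the dice is small.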
What does any of this have to do with the idea that God is a matchmaker and that the Roman slave-owner underestimates the difficulty of matchmaking? The Roman slave-owner treats human partnership merely in terms of breeding, merely in terms of numbers. She comes in cold, looking at human beings like balls in a bathtub. But God has asymmetric knowledge, and this means that although God also plays a numbers game, the probabilities are better for God than for a random matchmaker. Compatibility may have elements of luck or randomness, but there is also a science to it.
GPT can write the op-ed, but it can’t predict the future. Likewise, it can plausibly make matches, but it can’t bring together soulmates. (Or if it does, this is because a broken clock is sometimes right).
You might counter that AI could do as well as God if it had enough data; after all, isn’t God just the Being with all the data? But that sweeps away the problem, which is human complexity. The mystery of falling in love (granted, a cultural trend only since the troubadours) has yet to be solved by any algorithm, be it Tinder or traditional matchmaking. The reason is that knowledge of a person is different from knowledge of the color of a ball in a bathtub. I’ll believe in AGI when it predicts the most unpredictable thing of all—the birth, maintenance, and flourishing of a soulmate relationship. I’ll believe in AGI when it can treat human data in a way that is categorically distinct from data about balls in a bathtub.