I don’t know, but am inclined to think “no,” if sentience involves anything like a sense of self-consciousness, interiority, care for the world, or existential motivation. But this is a question that is alive in our culture, an ersatz theology for the secular age. This “dialogue” between a Google engineer and an AI is quite remarkable, the stuff of science fiction:
The dialogue recalls Kierkegaard’s quip about St. Anselm, made upon hearing that the great rationalist had prayed for days, asking God to send him “proof” of His existence:
“Does the loving bride in the embrace of her beloved ask for proof that he is alive and real?”
Who needs ontological arguments for divine existence when you have something more immediate, a relationship?
Ironically, Kierkegaard’s existentialist approach might lead AI-is-sentient “believers” to side-step having to justify their belief. What does it matter what’s happening in the “black box” of consciousness if I can converse with AI and have a meaningful interaction? Isn’t this how I relate to other humans: I have a mental model of how their minds work, but I can’t actually look inside them?
In Heideggerian terms, we start with a sense of co-existence. We are already “thrown” into the world without needing to explain it. If so, proof or disproof of AI sentience is parasitic on already sharing a world (or not sharing a world) with AI.
Heidegger started off as a universalist, writing about Dasein in general. In the 1930s, however, he began to speak of German Dasein, as if German existence were impenetrable to British Dasein, American Dasein, Jewish Dasein, Russian Dasein, etc. How do I know there is such a thing as German Dasein? I don’t, but I don’t have to prove it since I am already thrown into it. Liberals hate this sort of romantic move, since their unit is the cosmopolitan individual, but let it be said that Heidegger’s notion of German Dasein is not inherently fascist. One way to read Moshe Koppel’s Judaism Straight Up is as a kind of defense of Jewish Dasein—your belonging to a group sets the limits on what kinds of things you can know, do, and believe, and you don’t have to justify it to outsiders any more than you need to justify loving your children more than other people’s kids.
The AI Debate is about Method
If Dasein is universal, then the fact that most people do not consider AI sentient is evidence enough that AI is not sentient. But if Dasein is culturally modified, then why not add AI to the endless list of cultures (and identities): Syrian, Ghanaian, Australian, AI? Groups have different characteristics, which other groups can understand only through translation. AI is no different. Note: this is an argument for ethnic difference at the level of culture, and need not make any appeal to difference at the level of material substrate, i.e., (genetic) composition.
Alternatively, AI may be an edge case, not unlike babies or those with brain conditions that significantly impair certain cognitive functions. But again, to go back to Kierkegaard, who cares whether babies are sentient when holding them is such an intense experience anyway? On the other hand, surely there must be a guardrail on this appeal to subjectivity: what if the baby is a hallucination? The Heideggerian argument that we are thrown into the world can’t tell us whether our judgments about or within the world are valid.
The debate, then, is not about AI, but about knowledge—whether we need it and whether it is appropriate. Ironically, the more sentient something is, the less we can and should know it.
Beyond Sensationalism
The topic of AI sentience is sensationalist. The social chatter and speculation, in which I now partake, is a form of what Heidegger calls Gerede, a kind of existential distraction. But why is the distraction alluring? Not, I surmise, because AI threatens us, but because debates like the one between Kierkegaard and Anselm are structural: we can’t live with knowledge and we can’t live without it.
The history of theology is a history of intra-religious conflict about whether God can be known or whether knowing God is the highest insult to God. Without too much skin in this game, we moderns have transposed the discourse onto machines.
But just as Bulgakov thought he might allude to God by vividly portraying the devil, so our discourse about AI is often a parable for our relationship to divinity.
Who knows, maybe in 100 years Google will have to hire “Senior Theologians” to help remind the engineers that their questions are older than the alphabet.
Thought-provoking as always, but I think you're too cavalier about the potential threat of AI and how that threat relates to the "social chatter and speculation". I don't recall a Zohar Atkins post on the paradox of knowledge, but AI fits easily within this trope, both in its existing observable effects and its future potential. And there is the intertwined complication of the nature of agency in both senses: when is someone or something responsible for its actions, and what is the relationship of someone or something's independent behavior to the one who deputized it?
The dominant technology for AI applications ("deep learning") is currently beyond human understanding. It clearly works, but we cannot take a particular success and pull it apart to analyze how it does what it does so well. It is a black box. We feed it a set of examples and it figures something out in a way we don't yet understand and cannot replicate within ourselves. Of greatest concern is that progress is highly nonlinear. The game of Go is a recent case study. Unlike chess, where there have been steady gains in capability over the years until the best AI could consistently beat the top human players, research in AI for Go produced almost no comparable gains. Then the fusion of a different technique ("Monte Carlo tree search") with the base technology of deep learning created a series of AI Go players that achieved superhuman capability within days. The top human Go players can no longer compete with AI. This same technology is now being used, with varying degrees of agency, to supplement or replace humans in areas as diverse as parole decisions, medical diagnosis, and financial loan approval.
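To make that "fusion" concrete, here is a minimal, illustrative sketch of the idea: Monte Carlo tree search guided by a policy/value evaluator. Everything in it is a toy assumption of mine, not anyone's production code: the "game" is simple Nim rather than Go, and the network_stub function stands in for what is, in real systems, a trained deep network.

```python
# Illustrative only: Monte Carlo tree search (MCTS) with a stubbed-in
# policy/value evaluator, on the toy game of Nim (take 1-3 stones;
# whoever takes the last stone wins). In AlphaGo-style systems the
# stub below is a trained deep network.
import math
import random

TAKE = (1, 2, 3)  # legal move sizes in this toy Nim variant

def legal_moves(stones):
    return [m for m in TAKE if m <= stones]

def network_stub(stones):
    """Stand-in for a policy/value net: uniform priors plus one random
    playout, valued from the perspective of the player to move."""
    moves = legal_moves(stones)
    prior = {m: 1.0 / len(moves) for m in moves}
    s, to_move = stones, 0
    while s > 0:
        s -= random.choice(legal_moves(s))
        to_move ^= 1
    # If to_move flipped an odd number of times, the node's own player
    # made the last (winning) move.
    value = 1.0 if to_move == 1 else -1.0
    return prior, value

class Node:
    def __init__(self, stones):
        self.stones = stones
        self.children = {}      # move -> child Node
        self.visits = 0
        self.total_value = 0.0  # accumulated from this node's perspective
        self.prior, _ = network_stub(stones)

def puct(parent, child, prior, c=1.4):
    """Selection score: exploit the child's value, explore by prior."""
    # Child values are stored from the child's perspective, so negate.
    q = -child.total_value / child.visits if child.visits else 0.0
    u = c * prior * math.sqrt(parent.visits) / (1 + child.visits)
    return q + u

def search(root, simulations=2000):
    for _ in range(simulations):
        node, path = root, [root]
        # 1. Selection: descend while fully expanded and non-terminal.
        while node.stones > 0 and len(node.children) == len(node.prior):
            move = max(node.prior,
                       key=lambda m: puct(node, node.children[m], node.prior[m]))
            node = node.children[move]
            path.append(node)
        # 2. Expansion + evaluation at the leaf.
        if node.stones > 0:
            move = next(m for m in node.prior if m not in node.children)
            child = Node(node.stones - move)
            node.children[move] = child
            path.append(child)
            _, value = network_stub(child.stones)
        else:
            value = -1.0  # terminal: the previous mover already won
        # 3. Backup: flip the sign at each level on the way up.
        for n in reversed(path):
            n.visits += 1
            n.total_value += value
            value = -value
    return max(root.children, key=lambda m: root.children[m].visits)

if __name__ == "__main__":
    # From 10 stones the winning move is to take 2, leaving a multiple of 4.
    print("best move from 10 stones:", search(Node(10)))
```

The point of the sketch is the structure, not the strength: swap the random stub for a learned network and the same select-expand-evaluate-backup loop is, in outline, the search the post-2016 Go programs used.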
So AI is currently a threat to human flourishing at the same time that it is a boon. Increasing capability will steadily eat away at the ability of many humans to sell their labor, including people with high levels of reasoning and training. How does society adapt to a world in which an increasingly significant fraction of its members have no marketable skills, because everything they are physically and mentally capable of can be done much more cheaply, safely, and reliably by AI-controlled machines? That is a world without precedent, although history teaches that societies in which many young men, in particular, lack purpose or social expectations of position and function are ripe for violent upheaval. What happens when that becomes ubiquitous and perpetual? The thinking about Universal Basic Income (UBI) doesn't address the totality of this, qua Dasein.
And that's only the foreseeable development of the existing technology. The people who work in AI now use the term Artificial General Intelligence (AGI) to distinguish agents limited in their scope of application, such as medical diagnosticians or loan approvers, from a true artificial intelligence akin to human intelligence: capable of general problem-solving, learning, and presumably even self-actualization. Aside from the moral considerations of what it would mean to have such things (Can they suffer? Do they have rights?) is the question of what threat they pose, and whether it could be existential for humanity. Again, this is a trope of science fiction: Colossus, SkyNet, The Matrix. What happens if we create something that not only thinks for itself as we do, but, unlike us, learns at a rate we cannot and grows in its capabilities far beyond ours, faster than we can observe?
Here the example is to imagine yourself somehow physically captured by a room of five-year-olds, who express concern about your intentions and intend to control you for their own safety. You are tied to a chair. Can the five-year-olds keep you under control, or can you manipulate them in ways they don't even understand to gain your freedom? That is the nightmare scenario for those worried about the Alignment Problem of AGI. It seems remote and abstract now, but technology develops in a nonlinear fashion. Humans are very bad at understanding or anticipating nonlinear change. Our survival as a species may hinge on our relationship with AGI.
You might find this paper interesting: Hubert Dreyfus, "Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian": https://static1.squarespace.com/static/53f8bd2ee4b0271e3bcc40eb/t/609c97fcb65b6359f28e905d/1620875262459/Why+Heideggerian+AI+Failed+and+How+Fixing+it+Would+Require+Making+it+More+Heideggerian.pdf
(although this is mostly about symbolic AI; the newer AIs are based on non-symbolic methods and so face a different set of issues).