Can AI Be Self-Conscious?
I don’t know what “consciousness” is. But I do know that for thinkers ranging from Hegel to Sartre, the pinnacle of consciousness is self-consciousness.
Machines can learn how to perform surgeries, process payments, win at chess, clean the floors, and even learn how to learn new skills, but can they ever experience shame or pride, guilt or fulfillment? How would you possibly model that mathematically? Might we ever live in a world in which “intelligent machines” feel ambivalence? Suffer from having to make hard decisions? Perhaps you don’t care. What matters is that the machines obey us. Their inner lives, or lack thereof, are their business.
For me, intersubjectivity—the experience of oneself in the presence of others—is the hardest problem facing those who would seek to define or model consciousness.
For functionalists, consciousness is just a matter of being able to follow rules. If the robot obeys the order, it counts as intelligent. But is there a rule governing not behavior, but interiority?
How can interiority possibly be modeled if it is that which eludes observation? How can interiority be known if it is that which does the knowing?
Whether we call interiority the soul or something else, it seems that the mathematical and scientific approaches to consciousness have a long—if not eternal—way to go before they’ll demystify it.
Why am I wrong? And if I am not wrong, what are the implications?
P.S.—Nice profile today of my Weekly Newsletter in Jewish Insider.
In case you missed it, check out my mega thread on Hannah Arendt.
What is Called Thinking? is a practice of asking a daily question on the belief that self-reflection brings awe, joy, and enrichment to one’s life. Consider becoming a subscriber to support this project and access subscriber-only content.
You can read my weekly Torah commentary here.