2 Comments
Jun 13, 2022 (edited)

Thought-provoking as always, but I think you're too cavalier about the potential threat of AI and how that threat relates to the "social chatter and speculation". I don't recall a Zohar Atkins post on the paradox of knowledge, but AI fits easily within this trope, both in its existing observable effects and in its future potential. And there is the intertwined complication of the nature of agency in both senses: when is someone or something responsible for their actions, AND what is the relationship between someone or something's independent behavior and the one who deputized them?

The dominant technology for AI applications ("deep learning") is currently beyond human understanding. It clearly works, but we cannot take a particular success and pull it apart to analyze how it does what it does so well. It is a black box: we feed it a set of examples and it figures something out, in a way we don't yet understand and cannot replicate within ourselves. Of greatest concern is that progress is highly nonlinear. The ancient game of Go is a recent case study. Unlike chess, where there were steady gains in capability over the years until the best AI could consistently beat the top human players, research in AI for Go produced almost no comparable gains. Then the fusion of a different technique ("Monte Carlo tree search") with the base technology of deep learning produced a series of AI Go players that reached superhuman capability within days. The top human Go players can no longer compete with AI. This same technology is now being deployed, with varying degrees of agency, to supplement or replace humans in areas as diverse as parole decisions, medical diagnosis, and financial loan approval.
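For anyone curious what that fusion looks like in practice, here is a minimal sketch in Python, assuming toy placeholder functions (`value_network`, `legal_moves`, `apply_move`) rather than AlphaGo's actual components: a Monte Carlo tree search that asks a learned value estimate, instead of random play-outs, to judge positions.

```python
# A toy sketch, not AlphaGo: Monte Carlo tree search where position
# evaluation comes from a "learned" value function instead of random
# playouts to the end of the game. Every helper here is a hypothetical
# placeholder standing in for real game logic and a real trained net.
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}      # move -> Node
        self.visits = 0
        self.value_sum = 0.0

def ucb_score(parent, child, c=1.4):
    # UCB1: balance the average value seen so far against curiosity
    # about moves we have rarely tried.
    if child.visits == 0:
        return float("inf")
    exploit = child.value_sum / child.visits
    explore = c * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore

def value_network(state):
    # Stand-in for the deep-learning half of the fusion: AlphaGo-style
    # systems score a position with a trained neural net; a random
    # number keeps this sketch self-contained and runnable.
    return random.uniform(-1.0, 1.0)

def legal_moves(state):
    # Placeholder: a real engine would enumerate the game's moves.
    return [0, 1, 2]

def apply_move(state, move):
    # Placeholder transition: states are just tuples of moves played.
    return state + (move,)

def mcts(root_state, simulations=1000):
    root = Node(root_state)
    for _ in range(simulations):
        # 1. Selection: descend the tree, always taking the child with
        #    the highest UCB score, until reaching a leaf.
        node = root
        while node.children:
            parent = node
            node = max(parent.children.values(),
                       key=lambda child: ucb_score(parent, child))
        # 2. Expansion: create a child for each legal move at the leaf.
        for move in legal_moves(node.state):
            node.children[move] = Node(apply_move(node.state, move), node)
        # 3. Evaluation: the net estimates the leaf directly, replacing
        #    the classic random rollout. (Sign-flipping between the two
        #    players is omitted for brevity.)
        value = value_network(node.state)
        # 4. Backpropagation: push the estimate up the selected path.
        while node is not None:
            node.visits += 1
            node.value_sum += value
            node = node.parent
    # Play the most-visited root move, the standard MCTS choice.
    return max(root.children, key=lambda m: root.children[m].visits)

print("chosen move:", mcts(root_state=()))
```

The point of the sketch is the division of labor: the tree search supplies the lookahead, while the learned evaluation replaces the random rollouts that had stalled Go engines for years. Neither piece alone was enough; the combination produced the nonlinear jump.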

So AI is currently a threat to human flourishing at the same time that it is a boon. Increasing capability will steadily eat away at the ability of many humans to sell their labor, including people with high levels of reasoning and training. How does society adapt to a world in which an increasingly significant fraction of its members has no marketable skills, because everything they are physically and mentally capable of can be done more cheaply, safely, and reliably by AI-controlled machines? That is a world without precedent, although history teaches that societies with many young men who lack purpose or any social expectation of position and function are ripe for violent upheaval. What happens when that condition becomes ubiquitous and perpetual? The thinking about Universal Basic Income (UBI) doesn't address the totality of this, qua Dasein.

And that's only the foreseeable development of the existing technology. People who work in AI use the term Artificial General Intelligence (AGI) to distinguish agents limited in their scope of application, such as medical diagnosticians or loan approvers, from a true artificial intelligence akin to our own: capable of general problem-solving, learning, and presumably even self-actualization. Aside from the moral considerations of what it would mean to have such things (Can they suffer? Do they have rights?) is the question of what threat they pose, and whether it could be existential for humanity. Again, this is a trope of science fiction: Colossus, Skynet, The Matrix. What happens if we create something that not only thinks for itself as we do, but, unlike us, learns at a rate we cannot match and grows in capability far beyond ours, faster than we can observe?

Here the exercise is to imagine yourself somehow physically captured by a room of five-year-olds, who are suspicious of your intentions and intend to control you for their own safety. You are tied to a chair. Can the five-year-olds keep you under control, or can you manipulate them, in ways they don't even understand, into granting your freedom? That is the nightmare scenario for those worried about the alignment problem of AGI. It seems remote and abstract now, but technology develops in a nonlinear fashion, and humans are very bad at understanding or anticipating nonlinear change. Our survival as a species may hinge on our relationship with AGI.

You might find this paper interesting: Hubert Dreyfus, "Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian": https://static1.squarespace.com/static/53f8bd2ee4b0271e3bcc40eb/t/609c97fcb65b6359f28e905d/1620875262459/Why+Heideggerian+AI+Failed+and+How+Fixing+it+Would+Require+Making+it+More+Heideggerian.pdf

(Although this is mostly about symbolic AI; the newer AIs are based on non-symbolic methods, so they face a different set of issues.)
