Of Ponzis and Polyamory: SBF, FTX, and EA
How Art, Religion, and the Humanities Can Save Us from Wiping Out
I. Enter the Void
Last week, SBF, a young crypto billionaire and philanthropist, saw his fortune and reputation crash. The celebrated effective altruist and major donor to the Democratic Party proved insolvent when his rival and ex-friend, CZ, publicly dumped millions of dollars’ worth of FTX’s “FTT” tokens (exposing them as worthless). In some ways, the fall from grace is a typical Aristotelian tragedy. But much still remains unclear—how much of SBF’s folly was the result of ordinary human weakness (a gambling addiction spun out of control, leading to a death spiral), typical grift and hypocrisy (using donations and moral language as cover for a premeditated scam), or ideological purity (an aggressive approach to risk “justified” by an expected-value calculus intended to help the world)? As finance writer Matt Levine tells us, SBF has long been frank that he is “in the Ponzi business”—in speculative industries like crypto this is par for the course, since you only succeed if you get millions of users to trust you before you can genuinely assure them that they should. Finance writer Byrne Hobart has a good rule of thumb: “There are no hard and fast rules, but Read Matt Levine and try not to do things that would lead you to getting written about by Matt Levine.”
If you dig into the story, there are some oddities about the company culture at FTX and Alameda (the hedge fund to which he was self-dealing), which was based out of a luxury property in the Bahamas: widespread encouragement of amphetamines and sleeping pills, polyamory, and a cult-like org structure. You don’t need to impute malice to see how a cocktail of delayed adolescence, combined with a sense of being “off-shore,” a feeling of (self-)righteousness, and a commitment to disagreeableness, might lead to delusions of grandeur and eventual disgrace. Although it is not so different a story from that of any financial scam, what stands out is the image of youth. These were not people in suits trading mortgage debt, but college nerds in Hawaiian shirts convincing celebrities from Tom Brady to Bill Clinton that their tokens were the future.
II. The Consequences of Consequentialism
I’m telling you the story of SBF and FTX as I understand it, not because I enjoy gossip and want to draw you into it, but because there’s a philosophical and sociological feature to the story that makes it worthy of attention. SBF was the poster-child for the Effective Altruist movement, a school of thought devoted to “doing the most good.” On the face of it, scamming people and erasing $32 billion of wealth, including money earmarked for philanthropic causes, is probably neither effective nor altruistic. For the effective altruism movement’s PR alone, it’s a drag. Still, you might argue the other side—if SBF thought he had a 10% chance at a trillion dollars or more and planned to give it all away, maybe he thought it was worth the risk. Surely $100 billion in expected value to fight pandemics and prevent existential risk is worth rolling the dice? He got unlucky this time, but if you run the simulation, he still did the rational (and noble) thing.
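That expected-value logic can be sketched in a few lines. The 10% probability and $1 trillion payoff below are illustrative assumptions drawn from the hypothetical above, not SBF’s actual numbers:

```python
import random

random.seed(42)

P_WIN = 0.10                  # assumed probability of the big win
PAYOFF = 1_000_000_000_000    # assumed payoff: $1 trillion

# Expected value of the gamble: 0.10 * $1T = $100 billion
expected_value = P_WIN * PAYOFF

# But expectation hides the distribution: simulate the one-shot gamble
# many times and count how often it ends in total ruin.
runs = 100_000
outcomes = [PAYOFF if random.random() < P_WIN else 0 for _ in range(runs)]
bust_rate = outcomes.count(0) / runs   # ~0.9: most runs of the simulation end at zero
```

The gap between the $100 billion expectation and the roughly 90% chance of ending at zero is exactly what makes this style of reasoning contested: for one-shot, non-repeatable bets, expected value alone can rationalize near-certain ruin.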
SBF’s EA allies have been quick to denounce him. Will MacAskill, who purportedly convinced Sam to join the EA cause when he was an MIT undergrad, writes that dishonesty is bad.
But is dishonesty bad? The whole thrust of the “effective” in effective altruism is that you should do what works. Why, from a consequentialist point of view, is lying bad? It seems like it would be bad only if you got caught, or if it led you to become comfortable lying too often, which would eventually lead to getting caught. Bryan Caplan argues that Peter Singer must believe it is OK to lie to save lives, and that includes lying about the repugnant conclusions to which one’s philosophy leads (for example, in the case of a doctor who knowingly but surreptitiously kills one person during a surgery to save five others by using the dead man’s organs). My point here is not to doubt MacAskill’s genuineness or to suggest that all consequentialists support esoteric morality, but to question how the effective altruism movement can reasonably have it both ways. Why should common sense morality matter to a strict utilitarian? Is there a way to throw common sense morality to the winds without becoming an SBF? How is that line drawn?
Here is Tyler Cowen on how bad optics for Effective Altruism will lead it to scale back its ambition:
I do anticipate a boring short-run trend, where most of the EA people scurry to signal their personal association with virtue ethics. Fine, I understand the reasons for doing that, but at the same time grandma, in her attachment to common sense morality, is not telling you to fly to Africa to save the starving children (though you should finish everything on your plate). Nor would she sign off on Singer (1972). While I disagree with the sharper forms of EA, I also find them more useful and interesting than the namby-pamby versions.
I think Cowen’s point is that effective altruism is going to have high variance, and it’s not clear that you can avoid an SBF without also defanging and neutering the other impulses of the movement. It’s a package.
III. Utilitarianism as Secular Religion
One reason I’m interested in this stuff is that I think of Effective Altruism as a kind of secular religion (which isn’t to say there aren’t some religious EAs). It offers meaning, purpose, a set of directives, and a set of ceremonies and narratives and leaders that come to fill the hole left by traditional religion, particularly amongst so-called “rationalists.” In some ways, EA and utilitarianism, generally conceived, are just a particular expression of Enlightenment values. But in other ways they are responses to Nietzsche’s “death of God”—instead of accepting that life is meaningless now that science and rationality have thrown out “superstition,” they’ve gone ahead and turned themselves into gods. If the classical God of medieval times was once omnipotent, omnibenevolent, and omniscient, now the goal is to attain that potency (efficacy) and goodness for oneself. As Nietzsche’s Zarathustra asks, “If there were gods, how could I endure not to be a god?”
A basic premise of both Effective Altruism and Utilitarianism is that human beings have the ability to know and measure what’s good. Traditional pre-Enlightenment societies challenge this premise in several ways. 1) Only God has total knowledge, so human attempts to engineer the good will fail as we oversimplify. 2) Many goods are non-fungible. You can’t abstract all goods into a common currency and then speak of them as having a common value, whether that’s in dollars, utils, or FTT tokens. Value pluralism comes out of the notion that various goods are incommensurable. You can’t just put everything into a spreadsheet and see what it spits out, and the reason isn’t just limited knowledge (1) but also ontology (2): the world was designed to have a certain level of diversity. The desire to overcome that diversity through a common language or system of value is just a recapitulation of the Tower of Babel.
Here is Tyler Cowen again:
Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.
The biggest existential risk to humanity may be the hubris in thinking we can build something like a common project that might mitigate it. This is one reason why the Humanities matter—the study of stories like Babel helps protect us from erring too far toward the scientific and technocratic.
Cowen recently wrote a book, Talent, in which he argues that picking and cultivating talent is an art, not a science. Paired with the point above, we learn that the bottleneck to progress is not scientific but humanistic: the ability to produce great art, to think like an artist, to interpret the world as though it were a work of art rather than a math problem.
This is one of the central teachings of Hans-Georg Gadamer, whose Truth and Method bemoans our habit of treating cultural challenges as engineering ones.
If EA is to be successful, ironically put, it must temper its own confidence. It must commit to learning the art of interpretation. It must commit to admitting the world’s problems as interpretive ones. That means going beyond the logic of how many malaria nets can I buy with this many hours of work on Jane Street. (Gadamer won’t say this is wrong, just that it can’t be exhaustive of the way we think about value, work, or bettering the world).
I am convinced, as Tyler is (and here I am talking my own book), that art and religion matter, and that an over-reliance on statistics and so-called rationalism just leads to bad religion and bad art.
IV. The Case For Moderation
The Enlightenment project, if it can be properly executed, has many pros, and I’m a beneficiary of many of them. But let’s not forget that the Enlightenment, in practice, is rarely executed well. Kant, Hegel, and Marx didn’t usher in a world of radical egalitarianism; instead they provided an intellectual prologue to two world wars and Stalinism. The Enlightenment wasn’t a straight line of moral progress; it also created a backlash in the Counter-Enlightenment. Reason, over-emphasized, gave rise to an equally extreme fideism, emphasizing blind and absurd faith (Jacobi, Kierkegaard), or reliance on nationalist spirit (Herder, Fichte). Perhaps a middle way might have allowed reason and faith, critique and custom, to co-exist rather than fight a zero-sum battle.
My argument is not that we should welcome irrationality per se, but that we should recognize our human propensity for it as deeply ingrained. It is, in fact, paradoxically irrational to think we can extricate ourselves from irrationality, and dangerous, moreover, because it leads to denial. To the extent that SBF presided over a cult, it was a unique one—the cult of justifying oneself not in terms of divine prophecy, but in terms of Enlightenment values. The shock here is that we expect religious scandal and hypocrisy to involve the sacred. But what happens when rationality itself is so warped that it becomes no different?
The spiritual solution is not failsafe, but involves whatever practice and worldview can lead to humility. Typically, this role is played by some sense of God as transcendent. In Jewish terms, this is what is meant by “yirat shamayim.” But the practical-social solution is that we should probably tolerate a bit more irrationality in ourselves and others, so as to accept that we are limited and to protect ourselves from the greater risk of thinking we can shake off irrationality entirely. I won’t prescribe the dosage here, but we should be pluralistic—it’s time we stop judging communities through Enlightened eyes and ask whether the inefficacy and even lack of altruism in those communities might not be more effective and altruistic in the end than a life lived according to the principles of Peter Singer.
A set of contradictory and competing values might provide the necessary set of checks and balances needed to offset the maniacal elevation of any one ethical theory or style, be it virtue ethics, deontology, or effective altruism. Perhaps the debate we need to have is about the equilibrium between these, rather than the assumption that any of them alone is optimal.
I’m probably over-generalizing, but I don’t think the polyamory is incidental to the story of SBF or FTX (or the “rationality” community more broadly). I think that monogamy is an acceptance of limits in many senses, and that this is good for character and good for society. Polyamory may work for a handful of young individuals in the Bay Area—I’ll keep an open mind here—but it would be a terrible way to organize society as a whole. To the extent that the principles behind it are idealistic, utopian even, it seems ripe for abuse and cult-like manipulation. Free love in theory will not overcome human vice, but simply provide cover for it, just as EA as an ideology ended up underwriting SBF’s self-justification of fraud in the name of holiness. I’d be surprised if polyamory endures beyond a certain age bracket, which makes me feel that it is largely a form of extended college culture. Perhaps the same can be said of much of crypto.

To grow up, the crypto and polyamory crowds will have to realize that they need not reinvent the wheel or rebuild society from first principles; people in traditional finance and traditional marriage have reckoned with the same problems over centuries and have come up with some relatively wise solutions. From my limited and biased position, I contend that neither polyamory nor crypto appreciates Chesterton’s Fence. (This is neither financial nor relationship advice.) I’d say the same about effective altruism. Optimizing for efficacy and altruism alone will likely take us to zero, and we won’t know we’re in a Ponzi until it’s too late. The collapse of civilization is the cultural version of a liquidity crunch. Let’s make sure we are holding some trusted and stable cultural assets in reserve.