By Victor Haghani and James White[1]
Much has been written about Sam Bankman-Fried (SBF) and the choices he made. We think one particular aspect of his thinking and actions has received less attention than it deserves, and perhaps explains the arc of his narrative better than anything else.[2]
In a range of interviews and Twitter threads (see links and excerpts below), SBF explained that he approached financial decisions with little or no aversion to risk. That’s a valid personal choice, but it’s highly unusual. In our own experience, we’ve never met anyone who made important financial decisions consistent with being anywhere in the ballpark of zero risk aversion.
To see why, it helps to look at where risk aversion comes from. It arises from the fact that most people derive less and less incremental satisfaction from progressive increases in wealth – or, as economists like to say, most people exhibit diminishing marginal utility of wealth. This naturally leads to risk aversion, because a loss hurts more than the equivalent gain feels good. The classic Theory of Choice Under Uncertainty recommends making decisions that maximize Expected Utility, which is the probability-weighted average of the utilities of all possible outcomes.
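To make diminishing marginal utility concrete, here is a quick sketch in plain Python. The specific gamble (a 50/50 chance of gaining or losing half your wealth) and the log utility function are illustrative assumptions, not taken from the article:

```python
import math

# Illustrative only: an agent with log utility (one standard model of
# diminishing marginal utility) faces a 50/50 gamble that either adds
# 50% to their wealth or takes 50% away. The Expected Value of wealth
# is unchanged, yet the gamble lowers Expected Utility, because the
# loss hurts more than the equivalent gain feels good.
wealth = 1_000_000  # hypothetical starting wealth

utility_if_win = math.log(wealth * 1.5)
utility_if_lose = math.log(wealth * 0.5)

expected_utility = 0.5 * utility_if_win + 0.5 * utility_if_lose
utility_no_bet = math.log(wealth)

print(expected_utility < utility_no_bet)  # True: the agent declines the gamble
```

A risk-neutral (linear utility) agent would be exactly indifferent to this gamble; the log-utility agent rejects it outright.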
SBF explained on multiple occasions that his level of risk aversion was so low that he didn’t need to think about maximizing Expected Utility, but could instead make his decisions by maximizing the Expected Value of his wealth directly. What does this mean in practice? Say you find an investment with a 1% chance of a 10,000x payoff and a 99% chance of winding up worth zero. It has a very high expected return, but it’s also very risky.[3] How much of your total wealth would you want to invest in it?[4]
There’s no right or wrong answer; it’s down to your own personal preferences. However, we think most affluent people would invest somewhere between 0.1% and 1% of their wealth in this investment, based on observing other risky choices such people make and surveys we’ve conducted (e.g. here). We suspect that range sounds reasonable to you.[5]
SBF on the other hand, making his decision strictly according to his stated preferences, would choose to invest 100% of his wealth in this investment, because it maximizes the Expected Value of his wealth. In one of his interviews, he did suggest that perhaps he wouldn’t go all the way to 100%, but that he’d still invest way, way more than the typical choice of 0.1% to 1%. However, in other interviews, he didn’t back off of the implications of maximizing Expected Value – as in, for example, his conversation with the economist Tyler Cowen (March 9, 2022).
Tyler Cowen (TC): Should a Benthamite[6] be risk-neutral with regard to social welfare?
SBF: Yes, that I feel very strongly about.
TC: Ok, but let’s say there’s a game: 51% [chance] you double the Earth out somewhere else, 49% it all disappears. And would you keep on playing that game, double or nothing?
SBF: Yeah…take the pure hypothetical… yeah.
TC: So then you keep on playing the game. What’s the chance we’re left with anything? Don’t I just St. Petersburg Paradox[7] you into non-existence?
SBF: No, not necessarily – maybe [we’re] St. Petersburg paradox-ed [sic] into an enormously valuable existence. That’s the other option.
We’re all entitled to our own preferences, but our preferences have consequences – and there’s a lot of evidence, both philosophical and practical, that when SBF’s stated preferences encounter the real world, it results in almost surely going bust at some point, and pretty quickly for someone who knows their way around financial markets.
Such a person won’t have to search for special investment opportunities, like doing leveraged crypto arbitrage or founding a crypto exchange, to find risks that have positive expected value with low probabilities of big payoffs. For example, most would agree that the stock market has a positive expected return in excess of the risk-free rate. If out-of-the-money call options are fairly priced, repeatedly buying them would give the Expected Value maximizer ample opportunity to lose all their wealth in short order, offset by a vanishingly small chance of becoming the richest person in the world.
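The “double or nothing” game from the exchange with Tyler Cowen makes the almost-sure-ruin point easy to verify. A back-of-the-envelope sketch, not a market model:

```python
# Each round: 51% chance everything doubles, 49% chance it all disappears.
# The Expected Value compounds at 1.02x per round without bound, but the
# probability that anything at all survives n rounds is 0.51**n,
# which collapses toward zero very quickly.
for n in (1, 10, 50):
    ev_multiple = (0.51 * 2) ** n  # expected wealth multiple after n rounds
    p_survive = 0.51 ** n          # chance of not having gone bust
    print(f"n={n}: EV multiple {ev_multiple:.2f}, survival prob {p_survive:.2e}")
```

After just ten rounds the chance of anything surviving is already barely a tenth of a percent, even though the Expected Value keeps rising.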
Below are a few examples of SBF laying out his decision-making framework.
Interview with Jacob Goldstein on What’s Your Problem, May 24, 2022:
Jacob Goldstein: I’m Jacob Goldstein and this is What’s Your Problem… My guest today is Sam Bankman-Fried and his problem is this: how do you save the world? Before we get to the interview, I just want to take a minute here and set up this one big idea, this really useful intellectual framework that drives almost everything Sam does. It’s called Expected Value.
SBF: I try to use it a lot because I think it sort of is the default correct way in some senses to calculate something. Like, if you’re just trying to do a generic calculation I think it’s usually the right thing to use… One of the sort of takeaways that often ends up coming from really thinking hard and critically about Expected Values is that you should go for it way more than is generally understood.
JG: Go big. You should really go really big, even if you probably will fail and wind up with zero.
SBF: That’s absolutely right… if you really do care linearly about money, if you really do think that getting that marginal you know dollars worth a lot – um, you know, even once you already have a lot of money, um then, it – it should lead you to think that… And so, anytime that, like, there is some non-zero and non-negligible chance of a really really good outcome are times when you’re gonna be incentivized more than seems natural probably to choose extreme outcomes.
Conversation with Rob Wiblin on the 80,000 Hours podcast, April 14, 2022:
SBF: Yeah. I think the way I saw it was like, let’s maximize EV: whatever is the highest net expected value thing is what we should do. As opposed to some super sublinear utility function, which is like, make sure that you continue on a moderately good path above all else, and then anything beyond that is gravy.
If you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter anymore…
That means that you should be pretty aggressive with what you’re doing, and really trying to hit home runs rather than just have some impact – because the upside is just absolutely enormous.
Better is Bigger, SBF Twitter Thread. 11:19 PM · Dec 10, 2020, @SBF_FTX
SBF: …What about a wackier bet? How about you only win 10% of the time, but if you do you get paid out 10,000x your bet size?
[So, if you have $100k,] Kelly suggests you only bet $10k: you’ll almost certainly lose. And if you kept doing this much more than $10k at a time, you’d probably blow out.
…this bet is great Expected Value; you win [more precisely, your Expected Value is] 1,000x your bet size.
…In many cases I think $10k is a reasonable bet. But I, personally, would do more. I’d probably do more like $50k.
Why? Because ultimately my utility function isn’t really logarithmic. It’s closer to linear.
…Kelly tells you that when the backdrop is trillions of dollars, there’s essentially no risk aversion on the scale of thousands or millions.
Put another way: if you’re maximizing EV(log(W+$1,000,000,000,000)) and W is much less than a trillion, this is very similar to just maximizing EV(W).
Does this mean you should be willing to accept a significant chance of failing to do much good sometimes?
Yes, it does. And that’s ok. If it was the right play in EV, sometimes you win and sometimes you lose.
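SBF’s approximation in the thread above – that maximizing EV(log(W + $1 trillion)) is essentially maximizing EV(W) when W is far below a trillion – is easy to check numerically. A sketch with illustrative wealth levels:

```python
import math

# For W much smaller than T, log(T + W) ~ log(T) + W/T, which is linear
# in W - so a log-utility agent with a trillion-dollar "backdrop" behaves
# almost exactly like a linear Expected Value maximizer at these scales.
T = 1e12  # the trillion-dollar backdrop from the tweet

for W in (1e5, 1e7, 1e9):  # illustrative wealth levels well below T
    exact = math.log(T + W)
    linear_approx = math.log(T) + W / T
    print(f"W={W:.0e}: approximation error {abs(exact - linear_approx):.2e}")
```

Even at W of $1 billion the approximation error is on the order of 10⁻⁷ in log terms, which is why such an agent shows essentially no risk aversion at the scale of thousands or millions.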
It seems like SBF was essentially telling anyone who was listening that he’d either wind up with all the money in the world, which he’d then redistribute according to his Effective Altruist principles – or, much more likely, he’d die trying.
1. This is not an offer or solicitation to invest, nor should this be construed in any way as tax advice. Past returns are not indicative of future performance.
Thank you to Rich Dewey, Antti Ilmanen, John Karubian and Jeff Rosenbluth for their help with this article. For a deeper dive, read this excellent article by Byrne Hobart at The Diff.
2. This article is all about SBF applying his personal risk preferences with respect to his own money. We don’t yet know the facts of what actually happened, but nothing we discuss herein justifies any type of fraud or improper use of client or investor money.
3. Its standard deviation, a conventional measure of risk used for more symmetric payoff outcomes, is about ten times bigger than the expected return of the investment. Hence, its Sharpe Ratio is just 0.1 – not particularly high – although we emphasize that these conventional metrics of risk, return and quality of investment are not designed for evaluating investments such as these.
4. For the purpose of this thought experiment, assume that all of your wealth is financial wealth and that you are considering this as a one-time investment, in isolation from other opportunities.
5. But if it doesn’t, just consider what it would mean to bet, say, 20% of your wealth on such an opportunity multiple times. After twenty such investments, you’d have an 82% chance of having lost 99% of your wealth, and just an 18% chance of having won at least once. We think most people wouldn’t find that distribution of outcomes very attractive.
6. A Benthamite Utilitarian. Sadly, there is much confusion between “utility” as used in the Benthamite Utilitarianism SBF has discussed a fair amount with respect to social-welfare choices, and “utility” as a tool for financial decision-making in classical economics and the Decision-Making Under Uncertainty context. These are really disparate ideas, but various issues with Benthamite Utility have (unfairly) tainted von Neumann-Morgenstern Expected Utility.
7. From Wikipedia: “The St. Petersburg paradox, or St. Petersburg lottery, is a paradox involving the game of flipping a coin where the expected payoff of the theoretical lottery game approaches infinity but nevertheless seems to be worth only a very small amount to the participants.”
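The 82% and 18% figures in the footnote about betting 20% of wealth twenty times follow directly from the 99% per-bet loss probability. A quick check:

```python
# Betting 20% of wealth on the 1%-chance, 10,000x opportunity, twenty
# times in a row. Each individual bet loses with probability 99%.
p_lose_all = 0.99 ** 20   # probability that every one of the twenty bets loses
wealth_left = 0.8 ** 20   # fraction of wealth remaining in that case

print(round(p_lose_all, 2))       # 0.82: an 82% chance of losing all twenty
print(round(1 - wealth_left, 2))  # 0.99: having lost about 99% of wealth
```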