Short Stack: Economics of gaming

You can learn a lot about games from economics, and vice versa.

One of my favorite hobbies is strategy games. Rather than action-oriented video games, I usually go for simulations that are more like glorified spreadsheets, along with the occasional board game. The games I like invariably have economic components. Sometimes these work well, and sometimes they don’t. And when they do fail, it’s interesting to think about why. This week’s Short Stack is about what gaming can teach us about economics, and vice versa.

Games can teach you the perils of too-high working capital

One thing a lot of games do well is simulating the implicit costs of high working capital. The concept sounds pretty technical, but is actually straightforward: if you have inventory on the shelves or in warehouses somewhere, there’s a hidden cost from the fact that resources or cash are “locked up” and unavailable for other uses.

Of course, it’s good to have some goods on the shelves. But suppose you create a good for $96 and sell it for $100 a year later, while interest rates are 4 percent. You gained essentially nothing from your business: you could have simply invested the $96 in bonds and ended up with $99.84.
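The arithmetic is worth spelling out. Here is a quick sketch using the hypothetical numbers above:

```python
# Back-of-the-envelope check of the example above: the good costs $96
# to make, sells for $100 a year later, and bonds pay 4 percent.
cost = 96.0
sale_price = 100.0
rate = 0.04

nominal_profit = sale_price - cost   # $4.00 on paper
bond_interest = cost * rate          # $3.84 forgone by not buying bonds instead
economic_profit = nominal_profit - bond_interest

print(f"Nominal profit:  ${nominal_profit:.2f}")
print(f"Bond interest:   ${bond_interest:.2f}")
print(f"Economic profit: ${economic_profit:.2f}")  # a mere $0.16
```

Nearly all of the apparent $4 profit is eaten by the opportunity cost of the cash tied up for a year.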

Or if you prefer to think in less financial terms, consider that you spent time and effort a whole year before you needed to. You could have worked on a project that would bear immediate fruit, and then return to creating the good when it’s actually needed.

Perhaps this seems like a small effect. But at companies as large as Toyota or Walmart, working capital can run into the tens of billions of dollars. Reducing that number can be extremely valuable.

It turns out that games are really good at exhibiting this principle. In many strategy games, you build something⁠—an empire, a company, a city, a factory⁠—that grows over time, using the returns from previous projects to fund even larger new projects. These games have working capital all over the place.

For example, consider Stellaris, a computer game that puts you in charge of a spacefaring empire that expands to other worlds. In order to do this, your population gathers or produces resources like food, minerals, alloys, and energy. At a first pass, you might think it’s beneficial to have steadily growing stockpiles of these goods. Trading between different resources is often expensive and inefficient, so best to make a surplus of all of them!

But this is a trap: Stockpiled resources don’t do you any good while they’re sitting there, and the futuristic technologies you can research have powerful exponential effects. You’re best off building an expensive infrastructure for technological research, even if that means your resource stockpiles are low or diminishing, and even if that means you occasionally have to scramble to cover a deficit in one of the basic resources. Yes, those scrambles are inefficient; you might have to trade energy for food at a bad exchange rate. But huge unused stockpiles are costly in a subtler and far worse way.
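The stockpiler-versus-investor tradeoff can be sketched with a toy model. The numbers below are my own illustrative figures, not actual Stellaris mechanics: both empires produce 10 resources per turn, the hoarder banks everything, and the investor pours each turn's output into research, which compounds production by an assumed 3 percent per turn.

```python
# Toy model: linear stockpiling vs. compounding reinvestment.
# (Illustrative numbers only, not real Stellaris mechanics.)

def hoarder_stockpile(turns, base=10.0):
    """Resources banked by an empire that never reinvests."""
    return base * turns  # production stays flat forever

def investor_production(turns, base=10.0, growth=0.03):
    """Per-turn output of an empire that reinvests everything in research."""
    production = base
    for _ in range(turns):
        production *= 1 + growth  # research compounds output each turn
    return production

# After 100 turns the hoarder sits on 1,000 banked resources, but the
# investor's *per-turn* output has grown nearly twentyfold, so the gap
# becomes overwhelming shortly afterward.
print(hoarder_stockpile(100))    # 1000.0
print(investor_production(100))  # roughly 192 per turn
```

The stockpile grows linearly while reinvested production grows exponentially, which is why the occasional inefficient scramble to cover a deficit is a price worth paying.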

Once you see this opportunity-cost argument in games, you see it everywhere. In real-time strategy games like Starcraft, players use “build orders”: carefully drawn-up plans for the early game that spend every resource the moment it is gathered and put every unit or building to work the moment it is created. What they’re doing is minimizing working capital.

Why there aren’t any good stock market games

One thing I think games don’t really do well is financial markets. Building a game with capital markets that are both realistic and fun is hard⁠—maybe impossible. Games are meant to be replayable. You try it once, figure out where you made mistakes, and try to do better the next time. This gives you a sort of foreknowledge that essentially ruins any investing minigame you could possibly imagine.

One of the more sophisticated financial markets I’ve seen in a game was that of Railroad Tycoon 2, which I played as a kid about twenty years ago. Its financial markets were pretty robust: your company could issue shares or buy them back, and it could issue bonds as well. Furthermore, you had a player character with a personal fortune, and they could buy shares, sell shares, and even buy on margin or sell short.

It had a lot of features! But the best way to play the game looked nothing like smart investing in real life. Since you knew you were a good manager, you’d buy shares in your own company on margin. And at the corporate level, you’d issue as many bonds as possible in order to grow as quickly as you could, since you were great at finding investments that yielded a return above the interest rate. Eventually you’d even have your company buy back shares in itself, on borrowed cash, in order to make sure none of the other players could get rich from your company.

Was this kind of fun? Sure. But the game didn’t really capture anything resembling the experience of building investing skill, per se; you just used the financial markets to win faster than you would otherwise.

You could try to play Railroad Tycoon 2 without building your own company, and instead just invest in computer players’ companies. But even then, with your strong knowledge of which routes are profitable, you could lever up and invest in the right companies fairly easily.

The replayability of games means you eventually learn the parameters of the world you’re given, and you get chances to do things over. And in a world with do-overs, the best investing strategy really is to mortgage your house in 2012 to put all your money in call options on Netflix, Facebook, and Tesla. You know, like a lunatic on reddit.

But the beauty and the difficulty of investing in real life is that you don’t get to go back in time. You’re actually there in the moment, when lots of people are saying that the Facebook IPO is evidence of tech bubble 2.0, and a $100 billion market cap sure seems like a lot of money for a company that can only maybe sell a few ads. The parameters of your world are forever changing, and finding the right way to evaluate them is incredibly difficult, except, uselessly, in retrospect.

I am not sure how to capture this in a game. It might not be possible. But if you find one that does it well, let me know.

Offbeat chess openings are part of a mixed strategy equilibrium

For those who know algebraic notation, I’ll put some chess moves in parentheses, but this section requires no chess knowledge to enjoy.

Eric Rosen, an international master in chess, is famous for playing a bad opening with the black pieces. Three moves into a game (1. e4 e5 2. Nf3 Nf6 3. Nxe5 Nc6) he is often already down by a pawn. Sometimes, by the sixth move, he compounds his disadvantage, throwing out chess principles to advance a pawn on the edge of the board (4. Nxc6 dxc6 5. d3 Bc5 6. Be2 h5).

Eric Rosen has no right to win in this position. But he does, time and time again.

This position, which Rosen has reached hundreds of times, is atrocious. You can prove it if you put it into a chess computer or give a grandmaster a day to study it. And yet, Rosen wins with it a stunning 73% of the time.

This is the metagame of opening preparation, one of my favorite things about chess. Opening preparation is a genuine mixed strategy Nash equilibrium: that is, players find it optimal to vary their choices rather than playing the same best move every time.

What’s particularly cool about this is that you wouldn’t think chess would have mixed strategies at all. You set up the board the same way each time, and you and your opponent have perfect knowledge of each other’s moves. You might think there’s a best move in every situation, and that you should just play that best move. Nothing mixed-strategy about it.

And to some extent that’s true. Computers can evaluate chess positions with an astonishing degree of foresight. They aren’t perfect, because it’s impossible to play out every move. But they’re close enough to perfect that a human might as well treat them as such. In every position there is an objectively correct move (or at least, a handful that are about equally good) and computers know what those moves are.

So why don’t players just memorize the computationally perfect game, and play it every time? Partly because there’s no one perfect game, really. Often a player will have several moves that are almost equally good. For example, on the first move of the game, white can move any of three central pawns forward two squares or his kingside knight (1. e4, d4, c4 or Nf3), and computers will deem these options almost perfectly equal. After white’s first move, black usually has a variety of acceptable responses. The permutations grow exponentially, and there are too many possible variations to memorize. Invariably, people play solidly for a while before running into a position they don’t know, and then they begin making mistakes accidentally.

But sometimes, they make mistakes deliberately. Eric Rosen knows his opening is bad, and he plays it anyway. While it’s true that a well-prepared player with the white pieces can force a strong advantage in the opening, Rosen’s moves are rare enough that people don’t actually know how to punish his play⁠—and instead, they find themselves at a disadvantage against his arsenal of traps and tricks in the opening. By offering up a pawn as bait, he is leading you away from the paths you know, and into a dark forest. In theory, you have the advantage on that path. But he knows the way better than you do.

There are limits to Rosen’s tricky opening, the Stafford Gambit. He can’t play it in a game with generous time controls against a grandmaster. They’ll take the time, find the right moves, and refute it. But the opening works against weaker players, and it even works against grandmasters at faster time controls, when there’s less time to think.

The Stafford Gambit is an unusually dramatic example, but milder Rosenesque gambles happen at all levels of chess: people play subtly suboptimal moves to get positions they know better than their opponents do. For example, in the Candidates Tournament, where top players compete for the right to challenge Magnus Carlsen for the world championship, you could see some suboptimal play as early as move four. In the second round, Richard Rapport played the Chekhover variation of the Sicilian (1. e4 c5 2. Nf3 d6 3. d4 cxd4 4. Qxd4) against Alireza Firouzja, bringing his queen out early to recapture a pawn.

A computer would tell you that a capture with the knight (4. Nxd4) is the only right move. Armed with books and databases, chess scholars could explain why the traditional open Sicilian is better than the Chekhover. (Mostly, the Chekhover exposes your queen to obvious harassment, losing the time advantage white gets from moving first.) At high levels of chess, Nxd4 is played more than 95% of the time. But that’s precisely what makes Qxd4 appealing. By playing it, Rapport was putting Firouzja in a position he had likely seen one-twentieth as often. Rapport did create winning chances in the middlegame, but Firouzja ultimately escaped with a draw.

The key insight from these “objectively bad” chess openings is that you’re not playing against the objective standard of perfect preparation. Instead, you’re playing against a fallible person with limited time and ability. There’s a metagame to chess, one where you can spread people’s intellectual resources thin by throwing variety at them. And for that reason (the limited human capacity to calculate and memorize), chess as played by humans has an emergent mixed strategy Nash equilibrium.
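The dynamic above can be sketched as a tiny two-by-two zero-sum game. The payoffs below are invented win probabilities for illustration, not measured chess statistics; the mixing probability comes from the standard indifference condition, which makes the opponent's two preparation choices equally good and so leaves them nothing to exploit.

```python
# A toy "opening preparation" game. Rows: what you play; columns: what
# your opponent prepared for. Entries are your (hypothetical) win rates.
#
#                  opp preps mainline   opp preps gambit
#  play mainline        0.50                 0.55
#  play gambit          0.65                 0.35
from fractions import Fraction

a, b = Fraction(1, 2), Fraction(11, 20)   # mainline row
c, d = Fraction(13, 20), Fraction(7, 20)  # gambit row

# Indifference condition: pick the gambit with probability p such that
# both of the opponent's preparation choices give you the same score:
#   (1 - p) * a + p * c == (1 - p) * b + p * d
p = (b - a) / ((b - a) + (c - d))
value = (1 - p) * a + p * c

print(f"play the gambit {float(p):.1%} of the time")  # 14.3%
print(f"expected score: {float(value):.3f}")          # 0.521
```

With these made-up numbers, the equilibrium says to wheel out the bad opening about one game in seven: often enough that opponents must split their preparation, rarely enough that they can never count on it.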

