How can board game publishers predict which games will sell?
Before I joined the game industry, I spent 14 years designing table games with no intent to publish them. I did it because designing them is, for reasons unknown, a reliable kind of ecstasy.
However, to give myself a new challenge, I’ve pivoted (at least for a while) to designing games with commercial intent.
To understand the challenge, I want to know how the board games industry and market work. So I’ve been studying them, and writing about them to force careful thinking (which means what I write is sometimes wrong; you’ve been warned).
One thing I’ve learned: it’s a hit-driven business. Successful publishers usually make much of their money on one or two big sellers, and the rest are a wash. But because it’s hard to predict which game will be a hit, it’s hard to avoid publishing duds, which is costly.
In an earlier essay, I discussed a strategy for dealing with this unpredictability. I argued that, rather than make a bunch of games and see what sticks, as many publishers do, a hit-making strategy might work.
The argument originated in the observation that games can become perennial best-sellers even if they’re undistinguished as games per se, as long as they find some way into the public eye. Visibility alone seems to be a big leg up.
I suggested that, instead of spreading resources over many games, a company might concentrate resources on one or two games in an attempt to drive broad awareness of them. This strategy would include a) focusing on games with potential for mass-market adoption; b) instituting best-in-class production values and branding; and c) marketing and distributing more aggressively and creatively than most game publishers currently do. The idea’s similar to what the movie industry’s doing now – studios have learned they can boost their chances of a hit by making big-tent superhero movies with stratospheric production values and promoting them to death.
My essay took heat from gamers, who understandably don’t want companies to pivot away from serving them toward serving the mass market. Gamers would also rather games succeed on merit than through marketing.
From a player’s point of view, I prefer that too. I can’t stand the movie industry now because so many production companies have adopted a similar hit-making strategy. Theaters are dominated by bloated orgies of brimstone which leave me with a headache and a vendetta.
But note the strategy doesn’t have to produce dreck. Apple employs a similar strategy and their products are wonderful. In board games, Days of Wonder does as well (The company says it doesn’t put money into marketing beyond what goes into the box, but I don’t think that’s true, because it spends big to make high-quality mobile apps, which act as effective if non-traditional marketing for the physical games. It just so happens that this tactic is also a source of revenue, which blurs the line between marketing and product.)
So I stand behind my proposal, but I acknowledge that, in the hands of the wrong companies, it could make the world a drab place for gamers. As often happens, what’s good for a business can be bad for others.
Thankfully, whatever its merits, there are other strategies to consider, and that’s what I want to discuss here.
In this case, I’m going to discuss ideas inspired by “fast failure” strategies, which are in vogue thanks to books like The Lean Startup and advocacy from tech industry titans such as Facebook’s Mark Zuckerberg (whose motto is “Move fast and break things”).
Generally speaking, fast failure means inventing ways to make cheap, accurate predictions about whether consumers will adopt your product without committing heavy resources to making it.
The practice tends to work for software companies because distributing digital products is so cheap you can make and sell an actual, if minimal, version of your product to see how consumers respond. If they don’t like it, you can quickly move on to the next iteration or the next product. It doesn’t work in other industries, such as drug development, where a prototype can cost hundreds of millions of dollars and failure can result in manslaughter.
Table game publishers, who sit somewhere between these two extremes, already have methods to assess whether their games will sell, of course. The question I want to focus on here is: is it possible to do it better?
The motivation is that many publishers’ evaluation processes are not as systematic or scientific as they could be. A publisher typically tests a game internally and with play-testers, and if everybody likes it enough and it satisfies some rules of thumb (regarding rules simplicity, player range, playing time, component cost, etc.), they publish it.
But how predictive is that kind of evaluation? Enjoyment per se, for example, may not predict whether a game will fly off retail shelves. I know many games that don’t sell well but nonetheless earn rave reviews from the folks I teach them to, gamers and non-gamers alike.
What we really want to know is:
a) if a person walks into a store or browses online and sees your game mixed in with all the others, will she buy it preferentially over the others?
b) how likely is someone to teach your game to others (this being the most common way games spread)?
Ideally, we would observe people in these situations to see how they behave. How can we come as close as possible to doing that, as cheaply and quickly as possible?
I don’t have firm answers but I’ll share a few thoughts. Here’s one possible answer to the first question:
In-Store Comparison Test – A publisher could partner with a brick-and-mortar store to place a retail-quality prototype on the store’s shelves. Then they could recruit targeted customers to visit the store, browse the shelves, and discuss which games they’re most and least interested in purchasing, and why.
This test is nice because you’ll probably learn something every time out from what the testers say, not just about your prototype but about all the games they look at. Publishers could learn valuable things this way even without a prototype to show, including ideas for better prediction heuristics.
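If you wanted to put a number on what those visits tell you, one option (my own back-of-the-envelope suggestion, not anything I know publishers to do) is to record each recruited shopper’s answer to “which of these would you buy first?” and estimate the prototype’s pick share, with a rough interval to keep you honest about the sample size. A minimal sketch in Python, with invented shoppers and game names:

```python
import math

# Hypothetical data: each recruited shopper's first-pick game after
# browsing a shelf that includes our retail-quality prototype.
picks = ["Ticket to Ride", "Prototype", "Catan", "Prototype",
         "Carcassonne", "Prototype", "Ticket to Ride", "Catan"]

n = len(picks)
k = picks.count("Prototype")
pick_share = k / n

# Rough 95% interval via the normal approximation. With samples this
# small, treat it as a reminder of uncertainty, not a precise figure.
se = math.sqrt(pick_share * (1 - pick_share) / n)
low = max(0.0, pick_share - 1.96 * se)
high = min(1.0, pick_share + 1.96 * se)

print(f"Pick share: {pick_share:.0%} ({k}/{n}), roughly {low:.0%} to {high:.0%}")
```

With a handful of shoppers the interval will be embarrassingly wide, which is itself useful information: it tells you how many store visits you’d need before trusting the number.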
I emphasize that the prototype must be retail-quality, complete with professional packaging and graphic design, because it’ll be sitting on a shelf next to real retail games. This makes prototyping more expensive, but I think it’s necessary, because those elements seem essential to a game’s success. (Compare, for example, Bananagrams with other versions of the same game: Bananagrams is a huge hit and the others aren’t.)
If it’s true packaging/graphics/branding are essential, then two other things may be true as well:
1. Publishers might benefit from developing methods to quickly create retail-quality prototypes in-house, since such prototypes have far more predictive value than rough ones.
2. Publishers might benefit from thinking of themselves as branding/graphic-design/packaging companies as much as game developers (no doubt some already do).
I like the In-Store Comparison Test, but doubt it’s enough, since many game purchases aren’t impulse buys. Many buyers already know what they’re after when they go to the store, because someone taught them a game and they’ve decided to buy it.
That means we also need to predict how readily a game will be transmitted from person to person. That’s why the “easy to teach” and “short play time” rules-of-thumb are so valuable to publishers – those qualities generally make games more transmittable.
But while those qualities help, I doubt they’re very predictive by themselves. Some short, easy-to-teach games don’t transmit, and some longer, harder-to-teach games do, thanks to compensating qualities. How can we make better predictions about transmittability? Here’s the only answer I’ve been able to muster:
Post-Play Comparison Test – Here again you’ll need a retail-quality prototype. Bring a few games, your prototype among them, to a target customer’s house and teach them all (possibly over a few days; more on that in a moment). Make sure the other games are real, commercially successful games against which your game would compete if brought to market, and that the participants don’t know which of the games is yours.
After participants have played all the games once, ask which games they’re most likely/eager to teach their friends/family, and have them write down a ranked list of all the games, ordered by this criterion. Also, make sure they include in their rankings games they already know and love, but which you didn’t teach or play with them, because your game will compete against those too. You may have to compensate your subjects for their time (though many people are happy to learn games for the experience alone).
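Once you have ranked lists from several households, you need some way to boil them down. Purely as an assumption on my part, a Borda-style mean rank seems like a sensible first cut; here’s a minimal Python sketch with invented households and rankings (it assumes every household ranks the same set of games):

```python
from statistics import mean

# Hypothetical ranked lists from three households. Position 1 means "the
# game I'd be most eager to teach friends/family." For simplicity, assume
# every household ranks the same set of games.
rankings = [
    ["Codenames", "Prototype", "Pandemic", "Monopoly"],
    ["Prototype", "Codenames", "Monopoly", "Pandemic"],
    ["Codenames", "Pandemic", "Prototype", "Monopoly"],
]

# Mean rank per game (lower is better): a simple Borda-style aggregate.
games = rankings[0]
mean_ranks = {g: mean(r.index(g) + 1 for r in rankings) for g in games}

for game, avg in sorted(mean_ranks.items(), key=lambda kv: kv[1]):
    print(f"{game}: mean rank {avg:.2f}")
```

Fancier rank-aggregation methods exist, but with samples this small, a mean rank is probably as trustworthy as anything.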
The weakness of this process is that it’s time-intensive – you’ll have to repeat the experiment several times to get a handle on how your game rates generally, which is slow. Unfortunately, I can’t think of any other way to do it. How can a person know whether they’d want to teach a friend a game without first playing it themselves?
It might be possible to carry out this experiment in one evening if you only bring two games – your prototype and a competitor game. If you’re confident the competitor is a good comparison case, that may be enough when combined with comparisons against games the participants already know. If you teach and play more than one game in a session, however, there may be a play-order effect due to player fatigue, so you’ll have to control for that over the course of multiple tests.
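The usual way to control for an order effect is counterbalancing: rotate the play order from session to session so each game spends equal time in each slot. With only two games, that just means alternating which one you teach first; here’s a sketch of the general rotation (the game names are placeholders):

```python
# Counterbalance play order across sessions with a simple rotation (a
# Latin square), so each game occupies each slot (first, second, ...)
# equally often and fatigue effects average out across sessions.
games = ["Prototype", "Competitor A", "Competitor B"]

def rotated_orders(items):
    """Yield one play order per session; after len(items) sessions,
    every game has appeared in every slot exactly once."""
    n = len(items)
    for shift in range(n):
        yield [items[(i + shift) % n] for i in range(n)]

for session, order in enumerate(rotated_orders(games), start=1):
    print(f"Session {session}: {' -> '.join(order)}")
```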
Notice both the In-Store Comparison Test and the Post-Play Comparison Test involve…comparisons…between retail-quality prototypes and known, successful products. Even if the specifics of my proposals turn out to be suboptimal, I think this “direct-comparison principle” will prove critical to good testing, whatever the specific form. It appears most game publishers don’t do such comparison testing (or do they?)
The Big Question: are there better ways to test the commercial prospects of a game than the ones I’ve described here?
I’m also interested in ideas about how smaller publishers can make retail-quality prototypes cheaply and efficiently, since I doubt truly effective testing can be done without them. Alternatively, if you think this assumption is wrong, I’d like to know your reasons for thinking so.