This week, most of the major figures in film-making will gather in Hollywood for the 89th annual Oscars ceremony. You can bank on seeing a few painfully inane red carpet interviews, several fawning acceptance speeches and some jokes that fall flat. In all likelihood, there will be one more certainty on the night – an award or two whose logic will be questioned for years to come.
It’s now over a decade since the race-relations melodrama Crash pipped Brokeback Mountain to the 2006 Best Picture award, and it still tops most lists of history’s least explicable choices. But despite the occasional curveball, the Oscars are actually remarkably predictable – if you look in the right place for information.
You’re just so predictable
If you want to know who’s going to win the awards, your best bet is the bookmakers – especially if you leave it late enough. By the time the ceremony rolls around (after the Golden Globes, BAFTAs and Screen Actors Guild Awards have been and gone), the betting agencies generally have a great handle on who the Academy will recognise.
For example, since 2004 the bookmakers’ favourite has won Best Actor every year apart from one (in 2009, Sean Penn was the narrow second favourite but won for Milk). Over the same period, only two Best Actress favourites have missed out on the Oscar, and both of those winners were second favourites.
In fact, across the six main categories – Best Picture, Best Director, Best Actor, Best Actress, Best Supporting Actor and Best Supporting Actress – you have to go back a full nine years to find the last time an award was not won by the favourite or second favourite.
Much of the perception that the Academy makes unpredictable decisions comes down to people forgetting what popular opinion was at the time. Take the legendary “upset” win of Crash in 2006: it was actually the second favourite, and it carried plenty of late momentum, with its odds shortening from a huge A$9 to just A$2.50 in the days before the ceremony.
You can see this effect in the chart below. The data were collected from a variety of sources, as close to the awards ceremony as possible for each year. Across the six major categories since 2004, over 82% of the awards have gone to the bookmakers’ favourite. When there’s a red-hot favourite (A$1.20 or below), the awards have been even more predictable: in the last 13 years, no such heavily favoured nominee has failed to take home the award in one of these categories.
This is a remarkable run of predictability. By comparison, in Australia’s major sporting leagues, even contests with favourites at A$1.20 or below are much more uncertain. Over the past four years, around 11% of heavily favoured AFL games have ended in upsets; in the NRL, the rate is even higher, at almost 28%. In this context, the Oscars look like a relative “sure thing”.
The Oscars are chosen by more than 6,000 voting members across the 17 branches of the Academy of Motion Picture Arts and Sciences. Why are they so predictable? Bookmakers derive their odds from public opinion – from where people are putting their money. Perhaps the Oscars are so certain because the earlier awards of the season tip off the public, or maybe punters are simply good at sensing broader opinion. Perhaps, too, there’s the occasional good old-fashioned leak, with an Oscar voter revealing their ballot and shifting the odds.
You can work out roughly how likely the bookmakers rate a nominee’s chances with a simple calculation: divide A$1 by the decimal odds and multiply by 100%. For example, at odds of A$2.50, 2006 Best Picture winner Crash was thought to have about a 40% chance of success.
Over the period of this dataset, the biggest upset was Tilda Swinton’s Best Supporting Actress win for 2008’s Michael Clayton. The bookmakers gave her less than a 10% chance of winning, with odds set at A$11 (an implied probability of about 9%).
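For those who want to play along at home, here’s a minimal Python sketch of that conversion (the function name is my own invention, and the odds are the historical figures quoted above):

```python
def implied_probability(decimal_odds: float) -> float:
    """Convert decimal (A$) odds into the bookmakers' implied chance of winning."""
    return 1 / decimal_odds

# The two historical examples quoted above
for nominee, odds in [("Crash (Best Picture, 2006)", 2.50),
                      ("Tilda Swinton (Best Supporting Actress, 2008)", 11.00)]:
    print(f"{nominee}: A${odds:.2f} -> {implied_probability(odds):.0%}")
# Crash (Best Picture, 2006): A$2.50 -> 40%
# Tilda Swinton (Best Supporting Actress, 2008): A$11.00 -> 9%
```

One caveat: bookmakers build a margin into their prices, so probabilities calculated this way will sum to a little more than 100% across all the nominees in a category.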
Why everyone else gets it wrong
What’s even more remarkable about the predictability of the Oscars is the number of people who overthink things and get it wrong.
Last year, Nate Silver’s data science site FiveThirtyEight collated nine different mathematical models that crunched the available data to produce predictions of the Oscar winners.
Some of these models were built by amateur data scientists (albeit amateurs with PhDs or Harvard degrees) and others by professionals, including teams at Ernst & Young, at the predictive analytics operation Solution by Simulation, and at FiveThirtyEight itself.
Each model used different datasets: some drew on Twitter mentions, others on box office performance, and others still on the themes of historical winners or recent film reviews.
So how did these mathematical models do? Overall, their performance could only be described as miserable. Of the 48 predictions made across the six main categories, only 50% were correct. Some of them even missed near-certainties such as Leonardo DiCaprio (A$1.01, or 99% to win) and Brie Larson (A$1.04, or 96% to win).
Why did these models perform so poorly? You’ve probably heard the term “big data” and the idea that large datasets can be mined for patterns that let us predict the future. While nobody can quite define what “big” means, in this context the Oscar datasets are certainly not “big”.
One data point per category per year, for less than a century, is not much with which to overcome the randomness and unpredictability in the system. For example, there are often short-term trends in the tastes of Oscar voters.
In the 1960s, four musicals won Best Picture. The 1980s seemed to favour films dealing with colonialism and its aftermath. Around the turn of the millennium, the Academy lauded safe, uncontroversial box office hits. From the point of view of calibrating a mathematical model, though, by the time a popular trend has fed into the model, tastes have likely already moved on.
Spoiler alert
This year, there are five short-priced (A$1.20 or below) favourites across the six main categories. As I’ve shown above, it’s well over a decade since any such favourite left empty-handed.
If history repeats itself, it seems safe to assume that the cast and crew of La La Land might just skip, twirl and dance away from Hollywood Boulevard with a little more gold for their mantelpieces. The film itself, actress Emma Stone and director Damien Chazelle are all heavily tipped for success.
Similarly, Mahershala Ali (Best Supporting Actor, Moonlight) and Viola Davis (Best Supporting Actress, Fences) have every reason to feel confident. According to the bookmakers, only this year’s Best Actor race is difficult to call: Casey Affleck’s performance in Manchester by the Sea is favoured at A$1.57, with Denzel Washington close behind at A$2.10.
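To put numbers on that, here’s the same conversion applied to those quoted Best Actor prices (a quick sketch reusing the approach from earlier; remember the bookmaker margin means the two figures won’t sum to exactly 100%):

```python
# Implied win probabilities for the quoted Best Actor prices
for actor, odds in [("Casey Affleck", 1.57), ("Denzel Washington", 2.10)]:
    print(f"{actor}: A${odds:.2f} -> {1 / odds:.0%}")
# Casey Affleck: A$1.57 -> 64%
# Denzel Washington: A$2.10 -> 48%
```

Neither nominee is anywhere near the A$1.20 “sure thing” territory the other five categories enjoy.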
Do remember, however, that odds can shift right up to the night. A week before the 2006 ceremony, the long-standing confidence around Brokeback Mountain started to crumble, and it drifted from a near-certain A$1.10 to a more doubtful A$1.50. With hindsight, those creeping doubts proved correct.
About The Author
Stephen Woodcock, Senior Lecturer in Mathematics, University of Technology Sydney
This article was originally published on The Conversation. Read the original article.