Oscar Predictions Analysis: How did we do?


For those who don’t spend an egregious amount of time on mostly pointless award shows, the 89th Academy Awards were last night. In the week leading up, I attempted to use statistical analyses to create an Oscar predictions system to see who would take home a statue. Now that we know who really walked away with a win (including the most surprising award in Academy Awards history), it’s time to see if it was all worth it. Did you do better than our prediction system? Let’s find out!

The Record

First off, calling it "the prediction system" was getting pretty boring, so I decided to name it after the director of the 2017 Best Picture winner: Jenkins.

Using data from guild and critics' awards from the past 25 years, Jenkins was able to generate predictions for 21 of the 24 awards. The remaining three, the Documentary, Live-Action, and Animated Shorts, were picked off the cuff. I disagreed with three of Jenkins' predictions: Best Picture, Sound Editing, and Adapted Screenplay.

Jenkins correctly predicted 62% of the awards it called (13 of 21), failing on: Best Picture, Actor, Actress, Adapted Screenplay, Sound Mixing, Production Design, Costume Design, and Film Editing.
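As a quick sanity check, here's the arithmetic behind that record (a sketch; the 24/21/8 figures come straight from the counts above):

```python
# Jenkins's record, per the figures above.
total_categories = 24
predicted_by_model = 21   # the three shorts were picked off the cuff
missed_by_model = 8       # the eight categories listed above

correct = predicted_by_model - missed_by_model
accuracy = correct / predicted_by_model
print(f"{correct}/{predicted_by_model} = {accuracy:.0%}")  # prints 13/21 = 62%
```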

So why did Jenkins fail?

Very few groups give awards for Costume Design, Production Design, and Sound Mixing, so there's not much data to work with in those categories. For Film Editing, Arrival and Hacksaw Ridge had equal scores, so it came down to a coin toss. Arrival had the best Adapted Screenplay score, but only because Moonlight won best original screenplay awards elsewhere due to this year's strange categorization. I can give Jenkins a pass in all of these categories.

Jenkins was convinced that Isabelle Huppert would win Best Actress, though the margin between her and Emma Stone was small. Huppert took the predictive critics' awards (London, Boston, LA, and Florida) and the Golden Globe for drama, but Stone took the SAG and the BAFTA. Here I think the loss stems from the weight of who is voting: while Huppert may have been the choice of the critics' societies, Stone was the favorite among actors, who make up the largest bloc of the Academy.

I can’t blame Jenkins for failing to predict Best Actor. While there was not a consensus on whether Denzel Washington would take home the win, it seemed incredibly likely given his SAG award. Since 2000, only four actors have lost the SAG but won the Oscar: Russell Crowe, Adrien Brody, Sean Penn, and, ironically, Denzel Washington. The problem here may lie with the weight given to the SAG: if a nominee takes it, Jenkins gives them the win. Next year I may add a weight for first-time winners, since only nine actors in the history of the awards have won Best Actor more than once.
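Jenkins's internals aren't shown here, but a precursor-award scorer of the kind described, with the first-time-winner weight proposed above, might look something like this. Everything here is hypothetical: the weight values, the nominee data, and the field names are illustrative, not the actual system.

```python
# Hypothetical sketch of a precursor-awards scorer with a
# first-time-winner bonus. Weights and data are illustrative only.
PRECURSOR_WEIGHTS = {
    "SAG": 3.0,          # actors are the Academy's largest voting bloc
    "BAFTA": 2.0,
    "Golden Globe": 1.5,
    "Critics": 1.0,
}
FIRST_TIME_BONUS = 0.5   # the tweak proposed above for repeat winners

def score(nominee):
    """Sum the weights of the precursor awards a nominee won."""
    s = sum(PRECURSOR_WEIGHTS[a] for a in nominee["wins"])
    if not nominee["prior_oscar"]:
        s += FIRST_TIME_BONUS
    return s

nominees = [
    {"name": "A", "wins": ["SAG", "Golden Globe"], "prior_oscar": True},
    {"name": "B", "wins": ["BAFTA", "Critics"], "prior_oscar": False},
]
pick = max(nominees, key=score)
print(pick["name"])  # prints A (4.5 points vs. 3.5)
```

With these made-up weights, the SAG still dominates; the bonus only breaks near-ties in favor of first-time nominees, which is the behavior the paragraph above is asking for.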

This brings us to the final category, Best Picture. La La Land was Jenkins' favorite by a mile, to no one's surprise. When a predictive system is based on other awards, and a film wins almost all of those awards, it is going to be the favorite.

And therein lies the drawback to solely using statistics, past performance, and other awards to predict the Oscar winners. Jenkins doesn’t take other variables into account, like changing demographics, progressive ideological shifts in the consumer base, external pressure, etc. And that’s when you have to rely on your gut.

So was it all worth it?

This is up for debate. Overriding Jenkins in the two categories where I felt most strongly he would be wrong netted me 67% accuracy. Not bad at all, and in theory, better than guessing at random. Here's how Jenkins and I compared to others:

My Oscar Pool: Average 46% correct

Golden Derby Public: 67% correct

Golden Derby Experts: 67% correct

Vanity Fair: 75% correct

So basically: good, not great.

Here's what I'll say: don't spend five years trying to put together a predictive system. It's a good way to build a foundation, but in the end there are too many variables to get much better than ~70% accuracy. It's just not worth it.

Well, until I build an even better Jenkins 2.0 next year, that is.


How did you do with your predictions? Let us know if you beat Jenkins, and your thoughts on the 89th Academy Awards!

Eric Morales (https://oneticketpleaseblog.wordpress.com)
Eric Morales is from the bear-ridden schools of Wyoming, but is in his fifth year in Chicago. More importantly, he achieved minor Twitter fame once and hasn't stopped bringing it up since. He has a healthy obsession with Star Wars, Wonder Woman, Avatar: The Last Airbender, and Bulbasaur. Please validate him by following him on Twitter, @ericsmorals

2 COMMENTS

  1. One thing that couldn’t be considered statistically is the impact of the OscarsSoWhite campaign from last year and the voters’ need to prove themselves to people of color. Unless you did adjust for that somehow? I’m curious to know your thoughts.

    • You are absolutely correct; I don’t believe it’s statistically possible to account for a massive socio-political shift, at least in a model this simple. So Jenkins called La La Land for Best Picture, and rightfully so by the numbers. However, I adjusted my own prediction based on the shifting makeup of the Academy and its voting patterns, in addition to the financial and PR blowback from #OscarsSoWhite. Well, that and acknowledging that Moonlight really was the best film of the year. Thanks for reading!

Comments are closed.