Here’s the full breakdown of Oscar forecasts from me and the three others interviewed in the Wall Street Journal. Should make for an interesting post-mortem!
This post has been covered by the Wall Street Journal!
Last post, I described my mildly obsessive strategy for making predictions in my Oscar pool this year. I’ve been driven to such measures by the repeated and humiliating losses to my brother Jon for the past quarter-score.
To recap that post: the common wisdom among Oscar pundits is that the “precursor” awards which happen before the Academy Awards (e.g. the Golden Globes, Screen Actors / Directors / Producers Guild, BAFTA, and the Critics Choice awards) tend to correlate with who wins Oscars. From what I understand, Jon looks over these when choosing a winner. I tried the same thing last year, and was on track to win until Meryl Streep won Best Actress in an upset (jerk). That night I made a vow – never again.
So, I decided to take the same approach this year, but make it more systematic. I grabbed 20 years’ worth of award data from the Internet Movie Database and built a model that accounts for how well each precursor ceremony predicts the Oscars (different ceremonies do better in different categories). My theory is that this might provide an edge on close calls. Our Oscar pool also has an interesting twist: we have some freedom to down-weight predictions we aren’t sure of. Since my model makes probabilistic estimates, it also gives a strategy for how much weight to put on each prediction.
Now that all of the precursor awards have taken place, I can apply the model to the 2013 awards. Here are the main results (who really cares about Best Live Action Short? Sorry, guy nominated for Best Live Action Short), with some punditry for good measure:
Best Picture
Les Misérables (12%)
Argo has swept the dramatic awards, and Les Mis won the Golden Globe for Best Comedy/Musical. The last time a movie swept the dramatic awards and lost Best Picture was when Brokeback Mountain lost to Crash.
Best Actor
Daniel Day-Lewis (65%)
Hugh Jackman / Denzel Washington (10%)
Another straightforward call, as Daniel Day-Lewis has swept the dramatic acting categories this year. Plus, people love that guy. A DDL loss would be a repeat of 2002, when Denzel Washington (Training Day) unexpectedly beat Russell Crowe (A Beautiful Mind), who had also swept the dramatic awards. The SAG and Critics Choice awards best predict this category.
Best Actress
Jennifer Lawrence (70%)
Jessica Chastain (20%)
Sorry, Quvenzhané Wallis. You may be who the Earth is for, but you aren’t who this award is for. Choosing between Lawrence and Chastain is tricky: Lawrence won the Golden Globe (Comedy/Musical) and the SAG award, while Chastain won the Golden Globe (Drama) and the Critics Choice award. The SAG is the best predictor, so the model prefers Jennifer Lawrence. If I hadn’t used a model, I would have guessed Jessica Chastain, thinking that dramatic movies have a better shot at winning Oscars than rom-coms. C’mon, math…
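The post doesn’t show the model itself, so here is a hedged sketch of the general idea: a weighted vote in which each ceremony’s say is its historical accuracy in the category. The specific weights and the weighted-vote rule below are my own illustrative assumptions, not the actual model.

```python
# Hypothetical sketch of a precursor-weighted Oscar predictor.
# Assumption (mine, not the post's): each ceremony's weight in a
# category is its historical accuracy at predicting that category.

def predict(category_weights, precursor_winners):
    """Turn precursor wins into a probability estimate over nominees.

    category_weights: {ceremony: historical accuracy in this category}
    precursor_winners: {ceremony: nominee who won there}
    """
    scores = {}
    for ceremony, winner in precursor_winners.items():
        w = category_weights.get(ceremony, 0.0)
        scores[winner] = scores.get(winner, 0.0) + w
    total = sum(scores.values())
    return {nominee: s / total for nominee, s in scores.items()}

# Best Actress, roughly mirroring the post: SAG is the strongest
# predictor, so the pick that won SAG gets the edge. Weights are made up.
weights = {"SAG": 0.9, "Golden Globe (Drama)": 0.6,
           "Golden Globe (Comedy)": 0.5, "Critics Choice": 0.6}
winners = {"SAG": "Jennifer Lawrence",
           "Golden Globe (Comedy)": "Jennifer Lawrence",
           "Golden Globe (Drama)": "Jessica Chastain",
           "Critics Choice": "Jessica Chastain"}
probs = predict(weights, winners)
# Lawrence edges out Chastain because the heavier-weighted SAG went her way.
```

With these toy numbers the split is roughly 54/46, which matches the flavor of the call above: a close race tipped by whichever ceremony is the better historical predictor.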
Best Supporting Actress
Anne Hathaway (87%)
Everyone else (2-5%)
The easiest prediction of the bunch. She’s been unbeatable in other ceremonies, so there’s no reason not to pick her based on the data.
Best Supporting Actor
Tommy Lee Jones (47%)
Christoph Waltz (30%)
This seems to be the most controversial category. The New York Times is predicting that Robert De Niro will win, based on his aggressive Oscar campaigning and his icon status (two factors not present in my model, which gives him a 5% shot). Tommy Lee Jones won the SAG award, which best predicts the acting categories. Christoph Waltz won both the Golden Globe and the BAFTA. Historically, this is a difficult category to predict from precursor ceremonies.
Best Director
Who knows? Ben Affleck has swept the other ceremonies, but was notoriously not nominated for an Oscar this year, so there’s very little award information to go on.
My model gives a slight preference to Ang Lee / Life of Pi (45%) over Steven Spielberg / Lincoln (40%), based on which ceremonies each was nominated for. My model isn’t really precise to within 5%, so this isn’t a statistically meaningful edge. Personally, I’m inclined to think Steven Spielberg will win (since, you know, he’s Steven Spielberg, and it’s been a while since he won. Poor little guy.)
Best Animated Feature
Another toss-up, between Wreck-It Ralph (47%) and Brave (40%). Both the BAFTA and Critics Choice awards tend to predict the correct winner about 90% of the time, but this year they went to different films (BAFTA -> Brave, Critics Choice -> Wreck-It Ralph). I love Pixar, but their recent movies aren’t as good as the ones from 5 years ago, and I loved Wreck-It Ralph. I’m rooting for that movie, and its sweet ’80s Nintendo soundtrack.
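There’s a clean reason why two individually strong predictors that disagree leave a toss-up. Here is a small Bayes calculation for a hypothetical two-nominee race, assuming each ceremony is independently right 90% of the time; this is my simplification for illustration, not the post’s actual model.

```python
# Hypothetical two-nominee Bayes update: two predictors, each correct
# with probability 0.9, pick different films.
acc = 0.9

# Likelihood of the observed disagreement under each hypothesis:
# if Brave is the true winner, BAFTA was right and Critics Choice wrong,
# and vice versa if Wreck-It Ralph is the true winner.
like_brave = acc * (1 - acc)   # BAFTA right, Critics Choice wrong
like_ralph = (1 - acc) * acc   # BAFTA wrong, Critics Choice right

# With an even prior, the two posteriors are equal: the signals cancel.
post_brave = like_brave / (like_brave + like_ralph)
post_ralph = like_ralph / (like_brave + like_ralph)
# post_brave == post_ralph == 0.5
```

The disagreement washes out exactly, so the remaining 47/40 gap in the actual forecast has to come from other signals (and from the other nominees soaking up probability), not from BAFTA or the Critics Choice award.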
Best Foreign Language Film
A Royal Affair (15%)
Historically, this is a hard award to predict from precursor ceremonies. This year Amour won the BAFTA, the Critics Choice award, and the Golden Globe, so I think its odds are pretty good. It is also nominated for Best Picture, for which it doesn’t stand a chance; I think voters will feel bad and give it the foreign-language Oscar instead.
Best Original Screenplay
(73%) -> (58%); Flight (15%) -> Zero Dark Thirty (36%)
Update (Feb 24, 7PM): I noticed an error in my Writers Guild Award data (an hour before the ceremony!). I had incorrectly stored the Original Screenplay winner as Flight instead of Zero Dark Thirty, and the Adapted winner as Silver Linings Playbook instead of Argo. This changes the predictions in the two writing categories.
Best Adapted Screenplay
Silver Linings Playbook (65%) -> Argo (65%); Argo, Lincoln (12% each) -> Lincoln, Silver Linings Playbook (11% each)
Update: See above.
That’s about it for the major-ish categories. Most of the minor categories don’t have equivalent awards at other ceremonies, so the model’s predictions aren’t very compelling (sorry, lady nominated for Best Makeup and Hairstyling).
This was an interesting exercise. One of the key lessons (if you see this stuff as teaching-moment material) is that there’s a lot of irreducible uncertainty in predicting Oscar winners from the other awards: typical forecast accuracies are around 60%, clearly better than random guessing from a field of 5-7 nominees, but nowhere near a lock.
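For scale, here is the random-guessing baseline that the ~60% figure is being compared against:

```python
# The ~60% model accuracy vs. the random-guessing baseline for
# fields of 5-7 nominees.
model_acc = 0.60
for n in (5, 6, 7):
    baseline = 1 / n
    print(f"{n} nominees: random {baseline:.0%} vs. model {model_acc:.0%}")
# Random guessing lands at roughly 14-20%, so ~60% is a 3-4x
# improvement, yet it still misses about 2 of every 5 categories.
```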
Perhaps models with more information could do better (genre information seems particularly relevant). However, even the people who do this stuff for a living usually only get ~75% of the categories right. So maybe it’s just hard to predict.
Or maybe we just haven’t seen the Nate Silver of Oscar forecasting yet.
Apparently, Nate Silver is the Nate Silver of Oscar forecasting. His method and conclusions are largely the same as what’s posted here. That’s encouraging.