Our Updated Methodology

While our forecast for state legislative elections was not as strong as we had hoped (primarily due to the massive polling errors), we at CNalysis are satisfied with our performance. We predicted 95.08% of state legislative races correctly in 2020, along with 3 out of 4 of the races we identified as competitive. We still endeavor to improve our predictions, and have therefore identified the reason behind each state legislative race we got wrong.

Our qualitative ratings, assembled by our Director, Charles Nuttycombe, have undergone several much-needed adjustments.

What will be weighed more:

  • Electoral trends and demographics
  • Statewide election results in each state legislative district
  • Incumbency (or lack thereof)

What will be weighed less:

  • National and statewide polling
  • Campaign finance
  • Scandals

We are confident that these changes will help our accuracy in forecasting state legislative elections.

Most of the 2020 modeling, done by Jackson Martin, our Head Oddsmaker, will change very little moving forward. We have, however, made some minor (but important) changes to how the model functions, which will in turn help it deliver more accurate results.

The model takes in the collective ratings for an entire chamber and runs 500,000 simulations of the individual elections to estimate the probabilities of the possible chamber outcomes: a supermajority, a majority, or a tie, for either party. The model previously performed 100,000 simulations, but we decided that 500,000 would improve the accuracy enough to justify the additional time it takes to run them. We've also added another rating category and adjusted our Solid rating to reflect the misses we had in 2020.
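In outline, that simulation loop can be sketched as follows. This is a simplified illustration, not CNalysis's actual code: it treats each seat as an independent draw with a Democratic win probability implied by its rating, and it omits the correlated chamber-wide uncertainty that Jackson describes in the next section.

```python
import random

def simulate_chamber(seat_probs, n_sims=500_000, seed=0):
    """Estimate chamber-outcome probabilities from per-seat Democratic
    win probabilities (a hypothetical stand-in for the rating inputs)."""
    rng = random.Random(seed)
    total = len(seat_probs)
    counts = {"dem_majority": 0, "gop_majority": 0, "tie": 0}
    for _ in range(n_sims):
        # Simulate each individual election in the chamber.
        dem_seats = sum(rng.random() < p for p in seat_probs)
        if dem_seats * 2 > total:
            counts["dem_majority"] += 1
        elif dem_seats * 2 < total:
            counts["gop_majority"] += 1
        else:
            counts["tie"] += 1
    # Supermajority probabilities would be tallied the same way,
    # using a two-thirds (or state-specific) threshold instead.
    return {k: v / n_sims for k, v in counts.items()}
```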

Very Likely: 90%

From Jackson:

A major flaw in my previous models was something I was unaware of until Jack Kersting pointed it out to me. The model works by generating two random variables: one for the uncertainty in the chamber-wide environment, and another for the uncertainty in individual seats.

These are then added together and compared to the rating to determine which party won the seat in a given simulation. Say, for example, the chamber-wide variable ends up being 40% and the individual seat variable is 20% (both numbers are on a 0-50% scale). Their sum is 60%; if that sum is less than the Democratic win chance for the seat, then Democrats won the seat in that simulation.
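That two-draw comparison can be sketched as below. This is an illustration of the description above, not the model's actual code: the uniform distributions are an assumption (the text only gives the 0-50% scale), and in the real model the chamber-wide draw would be shared across every seat within a single simulation.

```python
import random

def seat_outcome(dem_win_chance, rng):
    """Decide one seat in one simulation using the two-variable scheme
    described above. `dem_win_chance` is on a 0-100 scale."""
    chamber = rng.uniform(0, 50)  # chamber-wide environment uncertainty
    seat = rng.uniform(0, 50)     # seat-specific uncertainty
    # Democrats win the seat when the combined draw falls below
    # the seat's Democratic win chance.
    return (chamber + seat) < dem_win_chance
```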

When using a model that works this way, you need to adjust the input values so that the simulation's results for individual seats match the ratings they're given; this was my oversight.
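One way to make that adjustment, if the two draws are independent and uniform (again, an assumption on our part): their sum follows a triangular distribution on 0-100, not a uniform one, so comparing it directly against a seat's rating will not reproduce that rating in simulation. Inverting the triangular CDF maps a rating to a threshold that does.

```python
import math

def adjusted_threshold(rating):
    """Map a rating (Democratic win probability, 0-1) to the value the
    sum of two Uniform(0, 50) draws should be compared against, so the
    seat wins in `rating` share of simulations (in expectation).

    The sum's CDF is x^2/5000 for x <= 50 and 1 - (100-x)^2/5000 above;
    this function is its inverse.
    """
    if rating <= 0.5:
        return math.sqrt(5000 * rating)
    return 100 - math.sqrt(5000 * (1 - rating))
```

For example, under these assumptions a seat rated 90% needs a threshold of about 77.6, not 90.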

I was not doing this in 2020 (because I was unaware of its necessity), which led the model to have inflated certainty in chambers in the Tilt-to-Likely range. I am happy to report that this error has been resolved, and our model will incorporate these changes moving forward.