How Our College Football Playoff Predictions Work

References: Associated Press Top 25 poll / College Football Playoff selection committee’s rankings / Elo rating / ESPN’s Football Power Index

The Details

The goal of any statistical model is to represent events in a formal, mathematical way — ideally, with a few relatively simple mathematical functions. Simpler is usually better when it comes to model-building. That doesn’t really work, however, in the case of the College Football Playoff’s selection committee, the group tasked with picking the nation’s four best teams at the end of each season. As you might imagine from a bunch of former coaches and college-administration types, they can sometimes resist the clean logic that an algorithm would love to impose. So while we’ve found that our model can do a reasonably good job of anticipating their decisions, it has to account for the group behaving in somewhat complicated ways.

That’s one of the challenges our College Football Playoff forecast faces, but one of the fun parts, too. Unlike our other prediction models, which only really try to predict the outcomes of games, this one also tries to predict the behavior of the humans on the selection committee. Here’s a rundown of how we go about doing that.

The key characteristics of the model are that it’s iterative and probabilistic. It’s iterative in that it simulates the rest of the college season one game (and one week) at a time, instead of jumping directly from the current playoff committee standings to national championship chances. And it’s probabilistic in that it aims to account for the considerable uncertainty in the playoff picture, both in terms of how the games will turn out and in how the humans on the selection committee might react to them.

Games are simulated mostly using ESPN’s Football Power Index (FPI). We say “mostly” because we’ve also found that giving a little weight to the playoff committee’s weekly rankings of the top 25 teams helps add to the predictions’ accuracy. (We use the Associated Press Top 25 poll as a proxy for the committee’s rankings until the first set of rankings is released in the second half of the season.) Specifically, the model’s game-by-game forecasts are based on a combination of FPI ratings and committee (or AP) rankings — 75 percent on FPI and 25 percent on the rankings.[1] (A brief code sketch of this blend appears below.)

[1] Because the committee ranks only the top 25 teams, we estimate how it would rate the remaining Football Bowl Subdivision teams based on our Elo ratings, which we’ll discuss a little later.

In many ways, that’s the simple part. While predicting games isn’t always the easiest endeavor, there’s a science to it that we’ve applied across our many sports interactives over the years. But the next part, the process of predicting the human committee, is unique to our college football model.

After each set of simulated games, our system begins to guess how the committee will handle those results. These predictions account for the potential margin of victory in each game and for the fact that some wins and losses matter more than others. To assist with this part of the process, alongside a separate formula based simply on wins and losses, we use a version of our old friend the Elo rating. In other sports, we use Elo to help predict the games, but in this case, we mainly rely on it to model how college football’s powers that be tend to react to which teams won and how they did it. This special version of Elo is designed to try to mimic the committee’s behavior. We’ve calculated these Elo ratings back to the 1869 college football season.
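To make the game-simulation step concrete, here is a minimal sketch of how a 75/25 blend of FPI ratings and committee (or AP) rankings could drive a probabilistic game simulation. Only the 75/25 weighting comes from this article; the rank-to-rating conversion, the logistic scale and the home-field edge are illustrative assumptions, not FiveThirtyEight’s actual formulas.

```python
import math
import random

# Sketch of the game-forecast blend: 75 percent FPI, 25 percent committee/AP
# rankings (the weights are from the article). The rank-to-rating conversion,
# the logistic scale and the home-field edge are illustrative assumptions.
# Teams outside the top 25 would need the Elo-based estimate from footnote [1].

FPI_WEIGHT, RANK_WEIGHT = 0.75, 0.25

def rank_to_rating(rank):
    """Map a committee/AP rank (1 = best) onto a rough points-style scale."""
    return 26 - rank  # hypothetical: No. 1 -> +25, No. 25 -> +1

def blended_rating(fpi, rank):
    """Blend an FPI rating with the rank-based rating, 75/25."""
    return FPI_WEIGHT * fpi + RANK_WEIGHT * rank_to_rating(rank)

def win_probability(team_a, team_b, home_edge=2.5):
    """Logistic win probability for team_a from the blended-rating gap."""
    diff = blended_rating(*team_a) - blended_rating(*team_b) + home_edge
    return 1.0 / (1.0 + math.exp(-diff / 10.5))  # 10.5-point scale is assumed

def simulate_game(team_a, team_b):
    """One probabilistic draw; the model repeats this game by game, week by week."""
    return "A" if random.random() < win_probability(team_a, team_b) else "B"

# Two hypothetical teams as (FPI rating, committee rank) pairs.
print(round(win_probability((22.0, 2), (15.0, 7)), 3))
```

In the full forecast, a draw like this would be repeated for every remaining game, one week at a time, across many simulated seasons.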
Between each season, these Elo ratings are reverted partly to the mean, to account for roster turnover and so forth. We revert teams to the mean of all teams in their conference, rather than to the mean of all Football Bowl Subdivision teams (a brief code sketch of this reversion step appears a few paragraphs below). Thus, teams from the Power Five conferences[2] — especially the SEC — start out with a higher default rating.[3] As a consequence of this, our system also gives teams from power conferences more advantages, because that’s how human voters tend to see them.

[2] The ACC, the Big Ten, the Big 12, the Pac-12 and the SEC.

[3] To be more precise, our model treats conferences as existing along a spectrum, rather than in binary groups of “power” and “minor” conferences.

This conference-centric approach both yields more accurate predictions of game results and better mimics how committee and AP voters rank the teams. For better or worse, teams from non-power conferences (except Notre Dame, that special snowflake among independents) rarely got the benefit of the doubt under the old BCS system, and that’s been the case under the selection committee as well.

Some of the model’s complexity comes in trying to model when the selection committee might choose to break its own seemingly established rules. For example, we discovered in 2014 — when the committee excluded TCU from the playoff even though the Horned Frogs held the No. 3 spot in the committee’s penultimate rankings and won their final game by 52 points — that the committee isn’t always consistent from week to week. Instead, it can re-evaluate the evidence as it goes. If, for instance, the committee has an 8-0 team ranked behind a 7-1 team, there’s a reasonable chance that the 8-0 team will leapfrog the other in the next set of rankings even if both teams win their next game in equally impressive fashion. That’s because the committee defaults toward looking mostly at wins and losses among power conference teams while putting some emphasis on strength of schedule and less on margin of victory or “game control.”

We’ve had to add other wrinkles to the system over the years. Before the 2015 season, for example, we added a bonus for teams that win their conference championships, since the committee explicitly says that it accounts for conference championships in its rankings (although exactly how much it weights them is difficult to say).[4] And late in 2016, we added an adjustment for head-to-head results, another factor that the committee explicitly says it considers. If two teams have roughly equal résumés but one of them won a head-to-head matchup earlier in the season, it’s a reasonably safe bet that the winner will end up ranked higher.

[4] Determining how much a conference championship matters is tricky because a team that wins a championship game has a lot of other things going for it — for instance, by virtue of winning its conference’s championship game, a team gets an additional head-to-head win against another strong team, something the committee (and our model) already values highly.
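Here is the reversion sketch promised above: a minimal illustration of pulling a team’s end-of-season Elo rating partway back toward its conference mean. The one-third reversion fraction is an assumed placeholder; the article says only that the reversion is partial and conference-based, not how strong it is.

```python
# Sketch of the between-season step: revert a team's Elo partway toward the
# mean of its own conference (not the FBS-wide mean). The one-third reversion
# fraction is an assumed placeholder, not FiveThirtyEight's actual value.

REVERSION = 1.0 / 3.0

def preseason_elo(team_elo, conference_elos):
    """Pull an end-of-season Elo rating toward the team's conference mean."""
    conference_mean = sum(conference_elos) / len(conference_elos)
    return team_elo + REVERSION * (conference_mean - team_elo)

# A 1900-rated team in a conference averaging 1700 opens the next season
# closer to its league's baseline, which is why power-conference teams
# start out with a higher default rating.
print(preseason_elo(1900, [1700, 1650, 1750, 1800, 1600]))  # ~1833
```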
Going into 2019, we also added a tweak to how we treat independents — most notably Notre Dame (remember, special snowflakes and all that) — when they have a strong season. In previous years, our model handled a team like the Fighting Irish by assessing their résumé using the tools above but not giving them any kind of conference championship bonus … since they are, you know, not in a conference. This ended up significantly underrating Notre Dame’s chances of making the playoff, because the selection committee effectively treats the Irish as if they had won a conference (or something close to it) if they make it to the end of the season undefeated or with just one loss.

To deal with this piece of college football reality, we now assign the conference championship bonus to independents on a fractional basis, depending on their win-loss record. These fractions are based on how often different win and loss totals (up through and including championship week, but not bowl games) were associated with winning a conference championship in the CFP era, in conferences that have championship games. Here are those percentages:

How to give conference-champion credit to independents
Chance of winning a conference championship based on both wins and losses (through championship week but excluding bowls), 2014-18

  Wins    Conference title %    |    Losses    Conference title %
  13      100                   |    0         100
  12      89                    |    1         74
  11      64                    |    2         31
  10      26                    |    3         14
  <=9     <1                    |    >=4       <1

Source: ESPN

For an independent team with a given record, we average together the fractional chance of winning a conference based on its wins with the chance based on its losses. So an 11-1 Notre Dame team would receive (0.64 + 0.74) / 2 = 0.69 of a conference championship bonus added to its playoff bona fides. (A short code sketch of this calculation appears at the end of this article, after the version history.)

One last note here: The value of our conference championship bonus depends on the quality of a school’s conference. So in the case of independents, Notre Dame is treated as being in the equivalent of an average-strength power conference. For other independents, their “conference” strength is estimated based on their Elo rating.

Even after all of these adjustments, there are no guarantees. So not only do we account for the uncertainty in the results of the games themselves, but we also account for the error in how accurately we can predict the committee’s ratings. Because the potential for error is greater the further you are from the playoff, uncertainty is higher the earlier you are in the regular season. In early October, for example, as many as 15 or 20 teams will still belong in the playoff “conversation.” That number will gradually be whittled down — probably to around five to seven teams before the committee releases its final rankings.

Editor’s note: This article is adapted from previous articles about how our College Football Playoff predictions work.

Model creator: Nate Silver, FiveThirtyEight’s founder and editor in chief (@NateSilver538).

Version History
1.6 (Sept. 19, 2019): Forecast updated for 2019 season; conference champion adjustment added for independents.
1.5 (Oct. 4, 2018): Forecast updated for 2018 season.
1.4 (Oct. 5, 2017): Forecast published for 2017 season; game-by-game forecasts incorporate team rankings, power conferences given a boost, AP poll used before committee releases rankings.
1.3 (Dec. 2, 2016): Head-to-head results incorporated into model.
1.2 (Nov. 1, 2016): Forecast published for 2016 season.
1.1 (Nov. 3, 2015): Forecast published for 2015 season; conference champion bonus added, uncertainty increased.
1.0 (Nov. 21, 2014): College Football Playoff model first published for the 2014 season.
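As promised above, here is a minimal sketch of the fractional conference-title credit for independents. The table values are the ones published above; the function names, the lookup structure and the final scaling of a full conference-title bonus are illustrative, not FiveThirtyEight’s actual code.

```python
# Fractional conference-championship credit for independents, using the table
# above (2014-18 seasons, records through championship week, excluding bowls).
# The lookup-and-average step mirrors the article's description; the names and
# the bonus-scaling helper are illustrative.

TITLE_PCT_BY_WINS = {13: 1.00, 12: 0.89, 11: 0.64, 10: 0.26}   # <= 9 wins: ~0
TITLE_PCT_BY_LOSSES = {0: 1.00, 1: 0.74, 2: 0.31, 3: 0.14}     # >= 4 losses: ~0

def fractional_title_credit(wins, losses):
    """Average the win-based and loss-based chances of winning a conference."""
    by_wins = TITLE_PCT_BY_WINS.get(min(wins, 13), 0.0)   # <= 9 wins falls to ~0
    by_losses = TITLE_PCT_BY_LOSSES.get(losses, 0.0)      # >= 4 losses falls to ~0
    return (by_wins + by_losses) / 2.0

def independent_bonus(wins, losses, full_conference_bonus):
    """Scale the full bonus (itself conference-strength dependent) by the credit."""
    return fractional_title_credit(wins, losses) * full_conference_bonus

# An 11-1 Notre Dame team gets (0.64 + 0.74) / 2 = 0.69 of the full bonus.
print(round(fractional_title_credit(11, 1), 2))  # 0.69
```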



GE and Hitachi want to use nuclear waste as a fuel

(PhysOrg.com) — One of the world’s biggest providers of nuclear reactors, GE Hitachi Nuclear Energy (a joint venture of General Electric and Hitachi), wants to reprocess nuclear waste for use as a fuel in advanced nuclear power plants, instead of burying it in waste repositories such as that proposed at Nevada’s Yucca Mountain.

Conventional nuclear power plants in the US only harness around five percent of the energy of nuclear fuels. The reprocessing technique would separate nuclear waste into different types of fuels, some of which can be used in conventional nuclear power plants, and some of which can only be used in advanced fast neutron reactors. Reprocessing of nuclear waste to extract more usable fuel has been criticized in the US because it produces pure plutonium, which could be stolen and used to make nuclear weapons. To get around this difficulty, GE Hitachi’s proposed method produces a fuel that is much harder to steal.

The GE Hitachi process separates wastes from conventional nuclear power plants into three streams by applying a voltage to a molten salt. The first waste material consists of the products of fission, which cannot be further used as fuel and will need to be stored, but the storage time required is reduced from tens of thousands of years to a few hundred years (although a small fraction of the material will still need to be stored for over 10,000 years). The second material is uranium that does not have enough fissile material to be used in the light water uranium reactors in the US, which need enriched uranium, but it can be used by deuterium (heavy water) uranium reactors, which are used in Canada. The final group of waste products is a mixture of transuranic elements including plutonium and neptunium. The plutonium is not separated from the other elements, and the mixture releases 1,000 times more heat and 10,000 times more neutrons than pure plutonium. This makes it much harder to steal, and therefore less of a security risk, and it is also much easier to detect. The mixture of transuranic elements can be used in nuclear reactors that use molten sodium as the coolant rather than water, a type used in Japan and a few other countries. GE Hitachi has designed a reactor known as the PRISM reactor that would be able to use the mixed fuel, but sodium-cooled reactors have not been approved for use in the US.

A GE Hitachi spokesman said previous US administrations had little interest in re-using spent nuclear fuel, but the Obama administration is increasing support for nuclear power and looking at possibilities such as reprocessing. If adopted, the proposal would significantly decrease the amount of dangerous nuclear waste that needs to be stored.

© 2010 PhysOrg.com. Citation: “GE and Hitachi want to use nuclear waste as a fuel” (2010, February 18), retrieved 18 August 2019 from https://phys.org/news/2010-02-ge-hitachi-nuclear-fuel.html


Power tariff remains unchanged in West Bengal

Kolkata: The power tariff in West Bengal remained unchanged for 2017-18 despite a rise in fuel costs, the state’s Electricity Regulatory Commission (WBERC) said today.

“After review, we have not increased the tariff for 2017-18 keeping the interest of consumers in mind, in spite of huge pressure from the major utilities to increase the tariff,” WBERC chairman R N Sen told PTI. Private utility CESC and the state-run West Bengal State Electricity Distribution Company Limited (WBSEDCL) had put pressure on the Commission to hike the tariff due to the rise in fuel costs, he said. The CESC covers Kolkata and Howrah, while the WBSEDCL caters to consumers in the rest of the state.

The WBERC had earlier declared the 2017-18 tariff under its multi-year tariff scheme; at the utilities’ request the tariff was reviewed, but it remained unchanged. The CESC had asked for a revised tariff of around Rs 8.40 per unit and the WBSEDCL had sought about Rs 7.70 per unit. But the average power tariff of the WBSEDCL remained unaltered at Rs 7.12 per unit, while that of the CESC remained unchanged at Rs 7.31 per unit. The tariff for 2018-19 is yet to be announced.

State Power Minister Shobhandeb Chatterjee said, “We are happy that the Commission has not put any additional burden on consumers. We will still be paying subsidy for earlier rise, upto 300 units.”


Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows an orientation bias

When Janelle Shane, a research scientist in optics, fed the infamous rabbit-duck illusion to the Google Cloud Vision API last week, it returned “rabbit” as a result. However, when the image was rotated to a different angle, the Google Cloud Vision API predicted “duck.” Inspired by this, Max Woolf, a data scientist at BuzzFeed, tested the behavior further and concluded that the result really does vary based on the orientation of the image.

Google Cloud Vision provides pretrained API models that allow you to derive insights from input images. The API classifies images into thousands of categories, detects individual objects and faces within images, and reads printed words within images. You can also train custom vision models with AutoML Vision Beta.

Woolf used Python to rotate the image and get predictions from the API for each rotation. He built the animations with R, ggplot2, and gganimate, and rendered them with ffmpeg.

In deep learning, models are often trained with rotated copies of the input images to help them generalize better. Seeing the results of the experiment, Woolf concluded, “I suppose the dataset for the Vision API didn’t do that as much / there may be an orientation bias of ducks/rabbits in the training datasets.”

The reaction to this experiment was split. While many Reddit users felt that there might be an orientation bias in the model, others felt that because the image is ambiguous there is no “right answer,” and hence no problem with the model. One Redditor said, “I think this shows how poorly many neural networks are at handling ambiguity.” Another commented, “This has nothing to do with a shortcoming of deep learning, failure to generalize, or something not being in the training set. It’s an optical illusion drawing meant to be visually ambiguous. Big surprise, it’s visually ambiguous to computer vision as well. There’s not ‘correct’ answer, it’s both a duck and a rabbit, that’s how it was drawn. The fact that the Cloud vision API can see both is actually a strength, not a shortcoming.”

Woolf has open-sourced the code used to generate this visualization on his GitHub page, which also includes a CSV of the prediction results at every rotation. If you are curious, you can also test the Cloud Vision API with the drag-and-drop UI provided by Google.
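For readers who want to try a similar rotation experiment, here is a minimal sketch (not Woolf’s actual script) using Pillow and the google-cloud-vision Python client. The image path and the 15-degree step are placeholders, the snippet assumes valid Google Cloud credentials, and the `vision.Image` constructor shown matches the 2.x client.

```python
# Minimal sketch: rotate an image and ask Cloud Vision for labels at each angle.
# Assumes `pip install pillow google-cloud-vision` and that
# GOOGLE_APPLICATION_CREDENTIALS points at a valid service-account key.
# "rabbit_duck.png" is a placeholder path, not a bundled file.
import io

from PIL import Image
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def labels_at_angle(path, angle):
    """Rotate the image by `angle` degrees and return (description, score) pairs."""
    img = Image.open(path).convert("RGB")
    rotated = img.rotate(angle, expand=True, fillcolor="white")
    buf = io.BytesIO()
    rotated.save(buf, format="PNG")
    response = client.label_detection(image=vision.Image(content=buf.getvalue()))
    return [(label.description, label.score) for label in response.label_annotations]

for angle in range(0, 360, 15):  # 15-degree steps, an arbitrary choice
    print(angle, labels_at_angle("rabbit_duck.png", angle)[:3])
```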
