Results for “model this”
3181 found

Is this about the poll or about the people? (model this)

Model this, NBA rebounding edition

There has staggeringly been only one U.S. representative among the league’s top 10 rebounders for each of the past three seasons.

That is from Marc Stein.  How did that happen?  Where is the current-day Charles Oakley?  Moses Malone, our nation turns its lonely eyes to you!  Dennis Rodman would do as well.  Yet here is the list of the top rebounders from last year in the NBA, and yes Rudy Gobert is French.

Of course, you all know that in the key FIBA games the U.S. squad was badly out-rebounded by a number of nations, including tiny Lithuania (their population is tiny, not their size per person).  And Bam Adebayo was tired from the Finals and was not available.

Why has the balance of rebounding power turned so seriously against U.S. basketball players?  Is it that all the tall ones are nowadays being hired by Goldman Sachs or OpenAI?  Somehow that doesn’t seem right to me.

One hypothesis is that today the game demands a broader set of skills, and more teamwork, than in earlier times.  Charles Oakley still would make the NBA today, but perhaps as an 11th man, rather than as a regular player who could hone his skills and become a leading figure.  In other words, the return to training big men has (maybe) gone up a lot.  Simply being big and strong yields a smaller return than before, because on offense they are counting on you to hit that open three-point shot.  On defense, they are counting on you to rotate on perimeter defense in a manner that Oakley did not have to worry about so much.  And so on.

And maybe the European and other teams do a better job training their big men at younger ages.  The European big guys do in fact have excellent long-distance shooting and often higher-quality passing skills.  That U.S. players (mostly) leave college after their first year does not help with this.

And thus, in that equilibrium, the better shooting makes the teams, as a whole, better rebounders as well.  That is a modestly counterintuitive conclusion.
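Here is one way to make that argument concrete: a toy numerical sketch of the “shooting keeps the rebounder on the floor” story, with every parameter (the minutes rule, the skill levels, the rebound rates) invented purely for illustration rather than taken from any data.

```python
# Toy sketch of the equilibrium claim above: a big man who can shoot stays on
# the floor, so his rebounding shows up in the team total.  All numbers are
# invented for illustration; nothing here is real NBA data.

def big_man_minutes(shooting_skill, shooting_premium=0.7):
    # Assumption: today's coaches weight shooting heavily when deciding
    # whether a big man stays on the floor.
    return 36 * min(1.0, (1 - shooting_premium) + shooting_premium * shooting_skill)

def team_rebounds(big_shooting, big_rebound_rate, wing_rebound_rate=0.3):
    # When the big sits, a smaller wing (a worse rebounder) absorbs his minutes.
    m = big_man_minutes(big_shooting)
    return m * big_rebound_rate + (36 - m) * wing_rebound_rate

# A skilled big who shoots well but rebounds slightly worse, versus a pure
# Oakley-style rebounder who cannot shoot:
print(round(team_rebounds(big_shooting=0.9, big_rebound_rate=0.8), 1))  # ~27.5
print(round(team_rebounds(big_shooting=0.2, big_rebound_rate=1.0), 1))  # ~21.9
```

The team whose big can shoot ends up rebounding better even though its big is, individually, the worse rebounder, which is the counterintuitive part.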

Or is there some other, better model for why U.S. rebounding prowess has declined in recent years?

Model this upper atmospheric infrasound

A solar-powered balloon mission launched by researchers from Sandia National Laboratories carried a microphone to a region of Earth’s atmosphere found around 31 miles (50 km) above the planet called the stratosphere. This region is relatively calm and free of storms, turbulence and commercial air traffic, meaning microphones in this layer of the atmosphere can eavesdrop on the sounds of our planet, both natural and human-made.

However, the microphone in this particular study also heard strange sounds that repeat a few times per hour. Their source has yet to be identified. The sounds were recorded in the infrasound range, meaning they were at frequencies of 20 hertz (Hz) and lower, well below the range of the human ear. “There are mysterious infrasound signals that occur a few times per hour on some flights, but the source of these is completely unknown,” Daniel Bowman of Sandia National Laboratories said in a statement.

No, I don’t think it is “UFOs,” but perhaps by now it should be clear we really don’t know what is going on up there?  And it really would be good if we did.  Here is the full story.

Does natural selection favor AIs over humans? Model this!

Dan Hendrycks argues it probably favors the AIs, paper here.  He is a serious person, well known in the area, home page here, and he gives a probability of doom above 80%.

I genuinely do not understand why he sees so much force in his own paper.  I am hardly “Mr. Journal of Economic Theory,” and I have plenty of papers that you could describe as a string of verbal arguments, but here is an instance where I would find an actual model very useful.  Evolutionary biology is full of them, as is economics.  Why not apply them to the AI Darwinian process?  Why leap to such extreme conclusions in the meantime?

Here are two very simple ideas I would like to see incorporated into any model:

1. At least in the early days of AIs, humans will reproduce and recommend those AIs that please them.  Really!  We already see this with people preferring GPT-4 to GPT-3.5, the popularity of Midjourney 5, and so on.  So, at least for a while, AIs will evolve to please us.  What that means over time is perhaps unclear (maybe some of us opt for ruthless?  But do we all seek to hire ruthless employees and RAs?  I for one do not), but surely it should be incorporated into the basic model.  How much ruthlessness do we seek to inject into the agents who do our bidding?  It depends on context, and so is it the finance bots who will end the world?  Or perhaps the system will be tolerably decentralized and cooperative to a fair degree.  If you are skeptical there, OK, but isn’t that the main question you need to address?  And please do leave in the comments references to models that deploy these two assumptions.  (With the world at stake, surely you can do better than those bikers did!)

2. Humans can apply principal-agent contracts to the AI (again, at least for some while into the evolutionary process).  Keep in mind that if the AIs are risk-neutral (are they?), perhaps humans can achieve a first-best result from the AIs, just as they can with other humans; a minimal sketch of that standard result follows below.  If the AIs are risk-averse, in the final equilibrium they will shirk too much, but they still do a fair amount of work under many parameter values.  If they shirk altogether, we might stop investing in them, bringing us back to the evolutionary point.
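For what it is worth, the risk-neutral case in point 2 is a one-line textbook argument.  Here is a minimal sketch in standard moral-hazard notation; the notation and functional assumptions are mine, not anything drawn from the Hendrycks paper.  Let the AI choose effort e at convex cost c(e), let output x satisfy E[x | e] = e, and let the human offer the “sell the project” contract w(x) = x − F:

```latex
\[
\max_{e}\; \mathbb{E}\!\left[\, w(x) \mid e \,\right] - c(e)
\;=\; \max_{e}\; e - F - c(e)
\quad\Longrightarrow\quad c'(e^{*}) = 1 .
\]
```

That first-order condition is the same one that maximizes total surplus e − c(e), so a risk-neutral AI supplies first-best effort and the fixed fee F merely divides the surplus.  With a risk-averse AI, the optimal w(x) must trade insurance against incentives, effort falls below e*, and you get the “shirk too much” case described above.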

Neither of those points is the proverbial “rocket science”; rather, they are super-basic.  Yet neither plays much if any role in the Hendrycks paper.  There are some mentions of various points on, for instance, p. 17, but I don’t see a clear presentation of modeling the human choices in a decentralized process.  Page 21 does consider the decentralized incentives point a bit more, but it consists mostly of two quite anomalous examples: a dog pushing kids into the Seine in order to later save them (how often?), and “the India cobra story,” which is likely outright false.  It doesn’t offer sound anecdotal empirics, or much theoretical analysis of which kinds of assistants we will choose to invest in, again set within a decentralized process.

Dan Hendrycks, why are you so pessimistic?  Have you built such models, fleshing out these two assumptions, and simply not shown them to us?  Please show!

If the very future of the world is at stake, why not build such models?  Surely they might help us find some “outs,” but of course the initial problem has to be properly specified.

And more generally, what is your risk communication strategy here?  How secure, robust, and validated does your model have to be before you, a well-known figure in the field and Director at the Center for AI Safety, would feel justified in publicly announcing the > 80% figure?  Which model of risk communication practices (as say validated by risk communication professionals) are you following, if I may ask?

In the meantime, may I talk you down to 79% chance of doom?

Model this newsroom estimator

The New York Times’s performance review system has for years given significantly lower ratings to employees of color, an analysis by Times journalists in the NewsGuild shows.

The analysis, which relied on data provided by the company on performance ratings for all Guild-represented employees, found that in 2021, being Hispanic reduced the odds of receiving a high score by about 60 percent, and being Black cut the chances of high scores by nearly 50 percent. Asians were also less likely than white employees to get high scores.

In 2020, zero Black employees received the highest rating, while white employees accounted for more than 90 percent of the roughly 50 people who received the top score.

The disparities have been statistically significant in every year for which the company provided data, according to the journalists’ study, which was reviewed by several leading academic economists and statisticians, as well as performance evaluation experts.

…Management has denied the discrepancies in the performance ratings for nearly two years…

And from the economists:

Multiple outside experts consulted by the reporters consistently said the methodology used in the Guild’s most recent analysis was reasonable and appropriate and that the approach used by the company appeared either flawed or incomplete. Some went further, suggesting the company’s approach seemed tailor-made to avoid detecting any evidence of bias.

Rachael Meager, an economist at the London School of Economics, was blunt: “LMAO, that’s so dumb,” she wrote when Guild journalists described the company’s methodology to her. “That’s what you would do if you want to obliterate signal,” she added, using a word that in economics refers to meaningful information.

“This is so stupid as to border on negligence,” added Dr. Meager, who has published papers on evaluating statistical evidence in leading economics journals.

Peter Hull, a Brown University economist who has studied statistical techniques for detecting racial bias, also questioned the company’s approach and recommended a way to test it: running simulations in which bias was intentionally added. The company’s method repeatedly failed to detect racial disparities in those tests.
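The validation Hull describes is easy to picture.  Here is a minimal, generic sketch of such a bias-injection simulation; it is my own illustration of the general idea, not the Guild’s analysis or the company’s actual method, and the rating scale, sample size, and injected penalty are all arbitrary.

```python
# Placebo-style check: inject a known bias into simulated ratings, then see
# whether an estimator detects it.  Purely illustrative synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ratings(n=5000, injected_penalty=0.5):
    # Latent performance is identical across groups by construction; the only
    # group difference is the bias we deliberately inject.
    group = rng.integers(0, 2, n)                    # 1 = synthetic "minority" group
    latent = rng.normal(0.0, 1.0, n) - injected_penalty * group
    rating = np.clip(np.round(3 + latent), 1, 5)     # 1-5 rating scale
    return group, rating

def detects_disparity(group, rating, threshold=4):
    # Crude detector: gap in the share receiving a high rating, with a rough
    # z-test.  Returns True if the injected disparity is flagged.
    high = (rating >= threshold).astype(float)
    p1, p0 = high[group == 1].mean(), high[group == 0].mean()
    se = np.sqrt(high.var() * (1 / (group == 1).sum() + 1 / (group == 0).sum()))
    return abs(p1 - p0) / se > 1.96

group, rating = simulate_ratings()
print(detects_disparity(group, rating))  # a sound method should print True
```

A method that comes back negative on data with bias deliberately built in, as the article says the company’s method repeatedly did, is failing exactly this kind of check.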

Here is the full article, prepared by the NYT Guild Equity Committee, including Ben Casselman.  Of course we now live in a world where very few people will be surprised by this.  Where exactly does the moral authority lie here for making editorial judgments about content concerning race?

Is The Army racially egalitarian? (model this)

This paper links the universe of Army applicants between 1990 and 2011 to their federal tax records and other administrative data and uses two eligibility thresholds in the Armed Forces Qualification Test (AFQT) in a regression discontinuity design to estimate the effects of Army enlistment on earnings and related outcomes. In the 19 years following application, Army service increases average annual earnings by over $4,000 at both cutoffs. However, whether service increases long-run earnings varies significantly by race. Black servicemembers experience annual gains of $5,500 to $15,000 11–19 years after applying while White servicemembers do not experience significant changes. By providing Black servicemembers a stable and well-paying Army job and by opening doors to higher-paid postservice employment, the Army significantly closes the Black-White earnings gap in our sample.

Here is the full paper by Kyle Greenberg, et al., via the excellent Kevin Lewis.
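For readers unfamiliar with the design, here is a schematic version of what an AFQT-cutoff regression discontinuity looks like; this is generic textbook notation, not necessarily the paper’s exact specification:

```latex
\[
Y_i \;=\; \alpha \;+\; \tau\, D_i \;+\; \beta\,(A_i - c) \;+\; \gamma\, D_i\,(A_i - c) \;+\; \varepsilon_i,
\qquad D_i \;=\; \mathbf{1}\{A_i \ge c\},
\]
```

estimated only for applicants whose AFQT score A_i lies within a bandwidth of the cutoff c.  The jump τ is the effect of crossing the eligibility threshold; because crossing the threshold raises the probability of enlisting rather than forcing it, the effect of enlistment itself is recovered by scaling τ by the jump in enlistment rates at c (a fuzzy RD, or instrumental variables, step).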

Sentences to ponder (model this)

Consistent with beauty-blind admissions, alumni’s beauty is uncorrelated with the rank of the school they attended in China. In the US, White men who attended high-ranked schools are better looking, especially attendees of private schools. A one percentage point increase in beauty rank corresponds to a half-point increase in the school rank.

Here is more, via the excellent Kevin Lewis.

Model this Afghanistan policy

But in both the sanctions and the seizures, you can see an almost Kafka-esque madness in the American position. They are expending all this effort to ameliorate the consequences of a sanctions regime they are implementing. They are desperately brokering deals to preserve foreign reserves that they are freezing. When I ask why they continue to impose these policies at all, the administration says that the Taliban has American prisoners, that it is a brutal regime that murders opponents and represses women, that it has links to terrorists, and that our sanctions grant us much-needed leverage.

Here is more from Ezra Klein (NYT) on the debacle of starvation unfolding in Afghanistan.

Model this and who are the real liberals anyway?

– Fifty-nine percent (59%) of Democratic voters would favor a government policy requiring that citizens remain confined to their homes at all times, except for emergencies, if they refuse to get a COVID-19 vaccine. Such a proposal is opposed by 61% of all likely voters, including 79% of Republicans and 71% of unaffiliated voters.

– Nearly half (48%) of Democratic voters think federal and state governments should be able to fine or imprison individuals who publicly question the efficacy of the existing COVID-19 vaccines on social media, television, radio, or in online or digital publications. Only 27% of all voters – including just 14% of Republicans and 18% of unaffiliated voters – favor criminal punishment of vaccine critics.

– Forty-five percent (45%) of Democrats would favor governments requiring citizens to temporarily live in designated facilities or locations if they refuse to get a COVID-19 vaccine. Such a policy would be opposed by a strong majority (71%) of all voters, with 78% of Republicans and 64% of unaffiliated voters saying they would Strongly Oppose putting the unvaccinated in “designated facilities.”

That is from a Rasmussen poll.  You might consider Rasmussen a right-leaning institution, but these kinds of results should not be possible even in somewhat slanted polls (methodology here).  Furthermore, this poll came out on January 13, and it hasn’t exactly received a ton of attention from mainstream media; can you model that too?  Wouldn’t it be awful even if this poll were off by 2x?

One lesson is that it is not always good for your party if it is on the winning side of the culture wars.

Model this: what is wrong with physicians?

Compared to differences among their male patient counterparts, female patients randomly assigned a female doctor rather than a male doctor are 5.0% more likely to be evaluated as disabled and receive 8.5% more subsequent cash benefits on average. There is no analogous gender-match effect for male patients.

And is it the male or female physicians who are at fault here?  Or is this diagnostic differential somehow optimal?

Here is the full NBER paper by Marika Cabral and Marcus Dillender.
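As a point of clarification, a “gender-match effect” of this sort is usually read off the interaction term in a specification along these lines; the notation is schematic and mine, not necessarily the paper’s:

```latex
\[
Y_{ij} \;=\; \alpha
\;+\; \beta\,\mathrm{FemaleDoc}_j
\;+\; \gamma\,\mathrm{FemalePatient}_i
\;+\; \delta\,\bigl(\mathrm{FemaleDoc}_j \times \mathrm{FemalePatient}_i\bigr)
\;+\; \varepsilon_{ij}.
\]
```

Random assignment of doctors makes FemaleDoc_j independent of patient characteristics, so δ is the extra effect of drawing a female doctor for female patients relative to male patients (the quoted 5.0% and 8.5% figures), while the abstract’s “no analogous gender-match effect for male patients” says the corresponding comparison for male patients is essentially zero.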

Model this Apple pricing decision

Apple has one new product that’s already so back-ordered it won’t arrive in time for Christmas. It’s a polishing cloth. Priced at $19.

Unveiled in October after Apple showed off its new line of gadgets, the soft, light gray square is made of “nonabrasive material” and embossed with Apple’s logo. During tests, the rag worked like other microfiber cloths that list for less than half that price. So…why $19?

As it happens, Apple’s pricing strategy rarely allows accessories to fall below that threshold. The 6.3-inch swatch of fabric sits beside 17 other Apple-branded items on the company’s website—a mélange of charging cables, dongles and adapters—each priced at $19. Some, such as the wired earbuds and charging adapter, were once included with new iPhones.

Those $19 Apple items—together with the Apple Watch, AirPods and other small gadgets—are part of the company’s growing Wearables, Home and Accessories category, which had more than $8 billion in revenue in the quarter that ended in October.

Almost every Apple price ends in the number “9.”  Would it matter if we all carried around $30 bills?  There is further discussion in this Galvin Brown WSJ piece.

Via the excellent Samir Varma.