Results for “hanania”

Steven Pinker on existential risk

He is harsh, but my view is not far from his:

The AI-existential-threat discussions are unmoored from evolutionary biology, cognitive psychology, real AI, sociology, the history of technology and other sources of knowledge outside the theater of the imagination. I think this points to a meta-problem. The AI-ET community shares a bad epistemic habit (not to mention membership) with parts of the Rationality and EA communities, at least since they jumped the shark from preventing malaria in the developing world to seeding the galaxy with supercomputers hosting trillions of consciousnesses from uploaded connectomes. They start with a couple of assumptions, and lay out a chain of abstract reasoning, throwing in one dubious assumption after another, till they end up way beyond the land of experience or plausibility. The whole deduction exponentiates our ignorance with each link in the chain of hypotheticals, and depends on blowing off the countless messy and unanticipatable nuisances of the human and physical world. It’s an occupational hazard of belonging to a “community” that distinguishes itself by raw brainpower. OK, enough for today – hope you find some of it interesting.

That is by no means the only harsh paragraph.  Here is the entire dialogue with Richard Hanania.  And be careful what you write in the MR comments section, the AIs are reading you!

The Capacity for Moral Self-Correction in Large Language Models

We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to “morally self-correct” — to avoid producing harmful outputs — if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.

By Deep Ganguli et al. (many authors), here is the link.  Via Aran.

If you worry about AGI risk, isn’t the potential for upside here far greater, under the assumption (which I would not accept) that AI can become super-powerful?  Such an AI could create many more worlds and populate them with many more people, and so on.  Is the chance of the evil demiurge really so high?

Monday assorted links

1. India’s central bank wants an outright ban on crypto.

2. Hanania interviews Joseph Henrich.

3. Victoria Chick has passed away.

4. Peter McLaughlin on Northern Ireland and demographic change.

5. Why the right wing cannot beat ESG, good piece by Julius Krein.  In the broader world, Julius remains undervalued.

6. “My prediction: #ChatGPT will make old-school competent human secretaries more important than ever.”  Link here.  In my model of this process, the competent quality operators will undertake more projects than before.

7. Programmer salaries in an age of LLMs.  And cybercriminals starting to use GPT.

Liberal Democracy Strong

Bravo to Richard Hanania for revisiting some beliefs:

In February, I argued that Russia’s imminent successful invasion of Ukraine was a sign heralding in a new era of multipolarity. By October, I declared every challenge to liberal democracy dead and Fukuyama the prophet of our time. It’s embarrassing to have two contradictory pieces written seven months apart. But it would’ve been more embarrassing to persist in believing false things. If there’s any time to change one’s mind, it’s in the aftermath of large, historical events that went in ways you didn’t expect. Russia’s failure in Ukraine and China’s Zero Covid insanity provided extremely clear and vivid demonstrations of what democratic triumphalists have been saying about the flaws of autocracy. Nothing that the US or Europe have done – from the Iraq War to our own overly hysterical response to the coronavirus – have been in the same ballpark as these Chinese and Russian mistakes. Perhaps the war on terror comes close in terms of total destruction and lives lost, but we could afford to be stupid and it didn’t end up hurting Americans all that much.

In the land of the blind, the one-eyed man is king, but still it’s good to see liberal democracy put some points on the scoreboard.

What Do Think Tanks Think?

From Richard Hanania and Max Abrahms:

Through the use of survey methods, the study presents the first systematic comparison of America-based international relations professors to think tank employees (TTEs) in terms of their preferred conduct of the United States in international affairs. The difference between the two groups in their support for military intervention is stark. TTEs are 0.47 standard deviations more hawkish than professors based on a standard measure of militant internationalism (MI). Controlling for self-described ideology mitigates this effect although it remains statistically significant. Beyond quantifying their relative foreign policy preferences, this study helps to resolve why TTEs tend to assume more hawkish policies. The authors find evidence that hawkishness is associated with proximity to power. Professors who have worked for the federal government score higher on MI, as do TTEs based at institutions located closer to Capitol Hill. In general, the results point to a self-selection mechanism whereby those who favor interventionist policies are more likely to pursue positions to increase their policy influence, perhaps because they know that powerful institutions are more likely to hire hawks. Alternative explanations for differences, such as levels or kinds of foreign policy expertise, have weaker empirical support.

It remains remarkable to me how few people even ask such questions.  Both think tanks and foreign policy opinion remain critically understudied, at least in the appropriate serious ways and involving considerations of “public choice” (there is of course a massive dull literature on foreign policy opinion…if you think I am missing some massive literature that you know all about I suspect you do not grok what I actually am asking for).
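The “0.47 standard deviations more hawkish” figure in the abstract is a standardized mean difference (Cohen’s d): the gap between the two groups’ average militant-internationalism scores, divided by their pooled standard deviation. A minimal sketch of how such a number is computed, using made-up MI scores (the study’s actual survey data are not reproduced here):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled sample SD."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance, n-1 denominator
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical militant-internationalism (MI) scores, higher = more hawkish
tte_scores = [3.1, 3.4, 2.9, 3.6, 3.2, 3.5]   # think tank employees
prof_scores = [2.7, 2.9, 3.0, 2.6, 3.1, 2.8]  # IR professors
d = cohens_d(tte_scores, prof_scores)  # positive d means TTEs are more hawkish
```

A d of 0.47 is conventionally read as a small-to-medium effect; the point of standardizing is that the comparison does not depend on the arbitrary units of the MI survey scale.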

And here is the Richard Hanania 2022 update.

Who are the odious figures?

Tyler: But even that phrase, odious figures, I’m made uncomfortable by. Like okay, you can cite Hitler. Hitler’s odious. I think we make ourselves stupider. I like to ask this question: does this person favor price controls on prescription drugs? Well, they might, they might not. To me, that’s a terrible view that will kill many thousands of people, maybe more. But I don’t think of those people as odious. I think, they’re wrong about something. If someone’s called odious, I’ll just ask myself, well, are their views worse than the views of someone who wants price controls on prescription drugs? [laughter] Like, who’s odious? I know there’s the Hitler thing, and Godwin’s law, but we have got to mostly move past that, and just focus on the ideas and being more analytical.

That is from my podcast dialogue with Richard Hanania from a few weeks ago.  Don’t forget that Columbia researcher Frank Lichtenberg calculates the average pharmaceutical expenditure per life-year saved (globally) at $2,837.  Let’s spend more on this one!  And in multiple ways.  In the meantime, don’t be so obsessed with what other people think and write — focus on the issues themselves.

Tuesday assorted links

1. Motorcycles and ferries are dangerous, in that order.

2. More on civil defense in Taiwan.

3. The great Dervla Murphy has passed away.

4. Last payphone in NYC to be removed.

5. Emily Oster makes Time 100 most influential people list.

6. To what extent is tritium a limit on nuclear fusion?

7. RH being provocative (in some regards I am the opposite of his approach as outlined here).

8. Profile of Anita Summers.  And in praise of Steph Curry.

9. New, skeptical study on Long Covid.  And Derek Lowe with more on that.

10. Not laser-induced plasma, sorry people.

Monday assorted links

1. Interesting thread on Russian army considerations.

2. “We accept gender double standards, and tolerate more aggression towards men than we do towards women. We also tolerate more hyper-emotionalism from women than men.”  And “The strength of any anti-wokeness movement depends in large part on the strengths of its antibodies to a certain kind of female emotionalism.”  Link here.

3. Substack of TheZvi.

4. Multicultural London English (The New Yorker).

5. Dickens decoded, by a team of amateurs.

6. “Many of Tyler’s trademark ideas seem strange or trivial on the surface. Naturally, this makes it easy to mock them on the internet.”

7. Zeihan on Canada.

Who will fight?

Even setting aside the geography of the country, there is no instance I’m aware of in which a country or region with a total fertility rate below replacement has fought a serious insurgency. Once you’re the kind of people who can’t inconvenience yourselves enough to have kids, you are not going to risk your lives for a political ideal. When the US invaded Afghanistan and Iraq, their total fertility rates were 7.4 and 4.7, respectively…Ukraine is at 1.2. We see numbers like this and don’t stop to appreciate the wide chasm that separates the spiritual lives of nations where the average person has 1 kid from those with 3 or more, much less 6 or 7, each. On fertility, Russia isn’t that much better than Ukraine, but it’s got the tanks and a powerful air force, and the side that wants to fight a guerrilla war has to be the one that is willing to take a much larger number of casualties. There’s a consistent pattern of history where there’s a connection between making life and being willing to sacrifice it. This, by the way, is also why Hong Kong was easily pacified when China started clamping down, and why Taiwan will fold and not fight an insurgency if it ever comes down to it.

That is by Richard Hanania, via Zach Valenta.