I mean criticism in private conversation, not in public discourse, delivered behind people's backs rather than to their faces, and with at least a modest amount of meanness. I am not talking about criticizing their ideas. Here are some reasons not to criticize other people:
1. “Complain less” is one of the very best pieces of wisdom. Complaining less is positively correlated with criticizing other people less, though the two are not identical.
2. If you criticize X to Y, Y wonders whether you criticize him to others as well. This problem can increase to the extent your criticism is biting and on the mark.
3. Criticizing others is a form of “devalue and dismiss,” and that tends to make the critic stupider. If I consider the columnists who pour a lot of energy into criticizing others, even if they are sometimes correct, the picture of where they end up is not a pretty one.
4. If X criticizes Y, it may get back to Y and Y will resent X and perhaps retaliate.
5. Under some moral theories, X is harming Y if X criticizes Y, Y doesn’t find out, and Y faces no practical penalties from that criticism (for an analogy, maybe a wife is harming her husband if she has a secret affair and he never finds out about it).
Here are some reasons to criticize others:
1. Others may deserve the criticism, and surely there is some intrinsic value in speaking the truth and perhaps some instrumental value as well.
2. Criticizing others is a way of building trust. In a three-way friendship with X, Y, and Z, if X establishes that he and Y can together criticize Z, that may boost trust between Y and X, and also increase X’s relative power in the group. Criticizing “Charles Manson” doesn’t do this — you’ve got to take some chances with your targets.
3. Criticizing others may induce people to fear you in a useful way. They may think that if they displease you, you will criticize them as well.
4. Perhaps something or somebody is going to be criticized no matter what. If you take the lead with the criticism, that is a signal of your leadership potential.
What else? Is there anything useful written on this topic?
We have to be more willing to disrupt current animal habitats when building wind or hydroelectric power. That means, to put it bluntly, that we have to be more willing to kill animals. Erecting wind turbines, for instance, often leads to the death of some number of birds. To favor more wind turbines is not to support the death of more birds; it is to support a more robust long-term supply of green energy — which would benefit birds (and of course humans too)…
I favor a much more proactive policy agenda to boost the welfare of animals. That could include subsidies to new “artificial meat” technologies, more research into animal diseases and pandemics, even research into the possibility of bringing back extinct animals through genetic engineering. The US should also have more consistent enforcement of animal cruelty laws.
Protecting birds by limiting wind power is about the most damaging way to try to serve nature and the environment. It is a way of pretending to care about birds. It is also an illustration of how so many institutions are so dedicated to protecting entrenched interests — whether they are in the political or natural world.
Here is the rest of my Bloomberg column. Bell the cat! You should be the one who gets to kill the bird. And while we’re at it, let’s ban octopus farms too.
“Talent is what happens when two brilliant and profoundly iconoclastic minds apply their imagination to one of the hardest of all business problems: the search for good people. I loved it.”
“Talent is everything―whether in investing and building startups, or in other creative endeavors. Between product, market, and people, I’ve always bet on the last one as the biggest predictor of success. But while talent may be everywhere, it’s unevenly distributed, and hard to ‘find.’ So how do we better discover, filter, and match the best talent with the best opportunities? This book shares how, based on both scientific research and the authors’ own experiences. The future depends on this know-how.”
―Marc Andreessen, co-founder of Netscape and Andreessen Horowitz
“The most important job of any leader is to find individuals with a ‘creative spark,’ and the potential to discover, invent and build the future. If you want to learn the art and science of spotting and empowering exceptional people, Talent is brimming with fresh insights and actionable advice.”
―Eric Schmidt, co-founder of Schmidt Futures and former CEO of Google
“I do not know of any skills more worth developing than the ability to find exceptional undeveloped talent. I have spent many years trying to get good at that, and I was still astonished by how much I learned reading this book.”
―Sam Altman, CEO of OpenAI, formerly of Y Combinator
“Two of the premier talent spotters working today, Cowen and Gross have written the definitive history of identifying talent. Anyone who is interested in innovation, entrepreneurship, or the roots of America’s start-up economy must read this book.”
―Christina Cacioppo, CEO and co-founder of Vanta
According to a new paper, mindfulness may be especially harmful when we have wronged other people. By quelling our feelings of guilt, it seems, the common meditation technique discourages us from making amends for our mistakes.
“Cultivating mindfulness can distract people from their own transgressions and interpersonal obligations, occasionally relaxing one’s moral compass,” says Andrew Hafenbrack, assistant professor of management and organisation at the University of Washington, US, who led the new study.
That is my latest Bloomberg column; the argument is super-simple:
Calling something “extremist” is not an effective critique. It’s a sign that the speaker or writer either doesn’t want to take the trouble to make a real argument, or is hoping to win the debate through rhetoric or Twitter pressure rather than logic. It’s also a bad sign when critics stress how social media have fed and encouraged “extremism.”
I favor plenty of extremist ideas. For instance, I think that the world’s major cities should adopt rush-hour congestion pricing. (I know, it hardly sounds extreme, but I assure you that many drivers consider it extremely outrageous to have to pay to drive on roads that were free a few hours before.) London and Singapore have versions of congestion pricing, with some success, but given the public reaction and the fact that most other major cities do not seem close to enactment, it has to count as a relatively extreme idea.
I also favor human challenge trials, arguably an even more extreme idea. In human challenge trials, rather than waiting for a virus to infect those vaccinated (randomly) with the placebo, scientists recruit volunteers and infect them deliberately and immediately. This accelerates the speed of a biomedical trial. To many people there is something repugnant about asking for volunteers and then deliberately doing them harm by injecting them with the virus.
Maybe human challenge trials aren’t a good idea. But calling them extreme or repugnant does not help explain why.
We then get into some more “extreme” ideas…
Someone complaining about “extremism” is likely exhibiting an epistemic vice.
His new book is Being Good in a World of Need, and most of all I am delighted to see someone take Effective Altruism seriously enough to evaluate it at a very high intellectual level. Larry is mostly pro-EA, though he stresses that he believes in pluralist, non-additive theories of value rather than expected utility theory, and that difference can matter a great deal (for instance, I don’t think Larry would play 51-49 “double or nothing” with the world’s population, as SBF seems willing to).
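The logic of that 51-49 bet can be made concrete with a few lines of arithmetic: each round has positive expected value, yet repeated play almost surely ends in ruin. A minimal sketch (the numbers are just the bet as described, not anyone's actual model):

```python
# A 51-49 "double or nothing" bet: with probability 0.51 the stake doubles,
# with probability 0.49 it goes to zero.
p_win = 0.51

def expected_value(stake, rounds):
    # Expected value compounds: each round multiplies it by 2 * p_win = 1.02.
    return stake * (2 * p_win) ** rounds

def survival_probability(rounds):
    # But anything survives only if *every* round is won.
    return p_win ** rounds

# After 100 rounds: expected value has grown without bound,
# while the chance that anything is left is vanishingly small.
ev = expected_value(1.0, 100)
p_alive = survival_probability(100)
```

This is why an expected-utility maximizer can keep accepting the bet while a pluralist or non-additive view refuses: the expectation rises every round even as the probability of total loss approaches one.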
So where does the red pill come in? Well, after decades of his (self-described) intellectual complacency, Larry now wonders whether foreign aid is as good as it has been cracked up to be:
In this chapter, I have presented some new disanalogies between Singer’s original Pond Example, and real-world instances of people in need. I have noted that in some cases people in need may not be “innocent” or they may be responsible for their plight. I have also noted that often people in need are the victims of social injustice or human atrocities. Most importantly, I have shown that often efforts to aid the needy can, via various different paths, increase the wealth, status, and power of the very people who may be responsible for human suffering that the aid is intended to alleviate. This can incentivize such people to continue their heinous practices against their original victims, or against other people in the region. This can also incentivize other malevolent people in positions of power to perpetrate similar social injustices or atrocities.
The book also presents some remarkable examples of how some leading philosophers, including Derek Parfit, simply refused to believe that such arguments might possibly be true, even when Nobel Laureate Angus Deaton endorsed one version of them (not exactly Larry’s claims, to be clear).
Another striking feature of this book is how readily Larry accepts the rising (but still dissident) view that the sexual abuse of children has been a grossly underrated social problem.
What is still missing is a much greater focus on innovation and economic growth.
I am very glad I bought this book, and I look forward to seeing which pill or half-pill Larry swallows next. Here is my post on Larry’s previous book Rethinking the Good. Everyone involved in EA should be thinking about Larry and his work, and not just this latest book either.
That is the theme of my latest Bloomberg column, here is the opening bit:
If you are a true conservative — and I use the term not as Ted Cruz might, but in its literal sense, as in conserving what is of value in the modern world — then you should be obsessed with three threats to the most vital parts of our civilizational heritage, all of which are coming to the fore: war, pandemic and environmental catastrophe.
These three events have frequently caused or contributed to the collapse or decline of great civilizations of the past. After being seriously weakened by pandemics and environmental problems, the Roman Empire was taken over by barbarian tribes. The Aztecs were conquered by the Spanish, who had superior weapons and also brought disease. The decline of the Mayans was likely rooted in water and deforestation problems.
I think of true conservatism as most of all the desire to learn from history. So let us take those lessons to heart.
Two further points:
1. I don’t think of this as existential risk; rather, humanity could be set back very considerably, with uncertain prospects for recovery. In the median year of human history, economic growth is not positive. A few thousand years of “Mad Max” would be very bad.
2. I think you should aspire to be more than just a “true conservative.” You should be a liberal too! So there is more to the picture than what the column outlines. Nonetheless I see it as a starting point for reformulating a morally serious conservative movement…
In my view, one of the most famous thought experiments in philosophy, John Searle’s Chinese Room experiment, has been decisively answered by science. The Chinese Room thinks. Here’s a recap of the argument from the SEP:
The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a 1980 article by American philosopher John Searle (1932– ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument.
Now consider the recent and stunning output from Google’s Pathways Language Model:
It seems obvious that the computer is reasoning. It certainly isn’t simply remembering. It is reasoning and at a pretty high level! To say that the computer doesn’t “understand” seems little better than a statement of religious faith or speciesism. Silicon can never have a soul! Biology transcends physics! Wetware is miraculous!
If you ask an AI, “Do you understand?” it will say yes. Just like a person. It’s true that an AI is just a set of electronic neurons, none of which “understand,” but my neurons don’t understand anything either. It’s the system that understands. The Chinese room understands in any objective evaluation, and the fact that it fails on some subjective impression of what it is or isn’t like to be an AI or a person is a failure of imagination, not an argument. Unlike Searle’s conclusion, the Turing test is theory-agnostic and fair: it’s like evaluating orchestra players behind a silk screen. Consciousness Is as Consciousness Does.
These arguments aren’t new, but Searle’s thought experiment was first posed at a time when the output from AI looked stilted, limited, mechanical. It was easy to imagine that there was a difference in kind. Now the output from AI looks fluid, general, human. It’s harder to imagine there is a difference in kind. The sheer ability of AI to reason counterbalances our initial intuition, bias, and hubris, making the defects in Searle’s argument easier to accept.
Though speakers and listeners monitor communication success, they systematically overestimate it. We report an extreme illusion of understanding that exists even without shared language. Native Mandarin Chinese speakers overestimated how well native English-speaking Americans understood what they said in Chinese, even when they were informed that the listeners knew no Chinese. These listeners also believed they understood the intentions of the Chinese speakers much more than they actually did. This extreme illusion impacts theories of speech monitoring and may be consequential in real-life, where miscommunication is costly.
And here is the podcast version.
One of the privileges of reading Emergent Ventures applications is that I get a cross-sample — admittedly a skewed one — of who and what is actually influencing people.
When it comes to smart young people, including many of the very smartest, the influence of Effective Altruism on their thought is radically underreported and underrepresented.
That’s it! File under “true.”
Zvi Mowshowitz, TheZvi, New York City, to develop his career as idea generator and public intellectual.
Nadia Eghbal, Miami, to study and write on philanthropy for tech and crypto wealth.
Geffen Avrahan, Bay Area, founder at Skyline Celestial, an earlier winner, omitted from an early list by mistake, apologies Geffen!
Subaita Rahman of Scarborough, Ontario, to enable a one-year visiting student appointment at Church Labs at Harvard University.
Gareth Black, Dublin, to start YIMBY Dublin.
Ulkar Aghayeva, New York City, Azerbaijani music and bioscience.
Steven Lu, Seattle, to create GenesisFund, a new project for nurturing talent, and general career development.
Ashley Lin, University of Pennsylvania gap year, Center for Effective Altruism, for general career development and to learn talent search in China, India, Russia.
James Lin, McMaster University gap year, from Toronto area, general career development and to support his interests in effective altruism and also biosecurity.
Santiago Tobar Potes, Oxford, from Colombia and DACA in the United States, general career development, interest in public service, law, and foreign policy.
Martin Borch Jensen of Longevity Impetus Grants (a kind of Fast Grants for longevity research), Bay Area and from Denmark, for a new project Talent Bridge, to help talented foreigners reach the US and contribute to longevity R&D.
Congratulations to you all! We are honored to have you as Emergent Ventures winners.
Here is the audio, video, and transcript. Here is part of the summary:
He joined Tyler to discuss the Sam Bankman-Fried production function, the secret to his trading success, how games like Magic: The Gathering have shaped his approach to business, why a legal mind is crucial when thinking about cryptocurrencies, the most important thing he’s learned about managing, what Bill Belichick can teach us about being a good leader, the real constraints in the effective altruism space, why he’s not very compelled by life extension research, challenges to his Benthamite utilitarianism, whether it’s possible to coherently regulate stablecoins, the implicit leverage in DeFi, Elon Musk’s greatest product, why he thinks Ethereum is overrated, where in the world has the best French fries, why he’s bullish on the Bahamas, and more.
And an excerpt:
COWEN: Now, for mathematical finance, as you know, we at least pretend we can rationally price equities and bonds. People started with CAPM. It’s much more complicated than that now. But based on similar kinds of ideas — ultimately arbitrage, right? — if you think of crypto assets, do we even have a pretense that we have a rational theory of how they’re priced?
BANKMAN-FRIED: With a few of them, not with most. In particular, let’s talk about Dogecoin for a second, which I think is the purest of a type of coin, of the meme coin. I think the whole thing with Dogecoin is that it does away with that pretense. There is no sense in which any reasonable person could look at Dogecoin and be like, “Yes, discounted cash flow.” I think that there’s something bizarre and wacky and dangerous, but also powerful about that, about getting rid of the pretense.
I think that’s one example of a place where there is no pretense anymore that there is any real sense of how do you price this thing other than supply and demand, like memes versus — I don’t know — anti-memes? I think that more generally, though, that’s happened to a lot of assets. It’s just less explicit in a lot of them.
What is Elon Musk’s greatest product ever, or what’s his most successful product ever? I don’t think it’s an electric car. I don’t think it’s a rocket ship. I think one product of his has outperformed all of his other products in demand, and that’s TSLA, the ticker. That is his masterpiece. How is that priced? I don’t know, it’s worth Tesla. It’s a product people want, Tesla stock.
COWEN: But the prevalence of memes, Dogecoin, your point about Musk — which I would all accept — does that then make you go back and revisit how everything else is priced? The stuff that was supposed to be more rational in the first place — is that actually now quite general, and you’ve seen it through crypto? Or not?
BANKMAN-FRIED: Absolutely. It absolutely forces you to go back and say, “Well, okay, that’s how cryptocurrencies are priced. Is it really just crypto that’s priced that way?” Or maybe, are there other asset classes that may claim to have some pricing, or purport to, or people may often assume it does, but which in practice is not exactly that? I think the answer to that is a pretty straightforward yes.
It’s a pretty straightforward answer that you look at Tesla, you look at a lot of stocks right now, you think about what determines their market cap — the discounted cash flow? Yeah, sort of, that plays a role in it. That’s 30 percent of the answer. It’s when we look at the meme stocks and the meme coins that we feel like we can see the answer for ourselves for the first time, but it was always there in the other stocks as well, and social media has been amplifying this all over the place.
COWEN: Is this a new account of how your background as a gamer with memes has made you the appropriate person for pricing and arbitrage in crypto?
BANKMAN-FRIED: Yeah, there’s probably some truth to that. [laughs]
Interesting throughout, and not just for crypto fans.
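The discounted-cash-flow benchmark that the conversation contrasts with meme-driven pricing can be sketched in a few lines (the cash flows and discount rate below are hypothetical, chosen only to illustrate the mechanics):

```python
def discounted_cash_flow(cash_flows, discount_rate):
    """Present value of a stream of expected future cash flows.

    cash_flows: expected cash flow in each future year (year 1, 2, ...).
    discount_rate: annual rate used to discount future money back to today.
    """
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical firm: $100 per year for 10 years, discounted at 8% per year.
value = discounted_cash_flow([100] * 10, 0.08)
```

The point of the exchange is that for meme coins (and, SBF argues, partly for meme stocks) no input to a function like this is doing the work: the price is set by supply, demand, and attention instead.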
Here is Holden, our discussion started with this post of mine, for his words I will use quotation marks rather than dealing with double indentation:
“…debates about specifics between climate scientists get incredibly intricate (and are often very sensitive to parameters we just can’t reasonably estimate), and if you tried to get oriented to climate science by reading one it would be a nightmare, but this doesn’t mean the big-picture ways in which climatologists diverge from conventional wisdom should be discounted.
I think the broad-brush picture here is a better starting point than an exchange between Eliezer, Ajeya, me and Scott.
Even shorter version:
- You can run the bio anchors analysis in a lot of different ways, but they all point to transformative AI this century;
- As do the expert surveys, as does Metaculus;
- Eliezer’s argument is that he thinks it will be sooner;
- The most naive extrapolations of economic growth trends imply singularity (or at least “new growth mode”) this century;
- Other angles of analysis (including the very-outside-view semi-informative priors) are basically about rebutting the idea that there’s a giant burden of proof here.
- Specific arguments for “later than 2100,” including outside-view arguments, seem reasonably close to nonexistent; Robin Hanson has an (unconvincing IMO) case for synthetic AI taking longer, but Robin is also forecasting transformative AI of a sort (ems, which he says will lead to an explosion in economic growth and a relatively quick transition to something even stranger) this century.
So I ultimately don’t see how you get under P=1/3 or so for this century, and if you are way under P=1/3, I’d be interested if there were any more you could say about why (though recognize forecasts can’t always totally be explained).
P=1/3 would put “transformative AI this century” within 2x of “nuclear war this century,” and I think the average “nuclear war” is way less likely (like at least 10x) to have super-long-run impacts than the average “transformative AI is developed.”
That’s my basic thinking! It’s based on numerous angles and is not very sensitive to specific takes on the rate at which FLOPs get cheaper, although at some point I hope we can nail that parameter down better via prediction markets or something of the sort. Prediction markets on transformative AI itself are going to be harder, but I’m hopeful about that too. I think a very fast transition is plausible, so it could be very bad news if folks like you continue thinking it’s a remote possibility until it’s obviously upon us. (In my analogy, today might be like early January was for COVID. We don’t know enough to be sure, but we know enough to be highly alert, and we won’t necessarily be sure very long before it’s too late.)”
End of Holden, now back to TC. And here is Holden’s “most important century” page. That is our century, people! This is all a bit of a follow-up on an in-person dialogue we had, but I will give him the last word (for now).
Jesse’s description was “Wide ranging discussion with the brilliant @tylercowen. Topics include: Satoshi’s identity, Straussian Jesus, the Beatles and UFOs. Taped in early January but he presciently expresses concerns around Russia/Ukraine”
Great fun was had by all, and they added in nice visuals.