Category: Web/Tech
My excellent Conversation with Paul Bloom
Here is the audio, video, and transcript. Here is part of the episode summary:
Together Paul and Tyler explore whether psychologists understand day-to-day human behavior any better than normal folk, how babies can tell if you’re a jerk, at what age children have the capacity to believe in God, why the trend in religion is toward monotheism, the morality of getting paid to strangle cats, whether disgust should be built into LLMs, the possibilities of AI therapists, the best test for a theory of mind, why people overestimate Paul’s (and Tyler’s) intelligence, why flattery is undersupplied, why we should train flattery and tax empathy, Carl Jung, Big Five personality theory, Principles of Psychology by William James, the social psychology of the Hebrew Bible, his most successful unusual work habit, what he’ll work on next, and more.
And here is one excerpt:
COWEN: I have some questions about intelligence for you. If we think of large language models, should we let them feel disgust so that they avoid left-wing bias?
BLOOM: [laughs] Why would disgust make them avoid left-wing bias?
COWEN: Maybe we’re not sure it would, but there are various claims in the literature that for people on the right, disgust is a more fundamental emotion, and that a greater capacity to feel disgust encourages people in some ways to be more socially conservative. Debatable, but I don’t think it’s a crazy view. So, if you build LLMs, and you give them, say, a lot of empathy and not much or any disgust, you’re going to get left-leaning LLMs, which you might say, “Well, that was my goal.” But obviously, not everyone will accept that conclusion either.
BLOOM: I wouldn’t want woke LLMs. I think there’s a lot in extreme —
COWEN: You’ve got them, of course.
BLOOM: I’ve got them. I think Gemini is the one, if I wanted to go — the woke LLM of choice. Because I think the doctrine called wokeness leads to a lot of moral problems and makes the world worse in certain ways, but I wouldn’t mind left-wing LLMs.
In fact, I’m not a fan of disgust. You’re right that disgust is often associated with right-wing, but in the very worst instantiation of it. Disgust is what drives hatred towards gay people. It involves hatred of interracial marriage, the exclusion of immigrants, the exclusion of other races. If there’s one emotion I would take away from people, it would be disgust, at least disgust in the moral realm. They could keep their disgust towards rotten food and that sort of thing. That’s the one thing I wouldn’t put into LLMs. I’d rather put anger, pity, gratitude. Disgust is the one thing I’d keep away.
COWEN: So, you wouldn’t just cut back on it at the margin. You would just take disgust out of people if you could?
And:
COWEN: I think at the margin, I’ve moved against empathy more being a podcast host, that I’ll ask a question —
BLOOM: Wait. Why being a podcast host?
COWEN: Well, I’ll ask a question, and a lot of guests think it’s high status simply to signal empathy rather than giving a substantive answer. The signaling-empathy answers I find quite uninteresting, and I think a lot of my listeners do, too. Yet people will just keep on doing this, and I get frustrated. Then I think, “Well, Tyler, you should turn a bit more against empathy for this reason.” And I think that’s correct.
Paul cannot be blamed for doing that, however. So substantive, interesting, and entertaining throughout.
Arena magazine
Introducing: Arena Magazine. It’s a print & digital magazine about tech, capitalism, and the USA – that’s actually on the side of tech, capitalism, and the USA! Subscribe: arenamag.com/join
Here is a link and a thread.
From Google DeepMind (it’s happening)
We’re presenting the first AI to solve International Mathematical Olympiad problems at a silver medalist level. It combines AlphaProof, a new breakthrough model for formal reasoning, and AlphaGeometry 2, an improved version of our previous system.
Here is further information.
From the NYT three days ago: “A.I. Can Write Poetry, but It Struggles With Math.” From the NYT today: “Move Over, Mathematicians, Here Comes AlphaProof.” And here is one opinion: “This type of A.I. learns by itself and can scale indefinitely,” said Dr. Silver, who is Google DeepMind’s vice-president of reinforcement learning. Okie-dokie!
Who uses ChatGPT?
By Anders Humlum and Emilie Vestergaard:
We study the adoption of ChatGPT, the icon of Generative AI, using a large-scale survey experiment linked to comprehensive register data in Denmark. Surveying 100,000 workers from 11 exposed occupations, we document that ChatGPT is widespread, but substantial inequalities have emerged. Women are 20 percentage points less likely to have used the tool. Furthermore, despite its potential to lift workers with less expertise, users of ChatGPT earned more already before its arrival. Workers see a substantial productivity potential in ChatGPT but are often hindered by employer restrictions and the need for training. Informing workers about expert assessments of ChatGPT shifts workers’ beliefs but has limited impacts on actual adoption.
Here is the link to the full paper.
Mark Zuckerberg on open source AI
Introducing Llama 3.1
Read about it here, use it here. Here is some video. Bravo!
*Why Machines Learn: The Elegant Math Behind Modern AI*
By Anil Ananthaswamy, an excellent book; it will make the end-of-year best non-fiction list. It focuses on machine learning and its offshoots, and you can read it for the story even if you don’t follow all of the matrix algebra and the calculus. It is also the best book I know on how science advances by laying different “bricks,” and later bringing them all together toward a practicable solution. Recommended.
Will technology improve animal welfare?
That is the topic of my latest Bloomberg column, here is one excerpt:
There is, however, some better news on the animal welfare front. The cause is on the verge of some major victories — and they have been earned through technology rather than rhetoric.
The first major development is Ozempic and the other weight-loss drugs in the GLP-1 category. By one estimate, 25,000 Americans start taking these weight-loss drugs every week, and 93 million Americans may meet the criteria for using them. The spread of such drugs to many other countries is likely, especially since they seem to produce health gains above and beyond weight loss.
The logic is simple: People lose weight on these drugs because they eat less, and eating less usually means eating less meat. And less meat consumption results in less factory farming.
This should count as a major victory for animal welfare advocates, even though it did not come about through their efforts. No one had to be converted to vegetarianism, and since these drugs offer other benefits, this change in the equilibrium is self-sustaining and likely to grow considerably. Yes, it is only a partial victory, but total victory was unlikely anyway.
And this:
There is yet a third reason for animal welfare advocates to be optimistic. It is more speculative, but now seems less crazy than it used to: Super-powered AI could help us observe and learn animal languages, thus enabling humans to converse with at least some of the smarter (or at least more articulate?) animals. There is already a project at UC-Berkeley to converse with sperm whales by decoding their language and translating it into English, using techniques drawn from large language models.
If we could talk with animals — and hear their complaints and descriptions of their own suffering — would we be less likely to eat them and treat them badly? How would we respond to the pleas of dolphins to stop using our nets to catch tuna, a process which kills many dolphins?
There is some chance this strategy could backfire; dolphins, for instance, may not be as charming as people think. Nonetheless, it holds at least some chance of a revolution in how we humans think of our relations with the rest of the animal kingdom.
Do you think there are any animals we could talk into vegetarianism, if only for marginal changes? If not, why be so optimistic that humans will change? Or maybe underneath it all, you do think that humans are somewhat special?
Competition
way back in 2022, the best model in the world was text-davinci-003. it was much, much worse than this new model. it cost 100x more.
That is from Sam Altman. And here is a good picture.
Andrej Karpathy is founding Eureka Labs, an AI education company
Here is the tweet, which connects to various links of relevance.
Website: eurekalabs.ai GitHub: github.com/EurekaLabsAI 𝕏: @EurekaLabsAI
Managing AI Employees
Lattice is a platform for managing employees. I believe this announcement from the Lattice CEO is real:
Today, Lattice made history: We became the first company to give digital workers official employee records in Lattice. This marks a huge moment in the evolution of AI technology and of the workplace. It takes the idea of an “AI employee” from concept to reality — and marks the start of a new journey for Lattice, to lead organizations forward in the responsible hiring of digital workers. Within Lattice’s people platform, AI employees will be securely onboarded, trained, and assigned goals, performance metrics, appropriate systems access, and even a manager. We know this process will raise a lot of questions and we don’t yet have all the answers, but we want to help our customers find them. So we will be breaking ground, bending minds, and leaning into learning. And we want to bring everyone along on this journey with us.
On the one hand, this is wild. On the other hand, it makes perfect sense to use our current system of managing employees (performance reviews, training, feedback, yearly bonuses, and so on) to manage AIs. In essence, human management systems become the interface for AI workers.
Every team will soon include AI workers. At first, this will be the AI stenographer who keeps the minutes, the admin who reminds everyone of their tasks, and the PowerPoint designer who puts together presentations, but this will quickly evolve, and some AIs will naturally take on supervisory roles, until “my boss is an AI” will not seem strange.
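To make the “HR system as interface” idea concrete, here is a minimal, hypothetical sketch in Python of a digital worker carried in the same record type used for humans. The field names and the example worker are illustrative assumptions, not Lattice’s actual schema:

```python
# Hypothetical sketch: an AI worker stored in the same employee record an HR
# platform uses for humans. Field names are illustrative, not Lattice's schema.
from dataclasses import dataclass, field

@dataclass
class EmployeeRecord:
    name: str
    manager: str                      # AI workers get a (human) manager too
    role: str
    goals: list[str] = field(default_factory=list)
    systems_access: list[str] = field(default_factory=list)
    performance_reviews: list[str] = field(default_factory=list)
    is_digital_worker: bool = False   # the only field distinguishing an AI

# An AI sales assistant managed through the same lifecycle as a human hire:
ai_sdr = EmployeeRecord(
    name="AI-SDR-1",
    manager="Head of Revenue Operations",
    role="Sales development representative",
    goals=["Qualify 200 inbound leads per month"],
    systems_access=["CRM (read/write)", "Shared email inbox"],
    is_digital_worker=True,
)
```

The point of the sketch is that onboarding, goals, systems access, and reviews need no new machinery; the existing people-management schema already expresses them.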
What should I ask Tobi Lütke?
Yes, I will be doing a Conversation with him. Here is Wikipedia:
Tobias “Tobi” Lütke (born 1981), MSC, is a German-Canadian entrepreneur who is the co-founder and CEO of Shopify, an e-commerce company based in Ottawa, Ontario, Canada. He has been part of the core team of the Ruby on Rails framework and has created open source libraries such as Active Merchant.
He was born in Koblenz, and he is on the board of directors of Coinbase. So what should I ask him?
AI teaching assistants?
“Morehouse College is planning to use AI teaching assistants to help crack the code of education.
Why it matters: Morehouse professor Muhsinah Morris says every professor will have an AI assistant in three to five years.
- Morris says the technology has taken off in the last 24 months faster than it has in the last 24 years. Meanwhile, baby boomers are leaving the workforce amid national teacher shortages and burnout.
How it works: Morehouse professors will collaborate with technology partner VictoryXR to create virtual 3D spatial avatars. The avatars use OpenAI to have two-way oral conversations with students.
- The avatars use professor-created content and 3D models for lessons, such as molecules for chemistry lessons, to help students.
- Avatars can also respond to unrelated topics raised by students and redirect the students back to the lesson.
- Students will have 24/7 online access to the avatar, which can communicate in a student’s native language.
What they’re saying: Morris called the avatars the world’s first spatial AI teaching assistants.”
That is all from Axios. Via Anecdotal.
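VictoryXR has not published its pipeline, but the behavior Axios describes (answers grounded in professor-created content, with off-topic questions redirected back to the lesson) maps onto a single system prompt wrapped around a chat-completion call. Here is a minimal sketch using the OpenAI Python SDK; the lesson text, the model choice, the prompt wording, and the `ask_avatar` helper are all my assumptions, not the actual Morehouse setup:

```python
# Minimal sketch of the described avatar behavior: ground answers in
# professor-created lesson content and steer off-topic questions back to it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LESSON = """Topic: molecular geometry.
Key points: VSEPR theory, bond angles, water is bent at ~104.5 degrees."""

SYSTEM_PROMPT = (
    "You are a teaching assistant avatar for a chemistry professor. "
    "Answer only from the lesson content below. If the student raises an "
    "unrelated topic, acknowledge it briefly and redirect to the lesson. "
    "Reply in the language the student uses.\n\n"
    f"Lesson content:\n{LESSON}"
)

def ask_avatar(student_message: str, history: list[dict]) -> str:
    """One conversational turn; `history` carries prior student/avatar turns."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *history,
                {"role": "user", "content": student_message}]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = resp.choices[0].message.content
    history += [{"role": "user", "content": student_message},
                {"role": "assistant", "content": reply}]
    return reply

history: list[dict] = []
print(ask_avatar("Why is the water molecule bent?", history))
print(ask_avatar("Who won the game last night?", history))  # should redirect
```

The speech layer (two-way oral conversation in a 3D avatar) would sit on top of a loop like this, with speech-to-text in front and text-to-speech behind.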
AI trends relative to AI predictions
In 2021, researchers were asked how smart AI would be in 2022. They predicted a 12% pass rate on the MATH dataset; AI outperformed significantly, scoring 50%. Now it’s at 90%.
Here is the link, including to background material.
Claude Sonnet 3.5, economist
TC:
“Is the Stiglitz-Shapiro 1984 efficiency wage model actually a model of sticky wages, or not? Is either the real or nominal wage sticky in that model?”
Claude:
The Stiglitz-Shapiro efficiency wage model from 1984 is not primarily a model of sticky wages, although it can have implications for wage stickiness. Let’s break this down:
- Core concept of the model: The Stiglitz-Shapiro model is fundamentally about explaining why firms might pay wages above the market-clearing level. The key insight is that higher wages can increase productivity by:
  a) Reducing shirking (as workers have more to lose if caught and fired)
  b) Lowering turnover costs
  c) Improving the quality of job applicants
  d) Boosting worker morale and effort
- Wage stickiness in the model:
  - The model does not directly predict sticky wages in the traditional sense of wages that are slow to adjust to economic shocks.
  - However, it does provide a rationale for why wages might not fall even in the face of unemployment. Firms have an incentive to maintain higher wages to preserve the productivity benefits.
- Real vs. Nominal wages:
  - The model is primarily concerned with real wages, not nominal wages.
  - The efficiency wage is set in real terms, as it’s the purchasing power of the wage that affects worker behavior and productivity.
- Implications for wage rigidity:
  - While not a model of sticky wages per se, the efficiency wage concept can contribute to wage rigidity.
  - Firms may be reluctant to cut real wages even during economic downturns because doing so could reduce productivity more than it saves on labor costs.
- Difference from traditional sticky wage models:
  - Traditional sticky wage models often focus on nominal rigidities due to factors like long-term contracts or menu costs.
  - The Stiglitz-Shapiro model provides a real, microeconomic foundation for why wages might not adjust downward, even if they’re not formally “sticky.”
In summary, while the Stiglitz-Shapiro efficiency wage model is not explicitly a model of sticky wages, it does provide insights into why wages might be rigid, particularly in the downward direction. The focus is on real wages rather than nominal wages, and the rigidity emerges from the productivity-enhancing effects of higher wages rather than from institutional or contractual factors.
TC again: What percentage of professional economists could give an answer of comparable quality and nuance?
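For readers who want the algebra behind the “real wage” point, here is a sketch of the textbook no-shirking condition from the 1984 paper, in the standard notation (r the discount rate, b the exogenous separation rate, q the probability a shirker is caught, e the effort cost, V_U the value of unemployment; the paper’s own notation differs slightly):

```latex
% Asset-value equations for an employed non-shirker (V_N) and a shirker (V_S):
\begin{align}
  rV_N &= w - e + b\,(V_U - V_N) \\
  rV_S &= w + (b + q)\,(V_U - V_S)
\end{align}
% Requiring V_N \ge V_S yields the no-shirking condition on the wage:
\begin{equation}
  w \;\ge\; rV_U + e + \frac{e\,(r + b)}{q}
\end{equation}
```

Every term here is a real magnitude, which is why the rigidity Claude describes is about real rather than nominal wages: the wage floor moves with the value of unemployment and the cost of effort, not with the price level.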