Results for “writing”

From the comments, on AI safety

This is from Richard Ngo, who works on the governance team at OpenAI:

A few points:
1. I agree that the alignment community has generally been remiss in not trying hard enough to clarify the arguments in more formal papers.
2. The only peer-reviewed paper making the case for AI risk that I know of is: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064. Though note that my paper (the second you linked) is currently under review at a top ML conference.
3. I don’t think that a formal model would shed much light here. My goal in writing my paper was to establish misaligned power-seeking AGI as a credible scientific hypothesis; I think that most who think it’s credible would then agree that investigating it further should be a key priority, whether or not their credences are more like 10% or more like 90%.

From this batch of comments.  Here is Richard on Twitter.

Matt Yglesias on movies vs. TV

But I’ve gotten really disgruntled with the “prestige TV” landscape and am trying to redirect my content consumption accordingly. One thing that makes movies really great in my view is that before they shoot a movie, they write a screenplay and the screenplay has an end. Both the screenwriter and other people have read that screenplay all the way from beginning to end and they’ve tweaked and changed it and gotten it into a position where they are ready to start production. Then after a movie is filmed, the editor and director work with the footage and come up with a complete movie that has a beginning, a middle, and an end. They then ship the movie out, and it’s screened by critics who watch the entire movie before writing their review.

This does not guarantee that every movie that comes out is good. But it does guarantee that if someone tells you “‘The Menu’ is good,” they are evaluating a completed product…

By contrast, TV shows have this quasi-improvisational quality where the showrunners are constantly needing to come up with new balls to toss into the air. In old-fashioned non-prestigious “adventure of the week”-type shows, this actually works fine because the writers are not building up tension or setting unexplored plots in motion. But as serialized TV storytelling has gotten more and more common, we’re more and more often asked to show patience through early episodes or to try to find things intriguing with no ability to know whether any of it will pay off. Creators often have no idea where they’re going with the story.

Back in HBO’s heyday, the tradeoff was that The Sopranos and The Wire got to paint on a giant canvas and tell stories that are just too capacious for the movie format. But eventually networks got tired of spending that kind of money and cut back the sizes of the casts to something more normal for television.

That is from his Friday mailbag ($).  The bottom line is that, like Matt, you should watch more movies and less TV.

My excellent Conversation with Jessica Wade

Here is the audio, video, and transcript.  Here is part of the summary:

She joined Tyler to discuss if there are any useful gender stereotypes in science, distinguishing between productive and unproductive ways to encourage women in science, whether science Twitter is biased toward men, how AI will affect gender participation gaps, how Wikipedia should be improved, how she judges the effectiveness of her Wikipedia articles, how she’d improve science funding, her work on chiral materials and its near-term applications, whether writing a kid’s science book should be rewarded in academia, what she learned spending a year studying art in Florence, what she’ll do next, and more.

Here is the opening bit:

COWEN: Let’s start with women in science. We will get to your research, but your writings — why is it that women in history were so successful in astronomy so early on, compared to other fields?

WADE: Oh, that’s such a hard question [laughs] and a fascinating one. When you look back at who was allowed to be a scientist in the past, at which type of woman was allowed to be a scientist, you were probably quite wealthy, and you either had a husband who was a scientist or a father who was a scientist. And you were probably allowed to interact with science at home, potentially in things like polishing the lenses that you might use on a telescope, or something like that.

Caroline Herschel was quite big on polishing the lenses that Herschel used to go out and look at and identify comets, and was so successful in identifying these comets that she wanted to publish herself and really struggled, as a woman, to be allowed to do that at the end of the 1800s, beginning of the 1900s. I think, actually, it was just that possibility to be able to access and do that science from home, to be able to set up in your beautiful dark-sky environment without the bright lights of a city and do it alongside your quite successful husband or father.

After astronomy, women got quite big in crystallography. There were a few absolutely incredible women crystallographers throughout the 1900s. Dorothy Hodgkin, Kathleen Lonsdale, Rosalind Franklin — people who really made that science possible. That was because they were provided entry into that, and the way that they were taught at school facilitated doing that kind of research. I find it fascinating they were allowed, but if only we’d had more, you could imagine what could have happened.

COWEN: So, household production you think is the key variable, plus the ability to be helped or trained by a father or husband?

The discussion of chirality and her science work is very interesting, though hard to summarize.  I very much like this part, when I asked her about her most successful unusual work habit:

But just writing the [Wikipedia] biography of the person I was going to work with meant that I was really prepped for going. And if I’m about to see someone speak, writing their biography before means I get this. That’s definitely my best work habit — write the Wikipedia page of what it is that you are working on.

I don’t agree with her on the environment/genes issue, but overall a very good CWT, with multiple distinct parts.

What I’ve been reading

1. Judith A. Green, The Normans: Power, Conquest & Culture in 11th-Century Europe.  A very clear and to the point book on a complex topic.  This is a good one to read with GPT-4 accompaniment for your queries.  In Sicily, near Palermo, the Normans produced one of my favorite sites in all of Europe.

2. John M. MacKenzie, A Cultural History of the British Empire.  “A vital characteristic of polo was that since it lacked immediate physical contact it could be jointly played by British and Indians, which of course meant elite Indians, inevitably associated with the princely states.”  A very good book on both a) early globalization, and b) actually understanding the British empire.  I hadn’t known that during the 1930s and ’40s, the peak years of resistance to the British empire, cricket tournaments largely were abandoned.

3. Carmela Ciuraru, Lives of the Wives: Five Literary Marriages.  I hadn’t even known Patricia Neal was married to Roald Dahl.  Overall I enjoy intellectual/romance gossip books, and this is a good one.  Full of actual facts about the writings, not just the affairs and the marriages and divorces.  Moravia/Morante was my favorite chapter.  Here is a Guardian review; superficially you might think there is no real message in this book, but then again…

4. Lucy Wooding, Tudor England: A History.  A good book, but most of all a very good book to read with GPT-4 as your companion.

Jeanna Smialek, Limitless: The Federal Reserve Takes on a New Age of Crisis, is a good, readable, non-technical introduction to the Fed, focusing on personalities and internal mechanics, rather than macroeconomic theories.

Rainer Zitelmann, In Defense of Capitalism: Debunking the Myths.  A very good pro-capitalism book, broadly in the Milton Friedman tradition.

Peter Frankopan, The Earth Transformed: An Untold History.  Long, full of information, and well written, but somehow lacks a central organizing thesis to hold it all together.

Murray Pittock, Enlightenment in a Smart City: Edinburgh’s Civic Development 1660-1750 is an excellent book on how the built environment of Edinburgh, and its building reforms and improvements, shaped the Scottish Enlightenment.  Gives a better sense of the Edinburgh of the time than any other book I know.  I don’t mean the thinkers in the city, I mean the city itself.

Charles Dunst, Defeating the Dictators: How Democracy Can Prevail in the Age of the Strongman.  Full of true claims, common sense, and a needed dose of optimism.

I have not yet read Mark Calabria’s Shelter from the Storm: How a Covid Mortgage Meltdown was Averted, a Cato Institute book.

Existential risk, AI, and the inevitable turn in human history

In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least.  For my entire life, and a bit more, there have been two essential features of the basic landscape:

1. American hegemony over much of the world, and relative physical safety for Americans.

2. An absence of truly radical technological change.

Unless you are very old, old enough to have taken in some of WWII, or were drafted into Korea or Vietnam, probably those features describe your entire life as well.

In other words, virtually all of us have been living in a bubble “outside of history.”

Now, circa 2023, at least one of those assumptions is going to unravel, namely #2.  AI represents a truly major, transformational technological advance.  Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.

#1 might unravel soon as well, depending on how Ukraine and Taiwan fare.  It is fair to say we don’t know; nonetheless #1 also is under increasing strain.

Hardly anyone you know, including yourself, is prepared to live in actual “moving” history.  It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad.  In my view the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high.

I am reminded of the advent of the printing press, after Gutenberg.  Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits.  But it also gave us the writings of Lenin and Hitler, and Mao’s Little Red Book.  It is a moot point whether you can “blame” those on the printing press; nonetheless the press brought (in combination with some other innovations) a remarkable amount of true, moving history.  How about the Wars of Religion and the bloody 17th century to boot?  Still, if you were redoing world history you would take the printing press in a heartbeat.  Who needs poverty, squalor, and recurrences of Genghis Khan-like figures?

But since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum.  We don’t know how to respond psychologically, or for that matter substantively.  And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists (e.g., Eliezer).  No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.

The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring.  No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring.  No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly).  No one.  Not you, not Eliezer, not Sam Altman, and not your next door neighbor.

How well did people predict the final impacts of the printing press?  How well did people predict the final impacts of fire?  We even have an expression, “playing with fire.”  Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells ‘fire’ in a crowded theater!?”).

So when people predict a high degree of existential risk from AGI, I don’t actually think “arguing back” on their chosen terms is the correct response.  Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely.  Nonetheless I am still for people doing constructive work on the problem of alignment, just as we do with all other technologies, to improve them.  I have even funded some of this work through Emergent Ventures.

I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern.  No one can foresee those futures!  Once you keep up the arguing, you also are talking yourself into an illusion of predictability.  Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become.  It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison.  The case for destruction is so much more readily articulable — “boom!”  Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns.  (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.)  Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine.  All the possibilities are distant, I cannot stress that enough.  The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances.  “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.

I would put it this way.  Our previous stasis, as represented by my #1 and #2, is going to end anyway.  We are going to face that radical uncertainty anyway.  And probably pretty soon.  So there is no “ongoing stasis” option on the table.

I find this reframing helps me come to terms with current AI developments. The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?”  And believe me, if we do nothing, yes, we will re-enter living history and quite possibly get nothing in return for our trouble.

With AI, do we get positives?  Absolutely, there can be immense benefits from making intelligence more freely available.  It also can help us deal with other existential risks.  Importantly, AI offers the potential promise of extending American hegemony just a bit more, a factor of critical importance, as Americans are right now the AI leaders.  And should we wait, and get a “more Chinese” version of the alignment problem?  I just don’t see the case for that, and no I really don’t think any international cooperation options are on the table.  We can’t even resurrect WTO or make the UN work or stop the Ukraine war.

Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?  That has not the self-confidence to confidently confront a big dose of more intelligence?  Dare I wonder if such societies might not perish under their current watch, with or without AI?  Do you really want to press the button, giving us that kind of American civilization?

So we should take the plunge.  If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this.  Don’t be suckered into taking their bait.  The longer a historical perspective you take, the more obvious this point will be.  We should take the plunge.  We already have taken the plunge.  We designed/tolerated our decentralized society so we could take the plunge.

See you all on the other side.

My excellent Conversation with Tom Holland

Here is the transcript, audio, and video.  Here is part of the summary:

Historian Tom Holland joined Tyler to discuss in what ways his Christianity is influenced by Lord Byron, how the Book of Revelation precipitated a revolutionary tradition, which book of the Bible is most foundational for Western liberalism, the political differences between Paul and Jesus, why America is more pro-technology than Europe, why Herodotus is his favorite writer, why the Greeks and Persians didn’t industrialize despite having advanced technology, how he feels about devolution in the United Kingdom and the potential of Irish unification, what existential problem the Church of England faces, how the music of Ennio Morricone helps him write for a popular audience, why Jurassic Park is his favorite movie, and more.

Here is one excerpt:

COWEN: Which Gospel do you view as most foundational for Western liberalism and why?

HOLLAND: I think that that is a treacherous question to ask because it implies that there would be a coherent line of descent from any one text that can be traced like that. I think that the line of descent that leads from the Gospels and from the New Testament and from the Bible and, indeed, from the entire corpus of early Christian texts to modern liberalism is too confused, too much of a swirl of influences for us to trace it back to a particular text.

If I had to choose any one book from the Bible, it wouldn’t be a Gospel. It would probably be Paul’s Letter to the Galatians because Paul’s Letter to the Galatians contains the famous verse that there is no Jew or Greek, there is no slave or free, there is no man or woman in Christ. In a way, that text — even if you bracket out and remove the “in Christ” from it — that idea that, properly, there should be no discrimination between people of different cultural and ethnic backgrounds, based on gender, based on class, remains pretty foundational for liberalism to this day.

I think that liberalism, in so many ways, is a secularized rendering of that extraordinary verse. But I think it’s almost impossible to avoid metaphor when thinking about what the relationship is of these biblical texts, these biblical verses to the present day. I variously compared Paul, in particular in his letters and his writings, rather unoriginally, to an acorn from which a mighty oak grows.

But I think actually, more appropriately, of a depth charge released beneath the vast fabric of classical civilization. And the ripples, the reverberations of it are faint to begin with, and they become louder and louder and more and more disruptive. Those echoes from that depth charge continue to reverberate to this day.

And:

COWEN: In Genesis and Exodus, why does the older son so frequently catch it hard?

HOLLAND: Well, I’m an elder son.

COWEN: I know. Your brother’s younger, and he’s a historian.

HOLLAND: My brother is younger. It’s a question on which I’ve often pondered, because I was going to church.

COWEN: What do you expect from your brother?

HOLLAND: The truth is, I have no idea. I don’t know. I’ve often worried about it.

Quite a good CWT.

Chat Law Goes Global

PricewaterhouseCoopers (PwC), the global business services firm, has signed a deal for access to “Harvey,” a chatbot for legal services built on OpenAI’s technology.

Reuters: PricewaterhouseCoopers said Wednesday that it will give 4,000 of its legal professionals access to an artificial intelligence platform, becoming the latest firm to introduce generative AI technology for legal work.

PwC said it partnered with AI startup Harvey for an initial 12-month contract, which the accounting and consulting firm said will help lawyers with contract analysis, regulatory compliance work, due diligence and other legal advisory and consulting services.

PwC said it will also determine ways for tax professionals to use the technology.

IBM’s Watson was a failure, so we will see. But, yeah, I will say it: this time feels different. For one, lawyers deal with text, where GPTs excel. Second, GPTs have already revolutionized software coding, and unlike with Watson, I am using GPTs every day for writing and researching, and it works. The entire world of white collar work is going to be transformed over the next year. See also my paper with Tyler, How to Learn and Teach Economics with Large Language Models, Including GPT.

Teaching and Learning Economics with the AIs

Tyler and I have a new paper, How to Learn and Teach Economics with Large Language Models, Including GPT:

GPTs, such as ChatGPT and Bing Chat, are capable of answering economics questions, solving specific economic models, creating exams, assisting with research, generating ideas, and enhancing writing, among other tasks. This paper highlights how these innovative tools differ from prior software and necessitate novel methods of interaction. By providing examples, tips, and guidance, we aim to optimize the use of GPTs and LLMs for learning and teaching economics effectively.

Most of the paper is about how to use GPTs effectively but we also make some substantive points that many people are missing:

GPTs are not simply a chatty front end to the internet. Some GPTs like ChatGPT have no ability to search the internet. Others, like Bing Chat, can search the internet and might do so to aid in answering a question, but that is not fundamentally how they work. It is possible to ask a GPT questions that no one has ever asked before. For example, we asked how Fred Flintstone was like Hamlet, and ChatGPT responded (in part):

Fred Flintstone and Hamlet are two vastly different characters from different time periods, cultures, and mediums of storytelling. It is difficult to draw direct comparisons between the two.

However, one possible point of similarity is that both characters face existential dilemmas and struggles with their sense of purpose and identity. Hamlet is plagued by doubts about his ability to avenge his father’s murder, and his own worthiness as a human being. Similarly, Fred Flintstone often grapples with his place in society and his ability to provide for his family and live up to his own expectations.

Not a bad answer for a silly question and one that (as far as we can tell) cannot be found on the internet.

GPTs have “read” or “absorbed” a great amount of text, but that text isn’t stored in a database; instead the text was used to adjust the billions of parameters (the weights) in the neural net. It is thus possible to run a GPT on a powerful home computer. It would be very slow, since computing each word requires billions of calculations, but unlike storing the internet on your home computer, it is feasible to run a GPT on a home computer or even (fairly soon) on a mobile device.
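To make the feasibility point concrete, here is a back-of-envelope sketch. The model size (7 billion parameters) and 16-bit precision are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope sketch with hypothetical numbers: the memory needed to
# hold a model's weights. The "knowledge" lives in the parameters, not in
# a stored corpus, which is what makes local inference feasible at all.
params = 7_000_000_000   # assume a 7-billion-parameter model
bytes_per_param = 2      # 16-bit floating point
gigabytes = params * bytes_per_param / 1e9
print(f"{gigabytes:.0f} GB of weights")  # -> 14 GB of weights
```

At larger parameter counts or higher precision the weights stop fitting in an ordinary machine’s memory, which is why it is the smaller or compressed models that run locally today.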

GPTs work by predicting the next word in a sequence. If you hear the phrase “the Star-Spangled”, for example, you and a GPT might predict that the word “Banner” is likely to come next. This is what GPTs are doing but it would be a mistake to conclude that GPTs are simply “autocompletes” or even autocompletes on steroids.
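The next-word objective itself can be sketched with a deliberately crude predictor that just counts which word follows which in a tiny invented corpus. Real GPTs differ in exactly the way noted above: they condition on long contexts through billions of learned parameters rather than raw counts, but the prediction task is the same:

```python
from collections import Counter, defaultdict

# A toy next-word predictor built from bigram counts over an invented
# mini-corpus. This is "autocomplete" in its most naive form.
corpus = (
    "the star spangled banner waved . "
    "the star spangled banner is a song . "
    "the star shone brightly ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally each observed next word

def predict(word):
    """Return the most frequent word observed after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict("spangled"))  # -> banner
```

A counter like this can only parrot sequences it has seen; the point of the contrast in the text is that GPTs generalize far beyond that.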

Autocompletes are primarily statistical guesses based on previously asked questions. GPTs, in contrast, have some understanding (recall the “as if” modifier) of the meaning of words. Thus GPTs understand that Red, Green, and Blue are related concepts, and that King, Queen, Man, and Woman are related in a specific way such that a woman cannot be a King. They also understand that fast and slow are related concepts, such that a car cannot be going fast and slow at the same time but can be fast and red, and so forth. Thus GPTs are able to “autocomplete” sentences which have never been written before, as we described earlier. More generally, it seems likely that GPTs are building internal models to help them predict the next word in a sentence (e.g., Li et al. 2023).
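This kind of relational structure is often illustrated with word-vector arithmetic. The sketch below uses tiny hand-made two-dimensional vectors, invented purely for illustration (real models learn hundreds of dimensions from data), in which King minus Man plus Woman lands closest to Queen:

```python
import math

# Toy 2-D "embeddings" invented for illustration: the first coordinate
# loosely encodes royalty, the second encodes gender.
vectors = {
    "king":  (1.0,  1.0),
    "queen": (1.0, -1.0),
    "man":   (0.0,  1.0),
    "woman": (0.0, -1.0),
    "apple": (-1.0, 0.0),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.hypot(*u) * math.hypot(*v)
    return dot / norms

def analogy(a, b, c):
    """Return the word whose vector is closest to a - b + c."""
    target = tuple(x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c]))
    candidates = {w for w in vectors if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("king", "man", "woman"))  # -> queen
```

In a learned embedding space the same arithmetic works only approximately, but well enough that analogy queries like this became a standard demonstration of what word vectors capture.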

The paper is a work in progress so comments are welcome.

What should I ask Jonathan Swift?

Yes, I would like to do a Conversation with Jonathan “G.P.T.” Swift.  Here is Wikipedia on Swift, excerpt:

Jonathan Swift (30 November 1667 – 19 October 1745) was an Anglo-Irish satirist, author, essayist, political pamphleteer (first for the Whigs, then for the Tories), poet, and Anglican cleric who became Dean of St Patrick’s Cathedral, Dublin, hence his common sobriquet, “Dean Swift”.

Swift is remembered for works such as A Tale of a Tub (1704), An Argument Against Abolishing Christianity (1712), Gulliver’s Travels (1726), and A Modest Proposal (1729). He is regarded by the Encyclopædia Britannica as the foremost prose satirist in the English language. He originally published all of his works under pseudonyms—such as Lemuel Gulliver, Isaac Bickerstaff, M. B. Drapier—or anonymously. He was a master of two styles of satire, the Horatian and Juvenalian styles.

His deadpan, ironic writing style, particularly in A Modest Proposal, has led to such satire being subsequently termed “Swiftian”.

So what should I ask him?  I thank you in advance for your suggestions.

That was then, this is now

From Taylor C. Sherman’s useful Nehru’s India: A History in Seven Myths:

Although Hindu nationalists had gained prominence in the run-up to partition, the new Congress leaders of the Government of India tried to sideline them.  After Gandhi’s assassination on 30 January 1948, members of the Rashtriya Swayamsevak Sangh were arrested, and the Hindu Mahasabha declared it would not take part in politics.  In short, though raging before partition, the flames of Hindu chauvinism were quickly doused after independence, at least according to the old nationalist narrative.  Secondly, the reform of Hinduism was seen as an essential element of secularism.  To this end, a prominent Dalit, Bhimrao Ramji Ambedkar, was put in charge of both writing the Constitution and overseeing reform of Hindu personal law.  Within a short time after independence, so the myth goes, India had a secular state, and was on course to establish a sense of security and belonging for the two groups who had raised the loudest objections to Congress’s nationalism: Muslims and Dalits.

As with so many of the myths that have arisen about this period after independence, the myth of Indian secularism owes a great deal to Jawaharlal Nehru.

The book is both a good focused view of the Nehru era and excellent background for current disputes.

Statement of Commitment to Academic Freedom and to Intellectual Merit

Academic freedom and intellectual merit are under attack in the United States, from both the left and the right. The norms of the university and intellectual life are fragile and need protecting because such norms are always in tension with political and economic power.

The undersigned members of the GMU Department of Economics express their commitment to academic freedom and to intellectual merit.

Addressed to the George Mason University (GMU) community and the public at large

~~~

American universities have professed allegiance to two ideals. First, the ideal of academic freedom – the right of students and faculty to express any idea in speech or writing, without fear of university punishment, and secure in the knowledge that the university will protect dissenters from threats and violence on campus.

Second, the ideal of intellectual merit – the right and duty of academic departments to hire and promote the most brilliant, creative, and productive faculty in their fields, and admit the most intellectually promising students, without pressures from the administration.

These ideals are the cornerstones of liberal education. They protect faculty and students who hold views unpopular on university campuses. Academic freedom protects existing students and faculty who dissent from current dominant academic opinion and ideology. No matter how unpopular their views, they know the university will protect them. As stated in the University of Chicago Statement on freedom of expression and as quoted in GMU’s “Free Speech at Mason” Statement:

[We must hold a fundamental commitment to] the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.

Intellectual merit protects prospective students and faculty who speak and write against current dominant viewpoints. No matter how unpopular their views, they know that university administration will not obstruct or prejudice their admission, hiring, or promotion.

Recently, both of these ideals have come under attack. Pressure for conformity has intensified and universities have increasingly interfered with departments’ personnel decisions. For example, at some universities, one of the more egregious new practices is the requiring of written “diversity” statements by prospective students, staff, or faculty, then used to discriminate among candidates, often by quarters of the university with interests other than those of the department or unit. Such methods recall arrogations of the past, such as The Levering Act of 1950, used against radicals.

We strongly believe the attacks on academic freedom and intellectual merit are deeply mistaken. The classic rationales in favor of these ideals are sound. To protect them, viewpoint diversity must be celebrated and academic departments must maintain their ability to select, hire, and promote students and personnel based on intellectual merit. We insist that the degree of institutional autonomy that the GMU Department of Economics has traditionally enjoyed is vital to the health of viewpoint diversity not only within the university but within the academy writ large.

It is vital that every department in a university enjoys independence, so it can dare to be different and keep viewpoint diversity alive. George Mason University has excelled in supporting viewpoint diversity with a variety of diverse departments, centers and organizations. Viewpoint diversity at George Mason has benefited the university, the United States, and the wider intellectual world.

Indeed, some of the Department’s chief contributions have taught that all forms of authority can exert power to excess, and that guarding against such excess calls for the very ideals affirmed here, respect for dissent and intellectual merit.

We, the undersigned members of the GMU Department of Economics, look forward to continuing our independence to do good economics according to our judgment, guided by the ideals of academic freedom and intellectual merit.

Signed by the following GMU Department of Economics faculty (full-time & emeritus):

1. Jonathan P. Beauchamp
2. James T. Bennett
3. Donald J. Boudreaux
4. Bryan D. Caplan
5. Vincent J. Geloso
6. Timothy Groseclose
7. Robin D. Hanson
8. Garett Jones
9. Daniel B. Klein
10. Mark Koyama
11. David M. Levy
12. Cesar A. Martinelli
13. John V.C. Nye
14. Thomas C. Rustici
15. Vernon L. Smith
16. Alex Tabarrok
17. Karen I. Vaughn
18. Richard E. Wagner
19. Lawrence H. White

Why AI will not create unimaginable fortunes

From my Bloomberg column from last week:

A small number of AI services, possibly even a single one, likely will end up better than the others for a wide variety of purposes. Such companies might buy the best hardware, hire the best talent and manage their brands relatively well. But they will face competition from other companies offering lesser (but still good) services at a lower price. When it comes to LLMs, there is already a proliferation of services, with Baidu, Google and Anthropic products due in the market. The market for AI image generation is more crowded yet.

In economic terms, the dominant AI company might turn out to be something like Salesforce. Salesforce is a major seller of business and institutional software, and its products are extremely popular. Yet the valuation of the company, as of this writing, is about $170 billion. That’s hardly chump change, but it does not come close to the $1 trillion valuations elsewhere in the tech sector.

OpenAI, a current market leader, has received a private valuation of $29 billion. Again, that’s not a reason to feel sorry for anyone — but there are plenty of companies you might not have heard of that are worth far more. AbbVie, a biopharmaceutical corporation, has a valuation of about $271 billion, almost 10 times higher than OpenAI’s.

To be clear, none of this is evidence that AI will peter out. Instead, AI services will enter almost everyone’s workflow and percolate through the entire economy. Everyone will be wealthier, most of all the workers and consumers who use the thing. The key ideas behind AI will spread and be replicated — and the major AI companies of the future will face plenty of competition, limiting their profits.

In fact, AI’s ubiquity may degrade its value, at least from a market perspective. It’s likely the AI boom has yet to peak, but the speculative fervor is almost palpable. Share prices have responded to AI developments enthusiastically. BuzzFeed shares rose 150% in one day last month, for example, after the company announced it would use AI to generate content. Does that really make sense, given all the competition BuzzFeed faces?

It’s when those prices and valuations start falling that you will know the AI revolution has truly arrived. In the end, the greatest impact of AI may be on its users, not its investors or even its inventors.

We’ll see how those predictions hold up.