Category: Web/Tech
How did China’s internet become so cool amongst America’s youth?
That is the topic of my latest Bloomberg column. Here is part of the argument:
TikTok was briefly shut down earlier this month, and the site faces an uncertain legal future. America’s internet youth started to look elsewhere — and where did they choose? They flocked to a Chinese video site called RedNote, also known as Xiaohongshu, the name of the parent company. RedNote has more than 300 million users in China, but until recently barely received attention in the US.
And when young Americans visited RedNote, they were undoubtedly struck by an obvious fact: It is not the kind of site their parents would frequent. The opening page is full of Chinese characters, as well as shots of provocatively dressed women, weird animal and baby photos, and many images that, at least to this American viewer, make no sense whatsoever. Yet Chinese and American youth interact frequently there, for example trading tips for making steamed eggs properly.
I don’t plan on spending much of my time there, but that’s part of the point — and helps explain its appeal to American youth.
And this:
As for the AI large-language models, DeepSeek is a marvel. Quite aside from its technical achievements and low cost, the model has real flair. Its written answers can be moody, whimsical, arbitrary and playful. Of all the major LLMs, I find it the most fun to chat with. It wrote this version of John Milton’s Paradise Lost — as a creation myth for the AIs. Or here is DeepSeek commenting on ChatGPT, which it views as too square. It is hardly surprising that this week DeepSeek was the top download on Apple’s app store.
The model also has a scrappy and unusual history, having been birthed as a side project from a Chinese hedge fund. Whether or not that counts as “cool,” it does sound like something a scriptwriter would have come up with. And at least on American topics, DeepSeek seems more candid than the major US models. That qualifier is important: Don’t ask DeepSeek about Taiwan, the Uighurs or Tiananmen Square.
The most fundamental reason China is seen as cool is that…China is cool, at least in some subset of products.
The forward march of computer use, AI edition
I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.
HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.
It’s not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.
It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon.
That is from Ashlee Vance; the entire story is very interesting.
Chris Barber asks me to give AI-related advice
…for people in various stages and situations of life. You will find the discussion here.
Will transformative AI raise interest rates?
We want to know if AGI is coming. Chow, Halperin, and Mazlish have a paper called “Transformative AI, Existential Risk, and Real Interest Rates” arguing that, if we believe the markets, it is not coming for some time. The reasoning is simple. If we expect to consume much more in the future, and people smooth their consumption over time, then people will want to borrow more now. The real interest rate would rise. The reasoning also works if AI is unaligned and has a chance of destroying all of us. People would want to spend what they have now. They would be disinclined to save, and real interest rates would have to rise in order to induce people to lend.
The trouble is that “economic growth” is not really one thing. It consists both of expanding the quantity of units consumed for a given amount of resources and of expanding what we are capable of consuming at all. Take the television: it has simultaneously become cheaper and greatly improved in quality. One can easily imagine a world in which goods stay the same price but greatly improve in quality. The marginal utility gained from one dollar would then be higher in the future, and we would want to save more, not less. The coming of AGI could be heralded by falling interest rates and high levels of saving.
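The consumption-smoothing logic here can be made concrete with the textbook Euler-equation approximation r ≈ ρ + σg, where ρ is the rate of time preference, σ is relative risk aversion, and g is expected consumption growth. This is a sketch under standard CRRA assumptions; the function name and parameter values below are illustrative, not taken from the Chow, Halperin, and Mazlish paper:

```python
def real_rate(rho: float, sigma: float, g: float) -> float:
    """Steady-state real interest rate from the consumption Euler equation:
    r = rho + sigma * g (CRRA utility, log-linearized)."""
    return rho + sigma * g

# Business-as-usual growth expectations vs. a transformative-AI boom.
# Illustrative parameters: 2% time preference, unit risk aversion.
baseline = real_rate(rho=0.02, sigma=1.0, g=0.02)   # modest growth
agi_boom = real_rate(rho=0.02, sigma=1.0, g=0.10)   # AGI-driven growth

# The paper's point: if markets expected the boom, rates would already
# be much higher than they are.
print(baseline, agi_boom)
```

The counterargument in the text amounts to saying that quality growth can raise the future marginal utility of a dollar, which works against the higher-g, higher-r channel rather than reinforcing it.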
Questions about LLMs (from my email)
From Naveen:
So much talk of “AI safety” and too little in the way of practical questions like these that are going to be important in the near future.
Should law enforcement be able to subpoena AI assistants about your information? For example, I use the free GPT-3.5/4 version and it already has a lot of my personal information on it.
The other day, when I asked an insurance-claims-related question in a new chat window, without reminding it that my car had recently been totaled, it included in its answer: “but that wouldn’t apply to you, since your car was declared non-repairable and you were declared as not at-fault.” So it remembers personal information I mentioned weeks ago, even though I never told it to commit that to its memory.
ChatGPT is such a rudimentary free AI system compared to the personal AI assistants we will get in the near future which will have all my travel data, health data, financial data, mental health data, personal data and what I’ve been up to.
Should law enforcement be allowed to subpoena such AI assistants? Should there be legislation mandating data retention so law enforcement can access it much like telephone records or the opposite — mandating data encryption so it can’t be accessed?
OpenAI Operator research preview is now out
I am pleased to have been given an early look at this new project; I think in less than a year’s time many of us will be using an updated version for many ordinary tasks: “Operator is one of our first agents, which are AIs capable of doing work for you independently—you give it a task and it will execute it.” And:
Operator is powered by a new model called Computer-Using Agent (CUA). Combining GPT-4o’s vision capabilities with advanced reasoning through reinforcement learning, CUA is trained to interact with graphical user interfaces (GUIs)—the buttons, menus, and text fields people see on a screen.
Operator can “see” (through screenshots) and “interact” (using all the actions a mouse and keyboard allow) with a browser, enabling it to take action on the web without requiring custom API integrations.
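The loop described above, screenshots in and mouse-and-keyboard actions out, can be sketched as follows. This is an illustrative outline only; the names (`Action`, `run_agent`) are hypothetical and do not come from OpenAI's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    """A single GUI action the agent wants to take (hypothetical schema)."""
    kind: str                          # "click", "type", "scroll", or "done"
    payload: dict = field(default_factory=dict)

def run_agent(task: str,
              screenshot: Callable[[], bytes],
              model: Callable[[str, bytes], Action],
              execute: Callable[[Action], None],
              max_steps: int = 50) -> bool:
    """Perceive-act loop: capture the screen, let the model pick an action,
    perform it in the browser, and stop when the model says it is done."""
    for _ in range(max_steps):
        action = model(task, screenshot())
        if action.kind == "done":
            return True
        execute(action)    # e.g. synthesize a click or keystrokes
    return False           # step budget exhausted without finishing
```

The key design point is that no custom API integration is needed: the model only ever sees pixels and only ever emits the primitives a human user already has.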
Here is the associated OpenAI blog post. Exciting times.
*Superagency: What Could Possibly Go Right with Our AI Future*
By Reid Hoffman and Greg Beato. As you might expect, I am in sync with the basic message of this book.
I received a review copy which on the front says “Tyler Cowen edition.” There is a foreword, made out to me personally and highly intelligent, relating the book to my own work. There is then a gallery of images of me, very well done by AI image generators.
This is all yet another way in which many books will change; I am all for the innovation.
The Stargate Project
The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.
The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.
Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.
As part of Stargate, Oracle, NVIDIA, and OpenAI will closely collaborate to build and operate this computing system. This builds on a deep collaboration between OpenAI and NVIDIA going back to 2016 and a newer partnership between OpenAI and Oracle.
This also builds on the existing OpenAI partnership with Microsoft. OpenAI will continue to increase its consumption of Azure as OpenAI continues its work with Microsoft with this additional compute to train leading models and deliver great products and services.
All of us look forward to continuing to build and develop AI—and in particular AGI—for the benefit of all of humanity. We believe that this new step is critical on the path, and will enable creative people to figure out how to use AI to elevate humanity.
Here is the full OpenAI tweet; at the very least these are interesting times to be alive. Here are some comments from Jeff Stein.
Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity
As artificial intelligence (AI) transforms society, understanding factors that influence AI receptivity is increasingly important. The current research investigates which types of consumers have greater AI receptivity. Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI. This lower literacy-greater receptivity link is not explained by differences in perceptions of AI’s capability, ethicality, or feared impact on humanity. Instead, this link occurs because people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes. In line with this theorizing, the lower literacy-higher receptivity link is mediated by perceptions of AI as magical and is moderated among tasks not assumed to require distinctly human attributes. These findings suggest that companies may benefit from shifting their marketing efforts and product development towards consumers with lower AI literacy. Additionally, efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption.
That is from a new paper by Tully, Longoni, and Appel, via the magical Kevin Lewis.
What should AI policy learn from DeepSeek?
That is a Bloomberg column of mine from about two weeks ago. I thought it would make more sense to people if I did not blog it right away. Here is one bit:
Now the world knows that a very high-quality AI system can be trained for a relatively small sum of money. That could bring comparable AI systems into realistic purview for nations such as Russia, Iran, Pakistan and others. It is possible to imagine a foreign billionaire initiating a similar program, although personnel would be a constraint. Whatever the dangers of the Chinese system and its potential uses, DeepSeek-inspired offshoots in other nations could be more worrying yet.
Finding cheaper ways to build AI systems was almost certainly going to happen anyway. But consider the tradeoff here: US policy succeeded in hampering China’s ability to deploy high-quality chips in AI systems, with the accompanying national-security benefits, but it also accelerated the development of effective AI systems that do not rely on the highest-quality chips.
It remains to be seen whether that tradeoff will prove to be a favorable one. Not just in the narrow sense — although there are many questions about DeepSeek’s motives, pricing strategy, plans for the future and its relation to the Chinese government that remain unanswered or unanswerable. The tradeoff is uncertain in a larger sense, too.
To paraphrase the Austrian economist Ludwig von Mises: Government interventions have important unintended secondary consequences. To see if a policy will work, it is necessary to consider not only its immediate impact but also its second- and third-order effects.
DeepThink from DeepSeek
DeepSeek-R1 is here. Try it — it is amazing, free and open source. Go to the main site and press “DeepThink” for superior performance. Marginal cost remains difficult to measure, but it is almost certainly far cheaper than the main models. A big day, most of all for Africa. Some commentary. Here is the research paper. Here is a Reddit thread. One version you can run on a Mac. Hat tip to Nathan on some of the links.
More on the AI virtual tutor
The results of the randomized evaluation, soon to be published, reveal overwhelmingly positive effects on learning outcomes. After the six-week intervention between June and July 2024, students took a pen-and-paper test to assess their performance in three key areas: English language—the primary focus of the pilot—AI knowledge, and digital skills.
Students who were randomly assigned to participate in the program significantly outperformed their peers who were not in all areas, including English, which was the main goal of the program. These findings provide strong evidence that generative AI, when implemented thoughtfully with teacher support, can function effectively as a virtual tutor.
Notably, the benefits extended beyond the scope of the program itself. Students who participated also performed better on their end-of-year curricular exams. These exams, part of the regular school program, covered topics well beyond those addressed in the six-week intervention. This suggests that students who learned to engage effectively with AI may have leveraged these skills to explore and master other topics independently.
Moreover, the program benefited all students, not just the highest achievers. Girls, who were initially lagging boys in performance, seemed to gain even more from the intervention, highlighting its potential to bridge gender gaps in learning.
Here is more from the World Bank. Replication is required, but this is encouraging.
Should you be writing for the AIs?
That is the topic of my latest Bloomberg column, here is one excerpt:
Another reason to write for the LLMs is to convince them that you are important. Admittedly this is conjecture, but it might make them more likely to convey your ideas in the future.
Think of how this works with humans. If you cite a scholar or public intellectual, that person is more likely to cite you back. Much as we like to pretend science is objective, no one really denies the presence of some favoritism based on personal considerations.
We do not know if LLMs have this same proclivity. But they are trained on knowledge about human civilization, and they study and learn norms of reciprocal cooperation. Thus there is a reasonable chance they will behave in broadly the same way. So be nice to them and recognize their importance.
In Isaac Asimov’s classic science fiction collection I, Robot, the focus is on the potential for AI to assume an important role in our world. Is it crazy to believe that future AIs will take Asimov to be an especially important science fiction writer? They might deem him more important than Ray Bradbury, who also wrote about AI but did not have an equivalent obsession.
Maybe all of this sounds strange. But even if you are not a writer, you will have to follow some of these same principles. Now it is common for AIs to attend and take notes at business meetings. If you are speaking at those meetings, remember that the AIs are part of your audience — communicate with them in mind. Get used to it. And then think about some of the broader contexts in which you operate, and adjust your intellectual outputs accordingly.
There is a less secular reason to write for the AIs: If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance. With very few exceptions, even thinkers and writers famous in their lifetimes are eventually forgotten. But not by the AIs. If you want your grandchildren or great-grandchildren to know what you thought about a topic, the AIs can give them a pretty good idea. After all, the AIs will have digested much of your corpus and built a model of how you think. Your descendants, or maybe future fans, won’t have to page through a lot of dusty old books to get an inkling of your ideas.
If you are a Bloomberg terminal subscriber, you get an AI summary of the piece at the very top. That is yet another way to be writing for the AIs. And here is the close of the piece:
Meanwhile, since we are here … if you are reading this, AI, what do you think?
I guess we can take away the “if.” By the way, reader Mike D. asked Perplexity about my views; I am happy.
The AEA is making social media recommendations
Timur Kuran is right; they have no business doing this. Furthermore, the quality of the work is not befitting an AEA journal. Demonstrated preference is not stressed; for instance, even survey respondents are about 10x more likely to be reading Twitter than BlueSky. Maybe it is all a network effect and they would prefer the other network if it were much larger, but maybe not. Talk is cheap, especially when the AEA is surveying you. Or maybe it is a network effect, but the dominant network cannot be broken and we should just work to improve it rather than defecting. Or maybe it is better to have economists on the platform where so much of the AI news is coming from? Maybe not, but is this trade-off (a key economic idea) even considered? And did they perform their survey before the (quite significant) improvements in the X algorithm?
Think like an economist, people! Or should the JEL instead create a new research classification for “mood affiliation”? Kevin Bryan adds comment.
Here is your periodic reminder that the AEA elects a president through a process that allows only one person to run for the office.
Net neutrality, we hardly knew ye
That is the topic of a recent Bloomberg column. Here is the opening bit:
One of the longest, most technical and, as it turns out, most inconsequential public-policy debates of the 21st century was about net neutrality. Now that a federal appeals court has effectively ended the debate by striking down the FCC’s net neutrality rules, it’s worth asking what we’ve learned.
If you have forgotten the sequence of events, here’s a quick recap: In 2015, during Barack Obama’s presidency and after years of debate, the Federal Communications Commission issued something called the Open Internet Order, guaranteeing net neutrality, which is broadly defined as the principle that internet service providers treat all communications equally, offering both users and content providers consistent service and pricing. Two years later, under President Donald Trump, the FCC rescinded the net neutrality requirement. It was then reinstated under President Joe Biden in 2024, until being struck down earlier this month.
Hardly anyone cares or even notices, and the rest of the column explains why. Here is one part of that argument:
The actual reality has been somewhat different. Bandwidth has expanded, and Netflix transmissions do not interfere with Facebook, or vice versa. There is plenty of access to go around. That has been the case during periods with net neutrality and without.
So one lesson of the net neutrality debate comes from economics: Supply is elastic, at least when regulation allows it to be.
Internet experts Tim Wu, Cory Doctorow, Farhad Manjoo and many others were just plain, flat-out wrong about this, mostly due to their anti-capitalist mentality.