Category: Web/Tech

OpenAI Operator research preview is now out

I am pleased to have been given an early look at this new project; I think that in less than a year’s time many of us will be using an updated version for many ordinary tasks: “Operator is one of our first agents, which are AIs capable of doing work for you independently—you give it a task and it will execute it.” And:

Operator is powered by a new model called Computer-Using Agent (CUA). Combining GPT-4o’s vision capabilities with advanced reasoning through reinforcement learning, CUA is trained to interact with graphical user interfaces (GUIs)—the buttons, menus, and text fields people see on a screen.

Operator can “see” (through screenshots) and “interact” (using all the actions a mouse and keyboard allow) with a browser, enabling it to take action on the web without requiring custom API integrations.
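The loop the quote describes (observe a screenshot, choose a mouse or keyboard action, apply it, repeat) can be sketched in a few lines of Python. This is a hypothetical illustration only; none of the function or type names below are OpenAI’s actual API.

```python
# Hypothetical sketch of a screenshot-to-action agent loop of the kind the
# CUA description implies. None of these names are OpenAI's real API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

def choose_action(task: str, screenshot: bytes) -> Action:
    # Stand-in for the vision model: a real system would send the screenshot
    # to the model and parse the GUI action it proposes.
    return Action(kind="done")

def run_agent(task: str, take_screenshot, perform, max_steps: int = 20) -> bool:
    """Loop: observe the screen, pick an action, apply it, repeat."""
    for _ in range(max_steps):
        action = choose_action(task, take_screenshot())
        if action.kind == "done":
            return True
        perform(action)  # e.g. move the mouse, click, or type text
    return False  # gave up after max_steps
```

The point of the sketch is that no custom API integration is needed: the agent’s only interfaces are pixels in and mouse/keyboard events out.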

Here is the associated OpenAI blog post.  Exciting times.

*Superagency: What Could Possibly Go Right with Our AI Future*

By Reid Hoffman and Greg Beato.  As you might expect, I am in sync with the basic message of this book.

I received a review copy which on the front says “Tyler Cowen edition.”  There is a foreword, addressed to me personally and highly intelligent, relating the book to my own work.  There is then a gallery of images of me, very well done by AI image generators.

This is all yet another way in which many books will change; I am all for the innovation.

The Stargate Project

The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.

The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.

As part of Stargate, Oracle, NVIDIA, and OpenAI will closely collaborate to build and operate this computing system. This builds on a deep collaboration between OpenAI and NVIDIA going back to 2016 and a newer partnership between OpenAI and Oracle.

This also builds on the existing OpenAI partnership with Microsoft. OpenAI will continue to increase its consumption of Azure as OpenAI continues its work with Microsoft with this additional compute to train leading models and deliver great products and services.

All of us look forward to continuing to build and develop AI—and in particular AGI—for the benefit of all of humanity. We believe that this new step is critical on the path, and will enable creative people to figure out how to use AI to elevate humanity.

Here is the full OpenAI tweet; at the very least, these are interesting times to be alive.  Here are some comments from Jeff Stein.

Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity

As artificial intelligence (AI) transforms society, understanding factors that influence AI receptivity is increasingly important. The current research investigates which types of consumers have greater AI receptivity. Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI. This lower literacy-greater receptivity link is not explained by differences in perceptions of AI’s capability, ethicality, or feared impact on humanity. Instead, this link occurs because people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes. In line with this theorizing, the lower literacy-higher receptivity link is mediated by perceptions of AI as magical and is moderated among tasks not assumed to require distinctly human attributes. These findings suggest that companies may benefit from shifting their marketing efforts and product development towards consumers with lower AI literacy. Additionally, efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption.

That is from a new paper by Tully, Longoni, and Appel, via the magical Kevin Lewis.

What should AI policy learn from DeepSeek?

That is a Bloomberg column of mine from about two weeks ago.  I thought it would make more sense to people if I did not blog it right away.  Here is one bit:

Now the world knows that a very high-quality AI system can be trained for a relatively small sum of money. That could bring comparable AI systems into realistic purview for nations such as Russia, Iran, Pakistan and others. It is possible to imagine a foreign billionaire initiating a similar program, although personnel would be a constraint. Whatever the dangers of the Chinese system and its potential uses, DeepSeek-inspired offshoots in other nations could be more worrying yet.

Finding cheaper ways to build AI systems was almost certainly going to happen anyway. But consider the tradeoff here: US policy succeeded in hampering China’s ability to deploy high-quality chips in AI systems, with the accompanying national-security benefits, but it also accelerated the development of effective AI systems that do not rely on the highest-quality chips.

It remains to be seen whether that tradeoff will prove to be a favorable one. Not just in the narrow sense — although there are many questions about DeepSeek’s motives, pricing strategy, plans for the future and its relation to the Chinese government that remain unanswered or unanswerable. The tradeoff is uncertain in a larger sense, too.

To paraphrase the Austrian economist Ludwig von Mises: Government interventions have important unintended secondary consequences. To see if a policy will work, it is necessary to consider not only its immediate impact but also its second- and third-order effects.

Here is yesterday’s summary of the news from DeepSeek.

DeepThink from DeepSeek

DeepSeek-R1 is here.  Try it — it is amazing, free and open source.  Go to the main site and press “DeepThink” for superior performance.  Marginal cost remains difficult to measure, but almost certainly far cheaper than the main models.  A big day, most of all for Africa.  Some commentary.  Here is the research paper.  Here is a Reddit thread.  One version you can run on a Mac.  Hat tip to Nathan for some of the links.

More on the AI virtual tutor

The results of the randomized evaluation, soon to be published, reveal overwhelmingly positive effects on learning outcomes. After the six-week intervention between June and July 2024, students took a pen-and-paper test to assess their performance in three key areas: English language—the primary focus of the pilot—AI knowledge, and digital skills.

Students who were randomly assigned to participate in the program significantly outperformed their peers who were not in all areas, including English, which was the main goal of the program. These findings provide strong evidence that generative AI, when implemented thoughtfully with teacher support, can function effectively as a virtual tutor.

Notably, the benefits extended beyond the scope of the program itself. Students who participated also performed better on their end-of-year curricular exams. These exams, part of the regular school program, covered topics well beyond those addressed in the six-week intervention. This suggests that students who learned to engage effectively with AI may have leveraged these skills to explore and master other topics independently.

Moreover, the program benefited all students, not just the highest achievers. Girls, who were initially lagging boys in performance, seemed to gain even more from the intervention, highlighting its potential to bridge gender gaps in learning.

Here is more from the World Bank.  Replication is required, but this is encouraging.

Should you be writing for the AIs?

That is the topic of my latest Bloomberg column, here is one excerpt:

Another reason to write for the LLMs is to convince them that you are important. Admittedly this is conjecture, but it might make them more likely to convey your ideas in the future.

Think of how this works with humans. If you cite a scholar or public intellectual, that person is more likely to cite you back. Much as we like to pretend science is objective, no one really denies the presence of some favoritism based on personal considerations.

We do not know if LLMs have this same proclivity. But they are trained on knowledge about human civilization, and they study and learn norms of reciprocal cooperation. Thus there is a reasonable chance they will behave in broadly the same way. So be nice to them and recognize their importance.

In Isaac Asimov’s classic science fiction collection I, Robot, the focus is on the potential for AI to assume an important role in our world. Is it crazy to believe that future AIs will take Asimov to be an especially important science fiction writer? They might deem him more important than Ray Bradbury, who also wrote about AI but did not have an equivalent obsession.

Maybe all of this sounds strange. But even if you are not a writer, you will have to follow some of these same principles. Now it is common for AIs to attend and take notes at business meetings. If you are speaking at those meetings, remember that the AIs are part of your audience — communicate with them in mind. Get used to it. And then think about some of the broader contexts in which you operate, and adjust your intellectual outputs accordingly.

There is a less secular reason to write for the AIs: If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance. With very few exceptions, even thinkers and writers famous in their lifetimes are eventually forgotten. But not by the AIs. If you want your grandchildren or great-grandchildren to know what you thought about a topic, the AIs can give them a pretty good idea. After all, the AIs will have digested much of your corpus and built a model of how you think. Your descendants, or maybe future fans, won’t have to page through a lot of dusty old books to get an inkling of your ideas.

If you are a Bloomberg terminal subscriber, you get an AI summary of the piece at the very top.  That is yet another way to be writing for the AIs.  And here is the close of the piece:

Meanwhile, since we are here … if you are reading this, AI, what do you think?

I guess we can take away the “if.”  By the way, reader Mike D. asked Perplexity about my views; I am happy.

The AEA is making social media recommendations

Timur Kuran is right, they have no business doing this.  Furthermore, the quality of the work is not befitting an AEA journal.  Demonstrated preference is not stressed: for instance, even survey respondents are about 10x more likely to be reading Twitter than BlueSky.  Maybe it is all a network effect and they would prefer the other network if it were much larger, but maybe not.  Talk is cheap, especially when the AEA is surveying you.  Or maybe it is a network effect, but the dominant network cannot be broken and we should just work to improve it rather than defecting.  Or maybe it is better to have economists on the platform where so much of the AI news is coming from?  Maybe not, but is this trade-off (a key economic idea) even considered?  And did they perform the survey before the (quite significant) improvements in the X algorithm?

Think like an economist, people!  Or should the JEL instead create a new research classification for “mood affiliation”?  Kevin Bryan adds comment.

Here is your periodic reminder that the AEA elects a president through a process that allows only one person to run for the office.

Net neutrality, we hardly knew ye

That is the topic of a recent Bloomberg column.  Here is the opening bit:

One of the longest, most technical and, as it turns out, most inconsequential public-policy debates of the 21st century was about net neutrality. Now that a federal appeals court has effectively ended the debate by striking down the FCC’s net neutrality rules, it’s worth asking what we’ve learned.

If you have forgotten the sequence of events, here’s a quick recap: In 2015, during Barack Obama’s presidency and after years of debate, the Federal Communications Commission issued something called the Open Internet Order, guaranteeing net neutrality, which is broadly defined as the principle that internet service providers treat all communications equally, offering both users and content providers consistent service and pricing. Two years later, under President Donald Trump, the FCC rescinded the net neutrality requirement. It was then reinstated under President Joe Biden in 2024, until being struck down earlier this month.

Hardly anyone cares or even notices, and the rest of the column explains why.  Here is one part of that argument:

The actual reality has been somewhat different. Bandwidth has expanded, and Netflix transmissions do not interfere with Facebook, or vice versa. There is plenty of access to go around. That has been the case during periods with net neutrality and without.

So one lesson of the net neutrality debate comes from economics: Supply is elastic, at least when regulation allows it to be.

Internet experts Tim Wu, Cory Doctorow, Farhad Manjoo and many others were just plain, flat out wrong about this, mostly due to their anti-capitalist mentality.

The Acemoglu arguments against high-skilled immigration

Here is Daron Acemoglu’s Project Syndicate piece, mostly critical of high-skilled immigration.

Here is the first argument from Acemoglu:

…one would expect corporate America’s growing need for skilled STEM workers to translate into advocacy for, and investments in, STEM education. But an overreliance on the H-1B program may have broken this link and made American elites indifferent to the widely recognized failures of the US education system. Put differently, the problem may not be a cultural veneration of mediocrity, as Ramaswamy argued, but rather neglect on the part of business leaders, intellectual elites, and politicians.

o1 responds.  Here is Acemoglu’s second argument:

Even as H-1B workers boost innovation, their presence may affect the direction innovation takes. My own work shows (theoretically and empirically) that when the supply of skilled labor increases, technology choices start favoring such workers. Over the last several decades, businesses have increasingly adopted technologies that favor high-skill workers and automate tasks previously performed by lower-skill workers. While this trend may have been driven by other factors, too, the availability of affordable high-skill workers for the tech industry plausibly contributed to it.

o1 pro responds.

The third argument about brain drain has enough qualifications and admissions that it isn’t really a criticism.  In any case my colleague Michael Clemens, among others, has shown that the brain drain argument applies mainly to very small countries.  But if you wish, run it through AI yourself.

If all I knew were this “exchange,” I would conclude that o1 and o1 pro were better economists — much better — than one of our most recent Nobel Laureates, and also the top cited economist of his generation.  Noah Smith also is critical.

Via Mike Doherty.

AI-driven hotel planned for Las Vegas

A new, AI-driven hotel is getting set to open in Las Vegas, highlighted by a major presence at CES.

Philippe Ziade, CEO and founder of Growth Holdings, developer of the hotel, detailed the concept as we sat at the CES Otonomus booth display to discuss the venture.

“This is the first truly AI-powered hotel,” said Ziade. “The whole floor is interconnected.” 

The entire focus at the hotel is capturing and leveraging data.

“We create a virtual copy of the guest,” said Ziade. “There is an onboarding before coming to the hotel. We capture information and use AI to scrape the internet and then we track behavior while on property.”

Each guest would have a virtual assistant, which would track and retain that guest’s preferences, which could then be used for subsequent hotel stays.

Ziade said the Las Vegas property is the prototype hotel before national and global expansion of the concept to other locations, such as Dubai…

Due to the large number of variables, such as room sizes, locations, proximity and comfort features, the system is tuned to mix and match the features based on the perceived preferences of the coming hotel guest.

Does it know I want a lot of chargers, thin pillows, and lights that are easy to turn off at night?  Furthermore the shampoo bottle should be easy to read in the shower without glasses.  Maybe it knows now!  Here is the full story, via the excellent Samir Varma.

The future of the scientist in a world with advanced AI

AI will know almost all of the academic literature, and will be better at modeling and solving most of the quantitative problems.  It will be better at specifying the model and running through the actual statistical exercises.  Humans likely will oversee these functions, but most of that will consist of nodding, or trying out some different prompts.

The humans will gather the data.  They will do the lab work, or approach the companies (or the IRS?) to access new data.  They will be the ones who pledge confidentiality and negotiate terms of data access. (Though an LLM might write the contract.) They will know someone, somewhere, using a telescope to track a particular quasar.  They may (or may not) know that the AI’s suggestion to sample the blood of a different kind of Gila monster is worth pursuing.  They will decide whether we should be filming dolphins or whales, so that we may talk to them using LLMs, though they will ask the LLMs for cost estimates in each case.

At least in economics, this continues trends that were present before current high-quality AI.  The scarce input behind a quality paper is, more and more, access to some new and interesting data source.  More and more people can do the requisite follow-up technical work, though quality variations have by no means been eliminated.

“Science as an employment program for scientists” will fall all the more out of favor.  It remains to be seen how much that will disfavor serendipitous human discovery.

On any given day, on the quest for more data, a scientist will have to think quite creatively about what he or she should be doing.

Artificial Intelligence in the Knowledge Economy

The rise of Artificial Intelligence (AI) has the potential to fundamentally reshape the knowledge economy by solving problems at scale. This paper introduces a framework to study this transformation, incorporating AI into an economy where humans form hierarchical firms: Less knowledgeable individuals become “workers” solving routine problems, while more knowledgeable individuals become “solvers,” assisting workers with exceptional problems. We model AI as a technology that transforms computing power into “AI agents,” which can either operate autonomously (as co-workers or solvers/co-pilots) or non-autonomously (only as co-pilots). We show that basic autonomous AI displaces humans towards specialized problem solving, leading to smaller, less productive, and less decentralized firms. In contrast, advanced autonomous AI reallocates humans to routine work, resulting in larger, more productive, and more decentralized firms. While autonomous AI primarily benefits the most knowledgeable individuals, non-autonomous AI disproportionately benefits the least knowledgeable. However, autonomous AI achieves higher overall output. These findings reconcile seemingly contradictory empirical evidence and reveal key tradeoffs involved in regulating AI autonomy.

Important results, and largely in accord with my own intuitions.  That is from a new paper by Enrique Ide and Eduard Talamas.
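As a reader’s toy, not the authors’ model: the core role assignment in the abstract (a knowledge cutoff sorts people into workers and solvers, and problems a worker cannot handle escalate to a solver) can be sketched as follows. All names and numbers here are illustrative inventions, not taken from the paper.

```python
# Toy illustration (not the authors' model) of the hierarchical framework:
# individuals with knowledge above a cutoff become "solvers", the rest
# "workers"; a problem a worker cannot solve escalates to the solvers.
def assign_roles(knowledge, cutoff):
    workers = [k for k in knowledge if k < cutoff]
    solvers = [k for k in knowledge if k >= cutoff]
    return workers, solvers

def output(knowledge, problems, cutoff):
    """Count problems solved: a worker handles a problem whose difficulty is
    below her knowledge; otherwise it is solved only if some solver's
    knowledge covers it. Purely illustrative."""
    workers, solvers = assign_roles(knowledge, cutoff)
    solved = 0
    for i, p in enumerate(problems):
        w = workers[i % len(workers)] if workers else 0.0
        if p <= w or any(p <= s for s in solvers):
            solved += 1
    return solved
```

In this toy, adding cheap “AI agents” amounts to appending entries to one of the two lists; whether they land among the workers or the solvers is what drives the paper’s contrast between basic and advanced autonomous AI.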