Category: Web/Tech
The Acemoglu arguments against high-skilled immigration
Here is Daron Acemoglu’s Project Syndicate piece, mostly critical of high-skilled immigration.
Here is the first argument from Acemoglu:
…one would expect corporate America’s growing need for skilled STEM workers to translate into advocacy for, and investments in, STEM education. But an overreliance on the H-1B program may have broken this link and made American elites indifferent to the widely recognized failures of the US education system. Put differently, the problem may not be a cultural veneration of mediocrity, as Ramaswamy argued, but rather neglect on the part of business leaders, intellectual elites, and politicians.
o1 responds. Here is Acemoglu’s second argument:
Even as H-1B workers boost innovation, their presence may affect the direction innovation takes. My own work shows (theoretically and empirically) that when the supply of skilled labor increases, technology choices start favoring such workers. Over the last several decades, businesses have increasingly adopted technologies that favor high-skill workers and automate tasks previously performed by lower-skill workers. While this trend may have been driven by other factors, too, the availability of affordable high-skill workers for the tech industry plausibly contributed to it.
The third argument about brain drain has enough qualifications and admissions that it isn’t really a criticism. In any case my colleague Michael Clemens, among others, has shown that the brain drain argument applies mainly to very small countries. But if you wish, run it through AI yourself.
If all I knew were this “exchange,” I would conclude that o1 and o1 pro were better economists — much better — than one of our most recent Nobel Laureates, who is also the most-cited economist of his generation. Noah Smith is also critical.
Via Mike Doherty.
AI-driven hotel planned for Las Vegas
A new, AI-driven hotel is getting set to open in Las Vegas, highlighted by a major presence at CES.
Philippe Ziade, CEO and founder of Growth Holdings, developer of the hotel, detailed the concept as we sat at the CES Otonomus booth display to discuss the venture.
“This is the first truly AI-powered hotel,” said Ziade. “The whole floor is interconnected.”
…The entire focus at the hotel is capturing and leveraging data.
“We create a virtual copy of the guest,” said Ziade. “There is an onboarding before coming to the hotel. We capture information and use AI to scrape the internet and then we track behavior while on property.”
Each guest would have a virtual assistant, which would track and retain that guest’s preferences, which could then be used for subsequent hotel stays.
Ziade said the Las Vegas property is the prototype hotel before national and global expansion of the concept to other locations, such as Dubai…
Due to the large number of variables, such as room sizes, locations, proximity and comfort features, the system is tuned to mix and match the features based on the perceived preferences of the incoming hotel guest.
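As a rough illustration of how such mix-and-match assignment might work, here is a minimal sketch. The feature names, weights, and scoring rule are purely hypothetical, not Otonomus’s actual system:

```python
# Hypothetical sketch of preference-based room matching: score each
# available room against a guest profile and assign the best match.
# All feature names and weights below are illustrative assumptions.

def score_room(room, preferences):
    """Weighted count of guest preferences a room satisfies."""
    return sum(weight for feature, weight in preferences.items()
               if room.get(feature))

def best_room(rooms, preferences):
    """Pick the available room with the highest preference score."""
    return max(rooms, key=lambda r: score_room(r, preferences))

rooms = [
    {"id": 101, "quiet_floor": True, "extra_chargers": False, "thin_pillows": True},
    {"id": 202, "quiet_floor": True, "extra_chargers": True, "thin_pillows": True},
]
prefs = {"extra_chargers": 3, "thin_pillows": 2, "quiet_floor": 1}

print(best_room(rooms, prefs)["id"])  # room 202 satisfies all three preferences
```

A real system would presumably learn the weights from the “virtual copy” of the guest rather than hard-code them.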
Does it know I want a lot of chargers, thin pillows, and lights that are easy to turn off at night? Furthermore, the shampoo bottle should be easy to read in the shower without glasses. Maybe it knows now! Here is the full story, via the excellent Samir Varma.
The future of the scientist in a world with advanced AI
AI will know almost all of the academic literature, and will be better at modeling and solving most of the quantitative problems. It will be better at specifying the model and running through the actual statistical exercises. Humans likely will oversee these functions, but most of that will consist of nodding, or trying out some different prompts.
The humans will gather the data. They will do the lab work, or approach the companies (or the IRS?) to access new data. They will be the ones who pledge confidentiality and negotiate terms of data access. (Though an LLM might write the contract.) They will know someone, somewhere, using a telescope to track a particular quasar. They may (or may not) know that the AI’s suggestion to sample the blood of a different kind of gila monster is worth pursuing. They will decide whether we should be filming dolphins or whales, so that we may talk to them using LLMs, though they will ask the LLMs for cost estimates in each case.
At least in economics, this will continue trends that were present before current high-quality AI. The scarce input behind a quality paper is, more and more, access to some new and interesting data source. More and more people can do the requisite follow-up technical work, though quality variations have by no means been eliminated.
“Science as an employment program for scientists” will fall all the more out of favor. It remains to be seen how much that will disfavor serendipitous human discovery.
On any given day, on the quest for more data, a scientist will have to think quite creatively about what he or she should be doing.
Dean Ball speaks
o1 and o1-pro are a cut above other models at economic history writing. they can combine knowledge of historical facts with economic analysis in a *much* more creative and cogent way than earlier models. it’s not just math and code, folks.
Here is the link.
Artificial Intelligence in the Knowledge Economy
The rise of Artificial Intelligence (AI) has the potential to fundamentally reshape the knowledge economy by solving problems at scale. This paper introduces a framework to study this transformation, incorporating AI into an economy where humans form hierarchical firms: Less knowledgeable individuals become “workers” solving routine problems, while more knowledgeable individuals become “solvers,” assisting workers with exceptional problems. We model AI as a technology that transforms computing power into “AI agents,” which can either operate autonomously (as co-workers or solvers/co-pilots) or non-autonomously (only as co-pilots). We show that basic autonomous AI displaces humans towards specialized problem solving, leading to smaller, less productive, and less decentralized firms. In contrast, advanced autonomous AI reallocates humans to routine work, resulting in larger, more productive, and more decentralized firms. While autonomous AI primarily benefits the most knowledgeable individuals, non-autonomous AI disproportionately benefits the least knowledgeable. However, autonomous AI achieves higher overall output. These findings reconcile seemingly contradictory empirical evidence and reveal key tradeoffs involved in regulating AI autonomy.
Important results, and largely in accord with my own intuitions. That is from a new paper by Enrique Ide and Eduard Talamas.
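To fix ideas, here is a minimal numerical sketch of the worker/solver hierarchy the abstract describes, in the spirit of Garicano-style knowledge hierarchies. The functional form, numbers, and fixed helping cost are my own illustrative assumptions, not the paper’s model:

```python
# Toy sketch (illustrative assumptions, not the paper's formal model):
# problems have difficulty uniform on [0, 1]; a worker solves any problem
# below her knowledge level and passes the rest up to a solver, who can
# handle problems up to his own knowledge level at a fixed time cost.

def firm_output(worker_knowledge, solver_knowledge, helping_cost=0.1):
    """Fraction of problems solved by a one-worker, one-solver firm."""
    routine = worker_knowledge                 # share solved by the worker
    passed_up = 1.0 - worker_knowledge         # share sent to the solver
    exceptional = max(0.0, solver_knowledge - worker_knowledge)
    return routine + exceptional - helping_cost * passed_up

# A co-pilot AI that raises worker knowledge shrinks what gets passed up,
# economizing on the scarce solver's time:
print(firm_output(0.4, 0.9))  # baseline worker
print(firm_output(0.6, 0.9))  # same solver, AI-assisted worker
```

Even this toy version shows the mechanism behind the abstract’s reallocation results: what AI does to worker-level knowledge changes how much problem-solving flows up the hierarchy.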
The new o3 model from OpenAI
Some more results. And this:
Yupsie-dupsie, delivery of this:
Happy holidays people, hope you are enjoying the presents!
Scott Alexander on chips (from the comments)
Why you should be talking with gpt about philosophy
I’ve talked with Gpt (as I like to call it) about Putnam and Quine on conceptual schemes. I’ve talked with it about Zeno’s paradoxes. I’ve talked with it about behaviourism, causality, skepticism, supervenience, knowledge, humour, catastrophic moral horror, the container theory of time, and the relation between different conceptions of modes and tropes. I tried to get it to persuade me to become an aesthetic expressivist. I got it to pretend to be P.F. Strawson answering my objections to Freedom and Resentment. I had a long chat with it about the distinction between the good and the right.
…And my conclusion is that it’s now really good at philosophy…Gpt could easily get a PhD on any philosophical topic. More than that, I’ve had many philosophical discussions with professional philosophers that were much less philosophical than my recent chats with Gpt.
Here is the full Rebecca Lowe Substack on the topic. There are also instructions for how to do this well, namely talk with Gpt about philosophical issues, including ethics:
In many ways, the best conversation I’ve had with Gpt, so far, involved Gpt arguing against itself and its conception of me, as both Nozick1 (the Robert Nozick who sadly died in 2002) and Nozick2 (the imaginary Robert Nozick who is still alive today, and who according to Gpt has developed into a hardcore democrat), on the topic of catastrophic moral horror.
And as many like to say, this is the worst it ever will be…
Thomas Storrs on elastic data supply (from my email)
Regarding your post yesterday, “Are LLMs running out of data?”: the National Archives holds 13.5 billion pieces of paper, of which only 240 million are digitized. Some of this is classified or otherwise restricted, but surely we can do better than less than 2%.
NARA says they aim to increase this to 500 million by the end of 2026. Even with an overly generous $1/record estimate, it makes sense to me for someone to digitize much of the remaining 13 billion, though the incentives are tricky for private actors. Perhaps a consortium of AI companies could make it work. It’s a pure public good, so I would be happy with a federal appropriation.
Admittedly, I have a parochial interest in seeing specific parts digitized, namely those on mid-20th-century federal housing policy. Nonetheless, the supply of data for AI is obviously elastic, and there is some delicious low-hanging fruit available.
The National Archives are probably the biggest untapped source of extant data, and there are hundreds of billions more pages around the world.
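A quick back-of-the-envelope check of the email’s numbers:

```python
# Checking the digitization arithmetic from the email above.
total_pages = 13_500_000_000   # NARA holdings, in pieces of paper
digitized = 240_000_000        # currently digitized
cost_per_record = 1.0          # the email's "overly generous" $1/record

share_done = digitized / total_pages
remaining_cost = (total_pages - digitized) * cost_per_record

print(f"{share_done:.1%} digitized")              # 1.8%, i.e. "less than 2%"
print(f"${remaining_cost / 1e9:.1f}B to finish")  # $13.3B at $1/record
```

So even at the generous per-record estimate, full digitization is on the order of a single year of a large AI lab’s capital spending.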
*The Nvidia Way*
I quite liked this new book by Tae Kim, offering a 245 pp. history of the company. Here is a useful review from the WSJ.
I can note that recently, a bit before Thanksgiving, I had the chance to visit Nvidia headquarters in Santa Clara, receive a tour, see some demos, and (a few days earlier) chat with Jensen Huang. I am pleased to report very positive impressions all around. My intuitive sense also jibes with the portrait painted in this book.
As for my impressions of Nvidia, I was struck by the prevalence of large, attractive plant displays in the headquarters, and also how much care they take to ensure quietness on the main corporate floors and spaces (I notice funny things about companies). The geometric shapes and designs, for whatever reason, reminded me of the early 1970s movie Silent Running. If I visit an AI company, in the hallways many people will recognize me. At Nvidia nobody did, except those who invited me in. That too is an interesting contrast.
I am honored to have seen their lovely facilities.
Kevin Bryan and Joshua Gans have a new AI educational project
Just wanted to ping you about a tool Joshua Gans and I launched publicly today after a year of trials at universities all over the world (and just a stupid amount of work!) which I think is up your alley.
Idea is simple: AI should be 1) personalized to the student, 2) personalized to the professor’s content, and 3) structured to improve rather than degrade learning. In a perfect world, we want every student to have individual-level assistance, at any time, in any language, in the format they want (a chatbot TA, a question bank, a sample test grader, etc.). We want all assignments to be adaptive “mastery learning”. We want the professor to have insight on a weekly basis into how students are doing, and even into topics they may have taught in a somewhat confusing way. And we want to do this basically for free.
Right now, we have either raw GPT or Claude accomplishing 1 but not 2 or 3 (and some evidence it degrades learning for some students), or we have classes big enough to build custom AI-driven classes (like Khan Academy for basic algebra). For the thousands of classes where the professor’s teaching is idiosyncratic, the latter set of tools is basically “give the students a random textbook off the library shelf on the topic and have them study it” – not at all what I want my students to do!
We set up a team including proper UX designers and backend devs and built this guy here: https://www.alldayta.com/. It’s drag-and-drop for your course audio/video, slides, handouts, etc., preprocesses everything in a much deeper way than raw OCR or LLMs, then preps a set of tools. Right now, there is a student-facing “virtual TA” and a weekly autosummary of where students are having trouble, with the rest rolling out once we’re convinced the beta version is high enough quality. In my classes, I’ve had up to 10,000 interactions in a term with this, and we ran trials at [redacted]. And we can do it at like a buck or two a student across a term, spun up in like 30 minutes of professor time for smaller courses.
There’s a free trial anyone can just sign up for; if your colleagues or the MR crowd would be interested, definitely send it along. I put a Twitter thread up about it as well with some examples of where we are going and why we think this is where higher ed is headed: https://x.com/Afinetheorem/status/1867632900956365307
Midnight regulations on chip access
Let us hope the Biden administration does not do too much damage on its way out the door (WSJ):
The U.S. is preparing rules that would restrict the sale of advanced artificial-intelligence chips in certain parts of the world in an attempt to limit China’s ability to access them, according to people familiar with the matter.
The rules are aimed at China, but they threaten to create conflict between the U.S. and nations that may not want their purchases of chips micromanaged from Washington.
…The purchasing caps primarily apply to regions such as Southeast Asia and the Middle East, the people said. The rules cover cutting-edge processors known as GPUs, or graphics processing units, which are used to train and run large-scale AI models.
Should we not want to bring the UAE more firmly into the American orbit? Is there not a decent chance they will have the energy supply for AI that we are unwilling to build domestically? Might not these regulations, over time, encourage foreign nations to become part of the Chinese AI network? More generally, why should an outgoing administration be making what are potentially reversible foreign policy decisions for the next regime?
Ilya’s talk
Twenty-four minutes, sixteen minutes for the core talk, self-recommending.
A new paper on the economics of AI alignment
A principal wants to deploy an artificial intelligence (AI) system to perform some task. But the AI may be misaligned and pursue a conflicting objective. The principal cannot restrict its options or deliver punishments. Instead, the principal can (i) simulate the task in a testing environment and (ii) impose imperfect recall on the AI, obscuring whether the task being performed is real or part of a test. By committing to a testing mechanism, the principal can screen the misaligned AI during testing and discipline its behaviour in deployment. Increasing the number of tests allows the principal to screen or discipline arbitrarily well. The screening effect is preserved even if the principal cannot commit or if the agent observes information partially revealing the nature of the task. Without commitment, imperfect recall is necessary for testing to be helpful.
That is by Eric Olav Chen, Alexis Ghersengorin, and Sami Petersen. And here is a tweet storm on the paper. I am very glad to see the idea of an optimal principal-agent contract brought more closely into AI alignment discussions. As you can see, it tends to make successful alignment more likely.
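For intuition, here is a toy version of the screening logic, a simplification of my own and not the authors’ formal model. With imperfect recall, the AI cannot distinguish the real deployment from the simulated tests, so defecting on any given task risks being caught in testing:

```python
# Toy screening sketch (my own simplification, not the paper's model):
# the principal runs n indistinguishable test tasks plus one real
# deployment. A misaligned AI defects with probability q on each task;
# defecting during a test means it is screened out and never deployed.

def undetected_defection_prob(q, n_tests):
    """P(AI passes all n tests, then defects in the real deployment)."""
    return (1 - q) ** n_tests * q

# Even the AI's best choice of q yields shrinking gains as tests grow:
for n in (1, 4, 16):
    best = max(undetected_defection_prob(q / 100, n) for q in range(101))
    print(n, round(best, 3))
```

This is the sense in which “increasing the number of tests allows the principal to screen or discipline arbitrarily well”: the maximized probability of an undetected defection falls toward zero in the number of tests.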
Austin Vernon on drones (from my email)
The offensive vs. defensive framing seems wrong, at least temporarily. It should be motivated vs. unmotivated, with drones favoring the motivated.
A competent drone capability requires building a supply chain, setting up a small manufacturing/assembly operation, and training skilled operators. They need to manage frequencies and adjust to jamming. Tight integration of these functions is a necessity. That favors highly motivated groups with broad popularity (recruiting skilled talent!) even if they are nominally weak.
Conversely, it can be challenging for overly corrupt or complacent organizations to counter. They are also more likely to fracture and lose cohesion when under attack.
We’ve seen HTS, Burmese rebels, and Azerbaijan all have a lot of success with drones. Ukraine went from hopelessly behind in drone tech to leading Russia in innovation in many niches.
It seems reasonable that the barriers to entry for a motivated drone “startup” will go up. The US military has effective, expensive interceptors like Coyote Block II to counter small attacks in locations like Syria. Fighting larger entities requires pretty absurd scaling to match enemy numbers and the low per-flight success rate – Ukraine claims they might produce millions of drones this year. Hamas had initial success attacking Israel on Oct. 7 but didn’t have the magazine depth to defend themselves.
AI targeting, the necessity of specialized components to defeat electronic warfare, and cheaper drone interceptors are all factors that could upset this balance. Entities that have the scale to deploy an AI stack, true factories, and specialized components should gain the advantage if the rate of change slows.
From this post on chip export bans:
Judging from the IP address and his knowledge of our conversation, that seems to be the real Scott. Plus it sounds like Scott (apologies if it is not!).
I would say this: since I chatted with Scott, I have taken a very instructive and positive trip to the United Arab Emirates. I am very impressed by their plans to put serious energy capacity behind AI projects. If you think about it, they have a major presence in three significant energy sources: fossil fuels, solar (more to come), and nuclear (much more to come). They also are not so encumbered by NIMBY constraints, whereas some of the American nuclear efforts have in the meantime met with local and regional stumbling blocks. There really is plenty of empty desert there.
So I think America has a great chance to work with UAE on these issues. I do understand there are geopolitical and other risks to such a collaboration, but I think the risks from no collaboration are greater.
This short tale is a good example of the benefits of travel.
And if you can get to Abu Dhabi, I urge you to go. In addition to what I learned about AI, I very much enjoyed their branch of the Louvre, with its wonderful Greek statue and Kandinsky, among other works, not to mention the building itself. The Abrahamic Family House, on a plaza, has a lovely mix of mosque, church, and synagogue, the latter of course being politically brave and much needed in the Middle East. Here is Rasheed Griffith on Abu Dhabi.