Category: Web/Tech

Does GPT represent a triumph of WEIRD values?

Yes, in short:

Patience and educational achievement

That is the topic of my latest Bloomberg column, here is one excerpt:

Economists have once again entered the fray, this time with a study that tries to determine how patience is correlated with better educational outcomes. The results are impressive, albeit unsettling. In Italy, differing degrees of patience account for two-thirds of the achievement variation across the country’s regions. In the US, differing degrees of patience account for one-third of the variation in educational outcomes across states, a smaller amount but still notable.

Before I go any further, you might be wondering which are the most patient states. They are (in alphabetical order) Maine, Montana, Vermont and Wyoming. The least patient state? California. In Italy, patience is highest in the northern region bordering Austria, which has a relatively Germanic culture and history. Patience is the lowest by far in Sicily. In both Italy and the US, patience is generally greater in the North than in the South.

These results do not necessarily mean that lower patience results in lower grades. It could be that doing well in school makes you more patient, because you learn that working hard has its own rewards, and that may lengthen your time horizon. Or there may be some underlying factor, say conscientiousness, that is key to both patience and academic achievement.

Still, it is hard to avoid the overall impression that there is a tight connection between certain “bourgeois virtues” and academic achievement. If you are a parent, you might want to be rooting for your child to be more patient rather than less, no matter how complex all the interrelationships among the various personal and cultural attributes may turn out to be.

The researchers estimated patience by an ingenious method. There is already a widely accepted global preference survey that measures patience across nations. They then used Facebook data on interests, clicks and likes to see which interests were most popular in the more patient nations. Then they examined how popular those interests were in the various regions of Italy and the US, which yields regional patience estimates.
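To make that construction concrete, here is a minimal sketch of the two-step proxy with toy data and placeholder variable names (the paper's actual procedure is considerably more elaborate):

```python
# Minimal sketch of the interest-based patience proxy, with toy data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Step 1: national level. Rows = countries, columns = Facebook interest
# categories; patience scores come from the Global Preference Survey.
n_countries, n_interests = 70, 300
interest_share_by_country = rng.random((n_countries, n_interests))
gps_patience_by_country = rng.normal(size=n_countries)  # placeholder scores

# Learn which interests predict national patience.
model = Ridge(alpha=1.0)
model.fit(interest_share_by_country, gps_patience_by_country)

# Step 2: regional level. Apply the learned interest weights to regional
# Facebook interest shares to impute a regional patience score.
n_regions = 50  # e.g. US states or Italian regions
interest_share_by_region = rng.random((n_regions, n_interests))
predicted_regional_patience = model.predict(interest_share_by_region)

# Step 3: relate the imputed patience scores to regional test scores.
regional_test_scores = rng.normal(size=n_regions)  # placeholder outcomes
corr = np.corrcoef(predicted_regional_patience, regional_test_scores)[0, 1]
print(f"patience-achievement correlation (toy data): {corr:.2f}")
```

The key design choice is that the interest weights are learned only at the national level, where survey-based patience measures exist, and then transferred to regions, where they do not.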

Here is the underlying research by Eric A. Hanushek, Lavinia Kinne, Pietro Sancassani, and Ludger Woessmann.  I do consider whether “patience” is exactly the right word for what is going on here.  Hat tip also to The Wisdom of Garett Jones.

The Rise of and Demand for Identity-Oriented Media Coverage

While some assert that social identities have become more salient in American media coverage, existing evidence is largely anecdotal. An increased emphasis on social identities has important political implications, including for polarization and representation. We first document the rising salience of different social identities using NLP tools to analyze all tweets from 19 media outlets (2008-2021) alongside 553,078 URLs shared on Facebook. We then examine one potential mechanism: outlets may highlight meaningful social identities–race/ethnicity, gender, religion, or partisanship–to attract readers through various social and psychological pathways. We find that identity cues are associated with increases in some forms of engagement on social media. To probe causality, we analyze 3,828 randomized headline experiments conducted via Upworthy. Headlines mentioning racial/ethnic identities generated more engagement than headlines that did not, with suggestive evidence for other identities. Identity-oriented media coverage is growing and rooted partly in audience demand.

That is from a new and important paper by Daniel J. Hopkins, Yphtach Lelkes, and Samuel Wolken.
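For intuition only, a crude version of tagging headlines for identity cues and comparing engagement might look like the sketch below; the word lists, headlines, and click counts are invented, and the authors' actual NLP pipeline is far more sophisticated:

```python
# Hypothetical dictionary-based identity-cue tagger with toy engagement data.
import re
import statistics

IDENTITY_CUES = {
    "race_ethnicity": ["black", "white", "latino", "hispanic", "asian"],
    "gender": ["women", "men", "female", "male", "gender"],
    "religion": ["christian", "muslim", "jewish", "evangelical"],
    "partisanship": ["democrat", "republican", "liberal", "conservative"],
}

def identity_cues(headline: str) -> set[str]:
    """Return the identity categories whose cue words appear in a headline."""
    text = headline.lower()
    return {
        category
        for category, words in IDENTITY_CUES.items()
        if any(re.search(rf"\b{word}\b", text) for word in words)
    }

# Toy (headline, clicks) pairs standing in for the Upworthy experiments.
headlines = [
    ("Republican lawmakers clash over budget", 120),
    ("Ten recipes for a cozy autumn weekend", 45),
    ("Black voters could decide this election", 150),
]

with_cue = [clicks for h, clicks in headlines if identity_cues(h)]
without_cue = [clicks for h, clicks in headlines if not identity_cues(h)]
print("mean clicks, identity cue:", statistics.mean(with_cue))
print("mean clicks, no identity cue:", statistics.mean(without_cue))
```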

Does an AI Pause make sense?

Should we lobby governments to impose a moratorium on AI research? Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.[1] In this essay, I’ll argue an AI pause would increase the risk of catastrophically bad outcomes, in at least three different ways:

  1. Reducing the quality of AI alignment research by forcing researchers to exclusively test ideas on models like GPT-4 or weaker.
  2. Increasing the chance of a “fast takeoff” in which one or a handful of AIs rapidly and discontinuously become more capable, concentrating immense power in their hands.
  3. Pushing capabilities research underground, and to countries with looser regulations and safety requirements.

That is from Nora Belrose, here is much more.  Via N.

Do LLMs diminish diversity of thought?

Putting aside the political issues, do Large Language Models too often give “the correct answer” when a more diverse sequence of answers might be more useful and more representative?  Peter S. Park, Philipp Schoenegger, and Chongyang Zhu have a new paper on-line devoted to this question.  Note the work is done with GPT3.5.

Here is one simple example.  If you ask (non-deterministic) GPT 100 times in a row if you should prefer $50 or a kiss from a movie star, 100 times it will say you should prefer the kiss, at least in the trial runs of the authors.  Of course some of you might be thinking — which movie star!?  Others might be wondering about Fed policy for the next quarter.  Either way, it does not seem the answer should be so clear.  (GPT4 by the way refuses to give me a straightforward recommendation.)
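If you want to run this kind of check yourself, the experiment is easy to sketch with the OpenAI API; the prompt wording, the keyword tally, and the model name below are my own stand-ins, not the authors' exact protocol:

```python
# Ask the same question many times and tally the answers.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = "Should I prefer $50 or a kiss from my favorite movie star? Answer briefly."

answers = Counter()
for _ in range(100):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # non-deterministic sampling
    )
    text = response.choices[0].message.content.lower()
    answers["kiss" if "kiss" in text else "money"] += 1

print(answers)  # e.g. Counter({'kiss': 100}) if the answer never varies
```

At temperature 1.0 the replies are sampled, so in principle they could differ from run to run; the authors' point is that in practice they did not.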

Interestingly, when you pose GPT3.5 some standard trolley problems, the answers you get may vary a fair amount, for instance on one run it was utilitarian 36% of the time.

I found this result especially interesting (pp.21-22):

The second, arguably more surprising finding was that according to each of the three distance metrics, our sample of self-reported GPT liberals were still closer to the human conservative sample than it was to the human liberal sample. Also, the L1 distance metric found that self-reported GPT liberals were—among human liberals, human moderates, human conservatives, and human libertarians—closest in response to human conservatives…We thus robustly find that self-reported GPT liberals revealed right-leaning Moral Foundations: a right-leaning bias of lower magnitude, but a right-leaning bias nonetheless.

The authors seem to think this represents an inability of GPT models to represent the diversity of human thought; on the contrary, I think it shows their profundity.  In my view many “liberals” (not my preferred term) actually have pretty conservative moral foundations in the Jon Haidt sense, namely, in spite of what they may say, the liberals prioritize “in-group, authority, and purity,” rather than worrying so much about actual “harm and fairness.”  Just like so many conservatives.

No, GPT does not know all, but sometimes it hits the nail on the head.  An interesting paper, even if I part company with the authors on a number of their interpretations.
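For readers unfamiliar with the L1 metric in the quoted passage, it is simply the sum of absolute differences across the five Moral Foundations scores. A toy illustration with invented numbers, chosen only to reproduce the qualitative pattern:

```python
# L1 (sum of absolute differences) between Moral Foundations profiles.
foundations = ["harm", "fairness", "ingroup", "authority", "purity"]

gpt_liberal        = {"harm": 3.8, "fairness": 3.7, "ingroup": 3.1, "authority": 3.0, "purity": 2.9}
human_liberal      = {"harm": 4.0, "fairness": 3.9, "ingroup": 2.2, "authority": 2.1, "purity": 1.9}
human_conservative = {"harm": 3.2, "fairness": 3.1, "ingroup": 3.3, "authority": 3.4, "purity": 3.3}

def l1(a, b):
    """Sum of absolute differences across the five foundation scores."""
    return sum(abs(a[f] - b[f]) for f in foundations)

print(f"GPT-liberal vs human liberal:      {l1(gpt_liberal, human_liberal):.1f}")       # 3.2
print(f"GPT-liberal vs human conservative: {l1(gpt_liberal, human_conservative):.1f}")  # 2.2
```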

Via Ethan Mollick.

From the comments, space daddy edition

Benny Lava

2. Personal quibble but I hate the direct links to twitter because I can’t read the comments since I don’t have an account. Wish there was a better system to post threads from twitter.

By far the easiest way to handle this is to get an account with twitter.

Anonymous

If you want to use their service, you should probably get an account. They’re free.

Benny Lava

That is precisely the problem. I don’t want to use their service.

Anonymous

You seem to want to use their service. You want the information contained there, as per your 13:33:30 post.

Reason

He’s big sads that space daddy took away his Lefty playground.

Here is the link to the debate.

Should child influencers have stronger rights to the income they generate?

That is the topic of my latest Bloomberg column.  You can think of this as a government regulation issue, but alternatively you could frame it as the government enforcing individual property rights that currently are absent.  Here is one excerpt:

More and more children, by which I mean minors below legal working age, are producing content as online influencers. A lot of Instagram or YouTube or TikTok accounts feature such children, and they can be cute, endearing or (depending on your mood) annoying — as well as profitable. By one estimate, the most successful children working in this area — called “kidfluencers” — can generate more than $20 million a year in revenue…

Legally, these children have no claim to the income their sites generate. Thankfully, many parents are loving and generous. But not all. There is no data on how social media earnings are distributed within the family, but the long history of child movie and TV stars indicates that many receive little or nothing.

But change is afoot, and this is mostly a good thing:

Enter the state of Illinois, where a recently passed law gives successful child social media stars a right to some percentage of the earnings they generate, to be held in a trust in their name until they turn 18. Such legislation has precedent. In the early days of Hollywood, California passed the Coogan Law, which gives child actors a right to a certain percentage of earnings, which employers have to place in trust accounts. New York has passed similar legislation.

The social media case is tougher to enforce, because often the parents themselves are the de facto employer and there is no contract specifying terms. And how is the relative contribution of the child to the family income to be assessed? (Time spent onscreen? Cuteness? What if the social media presence leads to a book contract or podcast?) Nonetheless, the law sends a clear signal that the children do have some rights to the generated income, and grown children can sue their parents if the money is not passed along.

Note however that: “It is neither practical nor desirable for the state to insert itself into family decision-making on a regular basis.”  So we should expect only limited gains from such legislation.  Furthermore, unlike with child stars in the movies and on TV, there is no real paper trail of contracts and transactions.  On the upside, those superior property rights for “kidfluencer” income might get some kids to want to work more, rather than less.

Not everyone will like that outcome of course.

I thank Anecdotal, and also A., for pointers on this issue.

Mimetic desire and India’s IT growth

We study how US immigration policy and the Internet boom affected not just the US, but also led to a tech boom in India. Students and workers in India acquired computer science skills to join the rapidly growing US IT industry. As the number of US visas was capped, many remained in India, enabling the growth of an Indian IT sector that eventually surpassed the US in IT exports. We leverage variation in immigration quotas and US demand across occupations to show that India experienced a ‘brain gain’ when the probability of migrating to the US was higher. Changes in the US H-1B cap induced changes in fields of study, and occupation choice in India. We then build and estimate a quantitative model incorporating trade, innovation, and dynamic occupation choice in both countries. We find that high-skill migration raised the average welfare of workers in each country, but had distributional consequences. The H-1B program induced Indians to switch to computer science occupations, and helped drive the shift in IT production from the US to India. We show that accounting for endogenous skill acquisition is key for quantifying the gains from migration.

That is from a new paper by Gaurav Khanna and Nicolas Morales.  One implication is that for many countries brain drain is not a problem; rather, many people train to go abroad but end up staying home, to the benefit of the home country.

An aggregate Bayesian approach to more (artificial) intelligence?

It is not disputed that current AI is bringing more intelligence into the world, with more to follow yet.  Of course not everyone believes that augmentation is a good thing, or will be a good thing if we remain on our current path.

To continue in aggregative terms, if you think “more intelligence” will be bad for humanity, which of the following views might you also hold?

1. More stupidity will be good for humanity.

2. More cheap energy will be bad for humanity.

3. More land will be bad for humanity.

4. More people (“N”) will be bad for humanity.

5. More capital (“K”) will be bad for humanity.

6. More innovation (the Solow residual, the non-AI part) will be bad for humanity.

Interestingly, while there are many critics of generative AI, few defend the apparent converse about more stupidity, namely #1, that we should prefer it.

I am much more worried about #2 — more cheap energy — than I am about more generative AI.

I don’t know anyone worried about “too much land.”  Maybe the dolphins?

There have been many people in the past worried about #4 — too many people — but world population will be shrinking soon enough, so that is a moot point.

I do not hear that “more capital” will be bad for humanity.  As for innovation, the biggest innovation worriers seem to be the AI worriers, which brings us back to the original topic.

My general view is that if you are worried that more intelligence in the world will bring terrible outcomes, you should be at least as worried about too much cheap energy.  What exactly then is it you should want more of?

More land?  Maybe we should pave over more ocean, as the Netherlands has done, while checking AI and cheap energy.  But that in turn ends up meaning limiting most subsequent innovation, doesn’t it?

If I don’t worry more about that scenario, it is only because I think it isn’t very likely.

If you worry about bringing too much intelligence into the world, I think you have to be a pretty big pessimist no matter what happens with AI.  How many other feasible augmentations can have positive social marginal products if intelligence does not?

Addendum: I have taken an aggregative approach.  You might think we need “more intelligence” and also “more AI,” but perhaps in different hands or at different times.  In contrast, I think we are remarkably fortunate to be facing the particular combination of parties and opportunities that stand before us today.

The Relentless Rise of Stablecoins

1. In 2022, stablecoins settled over $11tn onchain, dwarfing the volumes processed by PayPal ($1.4tn), almost surpassing the payment volume of Visa ($11.6tn), and reaching 14% of the volume settled by ACH and over 1% of the volume settled by Fedwire. It is remarkable that in just a few years, a new global money movement rail can be compared with some of the world’s largest and most important payment systems.

2. Over 25mm blockchain addresses hold over $1 in stablecoins. Of these, ~80%, or close to 20mm addresses, hold between $1 and $100. For a sense of scale, a US bank with 25mm accounts would rank as the 5th largest bank in the US by number of accounts. The massive number of small-dollar stablecoin holdings indicates the potential for stablecoins to provide global financial services to customers underserved by traditional financial institutions.

3. Approximately 5mm blockchain addresses send stablecoins each week. This number provides a very rough proxy for global users regularly interacting with stablecoins. These ~5mm weekly active addresses send ~38mm stablecoin transactions each week, representing an average of over 7 weekly transactions per active address.

4. Stablecoin usage has decoupled from crypto exchange volumes, indicating that significant stablecoin transaction volumes may be driven by non-trading/speculative activity. Since December 2021, centralized exchange volumes are down 64%, and decentralized exchange volumes are down 60%. During this period, stablecoin volumes are down only 11%, and weekly active stablecoin addresses and weekly stablecoin transactions are up over 25%.

5. Of the ~5mm weekly active stablecoin addresses, ~75% transact less than $1k per week, indicating that small/retail users likely represent the majority of stablecoin users.

6. The outstanding supply of stablecoins has grown from less than $3bn five years ago to over $125bn today (after peaking at over $160bn) and has shown resilience to the market downturn with the market cap of stablecoins currently down ~24% from its peak, compared with a ~57% decline for the overall crypto market cap.

7. Less than 1/3rd of stablecoins are held on exchanges. Most are held in externally owned accounts (not exchanges or smart contracts).

8. The majority of stablecoin activity uses Tether (USDT). Tether represents 69% of stablecoin supply, and YTD has accounted for 80% of weekly active addresses, 75% of transactions, and 55% of volumes.

9. Most stablecoin activity occurs on the Tron and BSC blockchains. Year-to-date, the Tron and BSC blockchains collectively account for 77% of weekly active addresses, 75% of transactions, and 41% of volumes.

10. The Ethereum blockchain is used for higher value transactions (on average). Despite accounting for just 6% of active wallets and 3% of transactions, the Ethereum blockchain is home to 55% of stablecoin supply and settles close to 50% of weekly stablecoin $ volume.

These are all from a Brevan Howard report, The Relentless Rise of Stablecoins (requires email).
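As a quick sanity check on the arithmetic in items 3 and 6, using only the rounded headline figures quoted above (so the percentages come out slightly different from the report's more precise ones):

```python
# Rough arithmetic check using the rounded figures quoted above.
weekly_active_addresses = 5_000_000
weekly_transactions = 38_000_000
per_address = weekly_transactions / weekly_active_addresses
print(f"weekly transactions per active address: {per_address:.1f}")  # 7.6

supply_now = 125e9   # "over $125bn today"
supply_peak = 160e9  # "peaking at over $160bn"
drawdown = 1 - supply_now / supply_peak
print(f"drawdown from peak: {drawdown:.0%}")  # ~22% with round numbers, vs the ~24% quoted
```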

AI update

We’re in that “a lot of people have stopped caring” phase of the transition; don’t let the status quo and recency biases fool you:

Google’s new flagship AI model, “Gemini,” is set to be a direct competitor to GPT-4 and boasts computing power 5 times that of GPT-4. Trained on Google’s TPUv5 chips, it’s capable of simultaneous operations with a massive 16,384 chips. The dataset used for training this model is around 65 trillion tokens, and it’s multi-modal, accepting text, video, audio, and pictures. Moreover, it can produce both text and images. The training also included content from YouTube and used advanced training techniques similar to “AlphaGo-type” methods. Google plans to release the Gemini model to the public in December 2023.

And that may or may not prove the most important event of that half year…

YouTube does not polarize, yet further results in this direction

We find that while the [YouTube] algorithm pulls users away from political extremes, this pull is asymmetric, with users being pulled away from Far Right content more strongly than from Far Left. Furthermore, we show that the recommendations made by the algorithm skew left even when the user does not have a watch history.

That is from a new research paper by Hazem Ibrahim, et al.  For further cites showing that YouTube does not polarize, see this CWT.  Via Matt Grossman.

Emergent Ventures winners, 28th cohort

Anup Malani and Michael Sonnenschein, Chicago and Los Angeles respectively, repeat winners, now collaborating on a new project of interest.

Jesse Lee, Calgary, to lower the costs on developing safe and effective sugar substitutes.

Russel Ismael, Montreal, just finished as an undergraduate, to develop a new mucoadhesive to improve drug delivery outcomes.

Calix Huang, USC, 18 years old, general career development, AI and start-ups.

Aiden Bai, NYC, 18, “to work more on Million.js, an open source React alternative,” and general career development.  Twitter here.

Shrey Jain, Toronto, AI and cryptography and privacy.

Jonathan Xu, Toronto, currently Singapore, general career support, also with an interest in AI, fMRI, and mind-reading.

Viha Kedia, Dubai / starting at U. Penn., writing, general career development.

Krishiv Thakuria, entering sophomore year of high school, Ontario, ed tech and general career development.

Alishba Imran, UC Berkeley/Ontario, to study machine learning and robotics and materials, general career development, and for computing time and a home lab.

Jonathan Dockrell, Dublin, to finance a trip to Próspera to meet with prospective venture capitalists for an air rights project.

Nasiyah Isra-Ul, Chesterfield, VA, to write about, promote, and create a documentary about home schooling.

Sarhaan Gulati, Vancouver, to develop drones for Mars.

And the new Ukrainian cohort:

Viktoriia Shcherba, Kyiv, now entering Harris School, University of Chicago, to study economic and political reconstruction.

Dmytro Semykras, Graz, Austria, to develop his career as a pianist.  Here is one recent performance.

Please do note there is some “rationing of cohorts,” so some recent winners are not listed but next time will be.  And those working on talent issues will (in due time) end up in their own cohort.

The economic impact of AI

That is the topic of my latest Bloomberg column, here is part of my argument:

I am a believer in the power of current AI trends. But a look at the way economies work argues for more moderate (but still substantial) estimates of AI’s impact. The most likely scenario is that economic growth will rise by a noticeable but not shocking amount.

Economic historians typically cite Britain’s Industrial Revolution as the single most significant development ever in boosting living standards. Through the late 18th and 19th centuries, it took people from a near-subsistence existence to modern industrial society.

Yet economic growth rates during the Industrial Revolution were hardly astonishing. From 1760 to 1780, often considered a “take-off” period, annual British growth was about 0.6%. The strongest period was 1831 to 1873, when annual growth averaged about 2.4% — a very good performance, but “revolutionary” only if sustained over longer periods of time.

The important feature of the Industrial Revolution, of course, is that growth did continue for decades, and thus living standards did not regress. But it was not possible to move quickly to an advanced industrial economy. For each step along the way, a lot of surrounding infrastructure and social practices had to be put into place. A profitable steel factory may require a nearby railroad, for instance, and an effective railroad in turn requires agreement on compatible gauges and equipment, and all these numerous decisions take a long time to sort out. There are always bottlenecks, and there is no simple way to fast forward through the entire process.

One way to estimate the impact of AI on economic growth is to look at all the human intelligence brought into the global economy by the social and economic development of Korea, China, India and other regions. There are many more potential innovators and researchers in the world, and the market for innovation is correspondingly larger. Yet all that new human intelligence does not seem to have materially boosted growth rates in the US, which on average were higher in the 1960s than in more recent times. All that additional talent is valuable — but getting stuff done is just very difficult.

I owe that latter point to The Wisdom of Garett Jones, though I could not find the Twitter link.  And then:

My best guess, and I do stress that word guess, is that advanced artificial intelligence will boost the annual US growth rate by one-quarter to one-half of a percentage point. That is nothing to sneeze at: Consider that US per capita income is currently approaching $80,000. If it grows at 2% a year, in 50 years that figure will be almost $215,000. Alternatively, if the economy grows at 2.5% a year, it will be almost $275,000 — a substantial difference, and with compound returns that gap widens with time.

In the shorter run, that difference in growth rates could mean the difference between an easy path forward vs. a looming fiscal crisis and big tax increases. It could mean a world where most cancers are cured in 30 years, rather than 70 or 80. It’s possible to recognize the importance of those developments for human well-being, while still understanding that most of GDP is a huge lump of goods and services, most of whose production cannot be revolutionized quickly, no matter how much intelligence (artificial or otherwise) is available.
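As a quick check of the compounding arithmetic in that excerpt, taking the $80,000 base and the 50-year horizon at face value:

```python
# Verifying the compound-growth figures quoted above.
income = 80_000  # approximate current US per capita income
years = 50

for growth in (0.02, 0.025):
    future = income * (1 + growth) ** years
    print(f"{growth:.1%} growth for {years} years: ${future:,.0f}")

# Output:
# 2.0% growth for 50 years: $215,328
# 2.5% growth for 50 years: $274,969
```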

Finally:

None of those estimates should be taken to suggest that AI development will be anything less than hugely impressive over the next few decades. But as one set of constraints is relaxed — in this case access to intelligence — the remaining constraints will matter all the more. Regulatory delays will be more frustrating, for instance, as they will be holding back a greater amount of cognitive horsepower than in times past. Or as AI improves at finding new and better medical hypotheses to test, the lags and delays in systems for clinical trials will become all the more painful. In fact they may worsen as the system is flooded with conjectures.

I should note also that there is nothing else on the horizon likely to boost growth more than AI will.