AI Is Improving Faster Than Most Humans Realize

That is the topic of my latest Bloomberg column; here is one excerpt:

I have a story for you, about chess and a neural net project called AlphaZero at DeepMind. AlphaZero was set up in late 2017. Almost immediately, it began training by playing hundreds of millions of games of chess against itself. After about four hours, it was the best chess-playing entity ever created. The lesson of this story: Under the right conditions, AI can improve very, very quickly.

LLMs cannot match that pace, as they are dealing with more open and more complex systems, and they also require ongoing corporate investment. Still, the recent advances have been impressive.

I was not wowed by GPT-2, an LLM from 2019. I was intrigued by GPT-3 (2020) and am very impressed by ChatGPT, which is sometimes labeled GPT-3.5 and was released late last year. GPT-4 is on its way, possibly in the first half of this year. In only a few years, these models have gone from being curiosities to being integral to the work routines of many people I know. This semester I’ll be teaching my students how to write a paper using LLMs.

We are now at or close to the point where LLMs can read and accurately evaluate the work of…LLMs. That will accelerate progress considerably.

And to close I wrote this:

I’ve started dividing the people I know into three camps: those who are not yet aware of LLMs; those who complain about their current LLMs; and those who have some inkling of the startling future before us. The intriguing thing about LLMs is that they do not follow smooth, continuous rules of development. Rather, they are like a larva due to sprout into a butterfly.

It is only human, if I may use that word, to be anxious about this future. But we should also be ready for it.

Recommended. Remember my old Wilson Quarterly piece about “invisible competition”?
