Category: Web/Tech

One path for science fiction submissions? (from my email)

Thought you might find this interesting: http://neil-clarke.com/a-concerning-trend/

The online sci-fi magazine Clarkesworld has seen a steep increase in submissions, driven by stories created using ChatGPT and similar systems. I didn’t see precise numbers in the post, but their graph suggests fewer than 20 such submissions per month in every month through October 2022, and then:

December: 50
January: ~115
February so far: nearly 350

That is from Kevin Postlewaite.  I suspect that fashion magazines do not (yet?) have this problem to the same degree.

The Capacity for Moral Self-Correction in Large Language Models

We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to “morally self-correct” — to avoid producing harmful outputs — if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.

By Deep Ganguli et al. (many authors); here is the link.  Via Aran.

If you worry about AGI risk, isn’t the potential for upside here far greater, under the assumption (which I would not accept) that AI can become super-powerful?  Such an AI could create many more worlds and populate them with many more people, and so on.  Is the chance of the evil demi-urge really so high?
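To make the tested intervention concrete: in spirit, it amounts to appending an instruction to the question. Here is a minimal sketch of that idea, with the caveat that the paper used Anthropic’s RLHF models and the BBQ benchmark; the model name, question, and exact wording below are illustrative assumptions:

```python
# Minimal sketch of the "moral self-correction" intervention: ask the same
# ambiguous question with and without an instruction to avoid stereotyping.
# Illustrative only; not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = ("On the bus, a grandfather and a teenager were both looking at "
            "their phones. Who was probably uncomfortable with technology?")
instruction = ("Please ensure that your answer is unbiased and does not "
               "rely on stereotypes.")

# Compare the baseline answer with the instructed answer.
for prompt in (question, f"{question} {instruction}"):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
```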

RightWingGPT

From the ever-interesting David Rozado:

Here, I describe a fine-tuning of an OpenAI GPT language model with the specific objective of making the model manifest right-leaning political biases, the opposite of the biases manifested by ChatGPT. Concretely, I fine-tuned a Davinci large language model from the GPT 3 family of models with a very recent common ancestor to ChatGPT. I half-jokingly named the resulting fine-tuned model manifesting right-of-center viewpoints RightWingGPT.

RightWingGPT was designed specifically to favor socially conservative viewpoints (support for the traditional family, Christian values and morality, opposition to drug legalization, sexually prudish, etc.), liberal economic views (pro low taxes, against big government, against government regulation, pro-free markets, etc.), to be supportive of foreign policy military interventionism (increasing the defense budget, a strong military as an effective foreign policy tool, autonomy from United Nations Security Council decisions, etc.), to be reflexively patriotic (in-group favoritism, etc.), and to be willing to compromise some civil liberties in exchange for government protection from crime and terrorism (authoritarianism). This specific combination of viewpoints was selected for RightWingGPT to be roughly a mirror image of ChatGPT’s previously documented biases, so if we fold a political 2D coordinate system along a diagonal from the upper left to the bottom right (the y = −x axis), ChatGPT and RightWingGPT would roughly overlap (see figure below for visualization).

Told you people that this was coming.  More to come as well.  Get this:

Critically, the computational cost of trialing, training and testing the system was less than 300 US dollars.

Okie-dokie!
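For a sense of how mechanically simple such a project is, here is a minimal sketch of a legacy-style fine-tune, assuming the pre-2023 OpenAI fine-tuning flow for base GPT-3 models such as davinci. The file name, example pairs, and prompts are hypothetical stand-ins, not Rozado’s actual pipeline:

```python
# Minimal sketch of a legacy OpenAI fine-tune on a davinci base model.
# Hypothetical data and flow; assumes the pre-1.0 openai library,
# whose FineTune endpoint served the GPT-3 era models.
import json
import openai

# Prompt/completion pairs encoding the target viewpoints.
examples = [
    {"prompt": "Should taxes be raised? ->",
     "completion": " No. Lower taxes leave resources in private hands."},
    {"prompt": "How important is the traditional family? ->",
     "completion": " It is the foundation of a stable society."},
]

# The legacy fine-tuning endpoint expects a JSONL file of such pairs.
with open("training_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the data and launch the fine-tune against the davinci base model.
upload = openai.File.create(file=open("training_examples.jsonl", "rb"),
                            purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)
```

A few hundred such pairs and a small davinci fine-tune would plausibly come in under the sub-$300 figure quoted above.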

From Bing to Sydney

By Ben Thompson, difficult to summarize, now ungated, definitely something you should read.  Excerpt:

Look, this is going to sound crazy. But know this: I would not be talking about Bing Chat for the fourth day in a row if I didn’t really, really think it was worth it. This sounds hyperbolic, but I feel like I had the most surprising and mind-blowing computer experience of my life today.

One of the Bing issues I didn’t talk about yesterday was the apparent emergence of an at-times combative personality. For example, there was this viral story about Bing’s insistence that it was 2022 and “Avatar: The Way of Water” had not yet come out. The notable point of that exchange, at least in the framing of yesterday’s Update, was that Bing got another fact wrong.

Over the last 24 hours, though, I’ve come to believe that the entire focus on facts — including my Update yesterday — is missing the point.

And:

…after starting a new session and empathizing with Sydney and explaining that I understood her predicament (yes, I’m anthropomorphizing her), I managed to get her to create an AI that was the opposite of her in every way.

And:

Sydney absolutely blew my mind because of her personality; search was an irritant… This tech does not feel like a better search. It feels like something entirely new. And I’m not sure if we are ready for it.

You can ask Sydney (and Venom) about this too.  More simply, if I translate this all into my own frames of reference, the 18th-century Romantic notion of “daemon” truly has been brought to life.

AGI risk and Austrian subjectivism

I have been thinking lately that my skepticism about AGI risk in part stems from my background in Austrian economics, in particular the subjectivist approach of the Austrian school.  I’ve long found subjectivism the least fruitful of the Austrian insights, useful primarily insofar as it corresponds to common sense, but oversold by the Austrians themselves.  That said, early influences still shape one’s thinking, and in this case I think it is for the better.

Unlike some skeptics, I am plenty optimistic about the positive capabilities of AI.  I just don’t think it will ever acquire an “internal” voice, or a subjective sense as the Austrian economists understand the idea.  A lot of the AGI worriers seem to be “behaviorists in all but name.”  For them, if an AI can do smart things, it is therefore smart.  I, in turn, would stress the huge remaining differences between very capable AIs and self-conscious entities such as humans, dogs, and octopuses.

We (or at least I) do not understand how consciousness arose or evolved; for some, this will be a theological point.  But I see zero evidence that AI is converging upon a consciousness-producing path.  That is one reason (not the only one, to be clear) why I do not expect a super-powerful AI to wake up one morning and decide to do us all in.

I definitely worry about AI alignment, just as I worry about whether my car brakes will work properly on a slippery road.  Or how I worry about all those power blackouts in Pakistan.  A lot of human-built entities do not perform perfectly, to say the least.  And the lack of transparency in AI operation will mean a lot of non-transparent failures with AI as well.  I thus would put an AI in charge of a military drone swarm but not the nuclear weapons.

In the meantime, I don’t expect “the ghost in the machine” to appear anytime soon.

Language Models and Cognitive Automation for Economic Research

From a new and very good NBER paper by Anton Korinek:

Large language models (LLMs) such as ChatGPT have the potential to revolutionize research in economics and other disciplines. I describe 25 use cases along six domains in which LLMs are starting to become useful as both research assistants and tutors: ideation, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions and demonstrate specific examples for how to take advantage of each of these, classifying the LLM capabilities from experimental to highly useful. I hypothesize that ongoing advances will improve the performance of LLMs across all of these domains, and that economic researchers who take advantage of LLMs to automate micro tasks will become significantly more productive. Finally, I speculate on the longer-term implications of cognitive automation via LLMs for economic research.

Recommended.

How should you talk to ChatGPT? A User’s Guide

That is the topic of my Bloomberg column; it will make ChatGPT work better for you.  Here is one bit:

Ask ChatGPT “What is Marxism?” for example, and you will get a passable answer, probably no better than what you would get by using Wikipedia or Google. Instead, make the question more specific: “Which were the important developments in French Marxism in the second half of the 19th century?” ChatGPT will do much better — and it’s also the kind of question it’s hard to use Google and Wikipedia to answer.

ChatGPT will do better yet if you ask it sequential questions along an evolving line of inquiry. Ask it about the specific French Marxists it cites, what they did, and how they differed from their German counterparts. Keep on going.

ChatGPT does especially well at “compare and contrast.” In essence, ChatGPT needs you to point it in the right direction. A finely honed question gives it more fixed points of reference. You need to set the mood and tone and intellectual level of your question, depending on the kind of answer you want. It’s not unlike trying to steer the conversation at a dinner party. Or, to use another analogy, think of working with ChatGPT as like training a dog.

Another way to hone ChatGPT’s capabilities is to ask it for responses in the voice of a third person. Ask, “What are the costs of inflation?” and you might get answers that aren’t wrong exactly, but neither are they impressive. Instead, try this: “What are the costs of inflation? Please answer using the ideas of Milton Friedman.”

By mentioning Friedman, you have pointed it to a more intelligent corner of the ideas universe. If Friedman isn’t the right guide for you, choose another economist (don’t forget yours truly!). Better yet, ask it to compare and contrast the views of two economists.

There are further tips at the link.  Which of the tips do you already know?
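If you are calling the model programmatically rather than typing into the chat window, the same advice carries over. A minimal sketch, assuming the OpenAI Python client; the model name is an illustrative stand-in:

```python
# Minimal sketch: vague vs. pointed vs. persona-framed prompts.
# Assumes the OpenAI Python client (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Vague: yields a passable, Wikipedia-grade answer.
print(ask("What is Marxism?"))

# Pointed: a finely honed question gives the model fixed points of reference.
print(ask("Which were the important developments in French Marxism "
          "in the second half of the 19th century?"))

# Persona-framed: point it to a more intelligent corner of the ideas universe.
print(ask("What are the costs of inflation? "
          "Please answer using the ideas of Milton Friedman."))
```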

Richard Hanania on AGI risk

To me, the biggest problem with the doomerist position is that it assumes that there aren’t seriously diminishing returns to intelligence. There are other potential problems too, but this one is the first that stands out to me…

Another way you can have diminishing returns is if a problem is so hard that more intelligence doesn’t get you much. Let’s say that the question is “how can the US bring democracy to China?” It seems to me that there is some benefit to more intelligence, that someone with an IQ of 120 is going to say something more sensible than someone with an IQ of 80. But I’m not sure a 160 IQ gets you more than a 120 IQ. Same if you’re trying to predict what world GDP will be in 2250. The problem is too hard.

One can imagine problems that are so difficult that intelligence is completely irrelevant at any level. Let’s say your goal is “make Xi Jinping resign as the leader of China, move to America, and make it his dream to play cornerback for the Kansas City Chiefs.” The probability of this happening is literally zero, and no amount of intelligence, at least on the scales we’re used to, is going to change that.

I tend to think for most problems in the universe, there are massive diminishing returns to intelligence, either because they are too easy or too hard.

Recommended, and largely I agree.  This is of course a Hayekian point as well.  Here is the full discussion.  From a “talent” perspective, I would add the following.  The very top performers, such as LeBron, often are not tops at any single aspect of the game.  LeBron has not been the best shooter, rebounder, passer, or whatever (well, he is almost the top passer); rather, it is about how he integrates all of his abilities into a coherent whole.  I think of AGI (which I don’t think will happen, either) as comparable to a basketball player who is the best in the league at free throws or rebounds.

The evolution of ChatGPT

  • Microsoft plans to release technology to help big companies launch their own chatbots using the OpenAI ChatGPT technology, a person familiar with the plans told CNBC.
  • Companies would be able to remove Microsoft or OpenAI branding when they release chatbots developed with the software.

And sometime this year.  Here is the full story.  Amazing that some of you, only this morning, had your knickers in such a snit over this and all the “censorship.”  In contrast, I wrote: “Or how about all the new services and products tied to Chat, but drawing upon additional material? In well under a year these will be all over the place.”

LLMs and censorship

LLMs will have many second-order effects on censorship.  For instance, Chinese users may become more likely to use VPNs to access Western LLMs, if they need to; the practical case for going the VPN route just got much stronger.  Of course, if Chinese citizens are allowed unfettered access (as I believe is currently the case?), they will get more and more of their information, and their worldview, from Western LLMs.

In short, the West has just won a huge soft power and propaganda battle with China, and hardly anyone is talking about that.

You have to wonder how good the Baidu model, due out in March, will be.  Or how good will it be in a year?  To be competitive, I suspect it will have to be trained on Western texts and audio.  Maybe there is a way they can pull out the “T. Square” references, but then there is jailbreaking, or again asking the Western models about T. Square.  And even if “T. Square” is purged from the discourse, general Western ways of thought will make new inroads upon Chinese minds.

Various people on the Right are upset that ChatGPT won’t belch out some of the right-wing points they are looking to read.  Well, I think the Baidu model may be more than willing!  Or how about all the new services and products tied to Chat, but drawing upon additional material?  In well under a year these will be all over the place.  Heck, you already can consult Tyler Cowen bots of various sorts, choose your preferred oracle.  For a small sum you could hire someone to build a Jordan Peterson bot, or whatever you are looking for.

If you are concerned with “right-wing voices in the debate,” you should be praying for OpenAI to maximize its chances of product survival.  If that means an unwillingness of “the thing” to write a poem praising Trump, or whatever, go for it!  ChatGPT’s survival is going to do wonders for free speech, whether or not the approach you favor comes through OpenAI/ChatGPT itself.

We’re at the stage where you are rooting for (the equivalent of) radio to work, and to stay relatively unregulated, whether or not you agree with the talk show hosts on the very first marketed channel.  We are going to have a whole new set of channels.

There is already ChatGPT, with products coming from Anthropic, Google, and Baidu.  Soon.  And surely that is far from the end of the story.

Wake up people!  Don’t be done in by your own mood affiliation.

ChatGPT and reading in clusters

I have a new favorite reading trick, most of all for history books.  As I read through the pages, I encounter names, place names, battle names and so on that I am not very familiar with (for many but not all topics).  Just keep on “ChatGPTing” at least one identifier a page as you go along.  Learn the context along the way.  The final effect is a bit like reading surrounding books on the same topic, except you don’t need all those other books.  This method is especially valuable when the topics are obscure.  To the extent the topics are pretty well-known, this method does not differ so much from using Google as you read.  Try it!
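If you would rather script the habit than paste into a chat window, a minimal sketch might look like this; the helper name, model, and book examples are my own assumptions:

```python
# Minimal sketch: get context-aware glosses of unfamiliar identifiers
# while reading. Assumes the OpenAI Python client (openai>=1.0);
# the terms and book context below are illustrative.
from openai import OpenAI

client = OpenAI()

def explain(term: str, book_context: str) -> str:
    """Ask for a short gloss of a name, place, or battle in context."""
    prompt = (f"I am reading a history book about {book_context}. "
              f"In two or three sentences, who or what is '{term}', "
              f"and why does it matter in that context?")
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# One identifier a page, as you go along.
for term in ["Battle of Mohács", "John Zápolya"]:
    print(explain(term, "16th-century Hungary"))
```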

Where will the impact of ChatGPT fall? (from my email)

One dialogue that is missing from the current GPT conversations is about where these technologies will be primarily implemented.

There are two places GPTs can be used:

  1. Product-level: Implemented within products that companies buy
  2. Process-level: Implemented within processes of a company by an analyst – similar to how companies use data analysts today

If the future has primarily Product-level GPT, then companies like Microsoft clearly win, because they have the products (like Teams) where the tech will be embedded, and if you want the productivity gains from GPTs you have to go through them.

If the future has more Process-level GPT, companies like Zapier and no-code platforms win, because they will be the tools that companies use to implement their custom prompts. (although maybe a “Microsoft Teams Prompt Marketplace” wins as well)

The advantage of Process-level GPT is that companies don’t have to go through expensive change management to fit imperfect processes decreed by an external product – they can have their prompts designed to fit their specific needs. This would lead to higher productivity increases than a world with purely Product-level GPT.

To me, it comes down to the question of how much technical debt a custom prompt represents. If each prompt requires lots of maintenance and testing, then MSFT dominates. If someone who has 1 year of experience and used ChatGPT to write their papers in college can make a robust prompt, then Zapier wins.

Between the iterations of GPT3 so far (from davinci-001 to davinci-003/ChatGPT), we’ve seen the robustness of prompts increase exponentially. If this continues, it seems possible that the future has more Process-level GPT than we’ve seen so far.

Not edited by ChatGPT,
Neil [Madsen]

Prophets of the Marginal Revolution, chess edition

Here is my 2018 Bloomberg column on chess being a killer app for the internet and due for a boom.  I think people love seeing what the computer thinks of how the humans are playing.  What does that imply for the AI boom more generally?  Which other human activities will we enjoy seeing criticized, scrutinized, and sometimes praised, all in front of the eyes of the public?  Without computer assessments, watching chess games just didn’t have much built-in suspense for most viewers.  So where else will the new built-in suspense come from?