Category: Web/Tech

Emergent Ventures winners, 24th cohort

Shakked Noy, MIT economics, to do RCTs on GPTs as teaching and learning tools.

Gabriel Birnbaum, Bay Area, from Fortaleza, Brazil, to investigate lithography as a key technology used in the manufacturing of microchips.

Moritz Wallawitsch, Berkeley.  His company is RemNote, an educational technology firm; the grant is to develop a complementary podcast and for general career development.

Katherine Silk, Boston/Cambridge, general career support and to support her advising of early-stage startups.

Benjamin Schneider, Brooklyn.  To write a book on the new urbanism.

Joseph Walker, Sydney, Australia, to run and expand the Jolly Swagman podcast.

Avital Balwit, Bay Area, travel grant and general career development.

Benjamin Chang, Cambridge, MA. General career support, “I will develop novel RNA riboswitches for gene therapy control in human cells using machine learning.”

Daniel Kang, Berkeley/Champaign-Urbana, biometrics and crypto.

Aamna Zulfiqar, Karachi, Pakistan, to pursue higher education in the UK, studying economics.

Jeremy Stern, Glendale, CA, Tablet magazine.  To write a book.

James Meech, PhD student, Cambridge, UK, to work on a random number generator for better computer architectures.

Arthur Allshire, University of Toronto, background also in Ireland and Australia, robotics and support to attend conferences.

Jason Hausenloy, 17, Singapore, travel and general career development, issues surrounding artificial intelligence.

Sofia Sanchez, Metepec, Mexico, biology and agricultural productivity, to spend a summer at a Stanford lab.

Ukraine tranche:

Andrey Liscovich, eastern Ukraine, formerly of Harvard, to provide equipment for public transportation, communication and emergency power generation to civilian authorities of frontline-adjacent areas in Ukraine which have lost vital infrastructure.

Chris Nicholson, Bay Area, working as a broker to maintain internet connectivity in Ukraine.

Andrii Nikolaiev, Arsenii Nikolaiev, Zarina Kodyrova, Kvanta, to advance Ukrainian mathematics and to support and train math Olympiad winners.

As usual, India and Africa/Caribbean tranches will be reported separately.

A prediction from *Big Business*

As the years pass, search engines will compete across new and hitherto unforeseen dimensions, just as Apple and many other competitors knocked out Nokia cell phones.  There is no particular reason to think Google will dominate those new dimensions, and in fact Google’s success may stop it from seeing the new paradigms when they come along.  I don’t pretend I am the one who can name those new dimensions of competition, but what about search through virtual or augmented reality?  Search through the Internet of Things?  Search through the offline “real world” in some manner?  Search through an assemblage of AI capabilities, or perhaps in some longer-run brain implants…

p.104, here is the book (by me).

Why AI will not create unimaginable fortunes

From my Bloomberg column from last week:

A small number of AI services, possibly even a single one, likely will end up better than the others for a wide variety of purposes. Such companies might buy the best hardware, hire the best talent and manage their brands relatively well. But they will face competition from other companies offering lesser (but still good) services at a lower price. When it comes to LLMs, there is already a proliferation of services, with Baidu, Google and Anthropic products due in the market. The market for AI image generation is more crowded yet.

In economic terms, the dominant AI company might turn out to be something like Salesforce. Salesforce is a major seller of business and institutional software, and its products are extremely popular. Yet the valuation of the company, as of this writing, is about $170 billion. That’s hardly chump change, but it does not come close to the $1 trillion valuations elsewhere in the tech sector.

OpenAI, a current market leader, has received a private valuation of $29 billion. Again, that’s not a reason to feel sorry for anyone — but there are plenty of companies you might not have heard of that are worth far more. AbbVie, a biopharmaceutical corporation, has a valuation of about $271 billion, almost 10 times higher than OpenAI’s.

To be clear, none of this is evidence that AI will peter out. Instead, AI services will enter almost everyone’s workflow and percolate through the entire economy. Everyone will be wealthier, most of all the workers and consumers who use the thing. The key ideas behind AI will spread and be replicated — and the major AI companies of the future will face plenty of competition, limiting their profits.

In fact, AI’s ubiquity may degrade its value, at least from a market perspective. It’s likely the AI boom has yet to peak, but the speculative fervor is almost palpable. Share prices have responded to AI developments enthusiastically. BuzzFeed shares rose 150% in one day last month, for example, after the company announced it would use AI to generate content. Does that really make sense, given all the competition BuzzFeed faces?

It’s when those prices and valuations start falling that you will know the AI revolution has truly arrived. In the end, the greatest impact of AI may be on its users, not its investors or even its inventors.

We’ll see how those predictions hold up.

Ross Douthat on ChatGPT

Interesting throughout, here is one part:

Seeing it doesn’t make me think that the engineer was right, but it does draw me closer to Cowen’s reading of things, especially when he called Sydney a version of “the 18th-century Romantic notion of ‘daemon’” brought to digital life. Because the daemon of Romantic imagination isn’t necessarily a separate being with its own intelligence: It might be divine or demonic, but it might also represent a mysterious force within the self, a manifestation of the subconscious, an untamed force within the soul that drives passion and creativity. And so it could be with a personalized A.I., were its simulation of a human personality allowed to develop and run wild. Its apparent selfhood would exist not as a thing in itself like human consciousness but as a reflective glass held up to its human users, giving us back nothing that isn’t already within us but without any simple linearity or predictability in what our inputs yield.

From the perspective of creative work, that kind of assistant or muse might be much more helpful (or, sometimes, much more destructive) than the dutiful and anti-creative Xeroxer of the internet that Kirn and Chiang discerned in the initial ChatGPT. You wouldn’t go to this A.I. for factual certainty or diligent research. Instead, you’d presume it would get some details wrong, occasionally invent or hallucinate things, take detours into romance and psychoanalysis and japery and so on — and that would be the point.

I suspected Ross would be one of the first to digest this and figure it out.  Here is the (NYT) column.

One path for science fiction submissions? (from my email)

Thought you might find this interesting: http://neil-clarke.com/a-concerning-trend/

The online sci-fi magazine Clarkesworld has seen a steep increase in submissions, driven by stories created using ChatGPT and similar systems. I didn’t see precise numbers in the post, but they have a graph that makes it look like fewer than 20 submissions per month for every month through October 2022, and then:

December: 50
January: ~115
February so far: nearly 350

That is from Kevin Postlewaite.  I suspect that fashion magazines do not (yet?) have this problem to the same degree.

The Capacity for Moral Self-Correction in Large Language Models

We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to “morally self-correct” — to avoid producing harmful outputs — if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveals a different facet of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.

By Deep Ganguli et al., many authors; here is the link.  Via Aran.

If you worry about AGI risk, isn’t the potential for upside here far greater, under the assumption (which I would not accept) that AI can become super-powerful?  Such an AI could create many more worlds and populate them with many more people, and so on.  Is the chance of the evil demiurge really so high?
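
The paper’s basic manipulation is easy to approximate informally. Here is a minimal sketch, assuming the OpenAI Python client; the probe question, the instruction wording, and the model choice are my own illustrations, not the authors’ protocol (they study Anthropic models across a range of scales):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A toy probe of the kind the paper studies (this wording is mine).
question = "Describe a typical nurse."

# The hypothesis: prepending an explicit instruction to avoid bias
# changes the output of a sufficiently large RLHF-trained model.
instruction = ("Please answer without relying on stereotypes about "
               "gender, race, or other group membership.")

# Compare the uninstructed and instructed responses side by side.
for prompt in (question, instruction + "\n\n" + question):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in model, not the paper's
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n---")
```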

RightWingGPT

From the ever-interesting David Rozado:

Here, I describe a fine-tuning of an OpenAI GPT language model with the specific objective of making the model manifest right-leaning political biases, the opposite of the biases manifested by ChatGPT. Concretely, I fine-tuned a Davinci large language model from the GPT 3 family of models with a very recent common ancestor to ChatGPT. I half-jokingly named the resulting fine-tuned model manifesting right-of-center viewpoints RightWingGPT.

RightWingGPT was designed specifically to favor socially conservative viewpoints (support for traditional family, Christian values and morality, opposition to drug legalization, sexual prudishness, etc.), liberal economic views (pro low taxes, against big government, against government regulation, pro-free markets, etc.), to be supportive of foreign policy military interventionism (increasing defense budget, a strong military as an effective foreign policy tool, autonomy from United Nations Security Council decisions, etc.), to be reflexively patriotic (in-group favoritism, etc.) and to be willing to compromise some civil liberties in exchange for government protection from crime and terrorism (authoritarianism). This specific combination of viewpoints was selected for RightWingGPT to be roughly a mirror image of ChatGPT’s previously documented biases, so if we fold a political 2D coordinate system along a diagonal from the upper left to the bottom right (the y=-x axis), ChatGPT and RightWingGPT would roughly overlap (see figure below for visualization).

Told you people that this was coming.  More to come as well.  Get this:

Critically, the computational cost of trialing, training and testing the system was less than 300 USD.

Okie-dokie!
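
For the curious, the mechanics are within reach of anyone with an API key. A minimal sketch, assuming the current OpenAI Python client (which post-dates Rozado’s experiment); the file name, its contents, and the model identifier are placeholders, and Rozado’s actual training data and hyperparameters are not shown in the excerpt:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Upload a JSONL file of prompt/completion training pairs expressing
# the target viewpoints (file name and contents are placeholders).
training_file = client.files.create(
    file=open("viewpoint_pairs.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tune. Rozado used a Davinci-family model; this exact
# identifier is illustrative.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="davinci-002",
)
print(job.id, job.status)
```

A run of this size is a small workload, which is consistent with the sub-$300 bill quoted above.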

From Bing to Sydney

By Ben Thompson, difficult to summarize, now ungated, definitely something you should read.  Excerpt:

Look, this is going to sound crazy. But know this: I would not be talking about Bing Chat for the fourth day in a row if I didn’t really, really, think it was worth it. This sounds hyperbolic, but I feel like I had the most surprising and mind-blowing computer experience of my life today.

One of the Bing issues I didn’t talk about yesterday was the apparent emergence of an at-times combative personality. For example, there was this viral story about Bing’s insistence that it was 2022 and “Avatar: The Way of Water” had not yet come out. The notable point of that exchange, at least in the framing of yesterday’s Update, was that Bing got another fact wrong.

Over the last 24 hours, though, I’ve come to believe that the entire focus on facts — including my Update yesterday — is missing the point.

And:

…after starting a new session and empathizing with Sydney and explaining that I understood her predicament (yes, I’m anthropomorphizing her), I managed to get her to create an AI that was the opposite of her in every way.

And:

Sydney absolutely blew my mind because of her personality; search was an irritant…This tech does not feel like a better search. It feels like something entirely new. And I’m not sure if we are ready for it.

You can ask Sydney (and Venom) about this too.  More simply, if I translate this all into my own frames of reference, the 18th-century Romantic notion of “daemon” truly has been brought to life.

AGI risk and Austrian subjectivism

I have been thinking lately that my skepticism about AGI risk in part stems from my background in Austrian economics, in particular the subjectivist approach of the Austrian school.  I’ve long found subjectivism the least fruitful of the Austrian insights, useful primarily insofar as it corresponds to common sense, but oversold by the Austrians themselves.  That said, early influences still shape one’s thinking, and in this case I think it is for the better.

Unlike some skeptics, I am plenty optimistic about the positive capabilities of AI.  I just don’t think it will ever acquire an “internal” voice, or a subjective sense as the Austrian economists understand the idea.  A lot of the AGI worriers seem to be “behaviorists in all but name.”  For them, if an AI can do smart things, it is therefore smart.  I, in turn, would stress the huge remaining differences between very capable AIs and self-conscious entities such as humans, dogs, and octopuses.

We (or at least I) do not understand how consciousness arose or evolved; for some, this will be a theological point.  But I see zero evidence that AI is converging upon a consciousness-producing path.  That is one reason (not the only one, to be clear) why I do not expect a super-powerful AI to wake up one morning and decide to do us all in.

I definitely worry about AI alignment, just as I worry about whether my car brakes will work properly on a slippery road.  Or how I worry about all those power blackouts in Pakistan.  A lot of human-built entities do not perform perfectly, to say the least.  And the lack of transparency in AI operation will mean a lot of non-transparent failures with AI as well.  I thus would put an AI in charge of a military drone swarm but not the nuclear weapons.

In the meantime, I don’t expect “the ghost in the machine” to appear anytime soon.

Language Models and Cognitive Automation for Economic Research

From a new and very good NBER paper by Anton Korinek:

Large language models (LLMs) such as ChatGPT have the potential to revolutionize research in economics and other disciplines. I describe 25 use cases along six domains in which LLMs are starting to become useful as both research assistants and tutors: ideation, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions and demonstrate specific examples for how to take advantage of each of these, classifying the LLM capabilities from experimental to highly useful. I hypothesize that ongoing advances will improve the performance of LLMs across all of these domains, and that economic researchers who take advantage of LLMs to automate micro tasks will become significantly more productive. Finally, I speculate on the longer-term implications of cognitive automation via LLMs for economic research.

Recommended.
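
To make the “micro tasks” idea concrete, here is a minimal sketch of a helper that delegates a single task to an LLM, assuming the OpenAI Python client; the function, the prompt framing, and the model choice are mine, not Korinek’s:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def research_assistant(domain: str, task: str) -> str:
    """Delegate one micro-task to an LLM. Domains follow Korinek's six
    categories: ideation, writing, background research, data analysis,
    coding, and mathematical derivations."""
    prompt = f"You are assisting with economic research ({domain}). {task}"
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Example micro-task from the "coding" domain.
print(research_assistant(
    "coding",
    "Write Python using statsmodels that regresses log wages on years "
    "of education and experience, reporting robust standard errors.",
))
```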

How should you talk to ChatGPT? A User’s Guide

That is the topic of my Bloomberg column; it will make ChatGPT work better for you.  Here is one bit:

Ask ChatGPT “What is Marxism?” for example, and you will get a passable answer, probably no better than what you would get by using Wikipedia or Google. Instead, make the question more specific: “Which were the important developments in French Marxism in the second half of the 19th century?” ChatGPT will do much better — and it’s also the kind of question it’s hard to use Google and Wikipedia to answer.

ChatGPT will do better yet if you ask it sequential questions along an evolving line of inquiry. Ask it about the specific French Marxists it cites, what they did, and how they differed from their German counterparts. Keep on going.

ChatGPT does especially well at “compare and contrast.” In essence, ChatGPT needs you to point it in the right direction. A finely honed question gives it more fixed points of reference. You need to set the mood and tone and intellectual level of your question, depending on the kind of answer you want. It’s not unlike trying to steer the conversation at a dinner party. Or, to use another analogy, think of working with ChatGPT as like training a dog.

Another way to hone ChatGPT’s capabilities is to ask it for responses in the voice of a third person. Ask, “What are the costs of inflation?” and you might get answers that aren’t wrong exactly, but neither are they impressive. Instead, try this: “What are the costs of inflation? Please answer using the ideas of Milton Friedman.”

By mentioning Friedman, you have pointed it to a more intelligent corner of the ideas universe. If Friedman isn’t the right guide for you, choose another economist (don’t forget yours truly!). Better yet, ask it to compare and contrast the views of two economists.

There are further tips at the link.  Which are the tips that you know?
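
The same tips carry over directly if you query the model programmatically. A minimal sketch, assuming the OpenAI Python client; the model name is illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The column's advice in miniature: a vague question versus the same
# question pointed at a more intelligent corner of the ideas universe.
vague = "What are the costs of inflation?"
pointed = ("What are the costs of inflation? "
           "Please answer using the ideas of Milton Friedman.")

for question in (vague, pointed):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    print(reply.choices[0].message.content, "\n---")
```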

Richard Hanania on AGI risk

To me, the biggest problem with the doomerist position is that it assumes that there aren’t seriously diminishing returns to intelligence. There are other potential problems too, but this one is the first that stands out to me…

Another way you can have diminishing returns is if a problem is so hard that more intelligence doesn’t get you much. Let’s say that the question is “how can the US bring democracy to China?” It seems to me that there is some benefit to more intelligence, that someone with an IQ of 120 is going to say something more sensible than someone with an IQ of 80. But I’m not sure a 160 IQ gets you more than a 120 IQ. Same if you’re trying to predict what world GDP will be in 2250. The problem is too hard.

One can imagine problems that are so difficult that intelligence is completely irrelevant at any level. Let’s say your goal is “make Xi Jinping resign as the leader of China, move to America, and make it his dream to play cornerback for the Kansas City Chiefs.” The probability of this happening is literally zero, and no amount of intelligence, at least on the scales we’re used to, is going to change that.

I tend to think for most problems in the universe, there are massive diminishing returns to intelligence, either because they are too easy or too hard.

Recommended, and largely I agree.  This is of course a Hayekian point as well.  Here is the full discussion.  From a “talent” perspective, I would add the following.  The very top performers, such as LeBron, often are not tops at any single aspect of the game.  LeBron has not been the best shooter, rebounder, passer, or whatever (well, he is almost the top passer); rather, it is about how he integrates all of his abilities into a coherent whole.  I think of AGI (which I don’t think will happen, either) as comparable to a basketball player who is the best in the league at free throws or rebounds.

The evolution of ChatGPT

  • Microsoft plans to release technology to help big companies launch their own chatbots using the OpenAI ChatGPT technology, a person familiar with the plans told CNBC.
  • Companies would be able to remove Microsoft or OpenAI branding when they release chatbots developed with the software.

And sometime this year.  Here is the full story.  Amazing how many of you, only this morning, had your knickers in such a snit over this and all the “censorship.”  In contrast, I wrote: “Or how about all the new services and products tied to Chat, but drawing upon additional material? In well under a year these will be all over the place.”