Results for “chatgpt”

ChatGPT as a predictor of corporate investment

We create a firm-level ChatGPT investment score, based on conference calls, that measures managers’ anticipated changes in capital expenditures. We validate the score with interpretable textual content and its strong correlation with CFO survey responses. The investment score predicts future capital expenditure for up to nine quarters, controlling for Tobin’s q and other determinants, implying the investment score provides incremental information about firms’ future investment opportunities. The investment score also separately forecasts future total, intangible, and R&D investments. High-investment-score firms experience significant negative future abnormal returns. We demonstrate ChatGPT’s applicability to measure other policies, such as dividends and employment.

That is from a new NBER working paper by Manish Jha, Jialin Qian, Michael Weber, and Baozhong Yang.
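The general recipe — have an LLM label conference-call excerpts and aggregate the labels into a firm-level score — can be sketched in a few lines. To be clear, the label set, the mapping, and the `investment_score` helper below are all hypothetical illustrations, not the authors' actual pipeline:

```python
# Hypothetical sketch: aggregate an LLM's per-excerpt judgments of a
# conference call into one firm-quarter investment score. The labels
# and numeric mapping are illustrative, not the paper's methodology.

LABEL_SCORES = {
    "increase": 1.0,    # managers anticipate higher capital expenditure
    "no change": 0.0,   # no anticipated change
    "decrease": -1.0,   # managers anticipate lower capital expenditure
}

def investment_score(labels):
    """Average per-excerpt labels into a single firm-level score."""
    if not labels:
        raise ValueError("need at least one labeled excerpt")
    return sum(LABEL_SCORES[lab] for lab in labels) / len(labels)

# Example: three excerpts from one call, as an LLM might label them.
print(round(investment_score(["increase", "increase", "no change"]), 2))  # → 0.67
```

A score built this way could then be taken to the data, e.g. regressed against realized capex in later quarters alongside Tobin's q, which is the validation exercise the abstract describes.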

Can ChatGPT assist in picking stocks?

I don’t believe this result will hold up, but I am happy to present the opposing point of view:

This paper studies whether ChatGPT-4 with access to the internet is able to provide valuable investment advice and evaluate financial information in a timely manner. Using a live experiment, we find a positive correlation between ChatGPT-4 ratings and future earnings announcements and stock returns. We find evidence that ChatGPT-4 adjusts ratings in response to earnings surprises and news events information in a timely manner. An investment strategy based on “attractiveness ratings” by ChatGPT-4 yields positive returns.

That is from a new paper by Matthias Pelster and Joel Val.

Labor market evidence from ChatGPT

So far some of the main effects are quite egalitarian:

Generative Artificial Intelligence (AI) holds the potential to either complement knowledge workers by increasing their productivity or substitute them entirely. We examine the short-term effects of the recent release of the large language model (LLM), ChatGPT, on the employment outcomes of freelancers on a large online platform. We find that freelancers in highly affected occupations suffer from the introduction of generative AI, experiencing reductions in both employment and earnings. We find similar effects studying the release of other image-based, generative AI models. Exploring the heterogeneity by freelancers’ employment history, we do not find evidence that high-quality service, measured by their past performance and employment, moderates the adverse effects on employment. In fact, we find suggestive evidence that top freelancers are disproportionately affected by AI. These results suggest that in the short term generative AI reduces overall demand for knowledge workers of all types, and may have the potential to narrow gaps among workers.

That is from a new paper by Xiang Hui, Oren Reshef, and Luofeng Zhou, via Fernand Pajot.  And here is an FT summary of some key results.

I would stress this point, however.  As more of ordinary life and commerce structures itself around AI, more and more AI-driven or AI-enabled projects will become possible.  That will favor those who are good at conceiving of projects and executing them, and those longer-run effects may well be less egalitarian.

Can ChatGPT Help Investors Process Financial Information?

Surprise, surprise, disclosures are bloated past the point of being informationally useful:

Generative AI tools such as ChatGPT can fundamentally change the way investors process information. We probe the economic usefulness of these tools in summarizing complex corporate disclosures using the stock market as a laboratory. The unconstrained summaries are dramatically shorter, often by more than 70% compared to the originals, whereas their information content is amplified. When a document has a positive (negative) sentiment, its summary becomes more positive (negative). More importantly, the summaries are more effective at explaining stock market reactions to the disclosed information. Motivated by these findings, we propose a measure of information “bloat.” We show that bloated disclosure is associated with adverse capital markets consequences, such as lower price efficiency and higher information asymmetry. Finally, we show that the model is effective at constructing targeted summaries that identify firms’ (non-)financial performance and risks. Collectively, our results indicate that generative language modeling adds considerable value for investors with information processing constraints.

That is from a new paper by Alex Kim, Maximilian Muhn and Valeri Nikolaev.
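The "bloat" idea lends itself to a toy formalization: the share of a document a faithful summary renders redundant. The authors' exact construction may differ; this is a minimal sketch of the intuition, with made-up numbers:

```python
def bloat(original_len, summary_len):
    """Toy information-bloat measure: the fraction of the original
    document that a faithful summary eliminates. If an accurate summary
    is 70% shorter, bloat is 0.7. Illustrative only, not the paper's
    exact definition."""
    if original_len <= 0 or summary_len > original_len:
        raise ValueError("summary must be no longer than the original")
    return 1 - summary_len / original_len

# A hypothetical 10,000-word disclosure summarized in 2,800 words:
print(round(bloat(10_000, 2_800), 2))  # → 0.72
```

Under a measure like this, the abstract's finding reads naturally: higher bloat means more text per unit of information, which plausibly raises processing costs and information asymmetry.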

Behavioral Economics and ChatGPT: From William Shakespeare to Elena Ferrante

We prompted GPT-4 (an artificial intelligence large language model) to use literary fictional characters to play the Dictator game, a classic behavioral economics experiment. We prompted GPT-4 with 148 characters from the 17th century to the 21st century. Their 888 decisions were used to compare characters over time as well as characters to human players. There is a general and mainly monotonic decrease in selfish behavior over time in literary characters. 41% of the decisions of 17th century characters are selfish compared to just 21% of the decisions of characters from the 21st century. In the Human-AI comparison, Humans are much more selfish than the AI characters with 51% of humans making selfish decisions compared to 28% of the AI characters. 

Here is the full (short) paper by Gabriel Abrams (a junior in high school).

Evidence from Italy’s ChatGPT Ban

We analyse the effects of the ban of ChatGPT, a generative pre-trained transformer chatbot, on individual productivity. We first compile data on the hourly coding output of over 8,000 professional GitHub users in Italy and other European countries to analyse the impact of the ban on individual productivity. Combining the high-frequency data with the sudden announcement of the ban in a difference-in-differences framework, we find that the output of Italian developers decreased by around 50% in the first two business days after the ban and recovered after that. Applying a synthetic control approach to daily Google search and Tor usage data shows that the ban led to a significant increase in the use of censorship bypassing tools. Our findings show that users swiftly implement strategies to bypass Internet restrictions but this adaptation activity creates short-term disruptions and hampers productivity.

That is from a recent paper by David Kreitmeir and Paul A. Raschky.  Via Pradyumna Shyama Prasad.
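The difference-in-differences logic behind the headline number can be sketched on toy data: compare the change in Italian developers' output around the ban to the change for a control group of other European developers. All the numbers below are invented for illustration:

```python
# Toy difference-in-differences, mirroring the paper's design with
# invented numbers (daily output per developer, before vs. after the ban).
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (change in treated group) - (change in control group)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

italy_pre, italy_post = 4.0, 2.1      # hypothetical: output roughly halves
europe_pre, europe_post = 4.0, 3.9    # hypothetical control group, ~flat

effect = did_estimate(italy_pre, italy_post, europe_pre, europe_post)
print(round(effect, 2))  # → -1.8
```

Subtracting the control group's change nets out Europe-wide shocks that hit Italy and the control countries alike, which is what lets the paper attribute the two-day drop to the ban itself.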

ChatGPT as marketing triumph

That is the topic of my latest Bloomberg column, here is one excerpt:

Consider the name itself, ChatGPT. Many successful tech products have happy, shiny, memorable names: Instagram, Roblox, TikTok. Someone comes up with the name, perhaps in a brainstorming session, and then it is taken to the broader world. With luck, it will prove sticky and continue to invoke a warm glow.

ChatGPT, as a name, does not have a comparably simple history. Originally the service was GPT-2, then GPT-3. GPT-3.5 was then released, and ChatGPT was announced in November 2022. ChatGPT-3.5 was cumbersome, and by that point it was running the risk of becoming obsolete numerically. So ChatGPT became the popular moniker. When GPT-4 was released in March, there was the chance to switch to just GPT-4 and treat ChatGPT as a kind of interim brand, connected with the 3.5 edition, but no — ChatGPT it is.

I still prefer to call it GPT-4, to make it clear which version of the service I am using. GPT-3.5 remains available for free and is widely used, and so my inner persnickety nag feels the two should not be confused. But they are confused, and almost everyone (present company excluded) just says ChatGPT. Again, this is not textbook marketing, but users seem fine with it.

At this point, you might be wondering what that GPT stands for. It is “Generative Pre-Trained Transformer.” How many of its users could tell you that? How many of its users could explain what it means? I can imagine a marketing team sitting around a conference table insisting that GPT is too opaque and just won’t do.

But at OpenAI, software engineers and neural net experts are the dominant influences. So Americans get a name that has not been heavily market-tested and does not necessarily fit well into an advertising slogan. Maybe they like it, especially if they are sick of more and more of the economy being run by marketers.

There is more at the link.  The part of GPT-4 that comes across as “focus group tested” — the neutering of Sydney and the like — is also the part that is not entirely popular, even if it is politically wise.

ChatGPT vs. the experts (Department of Uh-Oh)

ChatGPT’s answers are generally considered to be more helpful than humans’ in more than half of questions, especially for finance and psychology areas.

Most of all, ChatGPT does better in terms of concreteness.  Note also that ChatGPT uses more nouns and deploys a more neutral tone than do the human experts.  ChatGPT fares worst in the medical domain, but its biggest problem (from the point of view of the human evaluators) is giving too much information and not enough simple instructions.  Hmm…  In any case, here is the link.

I wonder how well the upgrades are going to do.

Ross Douthat on ChatGPT

Interesting throughout, here is one part:

Seeing it doesn’t make me think that the engineer was right, but it does draw me closer to Cowen’s reading of things, especially when he called Sydney a version of “the 18th-century Romantic notion of ‘daemon’” brought to digital life. Because the daemon of Romantic imagination isn’t necessarily a separate being with its own intelligence: It might be divine or demonic, but it might also represent a mysterious force within the self, a manifestation of the subconscious, an untamed force within the soul that drives passion and creativity. And so it could be with a personalized A.I., were its simulation of a human personality allowed to develop and run wild. Its apparent selfhood would exist not as a thing in itself like human consciousness but as a reflective glass held up to its human users, giving us back nothing that isn’t already within us but without any simple linearity or predictability in what our inputs yield.

From the perspective of creative work, that kind of assistant or muse might be much more helpful (or, sometimes, much more destructive) than the dutiful and anti-creative Xeroxer of the internet that Kirn and Chiang discerned in the initial ChatGPT. You wouldn’t go to this A.I. for factual certainty or diligent research. Instead, you’d presume it would get some details wrong, occasionally invent or hallucinate things, take detours into romance and psychoanalysis and japery and so on — and that would be the point.

I suspected Ross would be one of the first to digest this and figure it out.  Here is the (NYT) column.

How should you talk to ChatGPT? A User’s Guide

That is the topic of my Bloomberg column, it will make ChatGPT work better for you.  Here is one bit:

Ask ChatGPT “What is Marxism?” for example, and you will get a passable answer, probably no better than what you would get by using Wikipedia or Google. Instead, make the question more specific: “Which were the important developments in French Marxism in the second half of the 19th century?” ChatGPT will do much better — and it’s also the kind of question it’s hard to use Google and Wikipedia to answer.

ChatGPT will do better yet if you ask it sequential questions along an evolving line of inquiry. Ask it about the specific French Marxists it cites, what they did, and how they differed from their German counterparts. Keep on going.

ChatGPT does especially well at “compare and contrast.” In essence, ChatGPT needs you to point it in the right direction. A finely honed question gives it more fixed points of reference. You need to set the mood and tone and intellectual level of your question, depending on the kind of answer you want. It’s not unlike trying to steer the conversation at a dinner party. Or, to use another analogy, think of working with ChatGPT as like training a dog.

Another way to hone ChatGPT’s capabilities is to ask it for responses in the voice of a third person. Ask, “What are the costs of inflation?” and you might get answers that aren’t wrong exactly, but neither are they impressive. Instead, try this: “What are the costs of inflation? Please answer using the ideas of Milton Friedman.”

By mentioning Friedman, you have pointed it to a more intelligent corner of the ideas universe. If Friedman isn’t the right guide for you, choose another economist (don’t forget yours truly!). Better yet, ask it to compare and contrast the views of two economists.

There are further tips at the link.  Which are the tips that you know?
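The column's tips — ask a specific question, set a persona, then follow up sequentially — map naturally onto the message list that chat-style APIs consume. A minimal sketch (the persona, questions, and `make_conversation` helper are illustrative; this builds the payload but makes no API call):

```python
# Illustrative only: assemble a chat-style message list that applies the
# column's tips (persona via a system message, then sequential questions).
# No network call is made; the dict structure mirrors common chat APIs.

def make_conversation(persona, questions):
    """Build a message list: one persona-setting system message,
    followed by the user's questions in order."""
    messages = [{"role": "system",
                 "content": f"Answer using the ideas of {persona}."}]
    for q in questions:
        messages.append({"role": "user", "content": q})
    return messages

convo = make_conversation(
    "Milton Friedman",
    ["What are the costs of inflation?",
     "How do those costs differ from what Keynes would emphasize?"],
)
print(len(convo))  # → 3
```

The point of the structure is the same as the column's: the system message points the model at a particular corner of the ideas universe, and the ordered follow-ups carry the evolving line of inquiry.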

The evolution of ChatGPT

  • Microsoft plans to release technology to help big companies launch their own chatbots using the OpenAI ChatGPT technology, a person familiar with the plans told CNBC.
  • Companies would be able to remove Microsoft or OpenAI branding when they release chatbots developed with the software.

And sometime this year.  Here is the full story.  Amazing how many of you, only this morning, had your knickers in such a snit over this and all the “censorship.”  In contrast, I wrote: “Or how about all the new services and products tied to Chat, but drawing upon additional material? In well under a year these will be all over the place.”

ChatGPT and reading in clusters

I have a new favorite reading trick, most of all for history books.  As I read through the pages, I encounter names, place names, battle names and so on that I am not very familiar with (for many but not all topics).  Just keep on “ChatGPTing” at least one identifier a page as you go along.  Learn the context along the way.  The final effect is a bit like reading surrounding books on the same topic, except you don’t need all those other books.  This method is especially valuable when the topics are obscure.  To the extent the topics are pretty well-known, this method does not differ so much from using Google as you read.  Try it!

Where will the impact of ChatGPT fall? (from my email)

One dialogue that is missing from the current GPT conversations is about where these technologies will be primarily implemented.

There are two places GPTs can be used:

  1. Product-level: Implemented within products that companies buy
  2. Process-level: Implemented within processes of a company by an analyst – similar to how companies use data analysts today

If the future has primarily Product-level GPT, then companies like Microsoft clearly win because they have the products (like Teams) where the tech will be embedded and if you want the productivity gains from GPTs you have to go through them.

If the future has more Process-level GPT, companies like Zapier and no-code platforms win, because they will be the tools that companies use to implement their custom prompts. (although maybe a “Microsoft Teams Prompt Marketplace” wins as well)

The advantage to Process-level GPT is that companies don’t have to go through expensive change management to fit imperfect processes decreed by an external product – they can have their prompts designed to fit their specific needs. This would lead to higher productivity increases than a world with purely Product-level GPT.

To me, it comes down to the question of how much technical debt a custom prompt represents. If each prompt requires lots of maintenance and testing, then MSFT dominates. If someone who has 1 year of experience and used ChatGPT to write their papers in college can make a robust prompt, then Zapier wins.

Between the iterations of GPT3 so far (from davinci-001 to davinci-003/ChatGPT), we’ve seen the robustness of prompts increase exponentially. If this continues, it seems possible that the future has more Process-level GPT than we’ve seen so far.

Not edited by ChatGPT,
Neil [Madsen]

Commercial applications for ChatGPT

Here is one take from Nicola Morini Bianzino, written up by Sharon Goldman:

“Knowledge companies tend to store knowledge in a very flat, two-dimensional way that makes it difficult to access, interact and have a dialogue with,” he told VentureBeat in an interview. “We tried 20, 30, 40 years ago to build expert systems. That didn’t go really well because they were too rigid. I think this technology promises to overcome a lot of issues that expert systems have.”

As ChatGPT and similar tools evolve and improve, and can be trained on an enterprise’s data in a secure way, it will change the way we access and consume information inside the enterprise, he explained.

“I think we will get to a point when we can actually have a conversation about the company performance with an AI agent,” he said. “You interrogate the system, the system is capable of maintaining a state of the conversation, and then every question allows you to dig deeper into the problem and understand it better, as opposed to let me run a report on sales in this particular region for the last month, which doesn’t usually provide a lot of insights.”

Here is the full piece.  And here is a CNN piece on how real estate agents are using ChatGPT.

Novels as models, and ChatGPT

I read your piece on novels as models many years ago, and I’ve been reflecting on it with the advent of LLMs. I wrote a piece (substack) making the case that the data required for AGIs is probably embedded within the human textual corpus, and leaned on your old writing as evidence. I think you would really like it. I would also be curious for a future MR post if you have any retrospective thoughts on your 2005 article.

From @cauchyfriend.  Putting the AGI issue aside, my longstanding view has been that there are more “models” embedded in text than most people realize, a point relevant for economic method as well.  I see LLMs as having established this case, in fact far more definitively than I ever would have expected.