Consumers vs. mates as a source of selection pressure

Evolutionary biology is one attempt to explain the nature of living beings.  In that framework there is a difference between the interests of individuals and the interests of genes.  If a practice increases the chance that genes will be passed along, it may evolve and persist, whether or not it serves either individual or collective self-interest.

To give a simple example, some women may prefer “cads.”  Those men, by definition, will sleep around, and quite possibly their sons will sleep around too.  The woman’s genes may thus spread more widely, and women who prefer cads may not disappear from the gene pool, even though the cads are bad for them.
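For readers who like to see the logic spelled out, here is a minimal toy simulation of this “sexy son” dynamic (my own illustration, not from any cited source, with arbitrary parameter values): individuals carry a “cad” allele expressed in males and a “preference” allele expressed in females, cad males obtain extra matings, and the preference allele can hitchhike on the mating success of those females’ sons rather than being eliminated.

```python
import random

# Toy "sexy son" sketch (illustrative only; parameters are arbitrary).
# Each individual carries two loci: a "cad" allele (expressed in males)
# and a "preference" allele (expressed in females).  No direct cost to
# the female is modeled; the point is only that the preference allele
# can persist by hitchhiking on her sons' extra matings.

GENERATIONS = 40
N = 600  # offspring produced per generation (half become male, half female)

def next_generation(males, females):
    """males/females: lists of (cad, pref) gene pairs (booleans)."""
    offspring = []
    for _ in range(N):
        mother = random.choice(females)
        # Weight each male by his attractiveness to this mother.
        weights = []
        for cad, _ in males:
            w = 1.0
            if cad:
                w *= 1.5          # cads court more widely in general
                if mother[1]:
                    w *= 3.0      # preferring females find cads more attractive
            weights.append(w)
        father = random.choices(males, weights=weights, k=1)[0]
        # Each locus is inherited from either parent with equal probability.
        cad = random.choice((father[0], mother[0]))
        pref = random.choice((father[1], mother[1]))
        offspring.append((cad, pref))
    half = N // 2
    return offspring[:half], offspring[half:]

males = [(random.random() < 0.2, random.random() < 0.2) for _ in range(N // 2)]
females = [(random.random() < 0.2, random.random() < 0.2) for _ in range(N // 2)]
for _ in range(GENERATIONS):
    males, females = next_generation(males, females)

everyone = males + females
# Typically both frequencies stay well above zero, and often rise,
# because the preference allele travels with the cad allele in sons.
print("cad allele frequency:       ", sum(c for c, _ in everyone) / len(everyone))
print("preference allele frequency:", sum(p for _, p in everyone) / len(everyone))
```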

You might ask whether corresponding mechanisms apply to the evolution of AI models.  If I prefer an OAI model to DeepSeek, for instance, that will help to spread OAI models through the AI population.  OAI will have more revenue, and it will produce more of whatever is succeeding in the market.  Furthermore, my choice of model may influence others to make the same choice, and it may help create and finance the surrounding infrastructure for that model.

Will I buy the next generation of OAI models?  Well, yes, if the first one pleased me.  The model “reproduces” and sustains itself if I, as a consumer, am happy with it.  One obvious incentive is toward usefulness; another is toward sycophancy.  We already see these features realized in the data.  There is nothing comparable, however, to the “cads incentive” found in human life.

One potential problem arises if individuals are not the only buyers.  Let us say the military also purchases AI models.  The motives of the military may be complex, but at the very least “wanting to kill people” (whether justly or not) is on the list of possible uses.  Models effective for this end will thus be funded and encouraged.

My model of the military is that, above and beyond efficacy, it values “obedience” and “following orders” to an extreme degree, including in its AI models.  There will thus be evolutionary pressure for those features to evolve in military AI models.

To be sure, not all orders are good ones.  But in this case the real risk is from evil humans, or deeply mistaken humans, not from the tendencies of the AI models themselves.

So my view is that the selection pressures for AI models are relatively benign, noting this major caveat about how evil humans may develop and use them.

If the biggest risk is from the military models, it might be good for the consumer sector of AI models to grow all the more, as a relatively benevolent counterweight.

Are the financial sector’s AI models going to evolve more like the consumer models or the military models?

Here are some related remarks from Maarten Boudry, and I am also grateful for an exchange with Zohar Atkins.
