Sentences to ponder

From Jeff:

There is an alternative to pain as an incentive mechanism:  dispensing with incentives altogether and just programming the organism with instructions to follow. And if the organism doesn’t already have “feelings” as a part of its infrastructure then the instructions are the only alternative.  The big question for theories of pain and pleasure as an incentive mechanism is why mother nature as Principal bothers with incentives at all.


Pre-programmed instructions don't leave much room for problem solving.

It depends. For instance, consider leptin/ghrelin. The instruction is to eat. How to eat is left up to the organism.

"Eat" by itself is not an instruction, as it doesn't actually tell the organism what to do, or even that it has to do it right now. Instead, appetite and the relief of appetite from eating act more like an incentive mechanism. An example of an instruction mechanism would be the digger wasp's nest-building behavior, which seems to be built on step-by-step instructions that must be executed exactly.

A fair point. It depends on how you define instruction vs. incentive, I suppose -- there is, after all, an instinctive hardwired component to knowing how to eat; we don't just figure out empirically that putting stuff in our mouths makes us less hungry. But those are certainly more specific instructions. Would we then consider only the involuntary functions of higher mammals to be instructions per se (your heart has instructions on how to beat, your bowels on how to digest, etc.)?

I do think this statement is problematic:

"Isn’t it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid?"

Dawkins assumes we would rationally avoid damage even without pain, but that's probably not true. I remember reading about a teen born with a very rare condition of feeling no pain at all, whose body was a laundry list of scars and broken bones.

"Mother nature"?????
Once we personify natural processes, fuzzy thinking is not far behind.

kinda like "the invisible hand"

"...why mother nature as Principal bothers with incentives at all."

(1) Because incentives are flexible. Habitual responses only work if you're dealing with a limited set of static circumstances that need responding to. They don't tend to generalize well or work well in dynamic environments. "Pleasure" and "pain" provide a framework for very sophisticated reinforcement learning algorithms.
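The contrast between a fixed habitual response and a reward-driven learner can be sketched in a few lines of Python. This is a toy illustration, not a claim about biology: the two-action environment, the reward values, and the learning parameters are all invented for the example.

```python
import random

# Toy world: two actions; which one yields "pleasure" (reward +1)
# flips partway through, as in a changing environment.
def reward(action, good_action):
    return 1.0 if action == good_action else -1.0  # "pleasure" vs. "pain"

def run_learner(steps=2000, seed=0):
    # Incentive-driven organism: learns action values from reward feedback.
    rng = random.Random(seed)
    q = {0: 0.0, 1: 0.0}       # learned value of each action
    alpha, epsilon = 0.2, 0.1  # learning rate, exploration rate
    score = 0.0
    for t in range(steps):
        good = 0 if t < steps // 2 else 1  # environment changes midway
        a = rng.choice([0, 1]) if rng.random() < epsilon else max(q, key=q.get)
        r = reward(a, good)
        q[a] += alpha * (r - q[a])         # update driven by the incentive signal
        score += r
    return score

def run_hardwired(steps=2000):
    # Pre-programmed organism: a fixed "instruction" with no feedback loop.
    score = 0.0
    for t in range(steps):
        good = 0 if t < steps // 2 else 1
        score += reward(0, good)  # always takes action 0, regardless of outcome
    return score

print(run_learner() > run_hardwired())  # the learner adapts after the flip
```

The hardwired rule does exactly as well as its static world allows and no better; the learner loses a little to exploration but recovers when the environment changes.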

(2) Why did Nature bother? No reason. It works. Nature doesn't sit down with a menu of all the options for how organisms could be designed and select the best one. Some system emerged; it is found to be useful compared to the previous system. So here it is.

(3) The question may not be "Why have incentives?" but "Why have qualia?" I suppose you could have, say, a basal ganglia reward signal to enable learning without ever feeling pleasure as a part of that. But this becomes a much bigger can of worms than I care to attempt to unpack on a comment thread.

Brent has it. It is a way of solving the incomplete contracting problem. You can't specify the behavior in every situation, so you need discretion and therefore incentives.

How do you distinguish the feeling of "pain" from the hypothetical incentive-based feeling of "I appear to have violated my instructions", anyway? Seems like he's just changing the name and not the result.

That is an easy one. A pre-programmed organism simply wouldn't violate its instructions (just as we don't violate the laws of physics), and would have no sense of "I appear to have violated my instructions" at all. Think about a bacterium: it eats, reproduces, exchanges genes... all instinctively.

There are plenty of organisms that operate on this principle. Dawkins himself has written about an insect (I can't remember the details) that displays complex but completely stereotyped patterns of behavior. If you interrupt the insect mid-behavior, it will start from the top, never learning, never deviating. I imagine it's bleedingly obvious why more complex organisms don't operate the same way.
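That restart-from-the-top pattern is easy to caricature in code. A minimal sketch, with an invented step list standing in for the insect's routine:

```python
# Toy sketch of fully stereotyped behavior: a fixed instruction list that,
# when interrupted, restarts from the top rather than resuming or adapting.
STEPS = ["dig burrow", "hunt prey", "drag prey to entrance",
         "inspect burrow", "pull prey in"]

def run(interrupt_at=None):
    log = []
    i = 0
    while i < len(STEPS):
        if i == interrupt_at:
            interrupt_at = None  # the disturbance happens once
            i = 0                # never learns: back to step one
            continue
        log.append(STEPS[i])
        i += 1
    return log

print(run())                # uninterrupted: the five steps in order
print(run(interrupt_at=3))  # interrupted: re-executes from "dig burrow"
```

An incentive-driven organism could notice that steps one through three are already done; the pure instruction-follower cannot.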

Two words: robust integration.

The comment is predicated on a silly philosophy hypo rather than reality. If you believe in evolution, it's pretty obvious we're a hodgepodge of different organisms and systems, all thrown together, with an ad hoc "control" system cobbled together over time. So the assumption that there could be an evolved organism that works just like a pre-planned, programmed computer system ("I have suffered 8 points of damage from heat, must move 6 inches away to avoid further damage") is silly. Also, if you really think about it, the computational complexity and speed required just wouldn't work.

Physical pain is a signal for damage. It is not an antonym for pleasure.

What you guys should be talking about are positive and negative reinforcers. Why do we have positive and negative reinforcers?

Does that make the question trivial?

Now, what about things that short-circuit the process? Say, heroin. People talk like it's pleasurable, and that the pleasure it produces makes it into a positive reinforcer -- people who take it want to take more. Cocaine is similar: it gives an immediate pleasure that goes away immediately, to encourage taking more immediately. If you want the pleasure more than anything else, you will take the drug. If you care more about end results, you will avoid short-circuiting the process.

"I inferred you were a sophisticate. Only a sophisticate fears a tasp."

The endorphin release that feels good is something that can be learned. You are the one who gives yourself rewards and punishments for achieving the goals you want or for goofing off. Why not learn to just feel pleasure any time you want to? I personally learned to do this with self-hypnosis, and after a week or so I mostly forgot to do it. It just was not interesting, and I do it now occasionally, when I remember to, but mostly not. I don't really know why, but my hypothesis is that it wasn't really what I wanted.

People are good at coming up with explanations for their choices after the fact, that don't necessarily have anything to do with the choice. So I don't much trust my own explanations.

"Who but a prideful sophisticate would fear too much pleasure?"

But perhaps even non-sophisticates aren't as vulnerable as is commonly thought. There is a theory gaining some acceptance that argues the vast majority of drug addicts are self-medicating for PTSD or other disturbances.

Remember those studies of mice where they will become "addicted" to cocaine and opiates if given access to them? Well, it turns out other studies have found that may actually just be the effect of being locked in a really boring box all day -- with more stimulating environments, their use decreases.

Do you have a link to any of those studies?

I'd have to dig, the reports were floating around the intertubes a few years back.

Ah, it wasn't as hard as I thought.

"The problem with the Skinner box experiments, Alexander and his co-researchers suspected, was the box itself. To test that hypothesis, Alexander built an Eden for rats. Rat Park was a plywood enclosure the size of 200 standard cages. There were cedar shavings, boxes, tin cans for hiding and nesting, poles for climbing, and plenty of food. Most important, because rats live in colonies, Rat Park housed sixteen to twenty animals of both sexes.

Rats in Rat Park and control animals in standard laboratory cages had access to two water bottles, one filled with plain water and the other with morphine-laced water. The denizens of Rat Park overwhelmingly preferred plain water to morphine (the test produced statistical confidence levels of over 99.9 percent). Even when Alexander tried to seduce his rats by sweetening the morphine, the ones in Rat Park drank far less than the ones in cages. Only when he added naloxone, which eliminates morphine’s narcotic effects, did the rats in Rat Park start drinking from the water-sugar-morphine bottle. They wanted the sweet water, but not if it made them high.

In a variation he calls “Kicking the Habit,” Alexander gave rats in both environments nothing but morphine-laced water for fifty-seven days, until they were physically dependent on the drug. But as soon as they had a choice between plain water and morphine, the animals in Rat Park switched to plain water more often than the caged rats did, voluntarily putting themselves through the discomfort of withdrawal to do so.

Rat Park showed that a rat’s environment, not the availability of drugs, leads to dependence. In a normal setting, a narcotic is an impediment to what rats typically do: fight, play, forage, mate. But a caged rat can’t do those things. It’s no surprise that a distressed animal with access to narcotics would use them to seek relief."

TallDave, thank you for rephrasing my quote to something easier to follow without background. As it turns out, I got the quote wrong in the first place.

And you were making an exact quote from the same source, farther along in the text.

Thank you.

This raises the question: how the hell are behaviors programmed in genetic material? There are many animals that have such "instructions" they follow (at least partly; in many cases it's combined with an incentives mechanism). I cannot even begin to imagine how you'd encode that in DNA... has there been any research on this topic?

Rupert Sheldrake wrote a couple of books about that question and some related ones regarding biomorphology.

Sorry, but that looks like New Age crap married to some Jungian concepts and given a veneer of science to hide all the crap...

The comment and Dawkins both overlook the constitutive role pleasure and pain play in forming and maintaining groups. Aren't pleasure and pain the coefficients to coupling between otherwise independent algorithms?

All of the above, and more. Alex and superflat, I believe, point closest to the true answer: "nature" did try encoded instruction sets, and indeed does continue to use them in many circumstances. Simple organisms, such as bacteria, of course operate on encoded instructions and nothing else. Humans are about 1000x more complex. But that 1000x complexity is only enough to encode our physical manifestations.

Within that complexity, the vast amount of our life really is encoded rote - heartbeat, digestion, immune response, healing, nerve transmission, etc.

Incentive heuristics have just proven more efficient than rote algorithms at ensuring the reproduction of the rest of the complexity that comprises higher organisms.

Brent and SB7 seem to have nailed the answer to this comment, from an evolutionary point of view. Organisms which only had preset instructions could not survive in a dynamic environment, and a world full of other organisms is bound to be dynamic. This, incidentally, should be obvious to anyone who studies economics.

"Pain" is the internal experience of the phenomenon of an organism reacting to an aversive stimulus. The external "third person" observation is that the organism attempts to avoid the stimulus and prevent harm to itself. The internal "first person" experience is "that hurt - make it stop".

The idea that there could be "programming" without some sort of experiential correlate is questionable. And what name would you want to give to the internal experience of "programming" that causes strongly avoidant behavior? I suggest "pain".

And what name would you want to give to the internal experience of “programming” that causes strongly avoidant behavior? I suggest “pain”.

You have worked hard on a project in cooperation with a co-worker. On completion, your boss says it was good work and then goes down the list of things your co-worker did, saying they were excellent. Mixed in he points out various things you did that he says were mediocre and need improvement.

Is "pain" the best label for what you experience? And what avoidant behavior should you produce from this experience? Avoid your boss? Avoid your co-worker? Look for another job?

You have put some effort into a joint project with a co-worker, and your boss praises your efforts. After work, in the parking lot, your co-worker punches you in the jaw. You feel something that probably should have the name "pain". What avoidant behavior should you produce from this experience?

In response to the initial question: the hardware (wetware) is an associative network, not a serial processor. The notion of discrete instructions on this type of processor doesn't make much sense.

If you take Pinker's notion that consciousness is essentially a simulation of a serial processor running on an associative matrix, in order to handle problems of practically infinite combinatorial complexity, then the answer to "why does pain have to be so painful" may be: it is the way the sensory-motor brain communicates 'extreme aversion' to the consciousness module.

Of course, pain is a quale (which I take in the Ramachandran sense), which poses inherent difficulties of comparison between individuals, not to mention species. (Who says pain is so painful? Maybe you just think it is...)

Mother Nature studied economics, and knows that people respond to incentives.

"The big question [...] is why mother nature as Principal bothers with incentives at all."

Natural selection tends to favor organisms that are flexible. The fossil record is littered with organisms that couldn't adapt to changing conditions in their local ecosystem. Following genetic instructions with no feedback loop has a tendency to make one extinct, I think.

This wasn't written by a biologist, was it? Or a computer programmer.

The answer to that 'big question' is pretty simple: near-infinite contexts. Let's take a super simple little multicellular organism, sensitive to probably about 20 different signals (multiple types of predators, multiple types of food, mate, rival, light, vibration, temperature, etc.), with discrimination of at least two levels (yes, no) for each. That is 2^20 possible states. To deal with that programmatically, we need at least 2^20 instructions. As we scale up to larger, more complex organisms, the dimensionality of the state space expands at a rate that quickly dwarfs the computational benefits we gain from having more cells.

Then there's evolution. Imagine an organism that was suddenly sensitive to 21 different signals. That new signal could interact with all 20 previous signals, forcing all the programs to be rewritten at once. Evolution has a very hard time making that kind of gigantic switchover; it is much better at slow changes.
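The arithmetic behind this is easy to check directly (the signal counts are the comment's own illustrative numbers):

```python
# Number of distinct environmental states an organism could face with
# n binary signals: a lookup-table "program" needs an entry per state.
def states(n_signals, levels=2):
    return levels ** n_signals

print(states(20))               # 20 yes/no signals -> 1,048,576 states
print(states(21))               # one more signal doubles the table
print(states(21) - states(20))  # brand-new entries the rewrite must cover
```

Adding the 21st signal doesn't add one entry; it doubles the table, which is why a wholesale rewrite is so costly.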

The solution to all of this is to have some sort of aggregation mechanism for evaluating different types of signals in a lower-dimensional space. That's all emotion really is. We have a high-dimensional problem space, and we try to reduce the dimensionality to make computation more efficient.
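A minimal sketch of that aggregation idea: instead of a 2^n-entry lookup table, collapse the n signals into one scalar "valence" and act on its sign. The signal names and weights here are invented for illustration.

```python
# Toy "valence" aggregator: n weights replace a 2^n-entry behavior table.
# Weights are made up for the example.
WEIGHTS = {
    "predator": -5.0, "food": +2.0, "mate": +3.0,
    "rival": -1.0, "heat": -2.0,
}

def valence(signals):
    # signals: dict mapping signal name -> 0/1 reading
    return sum(WEIGHTS[name] * on for name, on in signals.items())

def act(signals):
    # One decision rule over the aggregate, instead of one per state.
    return "approach" if valence(signals) > 0 else "avoid"

print(act({"predator": 0, "food": 1, "mate": 0, "rival": 1, "heat": 0}))
print(act({"predator": 1, "food": 1, "mate": 0, "rival": 0, "heat": 0}))
```

The table an instruction-follower would need grows as 2^n; the aggregator needs only the n weights, which is the dimensionality reduction the comment describes.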
