Toward a theory of random, concentrated breakthroughs

I don’t (yet?) agree with what is to follow, but it is a model of the world I have been trying to flesh out, if only for the sake of curiosity.  Here are the main premises:

1. For a big breakthrough in some area to come, many different favorable inputs had to come together.  So the Florentine Renaissance required the discovery of the right artistic materials at the right time (e.g., good tempera, then oil paint), prosperity in Florence, guilds and nobles interested in competing for status with artistic commissions, relative freedom of expression, and so on.

2. To some extent, but not completely, the arrival of those varied inputs is random.  Big breakthroughs are thus hard to predict and also hard to control.

3. A breakthrough in one area increases the likelihood that further breakthroughs will come in closely related areas.  So if the coming together of the symphony orchestra leads to the work of Mozart and Haydn, that in turn becomes an inspiration and eases the path for later breakthroughs in music, not just Mahler but also The Beatles, compared to, say, how much it might ease future breakthroughs in painting.

4. Some breakthroughs are very very good for economic growth, such as the Industrial Revolution.  But most breakthroughs do not in any direct way boost gdp very much.  The Axial age led to the creation of significant religions and intellectual traditions, but the (complex) effects on gdp are mostly lagged and were certainly hard to see at the time.

5. Even if Robert Gordon is right that we will never have a new period of material progress comparable to the early 20th century for improving living standards, the next breakthrough eras still might be very important.

6. One possibility is that the next breakthrough will be some form of brain engineering.  People might be much happier and better adjusted, but arguably that could lower measured gdp by boosting “household production” in lieu of market activity.  At the very least, gdp figures may not reflect the value of those gains.

7. Another candidate for the next breakthrough would be institutional changes that make ongoing international peace much more likely.  That would have some positive effects on gdp in the short run, but its major effects would be in the much longer run, namely the prevention of a very destructive war.

8. Judged by the standards of the last breakthrough, the current/next breakthrough is typically hard to see and understand.  It almost always feels like we are failing at progress.

9. When a breakthrough comes, you need to ride it for all it is worth.  Arguably you also should embrace the excesses of that breakthrough, not seek to limit them.  It is perhaps your only real chance to mine that mother lode of inspiration.  So let us hope that Baroque music was “overproduced” in the early to mid 18th century, because after that production opportunities go away.  For that reason, “overuse” of the internet and social media today may not be such a bad thing.  It is our primary way of exploring all of the potential of that cultural mode, and that mode will at some point be tamed and neutered, just as Baroque music composition is now dormant.

10. Progress in (many forms of) science may be more like progress in Baroque music composition than we comfortably like to think.  But I hope not.


" Even if Robert Gordon is right that we will never have a new period of material progress comparable to the early 20th century for improving living standards"

This is the same Robert Gordon who said at the LSE a few years ago that Moore's Law collapsed in 2005.

As you see from my comment below, total agreement.

Didn't know these would be posted

Don't worry, someone pays a fair bit of attention to the comment section.

If not precisely consistently in how that attention is directed.

Most "breakthroughs" are the result of a dedicated or truly mad individual who spends a lifetime working on a problem. The next breakthrough is not really predictable as to what or when.

In defence of Robert Gordon, his latest tome is a very good piece of analysis of the decline in total factor productivity, or growth per person/worker, which he puts down to diminishing marginal returns from research. I find his examples realistic. My own knowledge and experience are with Australian agriculture, where a much-enhanced research effort is delivering only marginal returns compared to some phenomenal productivity improvements from research 70 or 80 years ago. There is still room for marginal improvements, and for laggards to catch up to the technological frontier, but the frontier is not moving very much nowadays.

What has been going on with Moore's Law lately?

Moore’s Law: "The number of researchers required today to achieve the famous doubling every two years of the density of computer chips is more than 18 times larger than the number required in the early 1970s."

From "Are Ideas Getting Harder to Find?" by Bloom, Jones, Van Reenen, and Webb (February 15, 2019, version 3.0).
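To make the arithmetic behind that quote concrete, here is a toy calculation (my own sketch, not the paper's model): if density doubles every two years while the researcher headcount needed to sustain that doubling has grown roughly 18-fold since the early 1970s, then research productivity, doublings delivered per researcher, has fallen by about the same factor.

```python
# Toy arithmetic behind the Bloom et al. claim (illustrative only).
# Moore's Law: transistor density multiplies by 2 every 2 years.

def density_growth(years, doubling_period=2.0):
    """Multiplicative growth in transistor density after `years` years."""
    return 2 ** (years / doubling_period)

# Density gain over one decade of on-track Moore's Law: 2**5 = 32x.
decade_gain = density_growth(10)

# Research productivity = growth delivered per unit of research effort.
# If the same doubling now takes ~18x the researchers, per-researcher
# productivity is roughly 1/18th of its early-1970s level.
researchers_now_vs_1970s = 18
productivity_ratio = 1 / researchers_now_vs_1970s

print(f"Density gain per decade: {decade_gain:.0f}x")
print(f"Relative research productivity today: {productivity_ratio:.3f} of early-1970s level")
```

The point of the sketch is that exponential output can "stay on track" even while the input cost of each doubling explodes, which is exactly the divergence the paper documents.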

Moore's Law has stayed on track but is expected to end within a few years. Yet even though Moore's Law (the doubling of transistors on a chip) will end, recent techniques will continue to accelerate computing power through at least the 2020s.

Spintronics, quantum computing, and optical computing all hold tremendous promise. I question the data on the number of researchers required to make computing advances. There's definite survivorship bias in older data -- like backfitting S&P 500 returns to 1923 or 1870. Hello! The index was created in 1956; any data from before then is re-created.

In the same way, we have no idea how many fundamentals of electronics were drawn upon to develop the first 8086 chip, or how many blind alleys were explored along the way. Scientific advance has always required a scientific community. That's why we have journals, and partly why science took off during the Enlightenment.

I like the notion of a breakthrough model. It should be applicable to market breakthroughs as well: assembly lines, Xerox machines, personal computers, smartphones, etc.

Moore's law is not going to 'end in a few years', due to 3D stacking of 'chips'. If anything, we might see MORE than 2x chips every two years (i.e. breaking Moore's law on the upside, not the downside), with 3D stacking of 'chiplets'. Heat issues are a problem but they might even add in-chip liquid cooling to solve them.

Bonus trivia: the entire thread fails to mention patents. Pathetic and not worth my time. I have a lot to say on this issue but I won't be bothered. TC's 'model' is not bad but incomplete. You guys should read this book: "The Forgotten Revolution: How Science was Born in 300 BC and Why It Had to Be Reborn" by Lucio Russo (Springer: 1996)

"Moore's law is not going to 'end in a few years', due to 3D stacking of 'chips'. "

Moore's Law is ending in a few years, but because of methods like 3D stacking, the acceleration of computing power will not end in a few years.

The president of Nvidia says otherwise.

This guy gets Moore's Law wrong as well by adding "that for the same essential manufacturing costs you would be able to double the number of transistors you could cram into a chip every year."

Moore's Law says nothing about constant costs. Rock's Law states costs increase with every cycle. Still, fun to read: "Intel, Nvidia, please shut up about Moore’s Law...."

Without a fundamental breakthrough in materials, Moore's Law is done in a few years. If you have any ties to industry, you'd know this. Also, Robert Gordon is not wrong. If you count performance rather than transistor count, then Moore's Law ended a long time ago.

Why would I need ties to industry? I do know a leading chip maker, but that is irrelevant. Moore's Law is about transistor count, so how can you say not to count it?

Performance is measured in millions of instructions per second at a constant cost. (Moore's Law says nothing about costs.)

Look at the top graph. Where is there any indication of Moore's law "breaking down" in 2005 (Gordon) or "ended a long time ago" (you)?

I don’t care about being first.
Big problems with #5.
And what a narrow definition!
The internet has fundamentally changed the way I “recreate” and work.
(My current job - which doubled my salary - is based on me taking online courses + my previous experience).

Yeah, I agree. We've had massive material progress and bumps in living standards just in the last 20 years. Internet plus China makes an embarrassment of riches for regular people contrary to what Trump thinks. The amount of knowledge and media at our fingertips surpasses even billionaires from the 1980s.

Where does "Mechanization Takes Command: A Contribution to Anonymous History" by Sigfried Giedion fit in?

"One of the twentieth century’s best-known architectural theorists examines the impact of mechanization on daily life
First published in 1948, Mechanization Takes Command is an examination of mechanization and its effects on everyday life. A monumental figure in the field of architectural history, Sigfried Giedion traces the evolution and resulting philosophical implications of such disparate innovations as the slaughterhouse, the Yale lock, the assembly line, tractors, ovens, and “comfort” as defined by advancements in furniture design."

"Arguably you also should embrace the excesses of that breakthrough, not seek to limit them."

Like electric guitar music in the second half of the 20th Century -- an age of excess, but there's something to be said for it.

#9 strikes me as profound and original. "It is our primary way of exploring all of the potential of that cultural mode, and that mode will at some point be tamed and neutered"

It is probably the most interesting and hard-to-evaluate of the 10 points. I'm not sure if I agree with it though. Maybe progress in Baroque music faltered because people had decided they'd had enough of it, had already made the most monumental achievements, and would get more growth and higher returns by moving on to what is annoyingly called the "Classical period". (I say "annoyingly" because we also call that whole artistic genre "Classical music"; it'd be like having a subfield of economics called ... economics. )

Not to mention that after having tamed and neutered the harpsichord, clavichord, and lute, progress in instruments led to new opportunities. The piano comes to mind, along with a more modern version of the guitar, not to mention the bass drum and cymbals.

Strangely enough, this progress in instruments is already seen in point 3, and then ignored when talking about 'taming and neutering.'

The real breakthrough of the Baroque period (1600-1750) was well temperament, which enabled music to be written in all keys, along with the more complex harmonies this made possible. But this wasn't fully exploited until well into the Romantic period (1820-1900).

Or, the Economic Period of Economics. One that may have ended.

My theory is that too much counterpoint just does not work for the listener: it's really fun to write, but it's a resource hog without much payoff. I hate to say it, but I can't really pick out more than two independent voices, and I've had a lot of practice.

This is a good point (along with the ones about technological progress in instruments and well or equal temperament). The composers, musicians, and audiences of the Baroque era may have realized that pushing further into Baroque styles would've amounted to decadence.

Rather than try to push further in that direction, or remain stagnant with the existing Baroque styles, it was time to push in new directions, aided (or pushed?) by new instruments and tuning.

The problem with #9 is #8, "the current/next breakthrough is typically hard to see and understand." You might be riding some dead end fad.

We're in agreement that most of your 10 points are rubbish; I guess?

1. "Big Breakthrough" isn't explicitly defined, but later points imply it is somehow related to gdp (except when it's not; see point #4, and note the different language, "breakthrough" vs. "big breakthrough"). As usual, TC wants to have it both ways. I'm confident that there is exactly zero worthwhile meaning behind the idea that "many different favorable inputs had to come together". I am typically a contrarian (no surprise), so the first thing I do is consider the contrary of the claim. Can you name any event in history (Big Bang excluded, not that it's part of recorded history) that didn't occur because of many different "favorable" inputs? No, you can't. It's called causality.

#2. Arrival is "random". No, it's not. (Speaking of the Big Bang) The Universe is 13.8 billion years old, and a uniform random distribution (equi-probable) is inconsistent with any historical event happening during the (arguably) 50,000 years (i.e., 0.0004% of time so far, and asymptotically zero if we assume the Universe has a lifetime of trillions of years) that man has been more than a bunch of separate tribes/family groups. OK, I'm nitpicking, but not really. My deeper point is that, like the man said, we stand on the shoulders of giants. Kinda like the turtles who support the world. Atomic theory of matter existed for thousands of years, for example. It didn't suddenly develop at the same time that we developed the equipment to investigate it. Who would have predicted in 1950 that in 2010 the anti-vax movement would be killing kids? So, I don't disagree with the statement that we lack both control of and accurate prediction of the future. You don't think this is a new idea, do you? We can predict lots of stuff: heat death of the Universe, loss of all (surface) water on Earth, climate change, our own mortality... lots of things. But if we could control them, then would they become more or less predictable? And no, I don't find "more predictable, except when they're not" an acceptable answer. Recall the war to end all wars? LOL. How'd that prediction work out? We are a contrary species. We build AND we destroy. Predicting which we'll do going forward? Good luck with that! (Of course, we'll do both.)

#3. I don't have any idea what this means. It seems you (again) want it both ways. Modern sanitation led to vast social changes - not just in personal health. The integrated (transistor) circuit also led to... well, hopefully two counter-examples (and I could go on) are sufficient.

#4. I had to look up "Axial Age". Rubbish. We're finding more and more evidence that trade occurred BCE across most of the old world (and across most of the new, too). These cultures, the ones that survived, adapted. "Nearly" isolated isn't the same thing as "completely" isolated. The major problem I have with this is the lack of definition about what is meant. Can you give me an example of a "breakthrough" which didn't result in a change in habits? Would you call something that didn't result in significant changes in behavior a "breakthrough"? I doubt it.

#5. Yup, TC, it sure "might" be. (Interpret my tone as your choice of snide, derisive, or contemptuous.) Saying this as if it isn't tautological requires a clear definition of "big breakthrough" which doesn't already imply it's something "important". Again, good luck with that.

#6 & #7. Couldn't it be claimed that both HAVE already been accomplished? Our psycho-drugs have dramatically improved a lot of people's lives (to the point of river pollution!), and our wars have been trivial for generations (US-involved wars, I mean; there's definitely room for improvement in both how we handle mental health and how we spend our children's blood, but breakthrough? When will they ever learn?)

#8. Comments on how "it feels" don't merit a response.

#9. X-rays were a breakthrough 100 years ago. I had my feet x-rayed (I kid you not) instead of measured for fitting my shoes. A friend of my mother had her nose dissolved/destroyed when her doctor used x-rays to remove a mole on her face. And then there's hydrogenated fats, antioxidants, and hormones for menopause. Ride 'em all, huh? Oh, you mean the "good" breakthroughs, right? (Or the "real" ones.)

#10. I know nothing about baroque music. But there is a real argument to be made that beyond a certain level of complexity, science will no longer be evidence-based. That it'll become a matter of which narrative you prefer (String Theory is a great example), since many (contradictory) ones will predict the same outcomes. Personally, I don't see us being anywhere near knowing ourselves well enough to worry about that happening in the behavioral sciences in the next 20 years. But with AI coming on strong, who knows?

I think you may want to consider slightly more condensed posting, as this comment section has a tendency to remove longer text posts, regardless of content.

"I think you may want to consider slightly more condensed posting"

Coming from clockwork_prior, that's some Maury Povich "You are not the Father" type diss.

Exactly the opposite - a number of my longer text posts have been deleted since the site was 'redesigned.' However, I have been archiving this comment section for years, so when it seems reasonable, I simply repost a more condensed version of the post.

It truly seems to be a neutral 'filter' - content does not seem to play any role in such deletions. Meaning that if one spends the time to write a longer post, having it be deleted may be disappointing - and since such disappointment is easily avoided, it was easy enough to provide such a suggestion.

But it seems that even a single sentence was too long for you to read, as this is just an expanded explanation of '... this comment section has a tendency to remove longer text posts, regardless of content.'

One can only hope that when a new civilization rises from our ashes thousands of years from now, the only record of our civilization and culture will be an archive of prior_approval’s comments on Marginal Revolution.

My name is prior_approval
King of Commentors
Look on my works, ye mighty
And despair

Not bad, but maybe:

My name is prior_approval
King of Commentors
Look upon my Wikipedia Links
And despair

I'm not totally infatuated with Tyler Cowen and GMU, I just archive this comment section for years on end because that's totally normal and not obsessive behavior.

Besser - aber nicht Perfekt.

What is interesting is to see the patterns in what is deleted, and of course, Save Page As is simply Ctrl+S in Seamonkey. There are areas that our moon shot launcher really, really does not want discussed here, starting with the fact that the Mercatus Center is in no way, shape or form part of GMU. Nor is anything the Mercatus Center sponsors or funds in any way, shape, or form a part of GMU.

It continues from there, and it is fascinating to see that pattern running over the years. There is something fascinating in seeing such devotion to presenting a facade that the person presenting it knows is just a facade, and yet the effort spent in maintaining that facade provides a bit of light in an area that is only supposed to be known among those playing the game, not the people being gamed.

The comments have no value in themselves, of course, and very few people pay attention to them - mine or anyone else's.

And really, Prof. Tabarrok continues to be ignored by just about everyone, because the deletions here are not just in connection with one of the featured web site owners.

And as proof of how little anyone cares about the comments, this is not the first, second, or tenth time mentioning that comments are saved - it makes it much easier to repeat the point in a different formulation using cut and paste, along with a tag line like 'as seen by a select few previously.'

Though maybe a little bit of progress is being made - it has been several years since anyone attempted to seriously argue that this comment section is not carefully manicured.

This is an excellent distillation of TC’s weaknesses as an intellectual (as reflected in his blog posts). I hope he reads this and takes these criticisms to heart.

TC "I don’t (yet?) agree with what is to follow, but it is a model of the world I have been trying to flesh out, if only for the sake of curiosity. "
since when has objectively presenting 10 premises
been a distillation of intellectual weakness?
r u a sociologist?

I'm wondering whether the key to the Florentine Renaissance was the concept of the genius, or what we might call the celebrity. There no doubt were European architects and sculptors of genius before the Renaissance, as the great Gothic cathedrals suggest, but we seldom know their names.

In contrast, a large share of the most famous people in history from, say, Dante around 1300 into the 1500s were associated with Florence and its neighbors. For example, 3 of the 4 Teenage Mutant Ninja Turtles were more or less Florentines. (Similarly, Athens c. 400 BC is full of famous individuals.)

We have interesting narratives about many famous Florentine individuals, which makes Florence's accomplishments more engaging since we can attribute many of them to specific personalities.

In contrast, the identities of most of the contributors to Chartres Cathedral are very hazy to us. So, sometimes Chartres is said to embody the spirit of the age, but other times it fades from attention because we know so little about the personalities of the individuals involved.

My impression is that once Florence began to publicize the identities of its great men, a snowball got rolling.

Vasari's "Lives" of 1550, with its bias toward Florentine art, is the culmination of the Florentine Celebrity mindset, but this process must have begun far earlier.

I assume that the spread of the printing press out of Germany from the 1450s onward played a role in making the fame of Florentines permanent. But, paradoxically, Venice was the center of the printing industry c. 1500 and obviously had great artists (e.g., Titian). Yet I could tell you off the top of my head far more about Florence-associated individuals like Leonardo and Michelangelo than about Venetians.

This is in part due to the enormous influence and heavy Florentine bias of the Florentine Vasari's 1550 book Lives of the Painters, Sculptors, and Architects. He left Titian out of his book until a later edition.

So, Venice had the technology and economic infrastructure to Write the History Books of the Renaissance in terms of men of genius to appeal to modern tastes, but Venice just didn't seem to have as much motivation to write celebrity-oriented accounts as Florence did.

The Florentines produced celebrities because they were wealthy. Talented individuals could flourish in that milieu of prosperity, which was based on trade and, most importantly, the adoption of double-entry accounting. The printing press spread not only awareness of notable personalities but also the advanced techniques of bookkeeping as presented by Luca Pacioli in his 1494 book "Summa de Arithmetica, Geometria, Proportioni et Proportionalita". The Renaissance was even more important to commerce than it was to art.

"Following 1,000 years of cultural decline and societal collapse known as the Dark Ages, the 15th century brought forth the Renaissance, an unprecedented resurgence in learning and the arts, which four or five guys pretty much just strapped onto their backs and carried the whole way.

"Our research indicates that da Vinci, Michelangelo, Shakespeare, and Galileo basically hoisted the entire intellectual transformation of mankind onto their shoulders while everyone else just sat around being superstitious nimrods," said Sue Viero of the Correr Museum of Art in Venice, Italy. "Here's da Vinci busting his ass to paint such masterpieces as The Last Supper and the Mona Lisa, while some loser like Albrecht Dürer is doing these dinky little woodcuts that are basically worthless."

Artificial general intelligence will hockey-stick scientific progress.

Then they will accidentally turn the planet into grey goo/unliveable atmosphere in a massive runaway process because two or more technological tweaks interact unpredictably and their creators weren't allowed (by competing nations/corporations) to communicate properly before releasing their tweaks into the wild.

Some say the world will end in fire
And some in goo.

Well, you can put out a fire.

What can you do with goo? It's sticky, malleable, and it gets all over you. And then it devours you.

This is the way the world ends
Not with a bang but with a plop

I am surprised that Tyler doesn't see us as living in the middle of one of these concentrated breakthroughs right now. The nexus of AI and robotics is going to completely turn the world upside down over the next 30 years. (BTW, Bran Ferren has a nice YouTube video on the idea of concentrated breakthroughs.)
I'm actually pretty optimistic about the whole thing, although I think the transition period will be critical. I tend to think that most human conflict is, at its foundation, a struggle for resources. With an end of scarcity, there will also be much diminished conflict (assuming we can share the benefits).
The question I have for economists is: what happens when we experience simultaneous deflationary pressures as AI/robotics begin to chop away at labor prices throughout the value chain? I think we have experienced this to some extent, but not at the rate we will in the next several decades.

For #2 and #3 (arrival of breakthroughs), Stuart Kauffman's thoughts on "the adjacent possible" may be useful. Basically, systems which reprocess many building blocks (e.g., chemistry and life) may have a certain "speed" with which they discover new blocks.

Quote from Kauffman via

"And the fourth concerns the idea of the adjacent possible. It just may be the case that biospheres on average keep expanding into the adjacent possible. By doing so they increase the diversity of what can happen next. It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it. There's a lot of neat science to be done to unpack that, and I'm thinking about it" .

+1. I was about to make the same point when I saw your message.

This is an area where an understanding of complexity theory would really help people understand how this stuff works. We exist in a world of nested, interconnected complex systems. Society is a complex adaptive system.

Stuart Kauffman introduced the idea of the 'adjacent possible' to explain how biology evolved to be more complex over time. Applied to technology and breakthroughs, it looks like this:

Society always contains people who are looking for new ways to do things better - to make money, to learn more, to achieve more happiness, whatever. The unexplored space of possibilities around us is the 'adjacent possible' - those things we could actually do with the technology we have, but which we just haven't discovered yet.

Now imagine someone invents something new. The laser, for example. Not only does the existence of the laser increase economic growth, but it expands the adjacent possible. A world with lasers in it can do more things than one without lasers. So now the adjacent possible might include things like fiber optics. So people discover fiber optics, and the adjacent possible expands again.

One of the reasons it's so hard to predict the future is that future progress is essentially a random walk through the adjacent possible, and while you might be able to predict new inventions in the current adjacent possible by looking at what's now possible, trying to see past that into future adjacent possibles is impossible because they are created through the intersection of millions of changes constantly happening.

It's actually a process of massively parallel computation, and it's likely the most efficient way society has to evolve. One piece of evidence for this efficiency is parallel invention. For example, for powered flight to become part of the adjacent possible we first had to invent a whole lot of other things, such as an engine with a sufficiently high power-to-weight ratio. But once those things existed, powered airplanes became part of the adjacent possible. And almost immediately there were people all around the world building powered aircraft. If the Wright brothers hadn't flown at Kitty Hawk, any one of numerous other aircraft builders around the world would have done it very soon thereafter.

Newton and Leibniz seem to have invented calculus independently at the same time. Darwin had to race to publish his theory of evolution because another scientist had come up with the same thing simultaneously. These kinds of things happen all the time. This process can seem almost random, but when dealing with a search space of unknown unknowns, random searching is highly effective. When ants forage, they do it by spreading out and milling around randomly. There's a reason they evolved that way. They could have evolved some top-down directed structure, but we almost never see those in nature, because they are not efficient.

When governments get involved with directed technology 'investments' or try to force breakthroughs, all they do is focus more resources on a specific area, reducing the search space overall. Sometimes these efforts discover important new things, which gives us the illusion that this is a highly effective way to advance. What we can't see is the stuff we missed because the search space went unexplored.
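The dispersed-vs-directed contrast above can be put in a toy model (my own sketch, not anything from the complexity literature): give the same total search effort either to independent random explorers over the whole space, or confine it to one pre-chosen "promising" region, and compare how much of the space each approach can ever see.

```python
import random

def coverage(num_searchers, steps_each, space_size, concentrated=False, seed=0):
    """Fraction of a discrete search space visited by random probing.

    With concentrated=True, all searchers are confined to the first 10%
    of the space, mimicking a top-down bet on one 'promising' area.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    visited = set()
    region = space_size // 10 if concentrated else space_size
    for _ in range(num_searchers):
        for _ in range(steps_each):
            visited.add(rng.randrange(region))
    return len(visited) / space_size

# Same total effort (100 searchers x 50 probes) over a space of 10,000 cells:
spread = coverage(100, 50, 10_000)
focused = coverage(100, 50, 10_000, concentrated=True)

print(f"Dispersed random search covers {spread:.1%} of the space")
print(f"Concentrated search covers at most {focused:.1%} of the space")
```

The concentrated strategy maps its chosen region very thoroughly, but by construction it can never cover more than 10% of the space, while the dispersed searchers, duplicating some effort, still reach far more of it. That is one way to read the point about the unexplored search space being the invisible cost of directed investment.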

There is the additional problem now that 'big science' needs big money. If there is a slowdown in the rate of 'breakthroughs', it may be because the breakthroughs we are looking for require very expensive science like colliders and huge fusion projects and such, and therefore the rate at which we can search the adjacent possible is lower.

The simple breakthrough that would positively affect world economic growth is lunar solar power at a cost of less than 1 cent per kWh, and the autonomous robots, heavy launchers, etc. necessary to realize this idea are appearing now.

This is a very interesting post. I wonder, to develop a theory of breakthroughs, do we also need to consider the processes of reversal and decline? We often need to understand the causes of the opposite phenomenon to understand the causes of the phenomenon of interest. For example, to identify the causes of war, we need to identify the causes of peace. Would a theory of dark ages help us develop a theory of breakthroughs? I'm not sure. Curious what others think.

#6: It's not clear what brain engineering means, but I assume it implies hardware and electricity. Chemicals that act on the brain are part of medicine or biochemistry.

A few days ago there was an article linking depression and gut bacteria. Advances in Biology can be among the breakthroughs. Most of us will experience dementia at some point.

This all seems like an attempt to attach an appearance of rigor to something that can barely be described, let alone measured. "Sh!t Happens: A Regression Analysis."

How about this breakthrough: a return to reason. Here's a new book that traces the history of reason, from its inception, its ascent, and recent descent, and offers some ideas on a return to reason:

#6: Re; neuroticism (generalised "negative" emotion) we pretty much already have mass brain engineering through SSRIs and various other drugs.

Are these a boon or a curse? You tell me.

What might be more interesting than generalized broad brush positive / negative emotion is finding elements where large segment of people in society are maladapted to modernity and adjusting those.

Particularly:

"A considerable proportion of the population in post-industrial societies experiences substantial difficulties in the domain of mating. The current research attempted to estimate the prevalence rate of poor mating performance and to identify some of its predictors. Two independent studies, which employed a total of 1,358 Greek-speaking men and women, found that about 40% of the participants experienced poor performance in either starting or keeping an intimate relationship, or in both areas."

It has been proposed that one reason behind the high prevalence of poor mating performance is the mismatch between ancestral and modern conditions (Apostolou, 2015a). More specifically, selection forces have adjusted the adaptations involved in mating to function optimally in a specific environment. When the environment changes, selection forces will adjust these adaptations to work optimally in the new setting (Lynch, 2010; Nielsen & Slatkin, 2013). Nevertheless, this process takes several generations, and in the meantime, there would be many individuals who have adaptations that are not well adapted to the demands of the novel environment, a problem known as evolutionary mismatch (Crawford, 1998; Li, van Vugt, & Colarelli, 2018; Maner & Kenrick, 2010). How many people are affected depends on how drastic and how recent the change in the domain of mating has been. If the change has been small, it is expected that most adaptations would interact with the novel environment reasonably well, so few people would be affected. On the other hand, if the change in the environment has been substantial, several adaptations may not be able to interact effectively with the very different environment, resulting in many people experiencing poor mating performance.

More specifically, anthropological evidence from contemporary preindustrial societies, along with historical evidence from ancestral preindustrial societies, suggests that the contemporary environment associated with mating is very different from the ancestral environment. In more detail, anthropological evidence indicates that in a preindustrial context, mate choice is typically regulated and individuals are not free to choose their mates, who are chosen by their parents (Apostolou, 2007, 2010). Evidence from a sample of 190 contemporary foraging societies indicated that the most frequent mode of long-term mating in about 70% of cases was arranged marriage, while free courtship marriage was a practice in about 4% of the societies (Apostolou, 2007). Evidence from contemporary preindustrial societies that are based on subsistence agriculture indicated that free courtship marriage was practised in 7% of societies for women and 23% of societies for men, while arranged marriage was the most frequent form of marriage for both sexes (Apostolou, 2010).

It's an oft-repeated canard that 'dating' and open courtship in modern societies are a return to ancient human forager norms, while 'arranged marriage' is the product of a patriarchal civilization. Not so. We're probably better adapted to winning status with our group through expertise and good disposition, and then relying on our kin to use that to arrange us a marriage. We're probably not well adapted to 'approach' or 'seduction' (males) or 'attracting' attention (females). This is probably a major cause of low intimacy and loneliness, two of the bigger drains on happiness in modern life. "Brain engineering" could change that.

Adapting appetite to a very food-rich environment is probably another one ("fix" people to have a taste for bland food, and not very much of it... consumed at restaurants popular with attractive women ;) ).

Mating difficulties in Greek society? Some people would suggest moving to the Philippines.

Yes on 5, 8, 10, and possibly 9. Let's hope on 7, meh on 6. Yes on 1.

I think of:

1. AGRICULTURE -> jural stratification (jural superiors)

2. PAPYRUS -> Axial age

3. PRINTING PRESS -> Reformation, war, natural jurisprudence, liberalism, print culture, "the public," the nation-state, democracy, political collectivism, war (e.g., WWI, WWII), modern social-democratic statism (Hayek atavism thesis)

4. INTERNET -> ?????

"This all seems like an attempt to attach an appearance of rigor to something that can barely be described, let alone measured."

By which you mean "a rough draft of a Tyler Cowen book."

But I think I liked it better when it was called "Tipping Point" instead of "Breakthrough."

#7: The required institutional breakthrough would probably be a non-Marxist-Leninist PRC. That seems not so likely? It's difficult to see how institutional integration of a Marxist-Leninist PRC is going to be possible; maybe some kind of compromise where China and other nations accept restraints in exchange for acceptance of the status quo is possible, though.

"Global governance" wouldn't do it. In practice it is generally just the dream that the Western technocratic class would run everything without the impediment of democratic control, unacceptable to both foreign autocrats and the Western public.

Regional integration a la the EU achieving full federalism would probably also not help peace. Two competing hegemons, the USA and the EU, in the Western sphere, neither singularly threatening to China, both more independent than under a looser, primarily military alliance of the West (preferably integrating Russia), doesn't seem to do much good. The EU doesn't do much to prevent war between European democracies, while increasing the perception of threat by neighbouring weak democracies (Russia, Turkey, etc.) which it can't integrate, so it probably increases the risks of war. (We should probably hope Pax Americana continues in the west, even if it can't hold in the east.)

Stopped at #7 and can proceed no further:

any credentialed academic (BU grad or no) or any cosmopolitan provincial pleased to predict on the cusp of Spring 2019 that humanity will reach century's end without dabbling in another World War or two (or three) arguably DOES NOT operate with any requisite sense of anthropological realism or of history (perhaps not even the specific history of applied technology).

The advent of Technogenic Climate Change ALONE will likely wind up provoking at least one World War all by itself: desperate wars for (depleted and/or vanishing access to) resources and even for territories deemed valuable or comparatively "non-threatened" likely stand to issue invitations of their own.

Granted, our lying and spying Tech Sector is full of hope that advances in AI will prove fruitful enough to help beleaguered humanity avert the very worst of the manifest troubles and plagues of its own creation: courtesy of our lying and spying Tech Sector, however, it's possible NO ONE will be willing or able to credit AI as any new gift to humanity once humanity understands it has its lying and spying Tech Sector to thank in no small measure for the advent of Technogenic Climate Change.

"Applied technology giveth and applied technology taketh away: blessed be the name of applied technology."

So far, no world war in almost 75 years. Climate change starting a world war? Okay...

What civil unrest might break out globally once climatologists and meteorologists issue guidance calling for the revocation of alllllllllllllllllllllllllllllllllllllllllllllllll those frequent flyer miles?

I plumb forgot about that.

An interesting counterpoint in science/math is fractals/chaos theory. The field lay dormant for many decades after being very fully developed, buried in obscure Russian academic journals, until someone saw how useful it could be and resurrected it in the 1980s, at which point it fueled many uses, since we were ready for it.

Oh brave new world that has such comment sections, where referencing a book titled with a Shakespeare quote is apparently unacceptable.

Still fascinating to see how someone is edging ever closer to DeLong-style insecurity when it comes to being confident in the assertions they are (not yet?) making.

This framework is good, but it is interesting that "gene" and "dna" and even "CRISPR" do not appear on this page.

The rapid progress on gene sequencing exemplifies the early part of your list. Computer and lab automation progress triggered massive progress in biology, and now we stand on a cusp.

We go from learning to read (dna) to learning to write (living things).

It is a strange moment to fear limits to growth.

I think this ties in with my own thinking (uninformed as it may be) that the collective wealth of a region is intimately tied to knowledge, and breakthroughs provide an engine for increasing that knowledge which grows collective wealth.

And I also happen to believe that while the big breakthroughs (like the discovery of crystal-clear glass or the invention of mass-production manufacturing techniques) make a lot of headlines, small breakthroughs and incremental learning also contribute a lot behind the scenes. Thus, a company that comes up with a better way to assemble widgets, or a better design for its product that reduces part count (and thus manufacturing costs), may not get the same headlines as (say) the invention of the cotton gin, but it does contribute incrementally to national wealth.

It's why I suspect entrepreneurship (which is, in some ways, putting money behind those incremental and often minor refinements) has helped the United States economy excel in modern times. And it's also how corporations drive costs down as they compete. (Meaning the supply curve cannot magically dip below the cost of manufacturing as supply exceeds demand, unless the suppliers figure out--through these minor breakthroughs--how to drop manufacturing costs.)

In other words, while Mozart through The Beatles may grab the headlines, don't underestimate the value of the work-a-day schlubs writing ad jingles... :-)

On Moore's Law discussion above .. computation and storage are essentially free. Greedy to demand much more progress there.

"5G" is also not on this page. That one seems obvious as a short term economic and technology driver. Possibly the ubiquitous computing capstone..

Well, the very fact that the revolution in cell phones/networks pretty much took place outside of the U.S. over the last generation (and from the outside, the U.S. still looks quite backward in this area in general) is not something apparently open for discussion here.

Bell and Motorola feature prominently, though there was certainly global innovation.

Have you travelled to other countries? For the last couple of decades, people coming from major swaths of Europe and Asia have been amazed at just how backwards the U.S. is in this area.

It isn't about the innovation per se (though the original iPhone was a laughably poor phone by European standards), but the infrastructure - and cost.

"On Moore's Law discussion above .. computation and storage are essentially free. Greedy to demand much more progress there."

Not if you want to build a Holodeck.

"Random" merely means "acts according to a system not included in this model". If you look at scientific advancements/paradigm shifts, they are far from random; various models have been built to explain them, and the fact that multiple people were working on the same revolution (evolution, calculus, even relativity) demonstrates that once you have the data necessary, the revolutionary theory is almost certainly going to occur.

This makes a certain amount of sense. These theories are facts of reality (or approximations thereof), not made-up stories. Once you have the data in place, someone will eventually figure it out. "Eventually" may be three generations down the road, but so long as scientific investigation continues it'll happen. It's the Law of Large Numbers; have a million scientists working on something and a one-in-a-million shot becomes quite likely.
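A quick sanity check on that arithmetic (a sketch with my own placeholder numbers, not anything from the post): with n independent attempts each succeeding with probability p, the chance of at least one success is 1 - (1 - p)^n, which for a million one-in-a-million shots is about 63% (1 - 1/e), not quite certainty, though it becomes one as the attempts keep accumulating.

```python
# Chance that at least one of n independent attempts succeeds,
# when each attempt succeeds with probability p.
p = 1e-6          # a "one-in-a-million shot" per scientist (illustrative)
n = 1_000_000     # a million scientists, one attempt each (illustrative)

at_least_one = 1 - (1 - p) ** n
print(f"P(at least one success) = {at_least_one:.3f}")  # ~0.632, i.e. about 1 - 1/e
```

Double the number of scientists (or let them keep trying over the generations the commenter mentions) and the probability does climb toward near-certainty: at n = 5,000,000 it is already over 99%.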

Was "random" an argument for or against broad public research?

I was arguing against the entire framework of the argument made in the original blog post. I'm arguing that his understanding of breakthroughs is fundamentally flawed, and more specifically that his first and second premises are not true. That doesn't necessarily mean that the arguments/assertions built upon that foundation are wrong, just that we can't use this line of reasoning to support them.

Not relevant, but does anyone have a recommendation for a history of the Spanish civil war? I'm especially interested in the . . . eclectic group of people who fought with the international brigades.

Concerning #10, Baroque music and science, breakthroughs in music are mostly about the human mind (& its linkage to the body) while breakthroughs in science are about both the structure of the world and our capacity for thought (in constructing theories and models, in constructing means of observation and measurement). Up above Aretino notes that "the real breakthrough of the Baroque period (1600-1750) was well-temperament." Given that, why was Baroque subsequently passed over in favor of the so-called Classical style? One could conjecture that the best composers found other things more interesting than complex counterpoint, but that only pastes a label ("more interesting") on the problem. Still, cultural history (evolution?) is full of such problems.

As for progress in science (and the like): a couple of months ago I had a longish post in response to Bloom, Jones, and Van Reenen, "Are Ideas Getting Harder to Find?", where I made arguments about two of their case studies, Moore's Law and drug discovery: "Stagnation 1: The phenomenon and a simple-minded model," with some remarks on search (pharmaceuticals) and process re-engineering (semiconductors). Here's a summary:

Bloom, Jones, Van Reenen, and Webb (2018) have examined three cases of R&D productivity: 1) Moore’s Law in semiconductor production, 2) crop yields, and 3) drug discovery for cancer and heart disease. In each case R&D costs rise more rapidly than increases in productivity. I introduce an informal spatial model, the White House Easter egg hunt, as a way of thinking about the problem. Then I consider two of their cases and suggest real-world interpretations for that model. In the case of Moore’s Law we face process re-engineering costs imposed by the fact that the character of physical phenomena change as scale decreases (with different laws coming into play in the quantum realm). In the case of drug discovery we’re up against search through a high dimensional space sparsely populated in an irregular pattern. These two factors seem rather general and not specific to these particular cases.
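The Easter egg hunt intuition can be made concrete with a toy calculation (my own sketch with made-up numbers, not the actual model from the post): if eggs are scattered sparsely over a large lawn and each random look costs one unit of effort, then with m eggs left among N spots the expected number of looks to find the next egg is N/m, so the marginal cost of each successive find rises as the field is depleted.

```python
# Toy "Easter egg hunt" model of R&D search with depletion:
# expected search cost to find the next egg rises as eggs run out.
N_SPOTS = 1_000_000   # size of the search space (hypothetical)
N_EGGS = 10           # discoveries actually out there (hypothetical)

for found in range(N_EGGS):
    remaining = N_EGGS - found
    expected_cost = N_SPOTS / remaining   # expected looks per find = spots / eggs left
    print(f"find #{found + 1}: expected cost ~ {expected_cost:,.0f} looks")
```

Under these numbers the first find costs about 100,000 looks and the last costs 1,000,000, a tenfold rise in cost per discovery with no change in the searchers' skill, which is the flavor of the rising R&D costs the paper documents.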

So if excess is always necessary and desirable, practically and ideologically, I guess a green revolution (sorry, I don't know what to call it - too bad that name was given to something else already, and so inaptly at that) is not going to be the next big thing? I've never heard Amory Lovins mentioned on this blog, for instance, and the one time I mentioned Dave Foreman's long-ago radical plan for species conservation, I was quickly shut down with the objection that he was against immigration because he was against population growth (as so many of us once were) and either had become, or this stance was enough to make him, a white ethno-nationalist or something equally toothless.

Anyway, environmentalists don't seem to have any prominence now. But it did seem like "sustainability" or whatever you want to call it was going to dominate the national conversation, at one time, but I guess a very serious and considered objection was made: it was equated with putting on a sweater, and for some reason that was risible. True, people enjoy arguing over climate change, but in a fashion largely unconcerned with the other animals, and the plants, that are here with us now, that we might reasonably have treasured.

Brain engineering? Do you mean education? :) I suspect increasing use of increasingly easy to use and rapidly improving models connected to real time data will lead to steady improvements in the quality of everyone's decision making and "breakthroughs" on many fronts.

Having just listened to the CWT podcast with Sam Altman, the changes and growth from infinite, cheap, non-polluting energy (from fusion) and A.I. both seem like *big* breakthroughs that we can pretty easily see coming, and could be of the sort that contradict Gordon's assertion.

Since inventions that are useful generally happen soon after the requisite enabling inventions exist, I'd suggest a good rule of thumb for predicting the near future would be to look at what has been invented in the near past, and what new useful things they might enable. Then more speculatively, you can look at the inventions that seem to be coming in the relatively near future, and try to figure out what they will enable once they are there.

For example, we have very recently seen a rapid reduction in the cost of access to space, almost by an order of magnitude. So that creates a new 'adjacent possible' for applications that were impossible or uneconomical when space access was more expensive. The obvious candidate right now is a global network of low-Earth-orbit satellites providing high-speed internet access, which was too expensive to do until now.

Then you could ask yourself, "What could we do once we have cheap, global internet access for everyone? How will that change things?"

In the near future, if Musk's very heavy lifter works as planned, space access could drop in price by another order of magnitude. At that point, building a hotel in space starts to look possible. Sending prospecting robots to the asteroids starts to be something that medium sized corporations can do. A human can fly into space for not much more than the cost of a luxury cruise. Bringing many more entities into the 'space economy' will increase the rate at which the new adjacent possibles are explored, and the rate of progress will accelerate, assuming we keep finding new things of more value than the cost of finding them.

Another area to look at is materials science. We have made rapid advances in 3D printing technology, including printing high-strength metal components. The SuperDraco rocket engines on the new SpaceX Dragon capsule are 3D printed.

We've cracked protein folding, and now understand how biological nanomachines work. This could lead to breakthroughs in all kinds of new targeted medicines.

3D printing promises to not just revolutionize manufacturing, but to make materials we've never seen before, such as 'mesomaterials' that get their characteristics from the geometry of the material. Think chain mail, which is flexible like cloth but also hard like steel. With 3D printing, we can start making materials that behave in unique ways at a micro level. We have no idea what kinds of new things this type of technology will enable.

Imagine if we also figured out how to print transistors and circuitry directly into 3D printed materials that can intelligently change their properties as needed. Imagine a shirt that monitors your body temperature and opens and closes its weave dynamically to keep you perfectly comfortable at all times. But even that would be a trivial example. If we had this kind of breakthrough in materials science, it would change everything.

It's extremely difficult to predict what the future innovations will be and when, but these seem to me to be very good predictions, utilizing our knowledge of the adjacent possible or the next-adjacent possible.

I have to wonder about that claim that R&D efforts are failing because, by gosh, it takes 18x as many researchers to keep pushing Moore's Law along. I mean ... the USA spent about 2.75% of GNP on R&D efforts of various kinds in 1970; fifty years later it is spending ... about 2.75% of GNP on R&D.

If there are so many extra people now working on electronics, it strikes me we've got a lot fewer people working on other areas of scientific/technological development. Pretty clearly the space program has not become 18 times larger. Nanotechnology hasn't gotten much further than Eric Drexler's papers 30 years ago. Deep-sea mining and oceanography haven't gotten much further along since NOAA was founded 50 years ago. High-energy particle physics seems to be stagnating. And so on. Other than the internet and modern computers, the only really brand-new technology I can think of is 3D printing.

Maybe I'm missing something, but it looks to me like technological progress is stagnating because of deliberate choice. Our legislators, our government bureaucracies, our investors, our businessmen have decided to plow their resources into computers and communication and let every other possibility slide.

"Nanotechnology hasn't gotten much further than Eric Drexler's papers 30 years ago."

This is crazy talk.

Next-generation sequencing...
Look at biomedical science. No one is seeing stagnation. And, yes, this is leading to new therapies. Immuno-oncology has changed the landscape of cancer treatment. There are many new oncology drugs in development. Look at CAR-T cell therapies. Again, real treatments...

One could say both have been around for decades, so nothing is new, even though the significant breakthroughs have been fairly recent.

Hmmm ... Nanotech used to be pictured as very tiny machines shifting about molecules or even atoms in great quantities -- as an industrial process. We've got CRISPR for manipulating genetic material, which is quite astounding, but it's not at that level. We've got graphene manufacturing down -- single atomic layers of pure carbon -- and might hope to extend the technique to other elements, but we aren't there yet. We've got IBM researchers shifting single atoms about, but we had IBM researchers doing that thirty years ago. As a transformational technology, nanotech hasn't yet arrived, and I don't see evidence that businessmen and venture capitalists are going to change this soon. Mostly when I see a reference to "nanotechnology" outside of Drexler's blog, it's a fancy way of saying "chemistry."

To say we aren't there yet is not the same as saying there have been no advances over the past 30 years.

#8 "It almost always feels like we are failing at progress." ...may be a necessary state for progress to happen. As you used to be so fond of saying, "incentives matter" and thinking that we are progressing swimmingly is something of a disincentive, or at least softening of incentive. See Dunning-Kruger, et al.

Curious about the endgame. We’ll get AI, cheap, clean energy, end of disease, etc. We might even figure out how to govern in a multidimensionally sustainable, equitable fashion. My question is ... then what?

I love reading everybody's comments.
