Assorted links


Why would they go from chocolate milk to skim milk? Skim milk is terrible. Skim Milk is to Whole Milk as Natty Light is to Guinness.

Busybodies never really concern themselves with "unforeseen" consequences.

This study was not done by nutritionists, but rather behavioral economists. The main point of the study was to support the sponsors' preferred approach to dealing with problems: nudging people (including six year olds) into making better decisions. Thus, they substituted chocolate milk with skim to best show just how bad forcing children to accept the "healthy" option is. Even then, the best they could show us was that kids who already eat way too much of everything might get a little less calcium and protein.

Expect a follow up study where the chocolate milk is placed behind the whole milk in unattractive and hard to open packaging that shows that, by nudging our kids fat bellies, we can get them to overconsume "healthy" foods instead of junk.

Except they are concerning themselves with unforeseen consequences.

I'd like to see the experiment repeated, but going from chocolate milk to coffee. And I want test scores as part of the evaluation.

Milk is bad enough, skim milk is really, really bad.

In a Piketty perspective it's interesting to read what Sober Look (UK housing prices) states about the return on housing capital: "UK residents will be paying increasingly more for shelter in the years to come." This does not seem to be the kind of 'risk-related' return Kling talks about: "My second disagreement with Solow is that he, like Piketty, omits any discussion of risk as a component of “r.” In that regard, Tyler Cowen’s skeptical review better accords with my own thinking. The way I see it, Piketty and Solow work with models that incorporate homogeneous workers (with no differences in human capital) and homogeneous capital (with no differences in ex ante risk or ex post returns)."

Wow. Does 'r', as a % of total capital, really stay level because the additional amount of capital (estimated at market prices, i.e. largely caused by increasing house prices) is so risk-taking that its average ex post return (that's what Piketty is talking about) is high enough to compensate diminishing returns to 'normal' capital? Or might there be an element of rent taking in the UK housing market? And indeed: Piketty explicitly states that his estimate of capital does not encompass human capital. He looks at people. So what? Does human capital explain the much higher flow of 'inheritance capital' and decreasing estate taxes?

Or might Michael 'R is for Rentier' Hudson be right after all?

I'm not sure how a consideration of the risk component of r challenges Piketty's conclusions. Let's say that the long-term return on capital is r, but that there is a volatility component (risk) so that the return in any given year could be positive or negative. This may cause some short-term dyspepsia for wealth-holders, but over time, the long-term process of increasing concentrations and returns to wealth will hold. That is the outcome Piketty wants to avoid. I don't see how a consideration of risk challenges that.

I wrote out a response, realized it was dumb, and then erased it. I just thought you should know that.

I have never understood the fanatical fetishization of "authenticity" by upper middle class food reviewers. Who cares if the food there is made the same way as it is made in some foreign land, so long as it tastes good?

Because the dining experience is not 100% about "tasting good" but also about learning something about another culture, and expanding one's horizons.

When I first had Japanese food, it was all I could do to eat a California roll. I could have just kept looking for things that tasted good to my current palate, or kept pressing on, expanding my horizons and learning to like the Japanese flavors on their own.

I agree completely...but if anybody tries to take away my fake American Chinese food, we are gonna have a problem. :)

Expanding your sensory horizons, sure, but what are you actually going to learn about China or the Chinese from an anal-retentively perfect dim sum reenactment? Nothing.

You learn what that dim sum tastes like. Then, if you prefer it, you can order more.

Agree it is a fetish for some, but the differences in taste are substantial.

For example, a modern pan-Asian restaurant with a high priced executive chef is likely to be fresh, colorful, delicious, and expensive, but it is more "American" than something you would find in Chinatown. The latter is more like grandma's cooking, and some of us have a preference for that.

1. Until just now, I've read "service related trauma" to mean injuries while in the military. But, Panda Man probably considers dining out a life or death experience.

4. Most of that is wrong or incomplete. Russia does not have to fight a war, so the point is moot, but they have the cushion to hold firm in their position through next year. That's really all that matters. They know Obama will sign off on anything to get the issue off the table long before the economics are an issue for them.

6. Cat intelligence is a puzzler. I have all sorts of beasts and cats are the most inscrutable. Dogs are bred to be our companions so they seem smart when they are simply doing what they are bred to do. Cats are a different matter. Our relationship with them is accidental and transactional.

Why is the Russian newspaper necessarily wrong here? The economy has already slowed and is likely to continue slowing if the crisis continues. (The sanctions are too limited to make an impact at this point; in reality it is the Russian wealthy moving assets out of the country that is doing the damage.) But what if the crisis lasts 12, 18, or 24 months and the Russian military is drawn into the fighting? At this point, Putin's ambitions appear to be larger than Crimea, so this situation could escalate into a war. Median Russian incomes are not that high, and if inflation hits 20 - 30% for over a year, that is going to cause a lot of unrest very quickly.

Remember, if the Iraq war had lasted only 6 months, then W. would have been a hero.

I'm surprised the article says cats pass the pointing test, because that contradicts my observations. If I put a nice morsel on the floor and point at it, a dog will look where I'm pointing, see the morsel, and run over to gobble it up. A cat will look at the tip of my finger. I have to go over to the morsel and touch it with the tip of my finger before the cat will notice it.

Every dog I've had has been about what I expected in terms of intelligence. No two cats have been the same. I had one cat that appeared to be retarded. At the other extreme, I had a cat that was eerily intelligent. The latter could open doors, figure out how the alarm clock worked (he would turn it on if he was hungry), and so on. The lack of human intervention probably results in a wider range of outcomes.

No two cats have been the same.


Aye. What their preferences are; whom they trust and whom they do not; modes of communication; esoteric agreements with other cats about who gets which swatch of territory when...

I tend to assume your 'retarded' cat just was not amused by performing such tricks; no cat takes much of an interest in impressing the staff around the house. A proper servant is just a pair of hands...

"Dogs are bred to be our companions so they seem smart when they are simply doing what they are bred to do."

Dogs are smart. Clearly not human smart, but compared to most of the animal kingdom, they are well in the top bracket.

Indeed, from the article: "Researchers have shown that Fido can learn hundreds of words, may be capable of abstract thought, and possesses a rudimentary ability to intuit what others are thinking, a so-called theory of mind once thought to be uniquely human. "

Defining smart in animals is dodgy stuff. Brain mass to body mass rations probably get close. But, the areas of the brain that are more or less developed matter too. Otherwise ants would be the smartest animal on earth. Early humans had developed complex behaviors, but were not smart by human standards.

As to dogs, they eat their own poo so let's not get carried away with their IQ.

I have similar experiences, and I feel less retarded about all the times I add an N to the end of "ratio."

Typing lessons were amazingly useful for me, but the typing habits can be amusing.

It looks like the US needs to ease lending standards or else we are in danger of pricing the middle class out of the housing market. Cash purchases of homes are hitting new record levels. A modern economy should take advantage of more debt.

I dunno. By that standard, the US has a pretty modern economy:


We are getting older. If an empty nester downsizes, they are likely to do it in cash.

#2 A claim that "Reverse-Engineering of Human Brain Likely by 2020" sounds astonishingly stupid to anyone who actually understands the underlying biology. Here is the thing: the brain is nothing like any computer. And *every single living cell* (in the brain or elsewhere) is already infinitely more complex than any of the supercomputers in existence today.

Reverse-engineering of the human brain is not likely in the lifetime of anyone living today, if ever.

Care to bet?
We are a lot closer than you think... and we now have multi-billion-dollar R&D efforts working on the problem.

We 'reverse-engineered' bird flight without a deep understanding of a bird's biology, or even crude aerodynamics. I don't think it is out of the realm of possibility that in 10 years we have a 'Wright Brothers'-type event analogous to simulating the function of the brain.

I think PZ Myers has a drastically higher bar for what it means to reverse engineer a brain than what a practical engineer has in mind. For example, we don't need to know how each protein works because the computer brain won't be using proteins to connect/control artificial neurons. The hard, undiscovered part is understanding the general learning algorithms of the neocortex.

Oh yes, I'd be more than willing to bet! Any amount would be fine with me. To make it worth the bet, let's bet some good money! But first, do describe your threshold for "reverse-engineered brain", OK?

The number of the insane predictions about biology and biotech made by people who do not understand biology is ridiculously high - wish I could bet on every one of those cases. Even the innocuous-sounding "$100 genome" is not going to happen...

Well, really, the comment “Reverse-Engineering of Human Brain" doesn't mean anything. Without specific benchmarks, it's not even a bet that you can declare a winner on.

Exactly. That's why I started by asking what threshold "jpa" would like to use.

#2 is the usual stupid stuff from non-technical folks about the "special" nature of the human brain. Just as we discovered that 1) the earth is not the centre of the universe, 2) "organic" is just regular chemistry, and 3) we are animals as well, eventually everyone will accept that the brain is a wet computer and "thinking" is just another form of computation. And yes, the genome does contain all the information needed to code the human brain; anyone who doesn't accept that is not worth listening to.

All Kurzweil is doing, generally, is extrapolating Moore's law. He makes the point very strongly that people do not have a good mental understanding of the impact of doubling processes (which this link and some of the responses here prove). Basically, these look for a very long time like nothing is happening, and then very rapidly we see massive change (the rice grains on a chessboard is a good example). So slow progress on AI is exactly what we should see with Kurzweil's model. But it does not mean AI is not just around the corner. Postulate a lot more complexity than we think now, and all it does is delay the time by a few years in a doubling scenario.
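The chessboard doubling process mentioned above can be sketched in a few lines; the point is how unremarkable the first half of the board looks next to the second half:

```python
# A minimal sketch of the rice-grains-on-a-chessboard doubling process:
# one grain on the first square, doubling on each subsequent square.

def grains_on_square(n):
    """Grains on the n-th square (1-indexed): 2**(n-1)."""
    return 2 ** (n - 1)

def total_grains(n):
    """Total grains on squares 1 through n: 2**n - 1."""
    return 2 ** n - 1

# Halfway through the board the total still looks almost manageable...
print(total_grains(32))   # 4294967295 -- about 4.3 billion grains
# ...but by the last square the second half utterly dwarfs the first.
print(total_grains(64))   # 18446744073709551615
```

The first 32 squares hold roughly 4.3 billion grains; the full board holds over 18 quintillion, which is the sense in which a doubling process "looks like nothing is happening, and then very rapidly we see massive change."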

Of course you can argue that Moore's law is about to fall off the edge of the cliff. But many people have been wrong about that before, and there are certainly no theoretical barriers to increasing computing power for many more doublings, regardless of whether or not the current silicon-based technology is the one to get us there. Ironically, the human brain is the best example of this that we know. So AI and uploading will happen, because it is technically feasible. I personally really hope it takes a lot longer than 20 years though. There are some bad people out there and they will be working to be the first to be uploaded and take advantage of the rest of us. I don't know if the human race can actually survive this.

Kurzweil fan-boy.

What a cogent, incredibly well thought out and principled rebuttal, Beefcake the Mighty. I look forward to your further expositions and analyses with great interest.

Ironically, the people doing the most to understand how the information in the genome is a function of the gestalt of the genome are the Intelligent Design advocates, because they want to highlight how much information is there (too much, in their opinion, for evolution). It's actually quite interesting, and protein folding is but one example.

Remember, the Birchers were crazy, but also far more prescient than the then-current conventional wisdom on socialism.

Warning: link #2 is PZ Myers. Other than being sarcastic and often nasty, I've yet to read anything by him dealing with the mind - whether it be EP or psychometrics (two of his common foes) - where he hasn't been completely ignorant of the topic and gotten nearly everything wrong, all while using straw-man arguments.

(See his back&forths with Robert Kurzban on EP or past blog posts on race and/or psychometrics where a psychology prof (forget his name) often appears in the comments section and makes him look like a fool.)

Can't waste my time reading this unless others opine that he's represented Kurzweil fairly and gotten some things right.

I am not a fan of PZ, but in this case he got everything right.

I've always enjoyed Kurzweil, but it sounds to me like PZ is right here, and a prick about it.

PZ does not even consider the power of the blockchain in computational neuroscience applications. Sorry, but with more and better engineers available in the Valley, the timeframe for Kurzweil's plan could even be moved ahead.

Another small point where PZ gets it wrong is that he makes a separation between code and data. In many AI languages (like Lisp), code is data and data is code. It's not a big deal, but it does show that he is not familiar with AI computer science at even the undergraduate level, and should probably not be taking someone to task on the topic of AI.
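As an illustrative sketch of the code-is-data idea (in Python rather than Lisp, and entirely my own toy example), a program can treat the same Lisp-style expression both as code to evaluate and as data to rewrite:

```python
# Toy illustration of "code is data": a Lisp-style expression is just a
# nested list, which a program can both evaluate and transform.
import operator

OPS = {'+': operator.add, '*': operator.mul, '-': operator.sub}

def evaluate(expr):
    """Evaluate a nested-list expression like ['+', 1, ['*', 2, 3]]."""
    if not isinstance(expr, list):
        return expr                     # a bare number evaluates to itself
    op, left, right = expr
    return OPS[op](evaluate(left), evaluate(right))

def double_literals(expr):
    """Treat the same code as data: rewrite every numeric literal to 2*n."""
    if not isinstance(expr, list):
        return expr * 2
    return [expr[0]] + [double_literals(arg) for arg in expr[1:]]

program = ['+', 1, ['*', 2, 3]]
print(evaluate(program))                   # 7
print(evaluate(double_literals(program)))  # 2 + (4 * 6) = 26
```

The same list is executed by `evaluate` and manipulated as plain data by `double_literals`, which is the blurring of code and data the comment refers to.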

I was too late for the warning! PZ Myers ... yish

You beat me to it: Kurzweil may not understand the brain but Myers *really* doesn't understand the brain.
It may be worth reading through some of the comments, though.

#2 For what it's worth, I've never thought much of Kurzweil. I actually read his book on the Singularity and also his book on aging and thought his predictions were simplistic and some even downright silly. Also, for someone who takes supplements and aging treatments that have supposedly lowered his biological age, he seems to be physically aging rather normally. Again, just my nonexpert opinion.


It sounds to me like both Ray and PZ are wrong.

PZ seems not to really understand information theory when he says "Kurzweil knows nothing about how the brain works. Its design is not encoded in the genome"; it seems that the point Ray was making about the brain is that its Kolmogorov complexity is bounded by the number of bits of the compressed genome, whereas PZ misunderstands this to be a claim that we can literally reproduce embryonic development in silico - an entirely different claim.

If that is the substance of Kurzweil's claim, it is speculative to the point of being not-even-wrong. No one has demonstrated that the complexity of a biological system is limited by the informational complexity of the genome. As PZ Myers indicates, other mechanisms can and do transmit biological information, including "epigenetic" mechanisms which lead to bases such as methylcytosine, hydroxymethylcytosine, and formylcytosine in the genome sequence. In some cases the conformation of the genome -- the literal 3D arrangement of the DNA strands -- may also transmit information. If you expand the DNA alphabet to include these extra three characters (seven total) instead of four, the number of bits required to represent each base grows by half (three bits instead of two).

Now, consider that the conformation of the DNA -- literally the 3D shape of the strands as they wind (or don't) around chromatin, etc. -- may also transmit information. Each base of DNA contains, say, 40 atoms. So there are about 40 x 5 billion = 200 billion atoms in the genome. Each of these atoms has 3 (x, y, and z) translational degrees of freedom, meaning that there are 600 billion degrees of freedom in the genome. And each degree of freedom is a continuous variable. Let's say we could represent each d.o.f. with 8 bits. That means now we need 4.8 trillion bits to represent the genome. Applying the dubious "lossless compression" correction that Kurzweil does, we get to 37.5 billion bytes required to represent the genome. That's about 1000 times different from Kurzweil's number.
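The back-of-envelope arithmetic above can be spelled out in a few lines. Every input here (40 atoms per base, 5 billion bases, 8 bits per degree of freedom, and the 16x compression factor implied by the 37.5-billion-byte figure) is the commenter's assumption, not an established number:

```python
# Back-of-envelope: bits needed to record the 3D conformation of a genome,
# using the commenter's (hypothetical) inputs.
atoms_per_base = 40               # assumed atoms in one DNA base
bases          = 5_000_000_000    # assumed bases in the genome
dof_per_atom   = 3                # x, y, z translational degrees of freedom
bits_per_dof   = 8                # assumed precision per degree of freedom
compression    = 16               # factor implied by the 37.5e9-byte figure

atoms = atoms_per_base * bases            # 200 billion atoms
dof   = atoms * dof_per_atom              # 600 billion degrees of freedom
bits  = dof * bits_per_dof                # 4.8 trillion bits
bytes_compressed = bits / 8 / compression # 37.5 billion bytes

print(atoms, dof, bits, bytes_compressed)
```

The point of the exercise is not the particular numbers but that equally plausible starting assumptions move the estimate by roughly three orders of magnitude from Kurzweil's figure.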

Next, consider that quantum effects might be important, and repeat the calculation.

This comment is a good example of how people mock Kurzweil based upon arguments that are much sillier than those of Kurzweil; and because Kurzweil's conclusions are wacky, such critics get away with it.

No, I'm using exactly the same "arguments" as Kurzweil, but different starting assumptions. And my assumptions aren't demonstrably worse than his. Neither are they better. But that's the whole problem. The assumptions aren't empirically tested, and may not even be falsifiable.

I think you are right here.

PZ appears to be saying that the complexity of the human brain is orders of magnitude more complex than the information contained in the genome; they are not blueprints.

PZ does appear to give a tangential and fleeting acknowledgement that complete replication of the brain isn't necessary to have an effective simulation. He also intimates that our ability to measure how well the simulation compares to an objective benchmark of cognition depends crucially on understanding more about the brain. In effect, he is saying that we can't simulate it because we don't know enough about it AND we can't judge how well we have simulated it because we don't know enough about it.

He seems to understand far more than Kurzweil. Have I given this prick too much credit?

I do think that simulations can reach effective comparability to biological systems without being as physically or technically complex. Biological systems are not at any point in time optimally efficient and there is no guarantee that they will be optimally efficient in the long run. Given an infinite amount of time they would, but evolution is as much a product of accident as it is a measure of success. The most perfect human who ever lived died when her father left her in the car on a hot day because the Bulls were about to tip off.

Tyler, as a restaurateur I find your recent fixation on online reviews to be fascinating and depressing in equal parts. Are you planning on ditching academia to open a pupusería?

Economists love data.

People discussing Piketty are always discussing 19th century rentiers and the robber barons as if they were the same thing. It is clear that those who sat on their wealth (mostly land) and tried to earn passive returns did very poorly. The late nineteenth century wealth inequality at the top of the distribution was driven by returns to risk taking, entrepreneurship, and in some places cronyism, but never to being a rentier. In fact, THAT was the true lesson of Balzac (and even Zola who scorned the clueless landlord as both being oppressive and swept aside by the waves of progress).

1. Our prissy, whining society has gone too far when bad restaurant service is considered "trauma." Now, dealing with an Indian call center for a hardware issue - that is traumatic!

4. It isn't. The Soviet Union wasn't geared up for war in the late 80s (not that we knew at the time) and Russia is no better now. But as Z says, they don't have to be. Our Nobel Peace Prize winner won't attack no matter what Russia does.

6. Because humans are the test subjects of the cats.

Dogs are genetically disposed toward mirroring the behavior of others. Cats certainly learn from others but their self-learning skills are incredible. Cats are tenacious, but they are also energy conservers much more than dogs. Unfortunately, the biggest failure of the researchers was to recognize that their inability to observe controlled experiments on cats is a valuable observation in itself. Dogs are cohabitants while cats are opportunists. All cat behavior from grooming, to burying their feces, to hiding, to cuddling is part of their sophisticated survival skills. They are nearly the perfect survival machines.

7. I for one look forward to the end of Piketty's days in the spotlight. A well-adorned incorrect argument is ultimately incorrect. And even if every conclusion he reaches is correct, his proposed solutions are antithetical to liberty.

I talked about the tension between diminishing returns and technological advancement here just a few days ago. I suppose one has to have a Nobel Prize to gain attention for stating the obvious. That Solow is enamored with Piketty should surprise no one. That he has, as observed by Kling, softly accepted Piketty's glaringly unsupported assumptions is also not surprising.

The economics profession is, I'm afraid, a little too collegial in its criticism of those within the academy. Kling gets away with being critical because he is an outsider. Mankiw, for example, wouldn't blatantly disagree with Piketty in a way that diminished Piketty's esteem. They take indirect action. I find TC's criticisms more honest and direct, albeit also too kind. Piketty knew his weaknesses before his book was published. Never let facts or reason get in the way of creating a highly politicized sensation worth millions!

#5 Another example where greater immigration to the UK could help spur the innovation needed to respond to the housing shortage. Indian software engineers seem like good candidates to help British companies develop blockchain-based solutions to housing construction.

You know, I had never thought of it that way before.


Reminded me of a quote from my favorite video game, Alpha Centauri:

"Remember, genes are NOT blueprints. This means you can't, for example, insert "the genes for an elephant's trunk" into a giraffe and get a giraffe with a trunk. There are no genes for trunks. What you CAN do with genes is chemistry, since DNA codes for chemicals. For instance, we can in theory splice the native plants' talent for nitrogen fixation into a terran plant."

Did you just discover gene-splicing? Lol.

Brad DeLong has one of the best critiques of Piketty that I've read. It really lays out the issues well (although he still raves about the book.)

According to DeLong, a key to Piketty's argument is that the political system will ensure that as capital increases, labor is not paid its marginal product. This is his answer to criticisms like Rognlie's. "And here we have passed out of neoclassical economics entirely...we have arrived at the point that Piketty needs to write another book."

What's it smell like?

As Solow elaborated Piketty (“steady state”), the view proposed seems mechanical. Every so often, something like a war mucks up the gears for a bit, but then the mechanism is cleansed and we're right back into a well-functioning mechanism. This view has a number of unpleasant consequences, in my view.
1. It takes a major social dislocation like a war to muck up the gears. This can't be good news.
2. If it is mechanical, there isn't really a lot we can do about it other than to blow it up.
3. Any social changes can look like epiphenomena, meaning that they can easily be reversed.
4. It's mechanical, not human agency based. A mechanical view can entomb a viewpoint in a lot of serious problems trying to explain why people are doing one thing rather than another, leading to unhelpful generalizations.
Now, I know Piketty doesn't want his view to be mechanical, but if it isn't, it's not clear how it can relate our current economic and social circumstances to those of, say, a hundred years ago. In 2008, I was bothered by comparisons between the 1930s and now because the world of the 1930s seems much more frightening than our current world. In the 1930s, for example, lots of people across the political spectrum believed capitalism was dead. Summing up, I suppose I'm saying that we need to focus on our current situation to get anything done, and it's rather its uniqueness that's causing our problems, including the problem of getting people in power to employ the few things we can learn from the 1930s. But I'm still not quite finished with the book, so who knows?

The 1930s were in one respect less frightening than today - technological progress, as indicated by measures like growth in total factor productivity, was increasing much more rapidly than in recent decades. This may be more threatening to capitalism's prospects than Marxism-Leninism.

I am not sure whether it is the French economist or Solow who cannot imagine that wealth is also held by the 99%. I at least invested in the stock market and hope to increase my retirement funds that way. Saving is also deeply ingrained in German culture. We believe that saving and a little austerity in our personal lives is prudent for the economy and for ourselves.
Actually, Piketty's arguments only validate this belief, imo.

However, I also think that many of the commentators are wrong who blame the problems in the US on the loss of labour unions and the idea that only well-paid jobs create productivity gains and thus economic growth.
First off, in that case France, Italy, Greece, and pre-2000 Germany should have been world leaders in growth and invention. They weren't.
Germany was stifled by a rigorous labor regime, the Tarifvertrag, which was partly countered by the Agenda 2010 and its legalization of low-wage jobs, which then accelerated German resilience during the 2nd Great Depression, as it is called nowadays.
Then we have France, Spain, Italy, and Greece. They all have jobs that are, compared to e.g. US or German jobs, highly paid, especially for low-skill jobs. A French teacher earned at least as much as a French engineer, after taxes. The situation was similar in Greece, Italy, and Spain. It was worsened by an even more rigid labour market than Germany's. In France an employee in a firm bigger than 50 people was almost untouchable; you couldn't fire him.
Following the logic of most of the commentators, the economies in all these countries should have started to show higher productivity and invention paired with a more equal society. I don't think they did. At least, not in my opinion (as much as I still believe that Japan is one of the most unproductive countries in the world: 16-hour workdays and they can't finish on time).

Also, in all these discussions on inequality, I am missing the progression of prices and the reduction of poverty due to technological progress. It is something which Don Boudreaux stresses, and I think it is very underappreciated. Especially when the 99% send Twitter updates from their occupy camp in the middle of NYC on their tablets or smartphones while drinking a Starbucks coffee...

Look at food stamp usage, medicaid enrollment, social security disability numbers, etc. Of course a bunch of college kids have computers.

Instead, take a look at the computer screens in public libraries. The users of these computers have to be people that can't afford either computers, internet access or both, that's why they've wandered over to the library for free computer access. Are these people filing job applications by email or blogging about their financial straits? No, they're playing games, watching movies and youtube videos of dog tricks and goofing around on facebook.

Sorry, Chuck, some actually do use the computer to fill out job applications. My wife was a librarian and taught some of those folks how to use computers to do just that. Maybe more people at libraries use computers to fill out job applications than a like percentage who have computers at home.

High UK housing prices are pretty much a London and South-East phenomenon. Outside of there, there is loads of cheap housing stock up and down the country.

Beware: #2 is nearly 4 years old, and the projection of 2020 in the title was published in error (it should have been 2030; corrected at Wired, still wrong at Gizmodo), which might change some opinions here (although PZ didn't care at the time). To be fair, Kurzweil's response can be read at

Even when I agree with him it's difficult for me to read PZ. He comes across like a cable news pundit of science blogging. It is interesting in any case how threatened people seem to be by Kurzweil's ideas and prognostications, and how rabidly he is attacked (and defended for that matter).

Sure thing, PZ is a dickhead and an ideological zealot. Still, he is right in this case and changing the date to 2030 is not making Kurzweil's prediction any less silly.

13 years ago, I heard one well-respected structural biologist predict that in 10-20 years there would be no need to solve protein structures because the folding problem would be solved. He then went further, claiming that in 30 years there would be no need for experimental biology at all because we would be able to model whole organisms and predict the functional consequences of any individual mutation. I could barely hold myself back from laughing out loud. 13 years later, his numbers are just as unrealistic as they were then. And protein folding is by all accounts a trivial problem in comparison to AI.

I will agree that PZ succeeded in kicking the leg off his own straw-man reconstruction of Kurzweil's argument. But as Kurzweil said, "[my] discussion of the genome was one of several arguments for the information content of the brain prior to learning and adaptation, not a proposed method for reverse-engineering." We may someday have the ability to boot up an organism from a genome (along with a suitable environment), but whether PZ thinks that is possible soon or ever doesn't prove or disprove the (admittedly slippery) assertion Kurzweil did make.

I think the key observation in what Kurzweil did say is that "basic principles of operation" can mean different things to a biologist and a computer scientist. If we take it as "basic principles of operation required to reproduce the externally observable phenomena of the brain on a transistor based silicon substrate", then it is readily observable that it is perhaps wholly unnecessary to know anything at all about the RHEB protein or perhaps even the operation or existence of proteins in general (which is not to discount that such knowledge may in fact help in understanding).

I first heard of Kurzweil in 2004 and am a fan, as I had made strikingly similar predictions since the 1980s, including arguing in 1985 that the Soviet Union would fall within 10 years and that Russia would become a democracy. Our errors are also similar, like when cancer treatments will be almost cures (Kurzweil said 2009, I said 2014) and that Western disease would be mostly overcome: Kurzweil said 2019; I said, in 2002, by 2020. Six years to go. When a computer beats the world's greatest chess players: Kurzweil knew of Deep Blue, so said in 1989 it would happen in 1999. I was thinking of a PC, so argued in 1989 it would happen in the mid-2000s.

But when it comes to strong A.I., I said in 1990 it would be decades from then -- if ever. Kurzweil has stuck with 2029.

As to "fully reverse engineer the brain", I'm not sure what to make of that.
Also, the PBS NewsHour transcript is no longer up (Ray, did you ask them to take it down?...). I'm almost positive that Kurzweil told David Gergen, "By 2020, we will have completely reverse engineered the brain -- but that is just the hardware. It will take until 2030 for the software," to produce human-level A.I.


You must be on the Harvard faculty.

I think that's funny... somehow...
Nope. Just a MR reader like you.

OK, one more. The point that I was going to add is that these predictions, though off at times -- Kurzweil thought driverless cars would be common on highways by 2009 -- aren't hard to make if you keep Moore's Law in mind.
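For concreteness, here's the back-of-the-envelope arithmetic behind that kind of Moore's Law prediction. The two-year doubling period and the 1000x supercomputer-vs-PC gap are illustrative assumptions, not measured figures:

```python
import math

# Rough Moore's Law extrapolation: compute per dollar doubling
# every ~2 years (an assumed, illustrative doubling period).
def moores_law_factor(years, doubling_period=2.0):
    """Multiplicative growth in available compute over `years`."""
    return 2 ** (years / doubling_period)

# From 1989 to 1997 (Deep Blue beating Kasparov): ~16x more compute.
print(moores_law_factor(8))   # 16.0

# If a commodity PC trails a supercomputer by an assumed ~1000x,
# closing that gap takes roughly 2 * log2(1000) years:
print(2 * math.log2(1000))    # ~19.9 years -- hence "mid 2000s" for a PC
```

This is exactly why such predictions can be right about the trend and still be off by several years on any particular milestone.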

Re 4: I find it useless to try to predict the occurrence of a war based on accounting. If bean-counters decided things, or had major influence over them, most wars in modern history wouldn't have happened.

I almost never find these discussions informative, as someone who has studied both the brain and computation for 30 years. The same conceptual errors are made over and over again. Anytime you hear someone say "the brain is not a computer", it is isomorphic to "the mechanisms that implement the (informational) mind in brains are not based on physics". The error is the assumption that the if-then-else statements of a computer can directly implement higher levels of cognition. Of course not. But if you look at emergent behavior from Monte Carlo simulations, you realize the behavior is not predictable from the if-then-else statements, any more than higher levels of cognition are predictable from synapse firings, in principle.

In general, when people say, "You don't get it, biology is so much more complex than xxx", they are saying we lack an understanding of the organizational principles that make the apparent complexity understandable. Cognitive and biological scientists have made great progress on this over the last few decades, and a lot more is understood and being understood than these conversations indicate, but, ok, they are not all the way there yet.

Progress is happening on all fronts, driven by competitive pressures, but predicting the rate at which key insights will arrive has historically not been very reliable. However, dramatic advances in computational capacity make ever more elaborate experimentation with models of our theories possible, and testing of ideas will accelerate. I don't think we have the basis to predict when we will have a critical mass of understanding needed to model minds in brains or other substrates, but it seems likely to me that it will happen, given the trajectory of progress. A useful discussion would outline what we do understand, and what we don't. We are learning a lot about how complex, hierarchically organized systems are organized and operate, but not much yet about how experience and education use such a substrate to construct mature minds.
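A toy illustration of the emergence point above: a bell-shaped population distribution arises from nothing but coin-flip if-then steps, even though no line of the rule mentions a bell curve. Step count and sample size here are arbitrary choices:

```python
import random

random.seed(0)

# Low-level rule: each "particle" takes 100 independent +1/-1 steps
# (a bare if-then on a coin flip). Nothing in this rule states the
# emergent, population-level regularity: a Gaussian-shaped spread
# of final positions with mean ~0 and variance ~100.
def final_position(steps=100):
    return sum(1 if random.random() < 0.5 else -1 for _ in range(steps))

positions = [final_position() for _ in range(10_000)]
mean = sum(positions) / len(positions)
var = sum((p - mean) ** 2 for p in positions) / len(positions)
print(round(mean, 1), round(var))  # mean near 0, variance near 100
```

The higher-level description ("a normal distribution with variance equal to the step count") is the useful one, and you would never read it off the if-then-else statements directly.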

1. No one is saying "the brain is not a computer". A lot of people are saying that the brain is infinitely more complex than any computer we have today.

2. "In general, when people say, “You don’t get it, biology is so much more complex than xxx”, they are saying we lack an understanding of the organizational principles that make the apparent complexity understandable."

Exactly false. We do have a solid understanding of the organizational principles of the existing complexities. And this understanding leads to the realization that most of biology is simply not tractable computationally. Crude models demonstrating basic principles - easy. Anything "real" in silico - no way; too bloody complex.

"infinitely more complex" -- is that a technical term? By what measure? Physical connections? Information processing? Minds are far more complex than any informational thing we have constructed so far, but brains? When you are familiar with descriptions of the emergent phenomena that occur in brains and other advanced substrates that evolve information structures, it appears less mysterious, less impossible.

"not tractable computationally", "too bloody complex" -- you make it sound like this is an in-principle statement rather than the current state of an (accelerating) technology. When people see the combinatorially explosive computations required to model complexity, they assume it is impossible to beat -- until, oh, they find exponentially convergent algorithms such as adaptive, learned-bias sampling (what goes on in cryptography and biology). Systems biology is making great progress, as are many other computational models that people previously thought were "too bloody complex". Computational models are a representation of our understanding of such complex systems. Our ability to deal with complexity is accelerating. Don't assume progress toward daunting problems will be linear. Looking at the early stages of an S-curve and extrapolating linearly misses the inflection point.
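To make the S-curve point concrete, here's a toy logistic curve (all parameters invented for illustration) showing how a linear fit to the early, slow-looking phase wildly underestimates the value at the inflection point:

```python
import math

# Logistic "S-curve": capability(t) = K / (1 + exp(-r * (t - t0))).
# K, r, t0 are made-up illustrative parameters, fitted to nothing.
def logistic(t, K=100.0, r=1.0, t0=10.0):
    return K / (1 + math.exp(-r * (t - t0)))

# Linear extrapolation from the early phase (t = 0 to 2), where
# growth looks negligible:
slope = (logistic(2) - logistic(0)) / 2
linear_at_10 = logistic(0) + slope * 10

print(round(logistic(10), 1))    # actual value at the inflection: 50.0
print(round(linear_at_10, 2))    # linear guess: 0.15 -- off by ~300x
```

The linear extrapolator concludes nothing is happening right up until the curve goes vertical.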

"many other computational models that people previously thought were “too bloody complex”

Nope. There is NONE.

Talking about this to people who don't understand basic biochemistry and cell biology is evidently hopeless. Do put money where your mouth is and bet me. How about a really, REALLY simple one: an ab initio (no previously known domain structure) predicted structure of a protein larger than 50,000 Da, confirmed experimentally with an RMSD of less than 1 Å? I guarantee it's not happening by 2020. (And that is infinitely less complex than brain emulation! And yes, in this context "infinitely" is a technical term signifying too many orders of magnitude of difference to even bother comparing the two.)
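For readers unfamiliar with the metric in the bet: RMSD is the root-mean-square deviation between corresponding atoms of a predicted and an experimental structure after superposition. A minimal sketch with made-up toy coordinates (it skips the Kabsch alignment step that a real comparison requires):

```python
import math

# RMSD between two equal-length lists of 3-D atomic coordinates
# (in Angstroms), assuming the structures are already superposed.
def rmsd(coords_a, coords_b):
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.5, 1.0, 0.0)]  # second atom displaced by 1 A
print(round(rmsd(a, b), 3))  # 0.707
```

Sub-1-Å RMSD over a 50 kDa protein means essentially every one of thousands of atoms lands within experimental error of its true position, which is what makes the bet so hard.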

One question on #4: Why is a link to an English-language media source being washed through Microsoft Translator? It reads identically in the direct link:

Myers seems to have three objections: (a) "10 years," (b) "The genome is not the program; it’s the data.", (c) "You aren’t going to be able to simulate a whole brain until you know precisely and in complete detail exactly how this one protein works."

(a) Billions of dollars in purchases of A.I. companies in the past year suggest something less than "magic solutions completely free of facts and reason."

(b) The genome provides the code for three-dimensional chemically regulated molecular machines and is activated by other parts of the genome.

(c) Myers doesn't indict Kurzweil's revealed expectation for a continued exponential rate of increase in computation. The first iPad was released on April 3, 2010. Here is a video of rat neurological tissue controlling a robot:

