Sunday assorted links


So, is 2 'an example of how the economics profession emphasizes one kind of rigor and almost completely neglects rigor in argumentation and the application of concepts' or not?

Because spending inequality seems a fascinating concept, at least when one accepts the following - 'Take 40-49 year-olds. Those in the top 1 percent of our resource distribution have 18.9 percent of net wealth but account for only 9.2 percent of the spending. In contrast, the 20 percent at the bottom (the lowest quintile) have only 2.1 percent of all wealth but 6.9 percent of total spending. This means that the poorest are able to spend far more than their wealth would imply—though still miles away from the 20 percent they would spend were spending fully equalized.'

See, things aren't that bad really - why, the poor can go into debt in a fashion only available to the landed gentry a couple of centuries ago. And really, who cares that the bottom 20% of the population only spends a third of what they would be spending in a fully equalized setting.

Naw, I think the many Trump voters in that lowest quintile deserve at least 10%, not 6.9%, which would be a solid 40% more than they spend now.

I'm sure Trump has an answer for how to achieve such a desirable, anti-establishment goal. One that never even once raises any scary communist ghosts.

I highly doubt that the poorest 20% consumed even 6% of consumption under Stalin.

Yes, the study in #2 simply takes into consideration that the poor spend more than the rich, proportional to their income. Apples and oranges, since most people think of inequality in terms of wealth, not spending.

Wealth (capital) is just deferred consumption. This is why lifetime consumption is a better measure than current period income/consumption/wealth. There are complicated borrowing/saving, tax/benefit and persistence dynamics that are missed with snapshots.

Not really. Wealth is not deferred consumption unless you plan to leave nothing to your kids. For your model to work, you would have to discount to infinity all inheritances, and that's a problem since discount rates are unreliable over horizons of more than about 40 years.

Someone will eventually consume it. Otherwise it's the same as setting the money on fire (actually even better than that since it's probably invested). You never consume the real resources so it's no skin off anyone else's back.

2 was very interesting, but seemed to become ungrounded when it moved from a description of current spending inequality to a claim about disincentives. I saw no foundation for that behavioral psychology.

FWIW, I made my money and entered retirement in my 40's. Eyeballing the numbers I am top lifetime quintile, but not top 5%, let alone 1%.

I started fly fishing because it's fun, not to avoid tax.


I always ask people this- if jet aircraft had no human pilots, would you fly on one? Self-driving cars have a big psychological hurdle to overcome, and I suspect at least 30 years is a good prediction.

Or another question - if everybody else apart from you were in self driving cars, would you feel less safe, or more safe? I think I would feel safer.

Now that seems like an odd question to me, probably because in my country authorities have put a lot of effort into convincing us that humans are untrustworthy scum that should not be permitted to control vehicles. I don't think it is even possible for a human being to follow our road rules to the letter. While I don't think their intent has been to promote self driving cars, they certainly do try to get humans to act like machines - with some success.

I would absolutely fly in one. Also, it was not that long ago that elevators had operators.

On what basis do you make the decision? Have you read the source code? Are you aware of the testing protocols in place? What is your experience with automation?

I'm afraid that I wouldn't get in one. I have done too much computer programming and fixing of mechanical issues to trust this stuff. The more complex a system is, the more likely it is to fail. Elevators are trivially simple as a control challenge, as is flight. I think even if possible it will be prohibitively expensive.

I'd be curious whether Google engineers have run into situations where the error inherent in the sensor inputs has created a chaotic system that is utterly unpredictable.

"On what basis do you make the decision? Have you read the source code?"

On the other hand, have you met people? Why would you trust a complete stranger to fly a plane?

Pilotless vehicles are not adding a new risk. They are replacing the risk of human error with the risk of computer error, and it is not such a high bar to make the latter less than the former when it comes to cars.

Also, have you been involved with programming and testing of critical systems? The fact is that we trust our lives to software all the time, and there are proven processes to ensure high reliability.

But most modern jets are indeed piloted by robots already: it's called "autopilot".

"I always ask people this- if jet aircraft had no human pilots, would you fly on one?"

My answer would be: "Yes, if cargo aircraft had flown tens of thousands or hundreds of thousands of flights without mishaps in similar situations."

Home delivery of all consumer goods--including groceries, building supply goods, medicines, clothing, etc.--will be a reality. The thing to do is to put a person inside the delivery truck in the safest seat possible...maybe even with a crash helmet. The delivery person does not drive the truck at all, but instead delivers the items to the door. That way, the delivery person just needs to be a strong person, not a good driver.

After computer-driven vehicles have been shown to deliver cargo safely, acceptance for delivering people will be greater.

It looks like the (fatal) crash rate for (piloted) commercial passenger flights is about 1 in 30 million. So even after hundreds of thousands of flights by a hypothetical AI system without a single crash, you would be nowhere near showing the system to be as safe as the current piloted system.
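For a rough sense of why crash-free flights alone can't establish that level of safety, the statistical "rule of three" gives an approximate 95% upper confidence bound of 3/N on the failure rate after N failure-free trials. A minimal sketch, assuming flights are independent trials (a simplification):

```python
# Rule of three: after n failure-free trials, an approximate 95% upper
# confidence bound on the per-trial failure rate is 3 / n.
# (Assumes flights are independent trials -- a simplification.)

def upper_bound_95(n_crash_free_flights):
    """Approximate 95% upper bound on the crash rate after n crash-free flights."""
    return 3.0 / n_crash_free_flights

piloted_rate = 1 / 30e6  # fatal-crash rate for piloted flights cited above

# Even 300,000 crash-free AI flights only bound the rate at 1 in 100,000:
print(f"Upper bound: 1 in {1 / upper_bound_95(300_000):,.0f}")

# To bound the rate below the piloted rate, you would need roughly:
print(f"Flights needed: about {3.0 / piloted_rate / 1e6:.0f} million")  # 90 million
```

So crash-free flights in the tens or hundreds of thousands leave the bound several orders of magnitude short of the piloted rate.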

Well, the chance of a commercial passenger flight having an accident with one or more fatalities is about 1 in 3.4 million. But for the top 39 safest airlines in the world it is about 1 in 10 million. So in practice it would take a lot of pilotless flights to be certain that they were as safe as piloted flight. But it would still be possible to gather strong evidence that pilotless flight would be safer without that many flights. This can be done by using telemetry from millions of flights to test pilotless flight systems, carefully examining accidents to see if autonomous flight would have avoided them, and flying millions of piloted flights with the autonomous system as a passive trainee.

Do you have a reference for comparing all airlines to the safest airlines? Is the difference statistically significant? Fatal crashes are so rare that I expect sample size issues to be a problem there.

Dan111, I took figures for fatal accidents per flight from here:

And apparently their source is: OAG Aviation & accident database, 20 years of data (1993 - 2012).

Looking at the list of notable accidents, I see lots of bombings and lots of cases of running out of fuel.

"It looks like the (fatal) crash rate for (piloted) commercial passenger flights is about 1 in 30 million. So even after hundreds of thousands of flights by a hypothetical AI system without a single crash, you would be nowhere near showing the system to be as safe as the current piloted system."

I was not saying that tens of thousands or hundreds of thousands of flights without a mishap would demonstrate equivalent safety to piloted systems. I was saying that after tens or hundreds of thousands of flights without a mishap, *I* would be willing to fly on a computer-piloted plane.

Europeans at least have been flying in planes with autoland since the 1960s, thanks to the poor visibility pilots often have to contend with there. The reason planes don't use auto takeoff is that takeoff is a much simpler problem: if a pilot misses the point in the sky they are aiming for by a few meters, it is not such a big deal. Between autopilot and autoland, millions of people have flown in self flying and self landing planes. And so there's probably not much of a psychological hurdle to overcome. At least not for Europeans.

As a commercial (cargo) pilot, allow me to make the following points.
1. If a 2 to 5% chance of failure during an autoland does not scare you, be my guest.
2. Only the largest airports have the requisite CAT II or III autoland approaches required for safe autolands, and then not for all runway (wind) directions.
3. Winds may easily exceed the crosswind limitations of the autopilot (a 15-knot crosswind, etc.).
4. What happens when there is an emergency not covered in the checklist (Sioux City DC-10)?
These and other problems may be worked out in the future, but for now I like to think that the autopilot and I together are much better than either of us working separately.

Actually my company hired a think tank to look into the feasibility of flying cargo jets without pilots a couple of years ago. The answer came back that it would take about forty years. One issue was ground operations and another big problem was how to keep the pilots from walking off the job months before the company was ready to eliminate their jobs.

Your last point is interesting and probably means that whenever pilotless cargo planes actually happen it will probably be a brand new airline doing them and not one with existing pilot employees. The new airline would in theory be able to undercut the incumbents with much lower labor costs. Otherwise what's the point of pilotless aircraft?

msgkings - They could just open up a subsidiary to get around it. Or buy any old defunct airline and direct all the new tech into that one. I don't think the cost is as relevant as with driving, for example, because the cost of one pilot is divided between a couple hundred passengers, whereas the cost of a driver is usually borne by just one or a handful of people.

Established airlines could let their pilot-free and piloted flights compete head to head. As usual, the market will decide. But I think there will be a market for piloted flights well beyond the point that it is essentially proven to be just as safe or safer without a pilot.

Dr. D, the 2-5% chance of failure during an autoland does not scare me as there are trained pilots overseeing the landing process. If the pilots were poisoned or something and the plane had to land using autoland alone, then yes, I would experience some concern. Because autoland is nothing at all like an autonomous flight system that would need to be superior to human pilots in order to be put into service.

The speaker only mentions 30 years as almost a throwaway line: "Will it be 3 years or 30 years? It will be a little of both." He emphasizes it will be soon and gives no reason it would take until 2045 to have 99%+ self-driving cars.

As I recall, an engineer from Google said on a Discovery show that featured the 2004 or 2005 DARPA Grand Challenge, self driving cars would be sold around 2020. I figured since he was a Google guy, he was well versed in the potential of exponentially increasing computer power by that year and probably made a good estimate. Add 5 years to be conservative, and 2020 to 2025 is the likely time at least 10% of cars will be driverless and rapidly climb from there. 95% by 2030, easily.

I think it a mistake to suggest it's the IT/computing powers that will drive the timing. There are already completely autonomous vehicles in mining, with no accidents. Some have already mentioned planes as well. The US auto manufacturers have been testing self driving cars since the 90s and apparently had successful tests on closed courses at race speeds (think screaming tires and at the limits of traction).

While coordination on the highways is a bigger issue, it's not so complicated that it would require another 30 years of better processors, sensors and algorithms.

The big thing remains the shift of all the social institutions around liability: who to insure, who will pay, and, for that matter, how to punish. The other challenge is probably agreeing on what general approach (centralized or decentralized) to adopt as the governing one, though both will be required for success, so knowing where to change paradigms will matter. Those questions are just beginning to be worked out (though in reality, given our legal system in the USA, one might expect that significant portions will only be addressed once the autonomous car is a reality. Though, now thinking about it, will this type of innovation produce the demise of the idea of common law in society?)

"and 2020 to 2025 is the likely time at least 10% of cars will be driverless and rapidly climb from there. 95% by 2030, easily."

Cars are expensive pieces of capital equipment. Maybe 95% of the cars sold in 2030 will be autonomous, but the number of cars actually on the road that are autonomous would be substantially lower than that. Currently in the US the average age of a car on the road is over 11 years. So you wouldn't even hit the 50% mark by 2030.

"Cars are expensive pieces of capital equipment. Maybe 95% of the cars sold in 2030 will be autonomous, but the number of cars actually on the road that are autonomous would be substantially lower than that. Currently in the US the average age of a car on the road is over 11 years. So you wouldn’t even hit the 50% mark by 2030."

The average U.S. driver of working age drives about 15,000 miles a year. I could easily see the average computer-driven car driving four times that amount. If that is true, computer-driven cars would account for 80% of total miles traveled even if they were only 50% of the cars.
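The arithmetic behind that 80% figure can be sketched quickly; the 4x miles multiplier is the commenter's assumption, not measured data:

```python
# Share of total vehicle-miles driven by computer-driven cars, given their
# share of the fleet and a miles-per-car multiplier (both assumptions here).

def autonomous_miles_share(fleet_share, miles_multiplier):
    auto_miles = fleet_share * miles_multiplier   # relative miles, autonomous cars
    human_miles = (1 - fleet_share) * 1.0         # relative miles, human-driven cars
    return auto_miles / (auto_miles + human_miles)

# 50% of the fleet, each car driving 4x a human driver's ~15,000 miles/year:
print(f"{autonomous_miles_share(0.5, 4.0):.0%} of total miles")  # 80% of total miles
```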

Now consider that the average age of a car is partly determined by the fact that, in the past, cars haven't improved rapidly. Suppose you could drive your old car, or order a ride from a computer-driven car in a manner similar to a taxi, for less than half the cost per vehicle mile traveled. Why would you keep your old car and throw away money?

No, but if well-trained pigeons were the pilots, I would. It is time to revive Project Pigeon.
Those pilots think they are big shots, but they can be replaced by birds.

Look, I am aware of the automation that is in aircraft today- what I am asking is, would you get into an aircraft with no human pilot whatsoever? That is the hurdle that has to be overcome. Only one of you really answered, and that answer was contingent on a history of such success with cargo jets, which is as it should be. It is something yet to be done commercially, and we have had good automation in large jets for decades now.

Ronald Brak, get back to me when the first jet takes off with commercial passengers and lands without a pilot on board. I am sure those fine Europeans will be first to do it- in about 30 years.

Yancey, if Australia's Civil Aviation Safety Authority gave the okay for pilotless planes I would have no problem at all being a passenger on one. Since they'd only be likely to approve them if there was good evidence they would be safer than a piloted plane, I would feel safer than on a human piloted plane.

But while there are passenger aircraft in service right now that could take off, fly a route, and land with no one at the controls with just a change in software, there is no rush to get pilots out of planes because both the economic and safety considerations of pilotless flight are very different from self driving cars.

The pilots on passenger airliners are generally responsible for hundreds of lives, are highly trained individuals whose skills are frequently updated and evaluated, and if something goes wrong a crashing airliner could potentially kill thousands on the ground, as in the 9/11 mass murders in New York.

So if human pilots are able to increase safety to any degree at all, they will be kept on board.

But the situation for self driving cars is very different. The requirements to hold a car license typically boil down to not having caused too many accidents or been caught driving dangerously too often recently. After getting their license few drivers ever have their skills evaluated or updated again, and most lack the ability to safely handle dangerous situations. If something goes wrong with a self driving car, all it has to do is decelerate to a stop without hitting anything. It would be nice if it didn't end up blocking the road, but that's not necessary.

Pilot pay makes up an insignificant part of the cost of air travel, but in developed countries it is a very large part of the cost of traveling in a taxi. In places such as Australia, Europe, or Japan, a self driving taxi used for two eight hour shifts a day could save over $60,000 US a year in wages.
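A back-of-the-envelope check on that $60,000 figure; the hourly cost below is an illustrative assumption, not a number from the comment:

```python
# Annual driver wage bill for a taxi run two eight-hour shifts a day.
# The hourly cost is an illustrative assumption, not a figure from the comment.

hourly_cost_usd = 12.0       # assumed gross hourly cost of employing a driver
hours_per_day = 2 * 8        # two eight-hour shifts
days_per_year = 365

annual_wage_bill = hourly_cost_usd * hours_per_day * days_per_year
print(f"${annual_wage_bill:,.0f} per year")  # $70,080 per year
```

Even at a modest assumed hourly cost, a taxi staffed around the clock clears the $60,000 threshold.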

So getting cars to drive themselves is far more likely to save money and increase safety than getting passenger planes to fly themselves.

Modern commercial flight is extremely safe, and in this century safety has gone from good to wonderful. A large part of this has been due to improved automation on planes. Human pilots and onboard computers work as a team, each compensating for the weaknesses of the other. Software looks out for human error, such as forgetting to put the landing gear down, a mistake that has killed a lot of people in the past, while human pilots look out for software glitches and a lack of common sense on the part of the automation.

But a question facing aviation now is, would flying become safer if sources of human error were eliminated by getting rid of human pilots? Currently, despite being highly trained and having automation to warn them when they are likely to be making a mistake, pilots still make fatal errors. And sometimes pilots deliberately smash their planes into things. It doesn't happen often, but with other causes of accidents becoming so rare, pilot errors are starting to become a major source of fatalities.

So at some point I'm sure humans will be taken out of the loop, but I wouldn't want to guess when. It might be more than 30 years. But I suspect that in 30 years human pilots will have moved from being a central part of the control system to being backup.

In order for self-driving cars to become the norm they will have to be made mandatory. Drivers are so compelled to pass those ahead of them that passively following one would drive them insane.

Not if you can just work on your laptop the whole time.

I, for one, welcome our new auto overlords.

One day maybe they can also invent robots to take care of my kids, mow my lawn, cook my food, and bonk my wife. Then I can finally achieve the dream of working nonstop 17 hours a day every day until I die.

If I worked on my laptop in my car, it would be to reduce the hours spent at work, not increase the total hours working.

Time spent driving is wasted time (unless you are a driving enthusiast), so whatever your priorities, being able to do something else would be an improvement.

"Time spent driving is wasted time" A Puritan concept. Time is just time, it can't be wasted or saved. There are no time "piggy banks".

I would be concerned about hackers and the like, and would rather have someone ready to take over from auto-pilot on a moment's notice.

Relative to all the costs built into a plane ticket, a pilot's salary is not that huge. Even if it's just irrational psychology, I think the vast majority of people would be willing to pay a few dollars more to fly with a pilot at the helm, even if flights were fully automated, just in case anything went wrong. There would need to be ridiculously overwhelming evidence that automation was MUCH safer than pilots, and even then, I think I would pay a few dollars more to have a pilot on board. The Ryanairs of the world are likely to be early adopters, and might have pilots serve dual purposes as well, perhaps as flight managers overseeing the stewards or something.

As Ronald pointed out, pilot suicide is becoming increasingly relevant as other sources of accidents decline. Probably the lowest hanging fruit in airline safety is to devise mechanisms where the auto-pilot can take back over management of the route. Say, the pilot starts to go off course and central control can "hijack" the plane and set it back straight again while ensuring that the co-pilot or other staff is able to take over. But, then, you still need a super duper "hacker over-ride" so that the pilot is ultimately able to re-establish control of the plane. The military has a long history with verification systems that address roughly analogous problems, and even they fail sometimes.

Cars without self-driving capability will be banned first from crowded, high-speed highways.

Probably, but I think insurance rates will largely eliminate manned driving before it is banned.

Insurance rates will be significantly lower for those that use auto-driving. People who routinely drive the speed limit and tend to be safer drivers will probably be more inclined to use auto-driving, since the penalty (largely measured by a slower driving speed) will be less for them. As they adopt auto-driving, the cost of insurance will go up for those who haven't adopted it, because the pool of manual drivers will tend to have higher claims. This will create a feedback loop, until you only have a small percentage of "I love to drive regardless of the cost" holdouts tooling down the road. At that point the legal bans will start appearing.

Satoshi Kanazawa

K den.


Well, that's one explanation for Satoshi Kanazawa's writing, but I don't really think it is sufficient.

To consort with the crowd is harmful; there is no person who does not make some vice attractive to us, or stamp it upon us, or taint us unconsciously therewith. Certainly, the greater the mob with which we mingle, the greater the danger.



"Be civil to all, sociable to many, familiar with few, friend to one, and enemy to none."

-- Benjamin Franklin

2) Is this a credible representation of Republican thinking? Reduce inequality by cutting taxes on the rich and removing benefits for the poor? Does it not rely on claims that trickle-down will eventually come? I don't hear a lot of voices from the right promoting trickle-down thinking these days, so it's hard to believe that Republicans actually view this as a strategy to address inequality.

Anyways, excellent methodology, in my opinion.

I strongly agree that after-tax earnings are the appropriate basis of comparison. While it obviously implies lower measured inequality, this was one of the few methodologies promoted by the Fraser Institute (Canada) that was not patently driven by a desire to lend credibility to the right and attack the credibility of the left. It is an obviously superior methodology. Additionally, considering lifetime income and age cohorts removes another bias, since almost everyone earns less income when young and more income when older, and so it is not correct to compare the incomes of the young and the old in an aggregate statistic.

The use of pre-tax and yearly income still has strong policy relevance, however, in particular for devising well-targeted and properly-costed interventions. But these are not the right indicators of social inequality as experienced by the population.

(I prefer consumption indicators myself because this also reflects access to various services which will lead to higher or lower estimates of inequality depending on who actually has access to those services - often, resources purportedly for the poor are in fact located and selected in a manner where they systematically benefit the rich far more than the poor. Due to privacy concerns and the multiplicity of information systems across service providers, surveying is the only available mechanism to estimate the additional impact of access to services.)

"I don’t hear a lot of voices from the right promoting trickle-down thinking these days, so it’s hard to believe that Republicans actually view this as a strategy to address inequality. " - but nowadays you have extremists running in the Republican party, so their arguments are not rational.

Trickle down economics is quite rational. It's based on this logic: given zero inflation (or real, inflation-adjusted numbers), if a year from now the rich 1% were 50% richer while the remaining 99% were $1 richer, would that make the 99% better off? Logically, the answer is yes, they are $1 better off than before. Emotionally, the answer is no. Traditionally, however, while the pie was growing, it was considered un-American to covet your neighbor's wealth (or it was considered not cool or polite). Times have changed in the USA, and it's now become like Europe, where people are very envious of their neighbors. The old joke about the genie giving three wishes to somebody in the Balkans, with one of the wishes being that their neighbors stay poor, has become the reality in the USA as well, sadly.

Where did I hear another three genies joke recently ...

Three guys on a desert island, each gets one wish. The first wants to go back home, gets his wish. The second, I think it's the same, wants to go home and gets his wish. The third, feeling lonely, wishes for his friends to come back.

Yes, I was forgetting that trickle down economics tends to coincide with the view that doubling the income of a millionaire alongside a $1 gain for the poor counts as trickle down working. In very poor countries with high levels of inequality, in recent years you see more language of "pro-poor growth" entering into development plans. The view is that the strategy should be growth-focused, but also aim to increase the share of market income earned by the bottom 20% (e.g., through investments in agricultural productivity or in infrastructure which allows rural areas to get their products to market). Whether this approach maximizes growth is debatable, but it is deemed socially important.

Conservatives typically do not frame the problem in terms of "inequality" and would not "address inequality" as a goal.

@#5 - the author is right, there's no "Santa Fe Institute" complexity theory type machine out there these days. Even and especially the AlphaGo neural network software that won at Go was a 'dedicated learning machine', not a 'general learning machine'. It had a specific algorithm it used to win at Go. The algorithm was based on an idea from 2006, not invented by Google, and essentially it was a way of pruning the Go tree using an improved evaluation function that relied on pattern recognition. Most of the public, however, ignorant of computer science, doesn't see it that way: they probably bought the hype that the AlphaGo machine "learned how to play Go from studying millions of master level games" (which is how the Google PR team represented this 2006 algorithm to the ignorant lay public).

Author: "In AI, simplistic “complexity” oriented approaches — e.g. large, recurrent neural networks self-organizing via Hebbian learning or other local rules; or genetic programming systems — haven’t panned out insanely well either."

Then there is this: "Google Machine Learns to Master Video Games", BBC


...The difference with DeepMind's computer program, which the company describes as an "agent", is that it is armed only with the most basic information before it is given a video game to play.

Dr Hassabis explained: "The only information we gave the system was the raw pixels on the screen and the idea that it had to get a high score. And
everything else it had to figure out by itself."

The team presented the machine with 49 different videogames, ranging from classics such as Space Invaders and Pong, to boxing and tennis games and the 3D-racing challenge Enduro.

In 29 of them, it was comparable to or better than a human games tester. For Video Pinball, Boxing and Breakout, its performance far exceeded the professional's, but it struggled with Pac-Man, Private Eye and Montezuma's Revenge.

"On the face of it, it looks trivial in the sense that these are games from the 80s and you can write solutions to these games quite easily," said Dr Hassabis.

"What is not trivial is to have one single system that can learn from the pixels, as perceptual inputs, what to do.

"The same system can play 49 different games from the box without any pre-programming. You literally give it a new game, a new screen and it figures out after a few hours of game play what to do."

Knowing a bit about chaos and complexity, it always surprised me that seemingly smart people saw it as a path to easy answers.

I saw chaos and complexity as much as limits on knowledge as anything.

When you know weather is self-similar you know you can't know the weather next week. You can "predict" but you also better "re-predict" every day between now and then.

I'm a big fan of using current weather as a predictor of future weather (not talking climate here). What's the weather going to be like this afternoon or tomorrow? Probably similar to the weather right now. I never check unless there's a major outdoor outing in the works. If the weather's actually gonna be dangerous, it usually makes the news. In Canada this is basically limited to big snow storms, and delaying travel at least a few hours until the snow ploughs are out in full force. It doesn't matter how good of a driver you are, there's always the possibility, no, likelihood, of some idiot who doesn't realize that you don't drive 100 km/h in a blizzard. I get behind a transport truck and stay put in the worst conditions, where you might not even really know where the road is.

I won't really be impressed by the AI stuff until I see a challenge where the AI and a human are given some goal to accomplish, a collection of resources to accomplish it with, and an equivalent set of knowledge about those resources, and then we see who figures out the task.

Maybe even something like the intelligence tests some give to other animals -- like blocks that could be stacked and a banana hanging 30 feet in the air for chimps.

It would be a bit tricky to design but that's really the type of test that needs to be passed.

Current AI is not about general problem solving. It has even moved away from that, to sort of non-cognitive intelligence.

Computers are better than humans on facial recognition (are these two pictures of the same person) but that isn't a very general intelligence.

I wouldn't consider facial recognition to be "intelligence".

Smart people, or more accurately the ones who think they are really, really smart, have few people able to tolerate them, but being smart they come up with post hoc rationalizations.

So how do you explain smart people spending a lot of time with friends being less happy?

Causality could run the other way. Smart people who are happily absorbed in their projects may not spend a lot of time with friends. Smart people who are unhappy may seek out friendships and spend more time with friends as a means of pursuing happiness (being smart and all, they may connect these dots cognitively instead of intuitively).

Also, some may be frustrated in dealing with "dumb people" all the time. But I don't think this is a big part of the story, I think the above suggestion is more relevant.

#1(re: smart city living) is this why economists don't understand why average people aren't overjoyed about living in a service sector megalopolis with novel ethnic food?

I'm not voting for anybody! Until I can hire a president as god intended, I am withholding my precious vital essence from the political process.

Regarding Ben and his use of the apparently already existing term "complexicated," the one useful thing in his post is to remind people that "complex" and "complicated" are not the same thing, although John von Neumann and some other pretty smart people have argued that they are. In any case, more recent complexity theorists have long emphasized their difference (yours truly included). Beyond that, the realization that reality is both complex and complicated is not exactly earth shaking, even if some people at SFI did not quite realize this "back in the 80s."

From the article, this quote grabbed my attention: " but more impressive progress has been made via taking simple elements and connecting them together in highly structured ways to carry out specific kinds of learning tasks (e.g. the currently super-popular deep learning networks)."

Do you think the most surprisingly difficult part of AI (and 'genius' of evolution) is in fact how the highly structured ways of connecting simpler elements arose?

#4...I would say that the main benefit of studying the Talmud is studying the Talmud, not deciding the issues studied. To that end, there are certain traditional logical tools, implicit assumptions, etc., that one can profitably use in studying the Talmud. I would say that many of the tools used in this analysis are not among them. In any case, as a Baroque Talmudist, I can render any Talmudic issue unsolvable simply using traditional methods.

I don't want to frighten anybody here, but the main reason I study the Talmud is to get closer to G-d. For me, it is the primary way I can do this, and I will continue to do so in the next life, as a matter of fact. Of course, my actions are what really define me in life, but, spiritually, as it were, studying the Talmud defines me.

+1 (I'm not Jewish, but I've read Elie Wiesel, "Celebration Talmudique" and found it very inspiring.)

After confessing up front that those charts of income distribution which seem so important to many people strike me as empty of meaningful content, I would go on to observe that consumption is the right way to think about quality of life and that quality of life is what we should care about to the extent we are bothered by the effects of wealth inequality at all. If you are going to do that, it is fair to bring in quality and availability of goods at various prices, it is fair to observe size of home and number of cars and amount of going out to the bar people can engage in at various income levels. OTOH, it is also fair to note the role of debt, insurance, savings and general financial security. Adjusted for those things, my guess is the state of affairs is nowhere near as bad as income and wealth inequality suggest, but the adjustment isn't as significant as the authors suggest here.

Number of children, size and location of home, and automobiles affect the financial health of second and third quintiles to a large degree. Much less so the lowest quintile as they don't have access to the kind of credit needed to consume too much above means. You may look at that and say the choice for debt and low security is a choice due to those reasons or you may take some alternative approach that tries to pull in behavioral concerns, but those are drivers.

#1. They're called aspies. There's probably some correlation between smarts (genuine smarts) and aspiness.

All the "Aspies" who socialize a lot with their friends are less happy? That is what you think is driving the results?

Yes, I was thinking aspies don't like to socialize, or even if they do, they find it hard to do so when they try. Hence increased frustration with more socialization.

'Black Sails' is an extremely well written exploration of libertarian political philosophy. I highly recommend libertarians check it out, especially since there are so few libertarian shows produced in Hollywood. In the last episode, there were no fewer than three scenes in which the relationship between man and the state is discussed in subtextual, or not-so-subtextual, dialogue.

A sceptic might argue that the narrative of resistance against British rule can be interpreted as either left- or right-wing, but in this case it is hard to make that case, since the pirates are most certainly not communists. They are unapologetically self-interested egoists, nearly to a man.

Well yeah, pirates were the original seasteaders, weren't they? A mini society, everyone (except the slaves, of course) there of their own free will, knowing the deal, getting paid, no state to adjudicate. Not a big leap to Somalia, by the way. Pirates and all.

My smartest, most morally worthy friends lean libertarian or are completely libertarian. If all people were like them, then a pure libertarian state would work just fine. But, of course, most people are not smart or strongly moral.

The pirates didn't generally own slaves, many of them were escaped slaves. Slaves that were captured were generally pressed into joining the pirate crew and received a share of plunder.

On the show this season they are allied with a colony of escaped slaves, which is another angle of the debate.

The show kind of works like a debate between pure anarchist and minarchist libertarianism. Is freedom guaranteed by the rule of law, or does it only exist in a completely anarchist system?

I've often wondered the extent to which many "pirates" might have been freedom fighters in a sense, seeking a place of freedom away from the British crown, and seeing it as entirely legitimate to engage in theft from a monarchy that they perceived as illegitimate. Roughly akin to how a freedom fighter will be labelled a "terrorist" by any state these days. Presumably most pirates were plain and simple pirates, but there are some stories (sorry, forget which) of some "pirates" who seemed to have very idealistic views on the micro-states they might secure for themselves very far from the reaches of European monarchs. I imagine an essentially libertarian philosophy combined with practical communist production strategies (practical because if there are only a few hundred or thousand of you, the market may not produce enough of certain goods required for viability) may have occurred within some number of "pirate" operations.

Personally, I would hardly feel like a "pirate" if I were stealing the property of an authoritarian system bent on imperial conquest. After all, didn't they basically steal it in the first place as well? (It's hard to negotiate a good price for exports when you're under the military occupation of an imperialist power.)

There were many pirate havens in the Caribbean that served as trading ports for pirates. So there wasn't any need to be self-sufficient in food production. The pirates could buy what they needed. Nassau (the focus of Black Sails) operated as a pirate micro-state for 11 years.

Re 30 years, at least. They're facing up to the fact that the pre-mapping requirement just isn't going to fly, and they have no idea how long it's going to take to advance the technology enough to drive unmapped.

#5 - If true complexity can't bubble up from cellular automata or the equivalent, you are left with the "where did the engineers come from?" / ID question. It's hard to imagine evolution working on multiple hierarchical scales at the same time, or is it? We also can't figure out how RNA could come to pass through random combinatorics. Who built this thing?

The linked piece doesn't claim that complexity can't emerge on its own - just that there is no simple, universal rule or set of rules you can apply that will cause a complex system to evolve in the direction of intelligence.

The article is talking about the early days of complexity, when cellular automata captured the imaginations of a lot of people and led them to believe that if you just came up with the right set of rules for your automata, perhaps combined with some kind of genetic algorithm that would allow for mutation, you could 'evolve' an intelligence in a controlled way.
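For readers who haven't played with them, the kind of cellular automaton described above can be sketched in a few lines. This is a minimal illustration, not anything from the linked article: an elementary one-dimensional automaton where each cell's next state depends only on itself and its two neighbors, with the whole update behavior encoded in a single rule number (110 here, and the grid width, are arbitrary choices). Simple local rules like this can produce surprisingly intricate global patterns, which is exactly what fed the early optimism.

```python
def step(cells, rule=110):
    """Apply an elementary cellular automaton rule to one row of 0/1 cells.

    Each new cell is the rule-table entry for its (left, center, right)
    neighborhood, read as a 3-bit number. Edges wrap around.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)              # look up rule bit
    return out

# Start from a single live cell and watch structure emerge row by row.
row = [0] * 31
row[15] = 1
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The point of the demo is the gap the article describes: the rule reliably generates complexity, but no amount of tuning the rule number gives you a knob that steers the pattern toward anything like intelligence.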

As it turns out, complexity is a little more, uh, complex than that. This should not be that surprising in light of what we have learned since, and it has nothing to do with complexity science 'going' anywhere.

#5: Nothing 'happened' to complexity theory. It's more important than ever. The reason you're not seeing it promoted as much is probably because the conclusions it leads you to tend to invalidate the work of a lot of people who think they can understand and control such systems.

The linked article doesn't refute complexity theory - it calls into question one simplified conclusion of it as applied to artificial intelligence. The notion that complex systems are hierarchical is not novel - it's been understood from the very beginning. Economies are complex systems, driven in part by culture, which itself is a complex system. Culture is driven by the actions of people, whose brains are complex systems. Those brains are affected by the human biome, which is another complex system. And so it goes - complex systems all the way down, perhaps even to the molecular or quantum level or lower.

None of this invalidates complexity theory - it makes it more relevant. But embracing it means we need to be more humble when trying to intervene in social systems or in economies... and who wants that? Better to pretend that we are masters of our own fate and with just enough study we can grasp the levers of society and the economy and steer them where we want to go without consequence.

Then when that fails we'll sweep the wreckage aside, re-work our models, and try again.

Terence Tao has taken "complexity" in a new direction where it starts to hook up with "computational complexity" in his study of conditions under which ordinary fluids could undergo catastrophes; see his NYT Magazine profile last July 26 and a 2014 article in Quanta.

The term "complexity theory" can mean either. One aspect alluded to in comments here and by/responding-to pcconroy in the linked essay by Ben, and in which DeepMind AlphaGo is pertinent, is that complex algorithms may make determinations that are not easily traced. See the third bullet in Lance Fortnow's post "Go Google Go"; we have discussed this further on the Gödel's Lost Letter blog.
