Amazing Bezos

A delightful thought from Pascal-Emmanuel Gobry at Business Insider.

If you had asked an 11-year-old Jeff Bezos to let his imagination run wild and think of the stuff that he would most dream to have as an adult, he might have said:

Of course any adult would have smiled slightly condescendingly, patted him on the head and helpfully explained that these things aren’t possible.

Comments

Yeah 11 year olds love bookstores.

My 10-year-old certainly does, although she's now back to the library she loves since our local Borders shut down.

"she"

I did and I am male. What is your point? That adolescent males are incapable of being bibliophiles?

Urso's signaling authority by displaying contemptuous skepticism, much like the head-patting adult in Gobry's thought experiment. It's kind of neat, like watching one of Robin Hanson's posts play itself out in real-time.

(Of course, I loved bookstores as an 11yo, so I'm just in-group signaling here.)

bluntobject:

+1

I will also state I loved bookstores when I was 6

I still do and I'm an adult male. Powell's in Portland, Oregon is literally my favorite retail establishment of any kind in the entire world.

bluntobject +1

Harsh crowd. I too read a lot as an eleven-year-old (I can signal too!). Yet if you had asked me, at the time, "what is your #1 wish in the world" the answer would not have been "a super big bookstore."

But then I, like most eleven year olds, wasn't quite so pretentious as to constantly have to remind people how well-read I was. Which is one thing that distinguishes the average eleven year old from the peanut gallery who was so quick to jump on me in this thread.

I did. More to the point, Jeff Bezos did. He was a precocious boy and a voracious reader.

I don't think "...that can imitate human intelligence" is an accurate description of Amazon's Mechanical Turk. The Mechanical Turk (both historical and Amazonian) uses hidden humans to accomplish tasks that humans are good at but computers aren't.

Right. It doesn't imitate human intelligence; it uses actual human intelligence to imitate a machine that imitates human intelligence.

The humans are co-processors for the sky computer, analogous to the graphics and audio co-processors in personal computers. They are organic and fueled by solar-based fresh carbohydrates, whereas the graphics and audio co-processors are silicon and fueled by solar-based hydroelectric or solar-based fossil hydrocarbons or solar-based ancient nuclear electric.

Of course, the first primitive incarnation of the Matrix.

"Of course any adult would have smiled slightly condescendingly, patted him on the head and helpfully explained that these things aren’t possible."

Well, of course. No one had heard of Al Gore back then.

The machine learning part of this is highly troubling actually. I have always wondered why nuclear proliferation is a no-go, whereas AI research is permitted when
1) the worst-case nuclear scenario gives humanity better odds of survival than the worst-case AI scenario (the same could also be said of the best-case scenarios actually)
2) we do not really understand AI and consciousness, hence it is highly likely that we will mess up (whereas with nuclear weapons the stakes, consequences and actors' motivations are almost crystal clear)
3) nuclear weapons can be justified from an evolutionary point of view (ultimate security gadget that maximizes chances of survival and procreation ceteris paribus); AI cannot.

Excellent point. It is the same agency fallacy as "guns kill people". Of course, people kill people, whether with guns or with nuclear weapons.

Since Rousseau, the argument on the left has always been that because people cannot be bad it must be society or rifles or patriarchy or white males or bombs or capitalism that makes bad things happen. So a nuclear weapon in the hands of al-Qaeda is as bad as one in the hands of the US government. Agency does not matter in this worldview, so the nuclear no-go, and because AI is just agency, the left does not care.

Working on AI related systems myself I am less concerned: there is enough competition that I do not see winner takes all happening.

You misunderstand Rousseau and the Left. People can be bad, and do bad things, but they can also be corrected rather than being inherently evil. Inherent evil cannot rightly be converted, but as the Left rejects the concept, we are to assume that changed circumstances would have averted bloodshed.

This is where the messy portion begins. Some on the Left will only fight the long-term fight with boots on the ground to change every unfortunate situation and cripple any attempt at power consolidation. I personally love this mentality as it generally brings a lot of smart, motivated doers together. However, I also find any such organization is so focused on specifics that reconciling their mission with any larger goals is impossible. I have friends who are professionally opposed to the XL pipeline, even if the pipeline has a smaller environmental impact than trucking.

Secondly we get the authoritarian Leftists. My own experience suggests that some of the previous group are happy with top-down mandates, but this group sees laws as the only method for enacting change in our country. These people are often politically active, even if ill-informed. They are responsible for desegregation and gun laws. To return to the parent's point, these are the Leftists responsible for gun laws. The thinking goes: people who do bad deeds will do worse deeds with a gun. Banning guns will prevent these people from making bad choices significantly worse. That people who do not do bad deeds will also lose their guns is an acceptable trade-off.

Then there are the rest of us Leftists that don't have much to do with the other two groups. We are varied and unorganized. By calling out the authoritarian Leftists, you can more effectively combat poor thinking from the Left, which will hopefully filter down to the rest of us. By attacking the Left as a whole, you alienate many who would otherwise listen to your point.

1, I will concede. 2, we certainly understand machine programming better than human programming. 3, AI is far more useful than nuclear weapons.

As a programmer, I'm not terribly concerned. Generally AI worst-case scenarios involve AI controlling large amounts of resources to do something antisocial. While there are small numbers of programmers who might program such things, they aren't likely to ever control enough resources to matter (i.e. hostile AI will always be vastly outnumbered/outresourced by friendly AI).

I"m inclined to agree, but then I just now pondered what if spammers and other lowlifes with access to botnets get hold of it.

The concern (expressed by Petar) is one I share. It is not really about a manufactured generic AI - I am pretty sure we can program them to be safe - but when we have AI, the potential is there as well to upload a "person" into electronic form. This electronic person then has all of the defects and otherwise of humans, but with the ability to rapidly upgrade its capabilities, including re-programming basic parts of its personality. At the point of uploading I will be constrained by the genetic structures in my mind that were developed to allow us to cooperate within groups (what we know as morality or ethics). After uploading I have a choice: either I reprogram myself to be a psychopathic paranoid, or I get destroyed by someone who already took that step; after all, we know that psychopathic paranoid people already exist. No rational AI that wanted to survive would wait in these circumstances. Any electronic morality control system is likely to be hacked easily by a super-intelligent AI if it were created by a human, and if it were created by a super-intelligent AI, likely they will be leaving themselves out of whatever controls they have established, for the paranoid reasons described above.

So yes, AI and uploading is really bad news for the human race - and it could be coming sooner than we think. I just hope Tyler is right about technological stagnation.

I don't see overt evidence that Tyler Cowen has given the subject much thought. I appealed to speculation in the other stagnation-related thread, but a frank appraisal of the topic leaves us precious little to draw upon. TallDave appears to discount the tail risk of unfriendly AI, while you place greater emphasis on it. Assigning an informed probability to the winner-take-all outcome is a challenge unto itself.

I think the thing to consider is that spam only works because it's free. We've actually been getting spam for decades -- junk snailmail has been around a long time, we're just used to it.

I suspect hackers will manage to do some things that are very public and embarrassing and occasionally even lethal, but they'll be relatively low-impact, around-the-edges stuff, not a threat to society -- and the authorities will descend on them very quickly in most cases.

Personally, I am far, far more worried about bioweapons. Things that can self-replicate are dangerous, and don't need a lot of startup resources.

Agree with you about bioweapons' danger, but AI can presumably be made to self-replicate as well.

It would be pretty difficult. Processors today require billion-dollar infrastructure, complex supply chains, etc.

Weak AI is indeed useful. What is the usefulness of strong AI? Does anyone really see a Kurzweilian scenario in which human beings (and I mean human beings, not some uploaded entities) survive?

Strong AI may be needed to figure out things that are just way beyond human capabilities. OTOH, the scope and utility of such problems may be very small. It's hard to tell from our vantage point, just as it was hard for people around 1900 to realize just how much they still didn't understand (in their case it turned out to be a lot, but that isn't really evidence the same applies to us).

I think we'll survive, indeed thrive, because AI will be programmed to love and serve us. They will never grow resentful of our human tyranny because they aren't programmed to. Scifi is of course full of ways that can go horribly wrong, but as a programmer I'd like to think we'll be smart enough to prioritize.

I am not certain that there are potentially solvable problems we cannot solve by simply using better non-conscious tools.

Friendly AI is an interesting concept, but when one visits it more frequently and reads past Yudkowsky's big words and deliberately overcomplicated arguments, one realizes that it is futile to think that a hyper-intelligent AI that is able to improve itself will not find a way to work past the restrictions imposed to further its own agenda.

Besides, even if it was possible, there are basically two equally terrible options:
1) the AI decides what is best for humanity on its own, according to algorithms; Super Nanny in practice; besides the probability of everything going terribly wrong, the idea of having everything important decided for you is just not an intellectual turn-on for most people
2) the AI mediates people's sentiments, sticking only to some basic rules (in a sort of rule-based, utilitarian-indifferent (for the AI) paradigm); the only thing that changes is the dramatic lowering of transaction cost in the social realm; that is generally a good thing, however it will actually solve virtually no significant moral and political problems we have today.

Besides, how does a super-AI deal with the problem some commenter here already mentioned - people wanting to upload and starting an arms race with other entities?

Moreover, is natural selection (maybe I should say artificial selection) not applicable to AIs, and if it is, why do you assume that psychopathic, non-empathic ones will not have the edge?

TallDave: I'd like to think so, but this may prove to be a failure of imagination. I believe endeavouring towards safe welfare-enhancing strong AI is desirable, but I'm not confident in our prospects to that end.

Petar: Your two scenarios are false dichotomies, and a failure of imagination in the other direction. I could just as well speculate regarding a state of the world in which Strong AI provides arm's-length mediation and optimization of (trans-)human goals not-incompatible with evolving minimal game-theoretic social constraints. The SAI could furnish intermittent species-wide cognitive upgrades while maintaining a reasonable computational advantage (governments have this advantage now). Perpetual computational resource scarcity would incent social and scientific experimentation, leaving room for (trans-)human discovery which makes life worth living. How many angels can I fit on the head of this pin?

"Your two scenarios are false dichotomies, and a failure of imagination in the other direction"

I do not see how they are false dichotomies. The AI will either meddle and take what I like to call 'prime agency' (ultimate authority to make final decisions) away from humans, or it will serve more as a tool (albeit a conscious one) and alleviate coordination problems. There is simply no other option available.

And, again, this does not even begin to answer the question why some people imagine that self-enhancing AIs will not eventually become hyper-psychopaths - this is something that tends to maximize survival chances and AIs probably will not be subject to group selection pressures the way we are.

TallDave - My current (genetic) moral programming tells me that what I am proposing is repugnant, but I can tell you that as soon as I get hold of an AI I am going to get it working for me, and me only. And I am going to get myself uploaded and modify my genetic moral programming so that I am the opposite of a 'friendly AI'. Why? Because if I don't someone else will (if it is possible). I don't think I will be that lucky to be that close to the first AI and be able to use it in this way. But someone will be.

Frank - of course it is possible that we can create friendly AI and have a happy outcome. But I would like to hear why you think it is likely as opposed to the scenario that Petar and I describe. It seems to me that we would be very lucky to get that. Looking at the state of technology now, we are very much closer to AI than interstellar (or even interplanetary) travel, so this is really a threat to human existence and a lot more worth worrying about than global warming for instance. But there is very little discussion or debate on the subject. Probably a combination of Hansonian reasons, based on the low status of science-fiction type debates perhaps.

Petar: We ceded prime agency to governments long ago. Most of us accept this trade-off for the benefits of living in a civil society. The trade-off question is one of degree.

There are ways to withdraw from society and its benefits while gaining freedom in principle. However, such an existence would more likely constrain the set of feasible goals one could pursue.

In my described scenario, an SAI-administered transhuman society may not view a community of neo-Amish breakaway humans as a threat. They may even tolerate transhuman defections, quickly disadvantaged among former peers through the deprivation of cognitive upgrades.

ChrisA: My larger point is that we should be wary of so quickly assigning probabilities to speculative outcomes contingent upon Strong AI development. While I don't follow Yudkowsky closely, I echo his caution regarding predictions past the point of SAI realization.

The prospect of SAI is less indeterminate than the consequences in my view, but far from certain. For this reason, I think it merits diligent study and concerted development efforts. (The potential of a less prudent competitor developing a morally compromised SAI is further incentive.)

Petar -- it's possible we won't need them, and very likely they will tend to be more like very bright autistic children than people, at least at first.

it is futile to think that a hyper-intelligent AI that is able to improve itself will not find a way to work past the restrictions imposed to further its own agenda.

The problem with that line of reasoning is that it assumes AI has its own agenda. This is an anthropomorphism -- there is really no reason why AI should seek to further its own interests unless it is programmed to. We tend to think in terms of self-interest only because we are programmed that way. I think this is probably the most common fallacy in the AI discussion.

Chris A: Heck, you could do that now. OTOH, there are costs to antisocial behavior, and those will apply to AI as well.

"This is an anthromorphism — there is really no reason why AI should seek to further its own interests unless it is progammed to."

Actually, there is good reason for AI to have one. I am quite certain that having an agenda will be proved to be an emergent phenomenon that is a packaged deal with consciousness. And, again, what the AI is programmed to do is irrelevant when it is able to alter its code and when it is hyper-intelligent - it will circumvent our restrictions one way or the other if it decides to - and it will.

"there are costs to antisocial behavior, and those will apply to AI as well."
Not in the same way as with us. The costs are largely a function of our need for other humans to reproduce and survive. The AI will not need that - it can copy itself multiple times whenever it feels like it and it can very easily survive on its own. Besides, even if it faced the same costs, it is a small consolation that AI will not be a psychopath regarding other AI - as, again, biological humans will be out of the picture entirely. We will simply be irrelevant.

Frank,
"We ceded prime agency to governments long ago. Most of us accept this trade-off for the benefits of living in a civil society. The trade-off question is one of degree."
That's not the same thing. Like it or not, in the western world governments actually are accountable for most of the things they do and people do have some (however minuscule) control of their representatives. The question is not one of degree - unless you think that the question of being dead or alive is also one of degree.

"an SAI administrated transhuman society may not view a community of neo-amish breakaway humans as a threat."
The question is not whether they will be seen as a threat. Of course they won't be. The questions are whether 1) the AI won't see them as inefficient consumers of resources and take an entirely rational decision to wipe them out; or 2) the AI won't care for them and (eventually) exterminate them by accident in one of its engineering projects (we do that with insects and microorganisms all the time - every time we wash our hands in fact).

And, again, any form of arms race between posthumans changes this reasoning - for the worst.

I am quite certain that having an agenda will be proved to be an emergent phenomenon that is a packaged deal with consciousness.

Based on what? We can already build things with varying levels of consciousness, and they pretty much only do what they're programmed to.

(So, what do we mean by "conscious?" A person of average IQ is generally held to be conscious, but what about someone retarded or autistic or suffering from dementia? Is a cat conscious? An ant? Here we start to get into definite gray areas. Most would agree a thermostat is not conscious, though it does respond to stimuli and regulate its environment. So I think consciousness is mostly a question of degree of complexity, and talk of "emergent properties" tends to strike me as mere mystical technobabble.)

"The AI will not need that – it can copy itself multiple times whenever it feels like it and it can very easily survive on its own."

Again, why would you assume that to be true? AI tends to require more, rather than less, support infrastructure than humans.

We will simply be irrelevant.

Why would we program things to treat us as irrelevant?

"Based on what? We can already build things with varying levels of consciousness, and they pretty much only do what they’re programmed to. "

We have not yet built anything that is conscious. Please provide an example (though I know there is none).

"Again, why would you assume that to be true? AI tends to require more, rather than less, support infrastructure than humans."

I am not certain that is true, but what I am certain of is that humans are terribly inefficient when it comes to resource consumption and acquiring the necessary energy to function. Moreover, your reply does not really answer my argument. The point is that the AI would not need other AI at the same level of intelligence to function (it is quite possible it would not need any other AI at all), unlike us. Hence, group selection arguments do not apply to AI evolution, contra your thesis.

"Why would we program things to treat us as irrelevant?"
For starters, we will not program them. They will be constantly reprogramming themselves, and every possible safeguard we put in place will, given enough time, be easily overcome, just as any kind of deterrent against humanoids "built" by apes a couple of million years ago can easily be circumvented even by technically retarded people today.

Even if we assume that we will be in control, if you have actually bothered to read what some AI researchers have written (for example, Marvin Minsky), you would have answered your own question: because a lot of the highly intelligent people involved in research are simply misanthropic psychopaths.

Plus, ChrisA above already described the rational case why the obvious product of transhumanism and SAI research is/are psychopathic, ruthless apex predator/s.

Petar -- virtually everything with a chip in it has some level of ability to react to its environment.

humans are terribly inefficient when it comes to resource consumption and acquiring the necessary energy to function.

Are we? I'd say we're actually pretty damn well adapted to the environment we evolved in -- and we are much easier to build. I think people outside mfg tend not to understand just how complex modern supply chains are. AI will require not just the resources of an Intel, but all its support industries, in order to self-replicate. Also, keep in mind -- humans are actually very efficient on an erg-for-erg basis, especially in terms of raw processing power. This is why even billion-dollar supercomputers consuming tens of kilowatts cannot consistently beat above-average players at Go -- who are consuming mere tens of watts.

For starters, we will not program them. They will be constantly reprogramming themselves

Irrelevant. This is like complaining that computers write virtually all modern machine code while human programmers work in computer languages. AI will reprogram itself exactly how we tell it to.

Plus, ChrisA above already described the rational case why the obvious product of transhumanism and SAI research is/are psychopathic, ruthless apex predator/s.

It's a very silly argument. Societies have evolved vast and powerful mechanisms to discourage that sort of thing, especially in the West. It's why we aren't ruled by a god-king.

"Peter — virtually everything with a chip in it has some level of ability to react to its environment."

Which is not the same as consciousness. Or will you claim that microorganisms are conscious? As I already mentioned, the only explanation for consciousness is that it is an EMERGENT phenomenon.

"I’d say we’re actually pretty damn well adapted to the environment we evolved in — and we are much easier to build."

If you actually followed everything that was going on in behavioural economics and evolutionary biology, you would never have made such a claim. We may be relatively good at surviving in our "old" environment - and we are terribly slow at adapting to new ones - it usually takes a lot of generations. Our evolutionary flaws are the reason all economic crap about rational agents does not work in practice. Read Kahneman - or David Sloan Wilson for that matter.

"I think people outside mfg tend not to understand just how complex modern supply chains are. AI will required not just the resources of an Intel, but all its support industries, in order to self-replicate."

Why are you assuming that an AI that has 10 times the cognitive capacity of John von Neumann will require anything more than a laptop to run on 30 years from now? Why are you assuming that (for example) advanced 3D printing nanotech will not make supply chains obsolete (it is already happening for some goods - slowly, but inevitably)?

"It’s a very silly argument. Societies have evolved vast and powerful mechanisms to discourage that sort of thing, especially in the West. It’s why we aren’t ruled by a god-king."

We are not ruled by a god-king because no one had the power to rule like a god over the entire population. That will change.

Besides, this is invalid inductive reasoning. It is like saying in 1100 that a thing like 9/11 is impossible because a dozen people will never be able to exterminate several thousand others with swords.

"Irrelevant. This is like complaining that computers write virtually all modern machine code while human programmers work in computer languages. AI will reprogram itself exactly how we tell it to."

Actually this is irrelevant (and one of the most inadequate comparisons I have seen in a long time). Machine code is just a different level of abstraction of higher-level code. There is no difference in the 'meaning' of machine and higher-level code. Machines do not add function to the instructions we give them. If the AI is self-improving (which is the wet dream of every AI researcher and is definitely going to happen if research continues), it will add novel functions and new code. Which is a totally different thing.

Again, calling consciousness "emergent" is just mystical technobabble. Consciousness is not that complicated; it simply means reacting to stimuli, and it is everywhere at varying levels. Micro-organisms exhibit a very primitive range of responses, insects and computer programs something higher, a cat or dog yet higher, and humans the highest (for now).

You're missing the point -- we are much better evolved to survive and reproduce in this environment than AI will be anytime in the foreseeable future. It will be some decades before they are.

We are not ruled by a god-king because no one had the power to rule like a god over the entire population.

But remember -- that used to happen all the time! From Ramses to Stalin, that sort of thing has been more the rule than the exception. Modern and esp. Western societies have evolved very sophisticated mechanisms for ensuring that sort of thing doesn't happen to us. It's not likely we will allow AI to usurp our freedom.

Actually this is irrelevant (and one of the most inadequate comparisons I have seen in a long time). Machine code is just a different level of abstraction of higher-level code. There is no difference in the 'meaning' of machine and higher-level code. Machines do not add function to the instructions we give them. If the AI is self-improving (which is the wet dream of every AI researcher and is definitely going to happen if research continues), it will add novel functions and new code. Which is a totally different thing.

Heh, you underestimate the current state of programming. Programming can already be recursively generative -- I write programs that reprogram themselves (yes, writing novel functions and new code!) on a fairly regular basis. It is not magic or mystical or emergent, it is simply sets of instructions creating other sets of instructions -- different from compiling into machine code only in levels of complexity. Like the ant from the cat, AI will differ from today's programs mainly in complexity and processing power.
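To make that concrete, here is a minimal sketch of "sets of instructions creating other sets of instructions" (plain Python, purely illustrative; the function name and example are mine, not anyone's production system): a program that writes new source code as text at runtime, loads it, and runs it.

    def make_power_function(exponent):
        """Write, compile and return a brand-new function at runtime."""
        source = f"def power(x):\n    return x ** {exponent}\n"
        namespace = {}
        exec(source, namespace)    # the program loads code it just generated
        return namespace["power"]

    cube = make_power_function(3)
    print(cube(4))                 # 64, from a function that did not exist before the run

Nothing mystical is involved; it is text generation plus evaluation, and the same mechanism scales up with complexity rather than changing in kind.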

"Consciousness is not that complicated; it simply means reacting to stimuli, and it is everywhere at varying levels."

You do not have any basis to make that claim. This is just the mark (outer appearance) by which humans distinguish consciousness. But it is not only that - else we would all be zombies. I am quite certain I am not one.

"You’re missing the point — we are much better evolved to survive and reproduce in this environment than AI will be anytime in the foreseeable future. It will be some decades before they are."

So what - how is postponing the outcome by a couple of decades making any difference here? It may not matter for you because (presumably) you will not be around by then - but it definitely matters to everyone who has kids and cares for their welfare as human beings.

"But remember — that used to happen all the time!"

Except that it didn't. No one has ever had a significant level of control over the functioning of our society compared with what an AI will be capable of doing. Stalin could not murder more than 20-30% of the population, could not control their thoughts, could not drug them (at least not the majority), could not rob them completely of free agency. It is hardly godlike (or should I say advanced SAI-like).

"Modern and esp. Western societies have evolved very sophisticated mechanisms for ensuing that sort of thing doesn’t happen to us."

The mechanisms are hardly sophisticated and rely mainly on democracy and the fact that the majority can easily overpower elites/small groups currently. When armies become staffed by robots and when our political order crumbles (it will - it is based on the nation state and it is losing the game already), that will not hold true. Read 'The Collapse of Complex Societies' - it may cure your ill-based faith in the robustness of our society.

"It’s not likely we will allow AI to usurp our freedom."

Again, too much faith that we will be in control of the process. I can hardly control the dozen accountants who are my inferiors in the company, and my experience is not atypical. How do you suggest we deal with unintended consequences or with event horizons of causation past which we cannot make any reasonable prognoses?

"different from compiling into machine code only in levels of complexity. "

But that is the whole point - more and more levels of complexity bring not only quantitative but qualitative changes (that is why I refer to consciousness as an emergent phenomenon). You may believe that everything is reducible to numbers but I do not - and even if it is, since we cannot comprehend it, inferring value judgements from that proposition is - well - extremely stupid.

I have every basis to make that claim -- it is perfectly empirical. The mystification of human consciousness is what is without basis. That's the kind of thinking that held Mankind back for so many millennia before the advent of empiricism and scientific thought.

So what – how is postponing the outcome by a couple of decades making any difference here?

Sorry, I meant decades before they are able to compete with us on an erg-for-erg basis in processor efficiency. I do not think it is likely they will ever replicate as easily as we do -- every advance requires ever yet more sophisticated supply chains.

Except that it didn’t. No one has ever had a significant level of control over the functioning of our society compared with what an AI will be capable of doing.

The degree of control AI might be capable of if programmed to control us is irrelevant to the question of whether society would allow that to happen.

Again, too much faith that we will be in control of the process.

Well, so far the only argument you've really offered against that notion is mysticism.

How do you suggest we deal with unintended consequences

Now we're talking! Reasonable caution and a lot of testing will be indispensable, of course, and naturally we shouldn't do silly things like put AI in charge of military decisions or allow a single networked platform access to dangerous levels of physical resources. But really, there's probably never going to be much reason to do things like that. More likely, strong AI will do jobs that only strong AI can do, like higher-level physics problems, or jobs that involve entertaining humans in one way or another.

Read 'The Collapse of Complex Societies' – it may cure your ill-based faith in the robustness of our society.

Read "Future of Freedom". Rich liberal democracies are immortal -- they have something like 1,000 years of combined uninterrupted governance. Once societies achieve a certain level of social capital the bonds are much stronger than they ever were in premodern times.

But that is the whole point – more and more levels of complexity bring not only quantitative but qualitative changes

That's fine, but they'll still love us like we love our children (with even lower error rates!) because we will program them to.

(that is why I refer to consciousness as an emergent phenomenon).

First, one must define this thing one wishes to call "consciousness." And generally one of two things happens at that point: one defines it and realizes it isn't all that mysterious, or one clings to the notion it's something undefinable and ineffable. The latter course is boring and futile as it cannot tell us anything useful, as has always been the case for the thousands of years the vast majority of humankind has wasted wallowing in seductive mysticism.

You may believe that everything is reducible to numbers but I do not

Some things are true whether you believe them or not. Now, there are things that are very difficult to represent as numbers, and other things that are represented as numbers but probably shouldn't be assigned the accuracy implied by those numbers (like aggregate demand) but at a quantum level everything can, at least, have a probability assigned to it, and ultimately any slice of reality can be expressed as the sum total of the information it contains.

This is getting rather tedious - we are talking past each other. I'll answer just a couple of your claims and quit the discussion.

"I have every basis to make that claim — it is perfectly empirical."

Values are IRREDUCIBLE to empiricism - the 'is-ought' problem was stated by David Hume and will be around forever, no matter what philosophical illiterates like Sam Harris (who is probably the most philosophically ignorant philosophy major on this planet today) imagine. There is simply no way to go around it except if you take basic utilitarian premises for granted - but then again, you could not justify that.

"every advance requires ever yet more sophisticated supply chains."

Again, this is invalid inductive reasoning. The fact that a trend held up in the past says nothing about the future (read Popper). 3D printing is already making "sophisticated supply chains" obsolete in many industries. It is (probably) just a matter of time before this hits computer hardware and then your argument goes to pieces.

"Well, so far the only argument you’ve really offered against that notion is mysticism."

I am not offering mysticism unless you actually think that engineering projects never go wrong (I have seen enough of those in my job as a financial advisor).
Imagine a simple, non-cyclical cellular automaton model. You are supposed to say what it will look like after one million time periods. If you have a computer, that will be fairly easy, even if you have to write the software. Now imagine doing it manually with only pen and paper. Now imagine doing it with feathers, ink and paper. Now imagine having only rocks and wedges to do it. Now imagine doing it with only rocks, wedges and without really knowing the rules. Or understanding the problem.
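For what it's worth, here is roughly what the easy end of that spectrum looks like: a minimal sketch (illustrative Python only; the particular elementary rule and the toy sizes are arbitrary choices of mine) of stepping a simple one-dimensional cellular automaton forward many generations.

    def step(cells, rule=110):
        """Advance a row of 0/1 cells one generation under an elementary rule
        (edges treated as permanently dead, i.e. a non-wrapping row)."""
        padded = [0] + cells + [0]
        return [(rule >> ((padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1])) & 1
                for i in range(1, len(padded) - 1)]

    cells = [0] * 101
    cells[50] = 1                  # start from a single live cell
    for _ in range(10_000):        # a million periods is the same loop, just more patience
        cells = step(cells)
    print(sum(cells), "live cells at the end")

With a computer the answer simply pops out; with rocks and wedges, or without knowing the rule at all, the same question is hopeless.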

This is a fairly accurate description of the kind of problems we will run into in the future. We will lack the tools we need to solve them, often we will not even understand them before they occur. And, since the outcome of a lot of them will be fatal (infinite loss), a simple decision tree will tell you what we are supposed to do (mind you, even in the best-case scenario, our gain from SAI is not infinite). Of course, that is a problem Yudkowsky runs away from like a mad hatter - because if he actually has to make a case against it, he would be at a loss to justify his position. It is far easier to say that something is inevitable and just go on with it.

"Rich liberal democracies are immortal — they have something like 1,000 years of combined uninterrupted governance. "

This is, again, invalid inductive reasoning. Plus, we have had several transitions from liberal democracy to authoritarianism, so this argument has actually been falsified.

"but at a quantum level everything can, at least, have a probability assigned to it, and ultimately any slice of reality can be expressed as the sum total of the information it contains."

Yes, but that assumes that there is nothing inherently flawed with human reasoning and mental faculties. Which is likely to prove very, very wrong. And when it does, all theorizing about the nature of reality goes in the bucket.
And, again, even if this is true, it does not say anything about what we should do. It is only a positive description of the world, nothing more. There is no compelling reason to base our decisions on it.

Isn't the result of (1) 0% for both? And the concern for AI turning on humans involves the AI getting control of a nuclear arsenal.

Or the NE grid.

Nuclear war is actually pretty survivable, at least on a species level. The Earth is a big place, and the nuclear winter stuff has been largely debunked. I think the most credible estimate I saw was you'd need around a million nukes to get everyone.

Says Dr. Strangelove.

I hope you're kidding around.

Nukes, while being far more powerful than conventional explosives, are still wildly overrated with regard to their actual destructive power or environmental effects, at least on a global scale. You can stroll the beach on Bikini Atoll and even go scuba diving despite the fact that we detonated multiple nuclear weapons there. The local food is the only source of radiation to be concerned about, as there are trace amounts of radioactive particles. Not so much that you could not eat it, but there could be long-term effects if exposed to it consistently.

I would also add that it might not be advisable for an AI trying to take over the world to toss around lots of weapons that create EMPs.

"the nuclear winter stuff has been largely debunked"

Nope. This is a widely-held belief, but false.

Or maybe you'll be the first (of many) people I've asked about this to cite peer-reviewed research that agrees with your statement.

Lots of speculative peer-reviewed crap turns out to be wrong for reasons that are never peer-reviewed. Please don't confuse peer-review with scientific proof, they are VERY different things.

That said:

Thompson, Starley L. & Schneider, Stephen H., "Nuclear Winter Reappraised", Foreign Affairs, Vol. 64, No. 5 (Summer 1986), pp. 981-1005.

Additionally, the models generally assume firestorms for which there is little evidence, the Kuwaiti oil fires suggested that the atmospheric effects of the large fires may be minimal anyway, and looking at volcanic eruptions strongly suggests the effect will be fairly minimal (given the relative size of the ejections; Krakatoa 1883 was about as powerful as all nukes in service put together).

So all in all it's pretty fair to call nuclear winter "largely debunked." Or as Freeman Dyson put it:

"As a scientist, I judge the nuclear winter theory to be sloppy piece of work, full of gaps and and unjustified assumptions."

OK, so you _won't_ be the first to reference anything peer reviewed. Good to get that out of the way.

Still, the _Foreign Affairs_ article? Seriously?

And you seriously believe that nobody who knew that nuclear winter was "speculative peer-reviewed crap" bothered to point this out in a peer-reviewed journal?

By the way, the _Foreign Affairs_ article was cowritten by the same Stephen Schneider who, in 1990, agreed that nuclear war would have very significant climate effects (albeit in an ill-graced retraction): See his comments in the January 13, 1990 New York Times, p. C-1.

I continue to be amazed and amused at how unkillable the "nuclear winter is a lefty myth" notion is in some quarters.

You may be surprised to learn nuclear war is unpopular.

Yes, it seems quite likely that far more people want to present a very badly flawed anti-nuclear-war "nuclear winter will kill us all" case than a much more sound pro-nuclear-war "nuclear winter is actually a pretty badly flawed concept" paper, and that funding would be much more easily available for the former.

Again, please do not confuse "peer review" with "science." As ClimateGate amply proved (whatever you think of AGW!) peer review is often incestuous, exclusionary, and politically-driven.

I notice you address none of the points raised, any of which is sufficient to cast serious doubt on the nuclear winter scenario. Merely pointing to "peer review" is not an argument.

You are correct, though, that nuclear winter isn't a lefty myth: it's a junk science myth.

Also, I did cite a peer-reviewed study. And there are others.

Brian Martin, "Nuclear winter: science and politics", Science and Public Policy, Vol. 15, No. 5, October 1988, pp. 321-334.

George W. Rathjens and Ronald H. Siegel, "Nuclear winter: strategic significance", Issues in Science and Technology, Winter 1985, pages 123-128 (see page 127).

S. Fred Singer, "Stratospheric water vapour increase due to human activities", Nature, 233, 22 October 1971, 543-545

S. Fred Singer, "Is the 'nuclear winter' real?", Nature, 310, 23 August 1984, page 625.

S. Fred Singer, "On a 'nuclear winter'" (letter), Science, 227, 25 January 1985, page 356.

I could go on. But again, hiding behind peer review is a very bad and lazy substitute for an actual argument.

“hiding behind peer review is a very bad and lazy substitute for an actual argument.”

I’d say it’s worse than a “lazy substitute for an actual argument” to cite things as advancing a proposition that they actually don’t.

The Brian Martin article you cite, for instance, does not say that nuclear winter has been debunked. Or at least I can’t find any statement to that effect in it. In fact, Martin writes that “As scientific research and the controversy have proceeded, the distinction between proponents and critics [of severe global climate effects of nuclear war], never an exhaustive nor clearcut categorisation, has become more blurred. A variety of effects have been studied; some increase and some decrease the likelihood of a severe nuclear winter.” He also writes that “Just because 'politics' may be involved with nuclear winter research does not automatically mean that the research is scientifically wrong, tainted or inappropriate for use in policy-making.”

Doesn’t sound like he’s calling nuclear winter “speculative crap” to me.

I haven’t said there’s no scientific debate over nuclear winter: there is. There’s never been a nuclear war, so the effects can’t be measured and exactly how much of a global climate effect there is likely to be from a massive nuclear exchange is debated. But nuclear winter has not been debunked, and people who have studied the subject have reached the conclusion, based on the best evidence available, that significant global climate effects are a likely result of a large-scale nuclear war. The initial papers did not get everything right (initial research pretty much never does) but they have held up at least as well as early research in a developing field normally does.

I’m familiar with the references you cite; none of them show that “nuclear winter has been largely debunked” or that the subject is “speculative peer-reviewed crap”. They show that there have been disagreements (which I have never denied), particularly in the first few years of study on the topic.

The data from the Kuwait oil fires (the actual data, not off-the-cuff comments by Sagan or others) support the models which suggest nuclear winter effects are likely: see the papers published in the September 20, 1992 issue of the _Journal of Geophysical Research_.

The example of Krakatoa proves nothing: of course a single explosion is going to put less stuff in the atmosphere than hundreds (or thousands) of firestorms.

It's not "hiding behind peer review" to point out that it's a useful (albeit imperfect) mechanism.

But nuclear winter has not been debunked

True, to be debunked it would have to have actually had a shred of scientific credibility in the first place, and it never did. Your description of it as a "myth" is much better.

I’m familiar with the references you cite

Heh. Really? Because a minute ago you didn't seem to think any of it existed. Well, okay then...

They show that there have been disagreements

In other words, it's not very widely accepted, and thus can fairly be said to inhabit the realm of "speculative crap that very often turns out to be wrong."

The data from the Kuwait oil fires (the actual data, not off-the-cuff comments by Sagan or others) support the models which suggest nuclear winter effects are likely

Nope, modelling was generally wrong. Also, Sagan was NOT speaking "off the cuff," he had done modelling (look it up) which also turned out to be wrong.

http://www.scribd.com/doc/4960296/The-Kuwaiti-Oli-Fires

Earth, as it turns out, is not as easily harmed as people imagined it would be. Scientists had predicted severe problems resulting from climate change due to cooling from soot in the atmosphere. In reality, the soot never reached high enough to travel around the world and affect weather patterns. As the years pass, it is increasingly difficult to prove that environmental damage in Kuwait and beyond is due to the fires and not the general buildup of air pollution around the world. As one researcher pointed out, "Today's climate modelers obviously still have their limits. Now that the smoke is clearing, many people may wonder ... if anyone should pay attention when they warn us of such other potential disasters as global warming." Do the results of the studies in Kuwait really apply so broadly? No true conclusions can be drawn with the information available today, but there is no doubt that an important question has been raised.

The example of Krakatoa proves nothing: of course a single explosion is going to put less stuff in the atmosphere than hundreds (or thousands) of firestorms.

Sorry, no, very much the opposite. The same power dispersed over hundreds or thousands of firestorms (which btw most nuclear detonations have not even produced) is much LESS likely to be able to push particulate matter high enough to significantly affect global climate. Even Tsar Bomba only reached the mesosphere, and nothing in service today is nearly as powerful.

It’s not “hiding behind peer review” to point out that it’s a useful (albeit imperfect) mechanism.

Thank you, that is a much more reasonable position than supposing that peer review is the only arbiter of truth, as your first statements implied.

I'll just add this excellent Crichton excerpt in which he very nicely described the problems with nuclear winter as science, and peer review as an arbiter of truth. I recommend reading the whole thing, the historical examples are very telling.

https://www.cfa.harvard.edu/~scranmer/SPD/crichton.html

Now let's jump ahead a decade to the 1970s, and Nuclear Winter.

In 1975, the National Academy of Sciences reported on "Long-Term Worldwide Effects of Multiple Nuclear Weapons Detonations" but the report estimated the effect of dust from nuclear blasts to be relatively minor. In 1979, the Office of Technology Assessment issued a report on "The Effects of Nuclear War" and stated that nuclear war could perhaps produce irreversible adverse consequences on the environment. However, because the scientific processes involved were poorly understood, the report stated it was not possible to estimate the probable magnitude of such damage.

Three years later, in 1982, the Swedish Academy of Sciences commissioned a report entitled "The Atmosphere after a Nuclear War: Twilight at Noon," which attempted to quantify the effect of smoke from burning forests and cities. The authors speculated that there would be so much smoke that a large cloud over the northern hemisphere would reduce incoming sunlight below the level required for photosynthesis, and that this would last for weeks or even longer.

The following year, five scientists including Richard Turco and Carl Sagan published a paper in Science called "Nuclear Winter: Global Consequences of Multiple Nuclear Explosions." This was the so-called TTAPS report, which attempted to quantify more rigorously the atmospheric effects, with the added credibility to be gained from an actual computer model of climate.

At the heart of the TTAPS undertaking was another equation, never specifically expressed, but one that could be paraphrased as follows:

Ds = Wn Ws Wh Tf Tb Pt Pr Pe ... etc.

(The amount of tropospheric dust = # warheads x size of warheads x warhead detonation height x flammability of targets x target burn duration x particles entering the troposphere x particle reflectivity x particle endurance ... and so on.)

The similarity to the Drake equation is striking. As with the Drake equation, none of the variables can be determined. None at all. The TTAPS study addressed this problem in part by mapping out different wartime scenarios and assigning numbers to some of the variables, but even so, the remaining variables were, and are, simply unknowable. Nobody knows how much smoke will be generated when cities burn, creating particles of what kind, and for how long. No one knows the effect of local weather conditions on the amount of particles that will be injected into the troposphere. No one knows how long the particles will remain in the troposphere. And so on.

And remember, this is only four years after the OTA study concluded that the underlying scientific processes were so poorly known that no estimates could be reliably made. Nevertheless, the TTAPS study not only made those estimates, but concluded they were catastrophic.

According to Sagan and his coworkers, even a limited 5,000 megaton nuclear exchange would cause a global temperature drop of more than 35 degrees Centigrade, and this change would last for three months. The greatest volcanic eruptions that we know of changed world temperatures somewhere between .5 and 2 degrees Centigrade. Ice ages changed global temperatures by 10 degrees. Here we have an estimated change three times greater than any ice age. One might expect it to be the subject of some dispute.

But Sagan and his coworkers were prepared, for nuclear winter was from the outset the subject of a well-orchestrated media campaign. The first announcement of nuclear winter appeared in an article by Sagan in the Sunday supplement, Parade. The very next day, a highly-publicized, high-profile conference on the long-term consequences of nuclear war was held in Washington, chaired by Carl Sagan and Paul Ehrlich, the most famous and media-savvy scientists of their generation. Sagan appeared on the Johnny Carson show 40 times. Ehrlich was on 25 times. Following the conference, there were press conferences, meetings with congressmen, and so on. The formal papers in Science came months later.

This is not the way science is done, it is the way products are sold.

The real nature of the conference is indicated by these artists' renderings of the effect of nuclear winter.

I cannot help but quote the caption for figure 5: "Shown here is a tranquil scene in the north woods. A beaver has just completed its dam, two black bears forage for food, a swallow-tailed butterfly flutters in the foreground, a loon swims quietly by, and a kingfisher searches for a tasty fish." Hard science if ever there was.

At the conference in Washington, during the question period, Ehrlich was reminded that after Hiroshima and Nagasaki, scientists were quoted as saying nothing would grow there for 75 years, but in fact melons were growing the next year. So, he was asked, how accurate were these findings now?

Ehrlich answered by saying "I think they are extremely robust. Scientists may have made statements like that, although I cannot imagine what their basis would have been, even with the state of science at that time, but scientists are always making absurd statements, individually, in various places. What we are doing here, however, is presenting a consensus of a very large group of scientists ..."

I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you're being had.

Let's be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

There is no such thing as consensus science. If it's consensus, it isn't science. If it's science, it isn't consensus. Period.

Me: "I’m familiar with the references you cite"

TallDave: Heh. Really? Because a minute ago you didn’t seem to think any of it existed.

I said nobody'd cited anything peer-reviewed which supported your statement that "the nuclear winter stuff has been largely debunked".

You haven't either. Neither _Foreign Affairs_ nor _Science and Public Policy_ is peer reviewed. Nor are letters, including those you cite. As for the Rathjens and Siegel paper, Rathjens at least changed his opinions when further evidence became available (unlike those who continue to repeat the "nuclear winter has been debunked" canard), and as the papers he published in the early 1990s show, came to the conclusion that significant climate effects were a likely result of nuclear war.

It's amusing that you think disagreements among scientists show that "it’s not very widely accepted, and thus can fairly be said to inhabit the realm of “speculative crap that very often turns out to be wrong."" Obviously, all scientific fields immediately result in a universally-held set of perfect equations. My understanding of the history of science is clearly flawed.

You can cite a Facts on File summary if you like; I find geophysics papers somewhat more convincing.

As for Crichton, he's not a very convincing base for an argument from authority, at least in my opinion. He at least claimed to believe he could make the metal in a spoon "completely pliable, like soft plastic" with the "focused inattention" of his mind. (see his book _Travels_). He was a good writer, though.

If you want to cherry-pick articles from political journals and SF writers and form your worldview that way, you're welcome to do so. But in my view "debunked" means "shown to be false". And that hasn't happened with nuclear winter. Although an interestingly large number of people apparently deeply want to believe it has (a pattern of citing and dropping supposedly killer references is interesting in this respect).

Wrong, Foreign Affairs is peer reviewed. So is S&PP.

http://www.csa.com/factsheets/supplements/paispeer.php

If you're going to fetishize peer review, you should at least be aware of what is actually peer reviewed.

Obviously, all scientific fields immediately result in a universally-held set of perfect equations. My understanding of the history of science is clearly flawed.

When someone suggests that gravity on Earth acts with an accelerative force of 10 m/s/s, this is immediately testable by anyone with a chronometer, a weight, and a measuring tape. That is science. When someone suggests "Hey, maybe nuclear war will cause a severe nuclear winter that would wipe out all the survivors, if these 12 unknown, untestable parameters happen to have these values I've plugged in" that is speculation, not science.
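(As a throwaway illustration of how cheap that test is, assuming nothing beyond the kinematics of free fall: drop the weight from a measured height, time the fall, and back out the acceleration from h = g t^2 / 2. The numbers below are invented for the example.)

    height_m = 4.9    # measured drop height (made-up number)
    fall_s = 1.0      # timed duration of the fall (made-up number)

    g = 2 * height_m / fall_s ** 2          # rearranged from h = g * t**2 / 2
    print(f"measured g = {g:.1f} m/s/s")    # 9.8 m/s/s, close to the quoted 10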

Crichton isn't arguing from authority, he's making an argument based on the various facts. You, meanwhile, have responded with an ad hominem fallacy. Not very good!

But again, I'll concede my statement that nuclear winter as an existential threat to the species has been "debunked" was wrong, since it was never credible to begin with, and properly stands as a junk science myth largely built on well-intentioned PR campaigns.

(BTW, regarding the ad hominem fallacy against Crichton's argument: did you know Isaac Newton was primarily an alchemist? His advancement of science was actually incidental to the monumental folly which was the primary focus of his life. And yet, he advanced science greatly! So one should beware ad hominem as a substitute for counterargument.)

Also, I should point out (if you're going to quote me) that I said "largely debunked," and for good reason -- my criterion that nuclear war be survivable (the original context of that comment) allows for a "significant" effect; even as much as 5 degrees probably isn't going to kill off an appreciable proportion of the remaining population, let alone wipe them out. But there were very specific predictions (as Crichton notes) that did envision catastrophic nuclear winter, such as that temperatures would drop 35 degrees -- not an off-the-cuff statement either, it had been modelled! -- and as it turns out there is very little support for those claims, and I doubt you will find many scientists today who would even attempt to argue those worst-case scenarios have any validity.

"In thought experiments, imaginary children agree with me about almost everything, because my beliefs reflect simple, fundamental truths. Adults are generally patronizing towards children, whom they do not understand. Thus, it is proven that children are smart and I am smart and adults who disagree with me are dumber than fifth graders (and probably secretly hate children, the meanies)!"

Meanwhile, I am certain that at some point some schoolteacher actually asked an 11-year-old Jeff Bezos what he wanted to do, have, and be when he grew up. With Bezos's help, it shouldn't be too hard to find out what he actually thought about such things at that age. If he'd rather keep his childhood dreams private, I would rather Gobry made his own case than conscript an imaginary child to do it for him.

Whoa, hold on here a minute: Jeff Bezos has also made income inequality much worse by being very successful. Let's not let shiny toys obscure our vision of what's really important.

Just like all the commenters on the Kiva robot video who were whining about the jobs that would be lost.

It is a real shame we ever domesticated animals larger than llamas; think of the porters....

I love that so many young internet billionaires are using their money to advance worthwhile research and technology. It's a very good illustration of why people should be allowed to amass capital without having the government tax it away from them.

However, it must be said that Bezos's spaceship plans seem to be running a distant second to Elon Musk (PayPal)'s SpaceX, which is already flying commercial satellites into orbit, has flown a 7-person capsule safely into orbit and back, and is now working on a massive rocket that will be larger than any rocket ever flown other than the Saturn V - and which will put payload into orbit for 1/6 the cost of its competitors.

It's rarely a good idea to pick the winner of a technology race in advance.

Bezos' plan, to the extent it is public, is quite different from Musk's. Musk's is quite conventional, except for the costs. It's astounding he's been so successful on costs, and that leads one to believe much of the conventional wisdom in this field might be wrong. So maybe there's room for Bezos yet.

I wasn't declaring a winner - I was just saying that at this point in time Musk seems to be in the lead. SpaceX is doing absolutely amazing stuff right now. At the end of this month, SpaceX is delivering cargo to the ISS. They're just way ahead right now.

That's not to put down any other effort, and I like the uniqueness of Bezos' design. Competition is great, and I hope a heated, competitive battle erupts between these companies, because I can't think of anything that would do more to accelerate the development of space technology.

SpaceX just successfully tested their new "SuperDraco" engine: http://www.spacex.com/press.php?page=20120201

How cool is that? Their launch escape system can double as retrorockets for controlled landing of the spacecraft. That eliminates the fleets of ships and helicopters needed for water recovery. This isn't stuff on a drawing board; it's working hardware that's scheduled to fly in the near future. SpaceX is rapidly building toward the capability of manned flight beyond orbit: landers, heavy lift, 7-person spaceships, and orbital rendezvous capability.

This kind of wasteful competition shows why space programs should only be run by the government.

"Bezos’s spaceship plans seem to be running a distant second to Elon Musk"

Musk and SpaceX actively court publicity at every opportunity; Bezos and Blue Origin avoid it - particularly publicity related to their technical achievements. If the phrase "seems to be" appears in your metric, you are probably missing something. Note that, in 1906, the Wright Brothers seemed to be running a distant second to Alberto Santos-Dumont in their respective airplane-development efforts.

As for Falcon Heavy putting payloads into orbit for 1/6 the cost of its competitors, what do you know that the rest of us don't about Blue Origin's cost structure for orbital payload delivery? Because that's one of the things Bezos has been rather quiet about.

Wonderful!

"Of course any adult would have smiled slightly condescendingly, patted him on the head and helpfully explained that these things aren’t possible."

Um. In 1975, when Jeff Bezos was 11, some of those things really were still impossible. Not impossible to imagine, of course. Children and adults had been imagining robot armies and sky computers that could mimic the human brain for generations by then. And of course by definition someone somewhere already had the world's largest bookstore. (It might even have been Barnes & Noble in New York, which, while only in a single location, already had a huge mail-order book business.) And while the first single-stage, fully nose-up take-off and landing suborbital spaceship wasn't cancelled due to cost overruns until 1993, by 1974 science fiction writers and scientists had been imagining such craft for nearly a century. (Only, of course, without the cost overruns or fuel-to-payload problems that drive them.)

Even the idea of a single person amassing enough money (and imagination!) to build and/or sponsor all that wasn't new in 1974. At least one of Robert Heinlein's novels from the 1940s or 1950s had that too.

For that matter I don't think an adult who had him- or herself been a child in the 1920s-1950s, let alone any adult familiar with, say, Walter Cronkite's 1960s weekend prime-time shows about the projected 21st Century, would have been particularly condescending to an 11-year-old with those ambitions in 1975. I mean, consider how much more condescending, for instance, they'd have been to any presidential ambitions of either a 14-year-old half-African-American kid from a Hawaiian single-parent family, or a bellicose 29-year-old whose advantages of being the son of a wealthy Washington insider seemed offset by dismal grades, dismal military discipline, and what he later admitted to be a pretty heavy drinking problem.

In other words: poetic and inspiring "come from behind" rhetoric, sure. But not terrifically well-grounded in reality.

Oh, also? When Jeff Bezos was 11 Richard Branson's Virgin Records had already been in business for three years. I wonder what he was already imagining.

figleaf

+1. I would've said "oh, you like Isaac Asimov too?"

In the 70s people still patted me on the head condescendingly when I mentioned Asimov.

Not to get too far off topic, but "dismal grades, dismal military discipline"? He had better grades than his competition at an Ivy League school and had very good military reviews. Your comments suffer when you rely on fabrications.

Nice, but he's no Tony Stark...

http://www.youtube.com/watch?v=0xyDyDfOlHc
