Assorted links

Comments

6th link seems to be broken

#2, I don't know the data, so it's hard to tell if this is a Fisking.

Yah. Having gotten most of the way through Pinker's book and reading the current criticism, I have no idea what to think about the numbers. Everyone seems to have an axe to grind.

I haven't read the book, but the web site linked to doesn't really seem to attack the book's core thesis.

The book's thesis seems to be that society is much better today than in the pre-industrial past. The web site seems to believe that the author puts too much blame on religion for crimes in the past, but doesn't dispute that the crimes happened. Is the web site tangential to the core thesis of the book (in which case I may read it), or does the book devote a lot of time and attention to identifying religion as the source of evil?

I agree. The problem is that death toll estimates are hard to do.

White seems like he is doing his best here:
http://necrometrics.com/pre1700a.htm#America

He certainly doesn't deserve the scorn that link #2 gives him.

#2 - we're seeing Steven Pinker turn into Malcolm Gladwell before our very eyes. Fascinating data point for those interested in cognitive bias.

#5, someone get Mrs. Makhubo on Twitter!

2: "The three founders of Protestantism, Luther, Calvin, and Henry VIII, had thousands of heretics were burned at the stake" is an odd statement. I don't know about Luther and Calvin, but the heretics that Henry had burnt were Protestants, which you wouldn't infer from that statement. He stopped being a Roman Catholic and became what you might call a London Catholic. He was Catholic enough to burn Lutherans. The Church of England became protestant under his son and his second daughter, who, being protestants, didn't go in for burning much. His first daughter, Mary, burned with enthusiasm, but then she was Roman Catholic.

Weren't a lot run by governments? In other words, the problem is not so much religion, but idiots who follow orders?

"Moore’s law squared?"

Aka, there is no great stagnation.

Equals faster porn for everyone!

I'm not fully convinced on #1, Moore's law. I worked in HPC and benchmarked some of the hardware for acquisition.

I suspect the story is more complicated, mixing other hardware changes with algorithm improvements. That was an era that saw the introduction of vector architectures, massive parallelism, and large on-chip caches. To be sure, parallelism, cache, and vector units require new algorithms to exploit, but those algorithms cannot work at all without the 1,000 or so processors to harness.

An example on Bixby's web site refers to only a 4-CPU machine, but I recall exploiting hardware features such as larger local memory and pipelining on those machines. Moreover, problem performance tends to be non-linear, slowing as the problem size increases. Newer hardware with relaxed memory constraints and other features unrelated to circuit density per se enabled new algorithms that would not work at all on earlier machines.

I'd be curious what fraction of the HPC problem space you think is still beyond mainline machines. When I hear of China building the world's fastest computer, I'm bored. Am I wrong in that?

http://en.wikipedia.org/wiki/Grand_Challenge

This is a somewhat dated list, but the most common theme of the remaining outstanding problems is probably some form of fluid simulation, possibly with chemical or nuclear components. There are many applications, such as the simulation of nuclear weapons, combustion, and aerodynamics, that have a fair amount of money behind them. For obvious reasons, doing this sort of thing with relatively fine-grained grids rapidly gets you into very large numbers of elements that need to be simulated each time step. Also, while we've made a lot of progress on applications that are embarrassingly parallel, applications with a lot of global communication have not benefited nearly as much from hardware advances.
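To make the communication point concrete, here's a minimal sketch (my own illustration in plain NumPy, not from any of the linked codes) of one Jacobi relaxation sweep, the kind of stencil update at the heart of many fluid and heat solvers. Each cell depends on its four neighbors, so when the grid is split across nodes, every time step forces a border exchange before the next sweep can start, and that communication is often what limits scaling:

```python
import numpy as np

def jacobi_step(u):
    # One Jacobi sweep: each interior cell becomes the average of its
    # four neighbors. In a distributed run, the one-cell-wide borders
    # ("halos") must be exchanged with neighboring nodes every step.
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

# Toy run: a hot top edge relaxing toward steady state.
grid = np.zeros((64, 64))
grid[0, :] = 100.0
for _ in range(500):
    grid = jacobi_step(grid)
```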

I've been out of HPC since 2006, but I think you are more or less right. When I left the field we were no longer using state of the art supercomputers, but commodity hardware. The highest end stuff was primarily for a narrow range of problems.

I used to design large computers. The set of people we sold to moved around with time: some customers found their problems could be solved by mainstream machines as they improved, while new customers appeared when problems that were previously intractable began to seem feasible. There were also anchor customers like the national labs and the other government entities. But it's not exactly a growth business. In fact, the decline of the traditional vendors has more to do with the increasing cost of fielding something competitive with mainstream hardware and the roughly constant size of the market.

Finch, interesting to hear the perspective from the other end. For us, adapting to the changing hardware was the challenge.

I was at a second tier lab, big, but not Los Alamos. We stopped sending out benchmarking teams when the commodity guys brought the price point down to a level where they couldn't be bothered with us anymore. The update cycle also went from 3 years to 1 year.

My observation of the market at the time was that developing the next generation of hardware plus fabrication plants cost more than the limited high-end market could support. Hard to recover $10^9 selling a dozen machines. This pushed the market to massively parallel commodity processors. Transitioning from big vector architectures like Crays to massively parallel machines was quite a programming job, mostly reserved for the big workhorse CPU-intensive codes.

Since you left HPC, GPUs have become a big thing. They amortize their investment over the graphics business, which is a considerable advantage. Other than that I think most money goes into interconnect, where commodity hardware just isn't cutting it and doesn't care about the right things, and memory, where money gets you performance. Notably, none of those areas require the building of fabs.

@Finch

GPUs are big indeed, but sometimes over-hyped: it isn't an easy task to port code, and it requires significant effort. The learning curve is steep. The hardware is proprietary (mostly Nvidia). Nominal GPU teraflops must be heavily de-rated the moment you face a non-parallel problem; a conventional architecture is more flexible about handling parallel and serial tasks. Overall, I view GPU teraflops figures a bit skeptically.
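The de-rating here is basically Amdahl's law. A minimal sketch (the fractions and speedups are illustrative numbers, not measurements):

```python
def amdahl_speedup(parallel_fraction, parallel_speedup):
    # Overall speedup when only part of the work accelerates (Amdahl's law).
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / parallel_speedup)

# Even if the GPU runs the parallel part 100x faster, a code that is
# only 90% parallel tops out near 9x, nowhere near the nominal figure:
print(amdahl_speedup(0.90, 100.0))  # ~9.2
print(amdahl_speedup(0.50, 100.0))  # ~2.0 for a half-serial code
```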

@Rahul

Sure, I'd agree with most of that. I'd put it a little more strongly even: If you want guaranteed performance on any code, it's hard to count on more than a few gigaflops.

But I was making a market point. In the past couple of years, Nvidia has basically captured the HPC market. GPUs are doing much better there than they are in consumer apps, probably because a lot of time has already been spent vectorizing and threading those codes. Also, CUDA adds a lot to GPU usability.

@Rahul again

I'd even say that the real problem with GPUs in HPC isn't the requirement for parallelism, because we're being pushed down that path no matter which technology we work in - single threaded performance stopped improving years ago. The real problem is that inter-thread communication and, in particular, synchronization, is still pretty primitive in GPUs and CPU/GPU systems. But that won't last long.

Everybody will be dealing with systems that contain a few latency-optimized cores and many throughput-optimized cores communicating through a memory that is shared or distributed, probably at the compiler's discretion, within a decade anyway. The distinction between CPU and GPU is going away.

In chess, where about half the advance is due to faster machines and half to better algorithms, the better algorithms are mostly not about better using machine features; they are about better understanding features of the problem. But for chess in particular, parallelism doesn't work all that well.

I'm surprised that parallelization does not work for chess given the divide and conquer nature of most algorithms.

It works, just poorly.

The sort of tree search done in conventional game-playing is inherently serial. You keep track of the best thing you've seen so far, and use that to cut off searches that can't possibly improve on it. If you search in parallel, you wind up doing work that you would have abandoned had you done the search serially.
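The serial dependency is visible in a bare-bones alpha-beta search (a generic sketch; the node interface with children() and evaluate() is hypothetical, not any engine's actual code). The alpha bound tightened by earlier siblings is exactly what prunes later ones, so siblings searched in parallel can't benefit from each other's bounds:

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    # node.children() and node.evaluate() are assumed helper methods.
    if depth == 0 or not node.children():
        return node.evaluate()
    if maximizing:
        best = float('-inf')
        for child in node.children():
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)  # the bound tightened by earlier siblings...
            if alpha >= beta:         # ...is what cuts off the later ones
                break
        return best
    else:
        best = float('inf')
        for child in node.children():
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```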

Take integer factorisation. AFAIK the newer algorithms are much faster, but only on numbers so long that you could not have tried factorising them with old hardware. That is, knowing what we know now, we would still have used the old factorisation algorithms at the time, since they were better at the number lengths then being practically factorised.

How much algorithmic improvement comes from changes in hardware allowing bigger data sizes, at which the new algorithms become superior?
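A toy illustration of the crossover (my own simplified sketch; the actual crossover size depends entirely on the implementation and the machine). Pollard's rho finds a factor of a composite n in roughly n^(1/4) steps versus n^(1/2) for trial division, but its per-step overhead means trial division still wins on small inputs:

```python
import math
import random

def trial_division(n):
    # Smallest prime factor by trial division: O(sqrt(n)), very cheap steps.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n

def pollard_rho(n):
    # Pollard's rho: far fewer steps asymptotically, but each step costs
    # more, so it only pays off on larger n. Assumes n is composite.
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        c = random.randrange(1, n)
        y, d = x, 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d
```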

As an aside, I see a parallel to Tyler's disagreement with Paul Krugman. I was unconvinced by the Moore's law article, despite the argument and evidence. On the other hand, a discussion of "threats to validity" would have demonstrated that the author was not only aware of, but had analyzed and thought through, the other factors. Instead of digging in my heels as a skeptic, I would likely have deferred to someone who had thought this through more deeply than myself.

Arguing the other side is not required, but it is, among other things, a powerful signaling and rhetorical device. In some domains, such as scientific papers, it is also considered good form. The baseball analyst Bill James is a master of the technique. Far more skilled statisticians should read his work to learn a thing or two.

3. "After all, there are many orders of magnitude more standard liberals than standard libertarians, and they possess many orders of magnitude more influence. We pick our fights, and I’d like to pick ones that stand a chance of making a real difference."

First sentence, first part wrong, second part right. Second sentence, first part right, second part wrong.

"I’d rather not be affiliated with a “movement” that includes him in even a conflicted way."

Oh, and well, then see ya. The only people I'm not interested in are hypocritical exclusionists.

What major changes happened in linear programming algorithms after Dantzig's simplex method and Karmarkar's algorithm, both of which were pre-1985 developments, with LP already solvable in polynomial time? I'm wondering what caused the "three orders of magnitude" of algorithmic speedup.

Karmarkar's interior-point method is polynomial, but the simplex method isn't, at least not theoretically.

I think with linear programming the success of the algorithm depends on the problem type. I notice in the post he says "solvers," so maybe he's talking about a suite of algorithms and better identification of which algorithm works best on which type of problem.

Lots of development happened on the implementation side, beyond the theoretical algorithms, in the late 80s and 90s. For example, the late 80s and early 90s brought many interior-point algorithms for LP that were both practically and theoretically faster than Karmarkar's original algorithm, the starting point for that line of work.

In addition, there were many numerical linear algebra improvements during that time that boosted the performance of the operations used in both the simplex method (which works well in practice despite having exponential-time cases) and interior-point methods; in particular, advances in LU factorization and Cholesky factorization led to nice practical speedups.
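For anyone who wants to poke at an LP solver directly, here's a minimal example using SciPy's linprog (the toy problem is my own; production solvers like CPLEX and Gurobi handle vastly larger instances):

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so we negate the objective.
c = [-3, -2]
A_ub = [[1, 1],
        [1, 3]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point (4, 0) with objective value 12
```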

For anyone who doesn't recognize Bob Bixby: he created two of the top-performing mathematical programming solvers available today, CPLEX (sold to IBM) and his new project Gurobi (he is the "bi" in guroBI). The guy arguably knows how to implement LP better than anyone in the world.

Anyone curious to learn more about the improvements in LP can go straight to the paper Bixby wrote, available here:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.821&rep=rep1&type=pdf
"SOLVING REAL-WORLD LINEAR PROGRAMS: A DECADE AND MORE OF PROGRESS"

Did anybody else notice that 2 and 3 are related?

3. Does anyone have any idea of why Will Wilkinson's cognitive function declined so? Trauma to the head during injury? Heavy use of antidepressants or some other brain deadening drug? He's not old enough for it to be pure age. But it is mental decline and not just changes in preference and opinion. It's one thing to change preferences and advocate those preferences despite their weaknesses. It's another thing to grow blind to the weaknesses of your arguments, to deny facts and the necessary logical conclusions of facts and to decry people who disagree with you as bad people. Wilkinson never used to do any of this. I'd be willing to bet some medical condition affecting the brain, even if he doesn't know it yet.

Friends don't let friends write high.

Who does he socialize with?

The philosophy Wilkinson espouses is egalitarian, not libertarian (or classically liberal). He doesn't seem to know the difference. And the difference between classical liberals and libertarians is, as I understand it, that libertarians are sceptical of public goods and externalities. For classical liberals, i.e., followers of Adam Smith, public goods and externalities are included in the worldview.

And Wilkinson makes the mistake of treating democracy as a single, static state: "Democracy is as good as it gets." Argentina is a democracy; so is Norway. But they are not the same version of democracy. And frankly, democracy is less important than governance. How you choose your leaders is less important than whether they govern well. California has plenty of democracy; good governance, less so.

Simple: WW matured and came to his senses.

His piece at BHL is the best they've posted. What a refreshing change from their recent drudgery of meticulous self-categorisation.

#2 Oh man, am I pleased to find that my reading a Cathar chronicle in the original, horrible Latin during college is going to bear fruit. Just as soon as I remember what it said.

Pinker rants like a crazy man when it comes to religion, which is a shame because when he's not evangelizing he's a great read.

3. Does anyone have any idea of why Will Wilkinson’s cognitive function declined so?..."

While I disagree with you about the severity of the issue, he has developed a nasty habit of name-calling and has been hired and, er, moved on from a few places as of late. His blogging has also been rather sporadic and infrequent. I hope he's just working on a book or something, since he's a valuable voice in the discussion. I do find his name-calling distasteful, but I've always just assumed it's the lefty in him. That's par for the course. I still consider myself a fan, however.

So when is today's Krugman post coming out?

Why don't you type one up and link to it?

Will Wilkinson saying things I believe: “Ideological labels are mutable, but at any given time they publicly connote a certain syndrome of convictions.” Deemphasize your political labels and “come out as inscrutably idiosyncratic”; try individualism instead.

I want to to stand up on my desk and shout "Oh Captain, my Captain" to Mr. Wilkinson for this bit.

This is very much how I currently behave. A related challenge for me is to treat others as if they are idiosyncratic individuals, to see past whatever labels they may use and into who they are on a deeper level, to move past relatively superficial political discussion and get to the meaty values that lie behind their convictions, for this is the territory in which we are most likely to connect and create a sense of mutual understanding and respect.

It's one thing to stop using labels to define oneself, but it's quite a bit more challenging (for me) to stop defining others by crude labels. But it is rewarding, important work, in my opinion.

I don't think many people have a problem with the "Deemphasize your political labels and “come out as inscrutably idiosyncratic”; try individualism instead". That's nearly a core tenet of Americanism.

However, the rest of his writing is a little more problematic:
"Non-coercion fails to capture all, maybe even most, of what it means to be free."
Really? That seems a very hard claim to prove, and he doesn't seem to try very hard.

As proof he says:
1) Taxation is often necessary and legitimate.
2) The modern nation-state has been, on the whole, good for humanity.
3) Democracy is about as good as it gets.

But that's a strawman argument. You might find some anarchist hermit living in a cave who genuinely disagrees with all three of these, but most libertarians are not arguing against all taxation, the existence of nation-states, and democracy!

And this statement:
"The argument over which rights and liberties ought to be treated as constitutional fixed points, and thus ought to be off the table of democratic negotiation, is not a debate between liberals and the people who think taxation is theft or that the state is an inherently criminal enterprise. It’s a debate within liberalism between liberals."

Seriously? The only proper debate about constitutional rights is between liberals! Even if you were to include the older definition of liberalism, that's an outrageously cliquish statement.

And then he degenerates into the name calling of Ron Paul. I'm not a supporter of Ron Paul and I won't be voting for him, but classifying Ron Paul's arguments as 'years of vile fear-mongering' & 'bullshit prophesies of hyperinflationary race war' is ridiculous and inflammatory.

And then we come to the crux of his article:

'I’d rather not be affiliated with a “movement” that includes him in even a conflicted way. Anyway, I would encourage other decreasingly standard-libertarian libertarian-ish types to hasten their passage through the liminal “bleeding heart” stage and just come out as liberals.'

This basically translates to Libertarians, as exemplified by Ron Paul, are racist fear-mongers and anybody with a heart should just admit they are really liberals.

You should vote for Ron Paul - that guy really should become President

I hear your points and I respect your positions. I'm definitely not asserting Will Wilkinson über alles in all respects, but rather just in relation to the quotes I identified. For me personally, the interesting part of the essay was the reminder not to get too caught up in labels and affiliations and to be mindful of the impact they can have, to be clear that identifying oneself as believing in ideology X will impact different people in very different ways and might even impede your ability to have a useful conversation with them. I understand and respect that other folks may choose to focus on different portions of the essay than I do.

I think Will perhaps goes a bit too far.... I don't think this is an either/or situation but rather is about finding the appropriate balance of using labels and affiliations. But given my assumption that most of us (including myself), on average, are a bit unmindful when it comes to labels and affiliations, I appreciate his desire to move us in the opposite direction.

Ron Paul can't be president, he doesn't understand that the fastest solution to our recession is healthcare reform.

Who cares? The establishment hates the guy - I'm all for the chaos and disruption candidate.

#4 - Does this mean history has restarted?
