Are computers coming up with answers we cannot understand?

In mathematics at least the answer appears to be yes:

A computer has solved the longstanding Erdős discrepancy problem! Trouble is, we have no idea what it’s talking about — because the solution, which is as long as all of Wikipedia’s pages combined, is far too voluminous for us puny humans to confirm.

A few years ago, the mathematician Steven Strogatz predicted that it wouldn't be much longer before computer-assisted solutions to math problems would be beyond human comprehension. Well, we're pretty much there. In this case, it's an answer produced by a computer that was hammering away at the Erdős discrepancy problem.


…it may not be necessary for humans to check it. As Gil Kalai of the Hebrew University of Jerusalem, Israel, has noted, if another computer program using a different method comes up with the same result, then the proof is probably right.

There is more here, via Gabriel Puliatti on Twitter.


I am sure I am misunderstanding the question, but why is this difficult? Wouldn't k=C+1 and d=2 always give you a pair of integers that work?

Take 1, -1, 1, -1, ... and C = 3.

No. Let us take the sequence (-1, -1, 1, 1, -1, -1, 1, 1). Let C = 3.

The sum is then 0, and |0| < 3.

As far as I understand, the conjecture states that for ANY sequence and ANY C, integers k and d can be found such that the sum x_d + x_2d + ... + x_kd exceeds C in absolute value.

You are correct, Anonymous. Also, everyone note that the conjecture is only about *infinite* sequences. It isn't applicable to the finite sequences some have written down above.
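For anyone who wants to poke at these small cases, here is a minimal sketch (mine, purely illustrative, not code from the linked proof) that evaluates the sums in question for a periodic ±1 sequence:

```python
# Compute the sum x_d + x_{2d} + ... + x_{kd} for a periodic +/-1 sequence.

def x(n, pattern):
    """Value of the sequence at 1-indexed position n, extended periodically."""
    return pattern[(n - 1) % len(pattern)]

def discrepancy_sum(pattern, k, d):
    """Sum of x_d + x_{2d} + ... + x_{kd}."""
    return sum(x(i * d, pattern) for i in range(1, k + 1))

# BD's proposal (k = C + 1, d = 2) works on the alternating sequence:
print(discrepancy_sum([1, -1], 4, 2))        # -4, and |-4| > C = 3

# ...but fails on (-1, -1, 1, 1) repeating, as Anonymous noted:
print(discrepancy_sum([-1, -1, 1, 1], 4, 2)) # 0, and |0| < 3
```

The point is that a fixed choice of k and d cannot work for every sequence; the conjecture only says that for each sequence *some* k and d exist.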

Proof of the four color map theorem in 1976 also depended on a computer and couldn't be verified by hand.

That proof only had 1,936 cases. Tedious to verify by hand, but not impossible.

And subsequently whittled down to something more elegant.

They used a computer to generate the cases. IIRC, they did the rest by hand.

I took a diff eq and dynamical systems class with Steven Strogatz when I was an undergrad. He was the man.

BD, that doesn't work for the sequence x_i = (-1)^ceil(i/2) which goes like this:

In particular, using your value of d=2, for this sequence the sum involved would always be either -1 or 0, depending on the value of k, which doesn't work if C = 2.

Argh. Silly blog ate the angle brackets. The sequence goes like <-1, -1, 1, 1, -1, -1, 1, 1, ... >
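For the record, both claims check out numerically. A quick sketch (mine, purely illustrative): with d = 2 the partial sums for this sequence never leave {-1, 0}, while d = 4 picks up only +1 terms, so the sums grow without bound and some d always works.

```python
# The sequence <-1, -1, 1, 1, -1, -1, 1, 1, ...>, 1-indexed.
def x(n):
    return [-1, -1, 1, 1][(n - 1) % 4]

# d = 2: the partial sums oscillate between -1 and 0, so |sum| never reaches 2.
sums_d2, s = [], 0
for k in range(1, 9):
    s += x(2 * k)
    sums_d2.append(s)
print(sums_d2)  # [-1, 0, -1, 0, -1, 0, -1, 0]

# d = 4: every term x_{4k} equals 1, so the sum grows without bound.
sum_d4 = sum(x(4 * k) for k in range(1, 5))
print(sum_d4)  # 4, which already exceeds C = 3
```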

> if another computer program using a different method comes up with the same result, then the proof is probably right.

This is actually very tricky to guarantee. Different programmers tend to make common conceptual errors. Obviously the same typo is unlikely, but that's not really what you're worried about. See the literature on n-version programming.

"probably" is not the customary demand in mathematics.

They did not actually solve the Erdős discrepancy problem; they solved a special case of it.

And a very special special case at that.

I don't go to Gawker sites on general principle, so here's what may be the same article or something close (includes the same quote anyway):

Wikipedia fits in 10 gigabytes? Whoa, that would fit on a memory stick! Wiki + some good agriculture texts + some concise medical texts wouldn't be much larger, and that would be very useful for some guy in Africa who doesn't have a smartphone and is way off the grid. If he can read, he'd be the village wise man. He'd need a solar-powered iPad or something to read the memory stick, but that's totally feasible.

If you make a portable device that has all that built in, even better, and it could be made in China for probably less than $20. Think of all the good that could be done if you made these! Who would pay for it? If nobody else, this seems like an opportunity for evangelical Christians. They could add all their propaganda to it, and it wouldn't take any more memory space once they delete all the evolution and other stuff they don't like in Wikipedia.

For just the English text, 10 GB (compressed) is about right. Extend that to images and you get to around 128 GB. There's a project for building your own local copy of wikipedia. This page mentions all the relevant sizes:

CD3WD is what you're looking for. 55GB (still fits on a thumb drive) and covers tons of topics including agriculture tech.

Add in a pretty comprehensive English course from as many other widespread languages as possible, plus some kind of picture and video based English course that assumes no shared background language. Air drop them over all inhabited areas of the Earth till they become so common they're routinely used as doorstops. The world will be an interestingly better place 20 years later. Plus, there will be lots of cool doorstops.

"A computer has solved the longstanding Erdős discrepancy problem! Trouble is, we have no idea what it’s talking about — because the solution, which is as long as all of Wikipedia’s pages combined, is far too voluminous for us puny humans to confirm."

I've written code that has resulted in outputs as long as Wiki and far too voluminous for any to understand. Maybe I was just ahead of my time.

Well, precisely. You could write an equally long, incomprehensible proof that two plus two exceeds pi.

Me, too. Oh wait - do infinite loops count?

What we need is an even bigger computer running even more complex software to tell us if the proof the first computer came up with is correct.

Oooooooh! I bet the second computer is going to be called Earth! PLOT TWIST!!!


Because we don't understand the question.


That is so obviously wrong, I won't dignify it by explaining why!

Appel and Haken's computer-based proof of 4CT in the 1970s was the first major proof by a computer. Since then, people have argued about whether it's a "real" proof, because it's not surveyable by a human and so we're left trusting a computer.

Really though, any but the most basic proof involves the same kind of trust. Even a simple proof of 1 + 1 = 2 is highly difficult to truly survey if you get down to the fundamentals (yes I'm referring to Russell and Whitehead there, and yes I know their program didn't work out). Of course, people don't go around proving things from first principles; they rely on what has come before, and in doing so they are putting trust in the meat-based computers inside other people's heads. But these meat-based processes are no more reliable or less surveyable than computerized proofs -- arguably quite the opposite, because at least the code for software is surveyable when dealing with a computer. Let's put neuronal chauvinism behind us!

You know, I don't know what you're talking about,

But maybe your computer and my computer can get together and talk about this

And we'll get back to each other after that.

How is this new? Without computers it would be hard to understand or verify almost anything in the world. Take the modern New York Stock Exchange, which, if it suddenly went back to the technology of 50 years ago, would suffer catastrophic failure and cause mass starvation.

Few problems that weren't already solved 20 years ago can be solved without the help of a computer. Should we then mistrust everything that can't be counted on our fingers?

"catastrophic failure and mass starvation" — doubt it.
The stock exchange is not the economy.

There was always something unsettlingly anti-Copernican about the belief that all of math and science is within the grasp of the human mind. On the scale of all living minds, a human brain contains only marginally more processing power than a chimp's (at least when both are measured against a fly's). Yet that slight increase takes you from a chimp understanding a few square miles of his habitat to a human comprehending all of existence. For that matter, going from a 70 IQ human to a 160 IQ human you could say basically the same thing, and there the difference in mind is very minuscule.

Why should we expect that we're just across the intelligence threshold required to understand basically any concept in existence? Is it really reasonable to expect that this is all that's needed for total understanding, and that there's nothing a Jupiter-sized brain could do that we couldn't?

From a recent article posted here, brain power becomes less efficient with size/intelligence. In other words, there is a point beyond which human intellect costs more at the margin than its benefits.

Corollary: we will have an infinite number of Rocky movies and seasons of Survivor.

The main issue with computer proofs is that whether or not a given conjecture is true is only one of the things we use proofs for, and not even the most important. The best proofs are those that open up entire new areas of study.

Why do you think computers will never produce that kind of proof, just because the first examples are systematic and plodding? Give it time.

I'm reminded of Monty Newborn on computer chess: "In the past Grandmasters came to our computer tournaments to laugh. Today they come to watch. Soon they will come to learn."

Except that the history of computer chess was a resounding defeat for AI, which was abandoned decades ago in favor of brute force, with ever-improving hardware making the pure position crunching easier. Computers continue not to play chess.

The attitude expressed in your post is actually as outdated as the vacuum tube.

Huh? Calling computer chess a resounding defeat of AI is like calling airplanes a resounding defeat of the human desire to fly, because engines or something...

Also, it's completely ignorant of what's actually gone on in computer chess. Perhaps half the rise in Elo ratings since the 70s is due to improved hardware, and the rest to an improved understanding of how to think about the game (software).

I didn't intend to exclude that possibility. For computer proofs read "computer proofs consisting of voluminous proof by exhaustion of the type we have heretofore seen."

That said, I'm skeptical that classic proofs are around the corner. Math is much less well specified than chess and even were it better specified the search space is huge. But eventually -- sure.

Fair enough. I wouldn't disagree with this restatement.

I just solved the P=NP problem. The answer is No.

Proof: this solution

Give me my million dollars.

Hmmm... is your supporting data sufficiently incomprehensible, tedious, and large?

If nobody can understand it, is it really a proof?
A proof is a social construct, an agreement. Check this post by mathbabe and the links there.

All mathematics that has been discovered to date is no more and no less trivial than well-designed wallpaper, from the point of view of someone who has understood it. The question of whether a particular math result has enormous chains of reasoning or just a few steps is insignificant, once the result is understood. Computers are of course going to be repositories for reiterated actions that nobody with a finite number of days would care to follow; in a few decades it will stupefy the average person to think anyone could ever have thought otherwise.

The midpoint of a journey to the next town over is quite as interesting as the midpoint of the journey to Alpha Centauri would be, possibly more so, because of simple little unpredictable unreiterable things like association, homes, familiarity, taverns, likeable creativity, admirable sacrifice, unexpected love, and human-level rewards that dwarf a millionfold the excitement or disappointment of even the dreamtime platonic ideal of that acceptance or rejection letter from that "elite" coastal STEM school.

To me the real question is whether we can write a computer program whose correctness can be checked by hand with as much reliability as mathematicians already check each other's proofs. Proper choice of language, and Knuth-style literate programming, should do the job. I suppose Wolfram would claim that Mathematica notebooks deliver this.

NB: "checked by hand" is not crucial. I say it only because empirically testing it with another algorithm is not quite in the traditional spirit of maths.

Formally correct programs are an active field of study:

There are formally verified programs that are used in the real world, but it is still mostly an area of research, not a routine programming tool.

Formal verification of programs has very narrow applications, because most code written nowadays is written to very loose specifications. It's not so difficult to write software that does what you want; the hard part is to actually define what you want. Most large software deployment failures don't come from buggy code but from incorrect specifications that look good piece by piece while failing together. You get asked for a car that seats 6 adults, gets more than 30 miles per gallon, goes from 0 to 60 in 5.3 seconds, can drive 1000 miles without refueling, and fits in a compact car spot. All of those requirements are doable individually. Together, not so much.

Well, the proof lacks surveyability...

But similar things have happened in other domains. Chess endgame tablebases are full of multi-piece solutions that run 200+ moves, and the solutions cannot be boiled down into simpler rules ("cut off the opposing king's space row by row and file by file with your rook, and move your king up to support your rook, and deliver mate").

Without doubt, there will be a day when computers do real mathematical proofs. But that day has not come yet. The reported "proof" of Erdős's discrepancy conjecture is not a proof, but a computation checking that this conjecture is true in one specific instance. There is nothing surprising or interesting in the fact that we cannot follow this computation. I cannot follow the details of multiplying two 10,000-digit numbers either, which a computer can do in microseconds. The Erdős conjecture is about infinitely many such instances, and thus no amount of computer computation of this type will solve it.

A proof by induction establishes a base case and then shows the result follows for all other cases — in this case, C > 2.
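To make the base-case distinction concrete: the C = 1 analogue is small enough to settle by naive enumeration, since the longest ±1 sequence whose discrepancy stays at most 1 is known to have length 11. A brute-force sketch (mine, unrelated to the SAT-based proof, which reportedly had to rule out sequences of length 1161 for C = 2):

```python
from itertools import product

def max_discrepancy(seq):
    """Largest |x_d + x_{2d} + ... + x_{kd}| over all d and k for a finite sequence."""
    best = 0
    n = len(seq)
    for d in range(1, n + 1):
        s = 0
        for i in range(d, n + 1, d):  # indices d, 2d, 3d, ... up to n
            s += seq[i - 1]
            best = max(best, abs(s))
    return best

# Find the longest +/-1 sequence with discrepancy <= 1 by exhaustive search.
n = 1
while any(max_discrepancy(seq) <= 1 for seq in product([1, -1], repeat=n)):
    n += 1
longest = n - 1
print(longest)  # 11
```

The search space doubles with every added term, which is exactly why the higher cases are out of reach for this kind of enumeration.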

A "formal proof" doesn't rely on semantics; it's a purely syntactic operation (which is why a computer can do it). Mathematicians have realized it is a problem when human errors go undetected for potentially decades, so members of the IAS have spearheaded an effort to re-found mathematics in a way that will be more amenable to computer verification:

I'm beginning to wonder whether it is worth it to keep you guys around.

Are people coming up with articles we don't understand? It looks like it.....


If you'll pardon a non-spam blog plug ;-> Dick Lipton and I covered this in some detail at



Comments for this post are closed