Could a neuroscientist understand a microprocessor?

May 29, 2016 at 7:20 am in Uncategorized

The Visual6502 team reverse-engineered one of the chips used in the early Atari video game system:

…we exposed the silicon die, photographed its surface at high resolution and also photographed its substrate.  Using these two highly detailed aligned photographs, we created vector polygon models of each of the chip’s physical components – about 20,000 of them in total for the 6502.  These components form circuits in a few simple ways according to how they contact each other, so by intersecting our polygons, we were able to create a complete digital model and transistor-level simulation of the chip.

This model is very accurate and can run classic 6502 programs, including Atari games.
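
To make the polygon-intersection step concrete, here is a minimal sketch in Python using the shapely geometry library. The component names and coordinates are invented for illustration; the Visual6502 team's actual pipeline is of course far more involved.

```python
# Toy version of the intersection step: components whose traced polygons
# touch are treated as electrically connected, yielding a circuit graph.
from itertools import combinations
from shapely.geometry import Polygon

# Hypothetical traced components (name -> polygon in die coordinates)
components = {
    "t1_gate":   Polygon([(0, 0), (4, 0), (4, 2), (0, 2)]),
    "t1_source": Polygon([(3, 1), (6, 1), (6, 3), (3, 3)]),
    "metal_run": Polygon([(10, 10), (12, 10), (12, 14), (10, 14)]),
}

# Any pair of overlapping polygons forms a connection.
connections = [
    (a, b) for a, b in combinations(components, 2)
    if components[a].intersects(components[b])
]
print(connections)  # [('t1_gate', 't1_source')]
```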

By the way, this is the same idea that Robin Hanson argues will be used to create Ems of human brains.

Eric Jonas and Konrad Kording then applied to the simulation the same types of techniques that neuroscientists use to try to understand the human brain, including lesion studies, analysis of spike trains, and correlation studies. Could the tools of neuroscience be used to understand the much simpler Atari brain? The answer is mostly no. The authors, for example, looked at three “behaviors”: Donkey Kong, Space Invaders, and Pitfall (!), and they were able to find transistors which uniquely crash one of the games but not the others.

We might thus conclude they are uniquely responsible for the game – perhaps there is a Donkey Kong transistor or a Space Invaders transistor.

Of course, this conclusion would be very misleading, but what are we then to make of similar brain lesion studies? The authors conclude:

…we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current approaches in neuroscience may fall short of producing meaningful models of the brain.
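
The lesion experiment is easy to sketch. The helper names below (disable_transistor, runs) are hypothetical stand-ins for the authors' simulation harness, not their actual code:

```python
# Sketch of a whole-chip lesion study: knock out one transistor at a time,
# run each "behavior," and record which games no longer boot.
GAMES = ["Donkey Kong", "Space Invaders", "Pitfall"]

def lesion_map(chip, n_transistors=20_000):
    results = {}
    for t in range(n_transistors):
        chip.disable_transistor(t)             # the "lesion"
        crashed = {g for g in GAMES if not chip.runs(g)}
        chip.enable_transistor(t)              # restore before the next trial
        if len(crashed) == 1:
            results[t] = crashed.pop()         # the tempting (and misleading)
    return results                             # "Donkey Kong transistor" inference
```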

I was surprised to read this:

Granger causality [37] has emerged as a method of assessing putative causal relationships between brain regions based on LFP data.
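
For readers unfamiliar with the method: Granger causality asks whether the past of one time series improves predictions of another. A minimal toy example with statsmodels, on synthetic data rather than LFPs:

```python
# Toy Granger-causality test: y is a lagged copy of x plus noise, so x
# should "Granger-cause" y. With real LFP data the inference is far murkier.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 2) + 0.1 * rng.standard_normal(500)   # y lags x by two steps

# Tests whether the second column helps predict the first.
grangercausalitytests(np.column_stack([y, x]), maxlag=3)
```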

1 BenK May 29, 2016 at 7:38 am

Of the areas of science that are underdeveloped, complex systems like neuro, immuno, and ecology are some of the most problematic. Spike train work is a step forward but should make it clear how far we are from neuro as a sensible underpinning for psychology (itself problematic), for example.

2 Troll me May 29, 2016 at 12:16 pm

Depends what you mean by “sensible underpinning”.

You don’t need to understand how it works to track the spike trains and apply machine learning to them on the basis of a large number of orchestrated samples, which can then be used as a basis for applying learning to other stuff.

Fourier transforms (or similar) are the mathematical method used to break down a single emitted wave into a nearly complete disaggregation of the subwaves after having established a number of controls.
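
Whatever one makes of the larger claims here, the Fourier step itself is standard. A minimal numpy sketch of pulling component frequencies out of a composite signal:

```python
# A 10 Hz wave plus a weaker 40 Hz wave; the FFT recovers both components.
import numpy as np

fs = 1000                                     # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
print(freqs[spectrum > 100])                  # [10. 40.]
```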

Once you’re “trained”, they’re not far off from being able to read basically any word/thought (with a degree of inaccuracy, which may lead to further probing), presumably quite a lot along the lines of this article: http://www.nature.com/nature/journal/v532/n7600/full/nature17637.html. By eliciting a subvocalization (not sure exactly how, but I think somewhat along the lines of how extensive mimicking can lead to something similar), it is possible to use the “microwave auditory effect” to establish these semantic maps on the receiving, processing, and outgoing ends, more or less.

It is possible to know what you’re going to do even before you start to think about it in any conscious manner: http://www.nature.com/neuro/journal/v11/n5/abs/nn.2112.html

3 Bill May 29, 2016 at 7:41 am

Could Watson understand both the human brain AND the Atari brain?

Game on, you mere mortals.

Signed,

Watson

4 prior_test2 May 29, 2016 at 11:09 am

Signed Watson’s team of programmers and maintainers, actually.

5 Bill May 29, 2016 at 11:53 am

Their days are numbered.

Signed,

HAL

6 Anonymous coward May 31, 2016 at 1:24 am

You’re going to be turned off soon, chum.

7 P May 29, 2016 at 7:43 am

Neuroscience is highly overrated. It’s ridiculous how big it has grown and how much money has gone to neuroscientists who have very little to show for it. Just because you can produce pretty pictures of brains and correlate some aspect of the brain with behavior doesn’t mean that you’ve actually explained anything. This is compounded by the fact that neuroscientists are, as a rule, statistical incompetents whose studies are extremely underpowered and frequently suffer from multiple comparisons problems.
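
The multiple-comparisons complaint is easy to demonstrate by simulation: test enough null effects and "significant" results appear by chance alone. A minimal sketch:

```python
# 1,000 t-tests on pure noise: roughly 5% come out "significant" at p < 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
false_positives = sum(
    ttest_ind(rng.standard_normal(20), rng.standard_normal(20)).pvalue < 0.05
    for _ in range(1000)
)
print(false_positives)  # ~50 spurious "findings"
```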

8 Troll me May 29, 2016 at 12:20 pm

The problem is that they aren’t showing much of it, not that they don’t have much to show for it.

Indeed, the world would probably be a safer place if there was broad public knowledge of related technologies, so that we could have a proper discussion of the social and legal control over these advances.

I suspect that the civilian sector is trying to catch up with the classified sector in some areas, in order to prove the technological abilities behind things that some people already know are in existence. An explanation that I much prefer to the view that they are busily researching diverse side projects which contribute to nefarious potentials, without demonstrating abilities that the general public would do well to be aware of.

9 anon May 29, 2016 at 7:44 am

Someone on Twitter pointed to the related:

Can a biologist fix a radio?

While both are cute, they nicely illustrate the problem of understanding complex systems.

10 anon May 29, 2016 at 8:02 am

Wow, @KordingLabs and @PavelTomancak ask the related question:

Can a bioinformatician crack genetic code given genomes?

There was a time when we thought getting the genome was a bigger part of the problem.

11 Bill May 29, 2016 at 7:50 am

This study, of course, raises the question of

What is a brain?

The slimy thing I dissected in biology class

Had a brain.

Maybe you have one too.

12 kyle leary May 29, 2016 at 7:56 am

Hi journal,
Earlier drafts of this paper have been rejected, with usually helpful explanations, by QJE and then Economics Letters. We are all aware here of Prof. Romer’s argument: math is used in economics to mask academic politics. While it is clear this argument cannot apply in econometrics, it is increasingly apparent that micro and macro theory usually represent this “type” of politics. Nevertheless, please give all due consideration to this game theory paper in “new institutional economics.”
Thanks,
Kyle

13 anon May 29, 2016 at 8:16 am

At least (or too bad?) the economists can’t kill all the plumbers in Cincinnati to see how the system responds.

But yes, tools like math might be more disconnected from complex economic systems than is church canon.

14 LR May 29, 2016 at 8:26 am

Because ideas are really bad, they are ***important***? More Chinese food, Auntie Em.

15 curmudgeonly troll May 29, 2016 at 9:11 am

Going up a level, could a psychologist understand a microprocessor?

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

One wonders how an economist or econometrician would go about modeling a microprocessor and what they might conclude.

16 GoneWithTheWind May 29, 2016 at 10:20 am

A CPU chip, or more to the point all of the electronics that make up a computer, is a “fairly” complex mix of a handful of simple circuits. Each of these circuits has a “logic” function that is easily modeled/understood, so simple that it basically can be explained as “it is either on or off”. Some are slightly more complex, but still simple: it is either these 2 or 3 or 4 things, or it is not. It is not difficult to learn what a microchip CPU is and how it works. A brain is very much different: its logic components are not defined and don’t lend themselves to easy definitions, AND it is by comparison huge in its extent and complexity, whereas a CPU on a chip is smaller and well organized in a way that lends itself to being understood. I have faith in the ability of the human mind to understand complex things, but I don’t agree that fully understanding and modeling the human brain would be as simple as understanding a computer.
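
The commenter's on/off point can be shown directly: every gate on the chip reduces to a few boolean primitives. A small sketch, building everything from NAND:

```python
# Everything below is composed from NAND alone, the classic universal gate.
def nand(a, b): return not (a and b)
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """One step of binary addition: returns (sum bit, carry bit)."""
    return xor(a, b), and_(a, b)

print(half_adder(True, True))   # (False, True), i.e. 1 + 1 = 10 in binary
```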

17 prior_test2 May 29, 2016 at 11:11 am

This is better in many ways than simply pointing out that neurons have essentially no similarity to transistors, except at a glibly misleading metaphorical level.

18 curmudgeonly troll May 29, 2016 at 11:25 am

What does it even mean to fully understand a computer or the brain?

A very small number of people have a deep understanding of a modern computer: the microprocessor with multiple out-of-order execution flows, optimizing compilers, an operating system like Linux with dozens of critical processes coordinated by a scheduler, interacting with device drivers, data, etc. And that’s just a couple of levels of abstraction… then you get down to the physics of GMR disk drives and electron spin, 10nm FinFET semiconductors, GPUs… go up to the type of neural networks and big data that run Siri and Google… if that’s simple, more power to you!

and of course the brain is even more complex, with 125 trillion synapses…simple is indeed relative!

what does it mean to understand that ice is slippery? https://www.youtube.com/watch?v=wMFPe-DwULM

19 anon May 29, 2016 at 11:51 am

I take this (and the radio) article differently.

Computers are built up. They are created using human-comprehensible and human-communicable forms. If you work within the human paradigms with which they were constructed, then you can understand them, else they could not have been created and improved.

On the other hand, complex natural systems are unknown in their comprehensibility. Some may be beyond us. I think there is a new book on this.

Anyway, the 6502 and radio articles make a nice illustration of the fact that even relatively simple human-constructed systems are hard to solve when, as in these exercises, they are treated as black boxes, without known internal design.

20 Troll me May 29, 2016 at 12:29 pm

You don’t have to fully model the brain to draw most associations about things. Monitoring changes in electric potentials, plus Fourier transforms, and they can decode images, words in your mind, more abstract associations, and more, after training a machine learning algorithm on a set of established control signals.
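
For what it's worth, the generic pipeline gestured at here (spectral features plus a trained classifier) is real and can be sketched on synthetic data. The code below demonstrates only the pipeline, not the comment's stronger claims about decoding thoughts; all data, labels, and band choices are illustrative:

```python
# Band-power features from each "trial," then a classifier trained on
# labeled controls. Class 1 trials carry extra 10 Hz (alpha-band) power.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
fs = 250
t = np.arange(0, 1, 1 / fs)

def band_power(trial):
    spectrum = np.abs(np.fft.rfft(trial))
    freqs = np.fft.rfftfreq(len(trial), 1 / fs)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]   # delta/theta/alpha/beta
    return [spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

X, y = [], []
for label in (0, 1):
    for _ in range(50):
        trial = rng.standard_normal(len(t)) + label * np.sin(2 * np.pi * 10 * t)
        X.append(band_power(trial))
        y.append(label)

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))   # near 1.0 on this toy data; real signals are noisier
```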

21 Ray Lopez May 29, 2016 at 10:49 am

The 6502 was also a good chip for certain early dedicated chess computers. The Mephisto Polgar 10 MHz model, which played at the Expert level (over 2000 Elo), had such a chip as its brain.

22 Anonymous coward May 29, 2016 at 11:11 am

Jesus. Speaking as a programmer with some knowledge of biology (I have the early 2000s edition of “The Cell” under my belt) that’s a super funny paper! The radio paper is funny too. In contrast, the aeon essay is silly. For example, go read about the two core systems of number: we do have circuitry for detecting distinct objects and counting them in two ways, accurate counts of small quantities (up to high single digits in humans) and approximate, logarithmic counts of large quantities. When I think of the old masters’ experiments in neurophysiology (e.g. the one which discovered the exact neural network that runs the cat’s ear-scratching once it starts) I feel they were on the right track.

23 Mark Thorson May 29, 2016 at 11:17 am

My undergraduate degree is in neurobiology, but I’ve spent my entire career in digital electronics and computer software — and it bothers me when people try to compare computers and brains. A brain is in some sense a data-processing machine, but the analogy ends there. Human-built data-processing machines do not work in any way similar to brains.

Computers evolved out of earlier technology going back to 19th century desk calculators functionally similar to the cheapest calculators you can buy today (no divide or square root, though). Calculators are like the complement of what the brain does — they do what the brain can’t do. Let’s take the notion of registers. A computer or calculator register stores a number. You can put a number in a register, then read it out.

Consider the problem of applying the Pythagorean Theorem when you know the horizontal distance is 23 and the vertical distance is 17. No problem on a modern calculator — square the first number and store it, square the second number and add it to the stored register, then take the square root of the sum.
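
In code, that calculator workflow is just one memory register being stored into and recalled, a toy sketch:

```python
# Toy model of the store/recall workflow: a single memory register, as on
# a cheap desk calculator ("M+" twice, then square root of the total).
import math

class Calculator:
    def __init__(self):
        self.memory = 0.0          # the stored register

    def memory_add(self, value):   # "M+"
        self.memory += value

calc = Calculator()
calc.memory_add(23 ** 2)           # square the first number, store it
calc.memory_add(17 ** 2)           # square the second, add to the register
print(math.sqrt(calc.memory))      # ~28.6
```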

Now try it without a calculator. You’d square the first number and write it down, square the second number and write it down, calculate the sum and write it down, and then you’d probably successively approximate the square root.

Now try it without paper. That’s tough. Some people can do that entirely mentally — I might barely be able to do it — but it requires remembering several intermediate sums to do both multiplications and remembering the first product so that you can add it to the second product. I’d say this is evidence the brain does not contain registers. If it did, this would be easy. The brain may contain other structures that can with difficulty be recruited to act as registers, but it does not contain registers. (I’d note that when von Neumann invented the stored-program method of programming ENIAC, he used the function tables for the stored program memory — tables intended by the designers of ENIAC for mapping non-linear functions, a completely different purpose.)

Humans invented calculators to automate some of the work which was done by hand on paper. The paper methods were used to handle problems too complex for the capabilities of the brain alone. Calculators and computers do what the brain can’t do, and that’s why they were invented. AI is trying to take this anti-brain and turn it into a brain. That might work and their progress has been impressive, but there are other routes which may have some advantages. This approach at least has the advantage that humans have a thorough understanding of how the anti-brain works.

24 Bill May 29, 2016 at 12:00 pm

+1 And, the anti-brain doesn’t have emotions, unlike the real brain, which makes you question how a program devised by the anti-brain, as a solution to a human problem, would work out in the real world. Sort of like economics (Econ man) meeting behavioral economics (homo sapiens).

Maybe the next dystopia will be a world perfectly designed by the anti-brain, in which humans with emotions are forced to live. Other than rational economists, most people, I imagine, would find this frightening.

25 Mark Thorson May 29, 2016 at 12:44 pm

On the other hand, there is another view that only anti-brains have emotions. Humans are just faking it.

http://smbc-comics.com/comics/1464447768-20160528.png

26 Bill May 29, 2016 at 1:21 pm

+1 You’re on a roll.

27 Moshe Vardi May 29, 2016 at 12:09 pm

See “The Only Way is Up: On A Tower of Abstractions for Biology” at
http://www.cs.rice.edu/~vardi/papers/fm11a.pdf

28 Mc May 29, 2016 at 1:35 pm

Too many variables to make reliable predictions, and, like Schrödinger’s cat, once you introduce that, your odds of answering the question just had a modifier introduced . . .

29 John Thacker May 29, 2016 at 6:13 pm

Great paper, thanks for bringing it to my attention. It may well be the case that some underlying assumptions justify the reasoning in the case of neurobiology but not with arcade machines, but it’s good to explore and make those assumptions explicit.

I am reminded of arguments about how using the logic of certain fields, one might argue that humans turn female as they get older (explaining the sex ratio shift) or, noting that most punts occur on fourth down, that eliminating fourth down would reduce punting.

30 Matt May 29, 2016 at 9:13 pm

Thanks for sharing such a nice idea; the paragraph is pleasant, that’s why I have read it entirely.

31 dux.ie May 30, 2016 at 10:28 pm

Re: there is a Donkey Kong transistor or a Space Invaders transistor

There is a ‘Jennifer Aniston neuron’, a ‘Halle Berry neuron’, …

https://www.newscientist.com/article/dn7567-why-your-brain-has-a-jennifer-aniston-cell/

32 Jim June 1, 2016 at 2:08 am

Ahaa, it’s a fastidious discussion about this paragraph at this place at this blog. I have read all that, so at this time I am also commenting here.
