Artificial Intelligence is Alien Intelligence

Imagine if an alien came to earth and told us some new scientific fact that no human had ever known. Artificial intelligence is starting to do just that. Computers and AI have long given us solutions to problems that humans could not have worked out for themselves, but AI is going beyond optimization to tell us facts about the world that no one suspected. Eric Topol on Twitter points us to a paper in Nature that used deep learning to analyze retinal images to predict heart disease. It's long been known that this can be done, which is one reason why ophthalmologists take a close look at your retinas when fitting lenses, but not surprisingly the AI can see more than ophthalmologists can. What was surprising, however, was that the AI could also tell gender from retinal images, a fact no one had ever previously considered! As a summary notes:

…that information in a retinal image can be used for the prediction of a person's gender is surprising and puzzling. This underscores the potential of artificial intelligence to revolutionize the way medicine is practiced and to help discover hidden associations.


I am confident that AI will uncover a lot of Hate Facts in both biology and sociology.

I am really looking forward to it.

AI-based retinal scanners on public restroom doors.

Up yours, Nancy Pelosi.

AI-based rectal scanners on public restroom doors.


Pelosi Derangement Syndrome is a brain eating disease, isn't it?

100% agree. It will be nice to move from "Feel Facts" to "Actual Fucking Facts" by removing all the human emotion that is such a part of our Current Year.

Can you give an example of biological hate facts? What does that entail?

It refers to facts that are related to stereotypes or biases. Like if AI demonstrated that skin melanin content were correlated with measures of intelligence in some way, that would probably be a "hate fact".

What's hilarious is the notion that gender is so obvious, so self-evidently hardwired into our natures that it is a perversion to trifle with it.

Yet...we need a sophisticated AI computer technology to spot it.

I don't have time to read the article, but this is not actually that new or surprising. Over a decade ago we noted the strong correlation between retina characteristics and sex in the literature:

Nor is it surprising. It has long been known that estrogen and other sex markers have differential effects on blood vessels. Changes in the diameter, lucency, pressure, and even color of vessels have been observed. Correlating these variables to accurately predict gender is, at worst, computationally intensive.

Spotty gender differences are actually pretty easy to find. We have strong feedback loops that push towards two clusters. Given that these differences go down to the cellular level with metabolic differences, finding them is just a matter of number crunching.

Flagged for transphobia and hate speech.

Gender is a social construct.

While this was obviously just a flippant remark, the best scientific understanding for the whole trans phenomenon is that they have these sort of subtle characteristics of the opposite sex, suggesting there's a biological process behind it.

Citation, por favor?

Flippant and probably borderline impersonation.

FWIW this anonymous would only note that ML is a pattern-finder, and while very good at that, far from an AGI.

Still, going from "an image" to "an answer" is right up ML's alley. And I'm sure there are many more applications to be found.

Nature, the International Journal of Science, probably knows more science than you:

"US proposal for defining gender has no basis in science"

Not if that article is any evidence.

Sex is a biological characteristic determined by gonads; somehow this political screed had nothing to say at all about gonads. The truth is that at a gonadal level, it is extremely hard to find anything that shows anything resembling a "continuum". You either have male gonadal function (or evidence that you once had it) xor female gonadal function (or evidence that you lacked male gonadal function).

And I know it's bogus because they include things like Complete Androgen Insensitivity Syndrome yet both fail to name it and fail to describe in any real detail what it does. CAIS means that your Y chromosome produces gene products which produce the common male differentiating signals - AMH and testosterone. The former does its normal function (obliterating the internal female reproductive structures, maturing the male ones) and the latter fails at its (driving external sexual differentiation, among other things). Individuals are quite literally feminized for external primary sexual characteristics, but have male gonads. As the feedback mechanisms on androgen production are also non-functional, they aromatize more androgens into estrogens until they reach typical female levels or higher.

This results in individuals who are phenotypically female. And they are actually more female, by the numbers, than your average XX female. They have higher rates of sexual attraction to men, lower rates of transgender identity, and report enjoying more cultural expectations of women. Physically, well they have such low effective testosterone responses that they tend to be on the upper end of the female curves for most dimorphic curves.

CAIS is somewhat common (as these conditions go), but they are not a population that is gender indeterminate. As a population, they have a more strongly defined (female) gender than the general population.

And this is the real truth. Our bodies tend to end up clustering a bunch of values in certain dimorphic distributions. CAIS, CAH, 5ARD, Turner, Swyer, MRKH, Klinefelter, and even the obnoxiously rare stuff do not actually have difficulties with discerning gonadal sex. Patients with these disorders of sexual development rarely report gender dysphoria and rarely wish to be treated as anything other than "male" xor "female". Certainly from a medical perspective we can easily bin folks into "male" or "female" for treatment purposes.

But why bin at all? Well because I dislike having patients die. Males respond differently than females and with remarkable ease I can take any of the conditions above and with minimal clinical information figure out what normal should be - will their cells have a metabolic profile of men or women? Does this serum level of a marker indicate cancer or normal development? How will they respond to this dose of medication? All of those differ in some instances between males and females. All of the disorders of sexual development still let me bin and predict outcomes.

When this whole thing became a political issue a few years ago, I took a deep dive into the literature. I found all manner of fun DSDs (streak gonads, twin fusions, genetic mosaics). I found zero individuals, even in the case reports, who report truly indeterminate values for all the male and female outcomes. The closest to a true "intermediate" was a mosaic individual with a non-functional (Sertoli-cell-only) testis on one side and a functional ovary on the other (with an abnormal uterus present in the scrotum). Even in this mosaic individual, we still observed a dominant gonadal function and hence a dominant sex that resulted in heavy clustering of multiple lab values in the female range.

Yes, chromosomes do not always match gonads, and yes, gonads may not match external sexual characteristics ... but gonadal function is remarkably good at predicting so many things (like serum protein marker levels) that we can bin people on it far more reliably than we can on things like eye color or number of fingers.

Ultimately if biological sex is not dichotomous, then nothing else is either.

For every complex problem there is an answer that is clear, simple, and wrong. - H. L. Mencken.

I've certainly not the background you exhibit on this issue, but you do seem to be a bit confused. You *claim* that you can "bin" people (male xor female) and base treatment (one assumes pharmacological, but IDK) on that. If we assume that your "evidence" isn't simply anecdotal (that you do it, and it works), then please provide the references to the double-blind studies on these extreme (rare) cases with every single intervention you imply can be so simplified. I think you know better than to spout this kind of pseudo-scientific rubbish. Doctors have to work in the real world, but I suspect the "binning" you do is based more on YOUR need for having "a plan" than on any reliable statistics about outcomes vs treatment choice for the 2.5+ sigma tails of the population.

I guess what would be needed would be to demonstrate that, given a specific sexual gene or gonad based dysfunction, the people who "have" it (as if "having it" is binary, we both know it's not) can be put in a bin when they need a treatment which best practice dictates discriminate between treatments for male and female patients, AND then show that their outcomes are identical (statistically speaking) to the "normal" group they were binned with (identical in response, both negative and positive). Your post suggests in some cases that their behavior does not have the same distribution as the 'normal' population of that supposed gender. Whether they're more feminine or masculine is irrelevant: if their distribution is different then, news flash, it's not the same.

Simple clinical case. Patient has an AMH level of 20 ng/mL. How should I treat them?

Bin 1: they have functional testes. I look up the AMH levels in the male chart, they have normal levels and I have no reason to suspect gonadal cancer.

Bin 2: they have ovaries, ovotestes, or any other option that lacked male gonadal function. I look up the female table. This is high, by a factor of four. Could be something like PCOS or it could be something like a granulosa cell tumor. If it is the latter, we are almost certainly finding it at stage I.

So in my horridly "pseudoscientific" view, I should look at this patient, determine gonadal sex (normally trivial to do), and then either send the patient home or do a workup that rules out cancer.
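
To caricature the "determine the bin, then read the right chart" logic in code (with the loud caveat that every number below is invented for illustration and is NOT a clinical reference range):

```python
# Toy sketch of "bin by gonadal sex, then consult that bin's reference chart".
# All ranges are INVENTED for illustration; they are not clinical values.
AMH_RANGE_NG_ML = {
    "male":   (1.0, 25.0),   # hypothetical normal range with functional testes
    "female": (0.5, 5.0),    # hypothetical normal range otherwise
}

def flag_amh(gonadal_sex, amh_ng_ml):
    """Return a triage flag for an AMH level, given the chosen bin."""
    lo, hi = AMH_RANGE_NG_ML[gonadal_sex]
    if amh_ng_ml > hi:
        return "elevated: work up (e.g. rule out a granulosa cell tumor)"
    if amh_ng_ml < lo:
        return "low: further evaluation"
    return "within reference range"
```

On these made-up ranges, the same AMH of 20 ng/mL comes back "within reference range" in the male bin and "elevated" in the female bin, which is the commenter's point: the number alone is uninterpretable without the bin.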

And this is not isolated. Ambien, for instance, is twice as powerful in women. So how do I pick a good starting dose? I don't want too much, because then patients have an annoying habit of dying in car wrecks. I don't want too little, because insomnia is correlated with lots of terrible health outcomes. Or I could just use the empirically verified dosing guidelines and give people effective medication doses. But what does the FDA know, why should we trust their regressive dosing guidelines? Clearly I should not bin DSD patients and just leave them to suffer with enough Ambien to endanger 25% of them and too little to let another 25% sleep.

The truth is that for many, many variables of medical interest humans exhibit a bimodal distribution. This bimodality is so strong that the only regions of overlap are clear pathological states.

+1 to Sure.

Li: next time I go to the doctor with abdominal pains, I don't want him to ask if I could be pregnant.

Thank you for the informative comment! I was especially interested in the idea that "We have strong feedback loops that push towards two clusters." Could you elaborate on this/provide pointers to relevant reading?

The wording suggests that the AI predicts heart disease. It just found associations with known risk factors for heart disease.

Of these risk factors, it only did a good job with age, with a good AUC (almost 1). I suspect an ophthalmologist can tell you who's old or young based on their retina too.

Sex and actual cardiovascular events, not so much (AUC

File under: AI hype. Didn't we go through an AI hype cycle in 80s? This time it's different though, right?

You sound like those guys I used to troll at the GO forum on Usenet back in the 1990s, who said Go, unlike Chess, is mystical and cannot be fathomed by AI. I was however surprised how quickly Go was conquered by AI, I did not see that coming for at least a couple of decades.

Bonus trivia: the Quicksort algorithm by C.A.R. Hoare! How in the heck is it so Ferrari fast compared to bubble sort? That partition is magic!
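
Since the comment invokes Hoare's partition, here is a minimal Python sketch of quicksort built on it (an illustration of the scheme, not Hoare's original 1961 code). The speed comes from each partition splitting the remaining work roughly in half, for O(n log n) comparisons on average versus bubble sort's O(n^2):

```python
def partition(a, lo, hi):
    """Hoare partition: two indices run toward each other, swapping
    out-of-place elements; returns a split point j with everything
    <= pivot on the left of j (inclusive) and >= pivot on the right."""
    pivot = a[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]

def quicksort(a, lo=0, hi=None):
    """In-place quicksort; returns the list for convenience."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p)      # Hoare's split point belongs to the left half
        quicksort(a, p + 1, hi)
    return a
```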

I was on usenet back then, but not on the AI boards :). These days, I do medical research, and feel that AI, so far, has failed to live up to its hype when it comes to patient classification.

Based on your read of the article, what do you think of the performance of its AI? Do you think the headline of this post and the tweet accurately reflect the findings in the paper?

Games like go or chess are great places to train AIs, because the rules are limited and precise, so you can set up generative adversarial networks, run them on a server farm, and eventually your GANs will have the equivalent of a few million years' experience of constant play. Of course, if you take your exquisitely trained GAN and then try to apply it to a version of go with the rules tweaked slightly, then it'll be totally useless (this also happened with networks trained to play Breakout. Change the size or position of the paddle slightly and it's completely thrown off).

Last I checked, however, you couldn't run reality on a server farm, so your networks can only accumulate data at the speed the world moves at. It'll have plenty of uses, no doubt, but isn't going to be the world changer that people seem to think it will be. When was the last time you heard a confident prediction about self-driving cars being ready within 5 years?

I've heard all we need to do is remove all plastic bags from the environment, and make sure no one walks their bikes and AI will be good to go.

That's the great thing about life: It is up to you to actually get heart disease or not.

The state of AI right now seems to be machine learning, just having computers learn what we already know (or think we know). It is automated sorting with the categories programmed into the computers statistically rather than logically. As interesting to me as it is to you: ho hum. It also seems like a waste of both computing time and human time (unless the people making all the captcha-style decisions one by one on mechanical turk to build many of these training sets are using the task as a study break and it is a better alternative than stepping outside for a cigarette--people do have time to waste, after all).

The interesting frontier is where computers can highlight for us things we don't know in architecture or some other 'space' we are native to, rather than in data, and teach us those things. I'm thinking people who work with error correcting codes or cryptography might eventually develop gestalt spatial perceptions in higher than 5 dimensions through their work, for example. That would be cool. Art and cultural products might then disseminate it to gen pop.

ML just means (averages, not signifies) discrimination* (over many variables, yes). It will save you from the judgement of the above-average doctor (or whatever human role is being automated) as well as the below. It will have a normative effect where it is applied, and so sort of kill a field. I'm not alarmed by that, just noting it.

(This post was just red meat for the gladiators in the equality meme ring, not a tech post.)

But I don't work in computing; if I am displaying some profound ignorance, I'd welcome illumination.

*By discrimination I mean discernment, life's only purpose.

Exactly correct!

Remember how IBM's Watson was going to revolutionize medical diagnosis? What actually happened in the near decade since then is: not so much.

November 11, 2018

"Eko’s heart murmur detection algorithm outperformed four out of five cardiologists in recent clinical study."

"Self-taught artificial intelligence beats doctors at predicting heart attacks"
April 17, 2017

"All four AI methods performed significantly better than the ACC/AHA guidelines [in predicting heart attacks]. Using a statistic called AUC (in which a score of 1.0 signifies 100% accuracy), the ACC/AHA guidelines hit 0.728. The four new methods ranged from 0.745 to 0.764, Weng’s team reports this month in PLOS ONE. The best one—neural networks—correctly predicted 7.6% more events than the ACC/AHA method, and it raised 1.6% fewer false alarms."

The achievement may well be spectacular, but count me among the overhyped crowd. The ability to predict gender from retinal images is staggeringly unimpressive to me. I'm not ready to hand the world over to AI because it can do that. Now, if it could identify the gender-ambiguous, that would be something worth crowing about.

It classifies gender with an AUC of 0.7, which is pretty weak.

Are you looking at the right line? The gender AUC is 0.97. The smoking AUC is 0.71.

D'oh. You're right.
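
For readers following the AUC back-and-forth: AUC is just the probability that the model scores a randomly chosen positive case above a randomly chosen negative one, so 0.5 is coin-flipping and 1.0 is perfect ranking. A minimal stdlib sketch, with invented scores:

```python
def auc(scores, labels):
    """Probability a random positive outranks a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: a model that ranks every positive above every negative.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```

On this reading, the paper's 0.97 for gender means near-perfect ranking, while 0.71 (the smoking figure) means the model gets the ordering right only about 70% of the time.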

I for one welcome our new robotic overlords.

I watched I, Robot (the 2004 film starring Will Smith as Detective Del Spooner - don't you love the name) yesterday for the first time. I don't like robots. Like many women, they have a mind of their own. [That's a joke.] And why do they have to look so creepy? Shouldn't robots look like Robby the Robot (in the 1956 film Forbidden Planet), with flashing lights and spinning ears? What's amazing about the film is that Robby played himself.

"This underscores the potential of artificial intelligence to revolutionize the way medicine is practiced and to help discover hidden associations. "

The AI finds a correlation. The job of assessing if it's a real or spurious correlation is still human. "AI" is just a tool that enhances our human baseline sensorial and cognitive capabilities.

From the description of the design, they did proper validation. And their results mostly held or improved when they applied the model on a dataset of primarily Asian patients.

To your broader points, though:

"The job of assessing if it's a real or spurious correlation is still human." - And just about the only way to do that is to test on out-of-sample data (technically, you could also point out a confounding factor, but with models of such complexity that is more of a guess game). Spurious correlations tend to break down. The difficult question is: if an AI is able to consistently give you good predictions out of inputs that you believe should have no impact on the output, will you accept that is has found a (causal?) connection that is too diffuse for humans to discern, or will you keep chanting "Spurious correlation! It'll break any moment now."

Reproducibility of results is a requisite, albeit not enough to establish a causal relation.

AI-found relationships are something special: not as dumb as a spurious correlation, not as strong as a properly explained causal relation. Perhaps AI-found relations need a special treatment and name.

The problem with a "relationship too diffuse for humans to discern" is how to ensure the proper data is used as input. What should be in the data for the algorithm to be useful?

... this AI/Alien stuff is silly hype -- AI is just another useful tool developed by Earth-Humans.

Telescopes & microscopes long ago provided humans with "new scientific facts that no human had ever known".
No aliens or magic needed.

Wow - who could have imagined that one day telling who is a boy and who is a girl would be as easy as taking a picture of the inside of their eye and feeding that into a very powerful and expensive computer.

That will make my life so much more simple. No more awkward social interactions where I don't know if it's Mr. or Mrs.

But what about awkward interactions where I don't know if it's Ms. or Mrs.?

How long before it will be illegal for an obstetrician or nurse to observe the genitals of a newborn?

As any deep learning algorithm worth its salt well knows, the eyes truly are the windows to the soul.

Could sophisticated AI predict whether gender is related to ovarian cancer? Could AI also make medical and behavioral predictions to price health and auto insurance? Would there be any drawbacks?

This is a kind of data mining; where is the "intelligence"? They trained a computer to sweep data sets for correlations.

How might artificial intelligence handle artificial lysergic acid diethylamide?

What capabilities, meanwhile, might some nifty-keen quantum AI monstrosity acquire from the inside out? Whether in error or in fidelity to its programming, what if elimination of the human species became its categorical imperative once it determined its connectivity with hypersonic missile- and ICBM-launch protocols, et cetera?

Could AI one day blackmail all of us into becoming human beings, that is?

This question is totally irrelevant to the discussion. Nothing currently being described as AI is moving us closer to electronic consciousness.

My questions are not "totally irrelevant" merely for being anticipatory.

Alex is the one who posed the "AI-alien intelligence" metaphor here: germane to this discussion would be (and is) consideration of "the humanity" of artificial intelligence fabricated by folks like us. How might artificial intelligence account for the kinds of distortions of perception and cognition that humans prize when they ingest substances like LSD, cocaine, heroin, oxycodone, even lowly THC?

I have heard no one say. (All good wishes to AI devotees on conjuring "an LSD algorithm" for some quantum-AI-device-to-be.)

I don't even know whether researchers can reliably predict what capabilities a quantum-AI device would possess or be able to attain.

This is AI hype. Saying these are 'scientific facts' is really misleading. DNNs and other tools can find correlations in datasets that can't be captured with traditional statistical tools (think linear regressions). This is really useful in some cases and has tremendous economic value if you can do something like solve autonomous driving or replace medical diagnosticians. But the number of deep insights revealed is incredibly limited, and very little additional understanding of our world will come out of this work.

Very well put. Whenever I see an article about AI, I just mentally replace "AI" with "regression" and the magic disappears.

"Computers and AI have long given us solutions to problems that humans could not have worked out for themselves..."

That's like saying knives are able to figure out how to cut materials that humans were unable to. Actually, given that humans built computers and AI, humans were able to solve those problems. The solution was to build the computers.

This is more than a pedantic point about semantics. For some reason, we have a fetish for anthropomorphizing computers. The way to understand AI is that computer science methodologies abstract problems in a way that is broadly applicable across many domains, including apparently to the problem of scientific discovery. "Computers and AI" are not discovering biological facts; computer scientists are.

Is it an alien intelligence or just an application of the intelligence we already have? Humans don't naturally try to guess gender by looking at retinas because, well even if a woman is fully covered head to toe you usually can see at least her eyes. I could imagine, though, a lonely guy working the night shift at the library of retina images might make a game of guessing gender from retinas and get very good at it.

What I think is a Big Question: is there one Intelligence, or are there multiple types of Intelligences? If there's one, then AIs and humans are simply converging on the same thing with just different advantages and disadvantages based on our hardware and software.

But what if there are different intelligences possible, the same way there are different geometries? Then AI research could produce different types of intelligence, which could start exploring that diversity.

Just from reading the abstract it seems that the AI was able to identify risk factors that you could get just by interviewing the person, i.e. age, gender, whether the person was a smoker. I guess this verifies that the AI could identify risk factors, but not that it actually found one that humans didn't already know about. As the article notes it is still a potential.

" I guess this verifies that the AI could identify risk factors, but not that it actually found one that humans didn't already know about." - They gave it digitized images of retinas. It was able to predict the genders of the retina owners almost perfectly. This is indeed no useful in predicting CVD or identifying CVD risk factors, but is nonetheless an ability humans did not have before (although may have suspected a connection, as someone above pointed out).

I'm sure you guys know this, but let's contrast it so everyone is on board. What we now call an AGI, a thinking robot, might read medical textbooks, develop a theory, and test it on a database of images. That's not what happened here.

Someone *had* to suspect a linkage from image to some set of diagnoses, and *try* those thousands (or millions) of times with training sets for which the answers were already known.

(An ML engine trains with inputs and outputs. A black box.)

Then you take your ML and give it another data set, to see its rate of detection and false positive, to see if it would really be useful in a clinical setting.

"It was able to predict the genders of the retina owners almost perfectly."

An area under the ROC curve (AUC) of 0.7 is not anywhere close to perfect. I'm getting the number from the abstract. Where did you get the "almost perfectly" from?

Gender AUC = 0.97, as indicated in the abstract.

Correction: The paper is in "Nature Biomedical Engineering", which is not the journal "Nature". Nature has a moderate size series of these speciality journals (e.g. Nature Genetics, Nature Neuroscience etc).

I thought gender was an artificial social construct.

Can it really distinguish between demigirl, non-binary, genderfluid, neutrois and androgyne?

If so, great news. The first biological support for otherwise totally cargo-cultic gender "science".
