Nurses complain about algorithms

In response [to the rise of diagnostic algorithms], NNU [National Nurses United] has launched a major campaign featuring radio ads from coast to coast, video, social media, legislation, rallies, and a call to the public to act, with a simple theme – “when it matters most, insist on a registered nurse.”  The ads were created by North Woods Advertising and produced by Fortaleza Films/Los Angeles.  Additional background can be found at

Here is the link.  Here is an MP3 of the ad.  Remarkable, do give it a listen.  It has numerous excellent lines such as “Algorithms are simple mathematical formulas that nobody understands.”

For the pointer I thank Eric Jonas.


With the exception of the word "simple", the quote is absolutely correct and goes to the heart of the AI debate.

Technologies that "nobody understands" have been widespread for decades, and we rely on them daily for things just as crucial as medical care. No single individual understands how all of the systems in a car or airplane work. Increasingly complex computer systems have unequivocally led to better safety despite this. The reason is the black box principle: how something works is irrelevant as long as we can test that it behaves correctly for all possible input.
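The black box principle is easiest to see when the input space is small enough to enumerate. A toy sketch (the function and its spec are invented for illustration, not from the thread): we can certify a saturating 8-bit adder without ever looking inside it, simply by checking every possible input against a specification.

```python
# Black-box check: verify saturating 8-bit addition against a simple spec
# for *every* possible input, without inspecting the implementation.
def saturating_add_u8(a: int, b: int) -> int:
    # Implementation under test (a stand-in; could be anything).
    return min(a + b, 255)

# Exhaustive test over the full input domain: 256 * 256 cases.
for a in range(256):
    for b in range(256):
        expected = a + b if a + b <= 255 else 255
        assert saturating_add_u8(a, b) == expected
```

The later objection in the thread is exactly that medicine's input space cannot be enumerated or specified this way, which is where the principle gets strained.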

Perhaps the frontiers of AI do have some risks that are a bit different: a very advanced AI with traits like creativity might behave in unexpected ways, and the domain in which it acts might be too broad to test effectively. However, current and near-future medical decision algorithms have nothing to do with this. They are low-level, fairly simple models that produce predictable output for a given set of inputs.

I think you are wrong. Two reasons: first, the black box principle only works when we can specify (a) what input is possible and (b) what output is correct. This is easy with relatively simple systems such as those created by engineering -- they could not be designed otherwise -- but becomes progressively more difficult when we move to complicated stuff like medicine, cooking, politics or personal assistance. Second, the designation "nobody understands" hides quantitative differences so large as to become qualitative. I may not understand all the details of how a car works, but I know the general idea and the underlying physical principles, and an engineer will have no trouble (in principle) "opening" the black boxes for me by explaining the details. On the other hand, consider the Monte-Carlo tree search Go-playing programs recently discussed here. Neither the programs nor their designers can provide an intelligible explanation of the reason this or that move was chosen or rejected, nor can they teach you to play better. The black box is truly a black box: you cannot look inside, because there's nothing inside. There is no "why", it's just the way numbers have fallen. You might as well consult ox entrails.

1) You overestimate the complexity of these medical algorithms. They are designed to answer specific, limited questions, not be comprehensive virtual doctors. The problem is not broader or more complex than things within AI's proven capabilities.

2) You underestimate the complexity of proven AI applications. Flying a plane is an incredibly complex task involving a wide variety of inputs and dramatic variability due to things like weather. The same kinds of arguments made against medical algorithms could be made about computers' ability to perform this task--except that it is already proven that computers can fly, including flying planes that no human is able to fly.

3) Suppose a Monte-Carlo tree search diagnosis algorithm were 99% accurate, while a human doctor were 95% accurate? Should we prefer the human doctor because we don't "know what's inside"? Why? (And do we really "know what's inside" the doctor, anyway?) The black-box principle is stronger than you claim.

1) Maybe I do, but if they are so simple, why do we need computers for them? Because we can no longer teach people properly? Maybe that's what we should work at instead? (No, MOOCs are not the answer.)

2) You overestimate the complexity of flying a plane on autopilot. I happen to know a little about how autopilots work in some surface-to-air missiles, and it's just not a very complicated piece of equipment. It's relatively easy to fly a thing that was specifically designed to fly, to be flyable and pilotable -- designed by humans, I should add. On the other hand, humans were not specifically designed to be curable. Compare the difficulty of circumscribing the "healthy" states of a human with the ease of defining a route for an autopilot to follow.

3) It's a difficult question, but we would still need doctors at least because we cannot learn anything from the algorithm -- since its calculations are not intelligible, one can only observe its results. I recall a recent item here that Toyota is reintroducing human workers in its factories for exactly this reason. Also see point 1 above.

The old Luddites used to object to the replacement of the labor functions of the human body with machines. Now we have largely replaced those and aim at replacing the functions of the mind. The body we share with the animals, so its replacement is less dangerous; but mind we (mostly) don't share with anybody else as far as we know. Suppose we replace it too; what then? Do you really want to be a meat co-processor to your i-phone? Do you think being one, or enabling people to be one, is a worthy aim? Why?

To add to my point (1) above, maybe we should also figure out how to avoid having so many teachable people playing bureaucratic tag around compliance-rulemaking-enforcement. I seem to remember that education administrators, who have multiplied and multiplied to the point that there are almost as many if not more of them than actual teachers, are paid rather more than teachers, and have more status to boot.

Anon coward said: "You overestimate the complexity of flying a plane on autopilot. I happen to know a little how autopilots work in some surface-to-air missiles, and it’s just not a very complicated piece of equipment. It’s relatively easy to fly a thing that was specifically designed to fly, to be flyable and pilotable — designed by humans, I should add."

You are wrong. (I do this kind of work for a living.) Take, for example, the B-2. If there were a direct connection between the control stick and the control surfaces, the plane would be unflyable by a human - it is so unstable that if the flight computers cut out for even a tenth of a second, it would crash. No human has the reaction time or the ability to sense the state of the aircraft sufficiently to control an airplane like the B-2 without massive computer help. The F-35 is the same way; it is designed to survive loss of some primary flight control surfaces and continue to be flyable. For that configuration, a human simply can't do it.

And the notion that we can get 100% coverage on our testing of such systems is ludicrous too. I try not to fly Airbus because I know enough about how they design and test their systems to feel less comfortable in their aircraft than Boeing aircraft. (I do not work for either firm, nor any of their subcontractors.)

The B-2 and F-35 are combat aircraft; they must be very unstable to achieve high maneuverability in a fight, mustn't they? With such design requirements they are, as you say, not designed to be flyable by humans without computer help, so what I said does not really apply. Are civilian aircraft designed that way too, though? A 787 hardly needs the maneuverability of a fighter jet, and it does not need to worry about being shot at. A SAM is again a different beast, but the Soviets used to get decent performance out of partly mechanical missile autopilots. I suspect SAM autopilots have it easier because their maximum allowable acceleration is far larger than a fighter jet's, which must after all keep the pilot unsquashed.

> And the notion that we can get 100% coverage on our testing of such systems is ludicrous too.

Indeed. But I don't understand whether you mean this to be an argument for or against autopilots.

> Maybe I do, but if they are so simple, why do we need computers for them?

Because doctors act under perverse incentives and so refuse to practice evidence based medicine.

@almount: do you saw off a leg and replace it with a prosthesis if a shoe doesn't fit?
@cthulhu: I forgot one additional point. I may be wrong again, but isn't human pilots' relatively slow reaction time the largest problem necessitating computers in fighter planes? The human body contains hundreds of actuators and thousands of different kinds of sensors, and e.g. an Olympic skater uses most of them very efficiently. We have even been known to function with the loss of some control surfaces.

@Anonymous coward May 26, 2014 at 10:10 am
>Maybe I do, but if they are so simple, why do we need computers for them? Because we can no longer teach people properly? Maybe that’s what we should work at instead? (No, MOOCs are not the answer.)

I could teach someone how to compile Java source code into Java byte code using a big book of rules. It wouldn't be very hard work. But it would be very boring work, inhumanly so, and very error prone. Instead, I compile code with a computer, which does the work quickly and reliably and does not seem to suffer thereby.

This is an extreme case, but often the requirement is not that a task be performed with understanding, but that it be performed with an absolute, stultifying reliability. In fact, the stultifying nature of such work is the most dangerous thing about it, for boredom can lead to not only basic mechanical error but creatively generated new kinds of failure.

Thus I would much rather have, say, a mechanical/electronic ventilator than trust a series of RNs to be able to pump air into my chest reliably for hours or days, contra NNU in what is certainly an example of a situation where "it matters most".

If ox entrails produced error rates lower than (or comparable to) trained medical professionals', why wouldn't I prefer the entrails? Sounds like we should all be willing to have that ox gored.

— they could not be designed otherwise — but becomes progressively more difficult when we move to complicated stuff like medicine, cooking, politics or personal assistance.

You don't know what output is correct in medicine and cooking?
My sympathies.

I may not understand all the details of how a car works, but I know the general idea and the underlying physical principles, and an engineer will have no trouble (in principle) “opening” the black boxes for me by explaining the details.

I don't know about cars, but if you think like this, I recommend never talking to a civil engineer if you ever want to sleep inside again.

I see no reason why an algorithm should not be able to justify its decision once it has made it. It would not be any more opaque than a registered nurse's intuition.

If its decision is wrong, whom do you sue? A couple of well-targeted lawsuits will put a knowledge software company out of business. There seem to be only two alternatives: the patient bears full responsibility, or the government picks up residual risk. Hopefully you see that both are bad.

Alexey was commenting on one specific critique of algorithms. Your reply seems to be off-topic but still: you could say the same about hospitals. Also, heard about medical malpractice insurance?

Yeah, I guess I was being facetious. Still, I wonder how they lived without medical malpractice before the sixties. Nasty, brutish and short? Nature red in tooth and claw? Hm.

I certainly do not support the extent of the current malpractice liability. Sadly, this will probably be one of the barriers in diagnostic algorithms adoption but I hope it will not stop them.

Umm...medical devices that can malfunction already exist.

But if its decision is wrong, its priors get updated and the software improves. Humans are way worse at admitting error and changing their behavior. A Bayesian diagnoser should over time be asymptotic towards zero error.

> Humans are way worse at admitting error and changing their behavior.
Then that's what we should be working at. As I said above, if we replace the mind, what is left? The gonads?
> A Bayesian diagnoser should over time be asymptotic towards zero error.
Suppose I build a Bayesian diagnoser of coin flips and give it a biased coin to test. Over time its priors may converge to the true bias of the coin, but it will never reach zero error.

>asymptotic towards zero error

>converge...but it will never reach zero error

those are equivalent statements

No, they aren't. (1/2)^n converges to zero and is thus asymptotic towards zero. 0.1+(1/2)^n converges, but not towards zero, and is thus not asymptotic to zero. It is asymptotic to 0.1.

You're conflating zero error with zero the number. Different things.
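The biased-coin point can be made concrete with a small simulation. This is a sketch, with a simple Beta-Bernoulli model standing in for the "Bayesian diagnoser" (the coin bias of 0.7 is an arbitrary choice for illustration): the posterior estimate converges to the true bias, but the prediction error settles at min(p, 1-p), not zero.

```python
import random

def simulate(p_true=0.7, n_flips=10_000, seed=0):
    """Beta-Bernoulli 'diagnoser': predict each flip from the posterior mean."""
    rng = random.Random(seed)
    heads, tails = 1, 1              # Beta(1,1) prior pseudo-counts
    errors = 0
    for _ in range(n_flips):
        posterior_mean = heads / (heads + tails)
        prediction = posterior_mean > 0.5    # predict "heads" iff it looks more likely
        outcome = rng.random() < p_true      # actual biased flip
        errors += (prediction != outcome)
        # Bayesian update: add the observation to the pseudo-counts.
        heads += outcome
        tails += (not outcome)
    return heads / (heads + tails), errors / n_flips

estimate, error_rate = simulate()
# estimate converges to ~0.7 (the true bias),
# while error_rate approaches min(p, 1-p) = 0.3, not zero.
```

So the learner's *estimate* is asymptotic to the truth, but its *error rate* is asymptotic to the irreducible noise floor, which is the distinction being argued about above.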

1. Most algorithms implemented in commercial systems do not learn/update. The learning is done offline and the resulting algorithms are hard-wired into the system.

2. In most realistic circumstances it is not possible to have a machine or a human perform with zero error. There are many reasons for that: for example, the inputs are noisy, the information in the input is sometimes not sufficient, and the outcome could be truly ambiguous.

Use FLOSS software, and the liability is put back onto the hospital.

From what I know of large scale neural networks, it can be (depending on the number of factors involved) a real trial to try and determine exactly what factors and what weighting were eventually used to obtain the output from the input.

Sometimes things really are complicated, and there's no way around that.

However, for something like a diagnostic system, I imagine it's relatively simple. It's when you have 1,000 or 10,000 inputs that could be interacting with each other that things (as I understand it) get hairy.

The nurse is a neural network too, you know. She has no idea how exactly she arrives at any of her decisions, but she should be able to justify them post factum. Same way, an algorithm could use all kinds of complicated machinery to arrive at an answer, but then do a sanity check on it.

> then do a sanity check on it

How? Run another complicated algorithm? You're going in circles here.

I don't know how facial recognition algos work but they seem to do a pretty good job.

This pushback from occupations is going to get more frequent and intense across industries.

By the way, California just announced they will start issuing drivers licenses to robot cars in September. Taxi and truck drivers hopefully are preparing themselves.

I hope AI researchers concentrate on Krugman bullshit generators next. Should not be too difficult to put the chattering classes out of work.

No, Krugman is safe. Lots of people could generate similar output, but they aren't Krugman. In a sense, Krugman is a brand, and no AI is going to put that brand out of business.

Krugman could commission the KBG, and use it to maintain his income while expressing an increased preference for leisure.

How do we know he hasn't already? :)

Nurses should benefit from diagnostic algorithms, as they will allow RNs to take on more responsibilities that have traditionally belonged to doctors.

It's the other way around: automation most devalues the patience to perform routine work and dispense routine judgment. Doctors are mostly immune to the challenge of automated diagnostic heuristics, precisely because nurses have already annexed the most routine elements from them.

Let me guess. Every algorithm ends with "consult your doctor."

The fear of algorithms extends to the legal world. Patent attorneys, I am told, don't like to use the word "algorithm" when drafting patents since the law technically does not allow you to patent an algorithm per se.

Good discussion of that here:

In the U.S., anyway, you can patent a pure algorithm in all but name--the LZW compression algorithm that lies behind .gif files being a notorious example.

If the diagnosis is based on comparing a set of symptoms to the symptoms of previous patients, then it sounds much more like a 'heuristic' (e.g. a Bayesian match) than an 'algorithm'. Algorithms are deterministic and guaranteed to work step by step until they reach an exactly correct solution (see The Art of Computer Programming by Donald Knuth).

Most often they are a straightforward mathematical model that produces a given output for a specific set of inputs. A simplified example might look something like this:

Odds of having disease = .2 * having this symptom + .10 * number of medications ^ 2 - .3 * patient is a non-smoker

Of course there are more complex approaches being explored, but in my experience the stuff that is in use or close to being used would be deterministic stuff like that.
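The toy formula above translates directly into code. This is purely illustrative, using the made-up coefficients from the comment's example, not a real clinical model:

```python
# Toy risk score with the comment's invented coefficients.
# Booleans participate in arithmetic as 0/1, mirroring the formula as written.
def disease_odds(has_symptom: bool, num_medications: int, non_smoker: bool) -> float:
    return (0.2 * has_symptom
            + 0.10 * num_medications ** 2
            - 0.3 * non_smoker)

disease_odds(True, 2, False)   # 0.2 + 0.10*4 - 0 = 0.6
```

The point stands out in the code: the model is deterministic, so the same inputs always yield the same score, which is what makes this kind of system testable in the black-box sense.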

"Algorithms are deterministic and guaranteed to work step by step until they reach an exactly correct solution (see The Art of Computer Programming by Donald Knuth)."

I believe Knuth said an algorithm must be "definite" (steps are precisely defined), not "deterministic." So shuffling a deck of cards is Algorithm P in his book.

Even setting aside the highly curious claim that algorithms always "reach an exactly correct solution", it's unclear what dividing line you mean to imply between algorithm and heuristic. Is a procedure for calculating a Bayesian update to an estimated probability an algorithm or a heuristic?

Do nurses perform diagnosis?

I hope not. When my wife was in labor with our first child, I asked one of the nurses about the oxygen mask in the room (which thankfully no one at my baby's delivery needed). "What percentage of oxygen can you give through this mask?" I asked. "Oh it's pure oxygen," said one of the RNs.

But you're asking a question that's not that relevant; the relevant question for the nurse is "is this the oxygen I'm supposed to give to delivering mothers" and "is the machine working". Maybe "how do I reload the machine correctly" also.

Unless it had a mixer attached, the nurse was correct. Wall O2 comes out at (about) 100%.


Pure oxygen, for your unadulterated child.

My personal experience is that Google plus a doctor's IQ does about as well as a doctor.

I'm not an expert on oxygen masks, but isn't that right? At least, within the margin of measurement? I understand that what the patient breathes is a mix of oxygen from the mask and ordinary air, but what the mask delivers I thought was pure oxygen.

I find it amazing that doctors are not already using computer assisted diagnostics.

With the vast amount of medical research results, no doctor can even stay up to date on their own section of medical research very well, not to mention all the other stuff.

I agree that doctors and nurses should be in charge of patient care (for now...), but that is no reason to tell people they should not be using a very inexpensive (code once, deploy everywhere) and powerful tool that is likely smarter and faster than the best doctor is.

This ad should be archived for a future Museum of Luddite Propaganda (which really should be a thing).

The Museum of Discovery in Little Rock has an exhibit on robots and labor. They had some good Luddite items on display.

Doctors work by algorithm already. When you come in for a checkup, there is a recommended procedure for history and physical and a recommended set of lab tests. When the lab tests come back, anything out of range is flagged. Everything is according to the "standard of care". There is relatively little room for creativity. You don't want to have to explain your creativity in a malpractice suit.

Flying has become a lot safer in the last 50 years because commercial pilots follow the book. Private pilots have more freedom to be creative and to kill themselves.

+ 1.

Atul Gawande on the subject:

The hard part is sifting through all the bad research and outright fraud. Let's see: beta blockers, are they good for you peri-op or do they kill you? What is better, an MI or a stroke? Hetastarch for volume replacement, safe? Oh yeah, lots of that data was fabricated. Preemptive pain therapy: ditto.

Very good point against the apostles of evidence-based medicine in general, not only their AI subdivision.

@andrew' No, not as part of their job description.

Then what is this really about?

There's something missing from all this discussion of "symptoms in + other facts in => diagnosis out."

What's missing is how the symptoms get in. In 1967-68 I worked with Dr. Harald Leuba on a project for the Air Force titled "Development of a Symptom Reporting Language for Operator Reports of Aircraft Malfunctions." It was amazing how bad the Air Force pilots were at reporting how their aircraft had failed. How good do you suppose Mrs. Smith is at reporting her medical symptoms?

I suspect we are a long way from the day when a computer can apply what we call "intuition" to ask all the right questions to get the symptom reports clarified.

When you take your car to the dealer now, they care a lot more about what the computer says than your "It sounds a little funny sometimes" complaint. Doctors care more about lab results than your complaints.

This is an excellent point. Often the questions asked by the medical personnel are in terminology that's difficult for me to understand.

When I was a teenager with a southern accent I had stomach pain (turned out to be appendicitis). The Boston-raised doctor asked me "Do you have any pain in your heart?" but it sounded to me like "Do you have any pain in your hot?" I took this to be a euphemism for the genital area -- since the doctor looked embarrassed when I asked him to repeat it twice, and my mother was present. Luckily I had no pain in either area and accurately reported "no".

Yep, that's the real problem: Getting enough, quality inputs. There are relatively few things that we have sensors for, and while those things are very valuable, they are nowhere near enough to perform good diagnosis.

That said, these sensor issues are exacerbated by the fact that we are doing interviews with patients, and we lack a good baseline for any given patient. Imagine an automated system that you had at your own home, which took a bunch of your vitals often and in under a couple of minutes. That level of detail can tell us a whole lot more, but a doctor today would have little use for that kind of data: you really want a computer to analyze it, and eventually get rid of a whole lot of doctor guesswork altogether.

But a world where 23andMe is being told not to do analysis because the results aren't reliable enough is not a world where medical research advances quickly.

Irony overload: 1) the various nurses' unions are the first to declare they can do anything an MD can do, only better and cheaper, but all hell breaks loose if you suggest there are alternatives to RNs. 2) The RN hierarchy at most hospitals are the most rule-bound automatons you have ever seen, with a blind faith in the collection and use of bad data; they have no idea what GIGO means.

It is interesting to compare this to the HFT debate: computer geeks automate something to offer cheaper and better service for the customers => old professionals fight back, claiming that the algorithms are "black boxes" and nobody knows what is inside (much unlike experts' brains), that the new tools are more expensive, and that the new tools provide inferior service. One big difference is that nurses will still be a much needed and respected profession after more widespread adoption of automated diagnostic tools. They certainly have less to worry about than taxicab drivers.

Also, did anybody else find the video clip stereotyping the computer programmer as an unempathetic stoned asshole pretty offensive? And the hospital manager was ridiculed for caring about medical costs? Of course, the registered nurse was the only hero in the story (are all of them really like this?).

C'mon, it's an ad (and I suspect not professionally scripted, given how painful it was). Of course everything is a caricature.

I don't expect to see an even handed treatment in political ads, either.

Change: this is a situation where making everyone happy is simply impossible. In the present scenario, people are scared of the change and angry at the mean bean counters who push the introduction of the technology. Imagine another scenario where the technology is used only for rich patients, it proves useful, and it is the year 2020. Then some people will complain about why the technology is not available to the poor: inequality!

The plebs now see the bedside computer as low status; let it become high status and they will be begging for it. In conclusion, it looks like a marketing failure by the people introducing the new technology.

Well it's true, we don't understand how many of the machine learning algorithms work. But I can attest from my failed social advances, that we also don't understand how nurses work.

But there are lots of people who do understand how nurses work, and you can take lessons from them. With algorithms, you can't say as much.

"IF you have touched a patient THEN wash your hands." Seems like an algorithm simple enough for every nurse with a four-year degree to follow every time, and yet...

Hello, robot nurse!

That's not really a problem of human nurses, though. I imagine that, in the army, they do manage somehow to teach everybody not to point loaded guns at one another.

President Obama accomplished something I could not. My wife is an RN and according to her peers over the past thirty years she is one of the best long-term care nurses in the country.

I have been trying to get her to retire, as her work takes a physical and emotional toll, not to mention the lousy schedule.

One month with the new EMR system (nicknamed " the Obama-puter") and she turned in her resignation. She is retiring.

And I quote:

"I can take getting smacked by Alzheimers patients, I can live with osteo-arthritis pain, I can take working every other holiday, I can deal with grieving families, I can put up with arrogant physicians and dumbass pharmacists, I can take a lot but I will not take orders from a dysfunctional computer system or computer criticism because I gave a 9:00p med at 9:02p."

The only problem now is that everywhere we go in the community someone is begging her to reconsider, not to mention the phone calls. Tuesday night it is over. Good for me, bad for the elderly.

I am glad your wife is no longer working. These dinosaur nurses who still want to chart with paper and do things their way and refuse to learn new technologies that could benefit their patient are part of the problem. The thing that gives nurses like your wife staying power is their ability to convince patients that their old world charm is the real snake oil that will cure them, but it's nurses like your wife that are the reason that medical errors and nosocomial infections are through the roof.

Obama didn't kill your wife's job, sir. Her failure to learn new skills did. Good riddance.

Oh, holy c***. I did go to their web site. Did you know that the racist algos are tuned to "mean white men in their forties" [citation needed]? You know, there are physiological differences between genders, races... oh, wait... is it the nurses who are racist? Anyway, forget about statistics (and maybe science in general?); everybody is different, so you need registered nurse(tm) intuition to figure out what is wrong with them.

I dunno about algorithms being complicated. Here is one you all use every day written in very bad but understandable pseudo-code:

Case 1:
If I arrive at the stop sign first
Then I go first

Case 2
If I arrive at the stop sign second
Then the other car goes first

Case 3
If I arrive at the stop at the same time as the other car AND I am the right most car
Then I go first

Case 4
If I arrive at the stop at the same time as the other guy AND I am the left most car
Then the other car goes first

Written for the two cars only. It gets slightly more complicated for three or four cars (and for the case when four cars arrive simultaneously, you would have to add a back-off algorithm, which is a fancy way of saying "rolling the dice to see who goes first.")

The problem is usually when you miss some eventuality and the program either freezes or returns garbage. The real-life analogy would be that you and the other car are coasting to a stop and look to arrive at the limit line at the same time. The other car's driver sees that you are decelerating enough to stop and, even though he is the left-most car, accelerates and leaves you sitting there like a chump. I hope this helps the non-Boolean (George Boole, the mathematician who unified logic and arithmetic; look him up) amongst you to understand the issue.

Are you sure they weren't just quoting the Insane Clown Posse?

Please don't let anyone explain to them that all their training is, essentially, algorithms.

"It has numerous excellent lines such as “Algorithms are simple mathematical formulas that nobody understands.”"

Except that an algorithm is far easier to understand than a nurse's intuition. Especially when the algorithm produces consistent results from identical inputs, which is not true for even the same nurse on different days (let alone different nurses).

The nurses are, I suppose, afraid their jobs will be de-skilled. But the protest is still wrong-headed (and all the more so in that it's based on what nurses want for nurses and not on what's best for patients).

Algorithms will sometimes be wrong, so? That's only an argument if you can find MDs or RNs who are never wrong.

As with many branches of science, overall complexity is such that no individual can keep all relevant factors in their working memories. The ability of computers to search vast databases in minutes will inevitably make computer-assisted diagnosis and patient assessment superior to diagnoses and assessments made without this assist.

And, umm, how many patients already "diagnose" or "assess" themselves by consulting "Dr Google" before even seeing a medical professional?

Hate to be contrarian, but here's an uncomfortable fact. Predictive algorithms are (rightfully) less trusted than trained nurses. Moreover, a hospital's liability for using software instead of an RN is much, much higher. As a result hospitals are very resistant to adopting diagnostic technology.
However, the uncomfortable truth is that diagnostic technology is outperforming RNs in certain areas (and badly underperforming them in others). Adoption of diagnostic tech follows this trend: as it outperforms RNs, it becomes a diagnostic tool.
It's a fair complaint that some hospitals are going too far and adopting tech that is not ready for the clinic, largely to cut costs. But it's incorrect to assume that the accuracy of RNs is the gold standard. In reality, good RNs will leverage diagnostics effectively to provide better care, while poor performers will be exposed. The latter are the likely drivers of these campaigns, which should be taken with a grain of salt.

With any algorithm, the question is going to be at what point you allow clinical judgement to override the algorithm. That point must exist. We do not understand medicine well enough to turn it over to primitive machine intelligence. We can't convey to the machine all the inputs which might be important, and we ourselves don't always know how we make decisions. We have our own "black box" of clinical judgement and expertise, informed by evidence-based medicine, and it works pretty well.

Nevertheless, algorithms are valuable and I use them in my practice every day. The ABCD2 score for stroke risk in TIAs. The San Francisco Syncope Rule. The PERC criteria for pulmonary embolism. And so on.

And, as others have pointed out above, a lot of what could be called "algorithms" can also be called basic professional competence. A patient with altered mental status needs a point of care glucose, STAT. Drunks who fall need head CTs. And so on.

I favor a principle I like to call the "laziness catcher." An algorithm or protocol should defer to professional judgement, but require those exercising their professional judgement to override the algorithm to do a little extra work (call the pharmacy, answer some questions in the EMR, write an email to QA.)

The lazy doctors and nurses will not want to take extra time and trouble and will follow the protocol. Which is what you want them to do. The more conscientious and opinionated caregivers will do a little extra work when they feel the algorithm is not delivering the best care, which is also what a good system should want to happen.

The nurse diagnoses the patient with an arrhythmia, but did you notice that the telemetry monitor was keeping a perfectly steady beat?
