From the comments (Dan Hanson on ACA)

October 27, 2013 at 4:04 am in Law, Medicine, Web/Tech

Dan writes:

The front end technology is not the problem here. It would be nice if it was the problem, because web page scaling issues are known problems and relatively easy to solve.

The real problems are with the back end of the software. When you try to get a quote for health insurance, the system has to connect to computers at the IRS, the VA, Medicaid/CHIP, various state agencies, Treasury, and HHS. They also have to connect to all the health plan carriers to get pre-subsidy pricing. All of these queries receive data that is then fed into the online calculator to give you a price. If any of these queries fails, the whole transaction fails.
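
A minimal sketch (in Python, with made-up service names, latencies, and return values, not the real federal interfaces) of the all-or-nothing fan-out Dan is describing: a single quote request depends on every upstream query succeeding within its timeout, so one slow legacy system fails the whole transaction.

```python
import concurrent.futures
import random
import time

# Stand-ins for the upstream systems the quote depends on (IRS, Medicaid/CHIP,
# carrier pricing, ...). Everything here is illustrative only.
def query_irs(applicant):
    time.sleep(random.uniform(0.1, 2.0))   # simulate a slow legacy system
    return {"income_verified": True}

def query_medicaid(applicant):
    time.sleep(random.uniform(0.1, 0.4))
    return {"medicaid_eligible": False}

def query_carriers(applicant):
    time.sleep(random.uniform(0.1, 0.4))
    return {"pre_subsidy_premium": 310.0}

UPSTREAM_CHECKS = {
    "irs": query_irs,
    "medicaid_chip": query_medicaid,
    "carriers": query_carriers,
}

def get_quote(applicant, timeout_s=1.0):
    """All-or-nothing fan-out: if any upstream query fails or times out,
    the whole quote request fails -- the behavior described above."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, applicant) for name, fn in UPSTREAM_CHECKS.items()}
        for name, future in futures.items():
            try:
                results[name] = future.result(timeout=timeout_s)
            except Exception as exc:            # timeout or upstream error
                raise RuntimeError(f"quote failed: {name} did not respond") from exc
    return results

if __name__ == "__main__":
    try:
        print(get_quote({"applicant_id": "demo"}))
    except RuntimeError as err:
        print(err)
```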

Most of these systems are old legacy systems with their own unique data formats. Some have been around since the 1960s, and the people who wrote the code that runs on them are long gone. If one of these old crappy systems takes too long to respond, the transaction times out.

Amazingly, none of this was tested until a week or two before the rollout, and the tests failed. They released the web site to the public anyway – an act which would border on criminal negligence if it was done in the private sector and someone was harmed. Their load tests crashed the system with only 200 simultaneous transactions – a load that even the worst-written front-end software could easily handle.

When you even contemplate bringing an old legacy system into a large-scale web project, you should do load testing on that system as part of the feasibility process before you ever write a line of production code, because if those old servers can’t handle the load, your whole project is dead in the water if you are forced to rely on them. There are no easy fixes for the fact that a 30 year old mainframe can not handle thousands of simultaneous queries. And upgrading all the back-end systems is a bigger job than the web site itself. Some of those systems are still there because attempts to upgrade them failed in the past. Too much legacy software, too many other co-reliant systems, etc. So if they aren’t going to handle the job, you need a completely different design for your public portal.
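
As a concrete illustration of the kind of feasibility load test Dan is calling for, here is a minimal sketch against a hypothetical legacy endpoint (the URL is a placeholder, not a real system): ramp concurrency up and watch where error rates and latency fall off a cliff, before any production code depends on that back end.

```python
import concurrent.futures
import time
import urllib.request

# Hypothetical legacy endpoint; substitute the real back-end URL under test.
LEGACY_URL = "http://legacy.example.internal/eligibility-check"

def one_request():
    """Issue a single request and record success and latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(LEGACY_URL, timeout=30) as resp:
            resp.read()
        ok = True
    except Exception:
        ok = False
    return ok, time.monotonic() - start

def load_test(concurrency, total_requests):
    """Hit the endpoint with `concurrency` simultaneous workers and report
    the error rate and worst-case latency at that load level."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: one_request(), range(total_requests)))
    failures = sum(1 for ok, _ in results if not ok)
    worst = max(latency for _, latency in results)
    print(f"concurrency={concurrency}: {failures}/{total_requests} failed, "
          f"worst latency {worst:.1f}s")

if __name__ == "__main__":
    # Step the load up; 200 simultaneous transactions is the level the
    # comment says crashed the real tests.
    for level in (10, 50, 200):
        load_test(level, level * 5)
```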

A lot of focus has been on the front-end code, because that’s the code that we can inspect, and it’s the code that lots of amateur web programmers are familiar with, so everyone’s got an opinion. And sure, it’s horribly written in many places. But in systems like this the problems that keep you up at night are almost always in the back-end integration.

The root problem was horrific management. The end result is a system built incorrectly and shipped without the kind of testing that sound engineering practices call for. These aren't 'mistakes'; they are the result of gross negligence, ignorance, and the violation of engineering best practices at just about every step of the way.

…“No way would Apple, Amazon, UPS, FedEx outsource their computer systems and software development, or their IT operations, to anyone else.”

You have to be kidding. How do you think SAP makes a living? Or Oracle? Or PeopleSoft? Or IBM, which has become little more than an IT service provider to other companies?

Everyone outsources large portions of their IT, and they should. It’s called specialization and division of labor. If FedEx’s core competence is not in IT, they should outsource their IT to people who know what they are doing.

In fact, the failure of Obamacare's web portal can be more reasonably blamed on the government's unwillingness to outsource the key piece of the project – the integration lead. Rather than hiring an outside integration lead and giving them responsibility for delivering on time, for some inexplicable reason the administration decided to make the Centers for Medicare and Medicaid Services the integration lead for a massive IT project, despite the fact that CMS has no experience managing large IT projects.

Failure isn't rare for government IT projects – it's the norm. Over 90% of them fail to deliver on time and on budget. But more frighteningly, over 40% of them fail absolutely and are never delivered. This is because the core requirements for a successful project – solid up-front analysis and requirements, tight control over requirements changes, and clear coordination of responsibility with accountability – are all things that government tends to be very poor at.

The mystery is why we keep letting them try.

Jari Mustonen October 27, 2013 at 5:14 am

First, every post and news story should mention that the creator of the website is CGI.

As an IT professional, I can say that there are two parties to blame:

1. CGI. I think the US should sue CGI out of existence. Companies whose business model is to win government contracts, fail at them, and then force said government to buy more of their "services" should be put out of their misery. If this is not an option, they should be named and shamed.
2. US government. They buy from CGI, et al., while paying them ridiculous amounts of money. The people making the buying decisions clearly do not know anything about IT. Hire people who know how to build IT services. Recruit them from places like Google, Facebook, Apple, etc. They also know how to buy IT services.

Neville October 27, 2013 at 9:00 am

“The real problems are with the back end of the software.”

Nope, it’s the fundamentally flawed ACA ‘concept’. GIGO

Software, front or back, is merely the hapless carrier of those illogical ACA premises… ideological premises frantically composed into dictates– by grossly incompetent Congressional staffers and lobbyists in the dark of night on the Potomac… to get ACA passed.

No Congressman, President, or SCOTUS jurist ever read the vast ACA text… nor had the slightest understanding of its actual “commands”. (Infamously, Pelosi was very smug about ACA’s opaque mysteries — ‘we have to pass it to find out what’s in it’)

As the saying goes:
Ya Can’t Make Chicken Salad out of Chicken Sh** !

dan1111 October 27, 2013 at 1:10 pm

I do not support the ACA, and I doubt the exchanges will work well. But that is not the reason for the website’s failure. What they want to do is perfectly feasible from a technical standpoint. So for this discussion, the real problems are with the software.

derek October 27, 2013 at 6:17 pm

I suggest that it isn’t. An engineered proposal for this type of project would have come back with a completion date of about 2025 and multiple billions of dollars because it would require the reengineering of almost every government department IT. By then someone would have put an end to the stupidity. It isn’t feasible.

Another reason why it isn’t feasible? Go to congress and say that we need $X billions of dollars to redo the IRS IT structure. So that it would be possible to do quick queries on the financial status and health status of every US citizen.

Blithering idiots. Anyone who supported this mess is a blithering idiot.

Z October 27, 2013 at 11:53 pm

You are simply wrong. I'm familiar with some of those backend systems. As the OP says, some of them were written in the 1960s by guys who are dead now. Most of the code has been moved along over the years, but it is hardly cutting edge. The reason is it never had to be cutting edge. For the primary client (insurance company, government agency) it did the job at a low cost until some idiots from Congress decided they had to have web access to them.

That’s the real issue. The people passing these laws are the least knowledgeable about the country over which they rule. Everyone made a big deal about George Bush The Elder’s fascination with grocery store scanners. Guess what? That’s typical. These people have no idea how stuff works in the world outside DC. It may as well be elves and gnomes for all any of them know. Someone showed them the Interwebs and they said, “let’s just use this magical contrivance to deliver the much needed medical insurance to the poor!”

It is not just that they don’t understand software. They don’t know how insurance works or how hospitals function. They don’t understand how a business functions and why it chooses to provide health insurance or what to buy. Most of these people have been on welfare for so long they no longer recognize the country outside of DC. We may as well be ruled by space aliens. Unless and until you stop thinking the dumbest people in America should be trusted with running America, this crap will keep happening.

JeffC October 28, 2013 at 1:46 pm

the Bush grocery scanner thingy was a MSM myth … just like I can see Russia from my house … didn't happen the way the MSM reported either …

Rahul October 28, 2013 at 1:10 am

@Z

Your argument, if right, demonstrates how people passing these laws are clueless, yes.

But what I don’t see is how it shows the impossibility from a strictly software or technical standpoint. Revamping legacy software systems isn’t uncharted territory. Nor is adding web frontends.

Noah Yetter October 28, 2013 at 5:37 pm

No it isn’t uncharted territory, but doing it in 6 months — correctly — very much is.

Rob Morgan October 29, 2013 at 8:15 am

“Revamping legacy software systems” is often extraordinarily expensive, and high risk. This is because the modern replacement is expected to incorporate a faithful replica of the system it replaced. If that isn’t the case, we’re talking about implementing new, additional systems, which is generally easier. Replacement also includes data migration, a huge and costly aspect, and the additional functions to route interactions between old and new systems – as most of these systems are too large for “big bang” migration.

As Noah points out, these modernisation options are neither simple nor feasible in these short time frames.

dan1111 October 28, 2013 at 3:28 am

@derek, @z, I understand that, but I meant “feasible from a technical standpoint” in a more general sense. Neville claimed that the ACA is so illogical that it is not possible to implement it successfully in software. I am arguing that this is not true, and these problems really are in the software domain rather than the underlying concept of the exchanges. Ancient government databases are part of the software problem.

Dan Weber October 27, 2013 at 10:02 am

First, every post and news story should mention that the creator of the website is CGI.

They were one of 55 contractors. Specifically, how do you say that they are “[the] creator”?

The quoted user said that CMS was responsible for integration and had no experience in that, and there’s been nothing in press coverage to disagree with that. You can build solid systems out of underlying bad systems, or bad systems out of underlying solid systems. It seems someone thought “well, we’ll just tell them all to build good stuff, and it will all plug together like Legos.”

William T Reeves October 28, 2013 at 3:33 am

When your top pay is 176k you cannot attract the type of client-side executives necessary to even engage contractors. You can't outsource the client function, and the type of person who sticks around for 30 years to make 176 probably lacks almost all of the things you need to succeed. This stuff is hard for 7-figure CIOs to control, much less time-serving bureaucrats.

notanidiot October 27, 2013 at 11:22 am

This is pure idiocy. CGI implemented its part. Read the contract. It delivered what was on the contract. The integration was up to the Center for Medicare and Medicaid, which failed miserably. It’s incredulous how ignorant some of you are.

TMC October 27, 2013 at 11:44 am

Read some of the programmer’s sites. The front end is so poorly written it’s DDoSing itself.

Dan Weber October 27, 2013 at 1:28 pm

Unfortunately, “some of the programmer’s sites” include idiots who will say anything to defend Obamacare, as well as idiots who will say anything to attack Obamacare. If you have a case, please make it.

TMLutas October 28, 2013 at 1:36 pm

Try this one

http://blog.isthereaproblemhere.com/

I’m somewhat concerned that they’re processing applications that were abandoned and which did not agree to the T&C. That’s more than one major failure on the front end and not a comprehensive list.

Rahul October 28, 2013 at 3:29 am

Do you have a link? I do want to read their contract.

Charlie October 28, 2013 at 4:26 am

“It’s incredulous how ignorant some of you are.” Holy smokes, that is fantastic.

If English is your first language, please put on this dunce cap and go sit in the corner.

John Cunningham October 28, 2013 at 12:13 pm

CGI is infamous in Canada. They were hired to create the nationwide registry of rifles and shotguns for $100. After 6 years, over $1 billion had been spent, and it did not work. They were then given a new contract to do another gun registry, which also failed. In 2010, the Ontario govt fired CGI for failing to deliver a registry of diabetic and heart patients. Curious how hiring Moochelle's college pal was more important, no?

Static October 28, 2013 at 3:30 pm

Wrong…
CGI Federal also built the successful Kentucky exchange.
http://www.modernhealthcare.com/article/20131017/NEWS/310179950

Steve Sailer October 27, 2013 at 5:34 am

But, but, but Obama has so much management experience! Didn’t he manage D-Day?

Oh, wait, that was Dwight Eisenhower. My mistake …

Okay, now I remember: Didn’t Obama once help get asbestos partially removed from a housing project?

babar October 27, 2013 at 6:54 am

does _anyone_ claim that obama has much management experience?

Steve Sailer October 27, 2013 at 7:37 am

Obama’s 1986 asbestos removal project was given a striking amount of publicity in 2007-08 by his supporters as an example of what he got done during his vaunted “community organizer” phase, followed by a debunking in the Los Angeles Times arguing that Obama merely assisted a local activist. New Yorker editor David Remnick’s 2010 bestseller “The Bridge: The Rise of Barack Obama” devotes pp. 164-168 to rebunking Obama’s role in the project:

http://books.google.com/books?id=F6HAasv2v-4C&pg=PA165&lpg=PA165&dq=obama+management+experience+asbestos&source=bl&ots=lyrVEMizsd&sig=6x3KFO5nQy1-FSh3AV8Qh6UYDqA&hl=en&sa=X&ei=TvlsUvCcCYiriQKt5YHICg&ved=0CDYQ6AEwAQ#v=onepage&q=obama%20management%20experience%20asbestos&f=false

Rich Berger October 27, 2013 at 7:44 am

Like most of Obama’s background, this lack was downplayed or covered up. Sarah Palin had a lot more experience just being mayor of Wasilla.

Mark Thorson October 27, 2013 at 11:21 am

He’s given the order that the problems will be fixed by the end of November. If that’s not management, what is?

derek October 27, 2013 at 6:20 pm

Or he’ll go down there and fix it himself.

What is sad is that the people saying these things don’t elicit laughter.

Jonathan Silber October 29, 2013 at 10:16 am

More probably, Obama will claim he didn't know of the existence of ObamaCare until just recently, when, like the rest of us, he read about it in the newspapers.

Thomas B October 27, 2013 at 11:44 am

I’ll go out on a limb and say anyone who has spent six years as President has more management experience than I do.

I don’t think the President is beyond reproach, but there are probably stronger critiques here.

Thomas B October 27, 2013 at 11:48 am

* Or even just over four.

Rich Berger October 27, 2013 at 2:58 pm

“(Reuters) – The United States may have bugged Angela Merkel’s phone for more than 10 years, according to a news report on Saturday that also said President Barack Obama told the German leader he would have stopped it happening had he known about it.”

Do you see a pattern with Mr. Obama’s “management” style? The only thing missing is a statement that nobody is madder than him about it.

Clint Eastwood was right. The only problem is that the low-information voters may be getting the message a little too late.

Roy October 27, 2013 at 10:56 pm

Nero had 14 years of experience, so did Commodus, King John had what, 17? Aethelred Unraed had 38… I could go on…

Tenure and experience are only opportunities to learn. Successful tenure is another matter.

TMLutas October 28, 2013 at 1:48 pm

He may have management experience but he apparently is blind to the plain fact that IT project management is an odd bird and has never heard of the Mythical Man Month. Just about every IT project manager in America cringed when they heard about the tech surge. They have another word for that in the business. It’s called the IT project death march. Obama publicly launched a death march. He’s giving out all the classic signs.

RobertF October 28, 2013 at 4:37 pm

“I’ll go out on a limb and say anyone who has spent six years as President …”

That’s the number unadjusted for golf and campaigning, which rather undermines your point.

Rahul October 27, 2013 at 7:16 am

What’s the precedent here? Ronald Reagan’s military experience?

Rich Berger October 27, 2013 at 7:48 am

Actually, he was in the military. Maybe not Rahul-acceptable service, but he was.

http://www.reagan.utexas.edu/archives/reference/military.html

Steve Sailer October 27, 2013 at 7:20 pm

Reagan joined the Army Reserve in 1937, three or four years before conscription was introduced.

TMC October 27, 2013 at 10:57 am

And the President of the Actors Guild, then Governor of the largest state in the union.
More than a little experience in management.

Steve C. October 27, 2013 at 11:08 am

“I accept full responsibility, but not the blame. Let me explain the difference. People who are to blame, lose their jobs. People who are responsible, do not.” David Frye as Richard Nixon.

rege October 27, 2013 at 11:25 am

He successfully governed California for chrissakes. Get over it.

Rusty Synapses October 27, 2013 at 5:50 pm

I voted for Obama twice, but it’s become pretty clear he’s not a good leader or manager – the criticisms of him in the first campaign were generally right. He gives pretty good high level speeches, though, especially if you like false choices. (It’s also become clear that he likes things that sound good – e.g. “transparency” – but he has zero credibility that he has any intention of following through.)

dead serious October 27, 2013 at 8:08 pm

To be fair, and I’m no Obama lover, anyone who claims to want “transparency” in actuality wants the opposite.

I thought this was well-understood by most adults.

dead serious October 27, 2013 at 8:24 pm

Similar to Republican office holders who abdicate their positions "to spend more time with the family" after being inconveniently caught with young boys or farm animals.

Adults know that’s horseshit.

Another example: Republicans with the “family values” bullshit when actions show that they “value” only certain kinds of families.

RobertF October 28, 2013 at 4:39 pm

This is the problem with liberal screwups. Rather than admit the problem, they try to change the subject. As if Reagan’s military record is relevant to the clusterf**k that is ACA.

Darren October 27, 2013 at 2:23 pm

Hilarious! It’s like you honestly believe that the PRESIDENT OF THE UNITED STATES personally project manages every single government operation. And I’m sure he’s sitting down writing code on all the government IT projects and personally resurfacing the roads while he’s at it.

Rusty Synapses October 27, 2013 at 6:00 pm

I’m not sure I understand your point (is it that Reagan didn’t have experience or that Obama doesn’t need it?). In any case, I think it’s a complete failure by Obama – not that he’s supposed to project manage the whole thing, but three and a half years ago, he needs to start it off by gathering the key internal people together an saying “We’ve got to do whatever this takes to get it done early and right.” (I’m guessing he could find money for over budget – he seems to be finding it now.) Plus, not having a prime seems like incredible stupidity, both from a practical and political standpoint (too easy for everyone to point at everyone else.)

TMLutas October 28, 2013 at 1:55 pm

The President of the United States should have, before the PPACA ever passed, asked simple implementation questions like "what's the government's track record on IT projects?", which would have alerted him that it's generally horrible. He might have done a little reading of the classics, like The Mythical Man-Month. He needed to put his foot down from the day he signed the legislation that there would be enough time to do testing and that the scope would be fully set in stone early enough to develop and work out the bugs before the go-live date. It would have taken about ten minutes of his time.

IT project management has a horrible record of delivering on deadline and under budget. It's worse in government IT. It is even worse when there is significant user pushback, like 34 states refusing to sign on to run their own exchanges. This was a known high-stakes project from day 1, and it is reasonable to expect that a president be more hands-on with his signature legislative accomplishment.

Jonathan Silber October 29, 2013 at 10:22 am

SignatureAchievementHealthcare.gov,
courtesy of the Smartest–and the Maddest–Guy in the Room.

Rahul October 27, 2013 at 5:50 am

There are no easy fixes for the fact that a 30 year old mainframe can not handle thousands of simultaneous queries.

A queue? Or would people mutiny at the prospect of a non-realtime transaction?

Jerry October 27, 2013 at 6:59 am

And how long will people be willing to wait in this queue before giving up and complaining?

Say a million people visit the website [1], and each person queries for insurance five times [2] -> that's 5 million queries. Say the system can handle 1,000 requests a second [3] -> that's 83 minutes to handle those 5 million queries.

I guess if the system emails back the results, instead of returning results in real time, then maybe. But people will probably still complain.

[1] http://www.examiner.com/article/obamacare-site-crashes-first-day-of-new-system-plagued-by-glitches-crashes
[2] just a rough estimate. It's 16 minutes if each person does just 1 query
[3] another rough estimate
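
The same arithmetic as a quick sanity check; every input is Jerry's rough assumption from the footnotes above, nothing more.

```python
# Back-of-envelope only: all figures are the commenter's assumptions.
visitors = 1_000_000        # [1] visitors in a day
queries_each = 5            # [2] queries per visitor (rough estimate)
backend_rate = 1_000        # [3] queries per second the back end can absorb

total_queries = visitors * queries_each
print(total_queries / backend_rate / 60)   # ~83.3 minutes to drain 5M queries
print(visitors / backend_rate / 60)        # ~16.7 minutes at one query each
```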

Rahul October 27, 2013 at 7:10 am

Yes I meant the offline model; not a person actually waiting 83 minutes in queue for a real-time slot.

Basically, when you have a slow back end that's too expensive to revamp and parallelize, the front end can be fast: accept the data, throttle requests to the back end, and then respond via email, etc.

Not always an option, e.g., when extreme interactivity in exploring options is needed.
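
A minimal sketch of the offline model Rahul describes, assuming a single worker and a made-up back-end capacity: the front end accepts the submission immediately, a worker drains the queue at whatever rate the legacy system tolerates, and the result goes out by email (stubbed with a print here).

```python
import queue
import threading
import time

applications = queue.Queue()        # the fast front end drops submissions here

BACKEND_RATE_PER_SEC = 5            # assumed legacy capacity; purely illustrative

def frontend_submit(application):
    """Fast path: accept the data and acknowledge instantly; no legacy calls."""
    applications.put(application)
    return "Thanks -- we'll email your quote when it's ready."

def legacy_quote(application):
    """Stand-in for the slow legacy back-end query."""
    time.sleep(0.2)
    return {"applicant": application["email"], "premium": 310.0}

def email_result(result):
    print(f"emailing {result['applicant']}: ${result['premium']}/mo")

def worker():
    """Drain the queue no faster than the back end is assumed to tolerate."""
    while True:
        application = applications.get()
        email_result(legacy_quote(application))
        applications.task_done()
        time.sleep(1 / BACKEND_RATE_PER_SEC)    # crude throttle

threading.Thread(target=worker, daemon=True).start()

if __name__ == "__main__":
    for i in range(3):
        print(frontend_submit({"email": f"user{i}@example.com"}))
    applications.join()             # wait for the demo queue to drain
```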

Dan Weber October 27, 2013 at 10:19 am

You could build a hell of a caching layer. It all depends on the specifics of what’s going on, and there might be legal hurdles that prevent you from holding onto certain stuff.

It’s entirely solvable technically, provided the engineers building it have been told about the problem in time. There’s the whole idea of giving them estimates of the plan they could get and then emailing them when the specifics, once their eligibility has been verified. But there was a management decision made that people would not ever see price quotes without the subsidy amount already subtracted from it. That seems entirely a political decision made so as to not cause any headlines about sticker shock, and now we see the trade-off.

john personna October 27, 2013 at 10:46 am

I don’t believe you can cache, because I believe that the insurance providers demand single-user pricing ability. We don’t actually know what backend searches THEY do, do we? Do they check if I’m a current customer, and give me a rate no so different from my current one?

Dan Weber October 27, 2013 at 1:32 pm

It’s community(ish)-rating, right? All 28-year old men should get the same rate from the same company, yes?

Of course they made deliberate design decisions to make what should be a very easy quote-then-verify system, like any mortgage website uses, into something almost exactly like it but incredibly complicated.
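
If Dan Weber is right that rating depends on only a handful of variables, a cache keyed on those variables collapses the per-visitor carrier traffic. A minimal sketch, assuming (hypothetically) that the pre-subsidy premium is a pure function of carrier, plan, age, rating area, and tobacco status; the per-applicant subsidy step would still have to happen outside the cache, and every name and rate below is made up.

```python
import functools

def fetch_premium_from_carrier(carrier, plan, age, rating_area, tobacco):
    """Stand-in for the slow per-carrier query; illustrative values only."""
    print(f"cache miss -> querying {carrier} for {plan}")
    return 465.0 if tobacco else 310.0

@functools.lru_cache(maxsize=100_000)
def cached_premium(carrier, plan, age, rating_area, tobacco):
    # Repeat lookups for the same rating combination never hit the carrier again.
    return fetch_premium_from_carrier(carrier, plan, age, rating_area, tobacco)

if __name__ == "__main__":
    # Two different 28-year-olds in the same rating area share one cached quote.
    print(cached_premium("AcmeHealth", "Silver-1", 28, "GA-7", False))
    print(cached_premium("AcmeHealth", "Silver-1", 28, "GA-7", False))  # cache hit
```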

john personna October 27, 2013 at 2:42 pm

I know that on the California system you are asked up front your age, your average number of medical visits per year, and your number of prescription drugs. I don’t know, given a SSN, how much further insurers go on each query. I mean there may be a gap in my knowledge, with respect to ACA rates. Is there supposed to be one 28YO rate (for the day)?

Mark Thorson October 27, 2013 at 10:53 am

Yes, a web-based HTML5 front-end, and somewhere on the back-end a deck of cards is being punched for input to the mainframe.

Slocum October 27, 2013 at 7:48 am

“And how long will people be willing to wait in this queue before giving up and complaining?”

Use self-reporting to let people go through the process without any connection to the slow, clunky legacy systems. Generate a provisional quote that’s subject to confirmation/adjustment when the self-reported values are checked against the back-end systems in non-real time batch processing.
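
A minimal sketch of Slocum's suggestion, with made-up rates and a toy subsidy schedule (nothing here reflects the actual ACA formulas): quote instantly from self-reported values, queue the record, and reconcile against the slow authoritative systems in a non-real-time batch job.

```python
pending_verifications = []      # collected during the day, reconciled in a batch job

def estimate_subsidy(income):
    """Toy subsidy schedule, not the ACA's."""
    return 150.0 if income < 30_000 else 50.0

def provisional_quote(self_reported):
    """Quote immediately from what the applicant typed in; no legacy calls."""
    base_premium = 310.0                                   # illustrative base rate
    quote = base_premium - estimate_subsidy(self_reported["income"])
    pending_verifications.append(self_reported)
    return {"premium": quote, "status": "provisional, subject to verification"}

def lookup_income_in_legacy_system(record):
    """Stand-in for the slow authoritative check run off-hours."""
    return record["income"] + 500

def nightly_batch():
    """Reconcile self-reported values against the authoritative systems."""
    for record in pending_verifications:
        verified = lookup_income_in_legacy_system(record)
        status = "needs adjustment" if abs(verified - record["income"]) > 1_000 else "confirmed"
        print(f"{record['email']}: {status}")
    pending_verifications.clear()

if __name__ == "__main__":
    print(provisional_quote({"email": "a@example.com", "income": 28_000}))
    nightly_batch()
```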

dan1111 October 27, 2013 at 2:38 pm

Rahul, that is certainly a possibility, but it is not an “easy fix”. It is re-thinking the entire design of the website, not just from a technical standpoint, but from a user standpoint as well.

Faceless Commenter October 27, 2013 at 8:57 pm

As for the user standpoint, an e-mail later in the day or next morning would be a vast improvement over what’s happening now — although it’s bush compared to everything else on the internet.

dan1111 October 28, 2013 at 3:30 am

It would still be good compared to nearly all government services, though.

Andao October 29, 2013 at 6:49 am

The queue…exactly what I was thinking in all of this. It could be entirely handled on the back end. Is someone’s insurance rate going to change drastically even over the course of a few days?

And if it’s an issue of activation (I need to buy insurance today and go to the hospital RIGHT NOW), the system should be able to have a time stamp of when users purchased insurance, even if the official activation isn’t for a day or two after the fact. A claims person can sort that out later.

Rich Berger October 27, 2013 at 7:42 am

This reminds me of the end of War of the Worlds when the Martians were killed by earth bacteria.

RobertF October 28, 2013 at 4:41 pm

Reminded me of the destruction of Sennacherib’s host. “The Obamabots came down like wolves on the fold…”, etc.

Jay October 27, 2013 at 8:31 am

Progger’s theory on making government as big as possible is as follows:

Institute flawed government policy. Claim market failure. Push for even more intrusive government policy. Rinse, lather, repeat. The ACA exchange failure was intentional negligence. The end game is European-style health insurance/delivery system.

john personna October 27, 2013 at 8:34 am

I also appreciated Dan's thorough answer, and as I mentioned, I think the real way to collapse the problem would be to have insurers provide rate tables rather than umpteen live backend connections.

john personna October 27, 2013 at 10:25 am

You know, SAP systems are noted above as large and typically outsourced by large companies … note that they are also famous for multi-year rolling failures, and sometimes failure to implement. I see that someone has a list of the 10 Biggest ERP Software Failures of 2011. They are not all governmental, though many are, because governments tend to buy big, highly featured, highly specified, software systems.

And as far as I know, Obama wasn’t a system architect in all of them.

TMC October 27, 2013 at 11:06 am

How many cost $600 mil for what is mostly brochureware?
That is not a lot of connections to databases. IRS data could be uploaded to the site every night, and the insurance providers were not even giving quotes, as we find out now. The site was giving lowball estimates and the insurers have been following up with real quotes.
As stated above, the data could be cached even for an older legacy system. Not a lot of data elements there to share.

john personna October 27, 2013 at 11:14 am

It is certainly not brochureware under the current specification, and as I say, I think insurance providers wanted single user pricing ability. I don’t believe any of them would just turn over rate tables, based on some small set of applicant variables.

Michael B Sullivan October 27, 2013 at 11:36 am

I worked for six years at Netsuite, a provider of accounting, ERP, and CRM software.

You are absolutely right that SAP was bloated, expensive, and prone to failure. I know because Netsuite and several other companies during the 2005-2010 time period basically made a business of going to unhappy SAP customers and saying, “Let us replace SAP. We’ll be faster, cheaper, and better.” And they were very successful at doing it. I don’t think it’s an exaggeration to say that Netsuite substantially went public on the back of replacing SAP. It wasn’t a secret. The executives would regularly reference it in company meetings.

Eventually, most of the mid-sized companies that were unhappy SAP customers got gobbled up, and at Netsuite people talked about it as the end of the low-hanging-fruit era of sales.

So you’re right, SAP was pretty terrible. But that was a good example of the market working as it’s supposed to, and non-terrible alternatives springing up and out-competing the big slow dinosaur.

Bob October 28, 2013 at 4:28 pm

But SAP is still out there, and it’s still terrible. People are still buying their Kraken-style software.

Michal October 27, 2013 at 11:46 am

> because governments tend to buy big, highly featured, highly specified, software systems

Yes, but that is not an excuse; that is the problem.

I work in the tech industry and I’d say that we build more complicated systems than your typical government ERP. The big difference is that in the tech industry the business people work closely with the engineers and they are trying to strike a good balance between the business requirements and the implementation costs. People who understand both the business side and the tech side are extremely valuable (and extremely well paid).

On the other hand, from what I heard first hand about some government IT projects, they are designed by committee, horribly overengineered behemoths. Often, the final specs are delivered so late that it is impossible to deliver the product on time. Often, the targets are set too aggressively. Do we really need 99.5% or 99.9% uptime on something that is basically processing forms? Can some of the work be done by humans before it is implemented in the machine?

It is no wonder that such projects are pure pain for the software developers, and the better ones will look for a job elsewhere. That leaves these projects staffed by only semicompetent or incompetent developers, which is a further drag on these projects.

Now, to be fair, the same problem is present in a lot of other outsourced IT projects for big companies, but the government is willing to sink incredible amounts of money into bad projects.

john personna October 27, 2013 at 11:58 am

I would very much prefer governments to grow effective solutions over time. We need pragmatic legislatures for that.

Matthieu October 27, 2013 at 8:50 am

Kind of FUD for a comment.

"Everyone outsources large portions of their IT": outsourcing is not a panacea either. I worked for a very large company, and completely outsourcing the *development* was, for some projects, a complete failure. It can make sense to outsource the maintenance, the server administration, etc.

"It's called specialization and division of labor. If FedEx's core competence is not in IT, they should outsource their IT ...": IT is so entangled with your business that it becomes one of your core competencies. Yes, you don't have to develop an HR program today; you ask an integrator (CGI) to use one of the solutions from SAP, IBM, or Oracle. But I'm sure that FedEx doesn't outsource the development of the IT around its business (fleet management).

"Failure isn't rare for government IT projects – it's the norm. Over 90% of them fail to deliver on time and on budget." Again, FUD. I don't know where this number comes from, but it's not specific to government projects; it's the case for a lot of very large projects, public or private. Look at airline companies' mergers and the problems with their IT integration. "a Jul 2008 study by the US Government Accounting Office (GAO) found that of 840 federally funded projects 49% were poorly planned, poorly performing or both [3]. While some would like to believe that the situation is better in private organizations, a 2008 study by the Information Systems Audit and Control Association found that 43% of 400 respondents admitted that their organization had had a recent project failure [4]." http://calleam.com/WTPF/?page_id=1445

"This is because the core requirements for a successful project – solid up-front analysis and requirements ..." Solid up-front analysis and requirements, haha, the big dream of the 90s: a nice report with a lot of diagrams and a lot of "analysis"; give that to your IT team and see the result in 6 months. We know now this methodology is not working.

I don't know any details about the problems of this particular system, but to me it seems a classic failure for a very large project done with some kind of old methodology (waterfall project management) that you find in large companies (public or private).

Ryan Langrill October 27, 2013 at 10:26 am

I don’t think a 49% failure rate among projects for the government is equivalent to 43% of businesses having had a failure. If a business had ten projects, one of which failed, they respond in the affirmative–the total project failure rate may be significantly lower. (The way these stats are given it could also be higher–the 43% of businesses could do most of the IT projects and have a horrible success rate, making the total private sector failure rate higher than the government rate.)

Some of the other stats in that article are ambiguous. 45% of projects go over budget? That is consistent with them being, on average, correct in their budget estimates…

Millian October 27, 2013 at 9:00 am

“The mystery is why we keep letting them try.”

Because “we” voted for a Democrat as President, a fact some people seem remarkably unable to digest.

Barkley Rosser October 27, 2013 at 10:38 am

To the Partisan Millian,

It is my understanding that there were major glitches with the rollout of the Medicare prescription drug expansion during the W. Bush admin. Of course, there was less carrying on about that one as it was mostly supported by Dems rather than facing overwhelming opposition from the opposition party trying to destroy the project at every turn, even though the project was originally their idea, a fact that somehow keeps being ignored all the time.

TMC October 27, 2013 at 11:12 am

While Obama has called them 'glitches', this is a crash and burn.
Medicare D had bipartisan support because it was a good idea.
Republicans liked a new program that would actually save money in the long run while adding new benefits.
Democrats like adding new benefits.

I wonder if Part D held up to the projection that it would save $2 for every $1 spent.
It was a couple years into it, but I wonder if it has held up. I'd guess so, since we hear little about it anymore.

dem October 27, 2013 at 11:28 am

Should we go back to enslaving African Americans too, since that was originally a Democrat idea?

TMC October 27, 2013 at 12:32 pm

Can't blame the Dems for slavery, but the Republican party was largely founded on freeing the slaves.
The Dems fought it and any of the civil rights reforms after. Bull Connor was a member of the Democratic National Committee.

Deon October 27, 2013 at 11:22 am

Unlike the FBI’s failed Virtual Case File project (Bush) or the FAA Flight Control System overhaul (Reagan, into Bush, eventually killed under Clinton).

Big software projects fail a lot. Government software projects fail even more. Political party has nothing to do with it, as even a cursory understanding of the history makes abundantly clear.

Mike W October 27, 2013 at 12:00 pm

Actually, the same complaints about government contractors wasting taxpayer money have often been made about Republican supported defense projects.

delirious October 27, 2013 at 9:36 am

So, if you were eligible (required) to sign up with ACA, would you try to sign up offline? (If that’s possible?)

Ray Lopez "the troll" wins *again*! October 27, 2013 at 9:49 am

Just want to point out that Dan Hanson’s reply–which, after being highlighted on MR, which is #11,585 in the USA amongst all websites and #1 for econ blogs, is worthy of being posted in Dan’s resume as a highlight–was in response to my comment that the failure of healthcare.gov was due to the front end not being ‘modern’ (or OOP for you geeks, like Microsoft’s Silverlight is for a front end) and Dan was replying to that opinion. I still say making Silverlight the front end would facilitate a better design, even for the backend (Silverlight (frontend) + Entity Framework (backend), both OOP = success), though politically the USA would never give Microsoft such a coup.

But the larger point is that I, Ray Lopez, who some say is a troll, started this debate. It’s always that way. A paper by research scientists citing network theory found that the town / office ‘gossip’ –or ‘troll’ if you will– was responsible more often than not in disseminating valuable information. Next time you accuse somebody of trolling, remember that. Trolls are indispensable, and I, like the geek programming villain in the underrated James Bond film “Goldeneye”, I am invincible!

anon October 27, 2013 at 10:17 am

Your last sentence made me think of the movie “Titanic” rather than “Goldeneye”.

YMMV.

whatever October 27, 2013 at 5:53 pm

Yeah, but the Titanic wouldn’t have sunk if they had used SilverBlight or the latest version of Splash, or some other OOPS! language.

AndrewL October 27, 2013 at 12:05 pm

How will silverlight or flash solve the problem? The problem is clearly the backend, the interfacing with dozens of different databases on dozens of different systems. How does silverlight solve this issue?

Silverlight and Flash are client-side processors: they take in user input, process it client side, and send results back to the server. ACA is a server-side processor: users send their input to healthcare.gov to be processed by the server, and the results are sent back to the client. For Silverlight to work, you'd have to send the entire contents of the federal databases to the client's computer. That just won't work on every level.

Ray Lopez October 27, 2013 at 3:13 pm

Silverlight and Flash will make for better OOP coding on the front end, which surely is a good thing. At the backend, Entity Framework (EF), which is an OOP way of writing raw SQL commands, can be used. Silverlight and EF work well together and you can design a website in one-tenth the time of Javascript and HTML front-end websites using raw SQL commands for the backend database. So the assumptions I made are: (1) front end was important, and one reason for the delay (rebutted by Dan Hanson), (2) OOP will result in a front-end and back-end being built much faster, and this will speed up healthcare.gov (rebutted by Dan Hanson, who claims it’s the old, obsolete hardware at the backend that’s the problem, not the software). So it depends on who you want to believe, me, or TC-kudoed Dan Hanson? I posit healthcare.gov’s problems are software, while Dan Hanson is positing that it’s really more of a hardware problem (think of a bunch of old mainframes being fed data punchcards at the backend and not being able to easily talk to each other). Who knows? Dan may be right, but I’m staking my claims on the software.

ANON October 27, 2013 at 3:53 pm

I think you're talking out of your ass.

* OOP is not magic pixie dust that you spread on everything to make it better. It's an approach to coding which is (usually) better than the alternatives.
* Switching to Silverlight and Flash? Almost no websites are all Flash, and it's been quite a while since I've seen a Silverlight-based site. There are a lot of reasons to stay with HTML + JS.
* Dan's point is not necessarily "hardware"; the outside systems that the ACA website has to interface with are old, in both design and performance. Keep in mind that this isn't the ACA's backend; these are the back ends of other systems.

Dan Weber October 27, 2013 at 7:05 pm

Don’t argue with crazy.

Bob October 28, 2013 at 4:30 pm

I wonder if he is talking out of his ass, or software really is his main occupation, in which case he’s incompetent.

Rob Morgan October 29, 2013 at 8:36 am

Ahhh, no. I’m not sure what level of experience you have in enterprise systems and integration, but this has all of the hallmarks of a fundamentally flawed and unscalable integration model. This is more complex than simple 2 or 3 tier app design, and I suspect you know that already.

Once you’re in this zone of integration distress it doesn’t matter how slick the front-end is. And the back-ends are not just “databases” even if that’s the common press terminology. They’re not accessible via a simple entity model. They’re policy acceptance and administration systems, entire applications that just might be SOA-enabled, but probably won’t be. Some will rely on file interchange for their external interfaces – welcome to the 1970s.

The only viable solution here will be to decouple the entire process, and separate application submission from reviewing resulting quotes – some hours later, with luck and a good headwind.

ad*m October 27, 2013 at 2:15 pm

“Dan Hanson’s reply–which, after being highlighted on MR, which is #11,585 in the USA amongst all websites and #1 for econ blogs, is worthy of being posted in Dan’s resume as a highlight – ”
->
Dan may end up being blacklisted for this, please don’t repeat his last name. Anyone who criticizes our Dear Leader or anything he is involved in runs the risk of having the IRS/EPA/DOJ sent after them.

Remember the Gibson Guitars Raid: http://www.realclearmarkets.com/articles/2013/05/24/now_the_gibson_guitar_raids_make_sense_100343.html

Komori October 28, 2013 at 10:08 am

As a linux user, I would be severely pissed off to have to buy MS Windows in order to interface with a government website. Silverlight is simply unacceptable for any government project that faces anyone other than government employees. (And, to preempt the inevitable, Moonlight has been abandoned, and was never a good alternative.)

Nor is OOP a silver bullet, especially in government procurement situations like this one. If the requirements were in flux as drastically and as late as several stories have claimed, there was simply no way for this project to ever happen on time.

Marie October 27, 2013 at 10:51 am

So you have a huge, convoluted bureaucratic system being installed, and how many of the people involved (hundreds?) knew it wasn't going to work? But you just keep your head down, make your little widget, and don't talk about it out loud.

If it quacks like the Soviet Union . . .

Faceless Commenter October 27, 2013 at 9:07 pm

One thing we’ll get that the Soviet Union doesn’t is tell-all books from this administration. My eyes cannot physically wait to read them.

Corey Cole October 27, 2013 at 12:05 pm

I'm not a super-huge fan of mainframes, but I fail to see why we have to believe that these are awful, slow, 30-year-old systems. IBM is selling new mainframes with serious computing power (read 60+ cores @ 5GHz). As to the original programmers being dead or retired, so what? A few jobs back my employer used an IBM mainframe for some critical systems. I had co-workers in their mid-30s who could understand COBOL, make appropriate mods, write new programs, etc. We also had access to programmers via an eastern European contracting agency who were smart and knew what was what, the bulk of whom were probably in their late 20s.

The biggest problem that I have found with mainframes is not that they’re slow (they’re not) or that nobody understands COBOL, but that they’re expensive as hell. IBM has had plenty of time to figure out how to monetize the hell out of these things. In fact, they’re so good at it that they can afford to give you better hardware than you pay for so that they can come back later and trivially extract money by turning on another CPU or an additional bank of memory. It’s so expensive (and so monetized) that it was worth my employer’s time to pay me tens of thousands of dollars in order to come up with a way to implement our own queue manager layer on top of MQ Series so that we didn’t need to buy as many raw queues from IBM.

Rahul October 28, 2013 at 1:28 am

Are people really buying new mainframes for new applications? Which ones?

I was under the impression the market was those firms that have ready code written say for AIX they want to reuse or scale up and it’s too much trouble to revamp the whole system.

I can't think of any good reason to use COBOL now other than legacy compatibility or maintainability. If it ain't broken, why fix it, and that sort of thing.

Of course IBM can charge big bucks for mainframe work, not because they are so fast or great but because it's a pretty thin market with low competition and the legacy guys are locked in.

Dan Hanson October 28, 2013 at 8:24 pm

I didn't mean to talk smack about mainframes as an architecture. I was simply pointing out that there is wide variety in the nature of the back end systems, and some of them were coded before modern programming methodologies were developed. They may have non-standard interfaces by today's standards, and there may be bugs lurking in the hardware or software that were never uncovered until they were asked to do something new. They're not necessarily bad – they're just an additional risk.

Errorr October 27, 2013 at 12:30 pm

This is very, very close to getting at the core of the problem but is still just a little bit off from what I have gotten talking to the actual insiders (like my wife). The decision to put CMS as the IT lead was largely political but also the only sensible option. CMS is much harder for the House Republicans to harass with constant hearings and subpoenas. It is also the largest organization in HHS that had ANY infrastructure that was needed to actually do this type of procurement. It would have been great to get DOD or DHS to run the program but for the problem that the Executive can't just pick and choose how to spend funds appropriated by congress. They can't go out and hire the individuals who would have the knowledge because it would be near impossible to find competent people who are willing to take a MASSIVE pay cut. The pay structure of Civil servants is massively distorted in both directions where you end up with overpaid low level hacks that feel you up at the airport and massively underpaid managers. The very last civil servants with the amazing pensions have retired in the past few years and that was the only thing that kept high performing boomers working to get their 30 years in. That meant that the team was made up of people at the end of their careers who have finally been promoted beyond their level of competence, young new workers who like the apparent job security but have no experience, and the mediocre execs who don't take a payout to work privately.

That is why the acquisition and functional program managers are failures. Ideally the government would ask an outside integrator to come in and run the thing. I know companies like Accenture, IBM, Lockheed, CACI, all would have bid. (Side note: Lockheed would have underbid everyone and given you a system so filled with custom code that they are the only ones who could ever maintain it.) However, a contract for the entire project would have taken forever just to award. That isn't because of some inherent government culture (although any large org will have problems) but because of the wonderful anti-waste laws congress has mandated to ensure the people's money is spent wisely. One little error and the GAO will overturn the award and force a recompete. (The biggest cluster-f of all time is the multi-decade saga to replace the aerial refueling planes as the Air Force watches its fleet of 50-year-old planes disintegrate.) The biggest barriers are things like the Buy-American Act that forces all work to be done inside the US and drives up prices like crazy. The worst is the fetishism of "small-business" that requires that 40% of the money be spent on companies without the inherent experience necessary to complete things. If you aggregated things in a single award the prime contractor has to file a MASSIVE small business sub-contracting plan which also needs to meet all the crazy requirements of the FAR. I once had to get one of those plans from CISCO to buy networking equipment and include it in a file that nobody would ever read unless we were audited. The thing was the biggest waste of money ever.

So they chose a pre-competed contract vehicle called an IDIQ to issue task orders through. This means all the prices for work had already been set but there are still limits to how big any single contract can be. HHS still has to ensure that the money is spread around to small businesses and 8(a)’s. (The biggest scam ever is how “Internet Tubes” Ted Stevens got Alaskan Indigenous organizations to fulfill certain requirements. They are just giant shells that bid on contracts and turn around and contract them out to the real major players. When I was job hunting I would see these ads from some Alaskan company that was offering $120k+ to go drive semi-trucks in Afghanistan; which makes sense since nobody knows how to move freight around a desolate mountainous 3rd world country like the Inuit).

So HHS scraped together every spare dollar they could and outsourced like crazy knowing they had a deadline set in stone (or at least set by Congressional Act). They needed to outsource the program management but there is no available task order for something like "development lead" or program management that was pre-competed so because of time restrictions they had to take it on themselves. That is in addition to being understaffed by more than half. My wife was on a "team" of 2 that was supposed to be 5 people. Sure they hired MITRE and Booze to come in and tell them what they "needed" to do to meet "best-practices" as the corporate world would. That ain't enough!

(Tangent: You should see the headaches people get trying to comprehend how the different laws that define the "flavors" and "colors" of money restrict how the executive can use them. God forbid you spend research and development money on increased headcount; or spend procurement money on operations and maintenance. Also, you better spend what was given to you because it all disappears October 1st. Go home and dream that someday your program will actually get a sensible budget (that assumes such a thing CAN be passed by congress) or else dream that you are running the pet program of Rep. Cletus A. Dumba** counting cow farts or investigating the use of chickens*** as a fuel.)

Still, it isn't like they could go to the contractors and force them to fix things. The Acquisition shop is full of Contract Officers who are by law independent of any program office. They are the lucky ones who, instead of getting fired when they screw up, get investigated by an Attorney General and are charged with CRIMES. They also get public black marks and become unemployable by any government agency or contractor. If that is even in the realm of possibility I am going to be more worried about dotting every i than about a program that I have no stake in whatsoever. They certainly don't get paid enough to care.

(TANGENT: The biggest ethical dilemma of my young life involved my response to an error I found in a contract. There was a contract signed by a CO that was a technical violation of the anti-deficiency act. It obligated the government to pay several hundred thousand dollars. The law says I have to immediately report it to the government but I knew he just missed it. After several weeks of arguing I got the company to throw out the contract and create a new one instead. This was their only government contract and they could have gone to court and received their money as it was valid (kinda more complicated than that). They made very special products for the oil and gas industry and didn't understand that it was illegal to force the government to pay anything in the future. They could at least have gotten the CO fired, investigated, and the entire office audited for missing the 14th page of some contract he signed as an official representative of the US. So all turned out fine? Some overworked schlub had a mistake erased? Not so sure anymore, because the now defunct Minerals Management Service, which collected royalties from resources taken from public land, was rife with corruption and bribery in awarding contracts. I doubt the issues were related in any way, and the investigation was already done, but I wonder if the CO who signed that document was one of the corrupt ones in that office. I was just a dumb inexperienced kid only a few years removed from college who was proud he fixed a problem and made it all easy.)

They still have to "go-live" because the law doesn't really give them any other option. The real revelation is that Sebelius is just a politician (shocking!!!) and not an actual manager. She makes sense for the job from the perspective of a pre-ACA HHS considering her long history with the Health Insurance Industry (as a foil). That doesn't mean she knows much more than any other person who was a trial lawyer 30 years ago and then went into politics. I doubt she ever had to manage any complex organization before. The story was that she was all "see no evil" and "hear no evil" with everyone warning her. She didn't want to hear it because then she would have to "tell the President" that the single most important program of her department is a giant disaster. Of course it was a disaster waiting to happen.

One way they wanted to “simplify” was to integrate the Identity and income verification system with the databases that had the actual information. That seems smart. However, that data just happens to be held by the IRS. The IRS (who should have probably run the exchanges) isn’t allowed to expose that information. It is largely obfuscated from system users (by law) and it is also illegal to disseminate that info to any other government entity unless compelled by subpoena. Well the health care exchanges don’t have subpoena power so there was no way to have a single entity ensure the systems could even talk to each other. I think they eventually get back a simple yes/no on many data fields but that is still a major issue. You have 2 different contractors answering to 2 different agencies trying to figure out how to integrate a system while playing telephone through half a dozen layers of government bureaucrat. Of course the IRS also has to scrape together money from somewhere because they sure can’t go ask congress for it.

The crazy thing is that I truly believe that the only sensible organization that could have succeeded is the IRS. God knows they have implemented some crazy big IT systems before and without any major problems. They have to process over 100 million files in over a week. They regularly update tax records. Hundreds of millions of updates to a database weekly so that they can take 99.9% of the tax records and turn them around in a few weeks. The e-filing seems to work pretty well for many. (But you can't make it too easy or people might find paying taxes too unobtrusive and drive H&R Block out of business, destroying good 'merican jobs and replacing them with evil jobs working for the IRS.)

Errorr October 27, 2013 at 12:51 pm

Didn't quite mean to vent that much… but there are so many problems with the Federal government that could be corrected if people could at least agree to try. Yet we have one party who doesn't want to admit failure of the government. The other party will actively inhibit the ability of the government to function better because they want to make everyone else despise government as much as they do. It will never be that functional. It is just too big and unwieldy in many respects, but so are a lot of organizations.

A lot of crowing was done in the IT crowd that major IT projects now fail less than 50% of the time IN THE PRIVATE SECTOR. This is from data over the past couple of years. The exchanges were doomed because our system of government is broken. I say we try something different than this stupid 3-branch nightmare that is going to cripple our country.

Mark Thorson October 27, 2013 at 7:02 pm

Maybe you’d prefer Taiwan, where they have five branches of government, one of which is the Control Yuan responsible for auditing the other four.

Yancey Ward October 27, 2013 at 1:08 pm

I don't understand. You say the rules prevented the hiring of an integrator, but most of the contracts were no-bid outsourcing anyway. What prevented HHS from assigning an integrator through a no-bid contract, too, other than your claim that putting CMS in charge somehow shields the process from Congress (an argument that makes no sense on its face)?

Faceless Commenter October 27, 2013 at 9:19 pm

To your first question, Errorr said, "They needed to outsource the program management but there is no available task order for something like 'development lead' or program management that was pre-competed, so because of time restrictions they had to take it on themselves." Second (about CMS being harder to harass), I don't know.

MikeDC October 28, 2013 at 1:12 pm

"They needed to outsource the program management but there is no available task order for something like 'development lead' or program management that was pre-competed, so because of time restrictions they had to take it on themselves."

They managed to develop task orders for everything else.

But in more practical terms, having an outsourced program manager wouldn't have worked anyway, because any requirements the manager developed would still have to be taken back to, and approved by, the various bureaucratic stakeholders at the various agencies involved.

The conversation would have gone something like this:
Program Manager: “OK, CMS, I need you to do this. IRS, I need you to do that.”
CMS: Piss off!
IRS: Piss off!
PM: OK, just explain to me how your system works, and how we can pull data from it. You don’t actually have to do anything.
CMS: Explaining how our system works is something.
IRS: Yeah!
PM: Well, ok, we’ll pay you.
IRS: You can’t pay us to do that, your contract is through CMS.
CMS: OK, we’ll get you the specs on our system. We’re faxing them over now.
PM: There must be some mistake. Those faxes look like punch cards?
CMS: Why’s that a mistake?

Errorr October 29, 2013 at 2:49 pm

The list of available services is already strictly defined in an IDIQ. There are price lists and pre-defined tasks that limit what the contract can be used for. There almost certainly wasn't any way to request a program management function for the program.

Chris S October 27, 2013 at 5:11 pm

Bravo, thanks for the comment.

Kevin October 28, 2013 at 1:19 am

Ah yes. The IRS. Whose highly developed anti-fraud systems didn’t notice that they delivered 23,994 tax refunds worth a combined $46,378,040 to ONE address in Atlanta. Yeah, let’s put the guy who developed that system in charge. What could POSSIBLY go wrong?

Mike W October 27, 2013 at 12:55 pm

“[Donald] Berwick [head of Centers for Medicare and Medicaid Services from July 2010 through December 2011] said he does not remember any discussion, for instance, of a decision that is the focus of intense second-guessing in Congress: having [CMS] administrators oversee development of the online marketplace, rather than hiring an outside management company with stronger technical expertise to coordinate the complex project.

‘It certainly wasn’t me who made the decision. It must have been lower down in the organization. I don’t recall,’ Berwick said.

“The decision to keep it a CMS function makes sense,” he said, citing data security concerns as a reason not to outsource oversight to a company. “It’s a highly competent agency accustomed to managing data and data systems.”

http://www.bostonglobe.com/news/nation/2013/10/23/gubernatorial-candidate-don-berwick-distances-himself-from-health-insurance-rollout-mess/qpKyKuihGjARdj6TNtpFfK/story.html

Yancey Ward October 27, 2013 at 1:10 pm

Berwick's claim is simply gutless. Either he's afraid to take responsibility or he's afraid to assign the blame to a person who could take a shit on him; one or the other has to be correct.

Mike W October 27, 2013 at 2:29 pm

“data security concerns”…yet the federal government outsourced security clearance investigation to a private contractor that cleared Edward Snowden. Ya just can’t make this stuff up.

Faceless Commenter October 27, 2013 at 9:24 pm

“I don’t recall,” Berwick said.

Well, the guy’s got a memory like a sieve. Maybe a job with short-term memory tasks would suit him better. I say this at the risk of insulting waiters, traffic cops, and assembly line workers, which I don’t mean to do.

Styro October 28, 2013 at 8:51 am

Donald Berwick is currently bidding for a job with short-term memory tasks and lack of accountability, just like his last job. He’s running for Massachusetts Governor.

Errorr October 28, 2013 at 9:51 pm

According to the wife it was Henry Chao (not sure on spelling… Chow?) who made the decision. From asking her, the chief worry was that if they did this the "Right Way," like the OP suggests, it would take at least 1.5 years to award a contract (a lot of hoops), and they were terrified that at the end a contract with so many bidders would definitely be contested, adding almost another year, at which point they would have to turn around and start over again.

Still, they were arrogant about their own ability to implement such a system.

Yancey Ward October 27, 2013 at 1:01 pm

It wasn’t him, and it wasn’t someone higher up the food chain who made the decision- just some faceless nobody? F*&*ing hilarious.

Yancey Ward October 27, 2013 at 1:02 pm

In reply to the link from MikeW.

JonFraz October 27, 2013 at 2:22 pm

All this hand-wringing raises the question of how commercial apps of this sort accomplish mass data queries and submissions to diverse systems. This isn't anything new: Travelocity (to name just one website) has to communicate with far more vendor systems than healthcare.gov. Obviously healthcare.gov was not done well, but the task is not an impossible one, as the nervous nellies here are trying to suggest.

Ray Lopez October 27, 2013 at 3:44 pm

I'm hardly a database expert, though I could play one on TV and I have built several MS-SQL databases integrated with a front end. One way the Travelocity people do what you suggest is by caching data across distributed systems but NOT synchronizing it except relatively infrequently (once a day or week, not once a second). What this means in English is that several servers exist worldwide, but they don't talk to each other except, say, once a week. (You can see this in practice if you travel around the world and try to log into an account like Amazon.com, only to find you have NO purchase history while you are in Japan, though a week ago you did while in America.) So you can do a lot of stuff quickly, but the right hand may not know what the left hand is doing. Usually this is not a problem, since speed is more important than accuracy. But if, as the government apparently did, you try to be 100% accurate, you will run into real-time synchronization problems that will slow and possibly crash your system. Private companies make this tradeoff all the time and don't sync their data except relatively infrequently, unless it is something like a credit card purchase that must be coordinated in real time, and in that case they also use massively expensive infrastructure to speed transactions to a central depository. Sorry, that's the best I can do in limited space. Just remember: speed vs. accuracy vs. low cost, pick two of three. You can have speed and low cost, but it will not be a synchronized distributed system (think Amazon.com in my example above). Or you can have accuracy and speed, but it will cost you in expensive hardware.
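
To make that tradeoff concrete, here is a minimal Python sketch (all names invented) of a price lookup that serves reads from a local copy and only syncs with the source of record on a schedule, accepting that answers may be a day stale:

    import time

    REFRESH_INTERVAL = 24 * 60 * 60   # sync once a day, not on every request

    class CachedPriceStore:
        """Serve reads from a local copy; talk to the source of record infrequently."""

        def __init__(self, fetch_all_prices):
            self._fetch = fetch_all_prices   # slow call into the back-end system
            self._prices = {}
            self._last_sync = 0.0

        def get_price(self, plan_id):
            # Stale reads are acceptable: this trades accuracy for speed and availability.
            if time.time() - self._last_sync > REFRESH_INTERVAL:
                try:
                    self._prices = self._fetch()
                    self._last_sync = time.time()
                except Exception:
                    pass   # back end is down: keep serving the old copy
            return self._prices.get(plan_id)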

Ray Lopez October 27, 2013 at 4:11 pm

The database buzzwords are BASE vs ACID. Credit cards are ACID, while Travelocity, Amazon and other such sites are BASE. More info here: http://en.wikipedia.org/wiki/Eventual_consistency

There is another buzz phrase used, in the form of a theorem, but the name escapes me, however, the above link should get you started.

TMC October 27, 2013 at 5:13 pm

How often does my 2012 AGI change? Many of the other data elements could be cached as well.
If there are restrictions about where the data lies and the back ends are legacy, the cache could be built and housed at the IRS.

Rob Morgan October 29, 2013 at 8:53 am

Interesting. Cache all of the policy data in a central datastore, scale it up and presto.

What about the actual policy risk algorithms? You know, the really complex code that's unique to each provider, developed over years with their medical actuaries, and that often represents most of their IP? If it's old, it's probably so opaque by now that it's not even possible to document it for re-implementation. And no, I didn't just make that up – it's a real problem for many legacy system modernisation projects.

Square one guys, try again.

Michael Hussey October 27, 2013 at 2:29 pm

Until the back-end databases are properly organized (they likely need a centralized cache layer), they should just collect people's contact info. Let them create an account and fill in their basic info, then send them an email when a quote is ready, with a link that loads the quote page directly. They aren't equipped to deliver real-time quotes yet, and I don't believe, based on what I'm reading here, that they'll be able to deliver that any time soon. So they need to step back and take it step by step.

Some of my companies have experienced significant scale challenges (peekyou.com, ratemyprofessors.com) with real-time queries required across multiple databases. In the case of PeekYou, we're often querying 30 or more disparate datasets on every search made on the site (and there are over 400K searches made every day). The key is cache, cache, cache — which I can't believe the people responsible for this didn't know. It is sickening to see how much money they've wasted.
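
A bare-bones sketch of that "collect info now, email the quote later" flow might look like this in Python; the queue, mailer, and quote builder are all stand-ins, not anything from the actual site:

    import queue
    import threading

    quote_requests = queue.Queue()

    def submit_request(user_email, profile):
        """Front end only records the request; no real-time back-end calls."""
        quote_requests.put((user_email, profile))
        return "We'll email you when your quote is ready."

    def quote_worker(build_quote, send_email):
        """Background worker drains the queue on its own schedule."""
        while True:
            user_email, profile = quote_requests.get()
            try:
                quote = build_quote(profile)   # may hit slow legacy systems, retry, etc.
                send_email(user_email, f"Your quote is ready: {quote}")
            finally:
                quote_requests.task_done()

    # threading.Thread(target=quote_worker, args=(build_quote, send_email), daemon=True).start()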

Dan Hanson October 28, 2013 at 7:48 pm

That's exactly what I'd do at this point too, and I wouldn't be surprised if that's the kind of solution we see at the end of November. Abstracting away the quote system lets them handle quotes on their own time – manually if necessary. Then they can start putting back-end systems into the mix with subsequent releases. It's what they should have done from the beginning: a pilot program to validate the requirements, then a phased roll-out so they could mitigate risk if the system didn't scale.

But their political masters couldn’t allow that. At least not until they had their noses rubbed into reality by the first failure. Now maybe they’ll be more willing to listen to reason.

Max Tower October 27, 2013 at 2:50 pm

If the backend was the problem then how are some of the individual states up and running? Don’t they need the same backend? Can we just copy paste their working software?

Wil October 27, 2013 at 7:00 pm

Exactly the question that occurred to me.

Jeff October 27, 2013 at 7:39 pm

>>Can we just copy paste their working software?

Umm … no.

Besides, even if you could you’d still need Congress to authorize paying the states for that code.

Richard Gadsden October 28, 2013 at 11:06 am

The states have a much smaller problem, in that there’s only one ruleset and only one set of insurers per state.

Healthcare.gov would have worked a lot better if they’d just built 26 separate state sites, each one as an independent project, rather than building a single giant site.

Since the first question is "which state are you in," and the site has to redirect people in the 24 states that have state systems anyway, a very thin front end that just redirects to your state exchange (whether federally built or state built) would work out just fine.

Of course, that would be in breach of the ACA.

Errorr October 28, 2013 at 9:58 pm

A much smaller set of back-end systems, and ones that ARE integrated properly. Most of the work is the income verification.

radical blogger October 27, 2013 at 4:51 pm

as someone who works for the govt and who is well acquainted with fed govt IT practices, allow me to give you some insight:

The govt promotes people based primarily on race/ethnicity and people skills.

This is a recipe for disaster when it comes to software. I have a degree in computer science and have some insight into this area. Software design and architecture requires someone who is oriented towards knowledge acquisition and who is white, in general. Some asians do it well, yeah.

But the govt promotes people who are black or hispanic and who are people oriented, as opposed to being oriented towards book knowledge. That is the fundamental basis for this and many many other IT disasters in govt.

Rich Berger October 27, 2013 at 5:22 pm

Where, O where is mulp? I can’t wait to hear his take on this.

A Fake Mulp October 27, 2013 at 5:39 pm

This is what you get when you have Republicans cutting government to the bone- the government can’t find two sticks to rub together to start a fire.

Hillary October 27, 2013 at 5:41 pm

I’m a financial analyst who has been learning IT in self-defense since I ended up being the business liaison on a couple projects for my department.

From my limited knowledge, this looks like an integration problem. They appear to be using 834 EDI transactions. EDI is usually near real time, not actual real time, especially if you use outsourced VANs (value-added networks).

Rob Morgan October 29, 2013 at 9:04 am

Yes, subtle difference there, but a big difference too. Real-time = wait for result. Near real-time = use async mechanism, get timely response, but later.

And another fundamental aspect: everyone waiting on a slow NRT transaction generally consumes a live session resource, usually a worker thread on a server somewhere (quiet please in the NodeJS corner). Just a slight degradation in that wait time will push most high-volume web servers over the edge, and mid-tier SOA or similar gateways will behave the same way.
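
The arithmetic behind that is simple enough to show. A back-of-the-envelope Python sketch, with the pool size and wait times invented for illustration:

    # Hypothetical capacity math for a thread-per-request server.
    WORKER_THREADS = 200  # assumed size of the app server's worker pool

    def sustainable_request_rate(avg_backend_wait_seconds):
        # Each in-flight request holds a worker thread for the full back-end wait,
        # so throughput is pool size divided by how long each thread is tied up.
        return WORKER_THREADS / avg_backend_wait_seconds

    print(sustainable_request_rate(0.5))  # 400 requests/sec when the back end answers in 0.5s
    print(sustainable_request_rate(5.0))  # 40 requests/sec when it slows to 5s: a 10x collapse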

Miek October 27, 2013 at 6:23 pm

The issue I see here is that the author of this article has marginal experience with federal contracting. "The people who wrote the code for these systems are long gone… they are prone to transaction timeouts" … wrong, wrong and wrong. There are plenty of coders still around maintaining these systems, even the ones built on obscure technologies like MUMPS, and they are running on rather robust hardware in huge datacenters.
Second of all, the government should NEVER outsource integration – the systems integrator requires an authority to manage other contractors that only the government is capable of holding.

Rahul October 28, 2013 at 1:34 am

I agree with you. You can outsource the pieces, but the need for internal IT teams is exactly at times like these. You need people who are conversant with both your internal systems and the technology in order to write specs and RFPs that make sense.

Without a core of in-house talent, it’s extremely hard for an outsider to come in and do a good job of writing out the requirements.

hoo boy October 28, 2013 at 2:10 pm

Second of all, the government should NEVER outsource integration – the systems integrator requires an authority to manage other contractors that only the government is capable of holding.

Inherently governmental function. That power can be "finalized" by using a GTR (or whatever that agency calls it). The problem is that person still needs to have a skillset that would render the integration PM largely moot.

Dan Hanson October 28, 2013 at 7:33 pm

I've worked on legacy software, and anyone who has knows that there are usually areas of bad or very complex code that no one wants to touch. "Here be dragons" is a comment I've seen more than once, left by a previous developer who attempted to wade into the mess.

And you would be surprised how often you find bizarre little one-off applications and side projects, written by some employee long gone, that are absolutely critical to the process. "You can't replace that computer! Our special aggregator application depends on it! No, we don't have the source code for it – Bob, an old IT admin, wrote it before he retired in 1985."

Faceless Commenter October 27, 2013 at 9:30 pm

So Dan, at the end of all this, what’s your take on Jeff Zients’s promise of a smooth system for the vast majority of users by the end of November?

Dan Hanson October 28, 2013 at 7:43 pm

I hope he's right. I have my doubts. Since they didn't do a proper integration test, it seems to me that they should be spending more than a month just doing that after they are satisfied that the code is complete.

Again, I'm just speculating because I don't know the nature of their problems, but if I had to guess, I'd say that what they'll wind up doing is scaling back, and they'll release the new version with some of the more problematic parts disabled, turned into "temporary" manual processes, or something else. For example, rather than connecting to all the systems required to fully vet the policy and calculate the subsidies, maybe they'll just take the insurance request as-is, pass it off to the insurers, and make them figure it all out. I do know that a month is a crazy-short time for any development process. Whatever code they cook up during the month really should trigger a whole new cycle of integration testing. So even if it's doable, it's definitely risky.

The quote from him I’ve read sounds like they’ve gone into a programming ‘death march’ or ‘war room’ – which is a common way troubled projects finish. You pull all the developers together, force them to work insane hours, ply them with Coke and cheetos, micro-manage the heck out of them, and try to drive something to completion. That can work when done for a few days to get a project across the finish line, but if you carry it on too long you’ll start seeing the bug count climb and productivity will collapse. And trying to maintain a ‘war room’ atmosphere across many teams in different regions can be extremely hard.

Worst case scenario is that no time is spent really analyzing the problems or the fixes, and they break the system even more. But I guess we’ll see. I do think they’ll release ‘something’ – the political forces behind this have to be immense. Whether it satisfies anyone or even works remains to be seen.

Rob Morgan October 29, 2013 at 9:22 am

But the pessimistic interpretation is that they have a flawed and unscalable integration model. Doing the scale-back (simplifying or disabling some bits) would definitely help to mitigate this, as would a radical shift to a more async approach.

But I suspect that the current accountabilities won’t allow that remediation to be contemplated – if skilled system architects had the audience and authority to communicate this form of change, this mess could never have arisen in the first place.

More intensive code changes and/or testing will only work if the entire system is 99% ok, with just a few nasty residual defects. Doesn’t sound that way, and a raft of intensive late changes is likely to introduce even more chaos into the system – death march, as you’ve pointed out.

For clarity, death march projects are those that get stuck at 90% complete indefinitely, until finally someone kills them off, usually with complete catastrophe in terms of outcomes.

ThomasH October 27, 2013 at 10:47 pm

This is making supporters of a single payer look better and better.

Govco October 28, 2013 at 12:58 am

Or, the opposite.

Kevin October 28, 2013 at 1:11 am

We have that in the US. Do a google search for the phrase “don’t get sick after june” to learn more about the wonderful future awaiting you.

hoo boy October 28, 2013 at 2:07 pm

Failure isn't rare for government IT projects – it's the norm. Over 90% of them fail to deliver on time and on budget. But more frighteningly, over 40% of them fail absolutely and are never delivered. This is because the core requirements for a successful project – solid up-front analysis and requirements, tight control over requirements changes, and clear coordination of responsibility with accountability – are all things that government tends to be very poor at.

The mystery is why we keep letting them try.

(1) The animus towards the concept of Government – especially Government employees – is interminable. "We" aren't letting anybody try. "We" have tied their hands behind their backs.
(2) Full-time professionals who want to work for the Government are needed. They need to keep their skills current and be paid enough to leave the private sector. "We" aren't willing to pay for it, thus "we" have the IT management "we" deserve.
(3) 1 prohibits 2. This is why "we" are where "we" are.
(4) The reason there was no integration lead is probably that there was no $$$ – or willpower on behalf of the administration – to fund the competition. Contractors write and recommend reqs. They do the heavy lifting of the tech advising.

Jay October 28, 2013 at 5:33 pm

You don't think the administration had the $$$ or willpower to spend on this? That would be a first. Please cite your evidence for #2 also; there were many articles written about a study last year (or in 2011, I can't remember) showing that equivalent professional positions within the government paid more (with benefits) than similar private-sector positions, a lot more.

Dan Hanson October 28, 2013 at 7:24 pm

I’m not criticizing the developers, or the companies involved. And there are lots of fine engineers working for the government. This isn’t a people problem – it’s a management problem. Government bureaucracies are uniquely unsuited to managing these sorts of projects. Politics always trumps everything.

I’m willing to bet that various developers were sounding the alarm. And I’ll bet the response they got was, “No, we can’t extend the launch date. And no, we can’t cut any features. Therefore, just get it done! Failure is not an option.”

But failure is always an option. There are four options: You can extend the time needed for the project, you can increase the budget, you can cut features, or you can fail. If you don’t do one of the first three, the fourth is a necessary result.

Errorr October 29, 2013 at 3:04 pm

Partially correct. They KNEW they needed a single contractor to run the thing, with functions sub-contracted out to various players. They asked the AQ people how long it would take. The answer was 1.5 years to award, plus however long to get through the protest process that would almost certainly have to be completed for a big contract with so much interest. That forced their hand, and they did it ad hoc using pre-competed contract vehicles.

Da Moose October 28, 2013 at 3:31 pm

I’ve worked as an IT contractor for over a decade in DC. This article is spot on. I’ve remained a contractor because I don’t qualify for a government IT position though I have 2 masters in IT and six IT related certifications. The reason I don’t qualify is because I am not a minority, I am not a vet and I am not disabled. When you hire people for complex IT positions based upon their race or their ability to fire a gun, you inculcate a sense of entitlement and unfounded arrogance that only leads to one thing: pervasive institutional incompetence.

When rank-and-file government positions become handout jobs for special interests, social and civil services inevitably collapse. The GOP and the Dems are both to blame. Maybe some day Americans will start to vote to break the back of this corrupt political system.

Dan Hanson October 28, 2013 at 7:19 pm

Wow, I didn’t expect my comment to make it to the main page and get this kind of response! I’ll go back through the thread and respond where appropriate.

One thing I do want to say up-front is that like everyone else, I am speculating on what’s going on based on published news reports and my own experience as a software engineer. I have no personal knowledge of the specific issues they are facing today. Given the almost total lack of transparency around this project, the only people who can claim knowledge of the actual back-end code are those working on it.

What I can say is that I’ve seen a lot of projects like this get delayed, sometimes for a very long time. I’ve seen six-month projects turn into five year projects. I’ve also seen them fail. The problems that kill projects aren’t generally bugs in front-end code, even if it’s horribly written. Bad code can be fixed, and client-side bugs are generally easy to find and repair. The most serious problems are the ones that involve the overall complexity of the project simply getting out of hand, and that’s often because of back-end issues. Transactional problems between multiple databases, data format incompatibilities, architectures that can’t scale, that sort of thing.

When you take a number of already complicated systems and you start connecting them together, you need to control that very tightly, and you have to have a solid understanding of how those systems perform. There must be a well-defined interface between them, including handling transactions, failures and delays. Those need to be tested thoroughly as early as possible to make sure the legacy systems behave correctly. If you build an architecture from faulty assumptions, it’s very hard to fix the problem late in development.
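
As a minimal illustration of what a well-defined interface "including handling transactions, failures and delays" means in practice, here's a rough Python sketch of wrapping a legacy call with an explicit timeout and retry policy. The numbers, names, and exception type are placeholders, not anything from the actual system:

    import time

    class LegacyTimeout(Exception):
        """Placeholder for whatever 'the old system took too long' looks like."""

    def call_with_timeout_and_retry(legacy_call, request, timeout_s=3.0, retries=2):
        """Give every cross-system call an explicit timeout, retry policy, and failure mode."""
        last_error = None
        for attempt in range(retries + 1):
            try:
                return legacy_call(request, timeout=timeout_s)
            except LegacyTimeout as err:
                last_error = err
                time.sleep(min(2 ** attempt, 10))  # back off before trying again
        # Fail in a defined way instead of silently hanging the whole transaction.
        raise RuntimeError(f"legacy system unavailable after {retries + 1} attempts") from last_error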

The complexity of these projects grows exponentially as more data sources and transactions are added to the mix. The complexity is compounded if the project grows so large that you need multiple teams working in coordinated fashion. Lack of communication, rivalries, and inconsistent assumptions between teams can sink a project all on their own. At some point, the complexity becomes so great that the chance of success is small right from the start.

If you look at the history of failed IT projects, whether in the government or the private sector, you often see network effects and interactions between multiple legacy systems at the root of it. By ‘network’ I don’t mean a physical network problem, but the problem of coordinating activity between many nodes of a complex web of interactions.

It’s been said that the only complex software projects that succeed are the ones that start out simple and evolve their complexity through iteration over time and multiple release cycles. The industry switch to agile programming is a reflection of this – we’ve learned that it’s best to build only the minimum that needs to be built today, and then iterate through short cycles to build up the complexity of the system while constantly testing and getting continual feedback from end users. We’ve gotten to this point because of the many failures of the old ‘waterfall’ development methodology where really smart people ™ try to understand everything there is to know up front and build a giant design that can do everything. These architectures usually don’t survive very long, and success or failure depends on how quickly you can adapt them to the real world or abandon them once you realize they don’t work.

The internet is the very embodiment of this. The reason we've managed to connect so many computers together, and why companies like Travelocity or Amazon can successfully interface with so many other systems, is that we do it iteratively. Trial and error and incremental updates are how the internet generally functions. You learn from your mistakes, and you try again. Short iteration cycles keep the costs of the mistakes low. When two systems are hard to interface, you find other ways. The scope of each change is small, but over time complexity emerges.

This project was very different, even if agile methods were used for parts of it. This project had requirements pushed on it by people who don't understand the needs of IT. It was grand in scope, connecting many government and private computer systems and data sources. Some of the requirements were withheld from the development team until late in the process. The development team was given no flexibility in which systems had to be connected together or which features had to be provided in the 1.0 release. There was no option for a simple system first – it all had to work right out of the gate. The fact that there was very little or no formal integration testing guaranteed a launch failure, because integration testing of a project of this size always finds serious issues. That process should have been months, not days or a couple of weeks. And there should have been flexibility for the schedule to be pushed back or features cut if the integration tests uncovered more severe problems than the several-month testing cycle could manage. None of that happened.

The true complexity of these systems is almost never visible from 10,000 ft – the view of the architects. It’s when you actually try to implement the vision that you discover timing errors, missing data, fields that are not compatible, assumptions that are wrong, bugs in legacy systems that prevent them from doing what you want, and critical system or user requirements that no one understood at the start.

As a trivial example, consider a simple transaction. You want to transfer cash from one account to another. The system tells one computer to remove the money from one account, then it tells another computer to credit the other account with the money. If the credit request fails, the whole state of the transaction has to be rolled back or else your money vanishes into the ether. Now what happens when you discover that the first system doesn’t handle transactions well, and so when you told it to deduct your money it kicked off a number of other processes that can’t be unwound? Perhaps it sent a notification of the withdrawal to an external entity, or when it tried to unwind the debit a bug prevented the operation. So when you try to roll back the transaction, some of it unwinds and some doesn’t.
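
In rough Python, with every system call a stand-in rather than anything from the real exchange, that pattern and its failure mode look something like this:

    def log_for_manual_reconciliation(account, amount, debit_ref):
        # Stand-in for paging a human or writing to a reconciliation queue.
        print(f"RECONCILE: account={account} amount={amount} ref={debit_ref}")

    def transfer(debit_system, credit_system, account_from, account_to, amount):
        """Debit one system, credit another, and compensate if the second step fails."""
        debit_ref = debit_system.debit(account_from, amount)       # step 1
        try:
            credit_system.credit(account_to, amount)                # step 2
        except Exception:
            try:
                debit_system.reverse(debit_ref)                      # compensating action
            except Exception:
                # The nightmare case: the unwind itself fails, and pieces of the
                # transaction are left lying around for someone to clean up later.
                log_for_manual_reconciliation(account_from, amount, debit_ref)
            raise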

These are the kinds of problems that can give you fits and can be very hard to solve. When you have to maintain transactions across many different systems, it can get hairy. And this is just one of the many very hard to solve problems that can occur in the back end. Bugs in these areas can be a devil to find – especially if they only happen once in a while under certain conditions. If the system screws up, it can leave pieces of bad data lying around, caches can fill up, databases get out of sync, holes are left in the transmitted data, etc. In very complex systems, you can get to the point where fixing one of these issues creates two more, or the attempt to fix a bug leads you down a rat hole of legacy code or chains of dependency that seemingly never end. And eventually, you’re forced to just pull the plug on it all.

HoratiusZappa October 28, 2013 at 10:14 pm

Transaction management doesn’t strike me as a problem this system should be having.

Users need to be able to create an account and enter enough profile information to correctly map their identities in various legacy systems. The requests to legacy systems should be read-only: gather the data, crunch the numbers, present the options. Leave aside the selection and enrollment in a plan; up to that point, it should have been easy-peasy. If a data-gathering expedition fails, who cares – try again, there should be nothing that needs to be rolled back.

If users can’t simply create accounts and enter profile information, then the legacy systems are not the problem (yet).

No sensitive information should be cached anywhere.

What I’m curious to know is whether project scope included the provision of an updated “API” (facade, adapter) for each of the legacy systems. Tying the “new” directly to the “old” would not be the best approach.

Dan Hanson October 28, 2013 at 11:21 pm

My understanding is that the system has to do a bunch of notifications as well. When the enrollment happens various systems have to be notified so that the government knows who has healthcare and who doesn’t so they can levy fines. And I’ll bet a lot of departments demand notification even if they don’t strictly need it.

Actually, I just went and looked at the top level architecture diagram that’s been posted online, and it looks like the system has to write to the HHS computers for ‘reporting’, to the state exchange for that person’s state, to the IRS computers so they can track tax credit status, to “Employer Services” for billing or invoicing and ‘underwriting’, to the state insurance agency, and to the health plan carrier that’s providing the insurance. It also looks like the enrollment system has to communicate with the state’s MMIS (Medicaid Management Information System) for whatever reason, and the MMIS system then talks back to the web hub at a different level.
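
If I were sketching how that fan-out might be decoupled (purely hypothetically; the target names below just echo that diagram), it would be something like: record the enrollment once, then queue each downstream notification so one slow system can't block the rest.

    # Hypothetical fan-out; nothing here is taken from the real hub.
    DOWNSTREAM_TARGETS = ["hhs_reporting", "state_exchange", "irs_tax_credits",
                          "employer_services", "state_insurance_agency",
                          "health_plan_carrier", "state_mmis"]

    def enroll(enrollment, store, notify_queue):
        store.save(enrollment)                          # the one step that must succeed now
        for target in DOWNSTREAM_TARGETS:
            notify_queue.put((target, enrollment))      # delivered and retried asynchronously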

But the problem may not be transactions – I just picked that as an example that non-programmers might understand. There could be threading issues, race conditions, data conversion problems, inconsistent fields between databases, legacy errors in data that surface when trying to use that data to write to other systems, data corruption, network failures… Who knows what the specific problems are in this case? The point is that when you build systems like this, all of these kinds of problems have to be considered and dealt with.

One other thing – since the system apparently has never run successfully end-to-end with any kind of reliability, has there been any security or penetration testing at all? Is it HIPAA compliant for health record privacy? This thing connects to a lot of government databases – it’s an identity thief’s dream system. What do you think the odds are that it’s secure, given what we’ve heard about the quality of the client code and the issues in the back end? I wonder what an injection of malicious code in the right place could do to the whole thing?

I agree that there would logically be an API, but that's easier said than done if you've got to code it into many systems. I just did a quick search on these MMIS systems, for example, and found one state that's using a 1980s-era OS/390 mainframe, while another is using a system based on SQL Server. I wonder how many different technologies are out there? Who's writing API connectors for all of them? Who has that expertise? Was each state responsible for writing its own? If so, how many of them are buggy, since they were apparently never tested as part of a full-scale integration test?

My guess is that the interfaces between them are done with documented data interchange formats and not a coded API. But I could be wrong about that.

If you’d like to read about just the MMIS part of this whole thing, you can start with this: http://www.cms.gov/Research-Statistics-Data-and-Systems/Computer-Data-and-Systems/MedicaidDataSourcesGenInfo/MSIS-Mart-Home.html

If that hasn’t bored you to tears, Here’s the file specification and data dictionary spec for the MMIS systems – it’s only 167 pages long… http://medicaid.gov/Medicaid-CHIP-Program-Information/By-Topics/Data-and-Systems/MSIS/Downloads/msis-data-dictionary.pdf

Fun times.

Rob Morgan October 29, 2013 at 9:51 am

Dan,
That's a good and comprehensive explanation, although I'd argue that any critical system not capable of accepting compensating transactions (for distributed rollback) is just a no-go area for integration.

The key to these large multi-system integration projects is usually to have robust expertise available from each impacted system (integrated system) to provide not just interface definitions but also behavioural aspects that aren’t always as well documented as they should be. And integration planning and design really needs to be done by trusted and skilled technical staff, not by project managers.

This all goes to custard if the coordination or control of this latter aspect becomes a program-to-program (as in project, not system) coordination process. At that level, the politics of program managers all determined not to be the one who raises the red flag dominate the process. Actual integration architecture and planning effort becomes a time-boxed risk to be suppressed, and declaring milestones achieved becomes the main game.

That produces outcomes like insufficient testing, because the schedule is the schedule, and the deadline is fixed. Sounding familiar?

The real underlying issue is that the folks who can fix it won’t have a voice, at least not before the incumbent PMs calling the shots have been dismissed, or stood down until a workable technical solution is defined.

Dan Hanson October 29, 2013 at 10:30 am

Completely agree with all that.

wrick October 29, 2013 at 11:37 am

“The real problems are with the back end of the software. When you try to get a quote for health insurance, the system has to connect to computers at the IRS, the VA, Medicaid/CHIP, various state agencies, Treasury, and HHS. They also have to connect to all the health plan carriers to get pre-subsidy pricing”

The best approach would be to get the latest pricing from all those legacy systems, as stated — but if those systems are really that unreliable, a daily (or whatever) upload of the pricing and other details from the legacy systems to the main system would be a good alternative. It's not like a stock ticker that needs pricing up to the second. This approach would mitigate the problems with the legacy systems being unavailable or not consistently stable.
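
As a trivial sketch of that kind of daily upload (the file name, format, and database here are all invented), something like this batch job would do:

    import csv
    import sqlite3

    def nightly_price_import(csv_path="carrier_prices.csv", db_path="quotes.db"):
        """Load the carriers' daily price file into the quoting database in one pass."""
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS prices (plan_id TEXT PRIMARY KEY, premium REAL)")
        with open(csv_path, newline="") as f:
            rows = [(r["plan_id"], float(r["premium"])) for r in csv.DictReader(f)]
        with conn:  # one transaction: either the whole day's file loads or none of it does
            conn.executemany("INSERT OR REPLACE INTO prices VALUES (?, ?)", rows)
        conn.close()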

Dan Hanson October 29, 2013 at 12:59 pm

I’m not sure that’s feasible given the requirements they have. How complex is the formula for determining your subsidy? Is it just income alone? Or are there numerous other ways to affect the subsidy amount?

If they wanted to just give raw price data for the various insurance packages, that's easy: have each insurance provider supply a table of prices and build static files consumers can search. But the administration is very worried that the prices are looking pretty high without subsidies, and they're trying to avoid sticker shock by only showing your price after the subsidy has been calculated. Hence the connections to all these other systems for the data needed to calculate your price.
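
To illustrate the difference: raw prices are a static lookup, while the subsidized price needs verified data about you. The sketch below uses a deliberately made-up subsidy formula and made-up prices; the real calculation involves the second-lowest-cost silver plan and income-based caps and is considerably more involved.

    # Raw price: a static table lookup, easy to serve from flat files.
    # (Plan tiers, ages, and premiums below are invented.)
    PRICES = {("silver", 40): 320.00, ("bronze", 40): 250.00}  # (tier, age) -> monthly premium

    def raw_price(tier, age):
        return PRICES[(tier, age)]

    # Subsidized price: needs verified household income, which is where the
    # cross-system lookups come in. This formula is a placeholder, not the ACA's.
    def subsidized_price(tier, age, annual_income, expected_contribution_rate=0.08):
        premium = raw_price(tier, age)
        monthly_cap = (annual_income * expected_contribution_rate) / 12
        return min(premium, monthly_cap)

    print(raw_price("silver", 40))                # 320.0, no personal data needed
    print(subsidized_price("silver", 40, 30000))  # 200.0, needs income verification first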

And giving you the price is only half the problem, and probably the easy half. When you actually sign up, a number of systems need to be notified.

wrick October 29, 2013 at 2:50 pm

I agree it depends on the complexity of the requirements. I suppose it could be rocket science, but my guess is that it's a basic set of policies for each state/area with a few variables like income and age. Sex and pre-existing conditions are out by law. I don't know, so I could certainly be wrong, but my guess is that the premium calculations are not so dark and mysterious that such an approach wouldn't be helpful.

Either way, it's comically inept to use "the legacy systems are spotty" as an excuse. If they are really that bad, it should have been known well before deployment, and a strategy to mitigate the problem should have been in place long ago. The "legacy systems are no good" excuse is just more demonstrated incompetence.

Errorr October 29, 2013 at 3:06 pm

They took out the window-shopping raw-price-data aspect at the last minute for some unknown reason…

Matthew October 29, 2013 at 7:33 pm

What purpose does it serve to have real-time integration with batch systems?

Matthew October 29, 2013 at 7:36 pm

It’s also worth pointing out that the regulatory specification for how to count to 50 employees is 200 pages long. Try writing efficient code to meet that kind of specs.
