Toward a universal medical test?

UC San Francisco scientists have developed a single clinical laboratory test capable of zeroing in on the microbial miscreant afflicting a patient in as little as six hours – irrespective of what body fluid is sampled, the type or species of infectious agent, or whether physicians start out with any clue as to what the culprit may be.

The test will be a lifesaver, speeding appropriate drug treatment for the seriously ill, and should transform the way infectious diseases are diagnosed, said the authors of the study, published Nov. 9 in Nature Medicine.

“The advance here is that we can detect any infection from any body fluid, without special handling or processing for each distinct body fluid,” said study corresponding author Charles Chiu, MD, PhD, a professor in the UCSF Department of Laboratory Medicine and director of the UCSF-Abbott Viral Diagnostics and Discovery Center.

Here is the full story, via David Lim.

The greatest gaming performance ever?

Or is chess a sport?

First Magnus Carlsen “privatizes” chess competition, naming the major tournament after himself, setting all of the rules, and becoming the residual claimant on the income stream.

He reshapes the entire format into a seven-set, four-month-long series of shorter tournaments, consisting of multiple games per day, 15 minutes per player per game, with increment.  It seems most chess fans find this new format far more exciting and watchable than the last two world championship matches, which have featured 22 slow draws and only two decisive games (with the title decided by rapid tiebreakers in each case — why not just head to the rapids?).

Magnus won all but one of the “sets” or mini-tournaments, along the way regularly dispatching the game’s top players at an astonishing pace, often tossing them aside like mere rag dolls.  Even the #2 and #3 rated players — Caruana and Ding Liren — stood little chance against his onslaught.  Carlsen kept on winning these mini-tournaments against fields of ten players, typically all at a world class level.

A Final Four then led to a 38-game, seven-day showdown between Carlsen and Hikaru Nakamura, not decided until the very last set of moves yesterday.  Note that at the more rapid pace Nakamura may well be a better player than Carlsen and is perhaps the only real challenge to him (at slower classical speeds Nakamura would be in the top twenty but is not at the very top of the rankings).

Nonetheless Carlsen prevailed.  Nakamura had the upper hand in terms of initiative, but in the final five-minute tie-breaking round, Carlsen needed to pull out 1.5 of the last 2 points, which indeed he did.  He drew by constructing an impregnable fortress against Nakamura’s Queen, and in the final “sudden death Armageddon” round a draw is equivalent to a victory for Black.

Along the way, at the same time, Magnus participated in Fantasy Football, competing against millions, at times holding the #1 slot and finishing #11 in what is a very competitive and demanding endeavor.

The new quicker, cheaper, supply chain robust saliva test

The FDA has just approved a new and important Covid-19 test:

“Wide-spread testing is critical for our control efforts. We simplified the test so that it only costs a couple of dollars for reagents, and we expect that labs will only charge about $10 per sample. If cheap alternatives like SalivaDirect can be implemented across the country, we may finally get a handle on this pandemic, even before a vaccine,” said Grubaugh.

One of the team’s goals was to eliminate the expensive saliva collection tubes that other companies use to preserve the virus for detection. In a separate study led by Wyllie and the team at the Yale School of Public Health, and recently published on medRxiv, they found that SARS-CoV-2 is stable in saliva for prolonged periods at warm temperatures, and that preservatives or specialized tubes are not necessary for collection of saliva.

Of course this part warmed my heart (doubly):

The related research was funded by the NBA, National Basketball Players Association, and a Fast Grant from the Emergent Ventures at the Mercatus Center, George Mason University.

The NBA had the wisdom to use its unique “bubble” to run multiple tests on players at once, to see how reliable the less-known tests would be.  This WSJ article — “Experts say it could be key to increasing the nation’s testing capacity” — has the entire NBA back story.  At an estimated $10 a pop, this could especially be a game-changer for poorer nations.  Furthermore, it has the potential to make pooled testing much easier as well.

Here is an excerpt from the research pre-print:

The critical component of our approach is to use saliva instead of respiratory swabs, which enables non-invasive frequent sampling and reduces the need for trained healthcare professionals during collection. Furthermore, we simplified our diagnostic test by (1) not requiring nucleic acid preservatives at sample collection, (2) replacing nucleic acid extraction with a simple proteinase K and heat treatment step, and (3) testing specimens with a dualplex quantitative reverse transcription PCR (RT-qPCR) assay. We validated SalivaDirect with reagents and instruments from multiple vendors to minimize the risk for supply chain issues. Regardless of our tested combination of reagents and instruments from different vendors, we found that SalivaDirect is highly sensitive with a limit of detection of 6-12 SARS-CoV-2 copies/μL.

No need to worry and fuss about RNA extraction now.  Here is the best simple explanation of the whole thing.

The researchers are not seeking to commercialize their advance, rather they are making it available for the general benefit of mankind.  Here is Nathan Grubaugh on Twitter.  Here is Anne Wyllie, also a Kiwi and a Kevin Garnett fan.  A further implication of course is that the NBA bubble is not “just sports,” but also has boosted innovation by enabling data collection.

All good news of course, and Fast at that.  And this:

“This could be one the first major game changers in fighting the pandemic,” tweeted Andy Slavitt, a former acting administrator of the Centers for Medicare and Medicaid Services in the Obama administration, who expects testing capacity to be expanded significantly. “Rarely am I this enthusiastic… They are turning testing from a bespoke suit to a low-cost commodity.”

And here is coverage from Zach Lowe.  I am very pleased with the course of Fast Grants more generally, and you will be hearing more about it in the future.

Pooled Testing is Super-Beneficial

Tyler and I have been pushing pooled testing for months. The primary benefit of pooled testing is obvious. If 1% are infected and we test 100 people individually we need 100 tests. If we split the group into five pools of twenty then if we’re lucky, we only need five tests. Of course, chances are that there will be some positives in at least one group and taking this into account we will require 23.2 tests on average (5 + (1 – (1 – .01)^20)*20*5). Thus, pooled testing reduces the number of needed tests by a factor of 4. Or to put it the other way, under these assumptions, pooled testing increases our effective test capacity by a factor of 4. That’s a big gain and well understood.
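The arithmetic above can be checked with a short sketch (a minimal Python illustration of the standard two-stage pooling scheme; the function name is mine):

```python
def expected_tests(n, pool_size, prevalence):
    """Expected tests to screen n people: one test per pool, plus
    individual retests of every member of each positive pool."""
    n_pools = n // pool_size
    # probability that a pool of this size contains at least one positive
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return n_pools + n_pools * p_pool_positive * pool_size

# 100 people, pools of 20, 1% prevalence: ~23.2 expected tests vs. 100
print(round(expected_tests(100, 20, 0.01), 1))  # prints 23.2
```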

An important new paper from Augenblick, Kolstad, Obermeyer and Wang shows that the benefits of pooled testing go well beyond this primary benefit. Pooled testing works best when the prevalence rate is low. If 10% are infected, for example, then it’s quite likely that all five pools will have at least one positive test and thus you will still need nearly 100 tests (92.8 expected). But the reverse is also true. The lower the prevalence rate the fewer tests are needed. But this means that pooled testing is highly complementary to frequent testing. If you test frequently then the prevalence rate must be low because the people who tested negative yesterday are very likely to test negative today. Thus, from the logic given above, the expected number of tests falls as you test more frequently (per test-cohort).

Suppose instead that people are tested ten times as frequently. Testing individually at this frequency requires ten times the number of tests, for 1000 total tests. It is therefore natural to think that group testing also requires ten times the number of tests, for more than 200 total tests. However, this estimation ignores the fact that testing ten times as frequently reduces the probability of infection at the point of each test (conditional on not being positive at previous test) from 1% to only around .1%. This drop in prevalence reduces the number of expected tests – given groups of 20 – to 6.9 at each of the ten testing points, such that the total number is only 69. That is, testing people 10 times as frequently only requires slightly more than three times the number of tests. Or, put in a different way, there is a “quantity discount” of around 65% by increasing frequency.
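Plugging the post’s numbers into the same two-stage pooling formula shows the quantity discount (a toy sketch; the 0.1% per-test prevalence is the figure from the text):

```python
def expected_tests(n, pool_size, prevalence):
    """Expected tests for two-stage pooling: one test per pool,
    plus individual retests of each positive pool."""
    n_pools = n // pool_size
    return n_pools * (1 + (1 - (1 - prevalence) ** pool_size) * pool_size)

once = expected_tests(100, 20, 0.01)            # one round at 1% prevalence
frequent = 10 * expected_tests(100, 20, 0.001)  # ten rounds at ~0.1% each
print(round(once, 1), round(frequent, 1))       # roughly 23.2 and 69.8
# ten times the testing frequency costs ~3x the tests, not 10x
```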

Peter Frazier, Yujia Zhang and Massey Cashore also point out that you could do an array protocol in which each person is tested twice, in two different groups–this doubles the number of initial tests but limits the number of false positives (both tests must be positive) and the number of needed retests. (See figure.)
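A minimal sketch of that array idea (my own toy implementation, not the authors’ code): place people on a grid, pool each row and each column, and retest only the row/column intersections where both pools come back positive.

```python
from itertools import product

def array_screen(grid):
    """grid[r][c] is True if that person is infected.
    Returns (pool tests used, cells needing individual retests)."""
    rows, cols = len(grid), len(grid[0])
    row_pos = [any(grid[r][c] for c in range(cols)) for r in range(rows)]
    col_pos = [any(grid[r][c] for r in range(rows)) for c in range(cols)]
    # only people in both a positive row pool and a positive column pool
    retests = [(r, c) for r, c in product(range(rows), range(cols))
               if row_pos[r] and col_pos[c]]
    return rows + cols, retests

# 25 people in a 5x5 array, one infected:
grid = [[False] * 5 for _ in range(5)]
grid[2][3] = True
print(array_screen(grid))  # (10, [(2, 3)]) -- 11 tests total instead of 25
```

With two positives in different rows and columns, four intersections get flagged, so the retest count grows with the square of the positives, but it stays small at low prevalence.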

Moreover, we haven’t yet taken into account the point of testing which is to reduce the prevalence rate. If we test frequently we can reduce the prevalence rate by quickly isolating the infected population and by reducing the prevalence rate we reduce the number of needed tests. Indeed, under some parameters it’s possible to increase the frequency of testing and at the same time reduce the total number of tests!

We can do better yet if we group individuals whose risks are likely to be correlated. Consider an office building with five floors and 100 employees, 20 per floor. If the prevalence rate is 1% and we test people at random then we will need 23.2 tests on average, as before. But suppose that the virus is more likely to transmit to people who work on the same floor and now suppose that we pool each floor. Holding the total prevalence rate constant, we are now likely to have a zero prevalence rate on four floors and a 5% prevalence rate on one floor. We don’t know which floor but it doesn’t matter–the expected number of tests required now falls to 17.8.
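The floor example can be checked with a short sketch (a toy calculation, assuming the simple two-stage pooling scheme):

```python
def expected_tests_for_pools(pool_size, prevalences):
    """Expected tests when each pool of `pool_size` has its own prevalence:
    one test per pool, plus individual retests of each positive pool."""
    return sum(1 + (1 - (1 - p) ** pool_size) * pool_size for p in prevalences)

# random pooling: every one of the five pools sees the overall 1% prevalence
print(round(expected_tests_for_pools(20, [0.01] * 5), 1))          # 23.2
# pooling by floor: four clean floors, one floor at 5% prevalence
print(round(expected_tests_for_pools(20, [0, 0, 0, 0, 0.05]), 1))  # 17.8
```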

The authors suggest using machine learning techniques to uncover correlations which is a good idea but much can be done simply by pooling families, co-workers, and so forth.

The government has failed miserably at controlling the pandemic. Tens of thousands of people have died who would have lived under a more competent government. The FDA only recently said they might allow pooled testing, if people ask nicely. Unbelievably, after telling us we don’t need masks (supposedly a noble lie to help limit shortages), the CDC is still disparaging testing of asymptomatic people (another noble lie?) which is absolutely disastrous. Paul Romer is correct, testing capacity won’t increase until we put soft drink money behind advance market commitments and start using techniques such as pooled testing. Fortunately or sadly, depending on how you look at it, it’s not too late to do better. Some universities are now proposing rapid, frequent testing using pooling. Harvard will test every three days. Cornell will test frequently. Delaware State will test weekly. Let’s hope the idea spreads from the ivory tower.

FDA Allows Pooled Tests and a Call for Prizes

The FDA has announced they will no longer forbid pooled testing:

In order to preserve testing resources, many developers are interested in performing their testing using a technique of “pooling” samples. This technique allows a lab to mix several samples together in a “batch” or pooled sample and then test the pooled sample with a diagnostic test. For example, four samples may be tested together, using only the resources needed for a single test. If the pooled sample is negative, it can be deduced that all patients were negative. If the pooled sample comes back positive, then each sample needs to be tested individually to find out which was positive.

…Today, the FDA is taking another step forward by updating templates for test developers that outline the validation expectations for these testing options to help facilitate the preparation, submission, and authorization under an Emergency Use Authorization (EUA).

This is good and will increase the effective number of tests by at least a factor of 2-3 and perhaps more.

In other news, Representative Beyer (D-VA), Representative Gonzalez (R-OH) and Paul Romer have an op-ed calling for more prizes for testing:

Offering a federal prize solves a critical part of that problem: laboratories lack the incentive and the funds for research and development of a rapid diagnostic test that will, in the best-case scenario, be rendered virtually unnecessary in a year.

…We believe in the ability of the American scientific community and economy to respond to the challenge presented by the coronavirus. Congress just has to give them the incentive.

The National Institutes of Health (NIH) have already begun a similar strategy with their $1.4 billion “shark tank,” awarding speedy regulatory approval to five companies that can produce these tests. Expanding the concept to academic labs through a National Institute of Standards and Technology (NIST)-sponsored competition has the added benefit of ultimately funding more groundbreaking research once the prize money has been awarded.

This is all good but frustrating. I made the case for prizes in Grand Innovation Prizes for Pandemics in March and Tyler and I have been pushing for pooled testing since late March. We were by no means the first to promote these ideas. I am grateful things are happening and relative to normal procedure I know this is fast but in pandemic time it is molasses slow.

Vaccine Testing May Fail Without Human Challenge Trials

In Why Human Challenge Trials Will Be Necessary to Get a Coronavirus Vaccine I asked, “What if we develop a vaccine for COVID-19 but can’t find enough patients–healthy yet who might get sick–to run a randomized clinical trial?” Exactly that problem is now facing the Oxford vaccine in Britain.

An Oxford University vaccine trial has only a 50 per cent chance of success because coronavirus is fading so rapidly in Britain, a project co-leader has warned.

…Hill said that of 10,000 people recruited to test the vaccine in the coming weeks — some of whom will be given a placebo — he expected fewer than 50 people to catch the virus. If fewer than 20 test positive, then the results might be useless, he warned.

As I wrote, “A low infection rate is great, unless you want to properly test a vaccine.” Challenge trials have issues of external validity and they take time to setup properly but they produce results quickly and they can be especially useful in whittling down vaccine candidates to focus on the best candidates.
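A back-of-the-envelope sketch of Hill’s warning (the attack rate here is an illustrative assumption chosen to match his “fewer than 50” figure, not trial data):

```python
import math

participants = 10_000
attack_rate = 0.004  # assumed per-participant infection risk over follow-up

expected_cases = participants * attack_rate
print(expected_cases)  # 40.0 -- consistent with "fewer than 50"

# Probability the trial sees fewer than 20 cases (exact binomial tail):
p_under_20 = sum(math.comb(participants, k)
                 * attack_rate ** k * (1 - attack_rate) ** (participants - k)
                 for k in range(20))
print(f"P(fewer than 20 cases) = {p_under_20:.4f}")
```

Halve the assumed attack rate and falling below 20 cases becomes likely, which is the sense in which low community prevalence starves a vaccine trial of endpoints.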

1DaySooner now has over 25 thousand volunteers from over 100 countries.

Rapid progress from Fast Grants

I was pleased to read this NYT reporting:

Yet another team has been trying to find drugs that work against coronavirus — and also to learn why they work.

The team, led by Nevan Krogan at the University of California, San Francisco, has focused on how the new coronavirus takes over our cells at the molecular level.

The researchers determined that the virus manipulates our cells by locking onto at least 332 of our own proteins. By manipulating those proteins, the virus gets our cells to make new viruses.

Dr. Krogan’s team found 69 drugs that target the same proteins in our cells the virus does. They published the list in a preprint last month, suggesting that some might prove effective against Covid-19…

It turned out that most of the 69 candidates did fail. But both in Paris and New York [where the drugs were shipped for testing], the researchers found that nine drugs drove the virus down.

“The things we’re finding are 10 to a hundred times more potent than remdesivir,” Dr. Krogan said. He and his colleagues published their findings Thursday in the journal Nature.

The Krogan team was an early recipient of Fast Grants, and you will find more detail about their work at the above NYT link.  Fast Grants is also supporting Patrick Hsu and his team at UC Berkeley, and the work of the Addgene team.

Early detection of superspreaders by mass group pool testing

Most epidemiological models applied to COVID-19 do not consider heterogeneity in infectiousness or the impact of superspreaders, despite the broad viral-load distribution among COVID-19-positive people (1-1,000,000 per mL). Mass group testing is also not used, despite the existing shortage of tests. I propose a new strategy for early detection of superspreaders with a reasonable number of RT-PCR tests, which can dramatically mitigate the development of the COVID-19 pandemic and even turn it endemic. Methods: I used a stochastic social-epidemiological SEIAR model, where S-susceptible, E-exposed, I-infectious, A-admitted (confirmed COVID-19 positive, who are admitted to hospital or completely isolated), R-recovered. The model was applied to real COVID-19 dynamics in London, Moscow and New York City. Findings: Viral-load data measured by RT-PCR were fitted by a broad log-normal distribution, which implies the high importance of superspreaders. The proposed full-scale model of a metropolis shows that the top 10% of spreaders (100+ times higher viral loading than the median infector) transmit 45% of new cases. Rapid isolation of superspreaders leads to a 4-8 fold mitigation of the pandemic, depending on the applied quarantine strength and the number of currently infected people. High viral loading allows efficient group matrix pool testing of the population focused on detection of the superspreaders, requiring a remarkably small number of tests. Interpretation: The model and new testing strategy may prevent thousands or millions of COVID-19 deaths while requiring only about 5,000 daily RT-PCR tests for a big 12-million city such as Moscow.
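To see how a broad log-normal load distribution concentrates spread, here is a crude simulation (the parameters are illustrative, not the paper’s fit, and it assumes infectiousness is simply proportional to viral load, which overstates the concentration relative to the paper’s modeled 45%):

```python
import random

random.seed(0)

sigma = 3.0  # broad spread on the log scale (assumed, not the fitted value)
loads = sorted((random.lognormvariate(0.0, sigma) for _ in range(100_000)),
               reverse=True)

top_decile = loads[:len(loads) // 10]
share = sum(top_decile) / sum(loads)          # fraction of total viral load
ratio = top_decile[-1] / loads[len(loads) // 2]  # 90th percentile vs. median
print(f"top 10% carry {share:.0%} of total load; cutoff is {ratio:.0f}x the median")
```

Under this crude proportionality assumption the tail dominates even more than in the paper’s full model, which is the point: finding the top decile of spreaders is worth a wildly disproportionate share of transmission averted.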

Speculative, but I believe this is the future of our war against Covid-19.

The paper is by Maxim B. Gongalsky, via Alan Goldhammer.

Supply curves slope upward, Switzerland fact of the day, and how to get more tests done

Under Swiss law, every resident is required to purchase health insurance from one of several non-profit providers. Those on low incomes receive a subsidy for the cost of cover. As early as March 4, the federal health office announced that the cost of the test — CHF 180 ($189) — would be reimbursed for all policyholders.

Here is the article, that reimbursement is about 4x where U.S. levels had been.  The semi-good news is that the payments to Abbott are going up:

The U.S. government will nearly double the amount it pays hospitals and medical centers to run Abbott Laboratories’ large-scale coronavirus tests, an incentive to get the facilities to hire more technicians and expand testing that has fallen significantly short of the machines’ potential.

Abbott’s m2000 machines, which can process up to 1 million tests per week, haven’t been fully used because not enough technicians have been hired to run them, according to a person familiar with the matter.

In other words, we have policymakers who do not know that supply curves slope upwards (who ever might have taught them that?).

The same person who sent me that Swiss link also sends along this advice, which I will not further indent:

“As you know, there are 3 main venues for diagnostic tests in the U.S., which are:

1. Centralized labs, dominated by Quest and LabCorp

2. Labs at hospitals and large clinics

3. Point-of-care tests

There is also the CDC, although my understanding is that its testing capacity is very limited.  There may be reliability issues with POC tests, because apparently the most accurate test is derived from sticking a cotton swab far down in a patient’s nasal cavity.  So I think this leaves centralized labs and hospital labs.  Centralized labs perform lots of diagnostic tests in the U.S. and my understanding is this occurs because of their inherent lower cost structures compared to hospital labs.  Hospital labs could conduct many diagnostic tests, but they choose not to because of their higher costs.

In this context, my assumption is that the relatively poor CMS reimbursement of COVID-19 tests of around $40 per test means that only the centralized labs are able to test at volume and not lose money in the process.  Even in the case of centralized labs, they may have issues, because I don’t think they are set up to test deadly infectious diseases at volume.  I’m guessing you read the NY Times article on New Jersey testing yesterday, and that made me aware that patients often sneeze when the cotton swab is inserted in their noses.  Thus, it may be difficult to extract samples from suspected COVID-19 patients in a typical lab setting.  This can be diligenced easily by visiting a Quest or LabCorp facility.  Thus, additional cost may be required to set up the infrastructure (e.g., testing tents in the parking lot?) to perform the sample extraction.

Thus, if I were testing czar, which I obviously am not, I would recommend the following steps to substantially ramp up U.S. testing:

1. Perform a rough and rapid diligence process lasting 2 or 3 days to validate the assumptions above and the approach described below, and specifically the $200 reimbursement number (see below).  Importantly, estimate the amount of unused COVID-19 testing capacity that currently exists in U.S. hospitals, but is not being used because of a shortage of kits/reagents and because of low reimbursement.  This number could be very low, very high or anywhere in between.  I suspect it is high to very high, but I’m not sure.

2. Increase CMS reimbursement per COVID-19 test from about $40 to about $200.  Explain to whomever is necessary to convince (CMS?…Congress?…) why this dramatic increase is necessary, i.e., to offset higher costs for reagents, etc. and to fund necessary improvements in testing infrastructure, facilities and personnel.  Explain that this increase is necessary so hospital labs can ramp up testing and not lose money in the process.  Explain how $200 is similar to what some other countries are paying (e.g., Switzerland at $189).

3. Make this higher reimbursement temporary, lasting through June 30, 2020. Hopefully testing expands by then, and whatever parties bring on additional testing by then have recouped their fixed costs.

4. If necessary, justify the math, i.e., $200 per test, multiplied by roughly 1 or 2 million tests per day (roughly the target) x 75 days equals $15 to $30 billion, which is probably a bargain in the circumstances.

5. Work with the centralized labs (e.g., Quest, LabCorp, etc.), hospitals and healthcare clinics and manufacturers of testing equipment and reagents (e.g., ThermoFisher, Roche, Abbott, etc.) to hopefully accelerate the testing process.

6. Try to get other payors (e.g., HMOs, PPOs, etc.) to follow CMS lead on reimbursement.  This should not be difficult as other payors often follow CMS lead.

Just my $0.02.”

TC again: Here is a Politico article on why testing growth has been slow.

Why are we letting FDA regulations limit our number of coronavirus tests?

Since CDC and FDA haven’t authorized public health or hospital labs to run the [coronavirus] tests, right now #CDC is the only place that can. So, screening has to be rationed. Our ability to detect secondary spread among people not directly tied to China travel is greatly limited.

That is from Scott Gottlieb, former commissioner of the FDA, and also from Scott:

#FDA and #CDC can allow more labs to run the RT-PCR tests starting with public health agencies. Big medical centers can also be authorized to run tests under EUA. For now they’re not permitted to run the tests, even though many labs can do so reliably 9/9 cdc.gov/coronavirus/20

Here is further information about the obstacles facing the rollout of testing.  And read here from a Harvard professor of epidemiology, and here.  Clicking around and reading I have found this a difficult matter to get to the bottom of.  Nonetheless no one disputes that America is not conducting many tests, and is not in a good position to scale up those tests rapidly, and some of those obstacles are regulatory.  Why oh why are we messing around with this one?

For the pointer I thank Ada.

Why are Jamaicans the fastest runners in the world?

That is one chapter in Orlando Patterson’s new and excellent The Confounding Island: Jamaica and the Postcolonial Predicament.  One thing I like so much about this book is that it tries to answer actual questions you might have about Jamaica (astonishingly, hardly any other books have that aim, whether for Jamaica or for other countries).  So what about this question and this puzzle?

Well, in terms of per capita Olympic medals, Jamaica is #1 in the world, doing 3.75 times better by that metric than Russia at #2.  This is mostly because of running, not bobsled teams.  Yet why is Jamaica as a nation so strong in running?

Patterson suggests it is not genetic predisposition, as neither Nigeria nor Brazil, both home to large numbers of ethnically comparable individuals, has real success in running competitions.  Nor, for that matter, do Jamaicans do so well in most team sports, including those demanding extreme athleticism.  Patterson also cites the work of researcher Yannis Pitsiladis, who collected DNA samples from top runners and did not find the expected correlations.

Patterson instead cites the interaction of a number of social factors behind the excellence of Jamaican running, including:

1. Preexisting role models.

2. The annual Inter-Scholastic Athletic Championship, also known as Champs, which provides a major boost to running excellence.

3. Proximity and cultural ties with the United States, which give athletically talented Jamaicans the chance to access better training and resources.

4. The Jamaican diet and a number of good public health programs, contributing to the strength of potential Jamaican runners (James C. Riley: “Between 1920 and 1950, Jamaicans added life expectancy at one of the most rapid paces attained in any country.”)

5. The low costs of running, and running practice, combined with the “combative individualism” of Jamaican culture, which pulls the most talented Jamaican athletes into individual rather than team sports.  (That same culture is supposed to be responsible for dancehall battles and the like as well.)

Whether or not you agree, those are indeed answers.  The book also considers “Why Has Jamaica Trailed Barbados on the Path to Sustained Growth?”, “Why is Democratic Jamaica so Violent?”, and a number of questions about poverty.  Amazing!  Those are indeed the questions I have about Jamaica, among others.

Recommended, you can pre-order here.

The economics of the Protestant Reformation

Here is the abstract of a new paper by Davide Cantoni, Jeremiah Dittmar, and Noam Yuchtman:

The Protestant Reformation, beginning in 1517, was both a shock to the market for religion and a first-order economic shock. We study its impact on the allocation of resources between the religious and secular sectors in Germany, collecting data on the allocation of human and physical capital. While Protestant reformers aimed to elevate the role of religion, we find that the Reformation produced rapid economic secularization. The interaction between religious competition and political economy explains the shift in investments in human and fixed capital away from the religious sector. Large numbers of monasteries were expropriated during the Reformation, particularly in Protestant regions. This transfer of resources shifted the demand for labor between religious and secular sectors: graduates from Protestant universities increasingly entered secular occupations. Consistent with forward-looking behavior, students at Protestant universities shifted from the study of theology toward secular degrees. The appropriation of resources by secular rulers is also reflected in construction: during the Reformation, religious construction declined, particularly in Protestant regions, while secular construction increased, especially for administrative purposes. Reallocation was not driven by pre-existing economic or cultural differences.

For the pointer I thank the excellent Kevin Lewis.

CEO compensation: the latest results

Here’s the latest:

We analyze the long-run trends in executive compensation using a new panel dataset of top executives in large publicly-held firms from 1936 to 2005, collected from corporate reports. This historic perspective reveals several surprising new facts that conflict with inferences based only on data from the recent decades. First, the median real value of compensation was remarkably flat from the end of World War II to the mid-1970s, even during times of rapid economic expansion and aggregate firm growth. This finding contrasts sharply with the steep upward trajectory of pay over the past thirty years, which coincided with a period of similarly large increases in aggregate firm size. A second surprising finding is that the sensitivity of an executive’s wealth to firm performance was not inconsequentially small for most of our sample period. Thus, recent years were not the first time when compensation arrangements served to align managerial incentives with those of shareholders. Taken together, the long-run trends in the level and structure of compensation pose a challenge to several common explanations for the widely-debated surge in executive pay of the past several decades, including changes in firms’ size, rent extraction by CEOs, and increases in managerial incentives.

I don’t quite think these results are "surprising" any more, though they would have been three years ago.  In my view the analytically noxious "cultural factors" are looming larger in the explanation than we used to think.  It’s become increasingly hard to deny top producers what they, in economic terms, are worth.

Means testing for Medicare

Let’s first quote Mark Thoma’s response to my column; it is indirectly a good summary of what I argue:

I believe the political argument that giving everyone a stake in the program helps to preserve it has more validity than Tyler does, market failures (some of which hit all income groups) probably play a larger role in my thinking about government responses to the health care problem than in his, and I have more confidence than Tyler that a universal care system has the potential to lower costs.

And now here’s me:

…the idea of cutting some government transfers provokes protest in some quarters. One major criticism is that programs for the poor alone will not be well financed because poor people do not have much political power. Thus, this idea goes, we should try to make transfer programs as comprehensive as possible, so that every voter has a stake in the program and will support more spending.

But even if this argument holds true now, it may not be very persuasive when Medicare costs start to push taxation levels above 50 percent. A more modest program, more directly aimed at those who need it, might prove more sustainable in the longer run.

Americans have supported the growth of many programs aimed mainly at the poor. Both Medicaid and the Earned Income Tax Credit have grown rapidly in size since their inception. The idea of helping the poor and not having the government take over entire economic sectors was the original motive behind welfare programs, in any case.

Furthermore, the argument for comprehensive and universal transfer programs does not meet the ideal of democratic transparency. If taking care of the poor is the real value in welfare programs, those programs should be sold as such to the electorate. We shouldn’t give wealthier people benefits just to “trick” them, for selfish reasons, into voting for greater benefits for everyone, the poor included.

Here is another point:

Advocates of health care reform tend to be long on ideas for expanding care and access, but short on practical solutions for cost control. The argument is often made that single-payer health care systems in Canada or Europe are cheaper than health care in the United States. But Medicare is already a single-payer plan, yet its costs are unsustainable.

Note that I am calling for higher benefits for the poor and lower benefits for higher-income groups.  That’s not a popular stance, not even with egalitarians.  In fact I view the contemporary left as oddly ill-prepared on the health care issue.  Electorally speaking, the issue is fully 100 percent in their court (and they are used to pressing it aggressively), until of course they get their way and have to "meet payroll," so to speak.  One attitude is to cite Europe and think that the production possibilities frontier can expand under better management of the U.S. system, even as you cover an extra 40 million people.  Another attitude is to face the notion of trade-offs. 

Here is the full column.  (By the way, I think that HSAs are ineffective as health care reform and that the so-called "right" is floundering on this issue, just to get in my equal opportunity smack on the blog.)

Addendum: You can make a good argument that (some) public health programs are the best health care investment of all; I just didn’t have enough space in the column to cover that issue.

Second addendum: Greg Mankiw didn’t read so closely.  It’s not "an income tax surcharge on sick, old people."  It’s a reallocation of benefits toward people of greater need.  Is any benefit less than infinity an "income tax surcharge"?

Third addendum: Here is Paul Krugman on the topic.

The greatest basketball team ever?

These Spurs are so quiet, but it should be asked whether they are the best NBA team to have walked on the planet Earth.  A few points:

1. Since 1997 they have a winning percentage of over .700, the best in any sport.  This includes two previous championship rings, but the current incarnation of the Spurs is believed to be the best.

2. They have absolutely crushed a variety of strong teams from the West, even when Tim Duncan had sore ankles.

3. Their best player, Tim Duncan, should at this point be MVP every year.

4. They are one of the best defensive teams, ever.  Bruce Bowen is a first-rate stopper.

5. They are one of the best-coached teams, ever.  They have an amazing variety of offensive plays and defensive set-ups.  They can play in many different styles, including run and gun fast break, when needed.  They are far more than the sum of their parts.

6. They do not appear to have problems with personalities or dissension.

7. They have a very strong bench.

8. You would rather have Manu Ginobili than Kobe Bryant.

9. In any sport where performance is measurable, quality rises over time.  Yes there is dilution but overall the best basketball teams are getting better.  And the use of foreign players — prominent on the Spurs — is overcoming the dilution problem rapidly.

Can you imagine Bruce Bowen holding MJ to thirty points and Duncan going around Bill Cartwright at will?  Could they keep the fast break of the Showtime Lakers in check, while exploiting the relatively weak defense of that team?  How would they match up against the 1989-1990 "Bad Boy" Pistons, or the Celtics with Bill Walton?

We should put the low TV ratings aside and start asking these questions.