Sunday, January 9, 2011

Two articles for the Manhattan Institute's Medical Progress Today, on C.P. Snow's "The Two Cultures," and on FDA Reform.



In the last century, the dominant impulse of American healthcare policy has shifted decisively--from cure to care.  Whereas once we sought to eliminate disease through science, now we seek to finance disease through insurance.  Why did this huge shift occur?   The answer can be found in the shift in power relations between two cultures in today’s society, one focusing on dynamic science, the other focusing on static literary, political, and legal worldviews.

In the early 1900s, President Theodore Roosevelt decided to dig the Panama Canal, but he knew he would never succeed if malaria and yellow fever decimated the workforce, as had happened in the French canal-building effort two decades earlier.   Medical scientists of TR’s day had not yet developed effective treatments for those diseases, but they had learned that mosquitoes were the vector, or transmission agent.  So President Roosevelt simply ordered mosquito habitats cleared away from the canal worksite; as a result, infection and death rates were minimized, and the path between the seas was completed.

Using today’s terminology, we could say that TR “bent the curve” on healthcare costs, and yet he did so not by crimping down on treatment, but by crimping down on the disease itself.  We might note, to be sure, that TR’s environment-changing solution was in keeping with those disease-attacking times; back home in the US, progressives were using the same methods to improve hygiene and sanitation, dramatically reducing rates of infectious disease.  Indeed, all through the early 20th century, forward-looking public health advocates--untroubled by “wetland” protectors and NIMBYs--drained swamps and launched other mosquito-abatement measures.   As a result, malaria and yellow fever virtually disappeared from the US.  

Later, another President Roosevelt, Franklin, also chose to fight disease.  In 1938, FDR established the National Foundation for Infantile Paralysis, soon to become known, after a groundswell of public support--“Send your dime to President Roosevelt!”--as the March of Dimes.   Instead of proposing national health insurance as part of his Social Security retirement plan, FDR gave Americans something more precious: health itself.   In 1955, the announcement of a successful new Salk polio vaccine cheered a relieved and grateful public.

The dominant idea then was not to provide insurance for disease; it was to do away with disease.   And the added advantage to such an approach--beat, don’t treat--was not only its humanitarian impact, but also that it was cheaper in the long run.  The expense of developing the polio vaccine was minuscule, compared to the expense of providing multiple wheelchairs and iron lungs, to say nothing of the cost of lost productivity from those afflicted.

Now we can fast forward to the end of the 20th century.  In the 1990s, Bill Clinton proposed a national health insurance plan that focused entirely on health insurance, as opposed to the actual science of health.   According to science-policy historian Daniel Greenberg, in Clinton’s 1993 speech outlining his healthcare plan to Congress, “The President made no mention of medical research or the role of improved scientific understanding of disease in protecting the health of the American people.”  Instead, he focused solely on “universal access to medical care and cost containment.”

And the subsequent Obamacare, of course, followed the same formula: a huge increase in “coverage,” even as the pipeline for new drugs and cures--and hope--continued to dry up.   

So what changed over the last century?  How did we go from trying to eliminate dreaded  disease to being content merely to finance its ravages?   How could it be, for example, that today we spend $172 billion a year treating Alzheimer’s Disease, but only about $500 million a year  researching the malady?
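The imbalance between those two figures can be made concrete with a back-of-envelope division (a sketch in Python; the dollar amounts are the ones quoted above):

```python
# Back-of-envelope check of the care-vs-research imbalance cited above.
# Figures as quoted: $172 billion a year treating Alzheimer's,
# about $500 million a year researching it.
care_spending = 172e9      # dollars per year on treatment and care
research_spending = 500e6  # dollars per year on research

ratio = care_spending / research_spending
print(f"Care dollars spent per research dollar: {ratio:.0f}")  # -> 344
```

In other words, for every dollar spent studying the disease, roughly $344 goes to coping with its effects.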


And the problem will get worse in the decades to come:  Do we really think we can manage the Alzheimer’s epidemic through either budget cuts or “financial innovation”?  Unfortunately, no presidential March-of-Dimes-like effort is in sight to mobilize against Alzheimer’s Disease, or against any other costly killer.

One reason for this dire policy reversal might be found in a 1959 lecture, “The Two Cultures and the Scientific Revolution,” delivered by the Englishman C.P. Snow at Cambridge University, later turned into a still-in-print book.  “The intellectual life of the whole of western society,” Snow declared, “is increasingly being split into two polar groups.”  One of these groups is the literary, or traditional culture; the other is the scientific culture.   “Between the two,” Snow continued, stretches “a gulf . . . of hostility and dislike, but most of all lack of understanding.”   

Snow had served as technical director for the Ministry of Labour during World War Two; he had seen, up close, the struggle to advance vital scientific research--most notably, the tide-turning technology of radar.  It was the scientific culture, Snow argued, that had devised the tools for defeating Hitler; yet, after the war, it was the literary culture that had triumphed, submerging the future progress of science in a languorous bath of aesthetic styles and judgments.  Snow, a notable author, well-versed in the humanities, decried the excessive influence of “the literary intellectuals,” who, he added, “while no one was looking took to referring to themselves as ‘intellectuals’ as though there were no others.”  As a result, science, as well as the positive transformations that science can bring, was being pushed out of contemporary political and policy equations.  

This dethronement of science was a huge loss, declared Snow.  “Scientists have the future in their bones,” he added, and yet “the traditional culture responds by wishing the future did not exist.”  That is, scientists seek to advance and change the future, while the literary culture is content to muse over the present.   And yet presently, Snow lamented, “It is the traditional culture . . .  which manages the western world.”

Predictably, Snow’s broadside against the humanities attracted immediate counter-fire.   The eminent literary critic F.R. Leavis denounced Snow as a “public relations man” for science, suffering from “complete ignorance” about the humanities--ignoring the fact that Snow, in addition to his government service, had published 17 novels and works of fiction, as well as a biography of Anthony Trollope.  Yet Snow was adamant: Science should take the lead, as opposed to those he termed “natural Luddites.”

Updating Snow, we can observe that in our time, the literary/humanities culture has been fused with the political/legal/social science culture, creating the current policymaking juggernaut, which we might dub the “literary-legal complex.”  We might further say that the literary-legal complex is inherently oriented to predictability, according to its non-scientific, even anti-scientific, prejudices.  And so the literary-legal worldview focuses on transactions and rule-making--creating durable, but inherently limited, routines in its own image.  By contrast, science is unpredictable and open-ended: the work of scientists disrupts predictable and familiar routines, remaking the world in heretofore unimagined ways.  So people of a scientific turn of mind, for example, produce vaccines, unsentimentally disrupting painful--but at least familiar--patterns of sickness and death.   By contrast, in past eras, the literary-legal turn of mind debated whether it was a sin to use vaccines, thereby thwarting God’s will.   Today, many of those who still retain that predictability-oriented turn of mind are still happy to see science restrained--by custom, law, or lawyers.

Meanwhile, in politics today, we see endless ideological battles within the left and right factions of the literary-legal complex, over the financial metaphysics of health insurance, pro and con, and little consideration of where health actually comes from.  Thought-leaders of the two parties,  sharing a common background in humanities and the law, naturally end up with similar conclusions about the primacy of predictable transactions and rules.   And so their policies, dueling as they might be, end up mirroring each other.  

The left, inspired by theorists and social critics--leavened with a little Green pseudo-science--produces policies in its own image: regulations, consent decrees, sweeping theories of legal liability.   The right, on the other hand, re-reads the Constitution and studies economics, while seeking further inspiration in the novels of Ayn Rand.  That is, both sides ignore science, because they don’t know science.   So if the left says that the government should provide health insurance, the right says, no, the market should provide it.  And in the midst of that ideological  rumble, the idea of actual improvements in health--improvements that come from medical science--is sadly forgotten. 

As we are currently seeing, ideology is an endless fight that can never be resolved.  By contrast, technology is a fight that can be resolved.   Polio, for example, was resolved.   And we can resolve other diseases as well, if we want to.  But first, we will have to realize that our current exaltation of the literary-legal complex is not the solution but, in fact, the problem.

^^^

And here's an earlier article, also for The Manhattan Institute's Medical Progress Today, published in early December: 

A consensus is developing that Alzheimer’s Disease (AD) is the next epidemic to worry about, both medically and fiscally.  But unfortunately, there’s nothing close to a consensus on what to do about it.

Considerations of AD have been subsumed by the debate over spending and the deficit--although Washington mavens don’t yet seem to grasp that a fiscal solution is not possible without a medical solution.   Instead, policy chatterers have zeroed in on cutting entitlement spending, ignoring the medical problems of aging in favor of a strictly fiscal solution.  Such an approach has a soundbite-worthy appeal, but it disregards outside-the-beltway political reality: If the underlying problem of the disease itself is ignored--the incidence of AD is expected to triple in the next 40 years--then popular pressure to spend commensurately will not be ignored by politicians.   Indeed, we can note that if AD triples, it won’t matter much whether Obamacare lives or dies; either way, by mid-century, healthcare will be ruinously expensive.

In the meantime, others seem to believe we should simply have a partisan brawl, aimed at gaining maximum political advantage going into the 2012 elections.  And of course, as soon as the ’12 elections are over, fighting over the ’14 elections will commence.   Yet amidst such see-sawing political  opportunism, we will continue to spend money on AD care--already more than one percent of GDP, and rising fast--without any real prospect for a cost-curve-bending cure.   

In August 2010, The New York Times reported on the work of a medical “jury” convened by the National Institutes of Health to evaluate the various treatments for AD.  The “verdict” of the NIH panel was discouraging in the extreme: “Currently, no evidence of even moderate scientific quality exists to support the association of any modifiable factor (such as nutritional supplements, herbal preparations, dietary factors, prescription or nonprescription drugs, social or economic factors, medical conditions, toxins or environmental exposures) with reduced risk of Alzheimer’s disease.”

To sum up: There's “no evidence” that anything we are doing to forestall or treat Alzheimer's is working.  The chair of the NIH panel, Dr. Martha L. Daviglus of Northwestern University, summed up the current state of research as “primitive.”
  
Yet we do see stirrings of a counteroffensive against the AD onslaught.  In October, former Supreme Court justice Sandra Day O’Connor, joined by Nobel medical laureate Stanley Prusiner and geriatric expert Ken Dychtwald, argued on the op-ed page of The New York Times for a more proactive strategy against AD and its costs:

As things stand today, for each penny the National Institutes of Health spends on Alzheimer’s research, we spend more than $3.50 on caring for people with the condition. This explains why the financial cost of not conducting adequate research is so high. The United States spends $172 billion a year to care for people with Alzheimer’s. By 2020 the cumulative price tag, in current dollars, will be $2 trillion, and by 2050, $20 trillion.
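The cumulative figure in that passage is arithmetically plausible: a decade of care spending starting at today's $172 billion a year does sum to roughly $2 trillion, even with only modest growth (a sketch; the 3 percent annual growth rate is an illustrative assumption, not a number from the op-ed):

```python
# Sanity check on the cumulative cost figure quoted above.
# The $172B starting annual cost is from the op-ed; the 3% annual
# growth rate is an illustrative assumption for this sketch.
annual_cost = 172e9
growth = 0.03

cumulative = 0.0
for year in range(10):  # ten years of care spending
    cumulative += annual_cost * (1 + growth) ** year

print(f"Cumulative 10-year cost: ${cumulative / 1e12:.2f} trillion")
# -> Cumulative 10-year cost: $1.97 trillion
```

So the "$2 trillion by 2020" figure requires no heroic assumptions at all; it follows almost directly from current spending levels.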

In addition, O’Connor, Prusiner, and Dychtwald brought up the benefits of an effective AD treatment: 

If we could simply postpone the onset of Alzheimer’s disease by five years, a large share of nursing home beds in the United States would empty. And if we could eliminate it, as Jonas Salk wiped out polio with his vaccine, we would greatly expand the potential of all Americans to live long, healthy and productive lives--and save trillions of dollars doing it.

That same month, another leading figure, California first lady Maria Shriver, made an overlapping argument: The goal should not be treating AD, the goal should be beating AD.  Speaking to ABC News’ Diane Sawyer, Shriver invoked the ambitious vision of her famous uncle: “We can launch an expedition on the brain, much like President Kennedy launched an expedition to the moon.”

Once again, the obvious wisdom: A cure for any malady is cheaper than palliative care.   That medical-fiscal reality was true for polio, as well as other diseases that have been mostly or completely eliminated--so why couldn’t it be true for AD?  

And yet the political establishment has paid little heed to O’Connor and Shriver, nor has it thought constructively about the polio precedent.  In the weeks since, two different “blue chip” deficit commissions have released weighty reports; both focused entirely on a “cut” strategy for healthcare, as opposed to a cure strategy.  

Why the neglect of the proactive cure-approach?  Perhaps Washington officialdom is simply incapable of a complicated “two cushion shot”--that is, hitting the billiard ball of medical research in order to hit the ball of lower costs in the long run.  Or perhaps the idea of scientific research, as opposed to writing checks and offering bailouts, is simply out of fashion in policy circles.   Or maybe Washington has quietly concluded that medical research--and just as crucially, the translation of medical research into actual medications in the marketplace--is hitting a dead end.  Why spend more money on, say, the NIH if nothing tangible is achieved?  Why have faith in the pharmaceutical companies when a dozen anti-AD efforts have failed in mid- to late-stage testing since 2003?

Yet if nothing can be done to rekindle medical progress as a public-policy tool, then we are, in fact, doomed both to go gray and to go broke by the middle of this century--unless, of course, the death panels are called in.  

So is there any hope for a better outcome?  An outcome that’s both more compassionate--and less ruinous?   If there is such a hope, it will have to come from medical research.  Financial transactions won’t get us there; only scientific transformation can do the job.  

For their part, politicians can help, not by fighting each other, but by clearing away the legal and regulatory roadblocks to medical progress.  MI’s Paul Howard is correct in stating that the FDA is facing a “crisis of confidence”; critics on all sides agree that the agency is “broken.”   So the answer, of course, is to fix the agency, as part of an overall cure strategy. 

Will such an effort succeed?  There’s no way to know for sure, but our long national track record on public-private mobilization--from building the railroads to building the Interstates to building the Internet--should give us considerable hope.  If, that is, we can distill the right leadership lessons from those past large-scale successes. 

The political and economic reward for medical success is monumental.  With an effective treatment for AD, we could, for example, begin to think about raising the retirement age for Medicare and Social Security, thus solving much of the deficit problem.

In addition, an AD breakthrough would shift cost calculations on just about every fiscal and economic variable.  Today, all those tens of millions--soon to be hundreds of millions--of people around the world suffering from AD are seen as a huge burden.  But with a real AD cure, they could become paying customers for whichever country is making the medicine, able to continue working and producing--for the betterment of each country, for the betterment of mankind.  

Saturday, January 1, 2011

Bloomberg News Points Out The Need for a "Star Trek"-like Medical "Tricorder."




I.  Bloomberg News Highlights a Serious Healthcare Problem 

OK, that’s not quite what Bloomberg News reported on December 30--the word “Tricorder” does not appear in the story anywhere at all.  But we can say this much for sure: The piece highlights the need for better solutions, and a Tricorder would be a great solution.  

Instead, the Bloomberg report chronicled abusive and excessive surgery practices in Minnesota; the muckraking headline reads, “Doctors Getting Rich With Fusion Surgery Debunked by Studies.” Written by reporters Peter Waldman and David Armstrong, the 3500-word article focuses on alleged misdeeds in the Gopher State, even as the article extrapolates that state’s problems onto the rest of the country.   Yet as we shall see, these kinds of abuses--and other problems of medicine--won’t genuinely get better until more technology is provided; provided to the doctor, to the patient, and to all of us.

The Bloomberg article begins with the story of a Minnesota man, Mikel Hehn, who in 2008 received a spinal vertebrae-fusing operation that turned out badly for him; he is now in chronic pain, taking 10 different medications to deal with pain and depression.  The issues raised here are, indeed, complicated.  We might note, for example, that Hehn had been told by a doctor in his hometown of St. Cloud that he didn’t need the spinal fusing procedure, and so Hehn went to Minneapolis to get a second opinion--and got the operation, with disastrous results.  Other Minnesota patients, too, suffered bad results from their vertebrae surgery--one even died.

The piece cites further horror stories concerning back surgery in Minnesota, and goes on to suggest close collaboration between doctors and medical equipment companies, suggesting that doctors and companies are working together to perform more back surgeries, hiking up profits for both.  Finally, the piece indicates that Minnesota is a one-state microcosm for the country, citing a government estimate that unnecessary surgery of all kinds costs the nation $150 billion a year.   The Bloomberg reporters further note:

The number of fusions at U.S. hospitals doubled to 413,000 between 2002 and 2008, generating $34 billion in bills, data from the federal Healthcare Cost and Utilization Project show. The number of the surgeries will rise to 453,300 this year, according to Millennium Research Group of Toronto. 

The possibility that many of these and other surgeries are needless has gotten little attention in the debate over U.S. health care costs, which rose 6 percent last year to $2.47 trillion. Unnecessary surgeries cost at least $150 billion a year, according to John Birkmeyer, director of the Center for Healthcare Outcomes & Policy at the University of Michigan.

“It’s amazing how much evidence there is that fusions don’t work, yet surgeons do them anyway,” said Sohail Mirza, a spine surgeon who chairs the Department of Orthopaedics at Dartmouth Medical School in Hanover, New Hampshire. “The only one who isn’t benefitting from the equation is the patient.”
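A couple of derived figures fall out of the numbers quoted above (a sketch; it assumes, as the article's phrasing suggests, that the $34 billion in bills corresponds to the 413,000 fusions performed in 2008):

```python
# Derived figures from the Bloomberg data quoted above.
# Assumption: the $34B in bills maps to the 413,000 fusions of 2008.
fusions_2008 = 413_000        # fusions at U.S. hospitals in 2008
total_bills = 34e9            # dollars billed for those fusions
fusions_projected = 453_300   # Millennium Research Group projection

avg_bill = total_bills / fusions_2008
volume_growth = fusions_projected / fusions_2008 - 1

print(f"Average bill per fusion: ${avg_bill:,.0f}")        # -> $82,324
print(f"Projected growth in fusion volume: {volume_growth:.1%}")  # -> 9.8%
```

An average hospital bill north of $80,000 per procedure, with volume still growing nearly 10 percent, helps explain why the incentives described in the article are so hard to dislodge.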

So what could we do to better warn patients against the overuse of this fusion procedure?  What can we do to inhibit doctors from over-medicating?  And how can all the rest of us be alerted to such overuse?   

II.  Non-Solution Solutions 

First we might review what is not likely to work.  What won’t work, unfortunately, is what’s likely to happen next.  

We might begin by recalling that politicians, confronted with bad news, feel the immediate impulse to “do something.”  It will be easy, for example, for a lawmaker to hold a press conference, or to hold a victim-heavy hearing in front of TV cameras, and to then offer legislation to “fix” the problem.  In other words, politicians have a template through which they see a crisis: Decry the problem, further highlight the problem, and then legislate against the problem.   

But as we have learned over the years, many legislative fixes--especially those fueled by a sudden wave of outrage--end up making the problem worse.   If the “solution” to the problems identified in Minnesota is simply to put a layer of regulatory bureaucracy atop the current system, regulating the behavior of all the medical and financial players, well, that’s not overly promising.  
  
We can further observe that the Obama administration will undoubtedly take note of this story, and use the facts cited herein to further its own healthcare policy agenda.   The Healthcare Cost and Utilization Project, for example, which produced the $150 billion estimate for unnecessary operations cited in the article, is a unit of the federal Agency for Healthcare Research and Quality.  And AHRQ, in turn, is a part of the US Department of Health and Human Services; HHS Secretary Kathleen Sebelius is the point person for the Obama administration’s efforts to slow down the rise in healthcare costs--by any means necessary, in the opinion of some.  So those who mistrust the Obama administration’s healthcare efforts--officials have said, many times, that the goal of reducing healthcare costs is more important than improving healthcare results--might feel concerned that the Obamans will seize upon this article to restate their argument that federal experts must step in to cut costs.  In other words, officials of the executive branch will step in, alongside Members of Congress and other elected officials, to “do something” about over-medication.  

The immediate abuses, such as they might be, can be curbed, but as we have learned, over the long run, those being regulated have a way of evading, even subverting, the regulation.  Moreover, if the new rules are applied heavy-handedly, they can make the situation worse. 
  
The beneficiaries of current spinal surgery practices--including the surgeons, the hospitals, the medical equipment companies--know exactly who they are, and how they stand to benefit from a continuation of the system.  Confronted with a threat to their revenue, they will bulk up on lobbyists and lawyers and p.r. people, and muster their own counter-arguments.

Down the road, assuming that some legislative or regulatory change is enacted, we all might discover that the government is not always efficient or competent at what it does.  

Moreover, individual bureaucrats, as well as whole bureaucracies, are subject to “capture”--that is, they become captured by the industries they are supposed to regulate.  In analogous situations across US history--the regulation of the railroads being a paradigmatic instance--the result of greater regulation might have been a short-term increase in “fairness,” but the long-term result was not only a stifling of innovation in an industry but also the overall decay of that industry.   In the case of past regulation, for example, the regulators and the regulatees settled into their new relationship, which soon came to be defined as the mutual maintenance of the status quo. We can add that one status-quo-maintaining result that generally pleases both sides of any regulatory equation is the raising of barriers to entry for would-be newcomers into the field.   That is, if every player in a given industry is to be regulated, then, of course, each player must be suitably licensed, certified and inspected.  And each of those actions generally makes it harder for new entrants to get into that economic space.   As barriers to entry rise, there’s an inevitable slide in quality--which can soon become a collapse of the industry.   It’s happened before, and it can happen again.   In the world of bureaucratic politics, not much has changed since the 19th century--it’s the nature of bureaucracy that’s a constant.

One variable that is new, however, is large-scale tort litigation.   The Bloomberg piece is undoubtedly going to attract significant attention from trial lawyers, always sniffing around for new cases.  Nobody should be surprised to see a flurry of lawsuits pouring forth, as trial lawyers look forward to stoking jury passions, already inflamed by the hot news of the moment, in pursuit of big judgments.   Using their legal power of “discovery,” tort lawyers might even discover new kinds of individual or corporate culpability--that’s what happens when thousands and millions of documents are pored over.    And yet nothing, we should note, in the litigation arsenal actually makes healthcare and medicine better.

So while some abuses identified in the Bloomberg article might be investigated or regulated out of existence, the overall condition of the industry--and of healthcare--could well get worse.  That’s a negative outcome that merits the attention of all of us. 

Under current parameters, the problem of over-medication is essentially insoluble, we might say, because it results from at least three sources.  First, some doctors may be willing to violate their professional oath by over-medicating.  Second, there is corporate cupidity--the zeal to sell things to people that they don’t need.  And third, many patients--influenced, perhaps, by the culture--demand that something be done about their medical problems, even if that “something” makes their own problem worse.   The only “solution” would be a drastic clampdown on medical care, a hammer coming down on the medical-industrial complex.  And if that hammer were to come, it’s a safe bet that overall healthcare would get worse, not better.

So what can be done?  How do we break out of this regulation-litigation back-and-forth along the flat-to-negative axes of bureaucracy and litigation?   How do we push things on an upward path?  How do we enact genuine reforms so that patients get the best possible medical advice, as well as the best possible medical devices?

Here we might be reminded of the wisdom of Albert Einstein, who said, eloquently and elegantly, “No problem can be solved on its own level.”   

III.  The Medical Grid

The Tricorder, not mentioned in the Bloomberg piece, will have to wait.  But it shouldn’t wait long, because as we have seen, many of the problems pinpointed by the Bloomberg article are insoluble in the current environment.   More regulators and litigators is not a formula for better healthcare.

Instead, we will need a deus ex machina--in the most literal sense, we will need a machine coming down from the sky to solve the problems that the Bloomberg article highlighted, and many other problems, as well.   Fortunately, that’s possible, if we want to work toward bringing it into existence.
   
For all the problems of healthcare, we might note one in particular: At the heart of the current system is an inefficient and information-deprived relationship--the relationship between the patient and his or her doctor.   Doctors save lives, to be sure, but they have vastly more power than patients, and as we have seen, that power can be abused.   

As the Nobel Economics Laureate Kenneth Arrow demonstrated a half-century ago, an endemic problem in medicine is “asymmetric information.”   That is, the doctor knows a lot more than the patient, and, indeed, the medical system itself knows a lot more than the patient.   So if the doctor, or the system, says that the patient needs something, the patient can’t be expected to know enough to negotiate the optimum outcome with the doctor.  Yes, getting second opinions can be valuable, but some of the most expensive medical situations arise when the patient is in distress, and thus can hardly be expected to think with clarity or patience.  If the patient ends up in an emergency room, there’s not much dickering over price to be done.    We might note that in an asymmetric environment, the problem can go both ways--toward over-medicating, and also under-medicating.   Either way, if the experts say one thing, it will be hard for an ordinary citizen to argue the other way.    

This Arrow information asymmetry inevitably inspires some to look for outside forces to help monitor the practice of medicine.  Health insurance entities, public and private, seek to apply metrics to care, using data to identify, for example, “hot zones” of over-treatment.  (They rarely seem to worry, these days, about under-treatment.)

Yet another way of looking at the asymmetric relationship is to describe it as a “boutique.”  By “boutique,” we mean that each medical case is handled in relative isolation from other cases, and  as a result, each case risks being isolated from fast-changing awareness as to best practices.  Doctors, whether they work by themselves or as part of a group practice, are hard-pressed to keep up with the literature in their field, even if they are fully and single-mindedly motivated.   There’s simply too much information.   


In other fields, the information glut is automated to the point of being manageable, at least for the task at hand.  Systems are turned into algorithms, boiling down the decision-making process to a relatively few key decision points.   And so complex tasks become simpler, and simple tasks become easy.   Cashiers, for example, no longer have to do math, or even look at the price of items--the prices are scanned in, along with sales tax, discounts, and “rewards” systems; meanwhile, the system itself is notified as to what goods are selling, and what inventory needs to be restocked.    These days, we say that the system itself resides in a “cloud,” defined as all the hardware and software that makes up the decision-making process.   The whole system is automated to speed up the consumer experience and to minimize error and expense.   Of course, if the process looks simple on the surface, the reality is that millions of people have put in billions of hours to figure out how to do all this--it takes a lot of work to make things simple.  But of course, there will always be people, too, in the mix.  There are plenty of robot factories around the world, but we keep humans around in each and every one of them, just to keep track of the ‘bots.


So is medicine the same as merchandising?  Can we reduce patients to the same level as cans of soup?   Of course not.   But going back to the days of Norbert Wiener, cyberneticists have understood the deep unity of information--in the end, it’s all ones and zeros.   Any problem that can be reduced to ones and zeros can, in turn, be solved by the “data crunching” of those same ones and zeros.  That’s the science.  The art, of course, is providing the right intellectual and ethical framework for the information as it is brought up the scale of complexity, from raw material to finished product--that is, as those ones and zeros are built up into databases and networks, where their value can be safely and pleasingly used for the benefit of humans.  We can do all this, we can put more information “on the grid,” if we want to, while protecting personal privacy and dignity.   This “gridding” effort might seem complicated, but it is not impossible.  And there are huge gains to be made, as we shall see.


By now, we have put most important things on “the grid.”  And by “grid” we mean not only the Internet, not only the cloud, but also all larger systems of predictability and transparency, such as the law and codified best practices (which, of course, can be online, in the cloud, among other places).    


The basic notion of time, for example, is on the grid.  That is, not only does Greenwich Mean Time exist as an objective measure of "best practice" on time, but we all have free and easy access to GMT.  In addition, we all have watches, clocks, and other timepieces, including the now-ubiquitous cell phone.  So if we were ever to make an appointment to see an "expert" on time, we would always have a "second opinion" at our fingertips.  The timing device would be our own expert, our own check-and-balance against whatever the "expert" might say.  At one time--before the institution of regularized time zones, for example--time was a huge issue, and mistakes and even tragedies resulted.  But now, after centuries of working at fully transparent and abundantly available time, the quality of timekeeping in this country is not much of an issue, and certainly not a crisis.

Other crises, too, have been alleviated.  Auto repair, for example, has long been a problem area, for reasons of information asymmetry.  Auto mechanics were able to take advantage of customer ignorance; what's changed, to some extent, is not only the laws and regulations, but also the information environment in which auto repair exists--more information is on the grid.  Parts are modular, there's more transparency on prices, and there's a huge industry of do-it-yourself home repair--many lessons are available online.  On YouTube, for example, "auto repair how to" yields up 5540 "hits," and there are thousands more videos to be found.  For the consumer, the more information, the better.
As the late Daniel Patrick Moynihan said, we are all entitled to our own opinion, but we are not entitled to our own facts.   The creation and distribution of quality information is vital work, comparable to creating a dictionary in the past, or standardizing weights and measures.   

So we can say, then, that the real issue is expunging bad information practice.  To the extent that we are able, we need to equalize information among all players.  With a few exceptions, bad information practice is not a question of good and evil; it is a matter of inefficiency and incompetence--how quickly can information diffuse?
    
To be sure, all of this is complicated.  But humanity has demonstrated that it can handle more complexity.  More to the point, with great effort, we have built the machines that will enable us to manage all this complexity.  Anyone's laptop computer today has more processing power than existed in all the world sixty years ago.  And the fastest supercomputer today has achieved a processing rate of more than 2.5 "petaflops"; that is, 2.5 quadrillion (a thousand trillion) floating-point operations per second.

And computers can do things that once seemed nuanced and subtle.  The game of chess is both an art and a science, and it is certainly complicated; there are 400 different positions after each player makes a single move, 72,084 positions after each player has made two moves, and nine million or more positions after three.  The number of legal positions in chess is estimated at between 10^43 and 10^123; that's a 1 with 43 zeros after it, unless it's a 1 with 123 zeros after it.  Indeed, some observers insist that the true number of possible chess games is, for all practical purposes, infinite.  And yet since 1997, computers have been beating humans at their own game.
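The first of those numbers can be checked with a few lines of arithmetic.  Each side has 20 legal first moves (a standard chess fact: 16 pawn moves plus 4 knight moves), and the rest is simple exponentiation:

```python
# Each side has 20 legal first moves (16 pawn moves + 4 knight moves),
# so after one move apiece there are 20 * 20 = 400 possible positions.
moves_per_side = 16 + 4
positions_after_one_move_each = moves_per_side ** 2
print(positions_after_one_move_each)  # 400

# The estimated range of legal positions spans 80 orders of magnitude.
low, high = 10 ** 43, 10 ** 123
print(high // low)  # a 1 followed by 80 zeros
```

Even at the low end of that estimate, the number of positions dwarfs what any human could ever examine--which is exactly why brute computational crunching pays off.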

Of course, medicine brings with it ethical issues that transcend any mere game.  But some of the issues in medicine are simply complicated puzzles that need to be solved--and that’s where computers excel.  Consider some bone issues: How did we figure out that Vitamin D cures rickets?   Or that calcium supplements help with osteoporosis?  Or that glucosamine alleviates arthritis?   Those over-the-counter approaches might seem simple, but they are only simple in retrospect.  They weren’t simple at all on the front end.    Aspirin is dirt cheap now, but it was the wonder-drug of its time, and, come to think of it, it still is a wonder drug, even if it is abundant.

That's the problem-to-solution scenario that we want to see repeated, over and over.  Before computers, humans were able to solve many of these problems; with computers, they can solve many more.
And so what about back pain?   Back pain, after all, was the issue that got us started--what to do about costly and ineffective treatments in Minnesota?   Can computers crunch our way to a solution for back pain?  

And the answer is, we don’t know yet.   But what we do know is that computers, combined with databases, can put information about backs on the grid, for the benefit of doctors, patients, and the public interest.   

Using the medical grid, we can make better medical information available to all concerned.  And in making it available, we can make it symmetrical.  When information is available and symmetrical, it becomes predictable--predictable in the good sense, in the way a ruler is predictable in being exactly 12 inches long, or a good is guaranteed to contain exactly what the label says it contains.  When all the dimensions of a problem are known and understood, the solution gets easier.

Doctors might be offended by these comparisons, but then, we might note, all boutiquers are offended by the idea that what they do can or should be mass-produced.  By their nature, boutiquers like the idea that they can and should craft a specific solution to each specific problem.  Yet the downside of boutiquing is that the result can be expensive, and that it can suffer from randomness and variability in the doctoring, as well as corruption.  The solution, therefore, is standardization.  Yet as we have seen, we are not talking about mass production.  We are talking, instead, about personalization of a kind that can only occur atop a platform of predictability and quality.  The doctor should always feel free to make an innovative recommendation to the patient, based on his or her own judgment, bolstered by the accumulation of all medical learning--and only a computer can make that possible.  The upside of boutiquing is that a solution can be crafted for each specific situation; the downside--making a horrendous medical error--is guarded against by the computer.

By the same token, the patient should have access to the best available information, too.  Indeed, there's no reason why the patient shouldn't see the same information that the doctor has.  Yes, the information is inherently complicated, but that's where the arts of presentation come in, to help the patient see and understand what his or her options might be.  Such art, along with the science of ergonomics, moves us toward the complicated becoming simple.  If it can be done for cashiers in stores, it can be done for patients in medical offices.  If knowledge is power, as Francis Bacon said, we all need more knowledge.

Meanwhile, the reality is that we are already moving in this direction, toward computerization,  albeit at an agonizingly glacial pace.   Much medical information is already on the grid--just not enough.   
How do we get more medicine on the grid--and faster?  We are all familiar with information services such as WebMD, The Health Central Network, and Patients Like Me, but those services, valuable as they are, can only rarely get inside the "decision loop" of doctors and patients.  That is, the outside services rarely have access to the needed information about the individual patient.  And from the companies' point of view, that's a good thing.  Why?  Because under the current regulatory regime, as legal expert James R. Wootton points out, they would risk being sued for any mistakes that get made in their name.  As we have learned by now, malpractice is whatever a jury judges it to be, as distinct from what is objectively and demonstrably true.

Similarly, we have seen that electronic health records (EHRs) have not yet taken off.  It seems hard to believe that, in an era when credit card companies process trillions of calculations about sales, we have not achieved a similar record-keeping of medical information.  But as legal expert Wootton explains, the same fear of liability keeps records from being automated--if all data were automated, they could just as easily be searched by a trial lawyer as by another doctor, and no doctor can afford unlimited predatory searching.  If we want the benefits of large databases, we have to seal them off from John Edwards & Co.  Absent such a change, it's likely that the Obama administration's multi-billion-dollar effort to digitalize medical records will prove to be a deep disappointment.
And so, of course, if we can't even digitalize--digitalize in a larger, database sense--EHRs, which are retrospective, there's little reason to be optimistic about digitalizing such records as a prospective tool for medical treatment.

Thus the opportunity to improve healthcare is being lost.  If Mikel Hehn, the man in Minnesota with the bad back, had been able to read an instant summary of the case histories of all who had shared his condition and undergone his surgery, then he might have made a different decision.  And if the doctors who did the procedure had had access to the same contraindicating information, they might have been more hesitant to prescribe surgery that was so hard to defend in the court of medical opinion.  And finally, with better information, fully transparent to all of us, we would all be able to see trouble spots.  That's what the Dartmouth Atlas purports to do, although its methodology has come under severe challenge.  Such disputation is not a final argument against Dartmouth or the Dartmouth approach; it's simply a reminder that other portals, with their own ideas, should enter the same space.  As we all know, an individual data point can be true, yet framed into a context that is misleading at best and flatly inaccurate at worst.  As the late Jack Kemp said of all experts, "They don't care how much you know till they know that you care."


And that's the problem with the Obama approach--people don't dispute that the experts have knowledge, but they do worry about whether they have empathy, and the right medical model in their heads.  And so, for example, the best way to deal with the problem of back pain is to cure back pain.  And if we aren't there yet, we should get there.  And getting there, getting to a cure, will require more technology, and more trial and error, not less.

But even along the way, we will see incremental gains.   

If, for example, a 40-year-old man with back pain goes to the doctor, it would be very helpful if both he and his doctor had full access to the full medical information about every other man of his age group who shared his characteristics.  Out of that welter of data, the best medical conclusion would come.  We know that the more data, properly structured, the better.  That's why the Internet is so valuable, and why search engines and databases that properly structure needed information are even more valuable.  And that's what we all need: a machine that helps us to manage all the information about one of the most precious things we have: our health.  To get there, of course, the "med mal" liability issues would have to be dramatically revised.
  
But where would this information come from?  And what about privacy?   Would we all want to contribute data to the large cloud?  Who would do all the work?  

The recent experience of the Net is that many people are happy to put up information, and to do real work--all to help others, for no direct gain to themselves.  Wikipedia is the most obvious example, but so are millions of other sites on which people labor at length to create value for others, for no monetary reward.  Many of those sites, of course, concern medical issues--protests against bad treatment, advice on good treatment, and general commentary on health matters.  To be sure, there's no system-wide quality control, and that's a problem.  So there's real value to be found in building up a "brand name," or names, to monitor this information, all the while making it easy for the widest possible audience to understand.

And as for privacy, those concerns, too, must be addressed.  Uploaded information can be "de-identified" before it goes into the database; that's the way the Federal Aviation Administration handles the sensitive air-safety information that it collects from each airline.
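What "de-identification" might look like, in a deliberately minimal sketch: strip the direct identifiers from a record and replace the patient ID with a salted hash, so records can be linked to one another but not traced back to a person.  The field names and the record here are invented for illustration, and real systems follow formal standards (such as HIPAA's Safe Harbor rules) rather than anything this simple:

```python
import hashlib

# Hypothetical direct identifiers to strip before a record enters the database.
DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone"}

def de_identify(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted hash,
    so records can be linked to each other but not back to a person."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + str(record["patient_id"])).encode()).hexdigest()
    cleaned["patient_id"] = token
    return cleaned

# An invented example record; only the clinical fields survive de-identification.
record = {"patient_id": "MN-1234", "name": "John Doe", "ssn": "000-00-0000",
          "condition": "chronic back pain", "procedure": "spinal fusion"}
print(de_identify(record, salt="clinic-secret"))
```

The point of the salt is that an outsider who knows a patient's ID still cannot recompute the token; only the custodian of the salt can link records to people.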

So how do we get doctors to participate?  As of now, it is estimated that doctors record only about 1 percent of the information they get about a patient.  Part of the reason, of course, is time--it takes time to write things down.  And yet as we know, many of us already possess devices that we use to collect and record data: examples include heart-rate and blood-sugar monitors, as well as all manner of devices we use to measure our calorie consumption and exercise.  In addition, many machines, from hearing aids to heart pumps, generate information that could be recorded in a doctor's office, among other places, including the home (which would probably be the best place, although in the "clouded" era, location doesn't mean as much as it once did).  And one day soon, RFID chips in individual pills will generate still more valuable information.

Indeed, it's already possible to see an "internet of things," that is, devices communicating with each other, in a language that other devices are equipped to receive and transcribe.  As we have seen, it's those sorts of rote tasks that machines are so good at doing; the dullest and most routine tasks can be automated, if we are willing to automate them.  But to automate the collection of vast troves of information, and then to centralize them, is to make an open invitation to lawsuits; so once again, we can pause to note that no kind of EHR is going to work if it opens the way to more lawsuits.  So those laws will have to be changed.  That's not a small undertaking, but it is a necessary undertaking.
  
That’s the goal: The gridding of medicine in a way that improves personal health, not tort-lawyer wealth.   But now let’s go further, boldly going where no man--and no woman--has gone before.

IV. Beaming Up The Tricorder  

The Bloomberg article makes plain that we have a serious problem.  And yet as we have also seen, traditional politics is unlikely to come up with a useful answer to that problem.  The best answer is not going to be bureaucrats snooping around, getting in between patients and doctors, nor is it going to be trial lawyers swooping in to profiteer.  Indeed, any proposed "solution" that involves having the government more closely monitor the doctor-patient relationship is likely to be harmful, and to generate a backlash.  If people are at least somewhat resentful of the information asymmetry possessed by doctors, they are very resentful of the power asymmetry possessed by government officials.  Moreover, any proposed "solution" that seems to impinge on patient privacy is a non-starter with ordinary people, while any move toward EHRs that opens the way to more lawsuits is a non-starter for professionals.
So how to do all that?  In Einsteinian terms, we need to take the problem up a notch, to a new level of technology; we will only solve this problem on a higher level that takes us from the age of bureaucracy to the age of cyber-technology.   And so we need a Tricorder.  

We all remember the Tricorder from "Star Trek."  It was the handheld device that the Starship Enterprise's Dr. McCoy used to diagnose patients, among other critical functions.  OK, the Tricorder is a fictional device from a TV series set in the 23rd century, but as we have been reminded over and over again, real-world solutions often come from sci-fi imaginations.  People imagined human flight, for example, and rocket ships, and space travel, long before scientists knew how to achieve those goals.  And as the website Technovelgy endlessly reminds us, the dreams of literary fabulists have often encouraged practical scientists and engineers to render those imagined things into reality.
So the challenge is to maintain the boutique benefits of individualized treatment, while also gaining the “scalable” benefits of standardization and mass production.   There is a way, first seen in the 1960s in fanciful form, but now possible--albeit, as a practical matter, a very long way off.   

If we enter the "Star Trek" universe, we can say that the Tricorder is a 23rd-century portable computer.  And because it is a computer, it must rely on pattern recognition; whatever the symptoms of a disease or an injury might be, even if they are not visible to the naked eye, the Tricorder sees them and makes sense of them--it recognizes them.  That's pattern recognition, and it's based on a computer's ability to crunch through, quickly, a huge number of variables, until it settles on the best candidate for an answer.  And so one way or another, each Tricorder must be hooked up to "the cloud," as we call it in the 21st century.  That is, each unit is tethered to vast databases that enable it to identify a disease or syndrome, even if it occurred somewhere else in the galaxy.  That's what makes the Tricorder so effective: it is on the grid, a universal grid.  It knows everything, and so, of course, it can diagnose everything.
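That pattern-recognition step--crunching through candidates until the best one surfaces--can be sketched as a nearest-neighbor search: score a new case against every case in a database and return the closest match.  The "database" and symptom names below are toy inventions for illustration, nothing like a real diagnostic system:

```python
# Toy pattern recognition: match a new set of symptoms against a database
# of known cases and return the best candidate diagnosis.
# The cases and symptom names are invented for illustration.
CASE_DATABASE = {
    "influenza":   {"fever", "cough", "fatigue", "aches"},
    "common cold": {"cough", "sneezing", "sore throat"},
    "allergies":   {"sneezing", "itchy eyes"},
}

def best_match(symptoms: set) -> str:
    """Pick the diagnosis whose symptom set overlaps most with the input
    (Jaccard similarity: size of the overlap divided by size of the union)."""
    def score(known):
        return len(symptoms & known) / len(symptoms | known)
    return max(CASE_DATABASE, key=lambda dx: score(CASE_DATABASE[dx]))

print(best_match({"fever", "cough", "aches"}))  # influenza
```

The principle scales: the bigger the case database, the better the candidate answers--which is exactly why the Tricorder has to be tethered to the grid.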

To be sure, the Tricorder can do things that we don't know how to do.  For example, Dr. McCoy can simply point the Tricorder at a patient and make all manner of diagnoses, instantaneously.  We aren't there--yet.  But we are getting closer.  Lasers and ultrasound at least point us in the direction of instant diagnosis.  So that's the challenge ahead--to make it ever more real.

On the other hand, if we look at the picture above, from the mid-1960s, we can see that the Tricorder looks distinctly low-tech.  Indeed, an Apple iPhone or other smart device is a lot cooler and sleeker than the Tricorder; so in that sense, the machines of 2010 have already leapt ahead of the envisioned machines of the 2200s.

So we might ask: if smartphones can do so many things, why aren't they being used more to better our health?  Why is medicine being bypassed in the technological revolution coming out of Silicon Valley?  Why aren't we moving toward Tricorder-ization?  The answer, as we have seen, is that the obstacles to Tricorderization are more legal than technical.  If a smartphone has access to vast databases to play a game or to buy something, yes, mistakes can happen, but liability is sharply limited--and so such access is a done deal.  But if access to medical records means liability for medical mistakes, fuhgeddaboutit.
The challenge, then, is to make function as niftily effective as form.   And that will require a new kind of framework--a framework of legal and regulatory protection.   Legal protection, we might add, not for the sake of coddling malefactors, but for helping push technological development forward.   

Otherwise, smart as our smartphones might be, they won't ever do what a Tricorder could do.  Machines can keep track of things for us, but the smartest smartphone can't diagnose.  We want a smartphone that can go toe to toe, as it were, with a doctor about diagnosis and treatment.  Even if we don't yet have the technology to make distance-diagnoses of ourselves, we do have the technology to download all pertinent data from the cloud, and from that download we could create a proxy, or avatar, diagnosis, based on all available information from throughout the network.  And that proxy diagnosis could be used to benchmark the doctor's diagnosis.  It would be useful in providing an instant second opinion about whether, for example, a doctor in Minnesota is too eager to prescribe a vertebrae-fusing operation.

If we will need a Tricorder in the 23rd century, we need it, too, in the 21st century.  And we could be making the early versions, now, if we possessed the political vision.

And if we could, we would be addressing the problems identified in that Bloomberg article in a constructive way.   We would be using information as a Baconian power tool--to empower all of us to make better choices, to live better and longer lives.