Editorials

The Southwest Journal of Pulmonary and Critical Care welcomes submission of editorials on journal content or issues relevant to pulmonary, critical care, or sleep medicine. Authors are urged to contact the editor before submission.

Rick Robbins, M.D.

One Example of Healthcare Misinformation

On June 21st, NBC News aired an investigation into HCA Healthcare, accusing HCA administration of pressuring doctors, nurses, and families to have patients enter hospice care or be discharged (1). Moving patients to hospice care can lower a hospital's inpatient mortality rate and length of stay, increasing profits and bonuses for executives. It works this way: if a patient passes away in a hospital, that death adds to the facility's inpatient mortality figures. But if that person dies after a transfer to hospice care, even if the patient stays at the same hospital in the same bed, the death doesn't count toward the facility's inpatient mortality rate because the patient was technically discharged from the hospital. A reduction in lengthy patient stays is a secondary benefit, according to an internal HCA hospital document (1). Under end-of-life care, patients don't typically live long, so the practice can allow HCA to replace patients who may be costing the facility money because their insurance has run out with those who generate fresh revenue.

These practices are not unique to HCA, nor are they new. Manipulation of patient data such as mortality dates back at least to the 1990s. For example, at the Phoenix VA the floor inpatient mortality rate was low while the ICU mortality rate was high, apparently because of excess mortality in floor-to-ICU transfers (2). Reducing inappropriate ICU transfers from the hospital floor corrected the high ICU mortality rate, and similar changes were seen for length of stay. There were also dramatic reductions in the reported incidence of ICU ventilator-associated pneumonias and central line-associated bloodstream infections simply by altering the reported cause of pneumonia or sepsis. For example, ventilator-associated pneumonia was called “delayed onset community-acquired pneumonia,” and sepsis was blamed on a source other than the presence of a central line.

These data manipulations were not restricted to inpatient mortality or length of stay. Outrageously exaggerated claims of improvement and lives saved became almost the norm. In 2003 Jonathan B. Perlin, then VA Under Secretary for Health, recognized that outcome data were needed for interventions such as vaccination with the 23-valent pneumococcal vaccine. On August 11, 2003, at the First Annual VA Preventive Medicine Training Conference in Albuquerque, NM, Perlin claimed that the increase in pneumococcal vaccination saved 3914 lives between 1996 and 1998 (3). Furthermore, Perlin claimed pneumococcal vaccination resulted in 8000 fewer admissions and 9500 fewer days of bed care between 1999 and 2001. However, these data were not measured but extrapolated from a single, non-randomized, observational study (4). Most studies have suggested that the 23-valent vaccine is of little or no value in adults (5).

Why bother to manipulate these data? The common denominator is money. Administrators demand that the numbers meet the requirements for receiving their bonuses (1). At the VA the focus shifted from meeting the needs of the patient to meeting the performance measures. HCA administration is accused of similar manipulations, and speculation is that many if not most healthcare administrators behave similarly. The rationale is that the performance measures represent good care, which is not necessarily true (5).

Who can prevent this pressuring of caregivers and patient families to make the numbers look better? One would expect that regulatory organizations such as the Joint Commission, Institute of Medicine, Centers for Medicare and Medicaid Services, Department of Health and Human Services, and Department of Veterans Affairs would require that reported data be accurate. However, to date they have shown little interest in questioning data that make their administrations look good. Consider the Joint Commission, one of the most prominent regulatory groups in healthcare. After leaving the VA in 2006, Perlin was named President, Clinical Operations and Chief Medical Officer of Nashville, Tennessee-based HCA Healthcare before being named President, and subsequently CEO, of the Joint Commission in 2022. When regulatory organizations get caught burying their heads in the sand, administrators usually respond by blaming the malfeasance on a few bad apples. An example is the VA wait-time scandal that led to the ouster of the Secretary of Veterans Affairs, Eric Shinseki, and the termination of multiple administrators at the Phoenix VA. It should be noted that although Phoenix was the focus of the VA Inspector General, at least 70% of medical centers were misreporting wait times similarly to Phoenix (6).

Who should be the watchdogs and whistleblowers on these and other questionable practices? Obviously, the hospital doctors and nurses. However, hospitals have these employees so firmly under their thumb that any complaint is often met with the harshest sanctions. Doctors or nurses who complain are often labeled “disruptive” or accused of being substandard. The latter can be accomplished by a sham review of patient care and reporting of the physician or nurse to a regulatory authority such as the National Practitioner Data Bank or state boards of medicine or nursing (7). Financial data may be even easier to manipulate (8). A recent example comes from Kern County Hospital in Bakersfield, CA (9). There the hospital's employee union accuses the hospital of $23 million in overpayments to hospital executives over 4 years. According to the union, the hospital tried to cover up the overpayments, and the executives have now asked the hospital board to cover them.

The point is that hospital data can be manipulated. One should always view self-reported data with healthy skepticism, especially if administrative bonuses depend on the data. Some regulatory authority needs to examine and certify that reported data are correct. It seems unlikely that Dr. Perlin's Joint Commission will carefully examine and report accurate hospital data. Hopefully, another regulator will accept the charge of ensuring that hospital data are accurate and reliable.

Richard A. Robbins, MD

Editor, SWJPCCS

References

  1. NBC News. HCA Hospitals Urge Staff to Move Patients to Hospice to Improve Mortality Stats Doctors and Nurses Say. June 21, 2023. Available at: https://www.nbcnews.com/nightly-news/video/hca-hospitals-urge-staff-to-move-patients-to-hospice-to-improve-mortality-stats-doctors-and-nurses-say-183585349871 (accessed 6/28/23).
  2. Robbins RA. Unpublished observations.
  3. Perlin JB. Prevention in the 21st Century: Using Advanced Technology and Care Models to Move from the Hospital and Clinic to the Community and Caring. Building the Prevention Workforce: August 11, 2003. First Annual VA Preventive Medicine Training Conference. Albuquerque, NM.   
  4. Nichol KL, Baken L, Wuorenma J, Nelson A. The health and economic benefits associated with pneumococcal vaccination of elderly persons with chronic lung disease. Arch Intern Med. 1999;159(20):2437-42. [CrossRef] [PubMed]
  5. Robbins RA. The unfulfilled promise of the quality movement. Southwest J Pulm Crit Care. 2014;8(1):50-63. [CrossRef]
  6. Department of Veterans Affairs Office of Inspector General. Concerns with Consistency and Transparency in the Calculation and Disclosure of Patient Wait Time Data. April 7, 2022. Available at: https://www.va.gov/oig/pubs/VAOIG-21-02761-125.pdf (accessed 6/28/23).
  7. Chalifoux R Jr. So, what is a sham peer review? MedGenMed. 2005 Nov 15;7(4):47; discussion 48. [PubMed].
  8. Beattie A. Common Clues of Financial Statement Manipulation. Investopedia. April 29, 2022. Available at: https://www.investopedia.com/articles/07/statementmanipulation.asp (accessed 7/28/23).
  9. Kayser A. California Hospital Accused of Overpaying for Executive Services. Becker’s Hospital Review. June 28, 2023. Available at: https://www.beckershospitalreview.com/compensation-issues/california-hospital-accused-of-overpaying-for-executive-services.html (accessed 6/29/23).
Cite as: Robbins RA. One Example of Healthcare Misinformation. Southwest J Pulm Crit Care Sleep. 2023;27(1):8-10. doi: https://doi.org/10.13175/swjpccs029-23

The Need for Improved ICU Severity Scoring

How do we know we’re doing a good job taking care of critically ill patients? This question is at the heart of the paper recently published in this journal by Raschke and colleagues (1). Currently, one key method we use to assess the quality of patient care is to calculate the ratio of observed to predicted hospital mortality, or the standardized mortality ratio (SMR). Predicted hospital mortality is estimated with prognostic indices that use patient data to approximate their severity of illness (2). Examples of these indices include the Acute Physiology and Chronic Health Evaluation (APACHE) score, the Simplified Acute Physiology Score (SAPS), the Mortality Prediction Model (MPM), the Multiple Organ Dysfunction Score (MODS), and the Sequential Organ Failure Assessment (SOFA) (3).
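The SMR arithmetic itself is straightforward: sum each patient's predicted probability of death to get the expected deaths, then divide observed by expected. A minimal sketch, using entirely hypothetical predicted probabilities rather than output from any real prognostic index:

```python
# Standardized mortality ratio (SMR) = observed deaths / expected deaths,
# where expected deaths is the sum of each patient's predicted probability
# of death from a prognostic index. Probabilities here are hypothetical.

def standardized_mortality_ratio(observed_deaths, predicted_probs):
    expected = sum(predicted_probs)
    return observed_deaths / expected

# Hypothetical 60-patient ICU cohort with 12 observed deaths.
predicted = [0.10, 0.25, 0.60, 0.05] * 15   # expected deaths = 15.0
smr = standardized_mortality_ratio(12, predicted)
print(smr)  # < 1 means fewer deaths than the index predicted
```

An SMR near 1 suggests mortality in line with the prediction; values well above or below 1 are what trigger scrutiny of an ICU's performance.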

Raschke et al. (1) evaluated the performance of the APACHE IVa score in subgroups of ICU patients. APACHE is a severity-of-illness score initially created in the 1980s and subsequently updated in 2006 (4,5). This index was developed using data from 110,558 patients at 45 hospitals located throughout the United States, encompassing 104 intensive care units (ICUs) including mixed medical-surgical, coronary, surgical, cardiothoracic, medical, neurologic, and trauma units. The final model used 142 variables, including information from the patient's medical history, the admission diagnosis, and physiologic data obtained during the first day of ICU admission (4). Although it has subsequently been validated in other large general ICU patient cohorts, its accuracy in subgroups of ICU patients is less clear (6).

To benchmark whether the APACHE IVa performed sufficiently well, Raschke et al. (1) employed an interesting and logical strategy. They created a two-variable severity score (2VSS) to define a lower limit of acceptable performance. As opposed to the 142 variables used in APACHE IVa, the 2VSS used only two: patient age and need for mechanical ventilation. They included 66,821 patients in their analysis, drawn from a variety of ICUs located in the southwest United States. The APACHE IVa and 2VSS were calculated for all patients. Although the APACHE IVa outperformed the 2VSS in the general cohort of ICU patients, when patients were divided into subgroups based on admission diagnosis the APACHE IVa showed surprising deficiencies. In patients admitted for coronary artery bypass grafting (CABG), the APACHE IVa did no better in predicting mortality than the 2VSS. The ability of APACHE IVa to predict mortality was also significantly reduced in patients admitted for gastrointestinal bleeding, sepsis, and respiratory failure as compared to its ability in the general cohort (1).
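A common way to compare the discriminant accuracy of two severity scores head to head is the area under the ROC curve: the probability that a randomly chosen non-survivor received a higher score than a randomly chosen survivor. A toy sketch with invented scores and outcomes, not the study's data:

```python
# Discrimination of a severity score can be summarized as the AUROC: the
# probability that a randomly chosen non-survivor scored higher than a
# randomly chosen survivor. All numbers below are invented for illustration.

def auroc(scores, died):
    """Pairwise (Mann-Whitney) AUROC; ties count as half a win."""
    pos = [s for s, d in zip(scores, died) if d]
    neg = [s for s, d in zip(scores, died) if not d]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

died   = [1, 1, 0, 0, 0, 1, 0, 0]           # 1 = hospital death
rich   = [80, 50, 40, 30, 55, 90, 20, 45]   # hypothetical many-variable score
simple = [70, 45, 60, 30, 40, 75, 55, 35]   # hypothetical two-variable score

print(auroc(rich, died), auroc(simple, died))
```

With these made-up numbers the richer score discriminates only slightly better than the two-variable one, which is the kind of comparison the benchmarking strategy above formalizes.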

The work by Raschke et al. (1) convincingly shows that APACHE IVa underperforms when evaluating outcomes in subgroups of patients. In some instances, it did no better than a metric that used only two input variables. But why does this matter? One might argue that the APACHE system was not created to function in this capacity. It was designed and validated using aggregate data. It was not designed to determine prognosis for individual patients, or even for subsets of patients. However, in real-world practice it is used to estimate performance in individual ICUs, which have unique case mixes that may not approximate the populations used to create and validate APACHE IVa. Indeed, other studies have shown that the APACHE IVa yields different performance assessments in different ICUs depending on their varying case mixes (2).

So where do we go from here? The work by Raschke et al. (1) is helpful because it offers the 2VSS as an objective method of defining a lower limit of acceptable performance. In the future, more sophisticated and personalized tools will need to be developed to more accurately benchmark ICU quality and performance.  Interesting work is being done using local data to customize outcome prediction (7,8). Other researchers have employed machine learning techniques to iteratively improve predictive capabilities of outcome measures (9,10). As with many aspects of modern medicine, the complexity of severity scoring will likely increase as computational methods allow for increased personalization. Given the importance of accurately assessing quality of care, improving severity scoring will be critical to providing optimal patient care.

Sarah K. Medrek, MD

University of New Mexico

Albuquerque, NM USA

References

  1. Raschke RA, Ramos KS, Fallon M, Curry SC. The explained variance and discriminant accuracy of APACHE IVa severity scoring in specific subgroups of ICU patients. Southwest J Pulm Crit Care. 2018;17:153-64. [CrossRef]
  2. Kramer AA, Higgins TL, Zimmerman JE. Comparing observed and predicted mortality among ICUs using different prognostic systems: why do performance assessments differ? Crit Care Med. 2015;43:261-9. [CrossRef] [PubMed]
  3. Vincent JL, Moreno R. Clinical review: scoring systems in the critically ill. Crit Care. 2010;14:207. [CrossRef] [PubMed]
  4. Zimmerman JE, Kramer AA, McNair DS, Malila FM. Acute Physiology and Chronic Health Evaluation (APACHE) IV: hospital mortality assessment for today's critically ill patients. Crit Care Med. 2006;34:1297-1310. [CrossRef] [PubMed]
  5. Zimmerman JE, Kramer AA, McNair DS, Malila FM, Shaffer VL. Intensive care unit length of stay: Benchmarking based on Acute Physiology and Chronic Health Evaluation (APACHE) IV. Crit Care Med. 2006;34:2517-29. [CrossRef] [PubMed]
  6. Salluh JI, Soares M. ICU severity of illness scores: APACHE, SAPS and MPM. Curr Opin Crit Care. 2014;20:557-65. [CrossRef] [PubMed]
  7. Lee J, Maslove DM. Customization of a Severity of Illness Score Using Local Electronic Medical Record Data. J Intensive Care Med. 2017;32:38-47. [CrossRef] [PubMed]
  8. Lee J, Maslove DM, Dubin JA. Personalized mortality prediction driven by electronic medical data and a patient similarity metric. PLoS One. 2015;10:e0127428. [CrossRef] [PubMed]
  9. Awad A, Bader-El-Den M, McNicholas J, Briggs J. Early hospital mortality prediction of intensive care unit patients using an ensemble learning approach. Int J Med Inform. 2017;108:185-95. [CrossRef] [PubMed]
  10. Pirracchio R, Petersen ML, Carone M, Rigon MR, Chevret S, van der Laan MJ. Mortality prediction in intensive care units with the Super ICU Learner Algorithm (SICULA): a population-based study. Lancet Respir Med. 2015;3:42-52. [CrossRef] [PubMed]

Cite as: Medrek SK. The need for improved ICU severity scoring. Southwest J Pulm Crit Care. 2019;18:26-8. doi: https://doi.org/10.13175/swjpcc004-19


Remembering the 100,000 Lives Campaign

Earlier this week the Institute for Healthcare Improvement (IHI) emailed its weekly bulletin celebrating that it has been ten years since the end of the 100,000 Lives Campaign (Appendix 1). This was the campaign, according to the bulletin, that put IHI on the map. The Campaign started at the IHI National Forum in December 2004, when IHI's president, Don Berwick, announced that IHI would work with nearly three-quarters of US hospitals to reduce needless deaths by 100,000 over 18 months. A phrase borrowed from political campaigns became IHI's cri de coeur: “Some is not a number. Soon is not a time.”

The Campaign relied on six key interventions:

  • Rapid Response Teams
  • Improved Care for Acute Myocardial Infarction
  • Medication Reconciliation
  • Preventing Central Line Infections
  • Preventing Surgical Site Infections
  • Preventing Ventilator-Associated Pnemonia [sic]

According to the bulletin, the Campaign’s impact rippled across the organization and the world. IHI listed some of the lasting impacts:

  • IHI followed with the 5 Million Lives Campaign – a campaign to avoid 5 million instances of harm.
  • Don Berwick and Joe McCannon brought lessons from leading the Campaigns to Centers for Medicare and Medicaid Services (CMS) and the Partnership for Patients.
  • Related campaigns were launched in Canada, Australia, Sweden, Denmark, UK, Japan, and elsewhere.

IHI's profile definitely grew. One indicator tracked by IHI was media impressions, which rose to 250 million in the final year of the Campaign. IHI even put a recreational vehicle on the streets to promote their Campaign (Appendix 1). Campaign Manager Joe McCannon was on CNN to discuss the results of the Campaign.

How did IHI achieve such remarkable results in saving patients' lives? The answer is they did not. Review of the evidence basis for at least 3 of these interventions revealed fundamental flaws (1). The largest trial of rapid response teams failed to show any improvement, and the interventions to prevent central line infections and ventilator-associated pneumonia were weakly evidence-based at best and unlikely to improve patient outcomes (2-4). The poor methodology and sloppy estimation of the number of lives saved were pointed out in the Joint Commission's Journal on Quality and Patient Safety by Wachter and Pronovost (5). IHI failed to adjust its estimates of lives saved for case mix, which accounted for nearly three out of four "lives saved." The actual mortality data were supplied to IHI by hospitals without audit, and 14% of the hospitals submitted no data at all. Moreover, the reports from even those hospitals that did submit data were usually incomplete. Most strikingly, IHI was so anxious to announce its success that its claims rested on only 15 months of data; the final three months were extrapolated from hospitals' previous submissions. Important confounders, such as the background decline in inpatient mortality rates, were ignored. Even if the Campaign "saved" lives, it would be unclear whether the Campaign had anything to do with the reduction (5). Buoyed by their success, IHI proceeded with the 5 Million Lives Campaign (6). However, this campaign ended in 2008 and was apparently not successful (7). Although IHI promised to publish results in major medical journals, to date no publication is evident.

A fundamental flaw in the logic behind the 100,000 Lives Campaign was the assumption that preventing a complication, for example an infection, results in a life saved. Many of our patients in the ICU have an infection as their life-ending event. However, the patients are often in the ICU because of their underlying disease(s). In many instances their underlying diseases, such as cancer, heart disease, or chronic obstructive pulmonary disease, are so severe that survival is unlikely. It is akin to poisoning, stabbing, shooting, and decapitating a hapless victim and claiming that had the decapitation been prevented, survival was assured. IHI also assumed that the data were collected completely and honestly. However, the data were incomplete, as pointed out above, and the honesty of self-reported hospital data has also been called into question (8).

The bulletin correctly pointed out that Berwick did carry this political campaign, with its sloppy science, to Washington as CMS administrator. Under Berwick's leadership, CMS would announce a campaign, have the hospitals collect the data, extrapolate the mortality or other benefit, and prepare a press release. This scheme continues to this day (9). CMS further confounded the data by providing financial incentives to hospitals, often resulting in bonuses for hospital executives, making the data further suspect. Certainly, CMS would not examine the hospital data with skepticism, because the success of its campaigns was in its own political best interest.

The 100,000 Lives Campaign also had one other outcome: it made many of us who believe in the power of evidence-based medicine to enrich patients' lives suspicious of these political maneuvers. To rephrase a well-known quote, "The first victim of politics is the truth." These campaigns certainly benefit hospitals and their administrators financially and benefit bureaucrats politically, but whether they benefit patients is questionable. The bulletin from IHI should be viewed for what it is: a political self-promotion to rewrite the failed history of the 100,000 Lives Campaign.

Richard A. Robbins, MD

Editor, SWJPCC

References

  1. Robbins RA. The unfulfilled promise of the quality movement. Southwest J Pulm Crit Care. 2014;8(1):50-63. [CrossRef]
  2. Hillman K, Chen J, Cretikos M, Bellomo R, Brown D, Doig G, Finfer S, Flabouris A; MERIT study investigators. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet. 2005;365(9477):2091-7. [CrossRef] [PubMed]
  3. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care 2012;4:163-73.
  4. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care 2011;3:40-8.
  5. Wachter RM, Pronovost PJ. The 100,000 Lives Campaign: A scientific and policy review. Jt Comm J Qual Patient Saf. 2006;32(11):621-7. [PubMed]
  6. Institute for Healthcare Improvement. 5 million lives campaign. Available at: http://www.ihi.org/about/Documents/5MillionLivesCampaignCaseStatement.pdf (accessed 6/24/16).
  7. DerGurahian J. IHI unsure about impact of 5 Million campaign. Available at: http://www.modernhealthcare.com/article/20081210/NEWS/312109976 (accessed 6/24/16).
  8. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med. 2012;157:305-12. [CrossRef] [PubMed]
  9. AHRQ Report: Hospital-Acquired Conditions Continue To Decline, Saving Lives and Costs. Dec 1, 2015. Available at: http://www.ahrq.gov/news/newsletters/e-newsletter/496.html#1 (accessed 6/24/16).

Cite as: Robbins RA. Remembering the 100,000 lives campaign. Southwest J Pulm Crit Care. 2016;12(6):255-7. doi: http://dx.doi.org/10.13175/swjpcc058-16


CMS Penalizes 758 Hospitals for Safety Incidents

The Centers for Medicare and Medicaid Services (CMS) is penalizing 758 hospitals for higher rates of patient safety incidents, and more than half of those were also fined last year, as reported by Kaiser Health News (1).

Among the hospitals being financially punished are some well-known institutions, including Yale New Haven Hospital, Medstar Washington Hospital Center in DC, Grady Memorial Hospital, Northwestern Memorial Hospital in Chicago, Indiana University Health, Brigham and Women's Hospital, Tufts Medical Center, University of North Carolina Hospital, the Cleveland Clinic, Hospital of the University of Pennsylvania, Parkland Health and Hospital, and the University of Virginia Medical Center (Complete List of Hospitals Penalized 2016). In the Southwest the list includes Banner University Medical Center in Tucson, Ronald Reagan UCLA Medical Center, Stanford Health Care, Denver Health Medical Center, and the University of New Mexico Medical Center (for a list of Southwest hospitals see Appendix 1). In total, CMS estimates the penalties will cost hospitals $364 million. Look now if you must, but you might want to read below first on how to interpret the data.

The penalties, created by the 2010 health law, are the toughest sanctions CMS has taken on hospital safety. Patient safety advocates worry the fines are not large enough to alter hospital behavior and that they only examine a small portion of the types of mistakes that take place. On the other hand, hospitals say the penalties are counterproductive and unfairly levied against places that have made progress in safety but have not caught up to most facilities. They are also bothered that the health law requires CMS to punish a quarter of hospitals each year. CMS plans to add more types of conditions in future years.

I would like to raise two additional concerns. First, are the data accurate? The data are self-reported by the hospitals, and the accuracy of these self-reports has previously been questioned (2). Are some hospitals being punished for accurately reporting data while others are rewarded for lying? I doubt that CMS will look too closely, since bad data would invalidate its claims that it is improving hospital safety. It seems unlikely that punishing half the nation's hospitals will do much except encourage more suspect data.

Second, do the data mean anything? Please do not misconstrue or twist the truth: I am not advocating against patient safety. What I am advocating for is meaningful measures. Previous research has suggested that the measures chosen by CMS have no correlation, or even a negative correlation, with patient outcomes (3,4). In other words, doing well on a safety measure was associated with either no improvement or a worse outcome, in some cases even death. How can this be? Let me draw an analogy with hospital admissions. About 1% of the 35 million or so patients admitted to hospitals in the US die. The death rate is much lower in the population not admitted to the hospital. By CMS' logic, if we were to reduce admissions by 5%, or 1.75 million, then 17,500 lives (1% of 1.75 million) would be saved. This is, of course, absurd.

Looking at hospital-acquired infections, which make up much of CMS' data, CMS' logic appears similar. For example, insertion of urinary catheters, large-bore central lines, or endotracheal tubes in sick patients is common. The downside is that some patients will develop urinary, line, or lung infections as a complication of these insertions. Many of these sick patients will die, and many will have line infections. The data are usually reported by saying that hospital-acquired infections have decreased, saving 50,000 lives and $12 billion in care costs (5). However, the truth is that hospital-acquired infections are often either not the cause of death or merely the final event in a disease process that caused the patient to be admitted to the hospital in the first place. If 50,000 lives were saved, that should be reflected in hospital death rates or in savings on insurance premiums. Neither has been shown to my knowledge.

So look at the data if you must, but look with a skeptical eye. Until CMS convincingly demonstrates that the data are accurate and that its incentives decrease in-hospital complications, mortality, and costs, the data are suspect. It could be as simple as the penalized hospitals being those that take care of sicker patients. What this means is that some hospitals, perhaps the ones that need the money the most, will receive 1% less CMS reimbursement, which might make care worse rather than better.

Richard A. Robbins, MD

Editor

SWJPCC

References

  1. Rau J. Medicare penalizes 758 hospitals for safety incidents, Kaiser Health News. December 10, 2015. Available at: http://khn.org/news/medicare-penalizes-758-hospitals-for-safety-incidents/ (accessed 12/11/15).
  2. Robbins RA. The Emperor has no clothes: the accuracy of hospital performance data. Southwest J Pulm Crit Care 2012;5:203-5.
  3. Robbins RA, Gerkin RD. Comparisons between Medicare mortality, morbidity, readmission and complications. Southwest J Pulm Crit Care. 2013;6(6):278-86
  4. Lee GM, Kleinman K, Soumerai SB, et al. Effect of nonpayment for preventable infections in U.S. hospitals. N Engl J Med. 2012;367(15):1428-37. [CrossRef] [PubMed]
  5. Department of Health and Human Services. Efforts to improve patient safety result in 1.3 million fewer patient harms, 50,000 lives saved and $12 billion in health spending avoided. December 2, 2014. Available at: http://www.hhs.gov/about/news/2014/12/02/efforts-improve-patient-safety-result-1-3-million-fewer-patient-harms-50000-lives-saved-and-12-billion-in-health-spending-avoided.html (accessed 12/11/15).

Cite as: Robbins RA. CMS penalizes 758 hospitals for safety incidents. Southwest J Pulm Crit Care. 2015;11(6):269-70. doi: http://dx.doi.org/10.13175/swjpcc153-15


Smoking, Epidemiology and E-Cigarettes

"The true face of smoking is disease, death and horror - not the glamour and sophistication the pushers in the tobacco industry try to portray." - David Byrne

In our fellows’ conference we recently reviewed the evolution of the science of clinical epidemiology as it relates to the association of smoking and lung cancer and the concurrent history of tobacco marketing in the United States. 

This story begins in 1950, when Richard Doll and Austin Bradford Hill published their landmark case-control study demonstrating the association between smoking and lung cancer (1). This study was performed with methodological standards that have rarely been matched in the 63 years since. Exhaustive analysis of possible confounders, a multi-stage evaluation of study blinding, determination of dose effect, and the use of multiple analyses to establish consistency are among many examples of the superb attention to detail exercised by Doll and Hill in this study. The results showed that patients with lung cancer were about 15 times more likely than matched control patients to have smoked tobacco (odds ratio 15). The p-value was 0.00000064, indicating that the probability of obtaining such a result by chance alone is less than one in a million. In comparison, many modern case-control trials are characterized by weak associations (odds ratios of 1-3) with p-values that are barely significant. Yet the phenomenal and nearly unparalleled results of this study had practically no discernible effect on the increasing rate of smoking in the following decade.
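The odds ratio in a case-control study comes straight from the 2×2 exposure table. A sketch with invented counts (not the study's actual figures), chosen so the ratio lands near 15:

```python
# Odds ratio from a case-control 2x2 table:
#                 smoked   never smoked
#   lung cancer     a           b
#   controls        c           d
# OR = (a * d) / (b * c). Counts below are illustrative, not Doll and Hill's.

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

a, b = 647, 2    # hypothetical cases: smokers / never-smokers
c, d = 622, 29   # hypothetical controls: smokers / never-smokers
print(odds_ratio(a, b, c, d))  # roughly 15
```

Note how the ratio is driven by the tiny number of never-smokers among the cases; this is why case-control designs can detect strong exposures even with modest sample sizes.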

Many factors opposed the conclusions of Doll and Hill. Atmospheric pollution, perhaps emanating from motor car exhaust or asphalt tarmac, was felt to be the leading suspect in the increasing incidence of lung cancer. At the time, it seemed inconceivable to most people that smoking could cause cancer. Two-thirds of British men smoked. Smoking was widely endorsed by the medical profession; Doll and Hill themselves had both previously been smokers. The British Department of Health did not endorse their findings, amid worries that the study might start a panic. Several prominent statisticians, including Sir Ronald Fisher, publicly criticized the study's design and conclusions. Fisher was a polymath, a genius with significant accomplishments in multiple disciplines, widely recognized as the founder of modern statistics, having invented Fisher's exact test and ANOVA and having collaborated in the development of Student's t-test. Fisher was also an avid smoker. It was later disclosed that Fisher had lucrative financial ties to the tobacco industry, raising the question of whether his criticisms of Doll and Hill were bought and paid for.

Doll and Hill followed up with a stronger study design, performing one of the finest cohort studies ever: the British Doctors Study. They enrolled over 40,000 British physicians, almost 70% of all those registered in Britain. Outcomes in this cohort were eventually evaluated over 50 years and contributed to our knowledge in many areas of medicine. But the results regarding the relationship between smoking and lung cancer were objectively convincing within the first decade of follow-up. In an interim analysis in 1961 (2), the relative risk for lung cancer in smokers was found to be increased 18-fold, consistent with the findings of their case-control trial. Fisher's exact test was incalculable in 1961, since it required the computation of enormous factorials, but I calculated a p-value of 0.0000000000000001 (one in 100 quadrillion) using their data and an online Microsoft statistics program. It is satisfying that Fisher's namesake statistic so convincingly validates the conclusions that he personally disputed. Sir Austin Bradford Hill is famous for his contention that we often over-focus on achieving a p-value < 0.05 in modern medical research; the incomparable statistical significance of this study illustrates his point.
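The enormous factorials that stymied an exact calculation in 1961 are trivial for a modern computer: a one-sided Fisher's exact p-value is just a hypergeometric tail sum. A sketch using only Python's standard library, with a small hypothetical 2×2 table standing in for the cohort's actual counts:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    the probability, under the hypergeometric null, of a table at least as
    extreme as the one observed (cell a at its value or larger)."""
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        # Hypergeometric probability of a table with exactly k in cell a.
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Hypothetical scaled-down table (row 1 = smokers, column 1 = lung cancer
# deaths); these are NOT the British cohort's actual numbers.
print(fisher_exact_one_sided(20, 80, 2, 98))
```

Python's math.comb computes binomial coefficients with exact integer arithmetic, so even tables far larger than this one pose no difficulty today.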

Despite increasing scientific evidence against smoking, cigarette consumption in the U.S. continued to rise, and did not fall below pre-1950 levels until the early eighties.  A further generation of young men took up the habit, many of whom were introduced to smoking in the armed services – cigarettes having been routinely included in the C-rations of US soldiers who fought in WWII, Korea and Vietnam.  Cigarette smoking was endorsed by everyone from movie stars to sports stars to doctors – Bob Hope, Mickey Mantle and Ronald Reagan among them.  Santa Claus appeared in multiple ads with a cigarette in one hand and his red toy bag in the other – fecklessly endorsing multiple different brands including Lucky Strikes and Pall Malls.

Several tobacco advertisement campaigns were particularly influential.  Philip Morris introduced the “Marlboro Man”, considered one of the most brilliant ad campaigns in history, in 1954.  Marlboro cigarettes were filtered.  The implied (but fictitious) protective benefits of the filter were not explicitly marketed, but filtered cigarettes were considered “feminine” at the time.  The use of real rodeo cowboys in the Marlboro ads dramatically changed that impression – particularly in the minds of post-adolescent boys.  One indication of the success of the Marlboro Man is that Philip Morris is said to have spent $300 million finding a replacement when Darrell Winfield, the most famous of the Marlboro Men, retired.

In the late sixties, Philip Morris also marketed smoking to young women with a brand designed specifically for women called Virginia Slims.  Riding the wave of women’s liberation, the slogan “You’ve come a long way, baby” promoted smoking as a way to express emancipation and empowerment.  RJ Reynolds introduced the “Joe Camel” ad campaign in 1987, allegedly targeting children with a cool-looking cartoon of an anthropomorphic camel.  Sounds silly, I know, but it worked.  In the 5 short years after the start of this campaign, annual sales of Camel cigarettes to teenagers rose from $6 million to $470 million.  At the campaign’s peak, six-year-old children could associate the character of “Joe Camel” with Camel cigarettes about as frequently as they could associate Mickey Mouse with Disney.  A study published in JAMA concluded that tobacco experimentation by 700,000 adolescents per year could be attributed to targeted advertising (3).

Although public education had already made great inroads in reducing smoking in the US by the 80’s, legal and governmental anti-smoking pressure began to build thereafter.  In 1988, the estate of Rose Cipollone won the first successful wrongful-harm lawsuit brought on behalf of a smoker against a tobacco manufacturer.  Mangini sued RJ Reynolds on behalf of children in regard to the Joe Camel ad campaign.  In the 1988 Report of the Surgeon General, C. Everett Koop concluded that nicotine has an addictiveness similar to that of heroin.  Koop’s continuing efforts to raise public awareness initiated some of the first public discourse regarding the dangers of second-hand smoke (subsequently found to cause 50,000 deaths per year in the U.S.).  Smoking rates in the United States declined from 38% to 27% during his tenure.

In the 1990s, the tobacco lobby engaged in a comprehensive and aggressive political effort to neutralize clean indoor air legislation, minimize tobacco tax increases, and preserve the industry's marketing strategies.  However, the famous Waxman congressional hearings intervened in 1994.  In sworn testimony before Congress, the CEOs of seven major tobacco companies famously asserted that smoking tobacco was not addictive, contrary to incontrovertible scientific evidence.  Two sources revealed their insincerity.  The first was the testimony of former employees of the tobacco industry, such as Jeffrey Wigand and Victor DeNoble, who testified that the addictive and carcinogenic properties of cigarette tobacco had been artificially manipulated by the industry.  The second was the discovery of internal tobacco industry memos, which revealed that the addictive properties of tobacco were well recognized within the industry as early as the 1960s.  A few excerpts follow:

“… nicotine is addictive. We are, then, in the business of selling nicotine, an addictive drug” July 17, 1963 report by then Brown & Williamson general counsel/vice president Addison Yeaman.

 “The cigarette should be conceived not as a product but as a package. The product is nicotine. …Think of a cigarette as a dispenser for a dose unit of nicotine…”  1972 William Dunn, Jr., of the Philip Morris Research Center, “Motives and Incentives in Cigarette Smoking.”

“Within 10 seconds of starting to smoke, nicotine is available in the brain. . . giving an instantaneous catch or hit . . . Other “drugs” such as marijuana, amphetamines, and alcohol are slower”  Circa 1980  C.C. Greig in a BAT R&D memo

The Waxman hearings resulted in a $368 billion assessment against the tobacco industry, and increased restrictions on advertising and lobbying.  Shortly thereafter, the Joe Camel and Marlboro Man ad campaigns were terminated.  With the public revelation that three previous Marlboro Men had died from lung cancer, that ad campaign had lost its appeal.

In the late 90s/early 2000s, the nicotine content of all major brands of cigarettes was progressively increased, on average by 1.8% per year.  This might theoretically make it harder for smokers to kick the habit.  Sales promotions totaling about $400 per year per smoker were directed at loyal smokers.  Despite restrictions, the tobacco industry continued to invest $25 million per year in lobbying.  Upon further negotiation, the tobacco master settlement was reduced to $200 billion – only $12.7 billion to be paid up front.  The full details of this settlement have become increasingly legally obfuscated over time in my opinion; some states are actually selling tobacco settlement bonds now to protect themselves against loss of future returns from the settlement.

Although US cigarette consumption has dramatically fallen, worldwide sales are peaking, and the international rate of women smokers is still on the rise.  Philip Morris restructured and rebranded its corporation as Altria (sounds like the word “altruistic”).  It subsumed Kraft and Nabisco foods, but the majority of its more than $100 billion in annual revenue is derived from tobacco sales, about two-thirds of which are international.

Many US tobacco firms are rapidly investing in the production and marketing of electronic cigarettes that vaporize nicotine for inhalation.  It is likely that inhaling vaporized nicotine is less dangerous than smoking tobacco.  However, the health effects of inhaling vaporized nicotine are not yet well studied.  The purported benefits of vaping over smoking have already been publicly aired as an argument to roll back current restrictions on public smoking.  Electronic cigarettes are again being glamorized in advertisements reminiscent of the tobacco ads of the 1970s.  E-cigs in which nicotine is flavored with chocolate or various fruit flavors seem to once again target children.  The promotion of a highly addictive drug to children and young adults cannot be beneficial to society in the long term, even if vaping doesn’t lead to lung cancer.  But the rapid rise of vaping suggests that another round in the societal struggle against nicotine addiction is about to begin.

Doll and Hill’s work played a tremendously beneficial role in this story.  Their case-control and cohort studies set the methodological standard by which all subsequent observational trials should be measured – although our experience in journal club is that modern observational trials don’t even come close.  Furthermore, their work became the basis for the subsequent formulation of the “Bradford Hill Criteria” for establishing causation, which still play a dominant role in medical and medicolegal reasoning.

Robert A. Raschke, MD

Associate Editor 

References

  1. Doll R, Hill AB. Smoking and carcinoma of the lung; preliminary report. Br Med J. 1950;2(4682):739-48. [CrossRef]
  2. Doll R, Hill AB. The mortality of doctors in relation to their smoking habits; a preliminary report. Br Med J. 1954;1(4877):1451-5. [CrossRef]
  3. Pierce JP, Choi WS, Gilpin EA, Farkas AJ, Berry CC. Tobacco industry promotion of cigarettes and adolescent smoking. JAMA. 1998;279(7):511-5. [CrossRef] [PubMed]   

Reference as: Raschke RA. Smoking, epidemiology and e-cigarettes. Southwest J Pulm Crit Care. 2013;7(1):41-5. doi: http://dx.doi.org/10.13175/swjpcc092-13 PDF


A New Paradigm to Improve Patient Outcomes

A Tongue-in-Cheek Look at the Cost of Patient Satisfaction

A landmark article entitled “The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality” was recently published in the Archives of Internal Medicine by Fenton et al. (1). The authors conducted a prospective cohort study of adult respondents (n=51,946) to the 2000 through 2007 national Medical Expenditure Panel Survey. The results showed higher patient satisfaction was associated with higher admission rates to the hospital, higher overall health care expenditures, and increased mortality.

The higher costs are probably not surprising to many health care administrators. Programs to improve patient satisfaction such as advertising, valet parking, gourmet meals for patients and visitors, massages, never-ending patient and family satisfaction surveys, etc. are expensive and would be expected to increase costs. Some would argue that these costs are simply the price of competing for patients in the present health care environment. Although the outcomes are poorer, substituting patient satisfaction as a surrogate marker for quality of care is probably still valid as a business goal (2). Furthermore, administrators and some healthcare providers are paid bonuses based on patient satisfaction. These bonuses are necessary to maintain salaries at a level to attract the best and brightest.

Although it seems logical that most ill patients wish to live and get well as quickly and cheaply as possible, the Archives article demonstrates that this is a fallacy. Otherwise, higher patient satisfaction would clearly correlate with lower mortality, admission rates and expenses. Since hospitals and other health care organizations are here to serve the public, some would argue that giving patients what they want is more important than boring outcomes such as hospital admission rates, costs and mortality.

The contention of this study – that dissatisfaction might improve patient survival – may have biological plausibility.  Irritation with the healthcare process might induce adrenal activation, with resulting increases in beneficial endogenous catecholamines and cortisol.  The resulting increase in global oxygen delivery might reduce organ failure.  Furthermore, the irritated patient is less likely to consent to unnecessary medical procedures and is therefore protected from ensuing complications.  An angry patient is likely to have less contact with healthcare providers who are colonized with potentially dangerous multi-drug resistant bacteria.

Specific bedside practices can be implemented in order to increase patient dissatisfaction, and thereby benefit mortality.   Nurses can concentrate on techniques of sleep deprivation such as waking the patient to ask if they want a sleeping pill.  Third year medical students can be employed to start all IVs and perform all lumbar punctures.  Attending physicians can do their part by being aloof and standoffish.  For instance, a patient suffering an acute myocardial infarction might particularly benefit from hearing about the minor inconveniences the attending suffered aboard a recent south Pacific cruise ship – “I ordered red caviar, and they brought black!”  During the medical interview, non-pregnant women should always be asked “when is the baby due?”  Repeatedly confusing the patient’s name, or calling them by multiple erroneous names on purpose, can heighten their sense of insecurity.  Simply making quotation signs with your fingers whenever referring to yourself as the patient’s “doctor” can be quite off-putting.

Simple props can be useful.  Wads of high-denomination cash, conspicuously bulging from all pockets of the attending’s white coat, can promote a sense of moral outrage.  Placing a clothespin on your nose upon entering the patient’s room can be quite effective.  Simply placing your stethoscope in ice water for a few minutes before applying it to the patient’s bare chest can make a difference.

Other, more innovative techniques might arise.  Charging the patient in cash for each individual medical intervention might be quite useful, emphasizing the magnitude of overcharging.  This would be made apparent to the patient, who, for instance, might be asked to pay $40 cash on the barrelhead for a single aspirin pill.

Often the little things make a big difference – dropping a pile of aluminum food trays on the floor at 4 AM, clamping the Foley tube, purposely ignoring requests for a bedpan, or making the patient NPO for extended periods for no apparent reason can be quite effective. 

However, we fear that health care professionals may have difficulty overcoming their training to be responsive to patients. Therefore, we suggest a different strategy to national health care planners seeking to reduce costs and improve patient mortality, which we term the designated institutional offender (DIO). A DIO program, in which an employee is hired to offend patients, would likely be quite cost effective. The DIO would not need expensive equipment or other resources. The DIO role is best suited for someone with minimal education and a provocative attitude. Only the most deficient and densest (as opposed to the best and brightest) should be hired.

Clearly, an authoritative group must be formed to establish guidelines and bundles for both the DIO and healthcare providers. We suggest formation of the Institute of Healthcare Irritation, or IHI.  It could certify DIOs to ensure that the 7 habits of highly offensive people are used (3).  The IHI could also establish clinical practice bundles such as the rudeness bundle, the physical discomfort bundle, the moral outrage bundle, etc.

We suggest the following as an example to muster compliance with the physical discomfort bundle. The patient must be documented to be experiencing:

  • Hunger
  • Thirst
  • Too cold (or too hot)
  • Sleep deprivation
  • Drug-related constipation
  • Inability to evacuate the bladder

Patient satisfaction with even a single component indicates failure of bundle compliance. Of course a cadre of personnel will need to be hired to ensure compliance with the bundles.

Based on the evidence from the Archives article, there was a 9.1% cost differential between the highest and the lowest satisfaction quartile. Shifting patients to lower satisfaction quartiles could result in huge cost savings. If the DIO and IHI strategies to offend are particularly effective, many patients will not return for health care at all, resulting in further savings. Targeting those who are the largest consumers of care could result in even larger savings.

The DIO and IHI would also save lives. Those patients in the highest satisfaction quartile had a 26% higher mortality rate than the lowest quartile. If patients who have poor self-rated health and > 3 chronic diseases are excluded, the mortality rate is 44% higher in the highest satisfaction quartile.

Administrators could now be paid bonuses not only for compliance with the IHI bundles, but also for lower patient satisfaction scores, since they can argue that lower satisfaction is actually good for patients. Furthermore, the administrators should receive higher compensation, since the DIO and the personnel hired to ensure compliance with the IHI guidelines would be additional employees in their administrative chain of command, and administrative salaries are often based on the number of employees they supervise.

Richard A. Robbins, MD

Robert A. Raschke, MD

References

  1. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med 2012;172:405-11.
  2. Browne K, Roseman D, Shaller D, Edgman-Levitan S. Analysis & commentary. Measuring patient experience as a strategy for improving primary care. Health Aff (Millwood). 2010 May;29(5):921-5.
  3. Bing S. The seven habits of highly offensive people. Fortune magazine available at http://money.cnn.com/magazines/fortune/fortune_archive/1995/11/27/208025/index.htm (accessed 7-7-12).

Reference as: Robbins RA, Raschke RA. A new paradigm to improve patient outcomes: a tongue-in-cheek look at the cost of patient satisfaction. Southwest J Pulm Crit Care 2012;5:33-5. (Click here for a PDF version of the editorial) 


A Little Knowledge is a Dangerous Thing

An article entitled “A Comprehensive Care Management Program to Prevent Chronic Obstructive Pulmonary Disease Hospitalizations: A Randomized, Controlled Trial” from the VA cooperative studies program was recently published in the Annals of Internal Medicine (1).  This article describes the BREATH trial mentioned in a previous editorial (2). BREATH was a randomized, controlled, multi-center trial performed at 20 VA medical centers comparing an educational comprehensive care management program to guideline-based usual care for patients with chronic obstructive pulmonary disease (COPD). The intervention included COPD education during 4 individual and 1 group sessions, an action plan for identification and treatment of exacerbations, and scheduled proactive telephone calls for case management. After enrolling 426 (44%) of the planned total of 960 subjects, the trial was stopped because there were 28 deaths from all causes in the intervention group versus 10 in the usual care group (hazard ratio, 3.00; 95% CI, 1.46 to 6.17; p = 0.002). Deaths due to COPD accounted for the largest difference (10 deaths in the intervention group versus 3 in usual care; hazard ratio, 3.60; 95% CI, 0.99 to 13.08). This trial led us to perform a meta-analysis of educational interventions in COPD (3). In this meta-analysis of 2476 subjects we found no difference in mortality between intervention and usual care groups and that the recent Annals study was heterogeneous compared to the other studies.
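The reported summary statistics are internally consistent, which is worth checking when a stopped trial is under scrutiny. As a rough arithmetic sketch – assuming the reported 95% CI is a symmetric Wald interval on the log hazard-ratio scale, which the paper does not state – the p-value can be recovered from the hazard ratio and its confidence interval alone:

```python
from math import log, sqrt, erfc

# Reported all-cause mortality result: HR 3.00, 95% CI 1.46 to 6.17
hr, lo_ci, hi_ci = 3.00, 1.46, 6.17

# For a Wald interval on the log scale, the standard error of log(HR)
# is the CI width on the log scale divided by 2 * 1.96.
se = (log(hi_ci) - log(lo_ci)) / (2 * 1.96)
z = log(hr) / se                # Wald z-statistic
p = erfc(z / sqrt(2))          # two-sided normal p-value

print(round(z, 2))
print(round(p, 4))  # close to the reported p = 0.002
```

The recovered p-value agrees with the published value to within rounding, so the quoted hazard ratio, confidence interval, and p-value hang together.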

Should the recent VA study have been stopped early? Several reports demonstrate that studies stopped early usually overestimate treatment effects (4-7). Some have even suggested that stopping trials early is unethical (7). A number of articles suggest that trials should only be stopped if predetermined statistical parameters are exceeded, with the p value for stopping set at a very low level (4-7).  There was no planned interim analysis for any outcome in the recent VA trial. Furthermore, there was no a priori reasonable link between the intervention and the adverse effect to justify stopping the study.  It seems unlikely that education would actually lead to increased deaths in COPD patients.  Any effect should logically have impacted COPD-related mortality, yet there was no significant increase in COPD-related deaths in the intervention group. An accompanying editorial by Stuart Pocock makes most of these points and suggests that chance was the most likely cause of the excess deaths (8).

The VA Coop Trials coordinating center told the investigators that the reason for stopping the trial was that there were “significant adverse events” in the intervention group. Inquiries regarding what those adverse events were went unanswered. This would seem to be a breakdown in VA research oversight. The information provided to both investigators and research subjects was incomplete and would seem to be a violation of informed consent, which states that the subject will be notified of any new information that significantly alters their risk.

Lastly, investigators were repeatedly warned by the VA coordinating center that “all communications with the media should occur through your facility Public Affairs office”. It seems very unlikely that personnel in any public affairs office have sufficient research training to answer any medical, statistical or ethical inquiries into the conduct of this study.

In our meta-analysis we have shown that self-management education is associated with a reduction in hospital admissions with no indication for detrimental effects in other outcome parameters. This would seem sufficient to justify a recommendation of self-management education in COPD. However, due to variability in interventions, study populations, follow-up time, and outcome measures, data are still insufficient to formulate clear recommendations regarding the form and content of self-management education programs in COPD.

Richard A. Robbins, M.D.*

Editor, Southwest Journal of Pulmonary

   and Critical Care

References

  1. Fan VS, Gaziano JM, Lew R, et al. A comprehensive care management program to prevent chronic obstructive pulmonary disease hospitalizations: a randomized, controlled trial. Ann Intern Med 2012;156:673-683.
  2. Robbins RA. COPD, COOP and BREATH at the VA. Southwest J Pulm Crit Care 2011;2:27-28.
  3. Hurley J, Gerkin R, Fahy B, Robbins RA. Meta-analysis of self-management education for patients with chronic obstructive pulmonary disease. Southwest J Pulm Crit Care 2012;4:?-?.
  4. Pocock SJ, Hughes MD. Practical problems in interim analyses, with particular regard to estimation. Control Clin Trials 1989;10:209S-221S.
  5. Montori VM, Devereaux PJ, Adhikari NK, et al. Randomized trials stopped early for benefit: a systematic review. JAMA 2005;294:2203-9.
  6. Bassler D, Briel M, Montori VM, et al. Stopping randomized trials early for benefit and estimation of treatment effects: systematic review and meta-regression analysis. JAMA 2010;303:1180-7.
  7. Mueller PS, Montori VM, Bassler D, Koenig BA, Guyatt GH. Ethical issues in stopping randomized trials early because of apparent benefit. Ann Intern Med. 2007;146:878-81.
  8. Pocock SJ. Ethical dilemmas and malfunctions in clinical trials research. Ann Intern Med 2012;156:746-747.

*Dr. Robbins was an investigator and one of the co-authors of the Annals of Internal Medicine manuscript (reference #1).

Reference as: Robbins RA. A little knowledge is a dangerous thing. Southwest J Pulm Crit Care 2012;4:203-4. (Click here for a PDF version of the editorial) 


Why Is It So Difficult to Get Rid of Bad Guidelines?

Reference as: Robbins RA. Why is it so difficult to get rid of bad guidelines? Southwest J Pulm Crit Care 2011;3:141-3. (Click here for a PDF version of the editorial)

My colleagues and I recently published a manuscript in the Southwest Journal of Pulmonary and Critical Care examining compliance with the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission, JCAHO) guidelines (1). Compliance with the Joint Commission’s acute myocardial infarction, congestive heart failure, pneumonia and surgical process of care measures had no correlation with traditional outcome measures including mortality rates, morbidity rates, length of stay and readmission rates. In other words, increased compliance with the guidelines was ineffectual at improving patient-centered outcomes. Most would agree that ineffectual outcomes are bad. The data were obtained from the Veterans Healthcare Administration Quality and Safety Report and included 485,774 acute medical/surgical discharges in 2009 (2). These data are similar to the Joint Commission’s own data published in 2005, which showed no correlation between guideline compliance and hospital mortality, and to a number of other publications which have failed to show a correlation between the Joint Commission’s guidelines and patient-centered outcomes (3-8). As we pointed out in 2005, the lack of correlation is not surprising since several of the guidelines are not evidence based and improvement in performance has usually reflected increased compliance with these non-evidence-based guidelines (1,9).

The above raises the question: if some of the guidelines are not evidence based, and do not seem to have any benefit for patients, why do they persist? We believe that many of the guidelines were formulated with the concept of being easy and cheap to measure and implement, and perhaps more importantly, easy to demonstrate an improvement in compliance. In other words, the guidelines are initiated more to create the perception of an improvement in healthcare than an actual improvement. For example, in the pneumonia guidelines, one of the performance measures which has markedly improved is administration of pneumococcal vaccine. Pneumococcal vaccine is easy and cheap to administer once every 5 years to adult patients, despite the evidence that it is ineffective (10). In contrast, it is probably not cheap and certainly not easy to improve pneumonia mortality rates, morbidity rates, length of stay and readmission rates.

To understand why these ineffectual guidelines persist, one needs to understand who benefits from guideline implementation and compliance. First, organizations which formulate the guidelines, such as the Joint Commission, benefit. Implementing a program that the Joint Commission can claim shows an improvement in healthcare is self-serving, but implementing a program which provides no benefit would be politically devastating. At a time when some hospitals are opting out of Joint Commission certification, and when the Joint Commission is under pressure from competing regulatory organizations, the Joint Commission needs to show their programs produce positive results.

Second, programs to ensure compliance with the guidelines directly employ an increasingly large number of personnel within a hospital. At the last VA hospital where I worked, 26 full-time personnel were employed in quality assurance. Since compliance with guidelines to a large extent accounts for their employment, the quality assurance nurses would seem to have little incentive to question whether these guidelines really result in improved healthcare. Rather, their job is to ensure guideline compliance from both hospital employees and nonemployees who practice within the hospital.

Lastly, the administrators within a hospital have several incentives to preserve the guideline status quo. Administrators are often paid bonuses for ensuring guideline compliance. In addition to this direct financial incentive, administrators can often lobby for increases in pay since, with the increased number of personnel employed to ensure guideline compliance, the administrators now supervise more employees, an important factor in determining their salary. Furthermore, success in improving compliance allows administrators to advertise both themselves and their hospital as “outstanding”.

In addition, guidelines allow administrative personnel to direct patient care and indirectly control clinical personnel. Many clinical personnel feel uneasy when confronted with "evidence-based" protocols and guidelines when they are clearly not “evidence-based”. Such discomfort is likely to be more intense when the goals are not simply to recommend a particular approach but to judge failure to comply as evidence of substandard or unsafe care. Reporting a physician or a nurse for substandard care to a licensing board or on a performance evaluation may have devastating consequences.

There appears to be a discrepancy between an “outstanding” hospital as determined by the Joint Commission guidelines and other organizations. Many hospitals which were recognized as top hospitals by US News & World Report, HealthGrades Top 50 Hospitals, or Thomson Reuters Top Cardiovascular Hospitals were not included in the Joint Commission list. Absent are the Mayo Clinic, the Cleveland Clinic, Johns Hopkins University, Stanford University Medical Center, and Massachusetts General.  Academic medical centers, for the most part, were noticeably absent. There were no hospitals listed in New York City, none in Baltimore and only one in Chicago. Small community hospitals were overrepresented and large academic medical centers were underrepresented in the report. However, consistent with previous reports, we found that larger predominately urban, academic hospitals had better all cause mortality, surgical mortality and surgical morbidity compared to small, rural hospitals (1).

Despite the above, I support both guidelines and performance measures, but only if they clearly result in improved patient-centered outcomes. Formulating guidelines where the only measure of success is compliance with the guideline should be discouraged. We find it particularly disturbing that we can easily find a hospital’s compliance with a Joint Commission guideline but have difficulty finding the hospital’s standardized mortality rates, morbidity rates, length of stay and readmission rates, measures which are meaningful to most patients. The Joint Commission needs to develop better measures to determine hospital performance. Until that time, the “quality” measures need to be viewed as what they are: meaningless measures which do not serve patients but rather serve those who benefit from their implementation and compliance.

Richard A. Robbins, M.D.

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Robbins RA, Gerkin R, Singarajah CU. Relationship between the veterans healthcare administration hospital performance measures and outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  2. Available at: http://www.va.gov/health/docs/HospitalReportCard2010.pdf (accessed 9-28-11).
  3. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353:255-64.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA 2006;296:2694-702.
  5. Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC Jr, Pollack CV Jr, Newby LK, Harrington RA, Gibler WB, Ohman EM. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA 2006;295:1912-20.
  6. Fonarow GC, Yancy CW, Heywood JT; ADHERE Scientific Advisory Committee, Study Group, and Investigators. Adherence to heart failure quality-of-care indicators in US hospitals: analysis of the ADHERE Registry. Arch Int Med 2005;165:1469-77.
  7. Wachter RM, Flanders SA, Fee C, Pronovost PJ. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med 2008;149:29-32.
  8. Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM.  Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA. 2010;303:2479-85.
  9. Robbins RA, Klotz SA. Quality of care in U.S. hospitals. N Engl J Med. 2005;353:1860-1.
  10. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care 2011;3:40-8.

The opinions expressed in this editorial are the opinions of the author and not necessarily the opinions of the Southwest Journal of Pulmonary and Critical Care or the Arizona Thoracic Society.


Guidelines, Recommendations and Improvement in Healthcare

“You will never understand bureaucracies until you understand that for bureaucrats procedure is everything and outcomes are nothing.” - Thomas Sowell

Reference as: Robbins RA, Thomas AR, Raschke RA. Guidelines, recommendations and improvement in healthcare. Southwest J Pulm Crit Care 2011;2:34-37. (Click here for PDF version)

In the February 2011 Critical Care Journal Club, two articles dealing with Infectious Diseases Society of America (IDSA) guidelines were reviewed (click here for Critical Care Journal Club). The first, by Lee and Vielemeyer (1), reviewed the evidence basis for the 4218 IDSA recommendations and found that only 14% were based on Level I evidence (data from more than one properly randomized controlled trial). The graph summarizing the data in Figure 1 of their manuscript is exemplary in its capacity to communicate the weak evidence basis for many of the IDSA recommendations.

A second study, by Kett et al. (2), examined outcomes when the American Thoracic Society (ATS)/IDSA therapeutic guidelines for management of possible multidrug-resistant pneumonia were followed. The authors found a 14% difference in survival, but surprisingly, survival was better when the guidelines were not followed. Dr. Kett and colleagues are to be congratulated for their candor in reporting their retrospective analysis of empirical antibiotic regimens for patients at risk for multidrug-resistant pathogens. The ATS/IDSA guidelines (3) state that “combination therapy should be used if patients are likely to be infected with MDR pathogens (Level II or moderate evidence that comes from well-designed, controlled trials without randomization…)”. However, the ATS/IDSA guidelines go on to state, “No data have documented the superiority of this approach compared with monotherapy, except to enhance the likelihood of initially appropriate empiric therapy (Level I evidence…from well-conducted, randomized controlled trials)” (4).

The problem comes with the interpretation and implementation of these and other guidelines. Some, usually inexperienced clinicians or nonclinicians, seem to believe that following any set of guidelines will enhance the “quality” of patient care. Not all guidelines or studies are created equal. Some are evidence-based, important, correct, and likely to make a real difference. These usually come from professional societies and are authored by well-respected experts in the field whose goal is to improve patient outcomes. Yet, as Kett’s article suggests, even these guidelines may not be infallible. Other guidelines are not evidence-based, are unimportant or incorrect, and can border on the trivial. These are often authored by nonprofessionals and nonexperts to create a “political statistic” (5) rather than to improve patient care.

If some guidelines are bad, how can they be separated from the good? We suggest five traits of quality guidelines:

  1. The guideline’s authors are identified and are well-respected experts in the field appropriate to the guideline.
  2. The authors identify potential conflicts of interest.
  3. The evidence is graded and supported by references to relevant scientific literature.
  4. The guidelines state how the references on which they are based were selected and reviewed.
  5. After completion, the guidelines are reviewed by an identifiable group of reasonably knowledgeable individuals (for example, the IDSA Board of Directors) who are willing to stake their own and their organization’s reputation on the guidelines.

Even with the above safeguards, guidelines may be non-evidence-based, unimportant, incorrect, or trivial, and if so, implementation may be at best a waste of resources and at worst harmful to patient care. We ask that guideline writing committees show restraint in authoring documents that are little more than their opinions. Not every medical question, especially the trivial and the unimportant, needs a guideline. Furthermore, we ask that professional organizations give a strong recommendation only to those guidelines based on randomized clinical trials. As pointed out by Lee and Vielemeyer (1), only 23% of the IDSA guidelines were supported by randomized trials, while 37% of strong recommendations were supported only by opinion or descriptive studies.

IDSA states on their guidelines website, “It is important to realize that guidelines cannot always account for individual variation among patients. They are not intended to supplant physician judgment with respect to particular patients or special clinical situations. IDSA considers adherence to the guidelines listed below to be voluntary, with the ultimate determination regarding their application to be made by the physician in the light of each patient’s individual circumstances” (6). Despite this and other disclaimers, guidelines often take on a life of their own, frequently carrying the weight of law regardless of the supporting evidence. We call for professional societies to end the practice of strongly recommending guidelines based on opinion. Such practices have led, and will continue to lead, to systematic patient harm. Only those guidelines based on strong evidence should be given a strong recommendation. If a professional society believes an opinion on a particular issue is appropriate despite a lack of evidence, a different designation, such as recommendation or suggestion, should be used to clearly separate it from a guideline. The term guideline should be reserved for statements that are evidence-based, important, and almost certainly correct and that can make a real difference to patients.

Richard A Robbins MD, Allen R Thomas MD, and Robert A Raschke MD

 

References

  1. Lee DH, Vielemeyer O. Analysis of overall level of evidence behind infectious diseases society of America practice guidelines. Arch Intern Med. 2011;171:18-22.
  2. Kett DH, Cano E, Quartin AA, Mangino JE, Zervos MJ, Peyrani P, Cely CM, Ford KD, Scerpella EG, Ramirez JA. Implementation of guidelines for management of possible multidrug-resistant pneumonia in intensive care: an observational, multicentre cohort study. Lancet Infect Dis 2011 Jan 19. [Epub ahead of print].
  3. American Thoracic Society, Infectious Diseases Society of America. Guidelines for the management of adults with hospital-acquired, ventilator-associated, and healthcare-associated pneumonia. Am J Respir Crit Care Med 2005;171:388–416.
  4. Paul M, Benuri-Silbiger I, Soares-Weiser K, Leibovici L. Beta-lactam monotherapy versus beta-lactam–aminoglycoside combination therapy for sepsis in immunocompetent patients: systematic review and meta-analysis of randomised trials. BMJ, doi:10.1136/bmj.38028.520995.63 (published March 2, 2004). Available at URL http://bmj.bmjjournals.com/cgi/reprint/bmj.38028.520995.63v1.pdf?ck_nck (accessed February 11, 2011).
  5. Churchill, Winston. London, UK. 1945. as cited in The Life of Politics, 1968,  Henry Fairlie, Methuen, pp. 203-204.
  6. Infectious Diseases Society of America. Standards, Practice Guidelines, and Statements Developed and/or Endorsed by IDSA. Available at URL http://www.idsociety.org/content.aspx?id=9088 (accessed February 12, 2011).