New Limits Proposed for Arsenic in Apple Juice

By Kim Smiley

The FDA recently proposed a new limit for the amount of arsenic allowed in apple juice.  The proposed limit would match the one already established for bottled water.  This marks the first time that the FDA has proposed an arsenic limit for a food or drink other than water.

This issue can be analyzed by building a Cause Map, or visual root cause analysis.  A Cause Map lays out the causes that contribute to an issue in an intuitive, visual format so that the cause-and-effect relationships are obvious.  The first step of the process is to fill in an Outline with the basic background information for an issue.  The Outline also documents the impacts that the issue is having on the organizational goals so that the full effect of the issue can be clearly understood.  In this example, the concern that consumers may be exposed to arsenic, a known carcinogen, is an impact to the safety goal.  The media hype surrounding this issue is also important to consider because consumer concern could impact sales.

After the Outline is complete, the next step is to ask “why” questions and use the answers to build the Cause Map.  So why is there arsenic in apple juice?  Arsenic is a naturally occurring substance found in the environment.  There are also places that have been contaminated by arsenic, primarily as a result of arsenic-based pesticides.  Use of arsenic-based pesticides in the US ended by 1970, but they are still used in parts of the world.
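For readers who find it easier to think of the Cause Map as a data structure, the chain of “why” answers can be sketched as a simple effect-to-causes mapping.  This is only an illustration – the actual method is a visual diagram, and the node wording below is paraphrased from the discussion above:

    # Illustrative sketch only: a Cause Map represented as a dictionary
    # mapping each effect to the causes identified by asking "why".
    cause_map = {
        "impact to the safety goal": ["consumers exposed to arsenic, a known carcinogen"],
        "consumers exposed to arsenic, a known carcinogen": ["arsenic present in apple juice"],
        "arsenic present in apple juice": [
            "arsenic occurs naturally in the environment",
            "contamination from past use of arsenic-based pesticides",
        ],
    }

    def trace(effect, depth=0):
        """Print the cause-and-effect chain, starting from an impacted goal."""
        print("  " * depth + effect)
        for cause in cause_map.get(effect, []):
            trace(cause, depth + 1)

    trace("impact to the safety goal")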

To understand this issue, it’s also important to understand the public relations portion of the puzzle.  The concern over arsenic in apple juice exploded after the issue was featured on “The Dr. Oz Show” in 2011.  The outcry after the segment was widely covered by major media outlets, and the issue has repeatedly made headlines over the past two years.  Consumer Reports has also issued a report about samples of apple juice that tested above the limit for drinking water.  None of this can possibly be good for the apple juice business.

The final step of the Cause Mapping process is to use the Cause Map to develop solutions.  A limit for arsenic in apple juice, once established, should go a long way toward easing concerns.  The proposal is to set the limit for arsenic in apple juice to match that for drinking water, which should be conservative since consistently drinking more apple juice than water seems unlikely.  Producers of apple juice found to contain arsenic above the limit could face legal action, and the juice could be removed from the market.  How much the new limit will actually affect the products on the shelf is unclear because different sources have reported widely different sample results, but at least action could be taken if any juice is found to have arsenic levels above the limit.

 

Patient Wakes While Being Prepped for Organ Harvesting

By ThinkReliability Staff

An extremely rare but tragic case has recently been brought to light.  On October 16, 2009, a patient was brought to a hospital center in Syracuse, New York after suffering a drug overdose.  Over the next several days, the patient remained in a deep coma, though she did not meet the requirements for brain death based on scans performed at the hospital.  The family was notified and agreed to donate her organs.  The patient, after being sedated, was prepped for donation after cardiac death (DCD).  The organ harvesting was stopped before any organs were removed when the patient opened her eyes on the operating table.

The hospital was cited by the state Department of Health and the Centers for Medicare & Medicaid Services (CMS) not only for the error, but also for the inadequate response and investigation that followed.  Specifically, the CMS report states “The hospital’s Quality Assurance Performance Improvement program did not conduct thorough reviews of an adverse occurrence involving a patient who was being considered for withdrawal of life-sustaining treatment when she regained consciousness.”

We can examine the error using a Cause Map, or visual root cause analysis, to determine the issues related to the incident.  This provides a starting point for developing solutions to reduce the risk of such an incident recurring and to improve healthcare reliability at this site.

It’s important to frame the issue with respect to an organization’s goals.  In this case, the patient safety goal was impacted due to the risk of patient death from having organs removed.  The near removal of the patient’s organs can also be considered an impact to the patient services goal.  The compliance goal is impacted because of the sanction and fine (though a minimal $6,000) from the Health Department.  Negative press and public opinion as a result of this incident – which was uncovered and reported to the Health Department by the press – are an impact to the organizational goal.

Beginning with an impacted goal – in this case the patient safety goal – asking “Why” questions allows us to develop the cause-and-effect relationships that led to the issue.  In this case, the risk of patient death was due to the risk of her organs being removed.  That risk existed because the organ harvesting process had begun.  (The investigation did find that there were no concerns with the organ donation process itself, indicating that the errors occurred prior to the donation prep process.)  The process began because the family agreed to donate organs after the patient was (incorrectly) determined to have suffered cardiac death.

A combination of errors resulted in the patient being incorrectly declared “dead”.  Because all of these factors acted together to produce the impact to the goals, it is important to capture and fully investigate all of them in order to improve processes at the organization.  In this case, the patient was injected with a sedative, which was not recorded in the doctor’s notes.  It is unclear who ordered the sedative and why.  (It’s also unclear why you would sedate a dead patient; as another doctor stated, “If you have to sedate them . . . they’re not brain dead.”)  The patient had previously been in a deep coma due to the drug overdose.  It is possible the coma went on longer than usual because the patient was not given activated charcoal to inhibit absorption of the drugs by the body after the staff was unable to place a tube.  There appears to have been no additional effort to administer the charcoal – another area that should be investigated to ensure that protocol is sufficient for patient safety.

The hospital’s evaluation of the patient’s condition before a diagnosis of cardiac death was insufficient.  Specifically, it has been noted that the staff performed an inadequate number of brain scans, performed inadequate testing to determine the drug levels remaining in the body, and ignored signs that the patient was regaining consciousness prior to preparing her for organ donation.  Because these issues were not thoroughly investigated, it’s impossible to know whether the protocols in place at the organization were inadequate for determining cardiac death or whether the protocols were adequate but weren’t followed by staff.

Determining whether changes need to be made to protocols as a result of this tragic (though, I want to emphasize, rare – the state was unable to find any similar cases in its records) incident is of the utmost importance to reduce the risk of an incident like this happening again.  Hopefully the additional scrutiny from the state and CMS will ensure improved patient safety in the future.


HIPAA Breach Compromised Data from 187,533 Patients

By ThinkReliability Staff

On July 1, 2013, 187,533 clients of the Indiana Family and Social Services Administration (FSSA) were notified that their medical and financial information may have been accidentally sent to other clients.  Of these, nearly 4,000 may have had their Social Security numbers disclosed.  Not only is this a breach of the Health Insurance Portability and Accountability Act (HIPAA), it can potentially result in identity theft for those affected.

There’s more to this case than initially meets the eye, and many open questions.  We can get our bearings on what is known – and what is as yet unknown – about the issues this may have created for patients and the agency involved by capturing the information within a Cause Map, or visual root cause analysis.  Doing so for events that occur can increase healthcare reliability by delving deeper into related causes, leading to better solutions.

The first step when beginning an investigation is to capture the what, when and where of an incident, as well as the impacts to the goals.  If more than one date is relevant, it may be helpful to capture them in a timeline.  In this case, the error was introduced on April 6, 2013.  The error was fixed (at which point the data breach ended) on May 21, 2013.  However, clients were not notified of the potential breach until July 1, 2013.

The impacts to the organization’s goals are those things that prevent an organization from having a perfect day.  In this case, nobody was injured and it’s unclear if there was an impact to employees.  The compliance goal was impacted due to the HIPAA breach.  The organization is impacted because of the breach of patient trust.  Patient services were impacted due to compromised confidential patient information and the potential for identity theft.

We begin with one of the impacts to the goals and ask “Why” questions to develop the cause-and-effect relationships that led to the impact.  In this case, identity theft is a potential issue because of the compromised medical and financial information, especially Social Security numbers.  However, the longer the period between the potential breach and when clients are notified, the greater the risk of identity theft.  In this case, 86 days elapsed from the date the programming error was incorporated into the system until clients were notified of the breach.  Of this, 34 days passed before the error was noticed, but there has been no explanation for the additional 52 days before the notification.  Because the speed of the notification is so important, this “why” should be addressed in the Cause Map and solutions developed to ensure speedier notification in the case of another breach.
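The timeline arithmetic is straightforward to check.  Here is a minimal sketch using the dates reported above (the 34-day figure to notice the error comes directly from the reporting; the variable names are illustrative only):

    from datetime import date

    # Dates reported for the FSSA breach.
    error_introduced = date(2013, 4, 6)
    error_fixed = date(2013, 5, 21)
    clients_notified = date(2013, 7, 1)

    days_of_mailings = (error_fixed - error_introduced).days    # 45 days until the fix
    days_total = (clients_notified - error_introduced).days     # 86 days until notification
    days_to_notice = 34                                          # per the reporting
    days_fixing = days_of_mailings - days_to_notice              # 11 days to fix after noticing
    days_notice_to_notify = days_total - days_to_notice          # 52 unexplained days

    print(days_of_mailings, days_total, days_fixing, days_notice_to_notify)  # 45 86 11 52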

We can also ask additional “why” questions to determine how the breach happened in the first place.  Clients were sent confidential health and financial information belonging to other clients.  Though details are sparse, an improperly used variable resulted in an error in the customized coding provided by a contractor to the agency.  How the error made it in – and why it wasn’t found by either the contractor or the agency involved – is unclear.  These are questions that need to be answered during the root cause analysis to reduce the risk of this kind of issue happening again.

The potentially compromising mailings continued for 45 days, increasing the number of people impacted.  (The agency says that because of the way the mailings are done, it has no way to know whose information was actually sent out.)  Of these 45 days, it took 34 days to notice the error.  (How the error was noticed is also not clear, but that is additional information that should be included in the analysis.)  After the error was discovered, the mailings apparently continued for another 11 days while the error was being fixed.  This is yet another line of inquiry to be undertaken during the analysis.  Ideally, solutions will help implement fixes faster – and make sure that breaches don’t continue once a system is known to be working improperly.

In a letter sent to the clients potentially affected, the FSSA stated that the contractor who provides the programming “also is taking steps to improve their computer programming and testing processes to prevent similar errors from occurring in the future.”   While this is certainly necessary, the FSSA should also be looking at their own processes for verifying contractor work and notifying clients in the case of a data breach.


Is a Doctor Onboard? Management of Inflight Medical Emergencies Depends on Other Passengers

By ThinkReliability Staff

In a recent article, Pierre M. Barker, M.D., describes a terrifying situation – a passenger stops breathing on a plane over the Atlantic Ocean.  It turns out that inflight medical emergencies are not that uncommon.  A study published in the New England Journal of Medicine says that about 1 in 600 flights has an inflight medical emergency – for a total of about 44,000 a year worldwide.  Although the number of people who die as a result of these emergencies is fairly low, the incident that Dr. Barker was involved in shows there is much room for improvement.

Taking the information from Dr. Barker’s article, we can perform a visual root cause analysis, or Cause Map, of the medical emergency on his flight.  Information gleaned from performing an analysis of one particular incident can provide valuable insight to improving outcomes for similar incidents – in this case, all inflight medical emergencies.

After recording the what, when, and where of the incident (here it’s inflight over the Atlantic Ocean), we capture the impacts to the goals.  Based on Dr. Barker’s description, this situation is aptly described as a “near miss” for patient safety.  What this means is that, had a great deal of luck not come this passenger’s way, he might very well have died on this flight.  We’ll discuss exactly what made it a near miss – and not a fatality – later.  In this situation – and many other inflight emergencies – it seems that the employees are inadequately prepared for medical emergencies.  This is an impact to them – certainly it must be very stressful to have this sort of situation happen on their watch while feeling like there’s not much they can do.  In this case (and occasionally in other, similar inflight emergencies), the flight was diverted, an impact to the organization’s goals.  Considering the sick passenger as a “patient” (and this is how I’ll refer to him going forward), patient services were impacted because the ventilation bag did not connect to the oxygen tank.  Lastly, other passengers were called on to treat the “patient”, which the study found to be very typical.  This is an impact to the labor/time goal.

Once we’ve determined which goals were impacted, we can ask “Why” questions to determine which cause-and-effect relationships led to the impacted goals.  In this case there’s a combination of negative impacts and positive impacts – which is how the situation ended up as a “near miss”.  On the negative side, the patient stopped breathing and suffered cardiac arrest.  The conditions on a plane are hardly ideal for health, which may contribute to inflight medical emergencies.  There was difficulty in giving the patient oxygen because the ventilation bag did not connect to the oxygen tank.  Additionally, there was a lack of patient medical history: the patient was unconscious and no health information was available that might have aided in his treatment.

The situation described above could have gone very, very badly.  There are some positive causes that contributed as well to make this a near miss.  First, the fact that the patient had stopped breathing was noticed very quickly, because he happened to have Dr. Barker – a pediatric lung specialist – two rows behind him, who noticed his difficulty breathing and then noticed when it stopped altogether.  Because this was not by design but rather a stroke of good luck, this is how we get a “near miss”.  After all, you certainly can’t count on a lung specialist tracking the breathing of every person on a plane to stop inflight emergencies.  Not only was the issue noticed quickly, it was also treated quickly – by Dr. Barker, two ER nurses, a surgeon and an infectious disease doctor, as well as a flight attendant who performed cardiac massage.  This ad-hoc medical team managed to do a heroic job of stabilizing the patient – including use of an AED, which was on the flight, an IV with fluids and glucose, and administration of an aspirin donated by another passenger (though according to the study, aspirin should be included in the emergency medical kit on each flight as well).

The flight was diverted – as quickly as possible – to Miami.  This took about two and a half hours, during which time the medical team kept the patient stable until he was transferred off the plane.  This patient was extremely lucky to have these medical personnel aboard.  According to the NEJM study, doctors are present on about 50% of flights, and the responsibility for treatment of inflight medical emergencies – as well as the decision whether to divert a plane – is generally left up to them.  When an inflight medical emergency occurs and a doctor is not present, the plane is more likely to divert.

As a result of this incident, Dr. Barker has some recommendations on how to make flying safer.  The NEJM study also makes some recommendations.  These solutions are placed directly on the Cause Map and evaluated for effectiveness.  In this case, a standard emergency kit should be developed for all flights (there is an FAA-mandated emergency medical kit, but as seen in this incident, the pieces may or may not work together properly and the kit may differ from flight to flight).  This kit should ensure that all necessary equipment and medication for the most common and dangerous inflight medical conditions is included and that all flight attendants know where to find, and how to put together, the necessary pieces of equipment in the kit.  If, as seems to be the case, medical professionals on flights are expected to be responsible for other sick passengers in the case of an emergency, they should be notified of that expectation.  If this occurred, flight attendants would also know where to find these medical professionals.  This could involve a briefing similar to that received by passengers who sit in exit rows.  Where easy diversion is not possible (such as flights over oceans or uninhabited areas), at least one flight attendant should receive EMT training that includes in-depth instruction on how to use the medication and equipment available in the medical kit.  Coordination with ground-based medical staff should continue, with a focus on trying to make medical history available when possible.

The aviation industry has made flying incredibly safe.  Although inflight medical emergencies are rare and usually non-fatal, the industry now has the opportunity to make experiencing a medical emergency onboard a flight even safer.
