Consumers outraged by EpiPen price increases

By ThinkReliability Staff

Outrage over rampant increases in drug prices is nothing new. But it seems to have reached a new high with the rising cost of the EpiPen, an auto-injection device that delivers epinephrine to severe allergy sufferers who are at risk of their throat closing shut due to anaphylaxis. Disgust with the increase is at such levels that even Martin Shkreli, the CEO of Turing who in 2015 raised the price of Daraprim, a drug used by malaria and HIV patients, by 5,000% (from $13.50 to $750 a tablet), has asked “What drives this company’s moral compass?” and referred to Mylan (the manufacturer of the EpiPen®) as “vultures”. (See our previous blog on the price increase of Daraprim.)

The EpiPen has been in use since 1977 (so research and development costs have long since been recouped). Mylan bought the product in 2007, when an EpiPen cost less than $60 and brought in about $2 million a year in revenue, with a profit margin of 9%. Fast forward to 2016: an EpiPen 2-pack (the only way they are sold) costs more than $600, with a profit margin of 55%. Last year, EpiPen revenue was $1.2 billion, accounting for 40% of Mylan’s profits. Over that time, compensation for Mylan’s CEO increased from $2.5 million to $18.9 million. Profits are rising rapidly along with EpiPen costs. (As a comparison, two EpiPens cost about $85 in France.)

The US Senate has demanded information related to the cost increase. It’s not based on the cost of raw materials (the amount of epinephrine in an EpiPen costs less than a dollar). Mylan claims that it’s due to an improved product, but provides no specifics, and there doesn’t appear to be anything noticeably different to users. Many claim that the price increase is simply because Mylan can. Senator Amy Klobuchar, the ranking member of the Senate Judiciary Committee, says “This outrageous increase in the price of EpiPens is occurring at the same time that Mylan Pharmaceutical is exploiting a monopoly market advantage that has fallen into its lap.”

Mylan currently has an effective monopoly on the EpiPen. There is no generic alternative. The EpiPen is protected in the US by patent until 2025. Another manufacturer’s application to the US Food and Drug Administration (FDA) was rejected for “major deficiencies”, and that manufacturer does not expect to bring its product to market until 2017 at the earliest. Three alternatives to the EpiPen have been removed from the market since 2012, although one, Adrenaclick, is back on the market. However, Adrenaclick is not considered pharmaceutically equivalent, so it has to be prescribed by name, which is unlikely given the name recognition of EpiPen (thanks to its massive marketing campaign and free giveaways). It’s also considered more difficult to use, somewhat expensive on its own, and not covered by many insurance plans.

Mylan says that “In 2015, nearly 80% of commercially insured patients using the My EpiPen Savings Card received EpiPen Auto-Injector for $0.” That leaves nearly 20% of insured patients (and nearly all uninsured patients) paying out-of-pocket costs that have been reported at more than $1,000 for a two-pack (the only way EpiPens are currently sold – often two doses are needed). To add insult to injury, an EpiPen is usually good for a year or less, because epinephrine is extremely unstable and a full dose is needed in the case of a reaction.

Unfortunately, there aren’t many good alternatives to paying for the EpiPen. Going without, or using an expired EpiPen, could be extremely dangerous. While epinephrine can be injected via a normal syringe without the EpiPen functionality, that in itself carries risk and should only be performed by a trained professional. (Many governments are providing epinephrine, syringes and training to emergency medical responders to avoid the cost of multiple EpiPens.)

Most of the general public will just have to wait: for a generic to be introduced, for Adrenaclick to be covered by insurance, or for the Senate to quash the price gouging.

To view a one-page PDF showing the cause-and-effect relationships associated with the EpiPen price increases, click “Download PDF” above. Or, click here to read more.

Sewage Leak Kills Two Mechanics

By ThinkReliability Staff

Two workers were overwhelmed by hydrogen sulfide and later died of their injuries after attempting to fix a sewage leak at a waste water treatment plant in Wichita Falls, Texas on July 2, 2016. The city has released its internal investigation findings into the incident. In order to fully understand the issues associated with the incident, we can capture the information within a Cause Map, or visual form of root cause analysis.

The first step in the Cause Mapping process is to capture the what, when and where of an incident, as well as the impacts to the goals. In this case, the injuries that caused the deaths occurred July 2 in basement #2 of the Wichita Falls waste water treatment plant while the two employees were attempting to repair a valve causing a wastewater leak. Capturing the task being performed at the time of the incident can provide useful information related to the incident.

The deaths of the two employees are an impact to the safety goal. The high levels of hydrogen sulfide are an impact to the environmental goal, while the need for a valve repair is an impact to the property goal. The emergency response and external incident investigation impact the labor goal. The investigation begins with one of the impacted goals as the first “effect” of a cause-and-effect relationship. Additional causes are added by asking “why” questions. Capturing all relevant cause-and-effect relationships offers the highest number of potential solutions, which can act on any cause to reduce the risk of a problem recurring.

The employee fatalities resulted from exposure to high levels of hydrogen sulfide. The high levels of hydrogen sulfide resulted from the leak of sewage with high levels of sulfur. While problems with ventilation can result in high gas levels, the intake and outgoing ventilation were both found to be functioning properly and the door to the building had been left open. The sewage was contained in the pipes (as it was a waste water treatment plant) and leaked from a failed valve. However, the investigation found that the sulfur levels were higher than expected and set out to find the cause of the elevated sulfur. Industrial users of the waste water system were interviewed and did not indicate any sulfur-using production processes or discharges with elevated sulfur. Although the investigation continues, the source of the elevated sulfur may never be determined. The proposed solution – installing permanent detectors in the basement and other areas of risk – will mitigate the risk of personnel injury due to high gas levels, regardless of the cause.

The exposure to high levels of hydrogen sulfide also resulted from the employees (both mechanics) being in the basement with the elevated hydrogen sulfide levels without self-contained breathing apparatuses (SCBAs). The employees were in the basement repairing the leak and initially entered wearing SCBAs, having been warned that the area smelled “gassy”. However, for reasons that are unclear, the employees did not take air quality measurements or complete the initial checklists for using SCBAs. A proposed solution is to require employees to carry gas detectors (which would provide immediate feedback as to whether an additional oxygen source was required) in areas where gas could be present. The investigation found that the training provided to these and other employees regarding the use of SCBAs was adequate, and no changes to the program were recommended.
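
To make the structure of these cause-and-effect relationships concrete, here is a minimal sketch of how the branches described above could be recorded as a simple graph. The dictionary representation and the exact wording of the causes are our simplification for illustration; they are not an official Cause Mapping format.

```python
# Minimal sketch of the cause-and-effect relationships described above.
# Each effect maps to the causes identified by asking "why?" questions.
# The wording is simplified for illustration.

cause_map = {
    "impact to safety goal: two employee fatalities": [
        "exposure to high levels of hydrogen sulfide",
    ],
    "exposure to high levels of hydrogen sulfide": [
        "leak of sewage with high levels of sulfur",   # from the failed valve
        "employees in the basement without SCBAs",     # repairing the leak
    ],
    "leak of sewage with high levels of sulfur": [
        "failed valve",
        "elevated sulfur in the wastewater (source undetermined)",
    ],
}

def trace(effect: str, depth: int = 0) -> None:
    """Walk from an impacted goal down through its causes, printing the tree."""
    print("  " * depth + effect)
    for cause in cause_map.get(effect, []):
        trace(cause, depth + 1)

trace("impact to safety goal: two employee fatalities")
```

Because every cause in the map is a potential place to act, solutions such as the permanent gas detectors can be attached to any branch, not just the first cause identified.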

These potential solutions, which would provide real-time feedback as to gas exposure hazards, would increase the likelihood that employees would wear SCBAs or other personal protective equipment, thus reducing the risk of exposure to high levels of gas, when present. To view the outline, initial Cause Map and proposed solutions, please click on download PDF above. Or, click here to learn more.

Put down the cookie dough

By ThinkReliability Staff

Almost everybody knows that there are potential risks with eating raw cookie dough (or any other raw batter).  However, much of that risk was thought to be due to the potential of salmonella from raw eggs and so, if the plan was to eat, rather than cook, the dough, the eggs could just be left out.  No more! say health experts.  Turns out that just removing the eggs and eating the raw dough may protect you from salmonella, but it still leaves you at risk for E. coli.

A Cause Map, or visual form of root cause analysis, can help demonstrate the risks (or potential impacts) associated with an issue, as well as the causes that lead to those risks.  The process begins by capturing the what, when and where of an incident, as well as the impact to the goals in an Outline.  In this case, the problems being addressed are the risk of illness from eating raw cookie dough and a recall associated with contaminated flour.  The when and where are just about everywhere that dough or batter is being made (or eaten).  The safety risks most commonly associated with eating raw cookie dough are salmonella and now E. coli.  The environmental goal is impacted because flour is contaminated with E. coli, and the property goal is impacted because 45 million pounds of flour have been recalled.

Once the impacted goals are captured, they become the first “effects” in the cause-and-effect relationships.  The Cause Map is created by capturing all the causes that led to an effect.  In this case, the risk of contracting salmonella from eating raw cookie dough results from eggs being exposed to salmonella, and the salmonella not being effectively destroyed (by the heat of baking).  The risk of contracting E. coli results from a similar issue.

Cookie dough contains raw flour.  The cooking process kills E. coli (as well as salmonella), meaning cookies and other baked goods are safe to eat, but dough is not.  Distributed raw (uncooked) flour was found to be contaminated with E. coli (leading to the impacted environmental goal and the recall).  The flour was likely contaminated with E. coli while it was still wheat in the field.  Birds and other animals do their business just about wherever they want, and that business carries bacteria, meaning excrement that falls on wheat fields can contaminate the grain before it ever becomes flour.  (Quick side note: we frequently get asked when to stop asking “why” questions.  When you get to an answer that is completely outside your control (why birds poop in wheat fields, for example), that is a good place to end the cause-and-effect reasoning.)

Flour is processed, but the process isn’t designed to completely kill pathogens (unlike pasteurization, for example), and according to Martin Wiedmann, food safety professor at Cornell University, “There’s no treatment to effectively make sure there’s no bacteria in the flour.”  Flour is not designed to be a ready-to-eat product.

Once the causes related to an issue have been developed, the next step is to brainstorm and select solutions.  Unfortunately, health professionals have been clear that they’re not getting far on keeping birds from pooping in fields, nor is there some sort of miracle treatment that will ensure raw flour is free of disease.  (Scientists underscore that flour isn’t less safe than it used to be; it’s just that we are becoming more aware of the risks.  Says Wiedmann, “Our food is getting safer, but also our ability to detect problems is getting better.”)  The only way to reduce your risk of getting sick from raw cookie dough is . . . not to eat it at all.  Also, wash your hands whenever you handle flour.  (This is, of course, after you’ve thrown out any flour involved in the recall, which you can find by clicking here.)

To view the Cause Map of the problems associated with raw cookie dough, please click on “Download PDF” above.

Patient receives double dose of radiotherapy

By ThinkReliability Staff

The risk associated with medical treatment administration is high. There is a high probability for errors because of the complexity of the process involved in not only choosing a treatment, but ensuring that the amount and rate of treatment is appropriately calculated for the patient. The consequence associated with treatment errors is significant – death can and does result from inappropriately administered treatment.

Medical treatment includes delivery of both medication and radiation. Because of the high risk associated with administering both medication and radiation therapy, independent checks are frequently used to reduce risk.

Independent checks work in the following way: one trained healthcare worker performs the calculation associated with medical treatment delivery. If the treatment is then delivered to the patient, the probability that a patient will receive incorrect treatment is the error rate of that healthcare worker. (For example, a typical error rate for highly trained personnel is 1/1,000. If only one worker is involved with the process, there is a 0.1% chance the patient will receive incorrect treatment.) With an independent check, a second trained worker performs the same calculations, and the results are compared. If the results match, the medication is administered. If they don’t, a secondary process is implemented. The probability of a patient receiving incorrect treatment is then the product of both error rates. (If the second worker also has an error rate of 1/1,000, the probability that both workers will make an error on the same independently performed calculation is 1/1,000 x 1/1,000, or 0.0001%.)
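
As a quick sketch of the arithmetic above (the 1/1,000 error rate is the illustrative figure from the example, not a measured value):

```python
# Illustrative arithmetic for independent checks. The 1/1,000 error rate is the
# example figure from the text, not a measured value.
single_worker_error_rate = 1 / 1_000   # 0.1% chance of an incorrect calculation

# An undetected error requires BOTH workers to make the same mistake on
# independently performed calculations, so the rates multiply.
combined_error_rate = single_worker_error_rate * single_worker_error_rate

print(f"One worker checking: {single_worker_error_rate:.4%}")   # 0.1000%
print(f"Independent check:   {combined_error_rate:.6%}")        # 0.000100%
```

Note that the multiplication only holds if the two checks are truly independent; as the case below shows, two workers following the same confusing procedure can make the same error, and the protection largely disappears.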

However, in a case last year in Scotland, a patient received a significant radiotherapy overdose despite the use of independent checks and computer verification. In order to better understand how the error occurred, we can visually diagram the cause-and-effect relationships in a Cause Map. The error in this case is an impact to the patient safety goal, as a radiotherapy overdose carries a significant possibility of serious harm. The Cause Map is built by starting at an impacted goal and asking “why” questions. All causes that result in an effect should be included on the Cause Map.

In this case, the radiotherapy overdose occurred because the patient was receiving palliative radiotherapy, the incorrect dose was entered into the treatment plan, and the incorrect dose was not caught by verification methods. Each of these causes is also an effect, and continuing to ask “why” questions will develop more cause-and-effect relationships. The incorrect dose was entered into the treatment plan because it was calculated incorrectly (but identically) by two different radiographers working independently. Both radiographers made the same error in their manual calculations. This particular radiotherapy program involved two beams (whereas one beam is more common), so the dose for each beam must be divided by two (to ensure the overall dose is as ordered). This division was not performed, leading to a doubled calculated dose. The inquiry into the overdose found that both radiographers used an old procedure which was confusing and not recommended by the manufacturer of the software that controlled the radiotherapy delivery. While a new procedure had been implemented in February 2015, the radiographers had not been trained in it.

Once the two manual calculations were performed, the treatment plan (including the dose) was entered into the computer (by a third radiographer). If the treatment plan does not match the computer’s calculations, the computer sends an alert and registers an error. The treatment plan cannot be delivered to the patient until this error is cleared. The facility’s process at this point involves bringing in a treatment planner to attempt to reconcile the computer and calculated doses. In this case, the treatment planner was one of the radiographers who had first (incorrectly) performed the dose calculation. The radiographers involved testified that alerts came up frequently, and that any click would remove them from the screen (so sometimes they were missed altogether).

The inquiry found that somehow the computer settings were changed to make the computer agree with the (incorrect) manual calculations, essentially performing an error override. The inquiry found that the radiographers involved in the case believed that the manually calculated dose was correct, likely because they didn’t understand how the computer calculated doses (not having had any training on its use) and held a general belief that the computer didn’t work well for calculating two beams.

As a result of this incident, the inquiry made several recommendations for the treatment planning process to prevent this type of error from recurring. Specifically, the inquiry recommended that the procedure and training for manual calculation be improved, that independent verification be performed using a different method, that procedures for use of the computer be improved (including required training on its use), and that manual calculations be redone when they do not agree with the computer. All of these solutions would reduce the risk of the error recurring.

There is also a recommended solution that doesn’t reduce the risk of the error occurring, but increases the probability of it being caught quickly: outfit patients receiving radiotherapy with a dosimeter so their received dose can be compared with the ordered dose. (In this case, the patient received 5 treatments; had a dosimeter been used and checked, the error would likely have been noticed after only one.)

To view the Cause Map for this incident, please click on “Download PDF” above.

CDC provides guidance for states to respond to Zika cases

By ThinkReliability Staff

The first Zika cases related to the current outbreak were found in Brazil in May 2015, along with a dramatic increase in microcephaly in babies born in that year. (See our previous blog about the possible link – now verified – between Zika and microcephaly.) Microcephaly is a serious birth defect that impacts many children whose mothers contract Zika while pregnant.

Active Zika transmission currently exists in nearly all of South and Central America, the Caribbean, and some Pacific Islands. 934 people in the US have been infected with Zika; 287 of those infected are pregnant women. Most of these people were infected outside the country and then traveled to the US. Zika is primarily spread by mosquitos, but can also be transmitted through blood transfusion, laboratory exposure and sexual contact.

While no cases of transmission by mosquito have yet been reported in the continental US, the Centers for Disease Control and Prevention (CDC) has released a blueprint for states to respond to locally transmitted cases of Zika. A visual diagram outlining the steps to be taken from the blueprint (a Process Map) can be helpful. (To view the Process Map for the CDC’s interim Zika response process, click on “Download PDF”.)

The CDC’s plan involves four stages. The first stage is implemented during mosquito season. This stage involves surveillance for suspected locally transmitted infections (i.e. persons with “symptoms compatible with Zika virus infection who do not have risk factors for acquisition through travel or sexual contact”, with pending test results). Upon a suspected infection, state officials and the CDC should be notified. State or local officials will open an epidemiological investigation (including ongoing surveillance) and begin implementing controls, involving both reducing mosquito populations and continuing public outreach, with CDC assistance as needed.

Stage 2 occurs upon confirmation of a locally transmitted infection. At this point, notification expands to include local blood centers as well as others required by International Health Regulations. The CDC will assist with an expanded investigation, surveillance, and communication, including deployment of an emergency response team (CERT) if desired. Once Stage 2 has been reached, stand down will only occur after 45 days (3 mosquito incubation periods) without additional infections or when environmental conditions no longer permit transmission.

If there is confirmed Zika in two or more persons whose movement during the exposure period overlaps within a one-mile diameter, Stage 3 (widespread local transmission) is entered. First, local officials will attempt to determine the transmission area, the “geographic area in which multiperson local transmission has occurred and may be ongoing”. Communication, surveillance, testing and controls are enhanced and expanded. Interventions for blood safety and vulnerable populations (including pregnant women) are implemented.

Once the infection has spread outside a county, it enters Stage 4 (widespread multijurisdictional transmission). All steps taken in previous stages are expanded and enhanced. The CDC will evaluate whether local capacity is adequate for response, and will assist as needed. Stage 4 actions continue until the criteria for stand down are met.
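
A simplified sketch of the stage-escalation logic described above follows. The function name and its inputs are our own simplification for illustration; the CDC’s interim guidance contains many more conditions and considerations.

```python
# Simplified sketch of the CDC's staged Zika response, as summarized above.
# The function and its inputs are illustrative only; the interim guidance
# contains many more conditions.

def zika_response_stage(confirmed_local_cases: int,
                        overlapping_within_one_mile: bool,
                        spread_beyond_county: bool) -> int:
    """Return the response stage (1-4) for locally transmitted Zika."""
    if confirmed_local_cases == 0:
        return 1  # mosquito-season surveillance for suspected local infections
    if spread_beyond_county:
        return 4  # widespread multijurisdictional transmission
    if confirmed_local_cases >= 2 and overlapping_within_one_mile:
        return 3  # widespread local transmission within a transmission area
    return 2      # confirmed local transmission; expanded investigation

# Example: two confirmed cases whose movements overlap within a one-mile diameter
print(zika_response_stage(2, True, False))  # -> 3
```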

Based on previous experience with two other mosquito-transmitted diseases, chikungunya and dengue fever, the CDC does not believe Stage 4 will be reached within the United States. However, as Dr. Tim F. Jones, an epidemiologist for the State of Tennessee, says, “Even though the percentages and the likelihoods are incredibly low, the outcome is awful.” Risk is a function of probability and consequence. Even with a low probability, the high consequence makes the risk from Zika considerable, and worth planning for.
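
As a toy illustration of that point (the numbers here are invented for illustration, not CDC estimates):

```python
# Toy illustration of risk as a function of probability and consequence.
# The numbers are invented for illustration, not CDC estimates.
scenarios = {
    "low probability, low consequence":  (0.001, 1),
    "low probability, high consequence": (0.001, 10_000),
}

for name, (probability, consequence) in scenarios.items():
    print(f"{name}: risk score = {probability * consequence}")

# Even at the same low probability, the second scenario carries a risk score
# thousands of times higher, which is why it is still worth planning for.
```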

To view the Process Map, click on “Download PDF” above. Or, click here to view the CDC’s interim guidance.


Man Paralyzed By Medical Error Hopes to Fix System for Others

By ThinkReliability Staff

The team investigating medical errors that happened at a Washington hospital has an unusual member: the man who was paralyzed as a result of these medical mistakes.  Not only does he want to know what happened, he hopes that his design experience (he formerly designed for Microsoft) can be translated to healthcare to “make hospitals everywhere safer for patients.”

While the full analysis of his particular case is not yet complete, the information that is known can be captured in a Cause Map, a visual form of root cause analysis.  The process begins by capturing the what, when and where of the incident, as well as the impact to the organization’s goals.  In this case, treatment took place at a Washington hospital’s emergency room for a back injury sustained May 11, 2013. The interaction with the facility involved a missed diagnosis and poor communication, and eventually resulted in paralysis for the patient.  The patient safety goal is impacted due to the paralysis of the patient.  The financial goal is impacted due to a $20 million settlement against the hospital. (Part of the settlement included the hospital working with the patient on the analysis.)  The labor/time goal is impacted due to the months of rehabilitation required after the injury.

The second step of the process, the analysis, develops cause-and-effect relationships beginning with one of the impacted goals.  In this case, the patient safety goal was impacted due to the paralysis of a patient. The paralysis resulted from a spinal cord injury, which was caused by a significant back fracture.  There are times when more than one cause is required to produce an effect.  The significant back fracture was caused by an untreated hairline fracture in the back AND the patient being moved inappropriately.  If either of these things had not occurred, the outcome may have been very different.  The analysis continues by asking “why” questions of both causes.

The patient had a hairline fracture on his back that resulted from a fall out of bed (due to “luxurious sheets”) and a condition (ankylosing spondylitis), which makes the spine brittle and more prone to fractures.  Beginning on May 12, 2013, the patient visited the hospital’s emergency room four times in two weeks. The hairline fracture was untreated because it was not diagnosed during any of those visits, despite the patient’s insistence that, because of his condition, he was concerned about the possibility of a back fracture.  While the hairline fracture is visible on the imaging scan, according to the patient’s lawyers, it was missed because the scan was focused on the abdomen.  The notes from the first doctor’s visit were not documented until 5 days after the encounter.

On May 25, 2013, two weeks after the initial injury, the patient returned to the emergency room for severe pain and an MRI was ordered.  While being positioned in the MRI, the patient lost neurological function from about the neck down.  He was transferred to another hospital, which found it likely that the paralysis had resulted from being positioned in the MRI.

The patient was inappropriately moved, given his injury (which at this point was still undiagnosed and untreated).  The patient was being positioned for an MRI ordered to find the cause of his back pain (probably due to the untreated hairline fracture in his back).  Either the previous imaging scan was not reviewed by the doctors at this visit, or the scan was unavailable.  Had the imaging scan indicating a hairline fracture been available, the MRI may not have been necessary, and even if the patient had been given an MRI anyway, the staff would have been aware of the fracture and would likely have moved the patient more carefully.

However, the staff was not aware of the injury.  The patient’s repeated concern about having a back fracture went unheeded during all of his visits, and the staff appeared to be unaware of the medical information from the three previous visits, likely due to ineffective communication between providers (a common issue in medical errors).

As more detail regarding the case is discovered, it can be added to the Cause Map.  Once all the information related to the case is captured, solutions that would reduce the risk of the problem recurring can be developed and those that are most effective can be implemented.  The patient will be a part of this entire process.

To view the initial Cause Map of this issue, click on “Download PDF” above. Or click here to read more.

It’s Faster to Send a Rescue Mission to the International Space Station Than to the South Pole

By ThinkReliability Staff

Yes, you read that correctly. Says Ron Shemenski, a former physician for the station, “We were stuck in a place that’s harder to get to than the International Space Station. We know we’re on our own.” A sick astronaut on the International Space Station can jump in the return vehicle permanently parked at the station and make it back to earth in about 3.5 hours. In contrast, just to get a plane to the Amundsen-Scott South Pole research station takes 5 days – in good weather. Which is not at all the situation right now – at the South Pole it’s the very middle of winter.

This makes for an incredibly risky evacuation. It’s so risky that the scientists at the station expect to stay there from February to October, no matter what. In 1999, the on-site physician performed a biopsy on herself and administered her own chemotherapy. A scientist who suffered a stroke in 2011 had to wait until the next scheduled flight. However, winter medical evacuations have been performed twice before in the history of the station (since 1957), in April 2001 and September 2003. Those two evacuations were performed by the same company that will perform this rescue. On June 14, the National Science Foundation (which runs the station) approved the medical evacuation of a scientist there. Two flights left Calgary, Canada that same day.

What makes the evacuation so risky that there is a debate over whether or not to rescue an ailing scientist? There are multiple factors considered in the decision. These issues can be developed within a cause-and-effect diagram, presented as a Cause Map. The first step in the process is to determine the impacts to the goals that result from a problem. In this case, we will look at the problem of a scientist at the South Pole becoming ill and requiring evacuation. There is an impact to the patient safety goal due to the delay of medical treatment. There’s also an impact to the safety of the aircrew on the flights used to rescue the scientist, as well as impacts to the property/equipment and labor/time goals due to the risky, complex evacuation process.

In the analysis (the second step of the process), the impacted goals become the effects in the first cause-and-effect relationships. The delay in medical treatment for the patient (the ailing scientist) results because the required treatment is not available at the station, even though a physician and a physician’s assistant staff the clinic throughout the winter. There was also a delay while the decision to send an evacuation plane was made; in this case, a day and a half of deliberation was required. As previously discussed, planes do not normally arrive at the station during the winter – it has happened only twice in nearly 60 years. To ensure safety, the crew at the station undergoes a rigorous medical screening to prevent, as much as possible, illnesses requiring evacuation.

Medical treatment is also delayed by the time required for the plane to arrive at the South Pole, and then for the plane to return the patient to a medical treatment center. (Which center is determined by the nature of the medical issue, which has not been disclosed, but the nearest centers are thousands of miles away.) The trip to the South Pole takes at least 5 days because of the complexity of the process. It also poses a risk to the air crews making the trip. (Two planes are sent in: one for the evacuation and one to remain nearby in a search-and-rescue capacity.)

The conditions in Antarctica are the cause of many of the difficulties. The sun set at the station in March and will not rise again until September, so the plane must land without any daylight. It also has to land on packed snow and ice, which requires skis, as there are no paved runways, and the average winter temperature is -76°F (with wind chill it feels like -114°F). At those temperatures, most jet fuel freezes, so only certain planes can make the trip. (This is why they’re coming from Canada.) The planes can only hold 12-13 hours of fuel, and the last leg of the trip (across Antarctica) takes 10 hours (again, in good weather), so a few hours into the flight the crew must either turn back or commit to landing at the South Pole, regardless of conditions. Due to the desolation of the area, there’s nowhere else to land or refuel.
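
A back-of-the-envelope sketch of that point of no return is below. The fuel endurance and leg time are the approximate figures quoted above; splitting the endurance evenly between outbound and return is our simplifying assumption (a real flight plan would also carry reserves).

```python
# Back-of-the-envelope point-of-no-return calculation for the final leg to the
# South Pole. Endurance and leg time are the approximate figures quoted above;
# the even out-and-back split is a simplifying assumption (real plans carry reserves).
fuel_endurance_hours = 12.5   # the planes hold roughly 12-13 hours of fuel
leg_time_hours = 10.0         # last leg across Antarctica, in good weather

# Turning back t hours into the flight costs another t hours to get home, so the
# latest safe turn-back point is half the total endurance.
turn_back_limit_hours = fuel_endurance_hours / 2

print(f"Latest turn-back point: about {turn_back_limit_hours:.1f} hours into the flight")
print(f"After that, the remaining {leg_time_hours - turn_back_limit_hours:.1f} hours of the leg "
      "must be flown to the South Pole, whatever the conditions.")
```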

Currently one plane has made it to the South Pole, where it will wait for at least ten hours to allow the flight crew to rest and monitor the weather. The second plane remains at the Rothera Research Station, on Adelaide Island on the edge of Antarctica. Check for updates by clicking here. View the one-page downloadable Cause Map by clicking “Download PDF” above.

 

The end of the Guinea worm?

By Kim Smiley 

Guinea worm disease is poised to become the second human disease to be eradicated (after smallpox). In the 1980s, there were millions of cases of Guinea worm disease each year and the number has plummeted to only two confirmed cases so far in 2016, both believed to have been contained before the disease had a chance to spread. This accomplishment is particularly impressive considering that there is no cure or vaccine for Guinea worm disease. In fact, the most effective “cure” for the disease used today is the same one that has been used for thousands of years – to wrap the worm around a stick and slowly pull it out. (Read our previous blog “Working to Eradicate a Painful Parasite” to learn more about the problems caused by Guinea worm disease.)

So how has this horrible disease been fought so effectively?  To understand how the cycle was broken, we first need to understand how the disease spreads.  (Click on “Download PDF” to see a Process Map of the Guinea worm lifecycle.) The Guinea worm is a human parasite that spreads from host to host through the water supply.  The (rather disgusting) lifecycle begins with Guinea worm embryos squirming and wiggling in a freshwater pond, hoping to attract the attention of unsuspecting water fleas.  Once consumed by a water flea, the Guinea worm embryos drill out of the water flea’s digestive tract, move around the body cavity and feed on the water flea.  When a human then drinks water containing an infected water flea, the lifecycle continues.

The water flea is dissolved by digestive juices in the human’s stomach and the Guinea worm embryo drills out of the intestines and crawls into the abdominal blood vessels, remaining in the body for several months until it reaches sexual maturity.  If the human is unlucky enough to be hosting both a male and a female Guinea worm, the parasites will mate.  The male then dies, and millions of embryos grow in the female.  The female worm will usually make her way to the host’s leg or foot, pierce the skin and release an irritant that creates a painful blister.

Human hosts will often put the fiery blister into water to soothe the pain.  The female worm senses the water and releases thousands of embryos from her mouth.  She doesn’t release all her embryos at once, but will continue to release embryos when she senses water over a period of time.  If the embryos happen to land in a pond with water fleas, the whole painful process can start anew.

Once the lifecycle of the Guinea worm was understood, communities and aid organizations were able to use the information to disrupt the lifecycle and prevent the Guinea worm from spreading.  Some aid organizations helped provide access to clean drinking water or straws with filters that removed water fleas and prevented Guinea worm infections. In other places, the Guinea worm larvae were killed by treating the water with larvicide. But the most effective solution has been simply keeping infected people out of the water supply.  Once most people understood the consequence of putting Guinea worm blisters in drinking water, they simply (if painfully) avoided the ponds used for drinking water, but some communities also implemented new laws and fines or posted guards at water holes to ensure that no infected individuals went into the water. These methods have proven very effective, and the Guinea worm is now one of the most endangered animals on the planet.

The key to fighting the Guinea worm was education. The most effective solutions were simple and low-tech. No modern vaccine or modern medical knowledge was needed to prevent Guinea worm infections, just knowledge about how the disease spread. Guinea worms have been infecting people for millions of years (they have even been seen in Egyptian mummies), and the lifecycle could have been broken long ago if it had been better understood.

Government advisory group provides recommendations to help those with hearing loss

By ThinkReliability Staff

Controversial recommendations from a government advisory group studying hearing treatment call for hearing aids to be treated more like glasses. In order to understand why these recommendations were made, we’ll look at the problem of untreated hearing loss in a Cause Map, a visual diagram of cause-and-effect relationships that forms a root cause analysis.

The first step in any problem solving method is to determine the problem. In this case, it’s untreated hearing loss. Using the Cause Mapping method, we’ll go a step further and document the impacts to the goals as a result of the problem. Untreated hearing loss impacts the public safety goal in several ways: it can lead to social isolation, depression, cognitive dysfunction, and dementia. The patient service goal is impacted because patients are unable to hear properly. The financial goal is also impacted because of the out-of-pocket expense of many hearing aids.

The next step is to analyze the issue by capturing the cause-and-effect relationships. We do this by beginning with one of the impacted goals. Cognitive dysfunction and dementia have been found to result from hearing loss because of the additional strain on the brain while it attempts to understand garbled sound in patients that are unable to hear properly. Additionally, reduced auditory input may cause parts of the brain to shrink. We can add in the other impacted goals as appropriate. Social isolation and depression result from the inability to fully participate in activities due to being unable to hear properly. This itself is an impact to the goals.

Patients are unable to hear properly when they suffer from hearing loss and they are not using hearing aids. About 30 million Americans suffer from hearing loss. In many cases, this is an effect of aging, but the causes of hearing loss were not discussed in detail by the advisory group. The group found that only a small number of those Americans suffering from hearing loss actually used hearing aids.

Some patients are unaware they are suffering from hearing loss. Hearing evaluations are not part of routine checkups, including annual Medicare wellness visits. For patients who know of their hearing loss, the out-of-pocket expense of hearing aids may be keeping them from treating it. The cost of hearing aids and fitting services averages about $4,700. Insurance reimbursement is limited, and while Medicare covers diagnostic hearing tests, it doesn’t pay for hearing aids. Patients’ ability to shop around or switch providers can also be limited: in many cases, patients don’t have access to their hearing tests and are unable to give their test results to a provider other than the one who performed the diagnostic testing. Hearing devices and services are often bundled, making it difficult to compare costs, and some devices can only be programmed by certain providers, limiting access and increasing the cost of servicing.

The advisory group has recommended that hearing aids be sold more like eyeglasses: the results of a hearing test would be given to the patient, who could then choose to get a prescription hearing aid from the provider of their choice or, for mild hearing problems, an over-the-counter wearable. (There are various products like this ranging in price from $50 to $500, but the FDA doesn’t consider them hearing treatments.) This would require action by both the FDA and hearing evaluation providers.

The group also recommends hearing aid providers itemize invoices and disclose if the aids can only be programmed by certain providers. Additional recommendations are for Medicare and other insurance providers to evaluate coverage of hearing aids and related care, and for the scientific/medical community to perform more research into the physical health effects of hearing loss.

To view the impacted goals, Cause Map including cause-and-effect relationships, and solution recommendations related to untreated hearing loss, please click “Download PDF” above.

Millions of sippy cups recalled

By Kim Smiley

On May 27, 2016, it was announced that 3.1 million Tommee Tippee Sippee spill-proof cups were being recalled because of concerns about mold. The issue came to light after consumers called the company to complain about finding mold in children’s cups and several alarming photos of moldy cup valves were posted on the company’s Facebook page, some shared thousands of times. There have been more than three thousand consumer reports about mold forming in the cup valves, including 68 cases of illness that are consistent with consuming mold.

A Cause Map, a visual root cause analysis, can be built to better understand this issue. The first step in the Cause Mapping process is to fill in an Outline with the basic background information, including how the issue impacts the overall goals. In this case, the safety goal is impacted because 68 cases of illness have been reported. The regulatory goal is impacted by the recall of the cups, and the economic goal is impacted because of the high cost associated with recalling and replacing millions of cups. The time required to investigate and address the issue can be considered an impact to the labor/time goal. Additionally, the customer service goal is impacted because more than 3,000 consumers have reported mold in their sippy cups and because of the negative social media attention.

The next step in the Cause Mapping process is to build the Cause Map itself. The Cause Map is built by asking “why” questions and visually laying out the answers to show the cause-and-effect relationships. Understanding the many causes that contribute to an issue allows a broader range of solutions to be considered, rather than focusing on a single “root cause” and solving only one issue. In this example, the mold is growing in the one-piece valve used in this model of cup. The valves remained moist, likely because they were not allowed to dry between uses, and they were not cleaned frequently enough to prevent mold growth. Many consumers have complained that it is very difficult or even impossible to adequately clean the cup valve, which has contributed to the mold issue. In addition to the growth of the mold, one of the reasons children have gotten sick is that it is hard to see the mold; caregivers are unaware that the cups are moldy and continue to use them. (To see how these issues might be captured on a Cause Map, click on “Download PDF” above.)

The final step in the Cause Mapping process is to develop and implement solutions that will reduce the risk of the problem recurring. In this case, all cup designs that use the single one-piece valve are being recalled and replaced with either a trainer straw cup with no valve or a sippy cup with a newly designed two-piece valve that is easier to clean. The new two-piece valve comes apart in a way that should also make it much easier to identify a potential mold issue, which should reduce the likelihood that a child will ingest mold. (If you think you may own one of these cups, you can get more information about how to get a replacement here.)

One of the interesting pieces of this case study is that the company not only has to address the technical issue with the valve design, it also has to work to rebuild consumer trust. Consumers, especially when buying products for small children, will avoid a company if they don’t believe it takes safety concerns seriously. This company took a beating online from outraged parents in the months leading up to the recall. In addition to designing a valve that will be less likely to harbor mold, it benefited the company to ensure the new design made it easy for parents to see that the cup valve was mold-free and safe. The company has also worked to spread information about the recall and tried to make it easy for consumers to get their recalled cups replaced. How a recall is handled has a huge impact on how consumers respond to the issue. A recall that isn’t handled well, on top of an issue that has already shaken consumer trust, can quickly spell disaster for a company. Consumers can be much more forgiving of an issue if a company responds quickly and if any necessary recalls are done as quickly and effectively as possible. It will be interesting to see how this company weathers the storm now that the cups have been recalled and the mold issue addressed.