Omnia Health is part of the Informa Markets Division of Informa PLC


Medical Errors: Prevention is Possible



To err is human, as the Institute of Medicine report stated in 1999, but to not put in place processes that can prevent human errors from becoming fatal is inhumane. Together, that’s what we need to do: hospitals need to implement known processes, of which there are more than 30, to avoid killing nearly five million people every year in hospitals globally.

What if you had the opportunity to save a life? The life of a loved one, a close friend, or even a stranger?

What if I told you it’s possible to reach zero preventable deaths in hospitals by 2020 simply by making a commitment to zero and implementing actionable patient safety processes? By making a public commitment to zero, implementing a patient safety-focused culture, or even sharing your actions and patient safety processes, you could save not just one life, but thousands.

What if I told you the only way to stop preventable patient deaths in hospitals, the 14th leading cause of death around the world, is if you made patient safety your personal responsibility? The latest estimate is that over 4.8 million people are dying annually; that equates to over 13,000 people dying each day; that’s 45 fully loaded 787s crashing every day and killing all of their passengers!
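A quick back-of-the-envelope check of these figures (a minimal sketch; the seat count of roughly 290 per fully loaded 787 is an assumption used for illustration only, as actual aircraft configurations vary by airline):

```python
# Sanity-check the mortality arithmetic cited above.
# ASSUMPTION: ~290 passengers on a fully loaded 787 (illustrative only).

ANNUAL_PREVENTABLE_DEATHS = 4_800_000  # latest estimate cited in the text
SEATS_PER_787 = 290                    # assumed, for illustration

deaths_per_day = ANNUAL_PREVENTABLE_DEATHS / 365
equivalent_787s = deaths_per_day / SEATS_PER_787

print(f"{deaths_per_day:,.0f} deaths per day")              # over 13,000
print(f"~{equivalent_787s:.0f} fully loaded 787s per day")  # ~45
```

Run as-is, the script reproduces the “over 13,000 per day” and 45-aircraft figures quoted above.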

According to Dr. Tedros Ghebreyesus, the Director-General of the World Health Organization, “the reality is that every year, millions of patients die or are injured because of unsafe and poor-quality healthcare. Adverse events are now estimated to be the 14th leading cause of death and injury globally. That puts patient harm in the same league as tuberculosis and malaria. There are an estimated 421 million hospitalisations in the world every year, and on average, one in 10 of those results in adverse events. This is a frightening statistic. Especially when we know that at least half of adverse events could be prevented.”

So, what can we do to prevent medical errors and preventable patient deaths in hospitals?

First, join us in our fight. Our mission at the Patient Safety Movement Foundation is to eliminate preventable deaths in hospitals by 2020. We are an action-oriented organisation. We are proactively collecting commitments from hospital systems, open data pledges from healthcare technology companies, and ‘Commitment to Action’ letters from key associations, professional organisations, advocacy groups, and non-profits who are also working day in and day out to improve patient safety. We are growing stronger and closer to reaching zero preventable deaths each year, together. I urge you to join us and make a commitment to improve patient safety. It’s free. 

Second, take action. Research shows that evidence-based processes can be put into place, which prevents medical error and reduces preventable harm. Since I launched the Patient Safety Movement Foundation in 2012, we have teamed up with some of the world’s leading medical experts, hospital administrators and patient advocates to share best practices and the latest evidence-based solutions to the leading causes of preventable harm in hospitals. Today, we have 31 Actionable Patient Safety Solutions (APSS) that cover the 16 leading causes of preventable patient death, which include hand hygiene, healthcare-associated infections and more. Close to 5,000 hospitals across 44 countries have implemented these APSS or their own novel solutions to reduce preventable mortality. Last year, between 81,533 and 200,000 lives were saved as a result of these hospitals’ patient safety efforts. 

We offer the APSS at no cost. They are free to download and are written in a checklist format to allow hospitals to audit their systems and identify areas for improvement. I encourage you to use them or any other evidence-based processes to protect patients and clinicians. The key is to implement processes and learn from them and improve them.

Third, implement a culture of safety and begin tracking cases of preventable harm. For the last six years, we have worked in concert with leading medical experts around the globe to identify the leading causes of preventable patient harm, from handoff communications to delayed detection of sepsis. Remarkably, the leading cause of preventable patient deaths is when hospitals lack a culture of safety. In fact, a 2017 review of patient safety in the Arab countries identified punitive response to error as a serious issue that needs to be improved. Healthcare professionals in the Arab countries tend to think that a ‘culture of blame’ still exists, which prevents them from reporting incidents.

Studies report that hospital departments where staff have more positive patient safety culture perceptions have fewer adverse events. So, what does a culture of safety look like? A strong safety culture promotes the identification and reduction of risk as well as the prevention of harm. A poorly defined and implemented culture of safety may often result in concealing errors and therefore a failure to learn from them. According to the Institute of Medicine, “the biggest challenge to moving toward a safer health system is changing the culture from one of blaming individuals for errors to one in which errors are treated not as personal failures, but as opportunities to improve the system and prevent harm.”

Hospitals like the United States’ Parrish Medical Center have seen dramatic improvements as a result of their culture of safety. The hospital is consistently rated “A” by the Leapfrog Group, was named #1 Safest Hospital by Florida Consumer Reports and won the first-ever five-star Hospital Ranking by the Patient Safety Movement Foundation. Parrish Medical Center has put action behind its culture of safety by continuously tracking and monitoring cases of preventable harm, and this measuring and monitoring has dramatically reduced it. For example, they’ve achieved zero cases of ventilator-associated pneumonia in 12 years, one catheter-related UTI in 10 years and one central line-associated bloodstream infection (CLABSI) in the past ten years.

Finally, start now and start somewhere! Hospitals are proving that zero is possible. We’re already seeing hospitals getting to zero deaths in certain areas such as healthcare-associated infections. For example, like Parrish Medical Center, Tri-City Medical Center in San Diego, California, recently celebrated seven years of zero central line-associated bloodstream infections (CLABSIs) in its neonatal ICU. Intermountain Healthcare, based in Salt Lake City, Utah, hasn’t seen a single catheter-associated urinary tract infection in its 160-bed LDS Hospital in six months. The common thread among these and other hospitals with remarkable patient safety outcomes is that they put systems in place to improve patient safety processes while creating a culture focused on what’s best for the patient.

And the positive momentum is growing. On November 15, we partnered with the Dubai Healthcare City Authority for the first conference to present regionally-relevant patient safety initiatives and models from the UAE’s health sector. The Dubai Healthcare City Best Practice Conference 2018 called on hospitals and clinics in the UAE to share their applied patient safety best practices to help advance a culture of safety. The conference drove DHCA’s commitment to bring the Patient Safety Movement to the Middle East and reduce the number of preventable deaths in hospitals to zero by 2020. 

The conference had three categories – Infection Control and Medication Management; Advancing a Culture of Safety; and Enhancing a Positive Environment of Care. These categories have been identified as some of the leading patient safety challenges facing hospitals today. DHCA was the first group in the Middle East to make a public commitment through the PSMF to improve its culture of safety. By gathering to focus on patient safety and share best practices at this conference, they set an example for the world that reaching ZERO is possible. For details, log on to https://www.dhcr.gov.ae/en/DHCC-Best-Practice-Conference

Zero preventable patient deaths is possible, but it is up to you, not the person on your right or your left, but you. Act now!

Continuous and Flash Glucose Monitoring: Effective Diabetes Management Strategies



Glucose monitoring is a core component of a successful management strategy for people with diabetes, especially for those who are insulin-treated. It facilitates intensification of insulin therapy, with a subsequent reduction in diabetes-related complications, while minimising the risk of hypoglycaemia. Since 1971, when the first glucose monitor was used, the most common method of glucose monitoring has been the use of intermittent capillary blood glucose monitoring using standard finger-prick methods. This has revolutionised diabetes management in several ways. It allows patients to immediately detect and treat hyperglycaemic or hypoglycaemic excursions; it facilitates change in patients’ lifestyle by demonstrating the effect of lifestyle activities on glycaemia; and it allows therapy adjustment to achieve target HbA1c level in the long-term. 

There are many advantages to this method of testing. It is fast, accurate, portable, simple and cost-effective. Devices used for self-monitoring of blood glucose (SMBG) have evolved, with developments allowing for improved accuracy, reduced size, memory function, reduced required blood volume, rapid analysis, the ability to test for blood ketones and bolus advisor integration (Smart SMBG).

There is evidence for improvement in glycaemic control with increased frequency of SMBG in patients with type 1 diabetes. However, SMBG only provides a snapshot of the glucose profile at the point of testing. It therefore misses important information about the magnitude, direction and duration of glycaemic excursions. This can be crucial at times when the patient is unable to test, such as while driving, exercising or sleeping. Furthermore, the procedure is invasive and seen by many patients as painful, which can result in reduced compliance with the recommended frequency of monitoring, with a subsequent negative impact on diabetes control.

Continuous Glucose Monitoring (CGM)

The emergence of CGM technology has addressed an important drawback of SMBG by providing patients and healthcare professionals with continuous information about the glucose profile. A CGM system comprises two essential components: a body-worn glucose sensor and an electronic unit for signal processing and wireless data transmission. Some CGM systems also include a unit to display glucose values in real time, although mobile phones have taken over data display in some of the new-generation CGM systems.

Glucose biosensors combine a glucose recognition component with a physicochemical detector. They can be classified according to sensing technique, level of invasiveness or target biofluid (blood or interstitial fluid). These systems can either display glucose values in real time (RT-CGM) or store glucose data for retrospective analysis by healthcare professionals (blinded CGM). Real-time devices display the glucose value accompanied by a trend arrow showing the direction and magnitude of the rate of change. These devices also feature an alarm function for when the glucose level is outside a pre-determined range or when a hypoglycaemic event is predicted. In contrast, the FreeStyle Libre Flash Glucose Monitoring System (FGM) is a relatively new glucose monitoring system in which glucose data are accessed by actively scanning a reader over the sensor rather than being continuously displayed in real time.

Research evaluating the effectiveness of CGM technology is extensive. It has studied the effect of CGM on several glycaemic outcomes including effect on HbA1c, hypoglycaemia measures and glycaemic variability measures. It has also studied non-glycaemic outcomes including effect on quality of life. The effectiveness of CGM has been evaluated in different settings (ambulatory, inpatient and in intensive therapeutic unit (ITU)) and in different types and subgroups of diabetes. 

However, several confounding factors need to be considered while evaluating the CGM evidence. As the CGM is a diagnostic tool, its effectiveness relies on effective translation of the CGM data into an effective therapeutic intervention that will eventually impact the outcome. This effective translation depends on patient’s training, skills and compliance. It also depends on the experience of the diabetes team and the level of support provided to patients. Therefore, some of the CGM studies might not only evaluate the use of CGM and its accuracy, but also evaluate factors related to patient and diabetes team interaction with the CGM. Furthermore, CGM cannot be investigated in a double-blind manner. Therefore, the best possible evidence can be obtained from large-scale open-label randomised controlled crossover studies, where subjects act as their own control. Another important factor to consider when evaluating the CGM evidence is the rapid development in CGM technology. The continuous development in CGM sensor fabrication and algorithms used for glucose data analysis has resulted in significant improvement in CGM accuracy. Therefore, studies conducted a few years ago using older generations of CGM systems might have shown different results if they were conducted using newer generations of CGM with enhanced accuracy. 

Recent Advances in CGM 

Over the last decade, CGM technology has gathered significant pace. This started in 2008 with the publication of the landmark Juvenile Diabetes Research Foundation CGM study, which demonstrated the value of continuous use of CGM technology in improving glycaemic control and reducing HbA1c. However, several limitations affected the uptake of the technology. This was evident from T1D Exchange data demonstrating that CGM technology was being used by only 6.5 per cent of people with type 1 diabetes in the U.S., despite reimbursement, and that among individuals who had used a CGM, two-thirds stopped using it. Some of the important limitations were related to the inaccuracy of available CGM systems at the time and the relatively high cost. However, there has been steady improvement in CGM accuracy in recent years, with subsequent changes in licensing by regulatory bodies that allowed non-adjunctive use (the ability to rely on the system for self-adjustment of insulin doses without the need to confirm with SMBG first).

Improvement in CGM accuracy has also been accompanied by a reduced frequency of sensor calibration, or the removal of the need for calibration altogether. Both the Dexcom G6 and the FreeStyle Libre FGM are factory calibrated and do not require calibration by the patient. Besides reducing the patient’s burden by reducing the need for SMBG testing, calibration-free CGM systems avoid errors that can result from calibrating against erroneous data from an inaccurate SMBG test.

On the basis of available evidence, RT-CGM has been used therapeutically for further optimisation of subcutaneous continuous insulin pump therapy regimen if the target HbA1c has not been achieved or for patients with recurrent disabling hypoglycaemia, those with hypoglycaemia unawareness or debilitating fear of hypoglycaemia. However, DIAMOND and GOLD randomised controlled studies have recently demonstrated the positive impact of CGM on markers of glycaemia in patients using multiple daily insulin injections. This has challenged the clinical pathway that requires the use of insulin pump therapy before CGM is considered.

Combining the benefits of CGM and those of insulin pump therapy (sensor augmented pump therapy) was evaluated in a number of studies. This has paved the way to closed-loop systems, where an algorithm uses input from CGM data to control insulin delivery via the insulin pump. There has been extensive evidence from research studies showing the positive impact of use of closed-loop systems on glycaemic markers. In 2016, Medtronic received US Food and Drug Administration (FDA) approval for its first hybrid closed loop system (MiniMed® 670G system) in the United States. 

Flash Glucose Monitoring (FGM) 

Flash glucose monitoring is sometimes regarded as a separate entity from CGM. It differs from RT-CGM in two main aspects. First, it does not have an alarm function (although this will change with the next-generation FreeStyle Libre 2); second, it requires active scanning of the sensor unit by a reader or a cell phone to access glucose data, rather than a passive, continuous display of glucose updated at five-minute intervals.

The existing flash glucose monitoring system has the advantages of good accuracy, factory calibration, a two-week sensor lifetime, good user acceptance and relatively low cost.

The value of FGM has been demonstrated in both type 1 diabetes (The IMPACT study) and in type 2 diabetes (the REPLACE study). While there was minimal change in HbA1c, there was significant reduction in hypoglycaemia and markers of glycaemic variability, and positive impact on patients’ reported outcomes. 

Future of CGM

Until the dream of developing a cure for type 1 diabetes is realised, diabetes technology and automated insulin delivery represent the best available strategy for controlling diabetes and reducing the burden of the disease. This could not have been realised without the development of CGM and the significant advances that have been achieved in this field recently. However, the development of an accurate, non-invasive, affordable CGM system remains an important goal for people with diabetes and a significant challenge for research groups and industry.

Strategies for Hospitals During Mass Casualty Events



It’s not a matter of ‘if’ but of ‘when’ a mass casualty situation, either human-made or natural, will happen. This can happen anytime, anywhere in the world. In 2015, for instance, 346 disasters were reported worldwide, affecting 98,580,793 people and claiming more than 20,000 lives. These events carried a huge economic bill of US$66.5 billion.

The National Emergency Medical Services Information System (NEMSIS; Salt Lake City, Utah, U.S.) defines Mass Casualty Incident (MCI) as “an event which generates more patients at one time than locally available resources can manage using routine procedures or resulting in a number of victims large enough to disrupt the normal course of emergency and healthcare services and would require additional non-routine assistance”. 

Most developed countries reserve a regular annual budget for disaster preparedness simulation activities in the healthcare system. During the last decade, emergency departments have steadily become busier and more crowded; at the same time, MCIs have become more frequent and devastating. This scenario has been recognised as a threat to MCI preparedness. Recent mass casualty incidents all over the world illustrate the unique challenges that such occurrences pose to normal hospital operations. The sudden, unexpected patient surges of an MCI can overwhelm hospital resources, staff and space.

Adequate planning at an organisational level is the key to optimising the response to unexpected events. “Failing to plan is planning to fail” is a particularly relevant aphorism for managing mass casualty incidents. Due to the recent surge in MCIs, governments and healthcare systems have a special focus on preparing for MCI management. Still, in a recent survey conducted by the American College of Emergency Physicians (ACEP), nearly all participants said their “emergency departments are not fully prepared for patient surge capacity in the event of a natural or man-made mass casualty incident”.

This review aims to provide hospitals with an overview of MCI management principles, mainly the pre- and post-MCI phases. The best practices of planning and preparation are evolving, and it is important to update current practices to provide a relevant action plan.

Hospital Plan for Mass Casualty Incidents

An MCI plan is an agreed set of action plans used to prepare for, respond to and recover from such emergency situations. An MCI plan should be generic enough to be applicable to multiple risks, yet specific enough that each individual in the hospital knows their roles and responsibilities.

There are three main phases of an MCI plan:

  1. Pre-MCI Phase 
  2. Response to MCI 
  3. Post MCI Phase 

1. Pre-MCI Phase

This phase aims to enhance hospital preparedness during MCI planning. 

A. Draft a plan to deal with MCI:

MCI plan development is needed at all hospital levels to ensure that common goals are set and methods are devised to achieve a favourable outcome in demand-critical circumstances. Developing such a plan requires active participation from the pre-hospital team (EMS) and hospital management, including both clinical and non-clinical teams. The MCI plan should result in a clear definition of the roles and responsibilities of all the professionals involved. Joint Commission International (JCI) recommends that “the hospital develops, maintains, and tests an emergency management programme to respond to emergencies, epidemics, and natural or other disasters that have the potential of occurring within their community”. Therefore, it is very important that the MCI plan of a hospital details the process, planning, and policies.

An MCI plan should include: 

  • Details of MCI committee 
  • Control and command centre 
  • Triage and patient management 
  • How to deal with surge capacity 
  • Equipment and supplies 
  • Communication channels within hospital and outside 
  • Security and staff protection

B. Vulnerability and Capacity Assessment: 

The purpose of vulnerability and capacity assessment (VCA) is to identify hazards or threats and their possible effects on communities, activities or organisations, and their capacity to prevent and respond to MCIs. It is vital that hospitals identify such threats at the local level, as this allows institutions to prioritise their preparations and facilitates a rapid, relevant response specific to an MCI. However, it is not always possible to discover all the hazards in a community.

C. Training and Education:

Making a policy and an action plan is not enough to deal with MCIs. It is very important that the professionals who are part of the MCI team are appropriately trained and educated. This can be done in multiple ways. It also involves informing medical professionals of the appropriate responses to different types of emergencies. Training and education strategies may include workshops, tabletop exercises, courses, seminars, self-directed learning, individual tuition exercises, formal education programmes, conferences, and lectures.

D. Monitoring and Evaluation: 

Once the plan has been formulated, the next step is to devise MCI simulation exercises. An MCI exercise is an instrument that helps to train, assess and improve performance in protection, response, and recovery capabilities in a risk-free environment. Simulation exercises not only help to validate plans, policies and interagency agreements, but also improve communication, clarify roles and identify shortcomings in preparation.

In developed countries, it is common to run regular MCI simulation exercises. It is also common practice for hospitals to involve specialised training agencies to execute and evaluate such exercises. It is very important that the objectives of such exercises are clear. After the exercise, these specialised agencies issue an evaluation report containing recommendations to improve the process. The purpose of such simulation exercises is to improve MCI preparedness over time. Evaluation reports should be objective, clear, reliable and credible.

Hsu et al. (2004) performed a systematic literature review and concluded that due to the lack of objective data (e.g., the data of hospital responses to actual MCIs are rarely made available to the public), the effectiveness of MCI drills as a tool for hospital MCI preparedness is difficult to determine. Verheul et al. (2018) found that researchers have been unable to assess whether the members participating in MCI exercises in the Netherlands learn from their participation. A few other studies echo the same conclusions. Therefore, a hospital’s MCI simulation exercises should be validated and should use objective tools to measure learning effects.

Another important point of debate about simulation exercises is that there should be no-notice exercises. A no-notice exercise reduces the element of bias and reflects the true surge capacity and preparedness of an organisation. Hence, it is more useful than a typical planned exercise, which is usually highly choreographed. Planned exercises not only lack realism but also tend to limit the size of the surge.

Wexham et al. (2017) reported that they conducted a successful no-notice exercise that can be used by any hospital to assess its crisis surge capacity in the aftermath of a large-scale MCI. The U.S. Department of Health and Human Services (Washington, DC, U.S.), Office of the Assistant Secretary for Preparedness and Response (ASPR), in conjunction with the Hospital Preparedness Program (HPP), commissioned RAND Corporation (Santa Monica, California, U.S.) to develop this exercise.

2. Response to MCI: 

Hospital response to MCI is the time to put all training and practices into live action by following the policies and guidelines developed in the planning phase. 

A. Notification:

Hospitals usually get notified about an incident by pre-hospital agencies and police. However, it is not always possible to anticipate the scale of the incident. Timely and accurate information will help organisations to develop a proportional response. 

B. Activation of MCI Plan:

Organisations usually pre-nominate individuals who assess the information about the incident and have the authority to activate the MCI plan. There are different levels of activation, ranging from standby up to level 3. Once the MCI plan has been activated, routine activities should be suspended until the MCI plan has been deactivated.

C. Patient Triage & Management: 

During an MCI, triage at the hospital often becomes a bottleneck as too many critical patients compete for limited resources, so it is very important that a senior emergency physician takes this responsibility. Primary triage should be performed on patients’ arrival, in a dedicated area much larger than the usual triage area. Patients should be considered for secondary triage once some interventions have been made or more resources have become available. Tertiary triage is the triage category least familiar to hospital staff; it is usually performed on patients who have received advanced or ongoing interventions, and it usually takes place on the wards. Emergency room (ER) management should focus on resuscitating and stabilising patients. Patients should be moved out of the ER once stable enough for definitive or damage control management, which depends upon available resources.
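The three triage tiers described above can be sketched as a simple classification (a hypothetical illustration of the staging logic only, not a clinical protocol; the class and field names are invented for this sketch):

```python
# Minimal sketch of the three MCI triage tiers described above.
# Illustrative only -- not a clinical protocol; the staging follows the text:
# primary on arrival, secondary after initial interventions, tertiary on wards.

from dataclasses import dataclass, field

@dataclass
class Patient:
    pid: str
    interventions: list = field(default_factory=list)  # interventions received
    on_ward: bool = False                              # admitted to a ward

def triage_stage(p: Patient) -> str:
    """Return the triage tier a patient would typically fall under."""
    if p.on_ward:
        return "tertiary"    # advanced/ongoing interventions, on the wards
    if p.interventions:
        return "secondary"   # re-assessed after initial interventions
    return "primary"         # on arrival, in the dedicated triage area

arrival = Patient("P1")
stabilised = Patient("P2", interventions=["airway"])
admitted = Patient("P3", interventions=["surgery"], on_ward=True)

print([triage_stage(p) for p in (arrival, stabilised, admitted)])
# → ['primary', 'secondary', 'tertiary']
```

The point of the sketch is that each tier is triggered by the patient’s progress through the system, not by injury severity alone, which is why a senior emergency physician should own the decision.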

D. Hospital Security: 

MCI plan activation leads to security augmentation, which not only restricts the movement of staff, patients and the public but also provides a higher level of staff protection. These security arrangements are especially important in biological, radiological and infectious emergencies. Inside the hospital, it is optimal to control entrances electronically, whereas outside the hospital, help from a security agency or the police should be sought.

E. Communication: 

Communication is of utmost importance for the smooth running of MCI plans. A control and command system should be in place. Relevant additional staff should be informed to come in for help. The organisation can use different means to communicate with the public, other organisations and additional staff for help. 

F. Deactivation of MCI Plan:

During an MCI, it is important to nominate a person who begins to plan for the recovery phase while the MCI plan is still active. This planning includes staff support, re-supply, discharge planning, patient transfers and demobilising surplus staff to return the facility to its daily operating level once the MCI plan is deactivated. MCI plan deactivation is a very important step. The incident commander and supporting staff should only take this decision after proper assessment.

3. Post MCI: 

A. MCI Response Review: 

Hospitals have a central role in MCI response. Once an MCI or a simulation exercise is over, it is important to analyse the response. An institution’s functioning during an MCI differs significantly from its functioning in a routine environment, which puts a lot of pressure on front-line workers. Hence, after every such event a detailed review should be conducted, which will help the organisation learn from both the strengths and weaknesses of its MCI response. Post-MCI reviews have the potential to enhance the resilience and sensitivity of an organisation.

B. Longer Term Demands: 

During an acute MCI, clinical care is focused on resuscitation and damage control. Once the acute patient influx is over, it takes days to weeks for an organisation to return to baseline. All the admitted patients will need further definitive treatments, which have a significant impact on day-to-day operations of the hospital. Hospitals will need to open more operating rooms and ICU beds to deal with the acute influx. 

The increased logistic requirements need to be met for days and should be continuously monitored. After an MCI, rehabilitation also starts early, involving health professionals including doctors, nurses, physiotherapists and occupational therapists. Depending on the type of incident, patients may need frequent and long-term follow-up, which adds work on a continuous basis. Traumatic experiences can lead to mental health issues in patients and families; hence, continuous support is required to identify the people at risk.

C. Staff Support:

The most important lesson to emerge from recent MCIs is the effect of traumatic events on the physical and psychological health of medical staff. Healthcare staff may be exposed to things they have never seen before. Hence, the organisation has to be prepared to deal with the aftermath of such events. Medical staff’s responses to these kinds of incidents vary considerably: some will recover quickly, while others take months to recover and experience profound effects on their lives. Therefore, immediate staff support is of extreme importance, along with a planned delayed-interval response. Staff support is a critical component of medium- and long-term post-MCI planning.

MCIs provide continuous challenges to healthcare systems. These kinds of incidents place heavy demands on the system and can create suboptimal conditions. So it is very important to learn from national and international incidents and keep updating local MCI plans.

Description and Analysis of Emergency Department Demands, Constraints and Consequences


doctors rushing in the halway

Emergency medicine has developed rapidly over the last 50 years with notable successes in developing purpose-built units, training programmes and postgraduate examinations with consequent improvements in the morbidity and mortality outcomes for millions of patients.

However, these departments, systems and processes have developed in a rather piecemeal manner; seldom have single departments, let alone whole systems been built, resourced and managed in an optimal manner. For the few that have, the inevitable increase in attendances and admissions plus advancements in medical science have ensured that even they have become increasingly challenged.

Comparisons of the emergency care systems of various countries have been published and the conclusions disseminated widely. Hence it is recognised that the systems of emergency care in North America, Australasia, the UK and Ireland are substantially different from those in many mainland European countries. Such comparisons are of value but seldom lead to system changes. 

Surprisingly we are often blind to the significant differences within our own systems. In England there are over 180 Emergency Departments operating within 130 hospitals or groups. The scope to better analyse variation between these departments is considerable, yet until recently has not been systematically undertaken. Moreover, such analysis can illuminate key constraints and opportunities, which are more likely to resonate with patients and staff than international comparisons. Such intra-system variations are also more likely to drive improvements by highlighting unwarranted variation. 

In determining how best to use metrics to analyse the performance of emergency departments and illuminate comparisons, it is essential to avoid both simplistic reduction and meaningless complexity. 

Work undertaken by a number of national bodies in England has identified over 1,000 potential metrics, of which 40 appear to be the most discriminatory. For the purposes of this article these metrics are subdivided into four key domains: Demand, Capacity, Flow and Outcomes. Importantly, this is not a standardisation methodology. Indeed, inherent in the analysis is a recognition that often there are good reasons for variations in both demand and outcomes. 

ED Demand 

To properly appreciate the performance of an emergency department (ED) it is essential to recognise the variation of demand between ‘apparently’ similar departments. Four metrics in particular are edifying. 

a. Attendance rate 

b. Proportion of attendances over 75 years 

c. Deprivation profile of attendances 

d. Conversion rate of attendances to admissions 

Whilst these are not independent variables they are sufficiently discriminatory for our purpose. 

From our data we now know that the attendance rate varies from 16 to 42 per cent of the catchment population per year. This reflects both geographical challenges, e.g. distance travelled, and the availability (or otherwise) of other urgent care services, e.g. primary care and treatment centres. 

The proportion of attending patients aged over 75 varies from 16 to 43 per cent. For many, but not all, hospitals the need to reflect this case load by providing frailty and geriatric services is self-evident, yet the data show that provision of such services is patchy and not always obviously aligned with demand. 

Deprivation levels (measured as the proportion of the catchment population that falls within the most deprived 20 per cent nationally) vary from less than one per cent to almost 80 per cent. The implications of such variation for the nature of illness/injury seen, and for the linkages to social care/public health required, are equally self-evident. 

Finally, the proportion of attendances to an ED that require an admission varies from 13 to 44 per cent. This will require fundamentally different resource configurations both of estate and manpower to effectively manage such variation. 
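These four demand metrics are simple ratios, and a minimal sketch of how they could be derived from routine annual counts is shown below (the function, its field names and the figures are illustrative assumptions, not data from any real department):

```python
def ed_demand_profile(attendances, catchment_pop, over_75, most_deprived_pop, admissions):
    """Derive the four ED demand metrics (as percentages) from annual counts.

    All inputs are illustrative annual counts:
      attendances       - ED attendances in the year
      catchment_pop     - catchment population served
      over_75           - attendances by patients aged over 75
      most_deprived_pop - catchment residents in the most deprived national quintile
      admissions        - attendances converted to admissions
    """
    return {
        "attendance_rate_pct": 100 * attendances / catchment_pop,
        "over_75_pct": 100 * over_75 / attendances,
        "deprivation_pct": 100 * most_deprived_pop / catchment_pop,
        "conversion_rate_pct": 100 * admissions / attendances,
    }

# Hypothetical department: 90,000 attendances from a catchment of 300,000
profile = ed_demand_profile(
    attendances=90_000, catchment_pop=300_000,
    over_75=22_500, most_deprived_pop=60_000, admissions=27_000,
)
# attendance rate 30%, over-75s 25%, deprivation 20%, conversion rate 30%
```

Each value falls within the published English ranges quoted above, illustrating how a single department's profile can be placed against the national spread.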

Thus, by examining only four variables we are already much better informed of the range of challenges each ED must face. If we are to have a debate around ED performance, we must recognise the very different demands placed upon them even within a single country, region or even city. 

ED Capacity

Whereas ED demand is largely outside the control of the department or its associated hospital, ED capacity is most certainly not. It is this issue that demonstrates such a high degree of unwarranted variation, i.e. variation for which there can be no proportionate justification. 

Data show that in England, on average, 1,250 admitted patients must be accommodated for every emergency department majors/resuscitation bay. As such, each of these clinical spaces must manage between three and four admitted patients per day and, depending on the conversion rate, at least twice as many non-admitted patients. Simple arithmetic shows that in order to accommodate these patients the average ‘time in bay’ must be less than three hours. 
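The arithmetic behind that estimate can be sketched directly. The three-times multiplier below assumes the “at least twice as many non-admitted patients” figure quoted above; the result is an average, not a target:

```python
# Worked example of the 'time in bay' arithmetic for the English average.
ADMITTED_PER_BAY_PER_YEAR = 1250                       # admitted patients per majors/resus bay

admitted_per_day = ADMITTED_PER_BAY_PER_YEAR / 365     # roughly 3.4 admitted patients per bay per day

# Assume at least twice as many non-admitted patients also use the same bay,
# so total throughput is about three times the admitted figure.
total_per_day = admitted_per_day * 3                   # roughly 10.3 patients per bay per day

mean_time_in_bay = 24 / total_per_day                  # roughly 2.3 hours, i.e. under three
print(f"{admitted_per_day:.1f} admitted/day; mean time in bay {mean_time_in_bay:.1f} h")
```

With a higher conversion rate (fewer non-admitted patients per admission) the permissible mean time in bay rises, which is why the three-hour figure is an upper bound on the average rather than a universal constant.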

Remarkably, however, these numbers and calculations apply only to the statistical mean. Half of all departments will have to manage more patients per bay, and in some cases twice as many! 

Some departments are simply too physically small to be fit for purpose.

Flow, Exit Block and Implicit Harms

Flow is key to ED performance. The timely assessment, treatment and disposition of each patient is important to both the patient and the healthcare system. Delays and bottlenecks impair experience and outcomes, yet are seen all too often in many EDs in most healthcare systems. 

The Four Hour Standard was introduced in the UK in 2004 specifically to provide a key driver to timely flow in the ED. It has achieved notable success and without such a metric, performance and outcomes in the ED would be much worse. 

However, two valid criticisms of the Four Hour Standard are of genuine concern. Firstly, it applies to all patients, including those with minor illness and injury. This can paradoxically encourage systems to ensure large numbers of patients with minor conditions are managed quickly to offset delays for fewer, more seriously ill patients. Secondly, the standard is binary: anything under 240 minutes is a success and anything over is a failure. 

The first criticism is most easily dealt with by referencing the Admitted Patient Breach Rate (APBR) separately — this records the proportion of patients who require admission that breach the Four Hour Standard. As such it refocuses attention on the more seriously ill and injured. 

Avoiding the binary nature of the Four Hour Standard is also relatively straightforward using a derived metric — the Aggregated Patient Delay (APD). 

This metric summates the accumulated delay beyond four hours from time of arrival for all ED patients requiring admission. It is then expressed as ‘hours delay per hundred admitted patients’. A worked example of how this would apply to three different EDs highlights how this metric extends the clinical relevance of any ED time standard (Fig.1). 
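As a minimal sketch, both the APBR and the APD can be computed from a list of arrival-to-admission times for admitted patients. The function name and the sample figures are illustrative, not taken from the worked example in Fig.1:

```python
def apbr_and_apd(admit_times_hours):
    """Compute the Admitted Patient Breach Rate and Aggregated Patient Delay.

    admit_times_hours: arrival-to-admission times in hours, one per admitted patient.
    Returns (APBR as a percentage, APD as hours of delay per 100 admitted patients).
    """
    n = len(admit_times_hours)
    breaches = [t for t in admit_times_hours if t > 4]   # Four Hour Standard breaches
    apbr = 100 * len(breaches) / n                       # proportion of admissions breaching
    excess_delay = sum(t - 4 for t in breaches)          # accumulated delay beyond four hours
    apd = 100 * excess_delay / n                         # hours of delay per 100 admissions
    return apbr, apd

# Hypothetical ED: ten admitted patients, three breaching the standard
times = [2.5, 3.0, 3.9, 4.5, 6.0, 3.2, 8.0, 2.0, 3.5, 3.8]
print(apbr_and_apd(times))   # (30.0, 65.0)
```

Note that the APD, unlike the binary standard, keeps growing with every extra hour a breaching patient waits, so two departments with identical APBRs can still be distinguished by the severity of their delays.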

However, these new metrics are most powerful when plotted as a function of each other. Charting the Admitted Patient Breach Rate vs Aggregated Patient Delay for each ED in England produces a visual and contextual insight into the flow delays experienced by patients. 

Those patients attending hospitals whose performance is plotted within the top right quadrant are evidently at much greater risk of delay-associated morbidity and mortality than those in the bottom left quadrant. Importantly these metrics are not binary but continuous variables. They resonate with clinicians and managers because they reflect the ‘lived-experience’ of both staff and patients. Because there is no cut-off threshold every patient counts; as does every hour of delay. Every system can credibly aspire to improve both their relative and absolute position on the APD/APBR chart. 

Numerous studies from North America, Australasia and the UK have shown morbidity and mortality consequences of overcrowding in the ED and Exit Block related delays. Hitherto we have lacked a methodology to differentiate performance of various EDs and hospitals in a manner that was reliably proportionate to these harms. This methodology, focusing on patients requiring admission, directly addresses this deficit and importantly can also be applied to any ED in any country. 

The use of nationally and locally collected data can provide valuable insights into the demand and capacity profiles of an emergency department. Such data when systematically analysed using clinically referenced benchmarks can better inform redesign, reconfiguration and investment decisions.

Figure 1


Effective Treatment of Craniosynostosis and Deformational Plagiocephaly Improves with Early Diagnosis


Deformational plagiocephaly (DP) and craniosynostosis (CS) are the leading causes of abnormal head shape in infants worldwide. Accurate differentiation of these two entities is important because their treatment is entirely different. Prompt diagnosis is important because recommended treatments for both DP and CS are most effective when started early.

Medical teaching in the 20th century often suggested watchful waiting for abnormal infantile head shapes, with referral to a specialist only if the head shape did not improve during the first year. The advantages of minimally invasive surgical techniques to treat CS encourage providers to change that practice pattern, because these techniques are best utilised in young infants, typically between two and six months of age.  

Here we describe typical features of DP and CS; we emphasise ways to distinguish them and outline the typical treatment options. 

Common Head Shape Anomalies in Infants 

Normal infantile head shape can vary widely and is largely influenced by familial tendencies and genetic heritage. In this paper, we describe the most common head shape abnormalities that fall outside cultural norms: deformational plagiocephaly (DP) and craniosynostosis (CS). 

DP is now the most common infantile head abnormality. The 1992 “back to sleep” programme resulted in a 40 per cent reduction in sudden infant death syndrome but there has been a concomitant dramatic increase in DP. The prevalence of DP was estimated as low as 5 per cent prior to 1992, but recent studies estimate a 21 per cent to 46 per cent prevalence in infants less than one year of age, depending on the criteria used to define DP. 

CS, which occurs in approximately one in every 2,000 to 2,500 live births, is much less common than DP. Differentiating DP from CS can be difficult for healthcare providers who don’t specialise in head shape abnormalities, especially when the deluge of patients with DP overwhelms a healthcare system’s ability to adequately assess abnormal head shapes. 

Differentiating DP from CS

DP, also referred to as posterior positional plagiocephaly, positional moulding, occipital plagiocephaly or plagiocephaly without synostosis, is usually not present at birth, is often associated with torticollis, and results in a flat posterior skull, with associated anterior displacement of the ipsilateral ear and forehead. CS is typically present at birth, is not usually associated with torticollis, and results in a widely variable but predictable pattern of head shape abnormalities. 

Skull x-rays and/or CT scans were previously relied upon to differentiate DP from CS, but the sheer volume of DP patients and concern about radiation exposure in infants and children make these imaging studies impractical CS screening tools for all patients with abnormal head shapes. 

Fortunately, patients with CS have a typical appearance, based on the observation of Virchow in the mid-19th century, that skull growth is perpendicular to each cranial suture. Early cranial suture closure therefore leads to predictable head shapes. Multiple suture craniosynostosis also occurs, in the presence or absence of an associated syndrome, with resulting head and face shapes that are characteristic and readily differentiated from DP. 

Early closure of midline cranial sutures is differentiated from DP because the resulting head shape is not posteriorly asymmetric, a hallmark of DP. Sagittal craniosynostosis, by far the most common form of CS, results in a long, narrow head with a shortened biparietal diameter (dolichocephaly) as a result of early closure of the sagittal suture, with compensatory growth of the remaining sutures resulting in frontal and occipital bossing. Early closure of the metopic suture leads to a triangular head shape (trigonocephaly) and close-set eyes (hypotelorism), features also not seen in DP. 

CS that results from early closure of the coronal and/or lambdoid sutures is more difficult to differentiate from DP because unilateral closure of one of these sutures leads to an oblong head shape, which can be mistaken for DP. Coronal CS is more easily differentiated from DP because the anterior skull is severely affected, the posterior skull is often relatively spared, and there are associated severe asymmetries of the orbital rim and the nose (tip of the nose deviates away from the affected coronal suture), all features that are quite different from the mild forehead asymmetry typically seen in DP (Figure 1).

Figure 1

Figure 2

Differentiating lambdoid CS from DP is more difficult because early closure of a single lambdoid suture results in flattening of the back of the skull, which can appear similar to DP. However, important morphologic differences can help the clinician differentiate the two entities in most circumstances. Early closure of the lambdoid suture leads to posterior and inferior displacement of the ipsilateral ear, the opposite of the situation in DP. As a result, the head shape, when viewed from the top, is similar to a parallelogram in DP and trapezoidal in lambdoid CS (Figure 2). In addition, the head of a child with lambdoid CS often looks “windswept” when viewed from the front, meaning that the contralateral parietal bone tends to be higher than the affected side.

Lambdoid craniosynostosis is quite rare, comprising less than 2 per cent of all cases of CS, resulting in only a small proportion of infants with posterior skull flattening being afflicted with lambdoid CS. There is a common misconception that early fontanel closure should be used as an indicator of CS.

The anterior fontanel does close early in some forms of CS, especially those with multiple suture CS or a genetic syndrome, but the fontanel can close at the typical period of development in infants with CS, even those involving the sagittal suture. Similarly, head circumference measurements are not typically reduced in most patients with simple, single suture CS, which makes up the majority of cases. In fact, in patients with isolated sagittal CS, which accounts for over half of all patients with CS, the head circumference tends to be larger than average because the skull takes on a long, narrow appearance in order to accommodate the normal underlying brain growth. Ridging along the metopic suture, without the associated trigonocephaly or hypotelorism of metopic CS, is often a normal variation that does not require treatment.

Evaluation and Treatment of Deformational Plagiocephaly

Evaluation of DP

The clinical presentation and appearance of most patients with DP is fairly uniform. The typical child develops unilateral posterior skull flattening within the first two months of life, which progressively worsens and is often associated with anterior displacement of the ear, bossing of the ipsilateral frontal bone, and sometimes facial asymmetry. When examined carefully, many young infants with DP also have associated “wry neck”, with limited range of neck motion or torticollis.

Most infants with DP have otherwise normal examinations, but a careful evaluation to search for other associated conditions should be performed.

Treatment of DP

If recognised early, DP can be effectively treated by keeping the baby’s head from resting on the flattened area as much as possible, by promoting supervised tummy time, and by encouraging neck stretching exercises. With these measures, once babies are able to sit, crawl and walk on their own, their DP has usually improved significantly, with continued rounding of the head occurring over the ensuing years, resulting in most children having minimal residual deformity by school age.

Skull moulding helmets used to reshape the infant’s head have been used for decades, but their widespread use has been questioned because DP is a condition that will improve with the aforementioned techniques in the majority of patients. Cranial orthotic helmets can certainly remould an infant’s head, a concept that was even appreciated in ancient civilisations, but the necessity of using a helmet for a condition that tends to resolve has been carefully scrutinised recently. A recent prospective trial in the Netherlands, where 84 babies with DP were randomly assigned to receive helmets, showed that there was no improvement in head shape at two years in babies who were treated with helmet therapy compared to those who only had the typical repositioning and exercise treatments that are commonly employed. Like all clinical studies, this study has limitations, including the exclusion of babies that have the most severe forms of DP. Until further confirmatory research is completed, we generally recommend helmet therapy only for patients with severe forms of DP or those who have not responded to typical treatment options. Helmet therapy usually lasts three to six months and is best performed between approximately four and 12 months, when the skull has greater malleability and growth potential.

Evaluation and Treatment of Craniosynostosis

Evaluation of CS

CS involving multiple sutures or those forms associated with syndromes result in characteristic patterns of skull and facial deformity that are easily recognised. Patients with these uncommon diagnoses are often recognised shortly after birth and are referred to neurosurgeons, plastic surgeons and other craniofacial surgeons early in life.

Diagnosing CS in the much more common single-suture, non-syndromic baby is more nuanced, but a clear understanding of the patterns of presentation allows the primary care provider to recognise most patients with CS. In addition to pattern recognition, palpation of the suture in question can aid in the diagnosis: a prematurely closed cranial suture is immobile and a ridge of bone is often palpable.

If CS is suspected, referral to a neurosurgeon, plastic surgeon or some other craniofacial specialist is recommended before x-rays are performed when such specialists are available, in order to minimise unnecessary radiation exposure in children in whom the specialist can rule out CS by examination. When such specialists are not readily available, a simple skull x-ray series usually secures the diagnosis.

Other radiology studies that don’t expose children to radiation have been explored, but none is widely used as a screening tool at this time. MRI can confirm CS in specialised centres, but it is not recommended for screening. Cranial ultrasound has shown more promise, but its usefulness is highly dependent on the experience of individual technologists and radiologists, and it too is not widely used as a screening tool.

Treatment of CS

Surgery remains the treatment of choice for CS, but the surgical options have changed and improved in the past 20 years. CS surgery during the second half of the 20th century consisted primarily of extensive open procedures to remove portions of the skull, orbits or face and to reconstruct these structures in an aesthetically pleasing way, while expanding the skull to allow greater intracranial volume. These open operations are generally performed on children between six and 18 months of age; they last many hours, often require a transfusion, and require multiple days of hospitalisation. Results of these procedures are often good, and they remain the mainstay of CS surgical treatment in patients who are not diagnosed early in life.

More recently, minimally invasive techniques have become the treatment of choice for infants with CS in many specialised craniofacial centres worldwide. These techniques involve minimal bone removal in a young infant, typically less than six months of age, with subsequent gradual skull and facial remoulding using helmets or internal springs. Compared to the traditional operations of the late 20th century, these minimally invasive procedures are performed through much smaller incisions, result in minimal blood loss, usually don’t require a blood transfusion and typically involve only an overnight hospital stay. These advantages are offset by the lack of an immediate improvement in head shape, which often takes many months while the orthotic helmet or spring device slowly changes the child’s head and facial shape as the baby grows. 

The most commonly performed minimally invasive technique for CS involves the endoscope assisted removal of the affected suture followed by cranial moulding with an orthotic helmet, which is similar to helmets utilised in cases of severe DP. Since its introduction approximately 20 years ago, this technique has been shown to be very effective in treating all types of single suture CS and its application to multiple suture or syndromic cases has also been explored.

Present and future research will further characterise outcome differences for various surgical techniques. At this time, many neurosurgical CS specialists consider minimally invasive techniques the treatment of choice for the majority of patients with CS: those with single-suture CS who are diagnosed before four to six months of age. Minimally invasive techniques have also shown promising results when applied to patients with syndromic or multiple suture CS, but craniofacial surgical teams still treat these patients, as well as those diagnosed later in life, with traditional open operations.

References available on request.

Innovations in Surgical Management of Gastroesophageal Reflux Disease


Gastroesophageal reflux symptoms are common in infancy, childhood, and adolescence. In one study, 2-7 per cent of parents of 3 to 9-year-olds reported that their child had experienced heartburn, epigastric pain or regurgitation within the previous week, whereas 5-8 per cent of adolescents reported similar symptoms.

Most children respond well to changes in their diet, as well as medical management for these symptoms. Gastroesophageal reflux disease (GERD) is a more serious condition and has an incidence of 1.5 cases per 1,000 person-years in infants, declining until 12 years of age, and then peaking at 16 to 17 years of age (2.26 cases in girls and 1.75 cases in boys per 1,000 person-years in 16- to 17-year-olds). Overall, the childhood prevalence of GERD is estimated at 1.25 to 3.3 per cent, compared with 5 per cent among adults.

GERD can affect a child’s growth and development, and can lead to more serious complications, such as vomiting and damage to the oesophagus. At Children’s Mercy Kansas City, the Division of Pediatric Gastroenterology is performing cutting-edge research into the pharmacological management of GERD, but some children do not respond well to medical treatment. 

In refractory cases, surgery may be the best treatment option. With nearly 20 years of experience in the use of laparoscopic fundoplication for the management of gastroesophageal reflux, the general surgeons at Children’s Mercy have published a number of articles on this technique. 

Specifically, the Nissen fundoplication is our preferred operative approach to treating GERD. This procedure was named after Dr. Rudolf Nissen, the surgeon who developed it in the 1950s. Since that time, the surgery has evolved from an open procedure that required large incisions to a laparoscopic, or minimally invasive, procedure. 

One of the areas of focus among the general surgeons at Children’s Mercy is how to prevent transmigration of the fundoplication wrap following performance of a Nissen fundoplication. We have studied this problem carefully and scientifically, initially through a retrospective study, which was followed by two prospective clinical trials evaluating differences in the operative technique. 

In the last prospective clinical trial, which was published in the January 2018 issue of the Journal of Pediatric Surgery (53:25-29, 2018), the Children’s Mercy surgeons found that limited dissection of the oesophagocrural junction and limited mobilisation of the oesophagus resulted in none of the 120 patients enrolled in the study developing transmigration of the fundoplication wrap in the postoperative period with a median follow-up of four years. 

The goal for this, and all, research performed by the Department of General Surgery at Children’s Mercy is to determine the effectiveness and practical application of this specific surgical technique to improve outcomes for patients. The study concluded that when minimal phrenoesophageal dissection is performed, oesophagocrural (EC) sutures offer no advantage and increase operating time. Thus, our surgeons confirmed that the phrenoesophageal membrane should be kept intact, which results in minimal dissection around the gastroesophageal junction.

Paving The Way

The paediatric general surgeons at Children’s Mercy Kansas City have been early adopters of minimally invasive technology and techniques. In 1999, Children’s Mercy established the Center for Minimally Invasive Surgery, designed to make state-of-the-art minimally invasive surgeries available to paediatric patients across the globe. 

Center for Prospective Clinical Trials Investigates Paediatric Surgical Questions 

The Center for Prospective Clinical Trials within the Department of Surgery at Children’s Mercy Kansas City was established in 2006 to perform randomised studies investigating variables that do not allow the patient’s course to vary from normal daily practice. The centre also performs prospective observational studies. 

All of the studies performed in the centre follow protocolised care based on evidence and outcomes, which are institution-specific. The hospital’s randomised data reflect exactly what the provider can expect in terms of outcomes for patients they refer for surgery at Children’s Mercy. The centre’s goal is to address the many questions common to paediatric surgery. 

Paediatric Surgical Care

Children’s Mercy is one of only 10 centres in the U.S. to be verified as a Level 1 Children’s Surgery Center, the highest rating possible from the American College of Surgeons, which has set the highest standard of care in the U.S. By joining this group of Level 1 Children’s Surgery Centers, the hospital is contributing to the innovation of paediatric surgery, which impacts the lives of children around the world. 

The review process to become verified is rigorous and stringent, including a thorough site visit by an ACS team of surveyors who review the hospital’s structure, process, and clinical outcomes. The team, which consists of experienced paediatric surgeons, anaesthesiologists and nurses, visits all areas of the hospital to make sure the people, resources, culture of safety, and administrative support ensure patients receive the highest level of care. 

An important component of providing this level of surgical care is expertise. At Children’s Mercy, only experienced paediatric anaesthesiologists care for each child. This ensures the patient has a safe and smooth anaesthetic experience. 

In fact, the paediatric anaesthesiologists at Children’s Mercy administer anaesthesia for more than 27,000 children each year — that’s 74 each day. Most adult hospitals treat only about 200 children each year, less than one a day. 

Surgical Expertise

At Children’s Mercy, 20,144 surgeries were performed in fiscal year 2018. This team’s surgical expertise extends to a number of conditions commonly seen in the paediatric population. These include: 

  • Center for Pectus Excavatum and Pectus Carinatum, which offers minimally invasive surgery for pectus excavatum, and the largest experience in the U.S. with the dynamic compression device bracing system utilised for pectus carinatum. 
  • Same-day surgery for non-perforated appendectomy. 
  • Laparoscopic inguinal hernia repairs performed on an outpatient basis at Children’s Mercy Hospital Kansas or at the Children’s Mercy Kansas City Adele Hall campus.  

Better Outcomes for Colon and Appendiceal Cancers


When patients with colorectal and appendiceal cancers develop metastatic disease to the peritoneum (the lining of the abdominal cavity), the treatment options are limited, and the survival rate is poor. Data suggest that up to 25 per cent of patients with colorectal cancer (CRC) can develop peritoneal disease.

Depending on the type of colorectal or appendiceal cancer, systemic chemotherapy is usually one of the treatment options. Another option, which can be offered to select patients, is cytoreductive surgery either with or without heated intraperitoneal chemotherapy (HIPEC). The goal of this treatment is to improve overall and disease-free survival without detracting from quality of life.  

Cytoreductive surgery consists of removing all of the visible disease in the peritoneal cavity, and depending on the location of the disease, could include bowel resection, liver resection, or removal of other organs including the spleen or gallbladder. It is very important to achieve complete cytoreduction, leaving no disease behind. Completeness of cytoreduction is one of the main factors impacting the patient’s prognosis after surgery. 

Once the resections are complete, and all of the disease visible to the eye is removed, HIPEC treatment is performed using a chemotherapeutic agent that has been heated to 42°C, which is infused into the abdomen via catheters. The solution is constantly circulated around the abdominal cavity to ensure all surfaces are exposed for 90 minutes. Both the chemotherapy agent and heat are cytotoxic to any cells that might have been left behind. 

There are multiple factors that predict whether cytoreductive surgery could be beneficial. The main factor is the disease histology/behaviour. Patients with aggressive tumours that display poor differentiation and/or signet cells are less likely to benefit. For those patients who do undergo surgery, as previously stated, completeness of cytoreduction is key and can be judged by the completeness of cytoreduction score. Patients with no or minimal visible disease left behind (score of 0 or 1) have improved survival. 
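As a concrete illustration of the completeness of cytoreduction score mentioned above, the sketch below maps the size of the largest residual nodule to a CC grade. The size thresholds are the commonly cited Sugarbaker definitions, which are an assumption of this sketch rather than something stated in the article:

```python
def cc_score(largest_residual_nodule_mm):
    """Completeness of cytoreduction (CC) score after surgery.

    Thresholds follow the commonly cited Sugarbaker definitions
    (an assumption of this sketch):
      CC-0: no visible residual disease
      CC-1: residual nodules smaller than 2.5 mm
      CC-2: residual nodules from 2.5 mm up to 2.5 cm
      CC-3: residual nodules larger than 2.5 cm, or confluent disease
    """
    if largest_residual_nodule_mm == 0:
        return 0  # complete cytoreduction
    if largest_residual_nodule_mm < 2.5:
        return 1
    if largest_residual_nodule_mm <= 25:
        return 2
    return 3
```

Under these definitions, the "score of 0 or 1" associated with improved survival corresponds to no residual disease or nodules under 2.5 mm.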

Frequently, laparoscopic exploration is done before the cytoreduction to assess for resectability. This allows assessment using the peritoneal carcinomatosis index (PCI), which is calculated based on the size and distribution of the tumours in the abdominal cavity. A high PCI score carries a worse prognosis and predicts lower likelihood of complete cytoreduction. 
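The PCI calculation described above can likewise be sketched in a few lines. The conventional scheme, assumed here, divides the abdomen and pelvis into 13 regions (nine abdominopelvic regions plus four small-bowel segments), assigns each a lesion-size score from LS0 to LS3, and sums them:

```python
def peritoneal_cancer_index(lesion_size_scores):
    """Peritoneal carcinomatosis index (PCI).

    Assumes the conventional scheme: 13 regions, each scored
    LS0 (no tumour) to LS3 (tumour > 5 cm or confluent), so the
    total ranges from 0 to 39.
    """
    if len(lesion_size_scores) != 13:
        raise ValueError("PCI requires one LS score for each of the 13 regions")
    if any(score not in (0, 1, 2, 3) for score in lesion_size_scores):
        raise ValueError("each lesion-size score must be 0, 1, 2, or 3")
    return sum(lesion_size_scores)
```

A completely involved abdomen scores the maximum of 39; the higher the total, the worse the prognosis and the lower the likelihood of complete cytoreduction.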

When determining which patients are candidates for this surgery, the patient’s performance status should not be underestimated. It has been shown repeatedly that patients with an Eastern Cooperative Oncology Group (ECOG) performance status under 2 have improved survival after cytoreduction/HIPEC. Preoperative nutrition status is of paramount importance as well, as it correlates with postoperative complications. If the patient is malnourished before surgery, preoperative total parenteral nutrition can help correct this. 

Multidisciplinary tumour board discussions and recommendations are also extremely important when managing patients with peritoneal metastasis. Review by an expert pathologist is needed to confirm the histology both in appendiceal and colorectal cancer. The disease is often very heterogeneous with no standard algorithms for care. Shared decision-making should be emphasised, and careful counselling of the patient is needed. 

At the Program in Peritoneal Malignancy at Brigham and Women’s Hospital, every patient is reviewed by a dedicated, multidisciplinary tumour board and the underlying pathology is reviewed by an expert gastrointestinal pathologist. Like many high-volume treatment centres that carry better outcomes, the BWH Program in Peritoneal Malignancy utilises enhanced recovery pathways to minimise the risk of post-surgical complications, improving both survival and quality of life for patients who undergo HIPEC, which is bringing many patients hope for a better prognosis.

Surgery in a Pill: Potential New Treatment for Type 2 Diabetes



The obesity epidemic is driving a parallel epidemic in type 2 diabetes that now affects more than 400 million people worldwide. The prevalence of diabetes is particularly high in the Middle East, where close to 20 per cent of the population has type 2 diabetes. Multiple randomised clinical studies have now shown that bariatric operations, namely sleeve gastrectomy and gastric bypass surgery, are the best available treatment for obese type 2 diabetics, with many patients experiencing diabetes remission and coming off their diabetic medications.

This has created significant interest in the field of metabolic surgery, where surgeries such as gastric bypass (GBP) are performed with a primary focus of helping patients improve their type 2 diabetes.

Despite the clear benefits of surgery — improvement in diabetes, weight loss, reduction in cancer risk, and extended life expectancy — uptake of surgery amongst patients who qualify remains low.

Furthermore, many diabetic patients do not fulfil the current surgical criteria and therefore continue to struggle with their diabetes, a chronic disabling disease that is one of the most common causes of blindness, renal failure, and limb amputation worldwide. There has therefore been significant interest in trying to understand the mechanisms of diabetes resolution after GBP, with the goal of developing less invasive alternatives.

During gastric bypass, the surgeon creates a small pouch at the top of the stomach to reduce the capacity for food intake. The small intestine is then reconstructed, and the new stomach pouch is connected to the lower section of the small intestine. During digestion, food now bypasses most of the stomach and the first part of the intestine, modulating the amount of nutrients, including glucose, and calories that are absorbed. Interestingly, in most patients, the return to normal insulin levels occurs just a few days after surgery, long before significant weight loss takes place.

A team at Brigham and Women’s Hospital (BWH) in Boston, Massachusetts, U.S., has been studying the underlying mechanisms responsible for this rapid improvement in diabetes, with the goal of developing novel drugs, devices, and less invasive surgical procedures that can replicate the metabolic benefits of surgery.

The team, led by the author, in collaboration with Yuhan Lee, PhD, a materials scientist at BWH, and Jeffrey Karp, PhD, a Professor of Medicine at Harvard Medical School and a biomedical engineer and researcher at BWH, recently presented the results of work they’ve done on developing a sticky, gut-coating powder that provides a barrier on the first part of the intestine and mimics the effect of gastric bypass surgery in a non-invasive way. The team hopes that the new compound named LuCI (for Luminal Coating of the Intestine), by delivering medication directly to the upper GI tract, may one day be offered in pill form as an alternative option to surgery.

LuCI is able to coat healthy tissue and form a transient physical barrier on the luminal, or inside, surface of the intestine so that nutrients, including sugar, are not absorbed. Bypassing the upper part of the gastrointestinal (GI) tract appears to be integral to the anti-diabetic effects of gastric bypass surgery. By emulating a critical aspect of bariatric surgery in a non-invasive way, the research team believes that “Surgery in a Pill” could one day be an alternative to an invasive procedure. 

As reported in a paper published in the June 2018 issue of the journal Nature Materials, LuCI significantly reduced glucose levels in animals after a meal. One hour after ingesting LuCI, the increase in glucose was lowered by 47 per cent, and this effect completely dissipated within a few hours. Histological analyses showed that the coating had no adverse effect on the lining of the small intestine, and the treatment did not cause the animals to develop diarrhoea or lose weight.

Additional Therapeutic Value

LuCI has also shown promise as a vehicle for site-specific drug delivery to the GI tract. For example, so-called protein drugs are important in the treatment of patients with inflammatory bowel disease, which affects the lining of the lower intestinal tract, or colon, but delivery is challenging. Oral intestinal-targeted protein drugs need protection from the gastric acid and enzymes in the upper GI tract that can degrade these medications. As part of the team’s preclinical animal studies, they tested the ability of LuCI to provide a platform for protein delivery. Using a simple protein, they demonstrated LuCI’s promise in performing this function. These results were published in the June 2018 Nature Materials article previously referenced. 

Given the growing diabetes epidemic, there is an urgent need for safe, non-invasive, and effective treatment. Through bioengineering, the team has replicated the anti-diabetic effects seen in patients who undergo gastric bypass surgery, developing a novel approach that can potentially extend this benefit to a much wider patient population. Dozens of medications are available to treat diabetes, but many patients are unable to achieve appropriate blood sugar control while on them. The results with LuCI have been very encouraging. LuCI may prove to be a tremendous asset in treating and improving quality of life for many diabetic patients.

References available on request.

Progress in the Management of Acute Cholecystitis and the Difficult Cholecystectomy



The management of acute cholecystitis (AC) continues to evolve. When possible, laparoscopic cholecystectomy (LC), introduced nearly three decades ago, is the treatment of choice. Early in the disease course, LC is a relatively simple procedure. However, as the disease evolves, the operation becomes increasingly difficult, can require truly advanced laparoscopic skills, and may be most appropriately performed at specialised centres.

Today, nearly a quarter million cholecystectomies are performed for acute cholecystitis each year in the U.S. About 85 per cent are started laparoscopically, although around 10 per cent of these are converted to open cholecystectomy because of technical difficulties. Open cholecystectomy is associated with a 1.3-fold increase in operative morbidity.

Although we continue to make progress in managing this relatively common disease, several important questions remain unanswered. A recent consensus conference on preventing bile duct injury (BDI), organised under the auspices of the Society of American Gastrointestinal Endoscopic Surgeons (SAGES), and endorsed by the Society for Surgery of the Alimentary Tract and the American and International Hepato-Pancreato-Biliary Associations, assessed our progress and developed, when possible, consensus guidelines or recommended further study of unresolved issues. 

Although these recommendations have not yet been finalised, the areas of discussion are informative.

Establishing the Diagnosis and Grading the Severity of Cholecystitis

Establishing the diagnosis and the severity of the cholecystitis was one area of discussion. In particular, we still have no objective diagnostic or grading system that, at least in the U.S., accurately establishes the diagnosis or predicts the difficult cholecystectomy for which alternative treatment options should be considered.

Ideally, based on the characteristics of the disease at presentation, management decisions would be tailored to the needs of the patient. A number of algorithms have been devised to accomplish this; perhaps the most widely known are the Tokyo Guidelines, first developed in 2007 and subsequently revised in 2013 and 2018. These provide criteria for diagnosis (No diagnosis, Suspected, Definitive) and establish the AC severity (Grade I mild, Grade II moderate, and Grade III severe). Primary drivers of higher grade include the presence of organ dysfunction, increased local inflammation, elevated white blood cell count, longer duration of symptoms, and the presence of a palpable, tender gallbladder. The 2018 version incorporates the Charlson Comorbidity Index and American Society of Anesthesiologists Physical Status Classification into the grading to distinguish between candidates for immediate LC, conservative management, and percutaneous drainage.
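The severity logic described above can be reduced to a simplified sketch: organ dysfunction drives Grade III, markers of severe local disease drive Grade II, and the absence of both yields Grade I. The specific cut-offs used here (WBC above 18,000/mm³, symptoms beyond 72 hours) follow the published guidelines as commonly summarised, but this is an illustration only, not a clinical tool:

```python
def tokyo_grade(organ_dysfunction, wbc_per_mm3, symptom_days,
                palpable_tender_mass, marked_local_inflammation):
    """Simplified severity grading in the spirit of the Tokyo Guidelines.

    Grade III (severe): any associated organ dysfunction.
    Grade II (moderate): any marker of severe local disease, e.g.
      WBC > 18,000/mm3, symptoms > 72 hours, a palpable tender mass,
      or marked local inflammation.
    Grade I (mild): neither of the above.
    Illustrative only; the full criteria are more detailed.
    """
    if organ_dysfunction:
        return "III"
    if (wbc_per_mm3 > 18000 or symptom_days > 3
            or palpable_tender_mass or marked_local_inflammation):
        return "II"
    return "I"
```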

While these criteria have been validated in Japan and provide a framework to emulate, they have not proved as useful in the U.S. For example, one recent study from the University of Arizona analysed a three-year prospective database of 857 patients with suspected AC. Comparing the Tokyo Guideline criteria with the gallbladder pathology, they found that 45 per cent of the patients with severe local inflammation, including gangrenous cholecystitis, did not meet the Tokyo criteria for diagnosis. The overall sensitivity of the Tokyo Guidelines for cholecystitis was only 53.4 per cent. These results suggest that we need a set of diagnostic and grading criteria that is validated for the specific population being treated. 

Operation versus Conservative Therapy 

Another area of discussion at the Consensus Conference was the role of conservative therapy with antibiotics. With time from the onset of illness, the inflammatory process increases, creating an adhesive mass that can present a formidable challenge to approach with the laparoscope and even using open techniques. There are data to suggest that, with time after the onset of illness, the morbidity, mortality, and cost of LC all increase. 

It has been standard to operate early if the patient presents within 72 hours of symptom onset; if the disease has progressed longer, the patient is treated with intravenous antibiotics to allow the inflammatory process to subside, and surgery is delayed for at least four to six weeks. Multiple randomised trials and meta-analyses comparing these approaches have suggested that early surgery is associated with shorter overall hospitalisation and lower hospital cost, and that nearly 20 per cent of patients who are managed conservatively develop persistent or recurrent symptoms prior to surgical intervention. There do not appear to be differences between the approaches in conversion rate to open cholecystectomy or in morbidity and mortality, including BDI. At least one randomised trial examined outcomes for patients with symptom onset of greater than 72 hours and found that, compared with conservative therapy, LC was associated with significantly lower overall morbidity, hospital stay, and cost, with no significant differences in conversion rates or in the incidence of BDI. 

This issue deserves further study and perhaps a rethinking of the traditional 72-hour cut-off for proceeding with LC. 

Role of Percutaneous Cholecystostomy 

The appropriate use of percutaneous cholecystostomy (PC) is another area that needs better definition. In patients with severe disease, PC resolves the acute symptoms and inflammatory signs in most cases, although gangrenous cholecystitis is a contraindication and the drain must usually be left in place until cholecystectomy. LC after percutaneous cholecystostomy is still a more difficult procedure with high conversion rates. The general consensus has been that PC is best reserved for high-risk elderly and critically ill patients, in whom PC has been suggested to reduce the morbidity and mortality of LC. 

However, recent data have raised questions about the wisdom of such an approach. For example, a review of Medicare data from 1996-2010 in patients with cholecystitis and organ failure found that patients who underwent PC were less likely to ever undergo cholecystectomy and had higher readmission and mortality rates than propensity matched patients undergoing LC. Likewise, a randomised trial of LC versus PC from the Netherlands in patients with APACHE II scores of greater than 7 was abandoned after patients undergoing PC were found to have higher morbidity, need for reintervention, and recurrence rates. 

Again, the role of PC deserves further study and perhaps should only be reserved for patients who are not candidates for LC. 

The Difficult Cholecystectomy

The major focus of the Consensus Conference was the difficult cholecystectomy. Operation in this group of patients is associated with the need for conversion to open operation and the highest risk of BDI. For example, a recent prospective multicentre study from Belgium found that 11.4 per cent of patients required conversion and, in this group, there were biliary complications in 13.7 per cent.

The known risk factors that portend a complicated operation include those criteria defined by Grade II of the Tokyo Guidelines: symptoms of greater than 72 to 96 hours, a WBC greater than 18,000/mm³, and/or a palpable or gangrenous gallbladder. However, it has also been shown that severe pathology may be encountered in the absence of such findings. 

The Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) has developed a six step Safe Cholecystectomy Program that includes: 1) achieving the critical view of safety (CVS), 2) recognising aberrant anatomy, 3) performing an intra-operative time out before clipping or cutting ductal structures, 4) liberal use of intraoperative cholangiogram (IOC), 5) having bail-out options, and 6) asking for help in difficult cases. 

Most critical is to achieve the CVS, which is defined by three criteria: 1) the hepatocystic triangle is cleared of fat and fibrous tissue, 2) the lower one third of the gallbladder is separated from the liver to expose the cystic plate, and 3) two and only two structures should be seen entering the gallbladder. When the CVS is not achieved, there is a danger of BDI; however, there seems to be some misunderstanding of the criteria among surgeons. For example, in one recent study from the Netherlands, when surgical videos of cases with complications were reviewed in detail, although operative notes indicated that the CVS was achieved in 80 per cent, video review suggested that it was achieved in only 10.8 per cent. 

If the inflammation is so significant that further dissection is deemed inappropriate, there are other options. IOC can be pursued to delineate the biliary anatomy; the use of infrared fluorescence is being evaluated in this setting. If this does not sufficiently define the anatomy, conversion to an open procedure can be pursued. Thoughtful consideration is needed to judge if the exposure of an open approach will significantly facilitate the dissection. 

All surgeons should be familiar with bailout options when the CVS cannot be achieved. Although removing the gallbladder from the top down has been employed, this may also be associated with significant risk. If only the dome of the gallbladder can be safely exposed, operative cholecystostomy may be pursued. If the hepatocystic triangle cannot be safely dissected, the surgeon can pursue a subtotal fenestrating cholecystectomy, leaving the posterior wall on the liver. At least 2 cm of the gallbladder neck is preserved, and any impacted stones can be removed. The neck can be either left open (fenestrating) or oversewn (reconstituting). A drain is left in the gallbladder fossa. 

Conclusions 

The surgical treatment of AC is complex and nuanced. Astute clinical judgment is required to make subtle decisions regarding both the type of surgical intervention and the timing of that intervention.

The surgeon who commits to a LC should be aware of the various techniques and options for abandoning the original procedure. SAGES has recently developed a set of Safe Cholecystectomy Web-Based Educational Modules that should become a resource not only for surgeons in training but for all practitioners caring for patients with AC. The final recommendations from the Consensus Conference should be a significant addition to our armamentarium.

References available on request.

Surgical Ethics in the Era of Technical Advancements



Why do surgical ethics matter? The short answer is that surgery is not just a purely technical discipline. Technical mastery is absolutely necessary, but it is not sufficient in and of itself to bring complete benefit or comfort to our patients. Surgical Ethics (SE) is part of the core of surgical professionalism and as such significantly impacts the everyday life of surgeons and the care they provide to their patients. As Charles Bosk noted in Forgive and Remember (The University of Chicago Press, 1979), “when the patient of an internist dies, the natural question his colleagues ask is ‘What happened?’, while when the patient of a surgeon dies his colleagues ask, ‘What did you do?’ By the nature of their craft and beliefs about it, the surgeon is more accountable than other physicians, and they also have much more to account for.”

The central question for surgeons has changed. It is no longer just “What can we do for this patient?”; today’s question is, “What should we do for this patient?” And this question is the challenge of SE.

The encounter between a patient and their surgeon is unique for several reasons. The surgeon inflicts pain upon a patient for the patient’s own good. An operative intervention is irreducibly personal, such that the decisions about and performance of operations are inseparable from the idiosyncrasies of the individual surgeon. Furthermore, there is a chasm of knowledge between the patient and surgeon that is difficult to cross. Hence, training in the discipline of surgery includes the inculcation of certain virtues and practices to safeguard against abuses of this relationship and to make sure that the best interests of the patient are prioritised. The stories in this issue are evidence that in contemporary practice this is not quite enough, as surgeons reflect on instances they felt were ethically challenging. Common themes include the difficulty in communicating surgical uncertainty, patient–surgeon relationships, ethical issues in surgical training, and the impact of the technological imperative on caring for dying patients.

Ethical challenges in surgery include crafting an adequate informed consent process for patients who are often distressed and anxious about making decisions with serious health and personal consequences, working with family members serving as surrogate decision makers for patients who lack the capacity to take part in the informed consent process, and responding to requests from patients or family members for futile surgical intervention.

Additionally, the work of surgeons generally encompasses such things as: the provision of palliative surgical management for patients in the end-stages of terminal illnesses; protecting patients from incompetent surgeons and other healthcare professionals; recruiting one’s own patients for surgical clinical trials; obtaining informed consent for the involvement of trainees in surgical procedures; responsibly managing conflicts of interest and conflicts of commitment; engaging in serendipitous and planned innovation; running a practice on a sound business basis; dealing honestly with private and public payers; protecting the integrity of clinical judgment and practice from intrusions by managers of healthcare organisations and payers; and helping to shape healthcare policy that is evidence-based and responsive to the increasing costs of surgical care. The ethical issues that arise for surgeons are, therefore, many and varied.

Tools of Ethical Analysis

Surgical ethics uses the tools of ethical analysis and argument to provide practical guidance to surgeons. Ethical analysis requires one to become clear about clinically relevant and applicable concepts and use them with consistent meaning. Ethical argument requires one to use clearly formulated ideas to formulate reasons that together support a conclusion that should then guide clinical judgment, decision making, and behaviour. The discipline and clinical value of ethical reasoning in surgery, as in other clinical specialties, comes from following arguments where they take one. Submitting to the discipline of ethical reasoning gives one’s clinical ethical judgments intellectual and moral authority that they lack when they emanate from mere opinion, “gut” feeling, or the arbitrary exercise of power by those with institutional authority to wield power.

The history of medical ethics provides clinically relevant and applicable ideas and reveals how surgeons have contributed to the repository of our concepts of clinical ethics. British surgeons, for example, pioneered consent processes as early as the 17th century, when they fashioned contracts with patients for operations. Subsequently, 19th century surgeons in the U.S. transformed this rudimentary notion of informed consent into the more clinically sophisticated version with which surgeons are now familiar. From a historical perspective, the commonly held view that common law invented informed consent in the early 20th century and imposed it on reluctant surgeons becomes suspect. Perhaps common law simply codified ethical best practices that had already been brought to considerable ethical and clinical sophistication by practising surgeons. Recent astonishing advances in medical technology have opened up new frontiers and created options for surgical treatment that have often led to vigorous debate about what constitutes right and wrong. What is achievable has to be limited by what is acceptable.

I believe that the primary challenge for each of us in the future is to become a complete surgeon. For a complete surgeon, technical expertise is necessary but not sufficient. The complete surgeon must be an excellent technician and, even more importantly, a great doctor: that is, someone who can communicate well with patients and who is adept at engendering trust.

Increasingly, in the future, surgeons will have to withstand the temptation to become purely technicians because if we allow ourselves to be purely technicians, we will cease to be true physicians. We should never let that happen. We should never let anyone push us to be purely technicians. If anyone says, “We will work up all the patients, work up all the pre-ops, and see all post-ops. You can just operate all day, every day,” we should withstand the temptation to go along. In an environment in which Relative Value Units (RVUs) are becoming the measure of achievement and where the focus on finances seems ever present, we must withstand the temptation to become pure revenue-generating technicians.

Another essential challenge for surgeons will be to ensure that informed consent for surgery continues to be a meaningful exchange. Surgeons today face the challenge of overcoming the impediments to the surgeon–patient relationship and engendering the patient’s trust. Only by becoming adept at engendering the trust of our patients can we achieve success as surgeons. This is perhaps something on which we do not focus enough in our training programmes, but it will be increasingly critical to succeed in a career in surgery. We must make a concerted effort to train our medical students and residents to become good communicators and give them tools to engender trust.

Apart from the challenges to the surgeon–patient relationship, I think the central question in surgery has changed. The central question for surgeons in the past was, “What can we do for this patient?” This was the central question asked for centuries when the therapeutic options that surgeons could offer their patients were quite limited. 

In contrast, as the options for what we can offer even critically ill patients have expanded, the question today, and increasingly in the future, will be, “What should we do for this patient?” This is a very different question. This question of what “should” we do for a patient is really a question of surgical ethics. To answer what should be done, surgeons must take into account not only the surgical problem at hand, but also the patient’s goals and values. “What should be done?” always requires us to attend to the ethical dimension of care in order to provide an answer.