
Improving Patient Care and Reducing Costs in the Laboratory by Optimising Test Utilisation


Proper utilisation of laboratory testing is an important factor in improving patient care outcomes. Clinically useful laboratory tests provide timely diagnosis, identify at-risk patients in need of intervention, accurately predict prognosis, guide therapy, and effectively monitor treatment. In contrast, inappropriate laboratory test utilisation reduces the value of laboratory testing and may result in diagnostic errors or adverse patient care outcomes. While implementation of appropriate test utilisation practices in the laboratory is an important factor in improving patient care outcomes, it can also reduce costs.

Table 1. Benefits of Optimising Laboratory Test Utilisation

Reprinted with permission from the Clinical and Laboratory Standards Institute, from: CLSI. Developing and Managing a Medical Laboratory (Test) Utilization Management Program. 1st ed. CLSI document GP49. Wayne, PA: Clinical and Laboratory Standards Institute; 2017.

Changes in healthcare reimbursement models are another reason laboratory test utilisation has gained interest. Highly specialised esoteric tests are more readily available and demand for their use has increased. The overutilisation of these and other laboratory tests contributes to high healthcare costs without adding value for the patient or the healthcare system.

The five main reasons for inappropriate laboratory testing are:

  • Inappropriate test selection on the part of the clinician
  • Incorrect test procedure performed by the laboratory
  • Misinterpretation of results
  • Omission of clinically appropriate testing (underutilisation)
  • Failure of the healthcare provider to follow up on test results with the patient

An organisation can develop solutions for test utilisation improvement by understanding the reasons for inappropriate use of laboratory tests. Selecting the proper diagnostic test in a particular clinical situation has become more difficult as the number and complexity of tests expand. While clinician and patient education is helpful, alone it is insufficient to ensure optimal test utilisation.

What is Test Overutilisation?

Test “overutilisation” is the performance of a test for which the result has little to no effect on the patient’s care. There are many causes of test overutilisation, most of which originate during the pre-examination or pre-analytic phase of testing.

One of the causes of test overutilisation is that a significant portion of laboratory test orders are “redundant” or “duplicate,” in that the test is repeated too soon to provide useful information. Several factors contribute to duplicate testing. In large organisations with many healthcare providers, one clinician may not be aware that a test has already been ordered by another, and information about which tests have been ordered and are pending is not always readily available to the busy clinician. Electronic notifications and stopping unnecessary duplicate orders at the point of order entry are proven ways to decrease this type of overutilisation.
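
As a rough sketch of how such an order-entry check might work, the snippet below compares the time since a test was last performed against a locally defined minimum retest interval. The test names and intervals are hypothetical placeholders, not recommendations; real rules are agreed between the laboratory and its clinical stakeholders.

```python
from datetime import datetime, timedelta

# Illustrative-only minimum retest intervals (hypothetical values).
MIN_RETEST_INTERVAL = {
    "HbA1c": timedelta(days=90),
    "Lipid panel": timedelta(days=365),
    "TSH": timedelta(days=42),
}

def is_duplicate_order(test_name, new_order_time, last_performed_time):
    """Return True when a test is reordered sooner than its minimum retest interval."""
    interval = MIN_RETEST_INTERVAL.get(test_name)
    if interval is None or last_performed_time is None:
        return False  # no rule defined, or no prior result on record
    return new_order_time - last_performed_time < interval

# An HbA1c reordered 30 days after the previous one would trigger an alert.
print(is_duplicate_order("HbA1c", datetime(2018, 3, 1), datetime(2018, 1, 30)))  # True
```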

Another type of test overutilisation occurs when tests are ordered more frequently than necessary. Unnecessary repeat testing typically involves routine, high-volume tests. Causes include standing orders, inpatient phlebotomy scheduling, and uncertainty about the reasons for retesting.

Testing redundancy occurs when two or more tests ordered together are expected to provide the same information; ordered at the same time, they add little, if any, additional information. In some cases, one test is preferred over another. Working with providers to develop and implement best-practice test algorithms can substantially reduce this type of redundancy.

Technical problems also contribute to testing overutilisation. Misorders are one type of technical problem and may be caused by marking the wrong test(s) on order forms or by the laboratory processing the order incorrectly. A second type of technical problem is that a test may be inappropriately selected to evaluate a medical condition for which it is not indicated. Pathologists and other laboratorians should work closely with their information technology personnel to remove obsolete tests from the test menu, and to make it easy for providers to order the correct test and difficult to order the wrong, sound-alike test.

Test Underutilisation

The omission of clinically useful testing is one instance of test underutilisation. Reasons for underutilisation include incomplete initial or reflex testing, insufficient laboratory monitoring of a disorder or treatment, and the lack of expected or recommended testing needed to assess a clinical condition. Working with providers to implement appropriate testing algorithms, particularly automated ones that include appropriate reflex testing, is an excellent way to help ensure necessary testing is performed at the appropriate time, and such algorithms could prove very helpful for population health management. Although it is not studied as often as overutilisation, underutilisation of laboratory tests may cause unfavourable clinical consequences. Addressing underutilisation holds great promise for disease prevention and the optimal management of chronic diseases.
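
The sketch below illustrates, in simplified form, how an automated reflex rule of this kind might be expressed. The thyroid tests and decision limits shown are hypothetical examples for illustration only; each laboratory defines its own reference intervals and reflex rules with its providers.

```python
# Hypothetical decision limits, for illustration only.
TSH_LOW, TSH_HIGH = 0.4, 4.0  # mIU/L

def thyroid_reflex(tsh_result):
    """Return the follow-up test(s) to add automatically for an abnormal TSH."""
    if tsh_result < TSH_LOW:
        return ["Free T4", "Free T3"]   # suppressed TSH: evaluate for hyperthyroidism
    if tsh_result > TSH_HIGH:
        return ["Free T4"]              # raised TSH: evaluate for hypothyroidism
    return []                           # TSH within limits: no reflex testing added

print(thyroid_reflex(7.2))  # ['Free T4']
```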

Implementing a Test Utilisation Programme

It is important that the organisation’s medical and administrative leadership understands the importance of managing utilisation throughout the organisation. Leadership can show support for a test utilisation management programme by establishing and participating in the programme.

Participation from many areas of the healthcare organisation is essential to the proper implementation and success of a test utilisation programme. Multispeciality representation and participation is an important consideration in the formation of a team. The following individuals and/or groups add value to a laboratory utilisation programme:

  • Pathologists and doctoral-level laboratory scientists
  • Physicians, clinicians, and nurses
  • Administrators, managers, and supervisors
  • IT and data services
  • Quality and/or continual improvement professionals
  • Financial analysts and accountants
  • Statisticians and public health experts
  • Medical laboratory scientists

Utilisation committees help manage change and provide guidance on test utilisation. In some organisations, the utilisation committee reviews and approves requests for existing and new referred tests (e.g., expensive genetic and molecular tests) submitted to referral laboratories. Others review the scope, use, and effectiveness (both cost and clinical) of current laboratory services, as well as any new laboratory services and associated technology requested by clinical stakeholders.

Although the advantages are significant, a formal committee structure is not always suitable for every organisation. Some organisations may institute a less formal working group or, depending on the resources and expertise available, smaller subcommittees and task forces made up of personnel with expertise appropriate to a specific utilisation issue.

Utilisation priorities vary based on the organisation’s size, the types of patients served, and the resources available for analysis. Many laboratories begin with projects involving higher-cost referral laboratory tests. The financial effect of the proper utilisation of these tests is relatively easy to track, because the costs are captured separately from the laboratory’s in-house testing costs.

Laboratory utilisation management programmes often focus their efforts on:

  • Clinical and anatomic pathology services
  • Referred testing
  • Esoteric testing, especially genetic and molecular
  • Point-of-care testing
  • General laboratory quality improvement
    • Sample suitability
    • Turnaround times
    • Frequency of testing, including duplicate testing

 
The Importance of Measuring Outcomes of a Test Utilisation Programme

Statistical and decision support analyses, as well as financial information (for actual testing and effects of test utilisation on other healthcare expenses or revenues), are valuable in assessing appropriate and inappropriate test utilisation and for monitoring the effectiveness of the programme. Information is available that suggests what to measure, how to measure selected metrics, and how to construct meaningful reports.

Conclusion

Implementation of a test utilisation programme is an effective way to reduce costs and improve patient outcomes. CLSI has recently published Developing and Managing a Medical Laboratory (Test) Utilization Management Program, 1st ed. (GP49), which provides guidance for the initiation, development, and maintenance of an effective test utilisation programme in the laboratory.

References available on request.

The Impact of Laboratory Accreditation on Patient Care and the Health System


Accreditation is a procedure by which an authoritative body gives formal recognition that an organisation is competent to carry out specific tasks. As efforts to expand services continue, increased attention to quality, efficiency and cost effectiveness is paramount.

International accreditation services for clinical laboratories can be obtained from the College of American Pathologists (CAP), Joint Commission International (JCI), the United Kingdom Accreditation Service (UKAS), and the National Association of Testing Authorities, Australia (NATA), among others.

The College of American Pathologists (CAP) founded its accreditation programme 55 years ago in 1961. It includes more than 7,945 laboratories in the United States and over 430 laboratories in 50 different countries. The College’s Laboratory Accreditation Program (LAP) accredits the entire spectrum of laboratory test disciplines with the most scientifically rigorous, customised and objective checklist requirements.

The international standard ISO 15189 addresses the requirements for quality and competence in medical laboratories. It has its origins in the competency requirements of International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 17025 and the quality management system (QMS) requirements of ISO 9001. Fulfilling those requirements is objective evidence that the laboratory meets both the technical competence and the management system requirements necessary for it to consistently deliver technically valid results.

ISO 15189 was first introduced in 2005 with rapidly growing international adoption. In the Emirate of Abu Dhabi, ISO 15189 has been made mandatory for medical laboratories as of January 2017 by Abu Dhabi’s main health insurance company as well as the Abu Dhabi Quality and Conformity Council (AD-QCC). In the UAE, the Emirates International Accreditation Centre (EIAC), formerly known as the Dubai Accreditation Department (DAC), is the signatory to the International Laboratory Accreditation Cooperation (ILAC) and offers accreditation services for clinical laboratories in the country.

The Impact of Laboratory Errors on Patient Care
Diagnostic testing typically involves complex, multistep processes that are subject to multiple sources of error. According to Trevor Peter et al, studies in the United States and Europe have demonstrated that errors occur throughout the testing process, including the pre-analytical stage (sample collection, labelling, and transport); the analytical stage (testing in the laboratory); and the post-analytical stage (data management and reporting of results). The majority of errors occur outside the laboratory, in the pre-analytical (46%–68%) and post-analytical (18%–47%) stages. This does not include clinic-based errors that occur in deciding which tests to order and in interpreting test results, both areas of high error risk. The frequency of errors during the analytical stage is lower but remains significant, estimated at between 7% and 12%, despite years of quality management regulation. In the United States, it is estimated that 6% to 12% of laboratory errors put patients at risk of inappropriate care and potentially of adverse events, whereas 26% to 30% of errors have a negative impact on other aspects of patient care.

Sources of error include lack of operator competence or failure to adhere to standard test procedures; incorrect reagent storage or use of expired reagents; and instrument inaccuracy. Errors can also be introduced during sample collection, labelling and transportation, registration at the laboratory, or the transcription and delivery of results. Combined, these errors can lead to significant variance in the accuracy of the reported result, potentially leading, in some cases, to incorrect diagnosis, inappropriate treatment, or the withholding of lifesaving therapy.

The Impact of Laboratory Accreditation on Patient Care
I still recall the day, 20 years ago, when I started working as a junior pathologist in the United Arab Emirates. At that time, no laboratory in the country had any sort of accreditation, and travelling abroad to seek medical treatment was not an uncommon practice. Fast forward to the present day: there are now 45 CAP-accredited laboratories in the UAE and many more accredited to the ISO 15189:2012 standard, and the list is growing.

The latest report of the Institute of Medicine (IOM), titled “Improving Diagnosis in Health Care”, is entirely focused on diagnostic errors and is intended to improve diagnosis in healthcare. Three of the eight goals set out by the committee that created the report have direct links to laboratory accreditation: goal 2, enhance healthcare professional education and training in the diagnostic process; goal 4, develop and deploy approaches to identify, learn from, and reduce diagnostic errors and near misses in clinical practice; and goal 5, establish a work system and culture that supports the diagnostic process and improvements in diagnostic performance. Accreditation provides the perfect means to address these goals.

The variability of test results and the frequency of errors can be reduced by implementing and monitoring a comprehensive laboratory quality management system. This includes participation in regular proficiency testing (PT), which is a prerequisite of any accreditation system for clinical laboratories. Accreditation provides verification that laboratories are adhering to established quality and competence standards deemed necessary for accurate and reliable patient testing and for the safety of staff and the environment.

The impact of accreditation on patient care is far-reaching. Benefits include: 1) increased patient confidence and trust in the healthcare system; 2) a decrease in the number of patients travelling abroad to seek medical treatment and an accurate diagnosis; 3) greater rigour in the licensing, competencies, training and education of laboratory professionals; and 4) the stimulation of innovation, research and novel discoveries. On that basis, health authorities in the United Arab Emirates are now insisting on accreditation as a means to improve patient care. With the introduction of accreditation there are greater demands to hire well-trained and qualified technologists, invest in a solid infrastructure and raise the bar to a higher level.

Trevor Peter et al offer the following opinion on the impact of accreditation: “Adherence to quality standards—and participation in accreditation programs that certify this adherence—can improve operational efficiency and customer service and reduce rates of laboratory errors. While there are limited published data that directly link accreditation to reduced laboratory errors and patient outcomes, studies have clearly shown that participation in PT programs, a key component of accreditation, leads to more accurate test results. For example, participation in just 3 rounds of an external CD4 PT program resulted in a 26% to 38% reduction in errors in the CD4 count among laboratories in resource-limited settings. When PT participation became a standard requirement in the United States, PT failures among laboratories were noted to decrease with successive PT challenges, as has the percentage of laboratories cited for deficiencies during successive inspection cycles”.

Having an international benchmark is another huge advantage of accreditation that can impact patient care. Take, for example, a simple quality metric such as specimen rejection, and how tracking and addressing this single metric can affect timely and effective patient care. Many other indicators used to objectively measure the effectiveness of the quality management programme will have similar, if not more significant, effects on patient care. Cytological examination of cervical smears is another example, where improving the quality of reporting is closely linked to better patient outcomes and the benefits of early detection of cervical cancer.

Trevor Peter et al stressed that accreditation provides a mechanism by which patients, healthcare organisations, and governments can measure the performance of laboratories against international standards. Accreditation promotes trust in laboratories and confidence among authorities, healthcare providers and patients that laboratories and the results they produce are accurate and reliable. Successful laboratories can justify the resources they need to maintain quality. Increased resources, in turn, help improve laboratory capacity and may boost performance. The challenge for many laboratory networks today is to visibly improve their performance to a level at which this type of positive reinforcement begins to take effect.

Finally, cultivating a quality culture in any organisation requires time. The advantage of accreditation lies in its utility as a transformative tool for any laboratory or organisation seeking performance excellence. Continuous improvement follows as a natural progression, leading to stepwise and steady improvement in quality, performance and outcomes. Trevor Peter et al stated that accreditation is likely to have spillover effects on the performance of other areas of the healthcare system. Laboratory-driven improvements can help improve healthcare management more broadly. For example, an improved supply chain requires improved forecasting skills and better inventory management and consumption-tracking systems, and the process of upgrading these systems at a national level could also benefit the drug supply chain. Furthermore, the example of laboratory accreditation, with its established and structured processes, defined standards, and accrediting bodies, can demonstrate the benefits of systematic performance evaluation and ongoing quality improvement, and could catalyse the impetus to improve patient care across the entire healthcare system.

Conclusions
Accreditation provides an effective means of building quality medical laboratories, improving diagnosis in healthcare and enhancing patient safety. Health authorities are encouraged to consider accreditation as one of the means to improve patient safety and patient care. Accreditation programmes can help drive improvements in the management of individual laboratories and laboratory networks and may also have positive spillover effects on performance in other sectors of the healthcare system.

Point-of-Care Diagnostic Testing: Empowering the Clinicians


Point-of-care (POC) testing, otherwise known as near-patient testing or remote testing, is a quick and convenient way to test a patient outside of a laboratory – be it in a GP’s clinic, an ambulance, the home, the field, or in the hospital. According to the National Institutes of Health (NIH) in the United States (US), the largest biomedical research agency in the world, POC testing results in care that is timely and allows rapid treatment of the patient. The NIH believes that “empowering clinicians to make decisions at the ‘point-of-care’ has the potential to significantly impact healthcare delivery and to address the challenges of health disparities” and that the ongoing development of portable diagnostic and monitoring devices for POC testing could result in “the success of a potential shift from curative medicine, to predictive, personalised, and preemptive medicine”.

The latest report from Markets & Markets estimates that the global point-of-care diagnostics market will reach $36.96 billion by 2021, at a CAGR of 9.8% from 2016 to 2021. The report identifies key factors such as the prevalence of lifestyle and infectious diseases and the increasing inclination toward home healthcare as driving the growth of the POC diagnostics market globally. In addition, private investments and venture funding for the development of new products, along with growing government support to increase the adoption of POC devices, are boosting market growth. On the other hand, the report finds that reluctance among patients to change existing treatment practices is one of the key factors hampering the growth of the point-of-care diagnostics market.

Yesterday, Today & The Future

In years gone by, healthcare was delivered through home visits by doctors, essentially the earliest form of POC testing, until medical discoveries allowed care to shift to the hospital setting. It wasn’t until centralised laboratories became commonplace and samples were analysed using automated systems that making the right diagnosis became more efficient. Eventually, POC testing was introduced in hospitals and doctors’ offices for simple testing, such as pregnancy tests.

Today, prevention and early detection of disease is a central focus for caregivers. The concern is how to deliver optimal care quickly and efficiently. POC testing using portable devices, instruments, and scans enables non-clinical staff to screen, diagnose and manage many of the communicable and non-communicable diseases prevalent today. The development of sensors and low-cost imaging systems provides rapid analysis of blood samples and scans that can be reported on immediately for quick diagnosis.

The future of POC testing lies in the personalisation of medicine. Wireless technology, the Internet of Things (IoT) and big data will not only allow patients to take more control of their own healthcare, but will also hugely improve the way information is transmitted and interpreted, making healthcare delivery more efficient, precise and, ultimately, more affordable.

POC Testing in Diabetes

A recent study on POC testing for diabetes provides new evidence that HbA1c may be the most effective method to identify patients with undiagnosed prediabetes and diabetes, and that point-of-care testing further enhances that screening ability in primary-care settings. The research, conducted by Heather P Whitley, PharmD, of Auburn University Harrison School of Pharmacy, Montgomery, Alabama, and colleagues, and published in the Annals of Family Medicine, suggests that HbA1c is a better test than fasting blood glucose because post-meal glucose spikes occur earlier in the course of developing type 2 diabetes than a high fasting glucose does.

This prospective longitudinal study compared diabetes screening by standard practice with systematically offered point-of-care (POC) hemoglobin A1c (HbA1c) tests in patients aged 45 years or older. Systematic screening (n = 164) identified 63% of participants (n = 104) with unknown hyperglycemia and 53% (n = 88) with prediabetes. Standard practice (n = 324) screened 22% of patients (n = 73), most commonly by blood glucose (96%); 8% (n = 6) and 33% (n = 24) were found to have diabetes and prediabetes, respectively. The association between screening outcome and screening method was statistically significant (P = 0.005) in favour of HbA1c. HbA1c may be the most effective method to identify patients unknowingly living with hyperglycemia, and point-of-care tests further facilitate screening in a timely and feasible fashion.

POC Testing in Cardiac Care

According to findings from a National Heart, Lung, and Blood Institute Working Group in the US, POC testing has tremendous potential to advance precision cardiovascular (CV) care. The prevention and management of cardiovascular disease increasingly demands effective diagnostic testing.

For example, for patients presenting with chest pain, the working group found that in ambulances and emergency rooms, POC testing can improve the efficiency of care by enabling rapid assessment and triage of patients with chest discomfort. Cardiac troponin (cTn), a highly sensitive and specific biomarker of myocardial injury, guides triage and management of patients presenting with symptoms suggestive of acute coronary syndrome. ERs already use commercial POC cTn assays, but parallel efforts are exploring whether central laboratory cTn assays can perform serial measurements at progressively shorter intervals to discriminate cardiac from noncardiac causes of chest discomfort and enable rapid patient triage. Historically, stable serial measurements of cTn taken at 6- to 12-h intervals served to “rule out” cardiac injury. More recently, high-sensitivity cTn assays, available only in the central laboratory, permit exclusion of clinically important myocardial injury with high confidence at initial sampling as well as after only 2 serial measurements performed at 1- to 2-h intervals. POC devices that can match this performance without sending samples to a central laboratory may become mainstream frontline CV diagnostics.

POC Testing in Liver Disease

In a paper titled “Point-of-Care Testing in Liver Disease and Liver Surgery” by Lasitha Abeysundara et al., published in Seminars in Thrombosis and Hemostasis, the authors looked at the role of POC tests in the management of liver disease. The alterations in coagulation and hemostasis that accompany liver disease are complex, and while patients with this disease have traditionally been perceived as having a bleeding diathesis, it is now understood that in stable patients hemostasis is “re-balanced.” Hepatic surgery, and particularly liver transplantation, can be associated with large fluid shifts, massive bleeding, and coagulopathy, as well as postoperative thrombosis. The authors found that POC tests of coagulation facilitate goal-directed treatments and hemostatic monitoring in dynamic environments where coagulation status can alter rapidly and often unpredictably. POC tests reflect the re-balanced hemostatic system more accurately than conventional coagulation tests (CCTs) do. Viscoelastic POCT-guided transfusion algorithms permit a reduction in blood product administration and are a key component of patient blood management programmes. Moreover, viscoelastic POC tests are better able than CCTs to identify patients with hypercoagulability. With thrombosis increasingly recognised as a problem in patients with liver disease, the authors believe that POC tests hold promise for both individualised bleeding and thrombosis management.

Whose data is it anyway?

The ethical and regulatory issues surrounding POC testing remain under consideration across the world as the decentralisation of diagnostic testing raises new questions. The Food and Drug Administration (FDA), in its regulatory oversight role, has provided related guidance such as the “regulatory oversight framework for laboratory developed tests” and the “mobile medical application policy”. Because POC devices are mobile, the protection of patients’ health information also remains an issue. Who gets to benefit from the huge amount of data collected? There are multiple players involved – manufacturers, management providers, practitioners, even the patients themselves – and sensitive patient data must be protected.

The boom in POC testing today is mostly being driven by wearable microsensors and lab-on-a-chip technology. The impact of these technological advancements will mostly be seen in low-income countries where healthcare infrastructure is weak and there is a clear lack of lab-based testing. However, today the US dominates the market in terms of the volume of testing outside the laboratory setting, with the Middle East and Africa (MEA) and Asia-Pacific (APAC) regions emerging as potential areas for growth.

Economic challenges for hospital laboratories


In industrial nations, expenditure on healthcare systems is continuously rising. Costs are increasing due to technical and medical improvements, growing patient expectations and, most of all, as the obvious result of a disproportionately ageing population. On the other hand, financial resources are decreasing due to declining public and private revenues. In general, the determinants of health and illness, which can be classified into individual, medical, demographic, socioeconomic, cultural and structural factors, are stressing the fine-tuned balance of this system. As a consequence, a progressively widening budget gap develops. Solutions are needed to face current and future challenges.

Medical laboratories provide services as second-level support. While direct interaction with patients occurs at the first level, analytical activity takes place in the background. However, laboratory activity provides significant support for medical decision making and treatment. Therefore, the solid functionality of such analytical units is not only relevant for medical results but also critical for the revenues of medical institutions.

To close the gap between gains and expenses, modern laboratory units require increasingly strong management. In functional terms, organisation, task management, the shaping of processes, the planning of technical and human resources, and leadership represent the relevant cross-sectional topics. To understand the scope of these challenges, three major characteristics stand out as key competences. The first is technical capability: the ability to transfer knowledge, techniques and methodology to a given context. The second is social competence, which particularly comprises intersectoral comprehension, intelligent communication skills and a readiness for cooperation. The third is conceptual expertise, by which complex issues are simplified, structured and transformed into precise instructions.

Diagnostics is a key link in a sustainable healthcare system, providing immediate opportunities to reduce costs and bringing value to the decision-making process, resulting in high-quality care. Looking in more detail, several modules can be identified to achieve pronounced leverage. First, in-house costs must be identified and analysed; expenditure on materials and staff per analyte, and benchmarking against comparable institutions, reveal possible savings. Second, the outsourced analytical portfolio has to be screened: current pricing agreements and services must be compared with market conditions, including aspects of additional services and quality. Subsequently, an alignment of internal and external content and performance must follow these initial efforts. The final step in this rightsourcing process is interdisciplinary exchange to control and adapt the volume, content and behaviour of laboratory requests.

Giving equal consideration to the management function, with its focus on planning, organisation and control, and to functional matters such as purchasing, production and sales, creates the solid foundation and the solution for the economic “yin and yang” of the clinical hospital landscape. Its professional integration within the hospital laboratory network structure results in high productivity and a gain in efficacy and efficiency. Such gains have to be carefully balanced against quality aspects to maintain value-based healthcare delivery. In this context, internal quality control, external proficiency testing, a well-established quality management system and profound supervision are mandatory. As a consequence, laboratory providers that are adaptive and nimble in the operational and economic considerations mentioned will pull ahead.

The world of healthcare delivery is changing rapidly. Organisations, hospitals and hospital laboratories need to continually transform their business strategies to keep up with a constantly changing environment. Yet it is the patient’s outcome that matters most and will continue to be the centre point and sum of all activities within the complex treatment process chain. Today, mostly fragmented systems usually incentivise the services delivered by laboratories and dictate the payment for healthcare support. However, it is a patient-centred outcome system that must represent the overall strategy when looking at economic optimisation. This perception must never be disregarded, as value-based care is the common goal and the integral approach to a solution based on patient needs. As an overall strategy must consider overall performance, one may summarise this reflection with an analogy to current politics by saying “patient first”.

How TrueProfile.io is using Ethereum Blockchain to empower healthcare professionals to own their data again


TrueProfile.io® is the new standard of document verification for diplomas, employers’ references, licenses and other trust-based objects, verified at their issuing source. It puts the individual in control of safeguarding their data and allows it to be shared in a variety of ways with whomever they choose.

As one major element, TrueProfile.io stores this verification on the Ethereum Blockchain so that customers can access proof of their verification independently of the service - even if TrueProfile.io should cease to exist. In essence, this concept supports identity services both in a traditional sense and in a self-sovereign one.

In a nutshell, the validated data belongs to the user and not to the company validating it.

The basic concept of TrueProfile.io

TrueProfile.io is the industry leader in the verification of applicant qualifications such as diplomas and employers’ references. TrueProfile.io, powered by the DataFlow Group (founded 2006), conducts Primary Source Verification (PSV) for every submitted document by reaching out to the Issuing Authority (IA) – such as a university, an employer or a licensing body – to verify the authenticity of the document. All healthcare professionals in the GCC will be particularly familiar with the process of verifying their credentials as part of pre-employment background screening.

Furthermore, during verification, the DataFlow Group manually checks that the Authority is accredited and legitimate. If these conditions are met, TrueProfile.io issues a so-called TrueProof® – the basic building block of its service, which is a single positively verified document. TrueProfile.io uses Blockchain technology to store a TrueProof so that customers can access their TrueProof independently of TrueProfile.io, even if at some point in time TrueProfile.io should no longer be in operation.

An introduction to Blockchain

In 2008, the Bitcoin white paper was published by a person or group using the name Satoshi Nakamoto. Bitcoin is a decentralized transaction system based on Blockchain technology. It is decentralized in the sense that no central third party has control over the transactions written on the Blockchain.

Furthermore, transactions accepted by the network are immutable as the network protects them from manipulation. To achieve this, all transactions need to be ordered in relation to the time they occurred. These transactions are bundled in blocks that are chained one after another; the following blocks secure the previous blocks and this process secures the integrity of the system. For all TrueProfile.io users, this means they can be safe in the knowledge that validated TrueProofs cannot be edited or amended in any way. 

In Bitcoin, one block is written approximately every ten minutes. Ordering transactions by time and chaining them together makes the Blockchain perfectly usable for timestamping, a method already implemented by Satoshi Nakamoto. For example, the following text is written in the first block of Bitcoin (the Genesis block): “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks”. This data is stored forever and cannot be changed without changing all the blocks built on top of the Genesis block. That is why a Blockchain is the perfect way to preserve data forever.
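
A minimal sketch of this chaining idea is shown below. It assumes nothing about Bitcoin’s actual data structures beyond the principle described above (real blocks also carry proof-of-work, Merkle trees and much more); the block contents are invented for illustration.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A toy chain: each block records some data plus the hash of the block before it.
genesis = {"prev": None, "time": "03/Jan/2009",
           "data": "Chancellor on brink of second bailout for banks"}
block1 = {"prev": block_hash(genesis), "time": "04/Jan/2009", "data": "tx batch 1"}
block2 = {"prev": block_hash(block1), "time": "05/Jan/2009", "data": "tx batch 2"}

# Tampering with the genesis data changes its hash, so block1 no longer points
# to it; every later block would have to be rewritten to hide the change.
genesis["data"] = "tampered"
print(block1["prev"] == block_hash(genesis))  # False: the chain is broken
print(block2["prev"] == block_hash(block1))   # True: block1 itself was not rewritten
```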

At the end of 2013, Vitalik Buterin took the idea of Bitcoin and introduced the concept of smart contracts. For Bitcoin, the software code that is executed for a transaction is defined by a limited set of codes, known as OPcodes. Each of these OPcodes performs a specific command or function on the Bitcoin Blockchain. Buterin suggested using a ‘Turing complete’ programming language instead of the limited set of Bitcoin OPcodes. This makes it possible to bind any arbitrary software code to an Ethereum address – now known as a smart contract.

In contrast to Bitcoin, this leads to more flexibility as smart contracts can be added over time without any changes to the underlying Blockchain software structure. 

The strength of data on a blockchain

As the Blockchain is replicated across many thousands of computers, it would be a waste of storage to keep full documents on all replicated nodes. It would also be critical for privacy if CV data were stored on a publicly accessible Blockchain like Ethereum. That is why TrueProfile.io proves the authenticity of a document by storing the document’s fingerprint, but not the document itself, on the Blockchain.

The fingerprint is comparable to a human fingerprint. If a fingerprint matches, it guarantees that it came from exactly this document, as no other document can create the same fingerprint. Fingerprints also solve the privacy concerns, as the fingerprint of a document does not reveal any information about the document’s content, in the same way that a human fingerprint gives no information about its owner’s hair colour. In computer science, these fingerprints are called hashes.
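
As a simple illustration of the principle, the snippet below shows how a fixed-length fingerprint reveals nothing about the content and changes completely when a single character of the document changes. The example document text and the choice of SHA-256 are assumptions for demonstration; the article does not specify the exact hash function TrueProfile.io uses.

```python
import hashlib

document = b"Bachelor of Medicine, Example University, awarded 2001"
altered  = b"Bachelor of Medicine, Example University, awarded 2002"

# The SHA-256 digest acts as the document's fingerprint: fixed length,
# no information about the content, and completely different if even
# one character of the document changes.
print(hashlib.sha256(document).hexdigest())
print(hashlib.sha256(altered).hexdigest())
```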

Pragmatic standard with a hybrid approach 

Blockchain purists, obviously, will argue that the sheer existence of a centralized unit like TrueProfile.io/the DataFlow Group makes a travesty out of a radically decentralized set-up without any mediator whatsoever. On a theoretical level, they might be right. Yet, TrueProfile.io has purposefully chosen this path as it sees the world in pragmatic terms as what it is today and how it can serve users who intend to present themselves as trusted candidates - especially in an international, professional context. Instead of trying to onboard every university, every employer and every licensing body around the world to hold and safeguard their private keys for signature, TrueProfile.io is completely technology agnostic. 

When it comes to obtaining verification from the issuing source, the methods can range from written correspondence and phone calls to personal visits to the site. Based on this (positive) input, TrueProfile.io will issue a TrueProof. All the TrueProofs belonging to a user are collected on their myTrueProfile page on TrueProfile.io.

This hybrid approach extends even further: TrueProfile.io puts the user in control of their verified data and enables them to expose this information to third parties of their choice:

User-triggered as a TrueProfile.io Member

  • Members can choose to send a TrueProof PDF - which is stored as a hash on the Blockchain - to any 3rd party, who can then check its validity against the Blockchain (see the verification sketch after this list).
  • TrueProof JSON (a pure data object) takes the idea of hashing the PDF to the next level and stores the hash of the plain data object on the Blockchain, where it is forever accessible to the individual.
  • Members can make the entire myTrueProfile public on a randomized link and share it with 3rd parties of the user’s choice - such as authorities and regulators.
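
A minimal sketch of the verification step referenced above, with the on-chain lookup abstracted to a stored string and the document bytes invented purely for illustration:

```python
import hashlib

def verify_trueproof(received_bytes, onchain_hash):
    """Recompute the fingerprint of a received document and compare it with
    the hash recorded on the Blockchain when the proof was issued."""
    return hashlib.sha256(received_bytes).hexdigest() == onchain_hash

# Issuance: the fingerprint of the positively verified document goes on chain.
original = b"%PDF ... verified employer reference ..."
onchain_hash = hashlib.sha256(original).hexdigest()

# Later, a third party that receives the PDF can confirm it is unchanged.
print(verify_trueproof(original, onchain_hash))                           # True
print(verify_trueproof(b"%PDF ... altered reference ...", onchain_hash))  # False
```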

Employer-induced

A Business Partner sends pre-paid voucher codes to an applicant, who in turn signs up, redeems the voucher and undergoes PSV. Via their login, the Business Partner is presented with the TrueProofs and the myTrueProfile of the applicant, while the member safeguards their TrueProofs on their myTrueProfile for future utilization.

Platform-driven

Bringing trust to 3rd party profile-based services by integrating the attribute “verified” using the TrueProof JSON.

Through all of these means, centralized and decentralized, TrueProfile.io aims to further the aspiration of becoming the standard for document verification. But what does that “standard” mean and how do we get there? Let’s first declare what it will not be or rather how it will not be achieved: It is very unlikely that any supra-national body like the United Nations, The Hague Convention or any similar consortium will all of a sudden and solemnly declare something to be “The Standard for Document Verification”.

By contrast, the standard will be achieved initially through adoption by a few key actors, ideally in one geography and/or industry cluster, before spreading out further horizontally and vertically until it accelerates momentum towards broad usage and acceptance.

In order to reach this point, is TrueProfile.io not subject to the archetypal chicken-and-egg problem, whereby individuals will want a verification only if it is sufficiently accepted by employers, immigration authorities and regulators, while employers, immigration authorities and regulators will only accept individuals with TrueProofs once there is a critical mass of them? Yes, indeed. TrueProfile.io therefore strives to drive both utilization and acceptance of TrueProfile.io from both sides of the market, starting with present and past applicants from the DataFlow Group. From there it extends to the “outer world” of individuals who need to present trusted documents, through efforts in sales, business development and partnerships.

Once a certain critical mass is reached, TrueProfile.io will start engaging strongly, via lobbying and direct conversations with governmental bodies, universities and employers, to seek acceptance for its standard. Communicating these points of acceptance back to the B2C side of document owners will reassure them that TrueProofs are broadly accepted and hence encourage them to use them.

The extension of TrueProfile.io 

In a subsequent step, once the creation and administration of cryptographic keys has become more mainstream, the Issuing Authority of documents would itself require a profile (i.e., an identity on the Blockchain). In this scenario, a service like TrueProfile.io would sign initially, while the Issuing Authority would cosign the data as well. In a final stage of full decentralization, Issuing Authorities could be enabled to sign TrueProofs directly. Another direction to which TrueProfile.io lends itself perfectly is a hybrid approach to building a portable digital identity for users.

Combining an “Identity TrueProof”, based on banking-grade KYC online verification of government-issued documents (ID or passport), with one or several self-sovereign identity initiatives like uPort (which integrates with the TrueProfile.io platform) would establish an interoperable framework for further network distribution. From another perspective, the export and inclusion of document TrueProofs into these self-sovereign identities, under the full control of the individual, should also be made available.

By choosing the Ethereum Blockchain to store hashes of our Members’ information, we have taken the necessary steps to assure our Members that their data is safe, secure and forever accessible, meaning that they will always be able to access their TrueProofs at any stage in their career. Even if TrueProfile.io should cease to exist, users are empowered to own their data and to choose, on a very granular level, who they wish to share access with.

For the full whitepaper and additional technical commentary around the ways that TrueProfile.io uses Ethereum blockchain for the management of blockchain data, please visit: https://www.trueprofile.io/blockchain-whitepaper.pdf.

Prof. Carlo Pappone – Director of the Department of Clinical Arrhythmology and Electrophysiology Policlinico San Donato Research Hospital



Four thousand procedures per year, of which 3,000 are ablations and 1,000 are device implantations, are performed at the Policlinico San Donato Research Hospital and San Rocco Clinical Institute (San Donato Hospital Group) by Prof. Pappone’s team – the largest number in the world for an electrophysiology laboratory. But it is not just the number of cases treated that makes the Department of Clinical Arrhythmology and Electrophysiology of Policlinico San Donato (San Donato Milanese, Milan) one of the leading centres at an international level. The quality of the procedures and the predisposition for innovation attract patients from all over Italy and overseas, and doctors from all corners of the world, who visit the hospital to learn how to use the state-of-the-art techniques devised by Prof. Pappone. A pioneer in the field, he created what is known as the “Pappone approach”, a method that describes how to identify and eliminate atrial fibrillation circuits and establishes the criteria for validating its efficacy. Today, circumferential ablation of the pulmonary veins, described by the “Pappone approach”, is the most widely applied technique in the world for the treatment of persistent and chronic atrial fibrillation.

Atrial fibrillation is an acquired form of arrhythmia, a disease that strikes an increasingly large proportion of the population aged over 50 years, due to ageing of the heart and, in particular, its electrical system. “Arrhythmia upsets a magical balance in our lives. Every day our heart beats some hundred thousand times, and the magic lies in the fact that we are not aware of it. On the contrary, if we have arrhythmia we are well aware of it as it beats too quickly or slowly. This sense of malaise, called palpitation, may have different causes. We should worry about it if the heartbeat is too fast while we are resting or if it is too slow while we are exercising – explains Prof. Pappone. – Atrial fibrillation, together with ventricular tachycardia, is a form of arrhythmia that can be acquired during the course of our lifetime, together with the typical ageing of the heart. As the heart gets older the electrical system may conduct the impulses more quickly or slowly. These are the two types of arrhythmia with the highest incidence rate in the world population, following the ageing of the population and the prolongation of life”. There are also other types of arrhythmia, present from birth or of genetic origin: “We are talking about paroxysmal tachycardia, abnormalities that develop in the embryo, causing irregularities in the conduction of the heart’s electrical system. These individuals may suffer accelerations in their heartbeat up to 200-250 beats per minute, but these forms of arrhythmia are benign and have no clinical consequences. The genetic forms are much more serious: it is written in their DNA that they could suffer arrhythmia episodes at a certain point of their lives, predisposing them to the onset of complex ventricular arrhythmia and even a risk of sudden death” – illustrates Prof. Pappone.

The various forms of arrhythmia are not all treated in the same way, but with specific techniques according to the mechanism that causes them. “Radiofrequency ablation is the most widely adopted technique in the world and, in most cases, actually cures the disease. It is applied both to congenital arrhythmia and to ventricular tachycardia, atrial fibrillation and atrial tachycardia. Through special mapping systems, the arrhythmic circuits – that is, the points where the electric pulses are conducted more or less rapidly – are located and defined and the normal electrical stability of the system is restored using radiofrequency waves” – explains Prof. Pappone. Excellent results have been obtained in patients suffering from Wolff-Parkinson-White syndrome, for which Prof. Pappone and his team were the historical pioneers of treatment by ablation. Through observation of the patients in the electrophysiological study and the elimination of the short-circuits responsible for the arrhythmia, which is often asymptomatic, the recovery rate has reached levels of about 95-99%. There are, however, forms of arrhythmia that cannot be eliminated but have to be managed by implanting a device: “This is the case of patients suffering from ventricular fibrillation, in whom we implant a defibrillator, a small device that detects arrhythmia and automatically eliminates it, thus preventing a heart attack. On the contrary, for atrioventricular blocks, the only possibility is to implant a pacemaker to restore normal cardiac conduction”.

The progress made in arrhythmology has led to a cure being found for the most widespread forms of arrhythmia and now gives some hope even to patients suffering from rarer diseases, often not diagnosable and a cause of sudden death. The scientific studies in progress at Policlinico San Donato trace out the challenges for the future: “We have discovered that many genetically encoded forms of arrhythmia are expressed at certain points of the cardiac muscle, and that they can be identified using advanced mapping systems. These electric abnormalities do not give any signs of their existence for most of the patient’s life and only manifest themselves once: they cannot be prevented or treated because many of the patients affected are not aware of them. This is why, today, all over the world, 3 million people die of a sudden heart attack, 60,000 in Italy alone. We are convinced that we can identify these patients before the disease is manifested because we have discovered how to diagnose the condition. We believe this to be an epoch-making discovery: we could offer a cure to many young people otherwise destined to die early and suddenly” – illustrates Prof. Pappone. On Brugada syndrome: “Our group has opened Pandora’s box on potentially lethal forms of arrhythmia, which strike the population aged between 15 and 35 years, and it was discovered for the first time at Policlinico San Donato that these genetic diseases – until now considered simple electrical disorders – are fully-fledged diseases in which an anomalous gene encodes a substrate situated at a specific point of the heart, in its outer part, that is, the epicardial region of the left or right ventricle. For the first time in the world and in the history of medicine, a technique has been devised to identify these substrates and eliminate them with radiofrequency waves, thus curing a patient suffering from this genetic disease. Until 2 years ago, this was considered an impossible objective for science: now, at Policlinico San Donato, after about 3 years, over 250 patients suffering from Brugada Syndrome have been cured. Numerous doctors coming above all from Asia – a continent on which this disease is widespread – now visit our laboratories to learn the technique”.

The next objective seems to be closer to science fiction than reality: to penetrate into the “magic” of the heartbeat to predict and prolong a person’s life span. “We believe that, as time passes, areas with impaired functions of the cardiac muscle can develop due to electrical abnormalities and that these regions can become larger over the years, thus causing our heartbeat to slow down progressively or stop suddenly. By studying the genetic expression of the electrical activity of the epicardium, we can map the outer part of the heart, identify the damaged regions and, through the application of extremely superficial electromagnetic waves, teach the cells how to communicate with one another correctly, so as to prolong their lives. It is a target that we have set ourselves over the next three years”.

It appears evident that the integration between specialized medical skills and the use of advanced technology is the winning combination for obtaining new clinical and scientific results. “The arrhythmologist of the future will certainly be a heart specialist with in-depth knowledge of cardiology. Unlike all other organs, the heart is the only organ that functions with an electrical current and, come to think of it, the only one that moves autonomously inside the body. An electrical machine must be understood and treated by an electrician, and this is why the figure of the arrhythmologist is crucial for the treatment of diseases that have this origin” – explains Prof. Pappone. And the ideas that stem from the experience of arrhythmologists provide the input multinational companies need to develop increasingly advanced, precise and personalized technologies and instruments. From the mapping systems for studying the heart to the instruments used for the treatments (catheters, devices and software): most of the technologies currently used in electrophysiology laboratories all over the world are said to have originated from collaboration between Prof. Pappone’s Arrhythmology and Electrophysiology group and the leading companies. “Today, we are talking above all about technologies aimed at determining the mechanisms that lie at the basis of cardiac arrhythmia, a change in policy with respect to the traditional orientation towards interventionism. Knowledge of the mechanisms that cause diseases lies at the basis of progress in every field and will enable us, in the near future, to obtain more effective treatments, applicable in the short term, which can reach all the patients in the world”. In fact, the high costs of electrophysiology lead to an uneven distribution of treatments: “For arrhythmia, there is no scientific democracy, but a monopoly of the richer countries. In some developing countries, the discipline does not exist and many children die of forms of arrhythmia that would be extremely easy to cure in Europe. Policlinico San Donato has a robotic system capable of performing operations at a remote location: today, an operation could be performed on a child in Africa simply from a desk using a mouse. The child, in a lorry fitted out and set up as a laboratory, could receive the electrical impulses transmitted directly from our hospital. I sincerely hope that this project will be implemented over the next few years because I firmly believe that everyone, wherever they are born, should have the same opportunities in terms of longevity and quality of life”.

Promises and Drawbacks of Health Technology Assessment (HTA) in Laboratory Medicine


The introduction of diagnostic tests into clinical practice has long been based on the work of so-called “expert groups” or “expert panels”, which were charged with reviewing applications and updating the list of clinically available tests according to recent biological discoveries, technological developments or changes in disease epidemiology. This has led to highly specific panels of tests, dependent upon local epidemiology, economic resources, healthcare sustainability, reimbursement policies and, last but not least, patients’ demands. The use of arbitrary criteria for evaluating test performance, costs and healthcare outcomes is the major weakness of this strategy, which has often led to diagnostic tests being approved for clinical use in some geographical areas while being overlooked in other, even neighbouring, regions.

Basic concepts of Health Technology Assessment
Establishing whether or not a given diagnostic test is both sustainable and clinically useful is not an easy task. This is mostly because translating basic research assays into daily practice is a challenging enterprise, requiring many years and involving many sequential processes: assay development and validation, commercialisation of reagents and/or diagnostic platforms by in vitro diagnostics (IVD) companies, thoughtful analysis of analytical and clinical performance, positioning the test at the right place within an appropriate care pathway, and monitoring outcomes, performance and failures. Throughout these steps, the vast majority of tests may be literally “lost in translation” (Figure 1).

To overcome some of these shortcomings, most healthcare models around the world are increasingly committing to health technology assessment (HTA), conventionally defined as the accurate evaluation of medical and diagnostic technology for evidence of safety, efficacy, cost-effectiveness, and ethical and legal implications. HTA is hence mainly aimed at providing an evidence-based, reliable and possibly foolproof input into a healthcare strategy or policy decision. In a broader sense, HTA in laboratory medicine should be considered a multi-professional and multidisciplinary enterprise aimed at assessing laboratory technology, from basic reagents and instrumentation up to rearranging care pathways and redesigning complex healthcare structures around the new diagnostic test.

The main operating paradigms of HTA include partnership, scientific credibility, independence, accountability, responsiveness, effectiveness, visibility and accessibility. The domains of HTA are therefore many (i.e., clinical, social, organisational, economic, ethical and legal), and the outcomes are equally multifaceted (e.g., assessment of efficacy, safety, costs, and social and organisational impact). Several essential criteria characterise an HTA process. First, the evaluation of health technology, and thereby the multidisciplinary team, should involve all stakeholders, including patients and their families, healthcare professionals and scientific organisations, citizens’ representatives and volunteer associations, health and social care facilities, commercial and non-profit partners, industry and universities. The HTA process should then consider the many elements that contribute to care delivery, including physical healthcare infrastructure (e.g., hospitals, outpatient clinics and patients’ homes), instrumentation and technological systems, reagents and consumables, test complexity and workload, but it should also be designed around organisational and care models, clinical guidelines and regulatory systems. The systematic literature analysis should be conducted using validated methodologies, such as that described in the Cochrane Handbook for Systematic Reviews of Interventions.

Aims, scope and advantages of Health Technology Assessment in Laboratory Medicine
HTA can be summarised as a valuable tool for evaluating innovative medical technologies, allocating resources more effectively, and improving both the efficiency and the efficacy of care delivery by national healthcare systems. Put simply, the HTA process must answer four major questions: Does the test work? Is it sustainable for the healthcare system? Is it cost-effective? Does it favourably impact existing care pathways? When all four answers are “yes”, the test can be introduced into clinical practice (Figure 2).

Importantly, HTA should cover all management levels of healthcare systems and their structures. The outcome of HTA is therefore intended to inform political and clinical choices taken at different decision-making levels: (i) the general level (i.e., legislative decisions and choices of national and regional regulatory bodies, including the Ministry of Health, medicines agencies, regional health service agencies, health institutes, regional health departments and regional agencies); (ii) the intermediate level (i.e., management choices specific to individual healthcare facilities, such as purchasing equipment, structuring care pathways, adopting specific organisational arrangements, and granting assistance or not); and (iii) the professional level (i.e., choices made by individuals in daily care practice involving the use of a diagnostic test, the need for medical and/or genetic counselling, and the impact on care plans and organisation).

The entire HTA process should entail assessment both before introduction into clinical practice (i.e., to evaluate efficacy and efficiency) and afterwards (i.e., to ensure that the positive impact continues despite scientific, technical, organisational and epidemiological changes). The final purpose is to provide policymakers, healthcare administrators and laboratory professionals with a reliable decision-making process for IVD technologies. Importantly, the process should also clearly define the healthcare settings in which the test can deliver the greatest efficiency and efficacy, the categories of patients to whom it may be offered, its precise positioning within a network of laboratories, the potential impact on laboratory organisation, and prices and reimbursement policies.

The potential drawbacks
Besides the many predictable positive outcomes that HTA may have on diagnostic testing, some important shortcomings emerge (Table 1). First, although HTA is now widely used in many healthcare settings, its uptake in laboratory diagnostics remains relatively limited. Entering the keyword “health technology assessment” in a popular scientific search engine such as PubMed retrieves over 4490 documents to date. However, when the keywords “health technology assessment” AND “laboratory medicine” (OR “laboratory diagnostics”) are combined, the number of items falls dramatically to 20 (i.e., 0.4%). Although HTA may add value to innovation more than most other elements, reliable, published evaluations of diagnostic tests can hardly be found.

A second important aspect is the ever-growing impact of personalised medicine on laboratory diagnostics. Inherent to the concept of HTA, the introduction of a new test is based upon solid scientific evidence, which has often been gathered from studies in populations of hundreds, thousands or occasionally millions of patients. Bearing in mind that medicine is not an exact science, what works in an individual patient may not work equally well in the general population. Therefore, when the process of introducing diagnostic tests into clinical practice relies only on HTA, there is a significant risk that a minority of patients, in whom that same test may be helpful, will lose out on its potential benefits.

Another critical issue concerns the rules of this game, which can be summarised simply in the question “who is nominated by whom?”. It may seem awkward or even paradoxical, but written rules on how the members of HTA multidisciplinary teams should be selected are uncommon. Interestingly, a recent survey endorsed by the World Health Organization (WHO) revealed that although many countries around the world regularly gather and summarise relevant information and scientific evidence to support technology assessment, fewer have developed legal frameworks defining how HTA should be incorporated into healthcare decision making, and only half have developed guidelines on how HTA should be conducted. Therefore, in a not so unlikely scenario, policymakers or healthcare administrators may arbitrarily nominate members who ultimately support decisions based on healthcare economics rather than on evidence-based scientific information. Laboratory professionals may even be excluded from the evaluation of diagnostic tests, since no standards have been established regarding the composition of HTA multidisciplinary teams for IVD technologies.

Finally, clear rules should be defined for unambiguously identifying potential conflicts of interest, thus preventing the final decisions of HTA team members from being biased by personal interests in the diagnostic tests or technology being evaluated.

Conclusions
Although it is undeniable that wider use of HTA in laboratory medicine will bring many clinical and economic benefits for both national healthcare systems and the general population, some potential drawbacks should be clearly recognised. The establishment of “working groups” or “task forces” for HTA in laboratory medicine by accredited scientific organisations, such as the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), should now be seen as a compelling need for making the assessment of diagnostic tests a viable, reliable and clinically useful endeavour.

Figure 1.
Translational (laboratory) medicine

Figure 2.
The pipeline of health technology assessment in laboratory diagnostics

Prof Giuseppe Lippi is a Speaker at the Laboratory Management Conference held as part of MEDLAB Europe in Barcelona, Spain, on 14th September, 2017.

Massive next-generation sequencing (NGS) applied to clinical microbiology: reality or fiction? Direct experiences in the study of hepatitis C infection



The Vall d’Hebron Clinical Laboratories, located in the Vall d’Hebron University Hospital, centralise the in vitro diagnostic studies of both hospital activity and primary care for a population of 1.2 million people in the cosmopolitan city of Barcelona. This intense activity allows us to maintain a very broad view of every aspect of in vitro diagnosis, including Clinical Microbiology. I will draw on this activity, and the experience it entails, to answer the question posed in the title regarding how real or fictitious the application of massive next-generation sequencing (NGS) technologies to Clinical Microbiology is, which I will exemplify using our own experience in the study of hepatitis C.

Researchers in our centre have been studying this pathology since 1990, almost immediately after the characterisation of its causative agent, the hepatitis C virus (HCV), in 1989. To give an idea of the magnitude of our care activity, we conduct approximately 70,000 HCV serology studies every year, of which 10% are positive, and perform around 7,000 viral load measurements, 45% of them with detectable levels of viral RNA. In addition, we also conduct more than 1,000 HCV subtyping studies by NGS per year, which we usually call “high-resolution hepatitis C virus subtyping”. As I will discuss later, this is required before starting treatment of the infection. To date, our hospital has treated more than 1,000 patients.

Introduction to HCV infection

Viral hepatitis represents a major global health problem. Around 500 million people worldwide are actively infected by one of the five viruses responsible for these infections (hepatitis A, B, C, D and E viruses). Of particular relevance are the hepatitis B (HBV) and C (HCV) viruses, with 250 million and 100 million people infected, respectively. These five viruses have different transmission routes (enteral or parenteral) and virological characteristics (e.g., they belong to different families, and their genomes may be RNA or DNA). Nevertheless, to study their genomic characteristics, which are of great clinical relevance, the same parameters are considered: phylogenetic classification to establish their genotypes/subtypes, quantification of the presence and proportions of possible variants associated with treatment failure, and studies to discern possible transmissions. All of these studies can exploit completely analogous technologies.

In fact, this similarity is ideal for Clinical Microbiology studies, given the intensity of the care activity and the need to obtain results for these determinations as quickly as possible, for example to decide on a treatment strategy. This is precisely the case for HCV, as detailed later. At this point, we must take into account the enormous variability and complexity of the viral populations of these agents (quasispecies), which requires population studies by clonal sequencing. This fact alone already seems to justify the application of NGS techniques, as they allow thousands of clonal sequences of infectious agent genomes to be obtained from the same sample, in contrast to direct Sanger sequencing, which provides only the average (consensus) sequence of the population.

To get an idea of the impact of HCV infection, the most recent data published by the World Health Organization as of July 2017 indicate that around 70 million patients suffer from chronic infection, with almost 400,000 deaths per year. In only 15-45% of acute infections (influenced, for example, by the IL28B genotype) is the virus spontaneously eliminated. In the remaining 55-85%, the infection becomes chronic, with a 15-30% risk of progressing to cirrhosis after 20 years of infection, and a 2-7% risk of progression to hepatocellular carcinoma. The infection has a pandemic character, with a prevalence of 1.5-2.3% in Europe (recent preliminary studies in Spain indicate 1.1%), but it is higher in some areas such as Egypt or Pakistan.

HCV is an enveloped virus belonging to the Flaviviridae family. It contains a single-stranded RNA genome of 9.9 kb in length, which encodes a single polyprotein that, once translated, is proteolytically processed into the different structural and non-structural components of the virus. From the N-terminal to the C-terminal region, the structural components are: C (core, the viral capsid component) and E1 and E2 (components of the envelope), followed by the non-structural or functional components of the virus: p7 (ion channel), NS2 (autoprotease), NS3 (helicase/serine protease), NS4 (NS3 cofactor), NS5A (replication regulatory phosphoprotein) and NS5B (RNA-dependent RNA polymerase). The NS3, NS5A and NS5B proteins are the therapeutic targets of the direct-acting antiviral drugs (DAA) currently used against this infection. The virus circulates associated with lipoproteins and replicates in the cellular cytoplasm within an induced membranous web. Therefore, unlike other agents, such as hepatitis B virus (HBV) or human immunodeficiency virus (HIV), HCV does not have a nuclear reservoir.

HCV has a very high mutation rate due to the lack of error correction capability of the viral polymerase (1.5x10-3 substitutions/base/replication cycle). This way, up to 6 mutations are produced in each replicative cycle, causing all new viral genomes to become different from the previous ones. Therefore, the viral population that infects a patient will consist of a very complex mixture of different but related genomes known as "quasispecies". Viral populations constituting the quasispecies differ by amino acid polymorphisms that arise by mutation during replication, and are subsequently selected on the basis of their effects on viral fitness (replicative capacity). Among these polymorphisms, we may find some which confer reduced susceptibility to antiviral treatments, such as DAA, which we refer to as Resistance Associated Substitutions (RAS).  These are often present in minority populations with lower "fitness" than wild type or major variants. When a DAA is administered, these variants with reduced susceptibility are positively selected, resulting in viral resistance, i.e., treatment failure.

A sustained virological response (SVR12/SVR24) is said to occur when plasma levels of viral RNA remain undetectable (by ultrasensitive real-time PCR, <15 IU/mL) for 12 or 24 weeks after the end of treatment. In the context of HCV infection, this is generally taken to mean that the infection has been cured. This situation, however, is exceptional among major viral infections: in others, such as HBV or HIV, undetectable levels of viral genomes (virological response) indicate only the inhibition of viral activity, and not necessarily cure of the infection.

The standard of care for hepatitis C has changed rapidly since the introduction of DAA. These drugs are inhibitors of the NS3 protease (referred to with the suffix -previr), the regulatory protein NS5A (suffix -asvir), and the NS5B polymerase (suffix -buvir). They achieve cure rates above 95% with very short courses of treatment (generally 12 weeks, and sometimes as few as 8). Although the cost of producing DAA is low, these drugs are still expensive (around 15,000 euros per 12-week course), which restricts their universal application. Although access to treatment for HCV is improving, it remains limited. Of the 71 million people living with HCV infection worldwide, only 20% (14 million) were aware of their diagnosis by 2015, and of these only 7.4% (1.1 million) had initiated treatment before 2015. In Spain, it is estimated that 300,000 individuals suffer from chronic HCV infection, of whom only 40% have been diagnosed. Among them, about 70,000 have already been treated, representing almost 40% treatment coverage, well above the world average.

Applications of NGS techniques to the study of HCV infection: a clear advantage over conventional techniques

In spite of the great effectiveness of DAA treatment (an average SVR of 95% in Spain), the number of patients for whom treatment fails should not be ignored. These patients must be correctly classified by HCV genotype/subtype, and possible resistant variants (RAS) must be detected in them to inform new treatment strategies, at least in some specific regions such as NS5A, as the international guidelines (EASL) indicate: “Physicians who have easy access to reliable test assessing HCV resistance to NS5A inhibitors (spanning amino acids 24 to 93) can use these results to guide their decisions”.

This motivates the study of these variants through clonal population sequencing methodologies, such as NGS. In this sense, the same international guidelines state that testing “should be based on population sequencing (‘Sanger’) (reporting RASs as ‘present’ or ‘absent’), or deep sequencing (NGS) with a cut-off of 15% (RASs that are present in more than 15% of the sequences generated must be considered)”. No clear evidence, however, supports this 15% proportion, which was previously recommended by JP Pawlotsky. In this respect, from a sample of 1000 individuals treated with DAA and analysed by NGS, Sarrazin et al reported an SVR12 of 93.3% among patients with baseline RAS detected at the 1% level, dropping to 88.2% among those whose baseline RAS exceeded 15%. In patients who did not present baseline RAS, however, they reported an SVR12 of 98.4%. This suggests that the presence of RAS in proportions <15% influences SVR (by at least 5 percentage points), and such low proportions can only be detected by NGS.

The potential usefulness of population (‘Sanger’) sequencing indicated in the EASL guidelines seems to be based merely on taking this 15% as the sensitivity limit of direct sequencing. Here, we should keep in mind that population sequencing is not a quantitative methodology, and its lowest level of detection is highly observer-dependent. In contrast, NGS (deep sequencing) is quantitative. Even though the 15% cut-off stems from expert recommendation, we should remember that “medicine must be based on evidence and not on eminence”.
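
To make the quantitative nature of this reporting rule concrete, the short sketch below shows how a RAS frequency derived from NGS read counts would be compared with a chosen cut-off. It is an illustrative sketch only, not our laboratory pipeline; the read counts are hypothetical, and the 15% and 1% cut-offs are the values discussed above.

    # Illustrative sketch: reporting a resistance-associated substitution (RAS)
    # from NGS read counts at a given amino acid position. Values are hypothetical.
    def ras_frequency(reads_with_ras, total_reads):
        """Fraction of clonal reads carrying the substitution."""
        if total_reads == 0:
            raise ValueError("no reads at this position")
        return reads_with_ras / total_reads

    def report_ras(reads_with_ras, total_reads, cutoff=0.15):
        """Report the RAS as 'present' only if its frequency reaches the cut-off."""
        freq = ras_frequency(reads_with_ras, total_reads)
        status = "present" if freq >= cutoff else "below cut-off"
        return "RAS frequency {:.1%}: {}".format(freq, status)

    # 800 of 10,000 reads carry the substitution (8%): quantifiable by NGS,
    # but below the 15% guideline cut-off and likely invisible to Sanger sequencing.
    print(report_ras(800, 10000))          # below the default 15% cut-off
    print(report_ras(800, 10000, 0.01))    # present at a 1% research cut-off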

Taking into account that the viral genotype influences SVR, to optimise treatment success and avoid therapeutic failures through RAS selection, the HCV genotype/subtype should be correctly determined, as indicated in international guidelines such as those from EASL: “The HCV genotype and genotype 1 subtype (1a or 1b) must be assessed prior to treatment initiation and will determine the choice of therapy”. In this sense, the genotyping techniques available on the market present substantial error rates, whereas NGS of the NS5B region of the viral genome appears to be free of these errors.

In fact, in our centre, where almost a thousand viral genotype/subtype studies are carried out by NGS every year, SVR after more than 1,000 treatments is 98.4%, around 3 percentage points higher than the Spanish average. These figures clearly support the suitability of NGS techniques for genotyping and resistant variant analysis. But are they also suitable for our daily work? In our laboratory, we have developed NGS protocols to optimise and automate the initially complex process of creating amplicon libraries, their purification, titration, and so on.

Our NGS procedure was initially developed for the “ultra-deep pyrosequencing” (UDPS) methodology on the 454 platform (Roche), in collaboration with the Vall d’Hebron Institute of Research and Roche itself. After the discontinuation of the 454 platform, however, these methods were easily adapted to the “sequencing by synthesis” methodology (SBS, MiSeq-Illumina). The automated extraction of viral genomes, together with the use of universal molecular adapters and the incorporation of molecular identifiers in preloaded plates, all handled by robotic systems, allows us to apply this technology to routine care. We have recently reported our experience with more than 1,400 clinical HCV subtyping studies.

This methodology is based on computerised phylogenetic analysis of a fragment of the NS5B region, the region recommended for the classification of HCV. Moreover, it currently supports the characterisation of all 67 HCV subtypes recognised to date and can very easily incorporate future discoveries, such as a new subtype of genotype 1 identified in our own laboratory. The sensitivity and quantitative nature of this technology allow us to detect mixtures of subtypes and quantify their relative proportions, tasks that would be challenging, or in the latter case even impossible, using any other methodology.

On the other hand, our NGS procedure, although applicable to care studies, still involves a level of complexity that is not feasible in many laboratories. For this reason, we have recently validated a new real-time system based on the Roche Cobas 4800 analyser. This methodology performs well for genotype 1a/1b subtyping, as well as for the other genotypes (though obviously not their subtypes), with only 4% of cases remaining indeterminate and requiring sequencing. The continuous evolution of DAA therapy heralds the arrival of pangenotypic treatments, which would make genotype/subtype assessment unnecessary. Similar claims, however, have been made before for some of the currently available DAA, and experience has shown that SVR still differs between genotypes, as recognised by the international guidelines.

The RAS study, the most recent addition to our care activity (only 150 care studies this year, although more than 300 are in the research and development phase), is based on the analysis of four amplicons that cover the therapeutic target regions (NS3, NS5A and NS5B). Its processing practically matches that of the subtyping study, although it includes more amplicons per patient. To optimise these studies and avoid biases due to the amplification process, we use subtype-specific primers, so the HCV subtype is determined before the RAS study. Most of the samples we process come from other centres, in the rest of Catalonia or in other Spanish regions, where genotyping has been performed by conventional techniques. With this information, we have been able to confirm our previous findings regarding the high error rate of these conventional techniques (>10%). Our RAS care study data also confirm that most treatment failures are associated with variants of the NS5A region, detected in 72% of cases, followed by variants of the NS3 region, in 52% of cases. Furthermore, 36% of failures involve combined RAS from several regions, since treatments usually rely on combinations of DAA.

A further application of these NGS technologies is the study of infection transmission between patients, as in the case of nosocomial infections. Here, a simple alignment and phylogenetic study of the quasispecies obtained from the possible source patient and the recipient(s) allows the shared origin of the infections to be inferred with greater certainty through the observation of common haplotypes.

In conclusion, our experience in the clinical study of HCV infection allows us to assert the immense usefulness of NGS techniques in Clinical Microbiology, since all three applications in which we have incorporated them are perfectly applicable to other pathogens.

References available on request.
Dr Francisco Rodriguez Frias is a Speaker at the Clinical Microbiology Conference held as part of MEDLAB Europe in Barcelona, Spain, on 15th September, 2017.

Biomarkers for neurodegeneration – diagnosis of Alzheimer’s and more



Overview

Neurodegeneration is the bête noire behind diseases such as Alzheimer’s disease (AD), Parkinson’s disease and motor neuron diseases (MND), as well as traumatic brain injury, depression and stroke. Neurodegenerative diseases can be difficult to diagnose, and reliable biomarkers are required to support the clinical evaluation and accelerate diagnosis. Beta amyloid and tau are classical biomarkers for AD, and their analysis in cerebrospinal fluid (CSF) is already an established component of diagnosis. Further biomarkers focussed on synaptic integrity are in the pipeline, while genetic risk assessment of APOE also plays a role in diagnosis. In MND diagnostics, a fledgling biomarker promises a faster route to diagnosis.

Neurodegenerative processes

Neurodegeneration is an umbrella term for the progressive loss of the structure or function of neurons. The process of neurodegeneration can be regarded as a continuum (Figure 1). It starts with the misfolding of proteins, caused for example by hyperphosphorylation, which leads to the formation of oligomers. These form aggregates such as plaques, tangles and Lewy bodies. Plaques, which are deposited extracellularly next to the nerve cell ends in AD, consist of the protein beta-amyloid 1-42. Neurofibrillary tangles, located inside the nerve cells, consist of tau proteins. Lewy bodies are found in Parkinson’s disease and Lewy body dementia and consist of the protein alpha-synuclein. All of these aggregates cause damage to the neurons. This in turn results in loss of synaptic integrity or degeneration of the synapses, leading to cognitive decline and other neurological symptoms.

Biomarkers for neurodegeneration correspond to different stages of the neurodegenerative process. For example, classical AD assays measure aggregates in CSF, which reflect the neuropathological changes in the brain. Synaptic protein biomarkers, on the other hand, provide a measurement of synaptic integrity and are thus assumed to be a more direct indicator of cognitive impairment.

Alzheimer’s disease

AD is characterised by loss of neurons and synapses and is the most common cause of dementia in old age. The risk of developing AD doubles roughly every five years after age 65, with 30% of persons over 90 suffering from the disease. The disease is divided into three consecutive phases: the preclinical stage, the mild cognitive impairment (MCI) stage and the dementia stage. As the disease progresses, patients become increasingly frail and suffer from confusion and hallucinations, among other symptoms. On average, life expectancy after the onset of symptoms is seven to ten years.

Definitive diagnosis of AD is challenging and requires evidence of the neuropathological alterations in the brain. A diagnosis of probable AD is based on the clinical signs of memory loss and behavioural changes and the exclusion of possible reversible causes. Imaging techniques such as MRI, SPECT or PET (amyloid detection) are used to support differential diagnostics. Analysis of biomarkers in CSF aids diagnosis, especially in the early stages, and helps to discriminate AD from non-AD patients. CSF biomarkers employed in routine AD diagnostics include the beta amyloid proteins 1-42 and 1-40, and total tau (T-tau) or phosphorylated tau (P-tau).

Beta amyloid

Beta amyloid 1-42 is the classical indicator of amyloid pathology in the brain. Patients with AD show a significantly decreased level of beta amyloid 1-42, which is detectable as early as 5 to 10 years before the onset of cognitive changes. In contrast, beta amyloid 1-40 remains unchanged in AD patients and provides a marker of the individual amyloid level. The most reliable measure of amyloid pathology is provided by the ratio of beta amyloid 1-42 to 1-40, as it takes into account the patient’s individual amyloid synthesis. A ratio of under 0.1 indicates amyloid pathology. Figure 2 shows a case example of an AD patient with high basal expression of beta amyloids. Based on beta amyloid 1-42 alone, the patient cannot be easily classified; with the ratio of beta amyloid 1-42 to 1-40, however, the patient can be definitively diagnosed. Further studies have demonstrated that the beta amyloid 1-42 to 1-40 ratio provides higher analytical stability than beta amyloid 1-42 alone with respect to confounding factors such as the material of the sample collection tube or the number of sample freeze-thaw cycles. The beta amyloid 1-42 to 1-40 ratio also yields a higher correlation with PET imaging than beta amyloid 1-42 alone (93% compared to 83%).
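
As a simple illustration of how the ratio is applied, the sketch below computes the beta amyloid 1-42/1-40 ratio for a hypothetical patient with high basal amyloid expression and compares it with the 0.1 value quoted above; the concentrations are invented for illustration, and the cut-off should not be read as a validated decision limit for any particular assay.

    # Illustrative sketch: classifying a hypothetical CSF result using the
    # beta amyloid 1-42/1-40 ratio and the 0.1 value quoted in the text.
    ab42 = 900.0     # hypothetical beta amyloid 1-42 concentration, pg/mL
    ab40 = 12000.0   # hypothetical beta amyloid 1-40 concentration, pg/mL

    ratio = ab42 / ab40
    print("beta amyloid 1-42/1-40 ratio = {:.3f}".format(ratio))
    if ratio < 0.1:
        print("ratio below 0.1: consistent with amyloid pathology")
    else:
        print("ratio at or above 0.1: not indicative of amyloid pathology")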

Tau

Concentrations of T-tau and P-tau increase when patients show advanced neurodegeneration and cognitive impairment. Diagnostic guidelines for AD recommend determination of T-tau or P-tau alongside beta amyloid 1-42. T-tau is a marker of unspecific neuronal damage, which occurs in AD but also in other conditions such as stroke. P-tau on the other hand is an AD-specific marker of tau pathology. P-tau assays measure tau protein phosphorylated at specific positions, for example at the amino acid threonine at position 181. Measurement of this analyte yields, for example, a positive predictive value for AD of 88% and a high negative predictive value of 91% (Table 1).
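
For orientation, the positive predictive value is the proportion of positive results that are true positives, PPV = TP / (TP + FP), and the negative predictive value is the proportion of negative results that are true negatives, NPV = TN / (TN + FN). The figures quoted above therefore mean that, in the referenced cohort, 88% of positive P-tau (threonine 181) results and 91% of negative results correctly classified patients with respect to AD.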

Synaptic proteins

Synaptic proteins represent a new area of diagnostics in neurodegeneration. Determination of synaptic proteins can indicate when synaptic integrity, which is essential for cognitive function, is compromised. Two promising new biomarkers are the mainly pre-synaptic protein BACE1 and the post-synaptic protein neurogranin. Increased levels of BACE1 and neurogranin (truncated p75 form) in the CSF appear to correlate with cognitive decline. Assays for these biomarkers are currently used only in a research capacity. Studies are underway to elucidate the diagnostic and prognostic value of these parameters in a clinical setting.

Genetic risk factors

Analysis of the genetic risk factor APOE aids differential diagnosis and early identification of late-onset sporadic AD. There are three alleles of the APOE gene, namely ɛ2, ɛ3 and ɛ4. Carriers of the ɛ4 allele have a higher risk of developing AD, while the ɛ2 variant is associated with a lower risk. Moreover, the risk of developing AD and the average age of disease onset depend strongly on the ɛ4 gene dose, with homozygous carriers at highest risk.

Motor neuron diseases

MND are rare but devastating diseases characterised by degeneration of the upper and lower motor neurons. The most frequent form is amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, which has a prevalence of 2 in 100,000 persons. The cause of MND is still unknown. Damage occurs directly to the axons, and manifestations include muscle weakness, cramps in the arms and legs, dysphagia and dysarthria. As the disease progresses, symptoms spread and become more intense, finally leading to complete loss of autonomy and of the ability to communicate. Typical survival is 3 years after disease onset, with respiratory failure usually the cause of death. Due to the subtle initial symptoms, MND have a long time to diagnosis, usually over 12 months. Laboratory diagnostic tests are therefore urgently needed.

Neurofilament

An up-and-coming biomarker for MND is phosphorylated neurofilament (pNf), which shows an increased level in MND, especially in ALS. Determination of pNf in CSF is helpful for the diagnosis of MND and its differentiation from MND mimics such as polyneuropathy, myopathy and sporadic inclusion body myositis. The pNf parameters relevant for diagnostics are the heavy (pNf-H) and light (pNf-L) subunits. A study on patients with MND, MND mimics or non-MND revealed that pNf-H and pNf-L are able to differentiate MND from AD, whereas the markers T-tau and P-tau were not able to achieve this distinction. The pNf parameter is currently at the gateway between research and routine diagnostics and will soon be added to diagnostic guidelines for MND. It is anticipated that routine determination of pNf will enable faster diagnosis of MND and may also prove useful for prognosis.

Antigen determination by ELISA

Protein biomarkers such as beta amyloid 1-42, beta amyloid 1-40, T-tau or P-tau, BACE-1, neurogranin (truncated p75 form) and pNf-H can be determined in patient CSF using a panel of aligned ELISAs, which were developed by EUROIMMUN AG in collaboration with ADx NeuroSciences. The ELISAs are based on a sandwich principle, whereby the respective analyte is bound by a highly specific capture antibody and then detected using a labelled secondary antibody. This matrix-independent approach ensures high consistency in results. The protocols for the different ELISAs are aligned and require only 4 hours, so analyses can easily be completed within one working day. Lyophilised calibrators and controls provide convenient test performance, high precision and clinical accuracy. The ELISAs are automatable, allowing CSF diagnostics to be easily integrated into the automated routine operations of a diagnostic laboratory.

Genotyping by DNA microarray

Genotyping in AD can be carried out using DNA microarrays such as the EUROArray APOE Direct. This assay provides fast and simple determination of the three APOE gene variants ɛ2, ɛ3 and ɛ4 in a single test with fully automated data evaluation. The procedure utilises whole blood samples, eliminating the need for time-consuming and costly DNA isolation, while integrated controls ensure reliability of results. Thus, molecular genetic analysis can be easily incorporated into laboratory diagnostics for AD.

Perspectives

The global burden of neurodegenerative diseases is increasing continuously, particularly in countries with aging populations. The number of cases of AD worldwide is expected to increase from the current 44 million to 66 million in 2030 and 115 million in 2050. Similarly, cases of Parkinson’s disease will grow from 11 million cases today to 35 million by 2050. Due to the devastating nature of these diseases, early diagnosis is critical to enable therapeutic intervention and organisation of adequate care. Alongside classical CSF biomarkers for AD, novel markers for synapse pathology are set to enhance diagnosis and prognosis. Biomarkers for Parkinson’s disease (e.g. alpha-synuclein) are also on the horizon. As neurodegeneration remains at the forefront of cutting-edge research, the portfolio of biomarkers for neurodegeneration will continue to grow, enriching the diagnosis of AD, MND and Parkinson’s disease, as well as further conditions such as traumatic brain injury, depression and stroke.

Clinical Utility of LDL Particle Number to Optimise Management of LDL-Related Cardiovascular Risk



Managing low-density lipoprotein (LDL) cholesterol is an integral part of clinical practice. Recently, recommendations have shied away from targeting specific LDL levels, emphasising instead the use of therapies shown to reduce atherosclerotic cardiovascular disease (ASCVD) events. As a result, moderate- and high-dose statin therapy is now emphasised for use in several defined patient groups. What remains controversial is how physicians should evaluate individual LDL response to statin therapy and whether LDL-guided adjustments in treatment can lead to further reduction in ASCVD events.

This article reviews:

  • Data demonstrating a more reliable measure of LDL quantity—LDL particle number (LDL-P)—that can identify statin-treated individuals with continued LDL-related ASCVD risk and guide therapy adjustment likely to result in additional reduction in ASCVD events
  • Expert society recommendations endorsing LDL-P measures for management of high-risk populations
  • Outcome data demonstrating that attainment of low LDL-P, versus a low measure of cholesterol (LDL-C), in the management of high-risk populations resulted in a number needed to treat (NNT) of 23
  • An integrated, step-wise model used to put these data and recommendations into practice.

LDL MANAGEMENT IN CLINICAL PRACTICE: CURRENT STATE OF AFFAIRS

The causal role of LDL particles in the development and progression of ASCVD is well known. LDL particles move into the arterial wall via a gradient-driven process—the greater the circulating concentration of LDL particles, the greater the rate of movement into the arterial wall. Once inside the intima, LDL particles that bind to arterial wall proteoglycans are retained, oxidised, and subsequently taken up by macrophages to form foam cells. The greater the circulating levels of LDL over time, the greater the acceleration of this process and the higher the risk for ASCVD events.

Due to its ubiquitous accessibility, LDL-C (the measurement of cholesterol carried in LDL particles) has become the customary measure used to estimate LDL quantity in clinical practice. However, there are 2 widely available, cost-effective, FDA-cleared measures of LDL quantity that do not rely on cholesterol carried in LDL particles: (1) LDL-P by nuclear magnetic resonance (NMR)—a direct measurement of the LDL particle number (NMR LDL-P), and (2) apolipoprotein B (Apo B)—an estimate of the LDL-P. Neither method relies on the variable cholesterol content in LDL particles.

Alongside the accepted use of LDL-C measurement in clinical practice stands the prescribed treatment of patients with HMG-CoA reductase inhibitor (statin) therapy and the resulting improvement in ASCVD events. As a consequence of the decreased de novo cholesterol synthesis caused by statins, LDL receptors are up-regulated, resulting in increased clearance of circulating LDL particles. A meta-analysis of statin intervention trials demonstrates that, at a population level, the greater the LDL reduction, the greater the reduction in ASCVD risk among statin-treated groups.

Over the past 3 decades successive national and international guidelines have advocated strategies to lower LDL levels. Historically, the guiding principle adopted by groups such as the National Cholesterol Education Program (NCEP) Adult Treatment Panel (ATP) has been to link LDL-C goals with ASCVD risk; the higher the patient’s ASCVD risk, the lower the LDL-C goals advocated to mitigate that risk.

In 2013 the American College of Cardiology (ACC) and American Heart Association (AHA) jointly issued the ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Disease in Adults which advocated a different approach to managing LDL-related ASCVD risk. In contrast to prior guidelines that focused on attaining discrete LDL targets, the ACC/AHA guideline focused on randomised controlled trial (RCT) data to determine treatment strategies most consistent with cardiovascular outcome improvement. From this perspective, initiation of moderate- or high-dose statin therapy was advocated for defined groups of patients that demonstrated significant outcome improvement following statin therapy.

Important limitations may exist for individual care if management is viewed as complete following the initiation of statin therapy. Although RCT data allow therapies to be prioritised by virtue of proven benefit observed in treated populations, individual response is variable. Some patients receiving statins experience fewer ASCVD events while others fail to benefit despite treatment. Optimising individual care requires the ability to identify statin-treated patients who continue to harbor increased ASCVD risk, as well as identify the incremental reduction in ASCVD risk following adjustment of therapy.

The degree to which on-treatment LDL measures can guide adjustment in care leading to improved ASCVD outcomes has become controversial. Because RCTs were not designed to assess optimal LDL levels associated with ASCVD risk reduction, the 2013 ACC/AHA guideline made no recommendations regarding LDL levels as lipid-lowering treatment targets. Although silent on LDL targets, the ACC/AHA guideline did advise measuring on-treatment LDL values to assess adherence, judge individual response to therapy, and serve as part of a conversation between physician and patient regarding further adjustments in care, including statin therapy intensification or statin combination therapy.

In contrast to the ACC/AHA position, numerous guidelines and expert panel recommendations endorse various on-treatment LDL targets (LDL-C, non-high-density lipoprotein cholesterol [HDL-C], or LDL-P measures) to adjudicate individual response and guide therapy adjustment. We believe these varying recommendations represent a two-step approach to patient care. First, based on RCT data, physicians should use outcome-proven therapy in groups with established benefit. Second, LDL-P should be used to evaluate individual response to therapy, guide therapy adjustment, and optimise opportunities for outcome improvement.

LDL MEASUREMENTS: LDL-C VERSUS LDL-P

LDL-C has been used for decades to estimate circulating LDL concentration. However, the cholesterol content of LDL varies widely among individuals and is often dependent on existing metabolic conditions (eg, insulin resistance, metabolic syndrome, type 2 diabetes mellitus), as well as the presence of lipid-altering medications.

Due to varying amounts of cholesterol carried in LDL, frequent disagreement (discordance) is noted between measures of cholesterol (LDL-C) and measures of particle number (NMR LDL-P, Apo B). In the Quebec Cardiovascular Study—a community study of 2103 men ages 45 to 76 years without ischemic heart disease—51% of subjects showed discordance (> ± 10% difference) between Apo B and LDL-C. Similarly, in the Multi-Ethnic Study of Atherosclerosis (MESA), discordance (> ± 12% difference) between LDL-P and LDL-C was noted in 50% of 6814 healthy, ethnically diverse men and women ages 45 to 84 years not on lipid-lowering medication. Additionally, split-sample measurements of LDL-C vs NMR LDL-P obtained from 2355 subjects with type 2 diabetes and LDL-C <100 mg/dL (<20th percentile value) showed only 25% of subjects had concordantly low LDL-P <1000 nmol/L (<20th percentile value).
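
The idea behind discordance analysis can be sketched as follows. The cut-points used here (LDL-C 100 mg/dL, LDL-P 1000 nmol/L) are the roughly 20th-percentile values quoted above; the patient values are hypothetical, and the published studies used formal percentile-based definitions rather than this simplified check.

    # Minimal sketch of LDL-C/LDL-P concordance vs discordance, using the roughly
    # 20th-percentile cut-points quoted in the text; patient values are hypothetical.
    patients = [
        {"id": "A", "ldl_c_mg_dl": 95, "ldl_p_nmol_l": 1450},  # low LDL-C, high LDL-P
        {"id": "B", "ldl_c_mg_dl": 92, "ldl_p_nmol_l": 960},   # concordantly low
    ]

    for p in patients:
        low_c = p["ldl_c_mg_dl"] < 100      # LDL-C below 100 mg/dL
        low_p = p["ldl_p_nmol_l"] < 1000    # LDL-P below 1000 nmol/L
        status = "concordant" if low_c == low_p else "discordant"
        print("patient {}: {}".format(p["id"], status))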

LET OUTCOMES BE OUR GUIDE: RELATIONSHIP OF LDL-P AND ASCVD EVENTS

To determine the potential utility of alternate LDL measures (LDL-P, Apo B) in guiding management of CV risk, outcomes must be evaluated in 2 specific ways. First, differences in CV events associated with the traditional measure (LDL-C) and alternate measures (LDL-P, Apo B) must be determined when these measures are discordant. When traditional and alternate LDL measures are discordant, CV events track with measures of LDL-P rather than LDL-C. When LDL-C and LDL-P measures agree (are concordant), CV outcomes are similarly associated with each measure. Since populations are composed of patients for whom alternate LDL measures may or may not be discordant, it is now understood that discordance analysis is essential to assess meaningful outcome differences between alternate and traditional measures of an actionable risk factor.

Second, improvement in CV events should be evaluated in patients who are managed to similarly low values of the alternate measure (LDL-P) vs those of the traditional measure (LDL-C). Toth et al analysed data from the HealthCore Integrated Research Database to assess the impact of attaining low LDL-P vs low LDL-C on incident CV events among individuals at high ASCVD risk (eg, established coronary heart disease, stroke, transient ischemic attack, peripheral arterial disease, diabetes mellitus). In response to more intensive therapy (eg, higher potency statin, greater use of statin combinations with ezetimibe, colesevelam, niacin), patients achieving LDL-P <1000 nmol/L (mean 860 nmol/L) during the course of their normal medical care experienced a significant 22% to 25% reduction in risk of CV events (eg, myocardial infarction, revascularization, angina, stroke) vs patients managed to LDL-C <100 mg/dL (mean 79 mg/dL) at 12, 24, and 36 months of follow up. Importantly, due to significant CVD event reduction at each time point, only 23 individuals needed to be treated to LDL-P <1000 nmol/L to prevent one CVD event at 36 months of follow up compared to patients attaining a mean LDL-C of 79 mg/dL (70% on statin therapy).
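
By definition, the number needed to treat is the reciprocal of the absolute risk reduction, NNT = 1 / ARR; an NNT of 23 at 36 months therefore corresponds to an absolute risk reduction of roughly 1/23, i.e. about 4 fewer CVD events per 100 high-risk patients managed to LDL-P <1000 nmol/L rather than to LDL-C <100 mg/dL over that period.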

MAKING SENSE OF DIFFERENT TREATMENT RECOMMENDATIONS: IS USE OF LDL-P CONSISTENT WITH CURRENT GUIDELINES?

A variety of approaches are advocated by various guidelines and expert panels to evaluate individual response to therapy following initiation of statin therapy. Because the RCTs which served as a basis for the 2013 ACC/AHA cholesterol treatment guideline did not incorporate an assessment of optimal LDL-C levels associated with ASCVD risk reduction, the guideline made no recommendations regarding LDL-C targets for lipid-lowering therapy. Additionally, ACC/AHA resource limitations precluded the review of Apo B and other lipid or lipoprotein measures for guiding lipid therapy.

In contrast, the American Association of Clinical Endocrinologists (AACE), the National Lipid Association (NLA), the American Diabetes Association (ADA) in conjunction with the American College of Cardiology (ACC), and the American Association for Clinical Chemistry (AACC) have endorsed the use of LDL-P to evaluate individual LDL response and guide adjustment of therapy in high-risk patients with acceptable LDL-C and non-HDL-C values. A summary of expert society recommendations is shown in TABLE 1.

PUTTING IT TOGETHER: INTEGRATION OF LDL-P IN CLINICAL PRACTICE

In an effort to harmonise the aforementioned outcome data, guidelines, and expert recommendations, we developed an algorithm utilised at Scripps Green Hospital and at the Lipoprotein and Metabolic Disorders Institute (TABLE 2).

STEP 1: Assess ASCVD risk (10-year and lifetime)

ASCVD risk status can be established by clinical history of ASCVD, presence of subclinical ASCVD, presence of comorbid conditions with high ASCVD risk (eg, stage III-IV chronic kidney disease, type 1 or type 2 diabetes with known ASCVD or the presence of >1 major risk factor, metabolic syndrome, organ transplant, coronary calcium score >300, abdominal aortic aneurysm), LDL-C levels >190 mg/dL (ie, >95th percentile), or use of validated 10-year or lifetime ASCVD risk calculators. Given these multiple points of reference, we assign a patient’s risk category based on the highest risk level identified by any of these approaches (TABLE 2).

TABLE 1. Recommendations for Using LDL-P Measures as Targets of Therapy

STEP 2: Institute appropriate course of therapy

After evaluating secondary causes of dyslipoproteinemia (eg, hypothyroidism, diabetes mellitus, kidney disease, medications), initial therapy consists of therapeutic life-style management and treatment of comorbid conditions identified. As outlined in the 2013 ACC/AHA cholesterol treatment guideline, use of moderate- or high-dose statins is preferred as initial therapy for patients identified as candidates for therapy. These agents include:

  • Moderate Dose Statins: atorvastatin 10 to 20 mg, fluvastatin 80 mg, lovastatin 40 mg, pitavastatin 2 to 4 mg, pravastatin 40 mg, rosuvastatin 5 to 10 mg, simvastatin 20 to 40 mg

TABLE 2. Management of LDL to Reduce ASCVD

TABLE 3. Effect of Lipid-Lowering Therapies on LDL-P
 
  • High Dose Statins: atorvastatin 40 to 80 mg, rosuvastatin 20 to 40 mg

If patients are statin-intolerant, alternative therapy may include ezetimibe, bile acid resins, or niacin, based on clinical judgment. For patients with triglyceride (TG) levels >500 mg/dL, clinical judgment should be used to consider marine omega-3, fibrates or niacin as initial therapy.

STEP 3: Assess LDL-P response with an outcome-proven measure of LDL-P 12 weeks after starting therapy

To evaluate individual response to statin therapy, a measure of LDL-P should be performed approximately 12 weeks after treatment initiation or adjustment. While both NMR LDL-P and Apo B are supported, outcome-proven measures of LDL-P, it should be noted that better assay precision has been documented for NMR LDL-P than for Apo B. Additionally, available data demonstrate NMR LDL-P is significantly more predictive of ASCVD risk than Apo B in instances where these 2 measures report differences in outcome association. Target values for LDL-P are listed for high- and moderate-risk patients. Based on clinical judgment, physicians may feel more intensive therapy is needed, especially for patients with progressive ASCVD. Factors independently predictive of increased residual ASCVD risk despite statin therapy include high-sensitivity C-reactive protein (hs-CRP), lipoprotein (a) (Lp[a]), and NMR HDL particle number (HDL-P). When clinical or laboratory factors indicate increased residual risk, physicians should use clinical judgment in determining the value of more intensive LDL-P-lowering therapy.

STEP 4: Use clinical judgment and adjust therapy as indicated (eg, suboptimal LDL-P response)

If the patient is above target for LDL-P, use clinical judgment to modify therapy to further lower LDL-P. Options include increased efforts at therapeutic lifestyle changes (weight loss and dietary modification), statin therapy intensification, and/or the addition of combination LDL-P-lowering agents to statins (eg, ezetimibe, colesevelam, niacin). The effect of various lipid-lowering agents on LDL-P is shown in TABLE 3. LDL particle excess is more frequently encountered among patients with type 2 diabetes and when one or more criteria for metabolic syndrome are present (eg, increased waist circumference, elevated blood sugar, elevated blood pressure, elevated TG, low HDL-C). Population data show that the greater the number of metabolic syndrome criteria present, the greater the increase in LDL-P.

STEP 5: Assess response with an outcome-proven measure of LDL-P and modify therapy as needed to achieve LDL-P goal

If therapeutic adjustments are made, LDL-P response should be followed (tested) approximately 12 weeks after change of therapy and annually thereafter once the patient has achieved the desired LDL-P response.
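
The overall flow of steps 1 to 5 can be summarised in a short decision sketch. This is only an illustration of the logic described above, not the TABLE 2 algorithm itself: the high-risk LDL-P target of <1000 nmol/L is taken from the outcome data discussed earlier, while the moderate-risk target and the risk categories are placeholders.

    # Illustrative sketch of the step-wise LDL-P management flow described above.
    # Targets: the high-risk value reflects the <1000 nmol/L threshold discussed in
    # the outcome data; the moderate-risk value is a placeholder assumption.
    LDL_P_TARGET_NMOL_L = {"high": 1000, "moderate": 1200}

    def next_action(risk_category, on_treatment_ldl_p_nmol_l):
        """Suggest the next step roughly 12 weeks after starting or adjusting therapy."""
        target = LDL_P_TARGET_NMOL_L[risk_category]
        if on_treatment_ldl_p_nmol_l < target:
            return "at target: re-test LDL-P annually"
        return ("above target: reinforce lifestyle changes, intensify statin therapy "
                "and/or add combination LDL-P-lowering agents; re-test in ~12 weeks")

    print(next_action("high", 1350))   # above target -> adjust therapy
    print(next_action("high", 860))    # at target -> annual follow-up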

CONCLUSION

In light of outcome data and recommendations discussed above, measures of LDL particle number (Apo B, NMR LDL-P) occupy a unique position among CV biomarkers. These measures serve as analytic improvements in quantifying LDL—a causal risk factor for development and promotion of atherosclerosis. This is particularly important for patients with type 2 diabetes mellitus, metabolic syndrome, CVD risk-equivalents, and those on statins: patients in whom there is frequent discordance between measures of cholesterol (LDL-C) and particle number (Apo B, NMR LDL-P). In a discordant setting, CV risk tracks with particle number (Apo B, NMR LDL-P) rather than cholesterol (LDL-C). Moreover, LDL-P is independently predictive of CV events following adjustment for confounding factors and allows clinicians to better judge response to statin therapy. The impact of these factors is evident in data that demonstrated an NNT of 23 for high-risk individuals managed to low LDL-P as part of their usual care vs those managed to similarly low LDL-C on statin therapy. The algorithm previously outlined can assist physicians in using LDL-P measures to identify high-risk patients with persistently high LDL in the presence of acceptable levels of LDL-C or non-HDL-C and adjust therapy to achieve LDL-P levels likely to result in further ASCVD risk reduction. 

References:

References available on request.