11 April 2017

Thus far we’ve been considering primarily the treatment-related aspects of healthcare and medicine, but what about preventing the injury, or the need for an intervention, in the first place?

From my daily focus on the analytical side of sports medicine and orthopedic rehabilitation, I am very interested in, and have published on, complex systems and nonlinear relationships in hospital operations and healthcare. With big data sets improving almost daily and becoming more widely available (see Registries below), we can move away from the prior orthodoxy of IFTTT (IF This Then That) causality toward a more sophisticated (and realistic) approach that favors risk pattern recognition over isolated risk factors. Initial work in this area has focused on understanding and preventing sports injuries, but it is conceivably scalable to public health and personalized medicine.

Predictive Analytics

The authors of a paper, Complex Systems Approach for Sports Injuries, note that “Injury prediction is one of the most challenging issues in sports and a key component for injury prevention. Sports injuries aetiology investigations have assumed a reductionist view in which a phenomenon has been simplified into units and analyzed as the sum of its basic parts and causality has been seen in a linear and unidirectional way. This reductionist approach relies on correlation and regression analyses and, despite the vast effort to predict sports injuries, it has been limited in its ability to successfully identify predictive factors. The majority of human health conditions are complex. In this sense, the multifactorial complex nature of sports injuries arises not from the linear interaction between isolated and predictive factors, but from the complex interaction among a web of determinants.”

Other researchers have looked at using predictive analytics to forecast injury likelihood in rugby players based on each player’s training load, “…in order to field the best possible team throughout the season. The analysis (was used to) predict the likelihood of a particular player being injured, which then enabled the coaching team to adapt and modify each player's personalised training program to maximise their training load and minimise their risk of injury.”

Sports Injury Predictor is a patent-pending algorithm that determines the probability of an American football player being injured in a season. It applies machine learning to combined player injury data that includes “every injury that has taken place to skill position players in the NFL and college for the last 10 years. Includes type of injury, games missed, surgery required and more.” It combines that with player age, height, weight, “position, how many times players will touch the ball in a game, number of plays a player is on the field” and then runs an “injury correlation matrix to determine the statistical probability of an injury occurring based on previous injury.”
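
To make this concrete, below is a minimal, purely illustrative sketch of this kind of model: a logistic regression estimating a player’s in-season injury probability from attributes and prior injury history. The actual Sports Injury Predictor algorithm is proprietary and patent pending, so every feature name, value, and data point here is a hypothetical stand-in, not the real model.

```python
# Illustrative sketch only: a simple injury-probability model in the spirit of the
# approach described above. All features and data are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per player-season.
# Columns: age, weight_kg, touches_per_game, snaps_per_game, prior_injuries, games_missed_last_season
X = np.array([
    [24,  98, 18, 55, 0,  0],
    [29, 104, 22, 60, 2,  5],
    [31, 110, 12, 48, 3,  9],
    [22,  95, 25, 62, 1,  2],
    [27, 101, 20, 58, 0,  1],
    [33, 107, 10, 40, 4, 12],
])
y = np.array([0, 1, 1, 0, 0, 1])  # 1 = player was injured during the season

model = LogisticRegression().fit(X, y)

# Estimated injury probability for a new (hypothetical) player profile.
new_player = np.array([[26, 100, 21, 57, 1, 3]])
print(f"Estimated injury probability: {model.predict_proba(new_player)[0, 1]:.2f}")
```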

 

Machine Learning

It was just this past year that one of the most prestigious medical journals, the Journal of the American Medical Association (JAMA), made mention of machine learning. In the somewhat landmark paper, the authors speak to the Internet of Things and the quantified self:

“Global adoption of mobile and wearable technology has added yet another dimension to machine learning, allowing the uploading of large amounts of personal data into learning algorithms. Now, within closed-loop feedback systems, mobile technology (e.g., a smartphone) is not just a biometric device (e.g., measuring blood glucose levels) but ultimately could become a platform from which to deliver tailored interventions based on algorithms that continually optimize for personal information in real time. Available for many years, implantable cardioverter-defibrillators have saved lives by using algorithms to detect ventricular fibrillation and immediately deliver a defibrillating shock to the heart. Now, wearable devices promise to improve diabetes care—a small glucose meter adherent to the upper arm can regularly sample glucose levels, which are then wirelessly fed to the patient’s smartphone to inform the patient and treating physician” (page 551).
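
As a rough illustration of the closed-loop idea described above, here is a minimal sketch in which streamed glucose readings are mapped to alert categories that could be surfaced to the patient and treating physician. The thresholds, names, and data below are hypothetical placeholders, not clinical logic or any device vendor’s actual algorithm.

```python
# Minimal sketch of a closed-loop feedback rule of the kind described above.
# Thresholds and readings are hypothetical stand-ins, not validated clinical logic.
from dataclasses import dataclass

@dataclass
class GlucoseReading:
    mg_dl: float     # blood glucose in mg/dL
    timestamp: str

def classify(reading: GlucoseReading) -> str:
    """Map a reading to an action category using illustrative thresholds."""
    if reading.mg_dl < 70:
        return "alert: possible hypoglycemia"   # notify patient and treating physician
    if reading.mg_dl > 180:
        return "alert: possible hyperglycemia"  # suggest a tailored intervention
    return "in range"

# Simulated stream from a wearable sensor feeding the patient's smartphone.
stream = [GlucoseReading(65, "08:00"), GlucoseReading(110, "12:00"), GlucoseReading(195, "18:00")]
for r in stream:
    print(r.timestamp, r.mg_dl, "->", classify(r))
```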

A more recent article appearing in the New England Journal of Medicine (NEJM) noted three key ways that machine learning will be transformative in medicine. The authors’ point is that the impact will come less from big data per se than from better algorithms; basically, “Dr. Watson” is better than Dr. Kildare. That is because machine learning can process millions of variables and weight their influence in combination with other comorbidities and demographics. Thanks to ever-increasing computational horsepower, “…computers can look for anomalies at the pixel level of radiographs,” for example, something difficult even for a human expert. The three key, disruptive areas noted are listed below (a rough sketch of a prognostic model like the one described in the first item follows the list):

  1. “Establishing a prognosis: Data drawn from electronic health records or claims databases can help refine these models. They say prognostic algorithms will be used within five years, though several more years of data will be needed for validation.
  2. Taking over much of the work of radiologists and anatomical pathologists. They also see algorithms used on streaming data taking over aspects of anesthesiology and critical care within years, not decades.
  3. Improving diagnostic accuracy, suggesting high-value tests, and reducing overuse of testing. This will happen more slowly, they say, because some conditions don’t present clear or binary standards like radiology--malignant or benign--which make it harder to train algorithms (because of the prevalence of unstructured data in EHRs and because each diagnosis would require its own model).”
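
To ground the first item, here is a rough, hypothetical sketch of how a prognostic model might be trained on EHR- or claims-style features. The features, outcome, and data are synthetic and invented for illustration; as the authors note, real prognostic algorithms would need far larger cohorts and several more years of data for validation.

```python
# Rough sketch of a prognostic model trained on EHR/claims-style features.
# All data here are synthetic; this is illustrative, not a validated clinical model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: age, comorbidity count, admissions in prior year, abnormal lab flag
X = np.column_stack([
    rng.integers(40, 90, n),   # age
    rng.integers(0, 6, n),     # number of comorbidities
    rng.integers(0, 4, n),     # hospital admissions in the prior year
    rng.integers(0, 2, n),     # abnormal lab result (0/1)
])
# Synthetic outcome loosely tied to the features (purely illustrative).
risk = 0.02 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 2] + 0.5 * X[:, 3]
y = (risk + rng.normal(0, 1, n) > 3.5).astype(int)  # 1 = poor one-year outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```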

Not everyone shares a rosy enthusiasm for medical applications of machine learning. The authors of the enjoyably titled paper, Voodoo Machine Learning for Clinical Predictions, examined two popular cross-validation methods and found that one approach massively overestimated the prediction accuracy of machine learning algorithms used to support clinical decision making. Furthermore, they found that studies using accelerometers, wearable sensors, or smartphones to predict clinical outcomes tended to use the more error-prone approach to cross-validation.
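
To illustrate the pitfall as I understand it: with repeated measurements per subject, record-wise cross-validation lets records from the same person land in both the training and test folds, which inflates apparent accuracy relative to subject-wise cross-validation. The sketch below contrasts the two on synthetic, wearable-style data; it illustrates the general issue and is not a reproduction of the paper’s analysis.

```python
# Sketch of the cross-validation pitfall: record-wise splits leak subject identity
# across folds, inflating accuracy versus subject-wise splits. Data are synthetic
# stand-ins for repeated wearable-sensor measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(42)
n_subjects, records_per_subject = 30, 20
groups = np.repeat(np.arange(n_subjects), records_per_subject)

# Each subject has a stable sensor "signature"; the label is assigned at random per
# subject, so nothing should generalize across subjects. Any apparent accuracy under
# record-wise CV comes from memorizing which subject a record belongs to.
subject_signature = rng.normal(0, 1, (n_subjects, 5))
y_subject = rng.integers(0, 2, n_subjects)
X = subject_signature[groups] + rng.normal(0, 0.3, (n_subjects * records_per_subject, 5))
y = y_subject[groups]

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Record-wise CV: records from the same subject appear in both train and test folds.
record_wise = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Subject-wise CV: all of a subject's records stay together in a single fold.
subject_wise = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))

print(f"Record-wise CV accuracy:  {record_wise.mean():.2f}")   # optimistic
print(f"Subject-wise CV accuracy: {subject_wise.mean():.2f}")  # near chance here
```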

The sharpies at DeepMind Health (a branch of Alphabet) are working in the UK with the National Health Service, perhaps the largest healthcare system in the world, which treats about a million patients every 36 hours! About 10 percent of people who go to a hospital will experience a medical error or some iatrogenic harm. DeepMind Health aims “…to support clinicians by providing the technical expertise needed to build and scale technologies that help them provide the best possible care to their patients.” They are applying machine learning to medical research and the analysis of medical data with the goal to “improve how illnesses are diagnosed and treated…” and to “…help clinicians to give faster, better treatment to their patients…”

 

Big Data and Patient Registries

While I’m a big fan of evidence-based practice (I even published a book on the topic), there is a problem embedded in the process of publishing in peer-reviewed scientific journals: exclusion criteria. Understandably, a researcher needs to control as well as possible for extraneous causal or contaminating variables, so patients are often excluded from a study because of some condition that might get in the way of properly understanding the thing being studied. So while a study’s findings do add to clinical and scientific knowledge, those findings are often not generalizable (or scalable) to real-world practice in the clinic.

"Dr. Watson" is better than Dr. Kildare. This is due to the fact that ML enables the processing of millions of variables and weight their valence of influence in combination with other comorbidities and demographics.
Share this

What’s to be done…?

Registries to the Rescue!

As I opined in a LinkedIn Influencer post, just as Google scooped the Centers for Disease Control and Prevention on flu prediction a few years ago, it may be universities and practice groups that become the go-to entities for understanding the real-world experiences of heterogeneous populations, in other words, for benchmarks. For example, the University of Michigan plans to invest $100 million in a big data program. The University of Massachusetts Medical School developed the Function and Outcomes Research for Comparative Effectiveness in Total Joint Replacement and Quality Improvement (FORCE-TJR), a data system that guides total joint replacement practice. And Nicklaus Children's Hospital implemented a $67 million health-record system, replacing a paper process that had been used for nearly half a century, and is looking to include the influence of genetic and ethnic variants on treatment outcomes.

As I have mentioned in another geek.ly piece, I've established one of these registries, focused on orthopedic cases, in our company here at ATI Holdings. Each case goes through an application at ClinicalTrials.gov and, once approved, is added to the Agency for Healthcare Research and Quality's Registry of Patient Registries. Clinicians and researchers can then access and benefit from the clinical trials performed by other groups, or gain visibility into the outcomes of particular interventions conducted in more "real-world" clinical settings. This also allows research to be leveraged far more broadly than before and lets clinicians and researchers test hypotheses without incurring the time and expense of conducting primary research or doing their own data collection.
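
As a minimal illustration of the kind of real-world question such a registry makes answerable without new primary data collection, the hypothetical sketch below aggregates outcomes by diagnosis and intervention. The column names and values are invented for illustration and do not reflect the actual registry’s schema or data.

```python
# Hypothetical sketch of querying registry-style outcomes data.
# Column names and values are invented; they do not reflect any real registry.
import pandas as pd

registry = pd.DataFrame({
    "intervention":   ["manual_therapy", "exercise_only", "manual_therapy", "exercise_only", "manual_therapy"],
    "diagnosis":      ["knee_oa", "knee_oa", "low_back_pain", "low_back_pain", "knee_oa"],
    "visits":         [12, 15, 10, 14, 11],
    "outcome_change": [18.0, 14.5, 20.0, 12.0, 16.5],  # change on a functional outcome score
})

# Compare average functional improvement and visit counts by diagnosis and intervention.
summary = registry.groupby(["diagnosis", "intervention"]).agg(
    mean_improvement=("outcome_change", "mean"),
    mean_visits=("visits", "mean"),
    cases=("outcome_change", "size"),
)
print(summary)
```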

While there are continuing issues of interoperability between EMRs/EHRs on the technological side, along with data-privacy concerns vis-à-vis PHI and HIPAA rules and regulations, I nevertheless envision the day when such national patient-outcome registries can be linked and made accessible, with aggregated big data leading to more applicable understandings of various treatment approaches, across various patients, with various presentations and various outcomes.

I believe this potential could scale to third-party payers such as private insurance companies and Medicare. They, in turn, could scale back on case-management needs (and the attendant costs and hassles) and provide more appropriate levels of reimbursement based on more accurate, value-based measures of quality of care, realizing cost savings along the way.

 

 