01 September 2016
Big data seems to be the answer to everything these days. It's especially attractive as a way to disrupt large, traditional industries that were built before we had the ability to connect people and information. There may be no better example of a large, traditional industry that is ripe for disruption than healthcare. So, lots of experts and companies are working to marshal the tools needed to understand and develop solutions for how big data approaches can improve healthcare -- its delivery, affordability, profitability, and outcomes.
Other traditional industries have been transformed by big data, like finance and transportation. However, healthcare carries with it some unique considerations, chief among them the large number of disparate sources and forms of data that have to be synthesized. There are so many links in the healthcare chain -- physicians, nurses, pharmacists, therapists, insurance companies, hospitals, rehabilitation facilities, just to name a few -- and each of them captures patient data in different formats and via different systems. Creating a holistic view of a patient's case is incredibly challenging. Thankfully, there are companies that are working on this and moving us closer to Precision Healthcare.
It's certainly a challenge worth tackling. One of the primary, and arguably most important, uses of data in healthcare is to accurately prescribe treatment. The more knowledge a clinician has about available treatments and how well they have worked in similar cases, the higher the probability of the patient having an optimal outcome. A key source of knowledge in healthcare is the clinical trial, and the results of these trials are usually published in peer-reviewed journals. While these publications are key to improving healthcare, the way we manage and process this information is complicated. Pre-Internet, much of it went unseen: only a limited number of studies were published in a handful of print publications available to the medical community.
With access to the Internet came an avalanche of data, as well as tools we could use to access it -- open-access journals, online libraries and PubMed. This ability to publish to a much broader audience prompted growth in funding for even more research. So, the number of studies being published grew exponentially, which created a new challenge: it became impossible to sift through it all. Looking at cardiac literature alone, one would have to spend four hours every day for the rest of their life to get through all of the publications on that one topic.
This "explosion" of published studies creates a continuous flow of new findings to keep up with. We are constantly discovering new medicines, new applications of old medicines, new precautions, and different approaches to patient care. Sidney Burwell, former Dean of Harvard Medical School, struck a chord with the medical community when he said, "Half of what is taught in medical school will be wrong in 10 years' time. The problem is we don't know which half." So, the first step was getting access to all of the information. The second was identifying which data would be most relevant to a specific case.
Specificity is important in healthcare. Understanding how a treatment plan will likely work on a certain population is key to its success. What works for a geriatric population will likely have a different effect in a pediatric population. Rural and urban populations may have different treatment options available to them. Ethnicities also carry significant implications for how effective treatments will be. For example, we now understand through pharmacogenomics that genetics dictate how individuals may metabolize certain drugs, which then can inform prescribed doses and affect drug efficacy and toxicity.
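To make the pharmacogenomics point concrete, here is a minimal sketch of how genotype-informed dosing might look in code. The CYP2D6 metabolizer phenotypes are real pharmacogenomic categories, but the adjustment factors and dose values below are invented for illustration -- this is not clinical guidance.

```python
# Hypothetical dose adjustment keyed on metabolizer status. The factors
# here are illustrative placeholders, not validated clinical values.
ADJUSTMENT = {
    "poor":         0.5,   # slower clearance -> lower starting dose
    "intermediate": 0.75,
    "normal":       1.0,
    "ultrarapid":   1.5,   # faster clearance -> a standard dose may be subtherapeutic
}

def starting_dose(standard_dose_mg: float, metabolizer_status: str) -> float:
    """Scale a standard starting dose by a phenotype-specific factor."""
    factor = ADJUSTMENT.get(metabolizer_status, 1.0)
    return standard_dose_mg * factor

print(starting_dose(100.0, "poor"))  # 50.0
```

In practice these factors would come from published pharmacogenomic guidelines and the patient's genomic data, not a hard-coded table.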
The same advances in technology that have revolutionized other traditional industries are now giving us tools that quickly cut through the noise and home in on only the data we need. We can combine and make sense of large, unstructured datasets in ways that were never before possible. Tools such as Hadoop and cloud platforms that let us store, manage and manipulate enormous volumes of data are making this a reality.
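The core idea behind tools like Hadoop is the map/reduce pattern: break unstructured input into pieces, process each piece independently, then merge the partial results. Here is a toy sketch of that pattern over a few made-up clinical notes -- real platforms do the same thing distributed across many machines.

```python
from collections import Counter
from functools import reduce

# Invented free-text notes standing in for unstructured clinical data.
notes = [
    "patient reports chest pain and shortness of breath",
    "no chest pain today; breath sounds clear",
    "follow-up for chest pain",
]

# "Map": tokenize and count terms within each note independently.
mapped = [Counter(note.lower().replace(";", "").split()) for note in notes]

# "Reduce": merge the partial counts into one aggregate view.
totals = reduce(lambda a, b: a + b, mapped, Counter())

print(totals["chest"])  # "chest" appears once in each of the 3 notes -> 3
```

The same split-process-merge structure scales from three strings on a laptop to terabytes of records on a cluster.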
This newfound data management ability has also opened a pathway to electronic medical records (EMRs), as well as patient registries that aggregate large datasets of specific groups of patients. Patient registries can aid in differential therapeutic decision making that leads to more accurate treatments, and we are able to continually refine them as datasets grow. Technology has also made it possible to take the necessary precautions to protect privacy within these registries: the information can be opened to researchers because every record is stripped of protected health information (PHI).
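A simplified sketch of what "stripping PHI" means in practice: drop the direct-identifier fields before a record enters the registry. The field list and record below are invented; real de-identification (for example, the HIPAA Safe Harbor method) covers many more identifiers and often generalizes dates and locations as well.

```python
# Illustrative list of direct identifiers; a real PHI policy is far longer.
PHI_FIELDS = {"name", "address", "phone", "ssn", "date_of_birth"}

def strip_phi(record: dict) -> dict:
    """Return a copy of the record with direct-identifier fields removed."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}

record = {
    "name": "Jane Doe",
    "date_of_birth": "1956-03-14",
    "diagnosis": "osteoarthritis",
    "intervention": "physical therapy",
    "outcome_score": 78,
}
print(strip_phi(record))
# {'diagnosis': 'osteoarthritis', 'intervention': 'physical therapy', 'outcome_score': 78}
```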
I've established one of these registries here at ATI Holdings that focuses on orthopedic cases. Each case goes through an application at ClinicalTrials.gov and, once approved, is added to the Agency for Healthcare Research and Quality's Registry of Patient Registries. Clinicians and researchers then have access to and can benefit from the clinical trials performed by other groups, or they have visibility into outcomes of certain interventions conducted in more "real-world" clinical settings. This also allows research to be leveraged much more broadly than ever before, and lets clinicians and researchers test hypotheses without incurring the time and expense of conducting primary research or doing their own data collection.
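Here is a toy illustration of why a registry lets you test hypotheses without collecting new data: group de-identified cases by intervention and compare average outcomes. Every record and score below is invented, and a real analysis would need proper statistical testing, not just means.

```python
from collections import defaultdict
from statistics import mean

# Invented de-identified registry records.
registry = [
    {"intervention": "physical therapy", "outcome_score": 78},
    {"intervention": "physical therapy", "outcome_score": 84},
    {"intervention": "surgery",          "outcome_score": 90},
    {"intervention": "surgery",          "outcome_score": 70},
]

# Group outcome scores by intervention.
by_intervention = defaultdict(list)
for case in registry:
    by_intervention[case["intervention"]].append(case["outcome_score"])

# Summarize each group -- the kind of question a researcher could answer
# from registry data alone, with no new data collection.
for name, scores in sorted(by_intervention.items()):
    print(f"{name}: mean outcome {mean(scores):.1f} over {len(scores)} cases")
```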
My dream scenario is one where machine learning is employed to allow a healthcare worker to establish a new patient record that includes an integrative, holistic view of the person that blends demographic information, lifestyle factors, comorbidities, genomic information, medication history, procedure history, and activity level. This dataset would then be augmented with information such as payer/insurance benefits, availability of recommended care options, remote monitoring and reporting of patient compliance and activity. An evolving recommended course of treatment would be provided and updated. We'll basically create a bot that can find and sort through all of the relevant data in real-time and at high speed. It would also yield probabilistic success rates based on subsequent inputs/updates.
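The "probabilistic success rates" part of that dream scenario can be sketched with a simple logistic model over a handful of patient features. Everything here is speculative: the features, weights, and intercept are invented for illustration, where a real system would learn them from registry outcomes and far richer inputs.

```python
import math

# Invented feature weights; a real model would be fit to outcomes data.
WEIGHTS = {"age_over_65": -0.4, "comorbidity_count": -0.3, "active_lifestyle": 0.6}
INTERCEPT = 1.0

def success_probability(patient: dict) -> float:
    """Map weighted patient features through a logistic function to [0, 1]."""
    score = INTERCEPT + sum(WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-score))

p = success_probability({"age_over_65": 1, "comorbidity_count": 2, "active_lifestyle": 1})
print(f"{p:.2f}")  # 0.65
```

As new inputs arrive -- a new comorbidity, updated activity data -- the estimate is simply recomputed, which is exactly the "evolving recommended course of treatment" described above.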
This isn't that far away, either. There are already mobile apps available, like Isabel and CrowdMed, which aggregate and sort through data to deliver possible diagnoses. Products like DynaMed can then take the differential diagnostic results from Isabel, for example, and provide treatment recommendations. There are many more ways we will see technology -- and specifically data -- completely alter the way we deliver healthcare, including the quantified self, the Internet of Things (IoT), and even how blockchain technology can be used in EMRs. These are all areas I'll cover in subsequent geek.ly posts.