National Cancer Moonshot Shines Light on Challenges of Data Sharing

July 13, 2016 Jeff Kelly

 

In his State of the Union address in January, President Obama announced he was putting Vice President Biden in charge of a new “moonshot” initiative to finally eradicate the scourge of cancer, which kills over half a million Americans every year. It’s an ambitious goal, to be sure, but one the President and much of the research community believe is possible.

“For the loved ones we’ve all lost, for the families that we can still save, let’s make America the country that cures cancer once and for all,” the President said.

The National Cancer Moonshot (NCM), as the initiative is formally known, has identified data sharing among researchers and data scientists as one area in need of improvement in the search for a cure. According to a press release from the NCM accompanying the President’s announcement:

“Data sharing can break down barriers between institutions, including those in the public and private sectors, to enable maximum knowledge gained and patients helped. The cancer initiative will encourage data sharing and support the development of new tools to leverage knowledge about genomic abnormalities, as well as the response to treatment and long-term outcomes.”

The NCM is right to zero in on data sharing. Sharing data, not just between clinical organizations but even between departments and clinicians within the same organization, is one of the bigger challenges facing the research community. This is true not just for sweeping initiatives like the NCM, but also for smaller projects at the regional and local levels. Yet sharing data is critical to the process of developing new and effective treatments for cancer and other major diseases.

Consider researchers at the Montreal Neurological Institute and Hospital (MNIH), who recently discovered that a decrease in blood flow to the brain, not an increase in amyloid protein (as previously thought), is the first detectable sign of late-onset Alzheimer’s disease. The study would not have been successful without patient data shared by over 30 healthcare institutions across the US and Canada, according to Dr. Alan Evans, a professor of neurology, neurosurgery and biomedical engineering at MNIH who led the study.

“We have many ways to capture data about the brain, but what are you supposed to do with all this data?” Dr. Evans told Medical Xpress, an online medical and health news service. “Increasingly, neurology is limited by the ability to take all this information together and make sense of it. This creates complex mathematical and statistical challenges but that’s where the future of clinical research in the brain lies.”

Dr. Evans also noted that the clinical data used in his study can be used over and over again for additional studies, hopefully accelerating the timeline to a cure for Alzheimer’s. “That by itself is justification for … data sharing,” Evans said. “What goes around comes around. We benefit from the data put in by others, and we contribute our own data.”

Unfortunately, not all clinical institutions are as successful at sharing data as the ones involved in Dr. Evans’ Alzheimer’s study. Pivotal counts a number of clinical research institutions as customers and has helped them overcome their own data sharing and analytics challenges. Along the way, we’ve identified a number of reasons, not all of them technical, why data sharing in clinical scenarios is so challenging.

  • Privacy and other regulations, such as those stipulated in the Health Insurance Portability and Accountability Act of 1996 (HIPAA), make many research institutions and health systems understandably cautious when it comes to sharing sensitive clinical data. This is particularly true when the data in question could be associated with a specific patient. Data governance and auditing protections need to be applied when sharing this type of data across organizations (see the first sketch following this list).
  • Over the years, clinical institutions and healthcare systems have adopted different databases and operational systems to create and store clinical data. As a result, data from different clinical institutions are often in different, incompatible formats. To make data from third parties useful, clinical institutions often need to perform significant data transformation first, a time-consuming and often manual process (see the second sketch following this list).
  • People who oversee the systems that produce clinical data sometimes develop a sense of “ownership” over the data, believing that they get to decide who can access the data and for what purposes. In some cases, they grow to believe they are the only ones who really understand how to use the data, and shut down access for others, both inside and outside the organization. It is worth pointing out that this challenge is not unique to clinical research and healthcare scenarios.
  • Even when data is shared between institutions and researchers, the tools used by researchers and data scientists often make it difficult to collaborate and share analytics results in forms that are useful for further analysis. Data science is a collaborative discipline, but many analytics tools in use at clinical institutions don’t support a collaborative approach.
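
To make the governance point concrete, here is a minimal Python sketch of the kind of safeguards involved: stripping direct identifiers, replacing the patient ID with a salted hash so the same patient still links across datasets, and writing an audit entry for every share. The field names, salt handling, and `share_record` helper are hypothetical illustrations, not any particular institution’s pipeline.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical salt; in practice this would come from a secrets manager,
# never from source code.
PSEUDONYM_SALT = "replace-with-a-managed-secret"

# Fields that directly identify a patient and must never leave the institution.
DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone"}

def pseudonymize(record: dict) -> dict:
    """Strip direct identifiers and replace the patient ID with a salted hash,
    so records link across datasets without identifying the patient."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256(
        (PSEUDONYM_SALT + record["patient_id"]).encode()
    ).hexdigest()
    cleaned["patient_id"] = token
    return cleaned

def share_record(record: dict, recipient: str, audit_log: list) -> dict:
    """Pseudonymize a record and append an audit entry recording who received it."""
    safe = pseudonymize(record)
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recipient": recipient,
        "record_token": safe["patient_id"],
    })
    return safe

audit_log = []
shared = share_record(
    {"patient_id": "MRN-1001", "name": "Jane Doe", "ssn": "000-00-0000",
     "diagnosis": "C50.9", "outcome": "remission"},
    recipient="partner-institute",
    audit_log=audit_log,
)
print(json.dumps(shared, indent=2))
```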
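
And as a rough illustration of the format problem, the second sketch normalizes records from two hypothetical partner sites, each with its own field names and date conventions, into one shared schema. The schemas and helper functions are invented for illustration; real clinical exports (HL7, FHIR, custom CSVs) are far messier, which is exactly why this step consumes so much time.

```python
from datetime import datetime

# Hypothetical: two partner institutions export the same clinical facts
# under different field names and date formats.
SITE_A_RECORD = {"pid": "A-17", "dob": "03/21/1962", "dx_code": "C50.9"}
SITE_B_RECORD = {"patient_ref": "B-903", "birth_date": "1962-03-21",
                 "diagnosis": "C50.9"}

def from_site_a(rec: dict) -> dict:
    """Map Site A's export (US-style dates, 'pid'/'dx_code') to the shared schema."""
    return {
        "patient_id": rec["pid"],
        "birth_date": datetime.strptime(rec["dob"], "%m/%d/%Y").date().isoformat(),
        "diagnosis_icd10": rec["dx_code"],
    }

def from_site_b(rec: dict) -> dict:
    """Site B already uses ISO dates; only the field names need remapping."""
    return {
        "patient_id": rec["patient_ref"],
        "birth_date": rec["birth_date"],
        "diagnosis_icd10": rec["diagnosis"],
    }

combined = [from_site_a(SITE_A_RECORD), from_site_b(SITE_B_RECORD)]
for row in combined:
    print(row)
```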

Making it easier for clinical institutions and researchers to share and analyze data is clearly an area in need of improvement. The good news is that initiatives like the National Cancer Moonshot are shining a light on the challenge, which will hopefully spur the development of better tools and methods for sharing data in ways that simultaneously safeguard patient privacy.

To learn more about this important topic, join me and Chris Roche, CEO of Aridhia, at 10am ET/7am PT as we discuss how Big Data and data science are being applied to clinical research, including how institutions are overcoming challenges like data sharing. Register here.

 

About the Author

Jeff Kelly

Jeff Kelly is a Principal Product Marketing Manager at Pivotal Software. He spends his time learning and writing about how leading enterprises are tapping the cloud, data and modern application development to transform how the world builds software. Prior to joining Pivotal, Jeff was the lead industry analyst covering Big Data analytics at Wikibon, an open source research and advisory firm. Before that, Jeff covered data warehousing, business analytics and other IT topics as a reporter and editor at TechTarget. He received his B.A. in American studies from Providence College and his M.A. in journalism from Northeastern University.
