MSUToday
Published: Feb. 12, 2020

Big data targets deadly liver cancer

Contact(s): Geri Kelley, College of Human Medicine, office: (616) 233-1678, cell: (616) 350-7976, kelleyg3@msu.edu; Bin Chen, College of Human Medicine, office: (616) 234-2819, chenbi12@msu.edu; Kim Ward, Communication and Brand Strategy, office: (517) 432-0117, cell: (734) 658-4250, kim.ward@cabs.msu.edu

Better treatments for a lethal form of liver cancer already might exist. The challenge is in finding the best treatment for each of the disease’s many variations. 

Highly advanced computer programs could sort through massive amounts of data and match the genetic and molecular characteristics of each patient’s liver cancer with the most effective treatment among thousands of compounds, suggested a team of researchers led by Bin Chen, an assistant professor in the College of Human Medicine’s Departments of Pediatrics and Human Development, and Pharmacology and Toxicology.

Their paper, published in the journal “Nature Reviews Gastroenterology &amp; Hepatology,” suggests that artificial intelligence and big genomic data could help find the most effective treatment to target the unique characteristics of each patient’s cancer.

Since that study was published, Chen received an additional award from the National Institutes of Health to study the feasibility of creating a database of drugs that target specific biomarkers – such as mutations, gene expression and proteins – that are genetic and molecular characteristics of diseases. Much of that information already exists, but it is scattered and locked away in different places, including FDA labels, clinical trial descriptions and publications.

If Chen’s team is able to use liver and breast cancers to create a prototype of a biomarker database, it could receive additional funding to carry its research further.
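The biomarker–drug database Chen describes can be pictured, in miniature, as a simple lookup structure. The sketch below is purely illustrative, not drawn from the actual project: every drug and biomarker name is a made-up placeholder, and the only assumption is that each drug record lists the biomarkers it targets.

```python
# Illustrative sketch of a biomarker-to-drug lookup, as described in the
# article. All drug and biomarker names are made-up placeholders.
from collections import defaultdict

# Each hypothetical drug record lists the biomarkers it targets:
# mutations, gene-expression changes or proteins.
DRUG_RECORDS = {
    "drug_A": {"mutation_X", "protein_P1"},
    "drug_B": {"gene_expr_G7"},
    "drug_C": {"mutation_X", "gene_expr_G7", "protein_P2"},
}

def build_index(records):
    """Invert the records so each biomarker maps to the drugs targeting it."""
    index = defaultdict(set)
    for drug, biomarkers in records.items():
        for marker in biomarkers:
            index[marker].add(drug)
    return index

def drugs_for(index, patient_markers):
    """Return drugs targeting at least one of the patient's biomarkers."""
    hits = set()
    for marker in patient_markers:
        hits |= index.get(marker, set())
    return sorted(hits)

index = build_index(DRUG_RECORDS)
print(drugs_for(index, {"mutation_X", "gene_expr_G7"}))
```

The real database would of course be far larger and messier — the article notes the source information is scattered across FDA labels, trial descriptions and publications — but the inverted-index idea is the same.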

“In precision medicine, we need to better define each disease,” Chen said. “They’re not all the same. We need to match each disease or its subtype to the specific drug. That’s the goal of this grant.”

The researchers already have spent two years reviewing publications and analyzing data about a particularly deadly form of liver cancer called hepatocellular carcinoma and thousands of drugs that already exist.

“From our survey, we found that the two paths (of research) progressed in parallel but never intersected,” Chen said. “On one path researchers generated large amounts of genomic data, while, on another path, clinicians designed clinical trials barely based on genomic data. How can we fill that gap? That’s the challenge we’re facing now.” 

Worldwide, 854,000 people were diagnosed with liver cancer in 2015, and 810,000 died. The American Cancer Society estimated that 42,030 people in the U.S. would be diagnosed with liver cancer in 2019 and 31,780 would die from the disease. While the incidence of and deaths from most forms of cancer have declined in recent decades, liver cancer is among the fastest rising causes of cancer deaths in this country.

For nearly a decade, only one drug was approved for treating hepatocellular carcinoma, although some promising new ones have shown efficacy in clinical trials. For clinicians, the challenge is in figuring out which drug will work for each patient.

“Liver cancer is a very complicated disease,” Chen said.

The mutations and molecular characteristics involved in hepatocellular carcinoma and other forms of cancer can vary from patient to patient, suggesting that an effective treatment for one may be ineffective for another. An emerging form of artificial intelligence called “deep learning,” which mimics how the human brain works, might help analyze the biomarkers of existing drugs and identify the drugs likely to be most effective in treating each patient.
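The core idea of matching a patient’s molecular profile against many candidate drugs can be illustrated with a toy scoring scheme. The sketch below is not Chen’s model — deep learning needs the large datasets and computing power he describes — and every profile in it is a made-up placeholder; it simply ranks hypothetical drugs by how much their target profile overlaps a patient’s biomarkers.

```python
# Toy illustration of ranking drugs by biomarker overlap. This stands in
# for the far more complex deep-learning models described in the article;
# every name and profile here is a made-up placeholder.

def overlap_score(patient_markers, drug_targets):
    """Jaccard similarity between patient biomarkers and drug targets."""
    patient, targets = set(patient_markers), set(drug_targets)
    union = patient | targets
    return len(patient & targets) / len(union) if union else 0.0

def rank_drugs(patient_markers, drug_profiles):
    """Return (drug, score) pairs sorted from best to worst match."""
    scores = {drug: overlap_score(patient_markers, targets)
              for drug, targets in drug_profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

profiles = {
    "drug_A": {"mutation_X", "protein_P1"},
    "drug_B": {"gene_expr_G7", "mutation_X"},
    "drug_C": {"protein_P2"},
}
ranking = rank_drugs({"mutation_X", "gene_expr_G7"}, profiles)
print(ranking)  # drug_B matches both markers and ranks first
```

A deep-learning model would replace this hand-written similarity with representations learned from genomic and drug-response data, but the input–output shape — a patient profile in, a ranked list of candidates out — is the same.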

More research is needed to find those biomarkers, Chen said, which is the goal of his current study.

“We have to develop the biomarkers to predict whether a patient will respond to a drug, rather than treat them all equally,” he said.

“I would say we showed the great potential of using computers to identify therapeutic candidates,” Chen said. “To demonstrate the power of deep learning, we need a huge amount of data and we need huge computing power. That’s the future.”

But he added a note of caution: “I would say we’re not there yet.”

Research reported in this publication was supported by the National Institute of Environmental Health Sciences of the National Institutes of Health under Award Number K01ES028047 and the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number R21TR001743 (PI Chen). The next phase of the project is supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number OT2TR003426. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Pictured: Bin Chen, an assistant professor in the College of Human Medicine’s Departments of Pediatrics and Human Development, and Pharmacology and Toxicology.