Parkinson’s disease is a neurological disorder that affects a half million people in the United States, with about 50,000 newly diagnosed cases each year. There is no cure and, until now, no reliable method for detecting the disease. But an MSU research team has developed an innovative detection method that is a major breakthrough in diagnosing Parkinson’s in its early stages—the point at which treatment to control symptoms is most effective.
Parkinson’s, a disorder of the nervous system that affects movement, occurs when nerve cells in the brain stop producing the chemical dopamine, which helps control muscle movement. Without dopamine, nerve cells can’t properly send messages, causing the loss of muscle function.
The detection method, developed in part by Rahul Shrivastav, professor and chair of MSU’s Department of Communicative Sciences and Disorders, involves monitoring a patient’s speech patterns, specifically the movement patterns of the tongue and jaw. Shrivastav says Parkinson’s affects every patient’s speech, and that these changes in speech patterns are detectable before the disease affects other muscles and movements.
The new early detection method has proved to be more than 90 percent accurate, and it is noninvasive and inexpensive. Because it requires as little as two seconds of speech, monitoring can be done remotely, including in telemedicine applications. The method also has the potential to track the progression of Parkinson’s and to measure the effectiveness of treatment.
Researcher profile: Rahul Shrivastav
Shaky hands, tremors, rigid muscles, slower movements—these are all recognizable symptoms of Parkinson’s disease, a neurological disorder affecting a half million people in the United States.
Rahul Shrivastav, professor and chair of MSU’s Department of Communicative Sciences and Disorders, is working on a way to detect the disease earlier in patients by measuring a lesser-known symptom: a change in speech.
“I was looking at speech changes and Parkinson’s disease about five years ago as part of another team,” says Shrivastav. “That sort of evolved into this new project where my team and I said, ‘If we can measure small changes in speech, what could we use that measurement for?’ One of the things that came out was we could use it to detect the onset of the disease.”
There is no cure for Parkinson’s disease, so early detection is particularly important: the treatments currently available for controlling symptoms are most effective in the disease’s early stages.
“It’s a pretty aggressive disease,” says Shrivastav. “It starts off gradually but has a very big impact on people’s lives eventually. There’s no formal diagnostic method. It’s mostly subjective. Speech is one of the things we know very well changes. In fact, there is enough data out there to know it’s one of the first things to change in a lot of people.”
Shrivastav hopes that by designing tools to capture those changes, which are too small and subtle to hear, neurologists and other health care providers will have a way to make a diagnostic decision that isn’t possible otherwise.
Those same tools might have applications in diagnosing and treating other diseases as well.
“Our goal is to be able to come up with ways to essentially find a fingerprint in the speech sample for a variety of different diseases,” says Shrivastav. “Parkinson’s is just the one that we are probably farthest along with. We’ve started looking at other conditions as well. They’re all things that affect part of the brain that impact speech.”
None of Shrivastav’s work is possible without collaboration between experts in different areas.
“Mark Skowronski is working with us. He’s an engineer,” Shrivastav says. “He does a lot of the signal processing side for us. We have people who are really engaged in the speech and speech disorders side. We have the neurologists who know the disease and the treatment and how it impacts patients. It’s all teamwork. It’s impossible for any one person to do this. It’s so enmeshed, it’s impossible to say that this came out of any one academic area.”
Shrivastav, who has been at MSU for a year, is hopeful that the work the team is doing will lead to invaluable tools that will change the diagnosis and treatment of this debilitating disease.
“With the aging population, the statistic is about 50,000 new diagnoses every year,” Shrivastav says. “We hope that we can really come up with a low-cost, highly sensitive method that could be used by as many people as possible to improve health care and the quality of life.”
Field note: An ear for diagnosing Parkinson’s disease
Mark Skowronski is an assistant professor in the Department of Communicative Sciences and Disorders in MSU’s College of Communication Arts and Sciences.
Sitting with Jay Rosenbek, a clinical expert in neurological abnormalities of speech and language from the University of Florida, and listening to recordings of disordered speech, I wondered what would come of our efforts to detect signs of Parkinson’s disease (PD) in speech.
While characteristics of Parkinson’s speech have been known since the 1960s, few studies have tried to differentiate it from normal speech experimentally, and none have used anything but sustained vowels—the ubiquitous “ahhhhh” that doctors prefer—for speech material.
We were listening to sentences and longer passages being read, and Jay, drawing from more than 40 years of experience in the field, was dissecting the material like a surgeon. For my part of our collaboration, as an electrical engineer with expertise in signal processing, I was weighing all of Jay’s observations against my own thoughts on how exactly we were going to implement any of his ideas in a robust and automated algorithm.
Previously, I had collaborated with a range of non-engineer experts, from OB-GYN doctors to bat biologists to audiologists and speech pathologists, and those experiences shaped the philosophy that it is wise to let the experts in the field establish the foundation for signal processing development. After several back-and-forth sessions and a few mini experiments, a strategy emerged.
Parkinson’s disease affects speech by disturbing muscle planning and control, interfering with the delicate coordination of the diaphragm, the larynx, and the articulators: the tongue, jaw, lips, and velum. While other studies focused on the “ahhhhh” produced at the larynx, we focused on the articulators, partly because their imprecise movements were readily audible, according to Jay, and partly because of our experience capturing articulation for automatic speech recognition.
Our articulation features, called the cepstrum—a play on the word “spectrum”—were born of a mathematician’s desire to convert complicated mathematical operations into addition and subtraction, and they have stood the test of time as automatic speech recognition features for more than 30 years. The cepstrum elegantly separates the contributions of the larynx and the articulators to speech, providing a quantified snapshot of articulator position and movement at any moment of speech production.
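To make that idea concrete, here is a minimal sketch of a real-cepstrum computation in Python with NumPy. It illustrates the general technique rather than our actual analysis code; the Hann window, the small constant guarding the logarithm, the 16 kHz frame in the example, and the choice to keep 13 low-order coefficients are all assumptions made for the sketch.

```python
import numpy as np

def real_cepstrum(frame: np.ndarray, n_coeffs: int = 13) -> np.ndarray:
    """Compute low-order real cepstral coefficients for one speech frame.

    The cepstrum is the inverse Fourier transform of the log magnitude
    spectrum. Taking the logarithm turns the multiplicative source-filter
    relationship (larynx excitation x articulator shaping) into an
    additive one, so the slowly varying articulator contribution
    concentrates in the low-order coefficients.
    """
    windowed = frame * np.hanning(len(frame))  # taper the frame edges
    log_magnitude = np.log(np.abs(np.fft.rfft(windowed)) + 1e-10)  # avoid log(0)
    cepstrum = np.fft.irfft(log_magnitude)
    return cepstrum[:n_coeffs]  # low-order terms ~ articulator configuration

# Illustrative use: one 25 ms frame (400 samples at 16 kHz) of stand-in audio.
frame = np.random.default_rng(0).normal(size=400)
coeffs = real_cepstrum(frame)  # 13 numbers summarizing articulator posture
```

Computed frame by frame across an utterance, coefficients like these yield the snapshots of articulator position described above.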
Finally, we converted the cepstrum snapshots into measures of variance of articulator position and movement over time (so-called range and rate measures). In a cross-validation experiment, our range and rate measures distinguished between PD and normal talkers with an accuracy of 93 percent.
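As a rough sketch of what such range and rate measures might look like in code (my illustrative assumptions, not our published definitions): take the per-coefficient variance over time as the range measure, and the variance of the frame-to-frame differences as the rate measure.

```python
import numpy as np

def range_rate_features(cepstra: np.ndarray) -> np.ndarray:
    """Reduce a (frames x coefficients) cepstral sequence to one vector.

    Range: variance of each coefficient over time, a proxy for the
    spread of articulator positions. Rate: variance of the frame-to-frame
    differences, a proxy for the spread of articulator movement speeds.
    """
    position_variance = np.var(cepstra, axis=0)                   # range
    velocity_variance = np.var(np.diff(cepstra, axis=0), axis=0)  # rate
    return np.concatenate([position_variance, velocity_variance])

# Illustrative use: 200 frames of 13 coefficients from one utterance.
cepstra = np.random.default_rng(0).normal(size=(200, 13))
features = range_rate_features(cepstra)  # a 26-number utterance fingerprint
```

In the cross-validation experiment, each talker is represented by a feature vector of this kind and held-out talkers are classified against the rest; the 93 percent figure comes from our experiment on real speech, not from this sketch.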
The results are exciting, in part because they validate our expert-driven design philosophy and in part because they demonstrate that the effects of neurological disorders may be quantified noninvasively using robust speech measures. One day, we may be using a telephone application to monitor our long-term health through our speech, providing early detection of health changes and leading to earlier treatment.