At the start of my PhD, a highly respected researcher told me that within ten years, we’d have some real treatment options for patients with Alzheimer’s disease (AD). Now, almost six years later, I’m nearing the end of my degree and these treatment options are nowhere in sight. Was this researcher wrong? What is happening? Why is this taking so long?
AD was first described in 1906 by Alois Alzheimer, in a patient named Auguste Deter, who repeated “I have lost myself” during an examination. After her death, Alzheimer described the plaques and tangles found in her brain that became hallmarks of the disease. Only in the eighties, through modern biochemical techniques, was the composition of these plaques and tangles determined: the protein in the plaques was characterized and named beta-amyloid, and the protein making up the tangles was identified as tau. The last – and arguably most important – hallmark of AD is cell death, which is widespread at advanced stages of the disease.
The first drugs used to treat AD were introduced in the nineties and worked by adjusting the levels of chemical messengers in the brain. The relief these drugs offer is minimal and far from a permanent solution. And yet, although this approach has since fallen out of favour, they remain the only treatment options available.
Since then, the field has focused on the “amyloid cascade hypothesis,” based on the discovery that amyloid is toxic to neurons. Several convincing lines of evidence point to excess amyloid production as the cause of AD, and yet no amyloid-directed clinical trial has produced a new treatment option for AD patients. Why?
My top three reasons are that we’re using the wrong models, we’re intervening too late, and we don’t know how the brain works.
Incorrect models
There are two types of Alzheimer’s disease. One type is caused by dominant mutations in genes coding for the amyloid precursor protein (APP), presenilin-1, or presenilin-2, which all affect the same pathway and cause an overproduction of amyloid. This type of AD is often called “early-onset” since symptoms typically arise before the age of sixty, sometimes in people as young as thirty or forty. However, this form accounts for less than 1 per cent of cases. Almost all mouse models of the disease are based on this early-onset form.
The second type of AD, on the other hand, accounts for the majority of cases, has an unknown cause, and is referred to as “sporadic”. Although this form does have a genetic component, it has been more difficult to model in mice. The main genetic risk factor for this type of AD is the ApoE4 variant of the APOE gene, which can increase the risk of developing AD by up to 60 per cent. However, unlike the dominant mutations, carrying this variant doesn’t guarantee that AD will develop, and conversely, not carrying it is no safeguard. Understanding the relationship between the ApoE4 risk factor and AD development would likely improve our understanding of the cause of the sporadic form of the disease. Yet compared with APP and amyloid, the ApoE4 genotype is under-represented among mouse models, which makes it difficult to study the underlying causes of sporadic AD.
In addition to modelling only the “early-onset” subset of AD cases, the vast majority of mouse models currently being studied do not display any cell death. This lack of cell death may explain why AD turns out to be very easy to treat – in mice. Researchers do it every day. Dozens of compounds have been shown to improve memory and reduce amyloid pathology – in mice. But when the most promising compounds went on to clinical trials, they all had to be stopped due to lack of efficacy or safety concerns.
It’s not that these mouse models are bad – quite the opposite. These models have allowed researchers to perform detailed studies on the effects of amyloid pathology and dissect the molecular pathway of how amyloid is produced. The research generated from these models holds great promise for the identification of a therapy that targets the underlying molecular cause of AD.
These mice just need a bit of a re-model. We can’t reasonably expect these mouse models to recapitulate all aspects of the human disease and reliably predict drug efficacy and safety. An emerging theory suggests that researchers should consider the pathology of these mice as a model of early-stage AD, before neuronal cell death. If this theory holds water, we should be able to use these mouse models to understand mechanisms that may contribute to AD pathology.
Late treatment
A recent study by Randall Bateman looked at people with a dominant AD mutation who, based on their family history, had a predictable age of symptom onset. The researchers found that they could detect changes in amyloid levels in the cerebrospinal fluid up to 25 years before the estimated age of symptom onset. This study and others indicate that AD develops insidiously over decades. By the time cognitive symptoms arise, neurological damage is extensive and hard to reverse. In the past, clinical trials recruited participants who were already experiencing cognitive symptoms, and this may be one reason those trials failed. Intervening as early as possible, before there is extensive neurological damage or even cognitive symptoms, would give the greatest chance of success.
Why not just treat people earlier? The cost of a clinical trial varies widely, but most estimates put it in the hundreds of millions to billions of dollars, with phases 1 to 3 typically lasting five years. This cost is a major obstacle to conducting a trial that would last twenty or thirty years without the promise of a definitive therapy at the end.
However, the field has undergone a shift in thinking about preventing AD and starting treatment earlier. Clinical trials are now recruiting people who are cognitively healthy but who may be at a higher risk of developing AD. These include the Anti-Amyloid Treatment in Asymptomatic Alzheimer’s (A4) study and the Dominantly Inherited Alzheimer Network (DIAN) trial, which are recruiting participants as young as 65 and up to 15 years before the expected age of symptom onset, respectively. These studies are a major step toward intervening before it’s too late.
The unknown brain
It’s true. Bradley Voytek, assistant professor of Computational Cognitive Science and Neuroscience at UC San Diego, estimates that we understand about 2 per cent of how the brain works. We know that different brain regions are associated with different functions, but that is far from understanding how this complex organ produces our thoughts and personalities.
The brain contains 86 billion neurons, each of which can connect with other cells at up to 10,000 sites called synapses, and, as Voytek writes in an article in Nature, on top of those there are at least as many – possibly up to ten times as many – glial cells. Named after the Greek word for “glue,” glial cells were initially thought to simply hold neurons together. Now, growing evidence shows that glia are also involved in signalling and communication, adding another level of complexity to the brain.
If this is not staggering enough, try to map out the connections. The growing field of connectomics aims to map networks of synaptic connections and is facing unexpected challenges. In a recent article, Jeff Lichtman, professor of molecular and cell biology at Harvard University, and his colleagues point out that acquiring the data is “actually the (relatively) easy part.” Using electron microscopy, researchers can acquire images of the brain at a nanometre scale that reveal all synaptic connections. Lichtman’s team converts these nanoscale images into a digital connectivity graph that is essentially a map of all the synapses.

One of the challenges is the sheer volume of data generated. The article explains, “Acquiring images of a single cubic millimeter of a rat brain will generate about 2 million gigabytes or 2 petabytes of data.” Take larger brain regions than this, and we’re into unit prefixes that just sound made up. A complete rat brain will produce about an exabyte (1,000 petabytes) of data. This goes well beyond the capacity of any storage system in existence today. Mapping a complete human brain in this manner “will require a zettabyte (1,000 exabytes), an amount of data approaching that of all the information recorded globally today.”
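To get a sense of how quickly these volumes escalate, here is a rough back-of-envelope sketch of the scaling. The 2-petabytes-per-cubic-millimetre figure is the one quoted above; the brain volumes are approximate assumptions chosen only for illustration.

```python
# Rough back-of-envelope scaling of connectomics data volumes.
# The 2 PB per cubic millimetre figure is the one quoted from Lichtman's article;
# the brain volumes below are approximate assumptions used only for illustration.

PB_PER_MM3 = 2  # petabytes of electron-microscopy data per cubic millimetre

approx_brain_volumes_mm3 = {
    "rat brain (~0.5 cm^3, assumed)": 500,
    "human brain (~1,200 cm^3, assumed)": 1_200_000,
}

for name, volume_mm3 in approx_brain_volumes_mm3.items():
    petabytes = volume_mm3 * PB_PER_MM3
    exabytes = petabytes / 1_000       # 1 exabyte = 1,000 petabytes
    zettabytes = exabytes / 1_000      # 1 zettabyte = 1,000 exabytes
    print(f"{name}: ~{petabytes:,} PB (~{exabytes:,.1f} EB, ~{zettabytes:,.4f} ZB)")

# Prints roughly 1,000 PB (about an exabyte) for a rat brain and a few thousand
# exabytes (zettabyte territory) for a human brain -- the same orders of magnitude
# as the figures quoted in the article.
```

The point is not the exact numbers, which depend on the assumed brain volumes, but that each step up in scale pushes the data into the next made-up-sounding prefix.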
The brain is astoundingly complicated. While a comprehensive map of the human brain is still a long way off, a map of the mouse brain may not be that distant. In addition to microscopy, newly developed techniques such as optogenetics are gaining prominence. Optogenetics is a method that allows researchers to activate genetically targeted, light-sensitive neurons by shining light on a specific area of the brain, and then measure the downstream effects. Advances in understanding how the brain works will undoubtedly change how AD, and for that matter all neurological diseases, are studied and treated.
Despite these obstacles, researchers have made great progress toward understanding and treating AD. The old adage happens to be true: the more you know, the more you know you don’t know. Nonetheless, we have dissected the amyloid pathway with the help of mouse models. We’ve shifted billions of dollars to clinical trials aiming to prevent AD. And our knowledge of this final frontier is growing at an unprecedented rate. So when do I think real treatment options will be available to AD patients? Maybe in the next ten years.