Datics AI - Using AI To Diagnose Mental Illness



Teams of scientists and researchers use brain scans to explore how human beings react and respond to certain social situations. To do this, they compare the brains of healthy people to those of people who suffer from mental disorders. This information may help uncover the underlying causes of mental health disorders and support more accurate diagnoses. The most crucial goal, however, remains finding the most effective intervention for a given mental health disorder. The underlying idea is simple: use algorithms to put numbers to feelings.


A large number of people suffer from mental health disorders at any given time. According to the World Health Organization, around 300 million people suffer from depression, which is one of the main causes of disability in the world. The organization estimates that 60 million people suffer from bipolar disorder and 23 million people suffer from schizophrenia.


Mental health diagnosis is a complex and often controversial process. The current model of diagnosing mental disorders, which is based on the display of symptoms and categorized in the Diagnostic and Statistical Manual of Mental Disorders (DSM), may not be the most accurate or effective method. Machine learning could provide a more accurate way to diagnose mental disorders by analyzing a person’s behavior and identifying patterns.


To get a better understanding of a subject’s brain, researchers run two types of MRI scans. The first is the structural MRI, which works something like a soft-tissue X-ray; the process is very noisy and takes about five minutes. The second is the functional MRI (fMRI), which shows the brain actually functioning. For this, the subject plays a game while the scan takes place.


In this case, the subject’s scans fall into one mental health disorder category: borderline personality disorder. After about 15 minutes of the subject playing the game, the brain has been imaged and the scans are ready to be analyzed.


The Fralin Biomedical Research Institute at Virginia Tech Carilion is home to the Human Neuroimaging Laboratory, a cutting-edge facility dedicated to computational psychiatry. This new field applies the powerful tools of computer science to psychiatry in order to gain a more data-driven understanding of mental illness. Machine learning is providing valuable insights that could lead to major breakthroughs in the way we treat mental health conditions.


The science of fMRI imaging is relatively new, having been invented only in the early 1990s. The algorithms used at Virginia Tech, however, are much older, dating back several decades. With the recent advent of more powerful computers, these algorithms can finally be put to good use. Additionally, there is now a greater willingness among scientists to combine different disciplines in order to solve novel problems.


Psychiatry is the branch of medicine that deals with the diagnosis, treatment and prevention of mental health conditions. Mental health disorders can affect a person’s ability to function in daily life, and can be triggered by a variety of factors, including stress, trauma, genetic predisposition and chemical imbalances. Clinical diagnostic surveys are reasonably reliable, but they are still prone to inaccuracies. What one person considers a three on a one-to-ten sadness scale, for example, could be another person’s seven and yet another’s ten, and none of them is wrong. The language for accurately measuring pain just isn’t consistent.


There is a growing body of evidence that suggests mental health disorders are linked to physical symptoms in the body. Researchers believe that by combining neuroimaging with data from other sources, they may be able to develop a machine learning algorithm that can quickly and accurately diagnose these disorders. By understanding the physical symptoms of mental disorders, it may be possible to more effectively treat them using various interventions.
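To illustrate the idea in its simplest form, a pattern-finding algorithm only needs each subject reduced to a vector of measurements drawn from those combined sources. The sketch below is purely hypothetical: the feature names, values, and the nearest-centroid rule are invented stand-ins, far simpler than the models such labs actually use, but they show how "identifying patterns" becomes a computation.

```python
# Hypothetical sketch: labeling "patient" vs "control" subjects from
# combined measurements (e.g. an fMRI-derived score plus survey answers).
# All data and feature names here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Each row is one subject: [reward response, survey score, reaction time].
controls = rng.normal(loc=[1.0, 3.0, 0.5], scale=0.2, size=(50, 3))
patients = rng.normal(loc=[0.4, 7.0, 0.8], scale=0.2, size=(50, 3))

X = np.vstack([controls, patients])
y = np.array([0] * 50 + [1] * 50)  # 0 = control, 1 = patient

# Nearest-centroid rule: the "pattern" is simply each group's mean vector.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Label a feature vector by whichever group centroid is closer."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

accuracy = np.mean([predict(x) == label for x, label in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```

Real diagnostic models face far messier, overlapping distributions than this clean synthetic split, which is precisely why the labs described here need large, diverse data sets.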


As a clinical psychologist, Pearl Chiu has a unique perspective on the potential of machine learning. Having worked directly with patients, she is all too aware of the limitations of traditional methods for understanding and treating mental illness. But she also believes that machine learning could be a powerful tool for detecting patterns and providing insights that could improve patient care.


It’s clear to Chiu that the traditional approach isn’t working well enough. All sorts of data – from survey responses and MRI scans to behavioral and speech patterns from interviews – are being fed into a machine learning algorithm, and soon saliva and blood samples will be added too. Chiu’s lab is working hard to make sense of all this information and find the diagnostic signal amongst the noise.


The fMRI machine is a powerful tool that can provide insights into the brain. In particular, it can help to identify areas of the brain that are active in response to certain stimuli. This information can then be used to compare against healthy controls, and potentially find new patterns in social behavior. Additionally, fMRI can be used to see where and when a certain therapeutic intervention is effective. However, it is important to note that fMRI has its limitations – it is not perfect, and can sometimes give false positives. For example, in one famous case, a dead salmon showed brain activity on an fMRI scan.


A person coming into the lab first takes a clinical survey, after which their genetic information is gathered. Once all the data has been collected, it is run through the algorithms, which then produce a result. Quick results are available within minutes, while detailed ones can take weeks. Strong models make for faster data-crunching: a subject whose clinical interview points toward depression will be processed more quickly if researchers use a depression model.


Mental health disorders are often seen as shameful and taboo, but Chiu is working to change that. By using scans to map out the physical changes in the brain, she hopes to destigmatize mental illness and help people get the treatment they need. This method could also help identify patterns that clinicians may not notice. By making mental health disorders more physical, Chiu is working to break down the barriers surrounding these conditions. With the right tools, Chiu believes we could diagnose different types of depression more accurately. For example, she believes we could use data to know that one person’s type of depression regularly responds well to therapy, while another is better treated with medicine. With this knowledge, we could provide more tailored and effective treatments.


The Chiu lab is currently focusing on what they call “disorders of motivation.” This includes depression and addiction. The algorithms they are developing are designed to be diagnostic and therapeutic models that can be directly applied to patients’ lives. According to Chiu, the ultimate goal is to “take these kinds of things back into the clinic.” Machine learning is a powerful tool that can help us extract valuable insights from large data sets. Without these algorithms, it would be impossible to find the patterns hidden in all this information. Chiu and her team are using machine learning to develop new treatments for diseases, and this technology is crucial for getting their work out of the lab and into the hands of patients who need it.


In Chiu’s laboratory, the use of machine learning algorithms is essential for helping Brooks King-Casas, associate professor at the Fralin Biomedical Research Institute at VTC, to determine which combination of variables is most important out of the thousands that his lab is measuring. By using algorithms that learn through trial and error, King-Casas and his team are able to more accurately identify which combination of variables will have the greatest impact.


King-Casas is a vision in silver and black, his dark hair streaked with white and his glasses the color of a moonless night. He speaks with his hands, using them to emphasize his points. His lab is interested in social behaviors and the patterns that emerge from them. They study the nuances and feelings associated with interpersonal interaction, as well as the brain regions that are engaged during these interactions. The lab has a particular interest in the differences between people with mental health disorders and those without. They hope to better understand how disorders like borderline personality disorder can impact social relationships.


“I’m interested in dissecting how people make decisions, and the ways in which that varies across different psychiatric disorders,” says King-Casas. The lab is working on quantitative models that can analyze the different components of the decision-making process in order to pinpoint where things go wrong. By breaking down human interaction into its smallest parts, King-Casas hopes to put numbers to feelings and study social behavior the way we study cellular behavior. The data collected could potentially show how someone with borderline personality disorder values the world, as compared to someone without the disorder.


According to King-Casas, “We need these reinforcement learning algorithms to take a hundred choices that you make and parse them into three numbers that capture all of that.” That distillation is not possible without the algorithms. “Think about the brain as a model,” King-Casas adds. “What we do is we take everybody’s behavior and we say, ‘Okay, which model best captures the choices that you made?’” The lab is essentially trying to discover the algorithms of the computational brain.
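The kind of distillation King-Casas describes can be sketched with a toy example. The code below is illustrative, not the lab’s actual model: it simulates a subject’s choices on a simple two-armed bandit task using a Rescorla-Wagner learner, then asks which parameters (a learning rate and a choice sensitivity) best capture those choices via a grid search over the likelihood. The task, parameter values, and grid are all assumptions for the sketch.

```python
# Toy version of "parse a hundred choices into a few numbers":
# simulate choices with known parameters, then recover them by
# asking which parameter pair makes the observed choices most likely.
import math
import random

random.seed(1)

TRUE_ALPHA, TRUE_BETA = 0.3, 5.0   # learning rate, choice sensitivity
REWARD_PROBS = [0.8, 0.2]          # arm 0 pays off more often (illustrative)
N_TRIALS = 300

def softmax_choice(q, beta):
    """Pick arm 0 with probability given by a two-option softmax."""
    p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
    return 0 if random.random() < p0 else 1

# Simulate a "subject" whose behavior follows the true parameters.
q = [0.5, 0.5]
choices, rewards = [], []
for _ in range(N_TRIALS):
    c = softmax_choice(q, TRUE_BETA)
    r = 1.0 if random.random() < REWARD_PROBS[c] else 0.0
    q[c] += TRUE_ALPHA * (r - q[c])  # prediction-error update
    choices.append(c)
    rewards.append(r)

def neg_log_likelihood(alpha, beta):
    """How poorly a candidate (alpha, beta) explains the choice sequence."""
    q = [0.5, 0.5]
    nll = 0.0
    for c, r in zip(choices, rewards):
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        p = p0 if c == 0 else 1.0 - p0
        nll -= math.log(max(p, 1e-12))
        q[c] += alpha * (r - q[c])
    return nll

# Grid search: the best-fitting pair is the distillation of 300 choices.
grid = [(a / 10, b) for a in range(1, 10) for b in range(1, 11)]
best_alpha, best_beta = min(grid, key=lambda ab: neg_log_likelihood(*ab))
print(f"recovered alpha={best_alpha}, beta={best_beta}")
```

A clinical version would compare how well competing models fit each person’s choices, exactly the “which model best captures the choices that you made?” question, and then ask whether the fitted numbers differ systematically between diagnostic groups.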


It is a common misconception that algorithms are impartial and unbiased. However, this is not the case. Algorithms are created by humans, who are themselves biased. Additionally, the data that algorithms use is often collected and shaped by people with their own biases. Even the tools used to collect data can be biased. As a result, it is important to be aware of the potential biases in algorithms: a diagnosis reached through a machine-learned pattern would mean very little if the bias were built into the programming. Psychiatry, in particular, has been biased against women throughout its history, and this continues to be a problem today: according to the World Health Organization, women are more likely to be prescribed psychotropic drugs than men.


Gender shapes our experience of pain in ways we may not even realize. A 2001 study published in The Journal of Law, Medicine & Ethics found that women report more pain, more frequent pain, and longer experiences of pain, yet are treated less aggressively than men. They are met with disbelief and hostility, the report concludes, until they essentially prove they are as sick as a male patient. It’s time to break the cycle of gender bias in pain management. Women should be treated with the same respect and care as men when it comes to managing pain. Let’s start making a difference today.

Racial disparities in healthcare are a well-documented and longstanding problem in the United States. Studies have shown that minorities are less likely to receive necessary medical treatment, and when they do receive care, it is often substandard. This problem is exacerbated by the fact that many medical professionals hold an unconscious bias against minority groups. A 2016 study by the University of Virginia found that medical students had ridiculous – and potentially dangerous – misconceptions about Black people, believing that their nerve endings are less sensitive. This kind of thinking can lead to undertreatment of pain in Black patients, which can have serious consequences. Nor is the problem limited to Black patients – Latinx, Native American, and Asian and Pacific Islander patients also face inequitable treatment. It is clear that something needs to be done to address this issue. Better training for medical professionals on cultural competency and unconscious bias could be a start. But ultimately, the only way to solve this problem is to increase diversity in the healthcare field so that minorities have equal representation. Only then will we be able to provide quality care for all Americans, regardless of race.


In order to ensure that their machine is not learning our biases, the researchers at the institute take several precautions. First, the interviewers should not know a subject’s mental health history or what treatments they may be receiving. Second, the data analysts should also be blind to this information. Essentially, everyone involved should be “blind to as many things as possible.” By taking these measures, the researchers can help keep bias out of their machine learning. Chiu’s presence is a boon to the team, which draws students, researchers, and scientists from a wide variety of backgrounds. She is all too aware of the stakes involved: if the diagnostic and treatment guidelines her lab’s algorithms discover are tainted with the same human biases that exist in society, they will merely reinforce those biases.


Human bias is not the only concern: the technical aspects of the data fed into machine learning algorithms also need to be carefully controlled. As Chiu lab research programmer Jacob Lee explains, problems with data quality can distort results just as human biases can, and both have to be accounted for.


Brain imaging is a key tool for researchers studying the mind and brain. However, accurately measuring the brain’s response to different stimuli is challenging, largely because blood flow within the brain takes time to adjust to changes in activity. Neuroimaging techniques must carefully account for this lag in order to produce clean results. Lee explains: “The machine gets a snapshot of the brain every two seconds. But getting the right window of time is crucial. To make sure that the researchers are measuring the response, they have to account for the lag time it takes for the blood to get to the correct part of the brain, which is what the machine is truly measuring. That limits neuroimaging and creates the intervals between the scans.”
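The lag Lee describes can be made concrete with a small calculation. Assuming a textbook-style double-gamma hemodynamic response function (a common approximation, not necessarily what this lab uses), a stimulus at time zero produces a measured blood-flow signal that peaks several seconds later, sampled at the scanner’s two-second snapshot rate:

```python
# Illustrative sketch of hemodynamic lag: the scanner measures blood flow,
# which peaks seconds after the neural event, so analyses model the
# stimulus convolved with a hemodynamic response function (HRF).
import math

TR = 2.0  # one brain snapshot every two seconds
times = [TR * i for i in range(16)]  # 32 seconds of scans

def hrf(t):
    """Simplified double-gamma HRF: an early peak minus a small undershoot."""
    if t <= 0:
        return 0.0
    peak = (t ** 5) * math.exp(-t) / math.factorial(5)
    undershoot = (t ** 15) * math.exp(-t) / math.factorial(15)
    return peak - 0.1 * undershoot

# A brief stimulus presented at t = 0 (on at the first snapshot only).
stimulus = [1.0] + [0.0] * (len(times) - 1)

# Discrete convolution: the predicted scanner signal at each snapshot.
predicted = [
    sum(stimulus[j] * hrf(times[i] - times[j]) for j in range(i + 1))
    for i in range(len(times))
]

peak_time = times[predicted.index(max(predicted))]
print(f"stimulus at t=0 s, predicted signal peak at t={peak_time:.0f} s")
```

The several-second offset between the event and the measured peak is exactly the “window of time” researchers have to get right when deciding which scans reflect the response to which stimulus.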


Since different cultures perceive colors or numbers differently, the stimuli need to be carefully thought out. They include images meant to capture attention or evoke emotion; alternatively, subjects are asked to rate risks.


A small number of subjects can make fMRI studies misleading, which is why labs are trying to share data in order to increase sample size and diversity. The Human Neuroimaging Lab shares data with Peking University and Baylor College of Medicine, and also works in close collaboration with researchers at the University of Hawai’i. However, fMRI scanners are mostly located in developed countries, so the resulting data is even less representative of the world’s population.


Although the fMRI is a powerful tool, it has its limitations. For example, scientists are not actually looking at the brain itself when they use this technique. Instead, they are looking at a software representation of the brain that is divided into small units called voxels. A recent study conducted by a team of Swedish researchers tested the three most popular statistical software packages for fMRI against a human data set. The results of the study, published in the Proceedings of the National Academy of Sciences of the United States of America in June 2016, showed that there was a higher than expected rate of false positives when different software packages were used. These findings highlight the importance of caution when using fMRI to interpret brain activity.
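The dead-salmon result and the software study both come down to multiple comparisons: test enough voxels at an uncorrected threshold and some will “light up” by chance even in pure noise. A minimal simulation (the voxel count and thresholds are illustrative, and real corrections are more sophisticated than Bonferroni) makes the point:

```python
# Why pure-noise data can show "activity": testing tens of thousands of
# voxels independently at p < 0.05 guarantees false positives unless the
# threshold is corrected for the number of tests.
import random

random.seed(0)

N_VOXELS = 20_000
ALPHA = 0.05

# Null simulation: with no real signal anywhere, each voxel's p-value is
# effectively a uniform draw on [0, 1].
p_values = [random.random() for _ in range(N_VOXELS)]

# Uncorrected: roughly N_VOXELS * ALPHA voxels pass by chance alone.
uncorrected_hits = sum(p < ALPHA for p in p_values)

# Bonferroni correction: divide the threshold by the number of tests.
corrected_hits = sum(p < ALPHA / N_VOXELS for p in p_values)

print(f"uncorrected 'active' voxels: {uncorrected_hits}")
print(f"Bonferroni-corrected voxels: {corrected_hits}")
```

The Swedish study’s finding was essentially that the clustering corrections built into popular fMRI packages did not control false positives as tightly as this kind of arithmetic assumes.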


A recent paper has caused alarm among neuroscientists, with claims that over 40,000 research papers based on fMRI data could be invalidated. However, later corrections to the paper have reduced this number to closer to 3,500. Even so, as Vox explained, neuroscientists do not believe fMRI is a broken tool – rather, it merely needs continued refinement. Making scans more accessible and accurate will be key to the clinical applications of these techniques.


Another issue is consent. Can a person in the depths of depression meaningfully give it? And while creating models of mental health disorders, we are also creating a model of normality. So who gets to choose and define what normal is?

Paul Humphreys, a professor of philosophy at the University of Virginia, has raised a further concern: the black box problem. Just as we cannot determine how the brain decides what a cat is, we cannot determine how a machine learning algorithm reaches its decisions. This is a problem because we rely on these algorithms to make important decisions for us without understanding how they work, leaving a gap between what scientists think the model is doing and what it has actually learned.


While many boundaries and definitions remain to be worked out, it is clear that AI has already come a long way in helping to diagnose underlying mental disorders and illnesses. As technology advances and sociological biases are recognized and removed, the use of artificial intelligence to diagnose mental illness is bound to keep improving.
