Rubik's Cube

I was flipping through some archives and found this Scope (a Stanford University School of Medicine publication) article describing a machine learning tool (link below; Scope calls it AI, while the tool's authors, in their paper, also linked below, rightly categorize it as ML, a subset of AI as we generally describe it). I always love it when an article links to the underlying paper. It makes things easy.

So, the folks at McGill trained an ML system on PET scans from people showing symptoms of mild cognitive impairment, to predict who among them would develop Alzheimer’s, given that not all of them do. They trained the system to focus on elevated protein expression in specific brain regions when making its predictions.

Applied to an independent set, the tool predicted progression to dementia with 84% accuracy. Read more in the paper; I want to share a few thoughts below.
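To make that workflow concrete, here is a minimal Python sketch of this kind of pipeline: a classifier trained on region-level PET features, then scored on a held-out set. The synthetic data, the feature layout, and the choice of a random-forest model are my illustrative assumptions, not the McGill group's actual method (see their paper for that).

```python
# Minimal sketch of the general workflow (not the authors' actual method):
# train a classifier on region-level PET uptake features from MCI patients,
# then measure accuracy on an independent held-out set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: one row per MCI patient, one column per brain region's
# PET uptake value; label 1 = progressed to Alzheimer's, 0 = remained stable.
n_patients, n_regions = 400, 12
X = rng.normal(size=(n_patients, n_regions))
y = rng.integers(0, 2, size=n_patients)

# Hold out an independent set, as the study did for its reported accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

On real data, the reported figure would come from exactly this kind of held-out evaluation, which is why the independence of the test set matters so much.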

  1. I think tools like this will become the norm over time. However, right now they lack the standardization and maturity required for integration into clinical practice. I don’t mean that in a negative sense; such efforts take time, effort, and funding, of course.
  2. An 84% accuracy is not enough, not even for a supporting tool, and not even when humans are completely in charge: at that rate, roughly 160 of every 1,000 patients assessed would be misclassified. Accuracy improves through training on larger data sets, better algorithms, and other refinement methodologies. This, too, could use standardization, which could then be extended to all ML, DL, and AI tools that use imaging for diagnostics in healthcare.
  3. The future should consist of such tools passively (and, when necessary, actively) siphoning your imaging and other data off your EHRs, then parsing them to see whether predictions can be made (see the first sketch after this list). This, however, requires more groups such as the ADNI (Alzheimer’s Disease Neuroimaging Initiative), whose participants supplied the imaging and other data used here, along with collaboration from hospitals, insurance companies, and governments.
  4. To improve diagnosis across ages, sexes, races, and other distinguishing factors, global cooperation will be required.
  5. Of course, we need to draw on various types of data, ranging from imaging to genetics, epigenetics, and other sources, to make diagnosis truly efficient. Perhaps this combination is one way to get past the 84% accuracy of this tool, until a time comes when imaging alone produces better results (see the second sketch after this list). At that point, say you make predictions based on imaging, genetics, lifestyle, and other factors, and they all chime in: you can then use whatever interventions are available (this is a key factor missing in all the hype about machine learning; you learn something, yes, but what do you DO?) to delay onset, and to treat and cure patients.
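On point 3, here is a hypothetical sketch of the "passive siphoning" idea, assuming the EHR exposes a FHIR-style REST endpoint; the base URL, patient ID, and date below are all placeholders of mine, not anything from the study.

```python
# Hypothetical sketch of passive data siphoning: poll an EHR's FHIR-style
# REST endpoint for new imaging studies and hand them to a prediction model.
# The endpoint, patient ID, and date are placeholders.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # placeholder EHR endpoint

def fetch_new_imaging_studies(patient_id: str, since: str) -> list:
    """Return imaging-study resources recorded for a patient since a date."""
    resp = requests.get(
        f"{FHIR_BASE}/ImagingStudy",
        params={"patient": patient_id, "started": f"ge{since}"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    # FHIR search results come back as a Bundle; each entry holds a resource.
    return [entry["resource"] for entry in bundle.get("entry", [])]

for study in fetch_new_imaging_studies("example-patient-id", "2017-01-01"):
    print(study.get("id"), study.get("started"))
```

A production system would of course need consent handling, de-identification, and the institutional collaborations mentioned above before any of this polling is even permissible.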
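And on point 5, a minimal sketch of one way (my assumption, not anything from the paper) to let imaging, genetics, and lifestyle data all "chime in": train a simple model per modality and average their predicted probabilities.

```python
# Minimal sketch of multimodal combination (one of several possible designs):
# train one model per data source, then average their predicted probabilities
# so every modality contributes to the final call.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, size=n)  # 1 = progressed, 0 = stable (synthetic)

# Hypothetical per-patient feature blocks from three sources.
modalities = {
    "imaging": rng.normal(size=(n, 12)),   # e.g. regional PET uptake values
    "genetics": rng.normal(size=(n, 8)),   # e.g. risk-variant dosages
    "lifestyle": rng.normal(size=(n, 5)),  # e.g. survey-derived scores
}

# One simple model per modality.
models = {
    name: LogisticRegression(max_iter=1000).fit(X, y)
    for name, X in modalities.items()
}

# Average the probabilities; a real system might weight or stack them instead.
probs = np.mean(
    [models[name].predict_proba(X)[:, 1] for name, X in modalities.items()],
    axis=0,
)
predictions = (probs >= 0.5).astype(int)
print("Fraction predicted to progress:", predictions.mean())
```

Simple averaging is just the easiest ensemble to show; weighted or stacked combinations are the more likely path to beating a single-modality 84%.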

When I find more such interesting studies, I will share more such thoughts on Machine Learning, Deep Learning, and AI, and their impact on Healthcare.

Subscribe and Support, Please! 

Did you enjoy this post? Please subscribe for more updates, using the sidebar. Have ideas or blog posts you’d like to see here? Contact me at yamanoor at gmail dot com.

References: 

  1. The Stanford Scope article: http://scopeblog.stanford.edu/2017/08/29/artificial-intelligence-can-help-predict-who-will-develop-dementia-a-new-study-finds/
  2. The McGill paper: http://www.neurobiologyofaging.org/article/S0197-4580(17)30229-4/fulltext
  3. Image courtesy of Pexels: https://www.pexels.com/photo/brain-color-colorful-cube-19677/