Interesting AI & Immune System Initiative by Microsoft and Adaptive Biotechnologies that raises some questions

It is no secret that the larger technology companies, including Google, Microsoft, nVidia, IBM, Apple and others, want to dominate AI, as well as healthcare, in an ever-expanding competitive landscape. While it is anyone’s guess whether they will succeed, or get upstaged by smaller, nimbler firms in either arena, the moves they make are definitely interesting to watch. A lot of the moves appear benign, but could lead to cannibalization, such as the “AI Contests” some of the organizations hold (more posts to come on this).

Partnerships can go both ways, I suppose, and are probably a strategic way to externalize the risk of failure. In that sense, in the current example, both Microsoft and Adaptive Biotechnologies appear to want to play to their strengths. The premise of what they want to do with the partnership is also quite intriguing. You can read it from the horses’ mouths in the links provided below. I will summarize them and lay out a couple of thoughts that come to my mind.

Essentially, the project would turn the body’s immune system itself into the data source for diagnosis. Every time the immune system responds to a disease, T-cell receptor (TCR) proteins are expressed to combat antigens. Mapping the TCRs through a simple blood test, as Microsoft and Adaptive postulate, could go a long way toward early diagnosis of an array of diseases. To say the least, the project is ambitious, and here are a few thoughts:

  1. Accurate diagnosis and personalized therapy require knowledge of the state of the human body and its disease. Simply mapping a person’s genetics, and considering their epigenetics, lifestyle, etc., is complex enough, but it still might not be enough! Thus, TCRs could be mapped to allow for quicker diagnoses, if the theory pans out on a large scale. It is as yet unclear to me whether mapped TCRs can actually yield the necessary diagnostic clues, machine learning or not, for a larger variety of diseases. However, they might supplement diagnostic efforts alongside genetics, epigenetics and other health data sources.
  2. From a business angle, I also find this to be intriguingly different from the general bedlam of text processing à la Watson, and all the algorithms rushing to read and reinterpret imaging, as with nVidia and others. Microsoft appears to have looked for, and found, a partner with a unique approach to the application of machine learning in healthcare.
  3. Any large set of unknown targets, powered by data, might appear to be a classic problem for machine learning to solve. However, Microsoft and Adaptive (Microsoft has invested in them now, apparently) might have joined a Kool-Aid club that puts the cart before the horse. What I mean by that cliché is this: medicine already has a problem of knowledge paucity when it comes to diagnoses, until more clarity becomes available by way of a progressed disease. This is fundamentally because disease precursors are poorly understood, from want of clinical research, not lack of intelligence. What is to say the TCRs won’t just set off an array of false and confounding alarms? Yes, with liquid biopsy and other such hyped-up methods out there, the industry is in a rush for quick fixes. It might well turn out that this is much harder to resolve, requiring clinical studies and protocols to demonstrate that TCR expression, its proportional presence, etc., do truly indicate the preliminary stages of a disease being present in the body. I am not convinced yet.
  4. There is also a maddening rush out of the gate to define universal tests with pinpricks of blood. While I am not suggesting we practice good old, barbaric bloodletting (although there are all kinds of people out there “thirsting” for a comeback of this practice), I think this is an unnecessarily over-constrained problem definition, perhaps making titillating fodder for press releases and blog posts. There might also be an urge to combine these pinprick tests with diabetes monitoring and such. While it is tempting to fantasize about such possibilities, and at some point they might come to fruition, there is no need to go to such extremes before solving the fundamental problems in medicine: accurate diagnosis and targeted therapy. For example, when should a person’s blood be drawn? How frequently? Would the frequency vary when a certain set of TCRs is observed? There are so many things to worry about here. I would think companies would stop using overly broad terms such as “universal”.
  5. In my posts here, in my talks, daily discussions and so on, I always come back to a few bugaboos. Who will own the TCR mappings? Who owns the product of the machine learning algorithms? Will they be patented and bridled off? How will such diagnostic methods be regulated? Validated? Will Explainable AI, something I expect to be a fundamental principle applied to healthcare, be required (see the explanation from DARPA linked below) and used judiciously? And on and on we go.
  6. Data has been walled off quite well in the healthcare industry up to this point. Yes, we got the human genome, but much, much more sits behind curtains, masks and whatever other clichés you can think of. That is why every new technology that promises to expose and dig through data always concerns me when it comes to ownership.
  7. The “Theranos” Effect: If you are like me, and know the story of Theranos, you are still sitting up at night, jaw dropped, wondering how in the hell the company is still in vogue (I have written about this on my medical devices blog, in fact, using the same Pixabay image! See link below). I have also linked one of several dozen well-written write-ups that offer us a cautionary tale, and I plan to call this, as I have named it, the “Theranos” Effect. In summary, this company went into the “over-promise and extreme under-delivery” (or never-delivery, to date) business. They engaged in egregious and unethical business practices, fooling the industry, investors, partners and more along the way. How do we make sure that, with all the promise of AI, companies don’t make such ugly incidents repeat themselves? Mind you, this is not me pointing fingers at Microsoft. I think this is indeed a great effort. I am just offering this up as an important cautionary tale for people in healthcare, and in any industry for that matter. I understand, as much as anyone else, that businesses need hype to push their products. However, it would behoove you to make sure you don’t push things off a cliff…


In mankind’s march towards the goal of a healthy future for all, we have many strides to make. Naturally, we want to be as accurate and as thorough, yet as economical, as we can. Therefore, we rely on technological breakthroughs on one end, ranging from improvements in basic science to sensors and AI, and on the economics of lower thresholds for test materials consumed, time to diagnosis and other aspects on the other end. What Microsoft and Adaptive aim to do with their (investment-based) symbiotic-looking partnership is commendable. It may take us one step closer to our goal, or it may not be the one to take us there at all. Only time can tell, and in the meanwhile, I hope common sense and ethics prevail over hype and fantastic marketing materials.

Should you have something to add, please leave a comment below.

Subscribe and Support, Please! 

Did you enjoy this post? Please subscribe for more updates, using the sidebar. Have ideas or blog posts you’d like to see here? Contact me at yamanoor at gmail dot com.


  1. The Microsoft Blog Post:
  2. Adaptive’s Press Release:
  3. DARPA, on Explainable AI:
  4. Vanity Fair on Theranos:
  5. Myself, writing with incredulity on Theranos’s longevity:
  6. Image, Courtesy, Pexels+Pixabay:

The FDA Designates an Eye-Diagnostic System as a “Breakthrough Device”


Last year, in my talk at the San Jose BIOMEDevice Conference, I postulated that one of the key factors in the rate and level of adoption of AI-based devices and systems would be how the FDA acts on such submissions. It is good to see that one company seems to have proceeded with putting the FDA to the test. The result is positive, and good news for companies pushing AI-based systems, but it also carries a tale of caution for the road ahead for everyone involved. I will lay out the case briefly in this blog, but this is not the last time you will hear me talk about the effect of regulatory agency responses to such device submissions and their long-term implications.

I just reviewed news that IDx, a company that designed what is being described as “an AI based autonomous diabetic retinopathy” detection system, has been given “breakthrough” status by the FDA for their system. This provides the system with an expedited review, and potentially quick approval. This is very encouraging, not only for the organization with the retinopathy system, but also for all companies selling AI systems.

Here are a couple of key thoughts I have on this:

  1. A very promising outlook: I am not aware of how much of a personal hand Scott Gottlieb had in this decision, but I am sure his leadership had a measurable impact on the decision to expedite review. He has taken a stance of modernizing some of the approval processes, especially in the area of Digital Health. The FDA’s backing of AI-based systems is a very positive thing for companies with an AI focus on healthcare.
  2. Good for competition in the healthcare industry: Once the FDA sets AI-based apps and systems on the path to commercialization, there will be no excuse for organizations that do not adopt Machine Learning and Neural Networks. Of course, watch for a good spike in M&A activity in this regard.
  3. The Hype Cycle is around the corner: While companies with real ML/DL products and services will be out there, helping patients, there will also be the fakes and the wannabes that we will have to contend with. And it is quite possible that the rotten apples will ruin it for everyone. This is something to watch for.
  4. Unrealistic expectations and unintended consequences: I know this is closely related to the point about the hype cycle, but I want to make a finer point here. Even the companies with good AI based tools might push the envelope in hyping up the utility and efficiency of their products. Over-promise and under-delivery lead to angry, disappointed customers, and will eventually create problems for everyone in the industry.
  5. How will CMS and private insurance companies handle reimbursement? This is a big question and can be the topic (and probably will be) of several blog posts. However, I just wanted to highlight that while AI might improve efficiency and automate diagnoses and treatments, companies have to resolve how and where the reimbursement will come from.
  6. A word of caution on regulation: It is commendable that the FDA is moving forward with initiatives on things such as AI. However, I do not always agree that they are doing a great job. I was irked by the craze with which orphan applications were being reviewed by the FDA. The FDA, in the past, has gotten too cozy with the industry and Congress, with terrible consequences. Those who remember the Menaflex incident (reference below) remember how quickly the public’s trust in the FDA eroded, and how the organization took years to recover from that. Therefore, there is cause for concern here, and one hopes there will be a balancing act to make sure that the approval processes remain rigorous enough.

Conclusion: The current “breakthrough” designation and fast track review show great promise, but one hopes the FDA will balance it out with an equal dose of caution.



  1. The Press Release on IDx:
  2. On what happened with Menaflex:
  3. Image, Courtesy, Pexels:

Personalized Medicine might be the true ticket for AI’s march on Healthcare

I came across this interesting piece on siliconANGLE focused on how big data is coming to cardiovascular medicine, via a precision medicine initiative. I found the read fascinating, and I recommend the article, provided in the references section below.

In summary, the American Heart Association (AHA) is collaborating with Amazon Web Services to build a data analytics platform. Details are thin, so I am unsure of how much is hype and how much is reality. I assume, as time progresses, it will indeed turn out to be very useful. The article describes the platform as a “marketplace” (yes, concerning, in more ways than one) for various forms of patient data:

  1. Personal Data
  2. Clinical Trial Data
  3. Pharmaceutical Data
  4. Hospital Data, and apparently, other data.

The goal here is to use this to personalize treatments for patients. Read more in the article. I believe that personalized medicine and AI share a deep connection. I want to share some of my own thoughts on how such efforts could give way to lasting influence by AI on healthcare:

  1. Personalized medicine needs knowledge, which comes through data. Similarly, learning requires data. Data helps you glean patterns, which is what learning is. Thus, AI and personalized medicine are the two snakes of the caduceus, whose staff, in this case, is the data itself. Therefore, efforts to bring about personalized medicine should be expanded vastly.
  2. Knowledge is supreme. When it comes to personalized medicine and such, you hear all this hype of n=1. Unfortunately, for medicine to be effective, first you need large quantities of data, and with that the ability to glean actionable information from the data, to understand diseases, treatments, and their effectiveness. Whether you see this on a gross scale, or all the way down to a Single Nucleotide Polymorphism (SNP), this type of knowledge needs to be the horse in front of the cart. Without actionability, the patterns recognized through the data might not be useful. This knowledge doesn’t exist, so the hype cycle might cause enough damage and reduce funding levels later on. Realistic expectations ought to be set so that this doesn’t happen.
  3. There are two problems with healthcare data that come to the fore. First, we don’t really know whether what is being collected is appropriate. There is a lot of daydreaming about genetic data, racial data, physiological parameters (one company claims to be measuring 300 parameters!) and other measurands, and the hope that these will somehow magically transform themselves into diagnoses and treatments. With enough research, they will, but this will take expensive effort and time. Low-hanging fruit such as breast cancer diagnoses, etc., is pushing people to imagine that the same level of rapid success will be seen throughout the healthcare continuum. There is no way to know if this will be the case.
  4. The second problem with healthcare data is about ownership and walls around the data. For example, will the AHA share the data with the general public? If not, why not? Terms like “marketplace” give me pause. They might very well be just terms used to explain things colloquially. On the other hand, it is important to think about the issue of data sitting behind paywalls, or blocked off with HIPAA as a convenient excuse (more posts to come on all this). Data democracy is a critical need for the progress of medicine, and eventually for Artificial Intelligence.
  5. What if the data is not free? What about the insights? Do we suddenly now start paying for the insights? Or, would CMS start reimbursing Doctors for personalized medicine? For the use of AI tools? There are several questions here, and only time will answer them.


Data will give us clinical insights. These insights will eventually translate into meaningful, personalized healthcare. The same data will lead to even more advances in AI. However, unanswered questions over data ownership and access remain.



  1. The siliconANGLE Article:
  2. Image, Courtesy Pexels:


Machine Learning shows promise in Dementia prediction


I was flipping through some archives and found this Scope (a Stanford University School of Medicine publication) article that delineates a machine learning tool (link below; Scope calls it AI, while the authors of the tool, in their paper, also linked below, rightfully categorize it as ML, a subset of AI as we generally describe it). I always love it when you have access to the paper linked to a study. It makes things much easier.

So, the folks at McGill trained an ML system using PET scans from people who show symptoms of mild cognitive impairment, to see who among them would develop Alzheimer’s, given that not all of them do. They taught the system to focus on elevated protein expression in specific brain regions in order to make its predictions.

Used on an independent data set, the tool predicted progression to dementia with 84% accuracy. Read more in the paper. I want to share a few thoughts below.
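For readers unfamiliar with how a figure like that is produced, here is a minimal, hypothetical sketch of the general workflow: fit a classifier on one set of scans, then report accuracy on an independent, held-out set. This is not the McGill pipeline; the data is synthetic, and every name and number here is an illustrative assumption.

```python
# Hypothetical sketch: train a simple classifier on synthetic "regional
# uptake" features and report accuracy on a held-out set, mirroring the
# train-then-validate-on-independent-data pattern the paper describes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_scans, n_regions = 400, 12  # hypothetical sizes, not from the study

# Synthetic stand-in for PET-derived values; "progressors" (label 1)
# get elevated signal in three specific regions.
X = rng.normal(size=(n_scans, n_regions))
y = rng.integers(0, 2, size=n_scans)
X[y == 1, :3] += 1.0

# Hold out an independent test set for the accuracy estimate.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

The key point of the design is that accuracy is only meaningful when measured on data the model never saw during training, which is why the authors’ use of an independent set matters.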

  1. I think tools like this will become the norm over time. However, right now, they lack the kind of standardization and maturity required for integration into clinical practice. I don’t mean to state that in a negative sense. Such efforts take time, effort and funding, of course.
  2. An 84% accuracy is not enough, not even for a supporting tool, not even when humans are completely in charge. Improvement can come through training with larger data sets, the use of better algorithms and other methodologies. This could also use some standardization, which can then be spread to all ML, DL and AI tools that use imaging for diagnostics in healthcare.
  3. The future should consist of such tools passively (and when necessary, actively) siphoning imaging and other data off your EHRs, and then parsing them to see if predictions can be made. This, however, requires more groups such as the ADNI (Alzheimer’s Disease Neuroimaging Initiative), from whose participants the imaging and other data were drawn, as well as collaboration from hospitals, insurance companies and governments.
  4. To improve diagnosis across ages, sexes, races and other discriminating factors, global co-operation would be required.
  5. Of course, we need to take various types of data, ranging from imaging to genetics, epigenetics and other sources, to make diagnosis truly efficient. Perhaps this combination is one way to get around the 84% accuracy of this tool, until a time comes when imaging alone produces better results. At that point, say you make predictions based on imaging, genetics, lifestyle and other factors, and they all chime in. You can then use whatever interventions are available (this is a key factor missing in all the hype about machine learning: you learn something, yes, but what do you DO?) to delay, treat and cure patients.

When I find more such interesting studies, I will share similar and other thoughts on Machine Learning, Deep Learning and AI, and their impact on Healthcare.



  1. The Stanford SCOPE Article:
  2. The McGill Paper:
  3. Image courtesy, Pexels:

Interesting AI+Health Imaging Investment and a few thoughts on where the field might be headed


This morning, a LinkedIn contact of mine shared an interesting MassDevice article about “Bay Labs”, a company focused on bringing Deep Learning to cardiovascular health imaging (though they define themselves in broader terms, as an algorithm company). It appears that Khosla Ventures (named after the same much-hated Khosla from California), an existing investor (which must mean seed funding), has backed the organization with $6M in Series A funding.

What is interesting:

  1. So far, what we have seen is the really big companies, such as nVidia and G.E., working on backing up imaging with AI. They even recently announced a collaboration. This would have been a scary scenario for start-ups trying to break into the space. Even now, one should not take too much comfort. Bay Labs apparently won an AI company contest conducted by nVidia. It clearly shows that big players are keeping an eye on the start-up space, potentially looking to scoop up promising organizations.
  2. Though Bay Labs likes to talk about themselves in somewhat general terms (they state this on their website: “In order to serve the largest number of people, we aim to amplify the benefits of deep learning by providing high-performance algorithmic capabilities to assist with healthcare challenges on a global scale.”), they are, at least currently, focused on Deep Learning as it relates to imaging for cardiovascular health. Don’t get me wrong, I like both ends of the spectrum displayed here. They appear to have a broad vision of where they want to end up, and yet, as is important for a startup, they have a keen sense of focus.
    1. Even with all the healthcare and lifestyle improvements urged on by healthcare systems, governments, non-profits and others, cardiovascular diseases remain the leading cause of death in the US and around the world. Thus, even within just this field, in imaging alone, the opportunities abound.
    2. Plus, cardiovascular health also happens to be one of the most expensive areas of care, and is low-hanging fruit for startups expecting to break into a new field and work towards financial success. The other such field is probably oncology, which, however, is crowded with Watson and others. Still, I expect at least a handful of startups focused on various subsets of oncological diagnosis and treatment to emerge from the woods as time progresses.
    3. It is also one of the fields where practitioners are prone to adopt cutting edge solutions.
  3. Bubbles: Bay Labs appears to be an organization founded on some sound principles. The next few will probably follow along the same path. One has to wonder, though: at what point in the not-too-distant future will we see organizations touting AI-this and AI-that, without any sound basis? This should be interesting to track.
  4. Algorithm Patents: The debate on whether software and algorithms can be patented, and under what conditions, was never properly settled (what ever is?). A new wave of companies touting algorithms will now come along and, generally speaking, take each other to court, giving headaches to Luddite Justices, such as those on the US Supreme Court, who are still hemming and hawing about allowing basic AV equipment within the courts (ironically, AI has been trained to predict how individual US Supreme Court Justices would rule, with fairly good accuracy). This whole space is something to watch closely.
  5. Assets: Besides patents, there are other things to be concerned about here. Who ultimately owns the data? The results of what the Machine Learning and Deep Learning algorithms find? Are the findings marketable? If not, who pays to get this done? How do governments, healthcare systems and private organizations, all of which form the backbone of medicine, share the costs and the benefits?
  6. Mergers & Acquisitions: Once the hype cycle is ridden and who owns what is also a bit more clear, comes the M&A song and dance. This also should be interesting to watch.
  7. Interoperability: Data interoperability in healthcare is already a major headache. This is currently one of my core research areas. I am deeply interested in unraveling the challenges of how data should be cleaned and labeled for use in AI applications. On the surface, it appears that all you have to do is collect the data, transform it for various purposes – to upload to EHRs, to use in Machine Learning and so on – and then get on with it. The underlying issues are much more complicated. Who collects the data? How is it blinded? Who pays for all this? How does the learning get applied? How do you glean precision medicine uses out of all this? And many, many more…
  8. Regulatory Dilemmas: I have a series of blog posts planned on this topic, but let’s take a quick dive here. How do you get approvals for such AI systems? How do you fix problems? Should improvements, fed by data volume and algorithm changes, require regulatory updates? What happens when an algorithm produces defective results? These and many other problems will need some type of resolution, and will create new problems and opportunities of their own.

Summary: In summary, I’d say news of companies such as Bay Labs is good. However, headwinds lie ahead for them, and for everyone in the industry. We are looking at huge paradigm shifts, and thus not much is predictable. New issues will unfold while ones that seem dire right now might become trivial. We are in for the ride of our lifetimes!


  1. The MassDevice Article:
  2. Bay Labs:
  3. Image, Courtesy Pexels:

Slides – How Artificial Intelligence is Changing Medical Devices

The main goal of this site is to air my thoughts on how new technologies and paradigms impact the practice of medicine. In relation to this, I made a presentation, just yesterday, December 6, 2017, on the topic of Artificial Intelligence, and the growing influence of this discipline on the field of medical devices.

The slides will be available to download through the site.

I do have to caution you: I do not present a lot of text in the slides. I have written some notes on the slides, but together, they still will not be enough to understand all of what I spoke about.

However, do not worry. I will blog about each slide, concept and use-case presented in this presentation over the next few weeks and months. Subscribe for updates and stay tuned! In addition, if you have a set of burning questions and need them answered as soon as possible, do feel free to contact me and I will answer them as soon as I can.

The download link: