Tuesday, February 27, 2018

"you are dying of kidney failure" -- a lesson about AI BS





This is a story about AI, but it starts off as a personal story.

I returned to New York from a trip to the U.K. feeling very bad. I guessed that I had a kidney stone, but I had had those before and this seemed different. I didn’t have a doctor in New York, but a friend told me he could get someone to see me immediately. I saw him, and then I got a call from him telling me I was very sick and needed to see a kidney doctor right away. He set up the appointment for the next day. The kidney doctor examined me and told me that I was dying of kidney failure. Needless to say, I was upset.

His nurse came in and asked me what the kidney doctor (who was listed as one of the top 100 doctors in New York by the way) had said. I told her and she said she had seen a lot of patients who were dying of kidney failure and I didn’t look like them at all. 

The doctor gave me a diet to follow which he said might help. I started it only to realize that it pretty much described how I normally ate. I returned to Florida getting sicker every day. Finally I called a wealthy friend who had a “concierge doctor” who agreed to see me. The doctor told me to get on the next plane to New York and go to Columbia Presbyterian. 

I went and met a kidney doctor there (Jai Radhakrishnan) who did something very strange — he listened. He asked me about every detail of my time in the U.K. and in New York and then told me that I was not dying of kidney failure. He said I had a very high creatinine number, which confused the other doctor because it is strongly associated with kidney failure, but what had actually happened to me was that both kidneys were blocked, one by a kidney stone and one because I had managed to dehydrate myself, which caused a perfect storm. “Remove the kidney stone and you’ll be fine,” he said, and they did and I was, and still am six years later.

Why am I telling this story?

Today we hear endless AI BS, as I often point out. Lately there has been a lot of BS about AI and medicine. For example:





What is the problem with this nonsense? One consistent problem with reporting on AI is that reporters say AI when they mean “a statistical algorithm.” Computer programs can be written to examine reams of data more easily than a person can. This is a good thing. But data is easily misunderstood by people (especially when they don’t understand the context), and you can be sure these “AI’s” won’t have a clue. The first kidney doctor I saw looked at a creatinine number over 5 and simply went no further. The nurse looked at me and compared me to other patients she had seen. The Columbia kidney doctor listened and used his vast knowledge of prior cases to reason from.

Modern medicine is full of doctors who are already robotic. But good medicine is like any other subject that requires one to make judgements based on prior cases. You can follow rules, or you can think. The first kidney doctor (who I saw multiple times) never bothered to actually think.

Years ago, in the last AI BS movement of the 1980’s, rule-based systems were all the rage. Doctors were interviewed and rules were extracted from them and then computers were given these rules. Suddenly there would be ‘expert systems’ and they would rule the world. VC’s dove in head first.

I responded to this expert system nonsense with the idea of case-based reasoning. I said that people reason from past experience, and they do not use rules. (A doctor can glibly give you a rule when asked, but that doesn’t mean that they use that rule in decision making. People are notoriously bad at understanding how their own minds work.)

You want a doctor who has seen a lot of cases similar to yours and can do what that nurse did: recognize something by really looking at the patient. The rule-based systems that caused the AI winter of the 1980’s and early 1990’s were promising to do something that people don’t do: to reason from rules. Now, modern day “AI” is proposing that it can reason from massive amounts of data that the computer really doesn’t understand (just like my NYC Top Doctor did).

Case-based reasoning proposes that people store and index cases based on many factors and then get reminded of old cases when various factors partially match. Then, and this is the important part, they think about the prior cases more deeply and try to draw the lessons that they learned from the earlier cases to reason from.
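For readers who want to see what those mechanics might look like, here is a minimal sketch in Python of the store-index-remind cycle described above. Everything in it is invented for illustration: the case names, the feature labels, and the crude feature-overlap matching, which is nothing like the rich indexing a real case-based reasoner (or a good doctor) uses.

```python
# A toy case library: cases are indexed by features, and a new situation
# "reminds" us of stored cases whose features partially match it.
# All cases, features, and lessons below are made up for illustration.

from dataclasses import dataclass, field

@dataclass
class Case:
    name: str
    features: set[str]   # indexing factors, e.g. {"high_creatinine", "sudden_onset"}
    lesson: str           # what was learned from this case

@dataclass
class CaseLibrary:
    cases: list[Case] = field(default_factory=list)

    def add(self, case: Case) -> None:
        self.cases.append(case)

    def remind(self, situation: set[str], top_n: int = 3) -> list[tuple[Case, float]]:
        """Return stored cases that partially match the new situation,
        scored by feature overlap (Jaccard similarity)."""
        scored = []
        for case in self.cases:
            shared = len(case.features & situation)
            total = len(case.features | situation)
            if shared:
                scored.append((case, shared / total))
        return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

library = CaseLibrary()
library.add(Case("patient_A",
                 {"high_creatinine", "gradual_onset", "poor_appetite"},
                 "chronic kidney failure"))
library.add(Case("patient_B",
                 {"high_creatinine", "sudden_onset", "kidney_stone_history", "dehydration"},
                 "acute obstruction; treat the blockage, not the kidneys"))

# The retrieved cases are reminders, not answers: the reasoner still has to
# compare each prior case to the new situation and adapt its lesson.
new_patient = {"high_creatinine", "sudden_onset", "kidney_stone_history", "recent_travel"}
for case, score in library.remind(new_patient):
    print(f"{case.name}: similarity {score:.2f} -> lesson: {case.lesson}")
```

The point of the sketch is the last comment: retrieval only surfaces candidate precedents. The hard part, the part that matters, is thinking about how the old case does and does not fit the new one.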

We had some success at building case-based reasoning programs, and many others are still doing that, but when the funding dried up this became more difficult. This loss has been magnified by the current “AI,” which has no way of reasoning from experience and can only promise conclusions based on data it really doesn’t understand. The current AI medical programs would have had me dying as well.

Dr. Radhakrishnan did something very weird. He asked, and he listened carefully to my answers. Until computers can do that, they will not replace human doctors (or anyone else who needs to think, actually).

Now, of course, I believe in AI. I worked in it my whole life. But the current AI fad promoted by the media and backed by VC’s can only lead to the very thing that people are afraid of: dumb computers replacing thinking people.  

Do I think a computer could ever make a good diagnosis? I think it could, after thousands of cases have been analyzed and indexed so that they pop up at just the right moment: first to help a doctor make a decision, and then, after we are confident that our case base is deep and that the cases have been indexed properly, to actually make that decision if no doctor is available. But this isn’t being worked on at all (well, I still get to work on it from time to time) because of the AI winter of the 80’s.

So, for now, we need to stop thinking AI will do anything important and ask instead whether an algorithm might help us supply better data to doctors. (Of course, this still wouldn’t have helped in my case.) The algorithms need to be seen as data analytics, not as “AI.” In other words, current AI is only worthwhile if it can help people make better decisions.


I would love to work on building the AI doctor that is based on reasoning from cases, but alas, there is still no funding for that. (Too hard. No quick money to be made by investors.)
