Google's Medical AI Detects Lung Cancer With 94% Accuracy

Google's sharp-eyed, deep-learning medical AI was as good as or better than trained radiologists.
John Loeffler

In a new study published this week, Google's lung-cancer-detecting AI identified lung cancer as well as trained radiologists, if not better.

Google's Neural Network Can Now Spot Cancer

Google teamed up with medical researchers to train its deep-learning AI to detect lung cancer in CT scans; the system performed as well as or better than trained radiologists, achieving just over 94% accuracy.

“We have some of the biggest computers in the world,” said Dr. Daniel Tse, a project manager at Google and a co-author of the two studies published Monday in the journal Nature Medicine. “We started wanting to push the boundaries of basic science to find interesting and cool applications to work on.”


Lung cancer kills almost 2 million people around the world every year, including 160,000 deaths in the US last year. As with all cancers, the best chance of successful treatment lies in early detection, which means screening people at high risk for the disease, such as smokers. These screenings aren't perfect, however: the subtle differences between a malignant tumor and a benign anomaly can be difficult to distinguish on a CT scan.

Google has been hoping its deep-learning algorithms can teach an AI what cancer looks like, so that it can assist doctors and hospitals in diagnosing patients early enough to make a difference in their treatment outcomes. Pattern recognition is something neural networks are exceptionally good at, and with enough training data, Google hoped the AI could learn to recognize cancer in its earliest stages, when intervention is most likely to succeed.

In the pair of studies, the AI was trained on CT scans of people with lung cancer, people without lung cancer, and people whose CT scans showed nodules that would later develop into cancer. In one study, the AI and the expert radiologists were given two different scans from each patient, an earlier scan and a later one, while in the second study, only one scan was available.

When an earlier scan was available, the AI and the radiologists detected cancers equally well; in the second study, the AI outperformed the human doctors, producing fewer false positives and fewer false negatives. Overall, the AI detected lung cancers from the CT scans with 94.4% accuracy, an astonishingly high rate.
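For readers unfamiliar with these metrics, accuracy, false positive rate, and false negative rate are all computed from a confusion matrix of test outcomes. The sketch below uses hypothetical counts chosen for illustration only (they are not the study's actual case counts):

```python
# Hypothetical confusion-matrix counts for a cancer-screening test.
# These numbers are illustrative, NOT taken from the Google study.
tp = 130   # true positives: cancers correctly flagged
tn = 814   # true negatives: healthy scans correctly cleared
fp = 26    # false positives: healthy scans wrongly flagged
fn = 30    # false negatives: cancers missed

total = tp + tn + fp + fn
accuracy = (tp + tn) / total             # share of all scans classified correctly
false_positive_rate = fp / (fp + tn)     # fraction of healthy scans wrongly flagged
false_negative_rate = fn / (fn + tp)     # fraction of cancers missed

print(f"accuracy: {accuracy:.1%}")                       # 94.4%
print(f"false positive rate: {false_positive_rate:.1%}")
print(f"false negative rate: {false_negative_rate:.1%}")
```

Note that a high overall accuracy can coexist with a worrying false negative rate when cancers are rare in the test population, which is why the article reports false positives and false negatives separately rather than accuracy alone.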

“The whole experimentation process is like a student in school,” said Tse. “We’re using a large data set for training, giving it lessons and pop quizzes so it can begin to learn for itself what is cancer, and what will or will not be cancer in the future. We gave it a final exam on data it’s never seen after we spent a lot of time training, and the result we saw on final exam — it got an A.”


That final exam comprised 6,716 cases where the diagnosis was known, making the result of the study all the more significant. That said, it will be a long time before such a system could be rolled out in a clinical setting. For one, the AI may have produced fewer false positives and false negatives, but it wasn't entirely free of error, and errors in computer systems can have far-reaching consequences, especially in a medical context. Malfunctioning medical equipment has killed patients in the past, and while doctors make mistakes as often as, and perhaps more often than, any AI, relying on an AI as the final arbiter of a medical diagnosis doesn't come without risk.

“We are collaborating with institutions around the world to get a sense of how the technology can be implemented into clinical practice in a productive way,” Tse said. “We don’t want to get ahead of ourselves.”
