A.I. Took a Test to Detect Lung Cancer. It Got an A.

Asked if artificial intelligence would put radiologists out of business, Dr. Topol said, “Gosh, no!”

The idea is to help doctors, not replace them.

“It will make their lives easier,” he said. “Across the board, there’s a 30 percent rate of false negatives, things missed. It shouldn’t be hard to bring that number down.”

There are potential hazards, though. A radiologist who misreads a scan may harm one patient, but a flawed A.I. system in widespread use could injure many, Dr. Topol warned. Before they are unleashed on the public, he said, the systems should be studied rigorously, with the results published in peer-reviewed journals, and tested in the real world to make sure they work as well there as they did in the lab.

And even if they pass those tests, they still have to be monitored to detect hacking or software glitches, he said.

Shravya Shetty, a software engineer at Google and an author of the study, said, “How do you present the results in a way that builds trust with radiologists?” The answer, she said, will be to “show them what’s under the hood.”

Another issue: if an A.I. system is approved by the F.D.A. and then, as expected, keeps changing as it gains experience and processes more data, will its maker need to apply for approval again? If so, how often?

The lung-screening neural network is not ready for the clinic yet.

“We are collaborating with institutions around the world to get a sense of how the technology can be implemented into clinical practice in a productive way,” Dr. Tse said. “We don’t want to get ahead of ourselves.”