AI Enables a Paradigm Shift in Healthcare


By Patrick Bangert, VP of Artificial Intelligence, Samsung SDS

Artificial intelligence is a little like having a resident doctor in a hospital. Over time, the resident goes from having theoretical knowledge to gradually acquiring practical abilities. There comes a point, after enough supervision by attending physicians, at which the resident is allowed to perform some tasks unsupervised. If successful, the resident is given more autonomy until they graduate and become a full doctor.

The trick in teaching residents is to expose them to the right experiences at the right time. We don’t want to overwhelm them with overly complex tasks or rare cases too early, nor do we want to repeat the same thing over and over again. It’s a matter of carefully selecting the training cases. This is, of course, what schools, universities, and corporate training programs are good at.

Teaching AI is a little different. There is currently no known way to “explain” general concepts to AI. We must present AI with examples of the task we want it to perform – generally, orders of magnitude more examples than any human would require. These examples must be clearly labeled, not only so that AI knows what each example represents but also so that AI can tell whether it has been successful. A resident has contextual understanding and knows that telling a healthy person they have cancer (a false positive) and telling a cancer patient they are healthy (a false negative) are mistakes that carry very different risks; AI has no such awareness of context and must be explicitly taught about such consequences.
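
To make that concrete, here is a minimal sketch, purely my illustration rather than a description of any particular product, of how asymmetric error costs can be encoded during training so that a missed cancer is penalized more heavily than an unnecessary follow-up. The data, the 10:1 weighting, and the threshold are all synthetic and illustrative.

```python
# Illustrative sketch: encode asymmetric error costs so that a false
# negative (missed cancer) is penalized more than a false positive
# (a healthy person flagged for review). Data and weights are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((1000, 5))                      # stand-in for imaging features
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)      # 1 = "cancer present" (toy rule)

# class_weight tells the learner that errors on class 1 cost ten times
# more than errors on class 0; the 10:1 ratio is purely illustrative.
model = LogisticRegression(class_weight={0: 1, 1: 10}).fit(X, y)

# At prediction time the decision threshold can also be set conservatively,
# so borderline cases are escalated to a human reader rather than dismissed.
risk = model.predict_proba(X)[:, 1]
flag_for_review = risk > 0.2                   # illustrative threshold
```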

There are two primary obstacles when teaching an AI system: (1) assembling enough data for AI to reach a reasonable accuracy, and (2) inserting sufficient domain knowledge, in the form of labels, into the data. The process of training the model itself is well understood, and software frameworks and tools are available for it. One might compete on the last few tenths of a percent by fine-tuning the model or the algorithm, but the accuracy and generality of the models depend largely on the quality and diversity of the data.
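
As an illustration of how routine the model-fitting step has become, the sketch below fits and evaluates a model in a few lines of standard tooling; the synthetic dataset and the particular library choices are assumptions, and the point is that the data, not this code, determines the outcome.

```python
# Illustrative sketch: with labeled data in hand, fitting and evaluating a
# model takes only a few lines of standard tooling. The synthetic dataset
# below stands in for real, carefully labeled clinical data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

# The framework handles the optimization; the accuracy and generality of
# the result are bounded by how representative and well labeled X and y are.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(scores.mean())
```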

Diversity, equity, and inclusion (DEI) apply to data as well as to humans! Duplicating a data point adds no information. Most real-world datasets over-emphasize the average case, so we must make a concerted effort to include more examples of so-called edge cases so that AI has a chance to learn from them. We can do this by proactively generating such data points when we are able; most of the time we cannot, and must instead selectively sample such points from a large pool of data, as sketched below.
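
One possible sketch of such selective sampling, under the simplifying assumption that atypical points can be scored by their distance from the bulk of the data; real systems use more sophisticated criteria, and the pool and budget here are made up.

```python
# Illustrative sketch: selectively sample candidate edge cases from a large
# unlabeled pool instead of duplicating average examples. Scoring points by
# distance from the mean is a deliberately simple stand-in for real criteria.
import numpy as np

rng = np.random.default_rng(0)
pool = rng.normal(size=(100_000, 16))          # stand-in for a large data pool

center = pool.mean(axis=0)
distance = np.linalg.norm(pool - center, axis=1)

budget = 500                                   # labeling budget per round
edge_idx = np.argsort(distance)[-budget:]      # the most atypical points
typical_idx = np.argsort(distance)[:budget]    # a matching set of average points

# Mixing atypical and typical cases gives the model a more diverse view than
# uniform sampling, which mostly returns near-duplicates of the average case.
selected = np.concatenate([edge_idx, typical_idx])
```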

Active learning is the technical term for enabling DEI in AI. It is a human-in-the-loop process: the human labels a small number of examples, an AI selection system intelligently chooses a few more examples for labeling, and so on. In this back-and-forth way, the human ends up labeling only 5-10% of the data while covering nearly 100% of the information contained in the entire dataset. As labeling represents over 80% of the total human time in an AI project from start to finish, this dramatically reduces the cost of doing AI and enables models that were previously uneconomical or impractical.
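
A minimal sketch of such a loop follows: label a small seed set, train, let the model pick its most uncertain cases, label those, and repeat. The dataset, the model, and the label_by_human() placeholder are all assumptions made to keep the sketch self-contained.

```python
# Illustrative sketch of a human-in-the-loop active-learning cycle.
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_by_human(indices, ground_truth):
    # Placeholder for the human labeling step; here we simply look up
    # ground truth so the sketch can run end to end.
    return ground_truth[indices]

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y_true = (X[:, :3].sum(axis=1) > 0).astype(int)

labeled = list(rng.choice(len(X), size=50, replace=False))   # small seed set
model = LogisticRegression()

for _ in range(10):
    idx = np.array(labeled)
    model.fit(X[idx], label_by_human(idx, y_true))
    uncertainty = np.abs(model.predict_proba(X)[:, 1] - 0.5)  # least confident first
    already = set(labeled)
    new = [i for i in np.argsort(uncertainty) if i not in already][:50]
    labeled.extend(new)

# After ten rounds only about 11% of the pool has been labeled, yet the
# model has been shown the examples that carry the most information.
```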

After the data is labeled and the model is trained, we want to use it. The model is now a cog in a much larger workflow that begins with generating a new data point, transferring it to the modeling computer – usually in the cloud – and communicating the result back into the human decision-making process. Throughout this flow, regulatory frameworks such as HIPAA and GDPR must be taken into account to protect data privacy, and the software systems must be kept secure, resistant to hacking, and continuously available.
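
A rough, illustrative sketch of that surrounding workflow appears below; the endpoint URL, the field names, and the de-identification rules are assumptions for the purpose of illustration, not a real hospital API.

```python
# Illustrative sketch: a new study is de-identified before leaving the
# hospital (in the spirit of HIPAA/GDPR), sent to a cloud inference
# endpoint, and the result is routed back to the clinician.
import json
import urllib.request

def deidentify(record: dict) -> dict:
    # Remove direct identifiers before the data leaves the local network.
    protected = {"patient_name", "date_of_birth", "address"}
    return {k: v for k, v in record.items() if k not in protected}

def request_prediction(record: dict, url: str) -> dict:
    payload = json.dumps(deidentify(record)).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    # In practice this hop also needs TLS, authentication, audit logging,
    # and redundancy to stay continuously available.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# result = request_prediction(new_study, "https://scoring.example-hospital.org/api")
# The result enters the clinician's worklist; it does not act on its own.
```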

The decision-making process in healthcare primarily concerns the diagnosis and the treatment plan. Any system that enters into this process is regulated by the relevant government body, such as the FDA. From a regulatory point of view, it matters greatly whether the AI system says “look here,” “this is a higher priority,” “there is a chance of X% that this is Y,” or “this is Y.” Very roughly speaking, these four correspond to an unregulated tool and to class 1, 2, and 3 medical devices, respectively.

In addition to the regulatory angle, the recipients of the output – both doctors and patients – want to understand it, and so they need an explanation. AI is often, and correctly, accused of being a black box, so some current AI research focuses on explainability. There are methods for extracting an explanation from a classic AI model, and there are models that are inherently explainable (known as interpretable models). Both help to engender trust among doctors and patients. Trust is perhaps the biggest obstacle to widespread AI adoption by the general public, especially in a topic as near and dear to our hearts as our own health. It is curious but true that our tolerance for mistakes is much higher for human doctors than for AI-based systems.
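
As an illustration of these two routes to an explanation, the sketch below extracts a post-hoc explanation from a black-box model via permutation importance and, separately, reads the coefficients of an inherently interpretable linear model; the data and model choices are synthetic and illustrative.

```python
# Illustrative sketch of both routes: a post-hoc method (permutation
# importance) applied to a black-box model, and an inherently interpretable
# linear model whose coefficients can be read directly.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# Post-hoc route: ask which inputs the black-box model actually relied on.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
explanation = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
print(explanation.importances_mean)

# Interpretable route: coefficients state, feature by feature, how each
# input pushes the predicted risk up or down.
glass_box = LogisticRegression(max_iter=1000).fit(X, y)
print(glass_box.coef_)
```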

The primary benefit of AI, in general, lies in automation. It cannot replace a physician, but it can help by performing repetitive or rudimentary tasks, allowing the physician to be less of a data analyst and more of a genuine doctor. This automation lowers the cost of the associated processes while raising the accuracy of the results. In turn, the increased efficiency can be used to scan more frequently and spot conditions earlier. All in all, medical outcomes will improve thanks to the increased accuracy and earlier detection.

______________________
Author Bio – Patrick heads the AI Division at Samsung SDSA. On the AI Engineering side, he is responsible for Brightics AI Accelerator, a product for distributed ML training and automated ML, and AutoLabel, an automatic image data annotation and modeling tool aimed primarily at the medical imaging community. On the AI Sciences side, he leads the consulting group that builds AI models for customers across many diverse use cases. He also acts as a visionary for the future of AI at Samsung.

Before joining Samsung, Patrick spent 15 years as CEO of Algorithmica Technologies, a machine learning software company serving the chemicals and oil and gas industries. Prior to that, he was assistant professor of applied mathematics at Jacobs University in Germany, as well as a researcher at Los Alamos National Laboratory and NASA’s Jet Propulsion Laboratory. Patrick obtained his PhD in mathematics, focused on machine learning, and his master’s in theoretical physics from University College London. A German native, Patrick grew up in Malaysia and the Philippines, and later lived in the UK, Austria, Nepal, and the USA. He has done business in many countries and believes that AI must serve humanity beyond the mere automation of routine tasks. An avid reader of books, Patrick lives in the San Francisco Bay Area with his wife and two children.