The Challenges Amongst Cybersecurity, Privacy, and Artificial Intelligence
By Shefali Mookencherry, Chief Information Security & Privacy Officer, University of Illinois Chicago
Just the other day, I was reading about how artificial intelligence (AI) will change people's lives and jobs in ALL industries. Looking further into this claim, I've noticed that integrating AI into healthcare, research, and academic environments has been challenging. For now, traditional human intelligence is still driving artificial intelligence: many people in my cybersecurity community have reiterated that you have to keep a "human in the loop." Some individuals approach AI with open arms, while others cautiously watch to see what will happen in this artificial intelligence race. The big question is, "When will we (humans) trust AI?" With that question in mind, what follows is a high-level discussion of some of the challenges amongst cybersecurity, privacy, and AI in healthcare, research, and academia.
HEALTHCARE
Imagine going to the hospital and being greeted by a friendly-looking, AI-powered robot that says, "Hello. Where can I guide you to?" Would you be afraid, skeptical, or just answer the robot? Healthcare organizations are approaching AI with caution while also testing whether it is innovative enough to realize cost savings, deliver more efficient services, and improve brand recognition.
Implementing AI may signal that an organization is creative, adaptive, and a trendsetter. Healthcare organizations may implement AI for many purposes, such as medical imaging, medical coding, and digital medical assistance, providing faster healthcare and improving its quality. The real test, though, will be whether patients trust a medical diagnosis from an AI-integrated application or system as much as one from a human doctor or clinician using conventional medical equipment. The good news is that, right now, humans have a choice.
Once AI is fully integrated into healthcare operations, information systems, services, and communications, the choice of being diagnosed by a human doctor or nurse may become outdated. Is there research showing whether the misdiagnosis rate is the same, better, or worse when a diagnosis comes from AI rather than from a human? Some healthcare organizations may offer AI-assisted surgical robotics as a way to perform less invasive procedures that might otherwise have required open surgery. This approach could provide clinical benefits to patients, including reduced pain, faster recovery, and lower risk of infection after surgery.
On the other hand, what if AI vendors kept all of your health information in addition to the healthcare organization holding it? Would this violate your privacy? Patient information may consist of sensitive and/or confidential Personally Identifiable Information (PII), including health history, identifiers, and payment data, which are protected by the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA). Privacy becomes a concern when healthcare consumers are not informed of where their information is stored, or when it is disclosed or modified without their consent. The healthcare organization and the AI vendor will need to assess the privacy and security of patient information as part of fulfilling requirements driven by patient requests, operations, compliance, and regulations.
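To make the data-sharing concern concrete, here is a minimal sketch, assuming an organization wants to scrub a few obvious identifiers from free-text notes before the text ever reaches an external AI service. The patterns, placeholder format, and sample note are all illustrative; real HIPAA de-identification covers eighteen identifier categories and should rely on vetted tooling, not ad hoc regular expressions.

    import re

    # Illustrative patterns for a few common identifiers. This is not a
    # compliant de-identification method; it only shows the general idea
    # of scrubbing text before it leaves the organization.
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with labeled placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    note = "Patient MRN: 1048276, phone 312-555-0142, email jdoe@example.com."
    print(redact(note))
    # Patient [MRN REDACTED], phone [PHONE REDACTED], email [EMAIL REDACTED].

A sketch like this only reduces what an AI vendor sees; it does not answer the consent and storage questions above, which remain contractual and regulatory matters.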
RESEARCH
What if researchers could identify the appropriate drugs to test against diseases with the help of AI applications and systems? Can researchers trust the results of the AI prompts they are using? What if the AI misbehaved, and how could anyone tell the difference? Would AI models be better used for hypothesis testing? These questions add to the uncertainty about how cybersecurity and privacy concerns around AI use can be addressed. As an example, consider the use of AI in scientific writing. Much scientific writing may include sensitive, regulated, protected, or proprietary information. Because of how Large Language Models (LLMs) are trained, the shift in language they introduce may impose literary limitations: an LLM may make word choices that do not reflect the researcher's own voice. At that point, the researcher can either wordsmith the AI-generated text or accept it and produce research documentation more quickly. And what if the LLM pulled patient information into the text to justify the research? Would the researcher know they had to obtain the patient's consent before using that data? These days, we have Institutional Review Boards (IRBs) to help discern this. Researchers should work with their IRB to evaluate how AI-generated content can be used, how it must be secured, and what privacy practices must be implemented.
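As a minimal sketch of what "consent before use" can look like in practice, the example below filters a research cohort on a hypothetical consent flag before any record text is passed to an LLM. The field names and the consent model are assumptions for illustration; a real pipeline would follow the IRB-approved protocol and the institution's data governance rules.

    from dataclasses import dataclass

    @dataclass
    class PatientRecord:
        patient_id: str
        consented_to_research: bool  # hypothetical flag recorded under an IRB-approved protocol
        notes: str

    def records_cleared_for_ai(records: list[PatientRecord]) -> list[PatientRecord]:
        """Keep only records whose subjects consented to research use.

        Records without documented consent never reach an LLM prompt
        or any downstream AI pipeline.
        """
        return [r for r in records if r.consented_to_research]

    cohort = [
        PatientRecord("P-001", True, "Follow-up imaging shows improvement."),
        PatientRecord("P-002", False, "Declined research participation."),
    ]
    for record in records_cleared_for_ai(cohort):
        print(record.patient_id)  # prints only P-001

The point is not the code itself but the control: consent is checked programmatically at the boundary, rather than left to each researcher's memory.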
ACADEMIA
Linguistic patterns in academic learning materials may already reveal where AI has been adopted. Disciplines like electrical engineering and systems development are exploring strategies for teaching responsibly with AI, while disciplines like mathematics, physics, or the natural sciences may see more conservative shifts. AI can be a strong facilitator in speeding up learning. It also raises ethical issues when used to produce abstracts and scientific papers; some publishers may consider such material plagiarized and unethical. With AI's increasing influence on academic publications, the academic community needs to address the resulting security and privacy implications. AI is a useful tool that promotes and facilitates research activities, but it may compromise confidentiality and integrity.
In summary, AI adoption could slow down due to the large amount of data in AI systems, which raises the possibility of security and privacy incidents or breaches. Humans will need to assess, individually and socially, whether organizations should offer AI services on an opt-in or opt-out basis. At a minimum, healthcare, research, and academic organizations will need to address responsible and acceptable AI use and disclosure based on operational needs, regulations, ethics, compliance, and user/patient choices. As with any automation, understanding what we are gaining versus what we are losing will remain a continuing discussion as AI evolves.