By Sha Edathumparampil, Corporate Vice President – Digital & Data | Chief Data Officer, Baptist Health
Since its debut in late 2022, ChatGPT has made headlines across business and technology news, showcasing the potential of AI, and generative AI in particular, to transform how companies work and conduct their business. However, most of the coverage focuses on the future disruptive potential of these technologies rather than the tactical considerations companies should weigh today as they adopt them.
This is not entirely surprising, considering that most currently available solutions are still in beta, with new entrants joining the race almost daily. However, adoption of generative AI is growing faster than that of any other technology in recent memory; ChatGPT crossed the one-million-user mark in its first five days. Judging by posts and stories on LinkedIn, employees at many companies are at least trying out, if not actively using, these new text, code, and image generation capabilities.
Given the transformative nature of generative AI and the pace of its adoption, most companies have yet to create a strategy or implement governance guardrails to prevent negative outcomes. This article offers a guide to help businesses navigate the potential risks and prepare to take advantage of generative AI's transformative power.
- Security & Privacy
There have been reports that Samsung employees used generative AI tools such as ChatGPT to generate and review code, resulting in the leakage of sensitive information, including source code and internal meeting notes. The incident highlights the need for companies to understand the risks of using generative AI tools and to implement appropriate security measures to protect sensitive information.
Most generative AI models use the data they receive from users for further training and accuracy improvements. Given the right prompts or queries, they may repeat or recreate that input data for other users. Companies should therefore be cautious when using generative AI tools and put proper review processes in place to prevent potential security breaches.
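One concrete form such a review process can take is automated redaction of prompts before they leave the company. The sketch below, with illustrative patterns only (real deployments would match an organization's own data categories and likely pair this with human review), scrubs obvious secrets from text headed to an external service:

```python
import re

# Hypothetical patterns; extend these to match your organization's
# actual categories of sensitive data (IDs, project names, etc.).
SENSITIVE_PATTERNS = [
    # Email addresses
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    # Runs of 13-16 digits (payment-card-like numbers)
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),
    # Obvious credentials such as "api_key=..." or "password: ..."
    (re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"), "[CREDENTIAL]"),
]

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the company."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

Pattern matching of this kind catches only the obvious leaks; it is a first filter, not a substitute for policy and training.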
Many companies already have internal review programs and processes for adopting open-source technologies, which can serve as a strong starting point for assessing generative AI solutions. Any code generated by AI must be thoroughly reviewed; there is a well-documented risk of bad actors teaching AI models to generate malicious code that may be exploited.
Enterprise versions of some of these technologies are starting to emerge, such as Azure OpenAI Service, with features that allow businesses to better manage data access, privacy, and security. If a public, open version of a generative AI product must be used for business-critical use cases, it is possible to place that service behind your own interface (UI or API) that applies sufficient controls and reviews any data or request before sending it to the public service.
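The abstraction described above can be as simple as an internal gateway that vets each request before forwarding it. A minimal sketch, assuming a `forward` callable that wraps the external service and a term blocklist standing in for real review controls:

```python
from typing import Callable

class ReviewedGateway:
    """Internal proxy that vets requests before they reach a public
    generative AI service. The blocklist check and audit log here are
    placeholders for an organization's real review and logging controls."""

    def __init__(self, forward: Callable[[str], str], blocked_terms: list[str]):
        self.forward = forward  # the call that actually hits the external service
        self.blocked_terms = [t.lower() for t in blocked_terms]
        self.audit_log: list[str] = []

    def submit(self, prompt: str) -> str:
        lowered = prompt.lower()
        for term in self.blocked_terms:
            if term in lowered:
                # Record and reject rather than silently forwarding
                self.audit_log.append(f"BLOCKED: {prompt!r}")
                raise ValueError(f"Prompt rejected: contains '{term}'")
        self.audit_log.append(f"FORWARDED: {prompt!r}")
        return self.forward(prompt)
```

Because every request flows through one choke point, the gateway is also a natural place to add the redaction, monitoring, and spending controls discussed elsewhere in this article.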
It is also essential for companies to conduct legal reviews of the terms and conditions associated with any new solutions they plan to implement.
- Cost
Publicly available generative AI SaaS services have pricing models similar to those of public cloud providers like AWS, Azure, and GCP. However, the usage pattern of generative AI differs from typical SaaS services: users tend to iterate on and debate responses to the same prompt multiple times, which can quickly run up large bills without the user realizing it. Additionally, some models limit how much data they can handle at once, so users may need to break a problem into several smaller ones, increasing the cost risk. To mitigate these risks, it's important to implement user training, usage monitoring, alerting, and spending limits.
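A per-user spending limit with an early-warning alert can be tracked with very little code. The numbers below are illustrative, not any vendor's actual rates:

```python
class UsageBudget:
    """Minimal per-user token budget with an alert threshold.
    Prices and limits here are illustrative placeholders."""

    def __init__(self, monthly_limit_usd: float, price_per_1k_tokens: float = 0.002):
        self.limit = monthly_limit_usd
        self.price = price_per_1k_tokens
        self.spent = 0.0
        self.alerts: list[str] = []

    def record(self, tokens: int) -> bool:
        """Record a request; return False (and raise an alert) once the
        request would push spending past the monthly limit."""
        cost = tokens / 1000 * self.price
        if self.spent + cost > self.limit:
            self.alerts.append(f"Limit reached: ${self.spent:.2f} spent")
            return False
        self.spent += cost
        if self.spent > 0.8 * self.limit:
            # Early warning well before the hard cutoff
            self.alerts.append(f"80% of budget used: ${self.spent:.2f}")
        return True
```

In practice this check would live in the same internal gateway that reviews outbound prompts, so a runaway iteration loop is stopped before it bills.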
For image generation use cases such as graphic design, high-end desktop computers designed for gaming or content creation, equipped with recent graphics cards (e.g., NVIDIA RTX 30 or 40 series), may be sufficient. Running locally also reduces the risk of sending proprietary information to a third-party website or service.
For companies that want to build their own specialized generative models that extend upon the standard model’s capabilities, it may be possible to use existing infrastructure, such as on-premises or cloud-based GPU clusters, originally acquired for training machine learning models.
However, they need to consider additional costs, such as data acquisition and labeling, as well as the challenge of finding and recruiting experts in deep neural network development. Technology services and outsourcing companies may offer solutions to help optimize these costs.
- Reliability & Performance
Generative AI models are complex, and there is a significant risk that they will produce inaccurate or even malicious results. They are also computationally expensive, and currently available SaaS offerings frequently suffer degraded performance or outright outages, which is perhaps not surprising given the exponential growth in usage. This means they may not be suitable for real-time applications or applications that require high performance. There are a few things you can do to mitigate these risks.
A well-thought-out testing strategy and plan (as for other digital products and solutions you may have) can help manage or reduce risk. But the creative nature of generative AI makes it difficult to anticipate all possible usage scenarios, so building an adequate testing strategy may prove elusive until there is sufficient data and metrics around usage and performance. In that scenario, it is important to set the right risk and quality expectations with the users of such a system.
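Because generative output is non-deterministic, such a testing strategy usually checks properties of the output rather than exact strings. A minimal sketch, where `generate` stands in for whatever function wraps your model:

```python
def run_prompt_checks(generate, cases):
    """Run a suite of (prompt, check, label) cases against a
    text-generation callable. Each check inspects properties of the
    output instead of demanding an exact match, since generative
    output varies from run to run. Returns the labels that failed."""
    failures = []
    for prompt, check, label in cases:
        output = generate(prompt)
        if not check(output):
            failures.append(label)
    return failures

# Stubbed model for illustration; a real suite would call your service.
stub = lambda p: "Our refund policy allows returns within 30 days."
cases = [
    ("Summarize the refund policy", lambda o: "30 days" in o, "mentions window"),
    ("Summarize the refund policy", lambda o: len(o) < 500, "concise"),
]
```

Running the suite on every model or prompt-template change gives an early signal of regressions, and the failure labels feed naturally into the usage metrics the paragraph above calls for.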
- Ethics & Bias
Generative AI models can amplify ethical and bias-related risks that originate in the data used to train them. Their ability to produce entirely new content that may be offensive, harmful, or biased multiplies these risks. At present, AI fundamentally lacks the judgment and ability to detect such issues.
To mitigate these risks, it is important to define a governance framework and a set of guidelines specific to your business; there is no one-size-fits-all approach. Being transparent about when AI is used to create content is equally important, as it allows users to stay vigilant and report any issues or concerns they observe. Monitoring actual usage of these solutions, with alerting for content that may fall outside the guidelines, is another way to manage risk. In general, the better educated your employees and customers are about the use and potential problems of generative AI solutions, the better your ability to manage risk and prevent major issues.
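The monitoring-and-alerting idea can be sketched as a scan over generated outputs. A real deployment would use a trained classifier; the substring matching below only illustrates the shape of the control:

```python
def flag_outputs(outputs, disallowed_terms):
    """Return the generated outputs that trip the guideline term list,
    for routing to a human reviewer or an alerting channel.
    Substring matching is a stand-in for a real content classifier."""
    terms = [t.lower() for t in disallowed_terms]
    flagged = []
    for item in outputs:
        lowered = item.lower()
        if any(t in lowered for t in terms):
            flagged.append(item)
    return flagged
```

Whatever the detection method, the key design point is the same: generated content is logged and screened after the fact, so issues surface even when they slip past up-front controls.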
In conclusion, while generative AI holds tremendous potential to transform various industries, it also presents significant challenges and risks that should not be ignored. At a minimum, companies looking to incorporate generative AI into their operations need to consider factors such as data privacy, security, cost, reliability, and ethics. It's important to approach adoption with a clear understanding of the technology's limitations and risks, the need for proper governance and guidelines, and the need for user education. With the right strategy and approach, generative AI can be a powerful tool for innovation and differentiation, helping businesses stay ahead of the competition and better serve their customers.