Understanding and mitigating security risks in generative AI
Whether you are an individual using generative AI and NLP LLMs such as ChatGPT, or an organization planning to adopt LLMs in your applications, there are security risks you should be aware of.
According to CNBC in 2023, “Safety has emerged as a primary concern in the AI world since OpenAI’s release late last year of ChatGPT.”
The topic of security within AI is so relevant and critical that, after ChatGPT went mainstream, US White House officials in July 2023 requested voluntary commitments on developing AI technology from seven of the top artificial intelligence companies: Microsoft, OpenAI, Google (Alphabet), Meta, Amazon, Anthropic, and Inflection. The commitments were part of an effort to ensure AI is developed with appropriate safeguards while not impeding innovation. They included the following:
- Developing a way for consumers to...