Summary
Foundation Models offer many opportunities but come with critical risks that must be taken seriously. We saw how even some of the best models on the market, such as ChatGPT, GPT-4, and Vertex AI PaLM 2, can stumble occasionally.
Hallucinations can lead a model to state that an elephant landed on the moon, or to invent novels that don't exist. Risky emergent behaviors and disinformation can damage the credibility of LLMs and harm others. Influence campaigns can disrupt the classical flow of information.
Before implementing cloud platform LLMs, we need to review their privacy policies and perform cybersecurity checks.
To mitigate these risks, we went through some of the possible tools. We added a rule base to the moderation model. A knowledge base can create a relatively closed ecosystem and limit open, uncontrolled dialogs. The system can be steered with informative messages added to the prompt.
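Two of these tools can be sketched together: a rule base that screens user input before it reaches the model, and an informative system message that steers the dialog toward the knowledge base. The topic list, message text, and function names below are hypothetical illustrations, not part of any specific platform's API.

```python
# Hypothetical rule base: topics the moderation layer refuses outright.
BLOCKED_TOPICS = ["weapons", "self-harm"]

# Hypothetical informative message steering the model toward the
# knowledge base and away from open, uncontrolled dialog.
SYSTEM_MESSAGE = (
    "You are a support assistant. Answer only from the product "
    "knowledge base; if the answer is not in it, say you do not know."
)


def moderate(user_input: str) -> bool:
    """Return True if the input passes the rule base."""
    text = user_input.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)


def build_prompt(user_input: str) -> str:
    """Prepend the steering message to an input that passed moderation."""
    if not moderate(user_input):
        return "I cannot help with that topic."
    return f"{SYSTEM_MESSAGE}\n\nUser: {user_input}"


print(build_prompt("How do I reset my password?"))
```

A production system would use a trained moderation model rather than keyword matching; the rule base shown here only illustrates where such a filter sits in the pipeline.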
Finally, we saw that token management is an excellent way to control user...
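Token management can be sketched as a per-session budget that rejects requests once a limit is reached. The class below is a hypothetical illustration; it approximates token counts by whitespace splitting, whereas a real system would use the model's own tokenizer.

```python
# A minimal sketch of token management: cap the tokens a user may
# consume in one session and reject requests that exceed the budget.
class TokenBudget:
    def __init__(self, limit: int):
        self.limit = limit
        self.used = 0

    def allow(self, text: str) -> bool:
        """Charge the request against the budget; reject if over."""
        cost = len(text.split())  # crude token estimate
        if self.used + cost > self.limit:
            return False
        self.used += cost
        return True


budget = TokenBudget(limit=10)
print(budget.allow("short question"))  # → True, well within budget
print(budget.allow("word " * 20))      # → False, exceeds the budget
```

Budgets like this keep both costs and abuse in check, since a user cannot drive unbounded generation once the session limit is spent.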