What is fine-tuning and why does it matter?
General-purpose LLMs such as GPT-3 have inherent problems: they can produce outputs that are factually false, toxic, or negative in sentiment. This stems from how these models are trained; the objective is to predict the next word over vast amounts of internet text, not to safely and reliably accomplish the language task the user intended. In essence, these models are not aligned with their users' objectives. The sketch below illustrates the gap.
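To see what "predicting subsequent words" means in practice, here is a minimal sketch using the Hugging Face transformers library and the public GPT-2 checkpoint (my choice for illustration; any base, non-instruction-tuned model would behave similarly). Given an instruction-style prompt, a base model simply continues the text with likely next tokens rather than trying to follow the instruction:

```
# A minimal sketch, assuming the Hugging Face "transformers" library is
# installed and using the public "gpt2" base checkpoint for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# An instruction-style prompt. A base model has no notion of "answering" it.
prompt = "Explain the moon landing to a 6-year-old in a few sentences."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: the model appends whichever tokens it deems most probable
# as a continuation of internet-style text, not as a response to the request.
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Run this and the base model will typically ramble on in the same register as the prompt (for instance, producing more instruction-like sentences) instead of giving a child-friendly explanation. Closing that gap between next-word prediction and user intent is exactly what fine-tuning and alignment techniques address.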
Let’s look at three cases I found in the first half of 2023 that demonstrate ChatGPT’s hallucination problem.
Case 1 – ChatGPT falsely accused an American law professor of sexual harassment, with the generated response citing a non-existent Washington News report. Had this misinformation gone unnoticed, it could have caused severe and irreparable damage to the professor’s reputation (source: https://www.firstpost.com/world/chatgpt-makes-up-a-sexual-harassment-scandal-names...