Hallucinations Explained
Hallucinations, outputs from Large Language Models (LLMs) such as GPT-3 and GPT-4 that are factually incorrect or ignore the given context, pose real risks when professionals rely on them without verification. The issue was recently highlighted by two incidents: a US lawyer filed a court brief containing fabricated case citations generated by an LLM, and an Australian mayor was falsely implicated in a bribery scandal by a model's output. While such hallucinations expose the limitations of LLMs, they do not render the models unusable or irrelevant.