AI Gibberish is when the output from an LLM or other type of Generative AI is not only incorrect but nonsensical. AI Gibberish is different from an AI Hallucination, in which a Generative AI makes up plausible-sounding information. For example, when a lawyer asked ChatGPT to write a brief, ChatGPT fabricated real-sounding case names: it had learned that cases are important in legal writing and typically follow the format NAME v. NAME or NAME v. MUNICIPALITY, and it filled in those blanks with made-up names and municipalities. That is a hallucination, because the output as a whole still made sense and was coherent.
AI Gibberish, by contrast, is output that is completely nonsensical. This can happen in Model Collapse scenarios, where an AI is trained on previous AI output, much like inbreeding in nature. The model at first loses subtlety and depth, and eventually produces only nonsensical text, or nearly random pixels for images. A toy sketch of this feedback loop follows.
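A minimal sketch of the mechanism, assuming a character-level bigram Markov chain as a toy stand-in for an LLM (a real Model Collapse study would retrain a neural model, but the loop is the same: each generation is trained only on text sampled from the previous generation, so rare patterns disappear first):

```python
import random
from collections import defaultdict, Counter

random.seed(0)  # reproducible runs

def train(corpus):
    """Build a character-level bigram model: next-char counts per char."""
    model = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        model[a][b] += 1
    return model

def sample(model, length):
    """Generate text by walking the bigram chain."""
    ch = random.choice(list(model))
    out = [ch]
    for _ in range(length):
        counts = model.get(ch)
        if not counts:
            ch = random.choice(list(model))  # dead end: restart anywhere
        else:
            chars, weights = zip(*counts.items())
            ch = random.choices(chars, weights=weights)[0]
        out.append(ch)
    return "".join(out)

corpus = (
    "the quick brown fox jumps over the lazy dog. "
    "she sells sea shells by the sea shore. "
) * 20

# Generation 0 trains on real text; every later generation trains
# only on the previous generation's samples -- the "inbreeding" loop.
for gen in range(6):
    model = train(corpus)
    corpus = sample(model, length=len(corpus))
    print(f"gen {gen}: {corpus[:60]!r}")
```

In a typical run, early generations still resemble fragments of the seed sentences, while later ones drift toward repetitive, meaningless character strings as low-frequency transitions are sampled away, a small-scale analogue of the collapse described above.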
#AI #AI Gibberish #AI Hallucination #chat model #chatgpt #LLM #Model Collapse