OpenAI says it is actively working to combat the problem of “AI hallucinations” and is exploring a new way to train AI models; this research actually began before generative AI became a mainstream trend. In its published report, OpenAI acknowledges that even the most advanced AI models are prone to hallucinations: when the content is uncertain, they still tend to present it as established fact. Once reasoning is involved, a single such logical error can render the entire statement unreliable.

Source: https://unwire.pro/2023/06/01/openai-is-pursuing-a-new-way-to-fight-ai-hallucinations/ai/
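To see why one logical slip can undermine a whole chain of reasoning, consider a toy model (our illustration, not taken from OpenAI's report): if a model gets each intermediate step right with probability p, and step errors are independent, the full n-step chain is only correct with probability p^n, which decays quickly as chains grow longer. The function name and the 95% per-step figure below are illustrative assumptions.

```python
# Minimal sketch (not OpenAI's method): how per-step errors compound
# across a multi-step reasoning chain. Assumes independent step errors,
# a simplification chosen to make the decay easy to see.

def chain_accuracy(step_accuracy: float, num_steps: int) -> float:
    """Probability that an n-step chain is correct end to end,
    given independent per-step correctness."""
    return step_accuracy ** num_steps

if __name__ == "__main__":
    # Even a model that is right 95% of the time per step
    # drops well below 50% accuracy over 20 steps.
    for steps in (1, 5, 10, 20):
        overall = chain_accuracy(0.95, steps)
        print(f"{steps:>2} steps at 95% per step -> {overall:.1%} overall")
```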