#AGI Eliezer Yudkowsky: AI Is Already Beyond Control; A Full Shutdown Is the Only Path to Survival

"If anyone builds it, we all die." Eliezer Yudkowsky, the godfather of AI safety, is once again publicly calling for a complete halt to advanced AI development. His new book argues that any attempt to build superintelligence on today's techniques and understanding will lead to human extinction. Though his views are criticized as extreme, his influence on the AI field runs deep — he even helped inspire the founding of OpenAI. Today he believes that rather than fantasize about "friendly AI," we should accept a "death with dignity."

Eliezer Yudkowsky, long considered the prophet of AI doom, is going public with a renewed plea: shut it all down. In his new book *If Anyone Builds It, Everyone Dies*, he argues any superintelligence built on current techniques will wipe out humanity. Once a guiding light behind AI safety and Rationalism, Yudkowsky has turned from alignment optimism to advocating global shutdowns, warning we’re on a fatal path if action isn’t taken — now.

– **Why it matters:** Yudkowsky’s ideas influenced OpenAI and DeepMind — yet he now believes we’re too late to solve alignment.
– **The big picture:** This is a rare case where an AI insider is calling not for regulation, but for a total stop.

Full article https://www.nytimes.com/2025/09/12/technology/ai-doomer-eliezer-yudkowsky.html
