// Sam Altman’s 3-point plan:
Form a new government agency charged with licensing large AI models, and empower it to revoke licenses from companies whose models don’t comply with government standards.
Create a set of safety standards for AI models, including evaluations of their dangerous capabilities. For instance, models would have to pass safety tests showing they cannot “self-replicate” or “exfiltrate into the wild”, that is, go rogue and start acting on their own.
Require audits by independent experts of the models’ performance on various metrics. https://unwire.hk/2023/05/18/ai-15/fun-tech/?utm_source=rss&utm_medium=rss&utm_campaign=ai-15 //