The report evaluates and compares the safety practices of leading AI companies. It aims to encourage responsible AI development, improve transparency, and identify areas of concern.

Key findings:
- Risk management varies widely across companies: some have developed initial safety frameworks, while others have not adopted even basic precautions.
- All flagship models remain vulnerable to adversarial attacks known as "jailbreaks".
- Companies' strategies for developing artificial general intelligence (AGI) are judged insufficient to ensure the safety and controllability of such systems.
- The absence of independent oversight allows companies to deprioritize safety in favor of profit.