🚨 GPT-5 And The Mirage of AGI · OpenAI’s New Open-Weight Models · And More. The 225th edition of my newsletter is out, featuring the news, papers, and ideas that will help you understand the AI governance zeitgeist:
1. The news you cannot miss
– The most significant AI news this week was the launch of GPT-5, OpenAI’s new AI model, which Sam Altman described as being like having a “PhD-level expert in all areas” (you can look at the map below and judge its geography skills yourself; I guess this “expert” cheated in all its geography lessons).
– From a technological perspective, there was considerable anticipation and speculation, since Sam Altman had written that OpenAI was confident it knew how to build AGI. To learn more about GPT-5’s technical shortcomings, you can read Gary Marcus and Émile P. Torres’ essays.
– From a privacy perspective, I noticed a blatant disregard for privacy by design. During the live stream, one of OpenAI’s employees showcased an extremely risky use case of agentic AI capabilities, stating that she had given ChatGPT access to her Gmail and Google Calendar and “was using it daily to plan her schedule.” This use case is precisely what Sam Altman previously said people should avoid, which raises questions about how much we should trust the company.
– From a security perspective, according to Security Week, red teams managed to jailbreak GPT-5 with ease, guiding it, for example, to produce a step-by-step manual for creating a Molotov cocktail. They warned that GPT-5 is not suitable for enterprise use.
– OpenAI also dominated the headlines this week with the launch of its first two open-weight AI models. As I mentioned earlier, with the rapid rise of DeepSeek and other competitive Chinese AI models, there has been growing pressure on OpenAI to enter the “open” space. Immediately after launch, OpenAI’s models were already trending 1st and 2nd on Hugging Face.
– The EU released a list of companies that have so far signed the Code of Practice for providers of general-purpose AI models, and Apple is notably missing. When one of the world’s leading tech companies declines to sign a code of practice that does little more than mirror the EU AI Act’s provisions, to me (as a lawyer), it signals that the company is probably already planning to legally challenge the EU AI Act.
– According to a recent noyb survey, only 7% of users want Meta to use their personal data for AI. The survey data raises questions about Meta’s practices and the fairness of relying on GDPR’s legitimate interest legal basis to process personal data in the context of AI training.
(…CONTINUES BELOW…)
👉 Never miss my essays and curations: join my newsletter’s 72,300+ subscribers using the link below.
👉 To learn more about AI’s legal and ethical challenges, join the 24th cohort of my AI Governance Training in October (link below).