Summary:
Original Link:
Original Article:
π¨ Most people missed it, but there was a shocking disregard for PRIVACY-BY-DESIGN in yesterday’s GPT-5 launch (which shows that perhaps OpenAI really doesn’t care):
While announcing GPT-5’s new agentic capabilities, Christina (screenshot below) says that OpenAI’s aspiration is to let ChatGPT get to know the user more and more over time and understand what is meaningful to each user.
She says that starting next week, some users will be able to give ChatGPT access to their Gmail and Google Calendar, and then asks ChatGPT to “help her plan her schedule for tomorrow.”
Pay attention to the details: she mentions that she has been using this feature every day to help get her life together, showcasing an extremely RISKY use case live, to millions of people.
She says she already gave ChatGPT access to her Gmail account and calendar, and shows on screen her private information, including the need to confirm a dentist appointment and an unanswered email.
Now, my comments:
First, this is a great example of AI agents’ privacy risks, as every new permission you grant to an agent (in this case, permission to access your calendar and email), you’re exponentially increasing your privacy risk.
Second, this is a risky use case, and it was irresponsible for OpenAI to broadcast it this way, as if it were the typical or desired type of behavior.
Christina’s schedule was tailored for the live stream. However, other people’s schedules are real, and there are real risks of privacy leaks, including location, financial information, and other sensitive data.
Third, OpenAI is EXTREMELY inconsistent. Less than a month ago, when announcing ChatGPT’s agentic capabilities, Sam Altman posted on X:
“There is more risk in tasks like ‘Look at my emails that came in overnight and do whatever you need to do to address them, donβt ask any follow up questions’. This could lead to untrusted content from a malicious email tricking the model into leaking your data.
We think itβs important to begin learning from contact with reality, and that people adopt these tools carefully and slowly as we better quantify and mitigate the potential risks involved. As with other new levels of capability, society, the technology, and the risk mitigation strategy will need to co-evolve.”
Well, OpenAI has just done the exact opposite yesterday. It announced GPT-5’s agentic capabilities by promoting a risky agentic use case.
It became clearer that what Sam Altman writes online is scripted or legally reviewed for PR purposes, but does not necessarily reflect OpenAI’s plans.
The CEO says he cares in a PR-tailored tweet, then launches a product promoting the exact risky use case they told people to avoid…
–
π Next week, I will publish my thoughts on the launch of GPT-5 in my newsletter. To receive it, join 72,100+ subscribers using the link below.
π To learn more about AI’s legal and ethical challenges, join the 24th cohort of my AI Governance Training in October (link below).