OpenAI just released GPT-5!
🔗 Blog post: https://lnkd.in/e_h7MNdG
🔗 System card: https://lnkd.in/epYtkRjw
▶︎ How does GPT-5 work?
GPT-5 uses a routing system to balance speed and depth. Simple queries go to a fast model; complex ones trigger a slower, reasoning model. It also monitors its own chain-of-thought in real time to catch problems like deception or hallucination.
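The routing idea can be sketched in a few lines of Python. This is purely illustrative: OpenAI has not published GPT-5's actual router, so the complexity heuristic, the keyword list, and the threshold below are all invented for the sketch.

```python
# Toy model router: simple queries go to a fast model, complex ones
# to a slower reasoning model. Heuristic and threshold are invented;
# the real router is not public.

def estimate_complexity(query: str) -> float:
    """Crude proxy: longer, multi-step questions score higher."""
    signals = ["prove", "derive", "step by step", "debug", "why"]
    score = min(len(query) / 500, 1.0)
    score += 0.2 * sum(s in query.lower() for s in signals)
    return min(score, 1.0)

def route(query: str, threshold: float = 0.5) -> str:
    """Pick a model tier based on estimated query complexity."""
    if estimate_complexity(query) >= threshold:
        return "reasoning-model"
    return "fast-model"

print(route("What's the capital of France?"))  # fast-model
print(route("Prove step by step that sqrt(2) is irrational, "
            "then debug why my attempted proof fails"))  # reasoning-model
```

In production such a router would more likely be a learned classifier than a keyword heuristic, but the trade-off is the same: spend reasoning-model tokens only where depth pays off.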
▶︎ How good is it?
GPT-5 outperforms previous models on coding, math, and medical benchmarks. It uses fewer tokens on average by routing efficiently. In tests, it gives more accurate and detailed answers with fewer errors.
▶︎ What’s special about it?
GPT-5 introduces “safe completions”, which aim to answer safely rather than just refusing. It also monitors reasoning internally, which helps reduce sycophancy and deception. Together, these shift safety from input filtering to output quality.
▶︎ How dangerous is the model?
OpenAI classified GPT-5's reasoning mode as “High” capability in the biological and chemical domain. External red-teamers could reproduce risky outputs under certain conditions. These findings triggered extra safeguards before release.
▶︎ What safety measures were taken?
Access to risky features is gated through a Trusted Access Program and usage throttles. The model is trained to give safe answers even on sensitive topics. Red teaming, output monitoring, and post-launch audits are used to catch remaining issues.
👉 Subscribe to the *AI Risk Management Newsletter* for weekly updates: https://lnkd.in/ezEwqVwn