Claude Sonnet 4.5 is available everywhere today. Through the API, the model maintains the same pricing as Claude Sonnet 4: $3 per million input tokens and $15 per million output tokens. Developers can access it through the Claude API using “claude-sonnet-4-5” as the model identifier.
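Those rates make per-request costs easy to estimate. A minimal sketch using the model identifier and pricing from the announcement (the `estimate_cost` helper and the example token counts are illustrative, not part of Anthropic's API):

```python
# Estimating request cost for claude-sonnet-4-5 at the announced rates:
# $3 per million input tokens, $15 per million output tokens.
MODEL_ID = "claude-sonnet-4-5"
INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one request, in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical request: 2,000 input tokens, 500 output tokens.
cost = estimate_cost(2_000, 500)
print(f"{MODEL_ID}: ${cost:.4f}")  # prints "claude-sonnet-4-5: $0.0135"
```

At these prices, a million-input-token workload costs $3 before any output is generated, which is why the per-direction rates are usually quoted separately.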
Other new features
Several ancillary features of the Claude family also received upgrades today. For example, Anthropic added code execution and file creation directly within conversations for users of Claude’s web interface and dedicated apps. Along those lines, users can now generate spreadsheets, slides, and documents without leaving the chat interface.
The company also released a five-day research preview called “Imagine with Claude” for Max subscribers, which demonstrates the model generating software in real time. Anthropic describes it as “a fun demonstration showing what Claude Sonnet 4.5 can do” when combined with appropriate infrastructure.
A screenshot of the available Anthropic AI models for Claude Max users seen in the Claude web interface on September 29, 2025. Credit: Benj Edwards
As mentioned above, the command-line development tool Claude Code also received several updates today, alongside the new model. The company added checkpoints that save progress and allow users to roll back to previous states, refreshed the terminal interface, and shipped a native VS Code extension. The Claude API also gains a new context editing feature and memory tool for handling longer-running agent tasks.
Right now, AI companies are clinging to software development benchmarks as proof of AI assistant capability because progress in other fields is difficult to measure objectively, and coding is a domain where LLMs have arguably shown high utility compared to fields that are more prone to confabulations. But people still use AI chatbots like Claude as general assistants. And given recent news about some users going down fantasy rabbit holes with AI chatbots, it’s perhaps more notable than usual that Anthropic claims Claude Sonnet 4.5 shows reduced “sycophancy, deception, power-seeking, and the tendency to encourage delusional thinking” compared to previous models. Sycophancy, in particular, is the tendency of an AI model to praise the user’s ideas, even when they are wrong or potentially dangerous.
We could quibble with how Anthropic frames some of those AI output behaviors through a decidedly anthropomorphic lens, as we have in the past, but overall, attempts to reduce sycophancy are welcome news in a world that has been increasingly turning to chatbots for far more than just coding assistance.