All it takes is three environment variables set in the ~/.claude/settings.json config file to regain access to Claude Opus 4.6 and Opus 4.5 in the /model selector.
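For reference, Claude Code reads environment variables from an `env` block in settings.json. Below is a minimal sketch of that shape only; the variable names and model IDs shown are placeholders, not necessarily the three variables the post refers to, so substitute the ones from the post and the docs.

```json
{
  "env": {
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "<opus-4.6-model-id>",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "<sonnet-model-id>",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "<haiku-model-id>"
  }
}
```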
Anthropic’s Claude Opus 4.7 prompting guide notes that prompt steering can affect Opus 4.7 more than previous Opus models: the model calibrates its effort to task complexity and lets the prompt shape its extended reasoning.
I ran benchmarks of 200 headless Claude Code sessions comparing the Opus 4.6 and Opus 4.7 1M-context models across effort levels and prompt-steering variants (concise, step-by-step, ultrathink), measuring how each affects token usage, cost, and instruction-following performance.
Claude Code now exposes a reasoning_effort knob with five public rungs: low, medium, high, xhigh, max. The pitch is simple. Higher effort means more thinking, which means better answers on hard problems.
The unasked question is what that knob actually costs, in tokens and dollars, and whether the crank behaves the same way across different models. I spent an afternoon of subscription quota finding out.
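To make the setup concrete, here is a minimal sketch of the kind of harness behind numbers like these: it runs Claude Code in headless print mode with JSON output and records the reported usage and cost per effort level. The REASONING_EFFORT variable is a placeholder for however the effort knob is actually exposed on your build, and the output field names should be verified against `claude -p --output-format json` locally.

```python
import json
import os
import subprocess

EFFORT_LEVELS = ["low", "medium", "high", "xhigh", "max"]
PROMPT = "Refactor utils.py and explain each change."  # stand-in benchmark task

for effort in EFFORT_LEVELS:
    # REASONING_EFFORT is a placeholder name for however the effort knob is
    # actually set on your version (flag, env var, or settings key).
    env = {**os.environ, "REASONING_EFFORT": effort}

    # Headless (print-mode) Claude Code run; --output-format json returns one
    # JSON object containing the answer plus usage and cost metadata.
    proc = subprocess.run(
        ["claude", "-p", PROMPT, "--output-format", "json"],
        capture_output=True, text=True, check=True, env=env,
    )
    data = json.loads(proc.stdout)

    # Field names are what recent builds emit; treat them as assumptions.
    print(effort, data.get("total_cost_usd"), data.get("usage"))
```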
I created this Claude Code session-metric skill plugin so that I could get insight into token and cost usage at both the project level and the individual chat session level.
Some Claude Code users still report hitting their 5-hour session limits prematurely, and I’m always curious how their usage patterns differ from mine. So I’m hoping this session-metrics skill becomes a useful tool for others as well.
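For the curious, the raw data the skill works from lives on disk: Claude Code writes one JSONL transcript per chat session, grouped by project, under ~/.claude/projects/. Here is a rough sketch of totalling tokens per project from those logs; the directory layout and the per-message usage field names are assumptions about how current builds write their transcripts, so check them on your machine.

```python
import json
from collections import Counter
from pathlib import Path

# Claude Code keeps one JSONL transcript per chat session, grouped by project.
LOG_ROOT = Path.home() / ".claude" / "projects"

project_tokens: Counter[str] = Counter()
for transcript in LOG_ROOT.glob("*/*.jsonl"):
    project = transcript.parent.name
    for line in transcript.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        msg = entry.get("message")
        # Assumption: assistant entries carry an API-style usage block.
        usage = msg.get("usage", {}) if isinstance(msg, dict) else {}
        project_tokens[project] += sum(
            usage.get(k, 0)
            for k in ("input_tokens", "output_tokens",
                      "cache_creation_input_tokens", "cache_read_input_tokens")
        )

# Projects ranked by total token volume.
for project, tokens in project_tokens.most_common():
    print(f"{tokens:>12,}  {project}")
```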
Boris Cherny, creator of Claude Code, posted a six-part thread on Threads on how he and his team get the most out of Opus 4.7. The tips are small on their own but coherent together. I went through each one, cross-checked it against the Claude Code docs, the migration guide, and the Opus 4.7 announcement, and pulled out what I think actually matters.
I wanted per-turn visibility at the individual Claude Code chat session level. So I built a Claude Code skill, sessions-metric, that reads Claude Code’s raw conversation logs and breaks down every response at the project and session level.
There are other popular usage tools (ccusage, ccburn, Claude-Code-Usage-Monitor, codeburn, etc.), but none of them also operate at the individual Claude Code chat session level.
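To illustrate that per-session angle, here is a sketch that walks a single session transcript turn by turn and prints each assistant response’s model and token counts. The entry shape (a `type` field plus an API-style `message.usage` block) is an assumption about how Claude Code currently writes these JSONL logs.

```python
import json
import sys
from pathlib import Path

# Usage: python turn_report.py ~/.claude/projects/<project>/<session-id>.jsonl
transcript = Path(sys.argv[1]).expanduser()

for n, line in enumerate(transcript.read_text(encoding="utf-8").splitlines(), 1):
    if not line.strip():
        continue
    entry = json.loads(line)
    if entry.get("type") != "assistant":
        continue  # assumption: only assistant entries carry usage data
    msg = entry.get("message") or {}
    usage = msg.get("usage", {})
    print(
        f"line {n:>4}  {msg.get('model', '?'):<30}"
        f" in={usage.get('input_tokens', 0):>7}"
        f" out={usage.get('output_tokens', 0):>6}"
        f" cache_read={usage.get('cache_read_input_tokens', 0):>8}"
    )
```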
A Claude Code/Cowork skill called ai-video-creator allows video generation through a unified API that aggregates ByteDance Seedance 2.0, Kling 3.0, Google Veo 3.1, Grok Imagine, Wan 2.7, Runway, ElevenLabs, and Suno AI behind a single authentication flow.
I built a Claude Code skill that generates images from the terminal and also via the Claude Desktop macOS app and Cowork. One command, any AI model, with transparent backgrounds, reference image editing, prompt engineering patterns, and composite banner generation built in.
The skill supports five AI image models through OpenRouter’s API, all proxied through Cloudflare AI Gateway for monitoring and cost control (a request sketch follows the list):
Gemini 3.1 Flash Image Preview (Google Nano Banana 2)
FLUX.2 Max
Riverflow v2 Pro
Seedream 4.5
GPT-5 Image
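As a rough picture of what one generation request looks like, here is a sketch that calls OpenRouter’s chat completions endpoint through a Cloudflare AI Gateway proxy and saves the returned image. The gateway path segment, the model slug, and the shape of the image field in the response are assumptions about the current OpenRouter and AI Gateway conventions, so adjust them to your account and provider docs.

```python
import base64
import os

import requests

# Cloudflare AI Gateway proxy in front of OpenRouter. ACCOUNT_ID, GATEWAY_ID,
# and the provider path segment follow Cloudflare's provider-proxy pattern;
# treat them as placeholders for your own gateway setup.
ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
GATEWAY_ID = os.environ["CF_GATEWAY_ID"]
URL = (
    f"https://gateway.ai.cloudflare.com/v1/{ACCOUNT_ID}/{GATEWAY_ID}"
    "/openrouter/v1/chat/completions"
)

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-3.1-flash-image-preview",  # placeholder slug
        "messages": [
            {"role": "user", "content": "A banner: sunrise over a data center"}
        ],
        # Ask for image output alongside text.
        "modalities": ["image", "text"],
    },
    timeout=120,
)
resp.raise_for_status()

message = resp.json()["choices"][0]["message"]
# Assumption: image-capable models return base64 data URLs on the message.
data_url = message["images"][0]["image_url"]["url"]
image_bytes = base64.b64decode(data_url.split(",", 1)[1])

with open("banner.png", "wb") as f:
    f.write(image_bytes)
```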
I built a Claude Code skill that generates images from the terminal and now also via the Claude Desktop macOS app. The skill supports image generation via OpenRouter models.