The ShiftMaker

AI Intelligence Daily
How-To · Coding

10 ways to use Claude Code that actually ship production code.

The patterns Indian dev teams are running today — across fintech, edtech, and SaaS shops in Bengaluru and Hyderabad. With code samples, what works in Cursor vs Claude Code, and what to avoid if you ship to real users.

Published 10 May 2026

The honest version of 'AI coding tools 2026' is that most teams in Bengaluru, Hyderabad, and Pune have stopped debating Cursor versus Claude Code and started using both. The interesting patterns are not in which tool to pick — they are in the disciplines around how the tool gets called.

First, treat Claude Code as a junior engineer with a long memory, not a code-completion engine. The teams getting real leverage write a one-page brief at the top of every session — current architecture, the constraint, the acceptance test, the files Claude is allowed to touch.
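For concreteness, here is the shape of such a brief; the stack, constraint, and paths below are illustrative, not a prescribed format.

```
## Session brief
Architecture: Django 4 monolith, Postgres, Celery workers (illustrative stack)
Constraint: the /payments endpoints are frozen until the audit closes
Acceptance test: `pytest tests/test_refunds.py` passes with no new migrations
Files in scope: apps/refunds/**, tests/test_refunds.py; nothing else
```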

Second, use the planning step. Claude Code's plan mode is undervalued because it does not produce code. That is the point — it forces a plan-then-implement loop that catches the misunderstanding while the plan is still cheap to throw away.
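You can also open the session already inside that loop rather than opting in later; the flag below is from the Claude Code CLI's permission-mode options, worth confirming against `claude --help` for your version.

```
# Start in plan mode so the first artifact is a plan, not a diff.
# Shift+Tab cycles permission modes mid-session.
claude --permission-mode plan
```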

Third, set up your own per-project skills. Claude Code lets you ship reusable prompts as .claude/skills/ files inside the repo. The Bengaluru edtech we work with has a migrations skill, a feature-flag skill, and a pr-template skill — all institutional knowledge the agent now reads automatically.
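As a sketch, a migrations skill might look like the following. The layout (a SKILL.md under .claude/skills/migrations/ with YAML frontmatter carrying name and description) follows the Agent Skills docs; the rules in the body are illustrative.

```
---
name: migrations
description: How this repo writes, reviews, and ships Django migrations. Use when a task touches models or schema.
---

- Never edit a migration that has already shipped to production.
- Every migration gets a reverse operation, or a comment explaining why rollback is impossible.
- Run `python manage.py makemigrations --check --dry-run` before committing.
```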

Fourth, INR cost discipline matters. A team running Claude Code unmonitored on Sonnet 4.6 will burn ₹15-25K per developer per month. Route routine edits to Haiku, reserve Sonnet for architecture, and keep Opus for the once-a-month gnarly migration.
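To see where a number like that comes from, put the routing into arithmetic. A back-of-envelope sketch in Python; every price, volume, and the exchange rate is an assumption for illustration, not a quoted rate.

```python
# Back-of-envelope model-routing cost. All numbers are assumptions;
# substitute your real token volumes and the current price sheet.
USD_TO_INR = 84  # assumed exchange rate

PRICES = {  # assumed USD per million tokens: (input, output)
    "haiku": (1.0, 5.0),
    "sonnet": (3.0, 15.0),
}

# Assumed monthly volume per developer: (model, input MTok, output MTok)
WORKLOAD = {
    "routine edits": ("haiku", 40, 8),
    "architecture": ("sonnet", 10, 3),
}

total = 0.0
for task, (model, in_mtok, out_mtok) in WORKLOAD.items():
    in_price, out_price = PRICES[model]
    cost = in_mtok * in_price + out_mtok * out_price
    total += cost
    print(f"{task} on {model}: ${cost:.0f}/month")

print(f"routed total: ~₹{total * USD_TO_INR:,.0f}/month per developer")
# The same workload with everything on sonnet lands at or above the top
# of the ₹15-25K range, which is the point of routing.
```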

Fifth, ship the agent into your CI, not just your IDE. Five teams we spoke to now run Claude Code as a GitHub Action triggered by a claude-please label on a PR. It removes the most demoralising part of code review.
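A sketch of that workflow, assuming the anthropics/claude-code-action; its exact input names vary between versions of the action, so check its README before pinning.

```yaml
# .github/workflows/claude-review.yml -- label-triggered review sketch.
# Verify the action's input names against its README for the version you pin.
name: claude-review
on:
  pull_request:
    types: [labeled]

jobs:
  review:
    if: github.event.label.name == 'claude-please'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: >-
            Review this PR for correctness and missed edge cases.
            Comment on specific lines; do not push commits.
```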

Sixth, lean on subagents for orthogonal work. The Explore agent is read-only and protects your main context window from getting blown out by file searches. The code-reviewer and security-reviewer agents catch what the writer agent missed. The cost discipline carries over: subagents that mostly read can run on Haiku while the main writer stays on Sonnet. A Hyderabad fintech we work with shaved ₹6K per developer per month just by routing all read-the-codebase tasks to Explore.
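Custom subagents are markdown files with YAML frontmatter under .claude/agents/. A sketch of a read-only reviewer pinned to the cheaper tier; the frontmatter fields follow the subagents docs, the instructions themselves are illustrative.

```
---
name: security-reviewer
description: Read-only review of changes for injection, authz, and secrets issues. Use after the writer finishes a change.
tools: Read, Grep, Glob
model: haiku
---

You are a security reviewer. You never edit files. Read the diff and the
surrounding code, then report findings as a numbered list with file:line
references and a severity for each.
```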

Seventh, treat permissions as code. Every team that runs Claude Code unattended — overnight refactors, scheduled pipelines, batch migrations — eventually hits the wall where a Bash glob doesn't match and the routine stalls on a permission prompt no human is awake to answer. Write your settings.local.json allow rules early, version them in git, and add new entries every time a routine prompts. Treat the file as part of your CI infrastructure, not Claude config.
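A starting permissions block; the rule syntax follows Claude Code's permission settings, and the same shape works in the checked-in settings.json. The specific commands and paths are illustrative.

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(npm run lint)",
      "Bash(git diff:*)",
      "Bash(git log:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Read(./.env)"
    ]
  }
}
```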

Eighth, scheduled tasks for the boring half. The same Windows Task Scheduler that runs your nightly backups runs Claude Code routines too. We have one team firing a 'sweep the inbox + open Linear tickets for anything actionable' routine at 6:45 AM IST every weekday, and a 'check the deploy logs and post a summary to Slack' routine at 7:30 PM. Neither saves a full developer-day per week. Both run for ₹40-50/day in Sonnet calls and remove a context-switch from human time.
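The wiring is two lines, assuming the claude CLI's headless print mode (claude -p) and standard schtasks syntax; the paths, task name, and prompt text are illustrative.

```bat
:: sweep.bat -- the routine itself. claude -p runs one headless prompt and exits.
cd /d C:\work\repo
claude -p "Sweep the support inbox and open a Linear ticket for anything actionable."

:: Register it for 6:45 AM on weekdays (run once):
:: schtasks /Create /TN "claude-inbox-sweep" /SC WEEKLY /D MON,TUE,WED,THU,FRI /ST 06:45 /TR "C:\work\sweep.bat"
```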

Ninth, give the agent your data shape via local MCP servers. The published Supabase, GitHub, and Postgres MCPs let Claude read your real schema before answering — no more confidently-wrong queries against columns that do not exist. The under-appreciated move is the custom MCP — most teams have an internal API or admin tool that Claude does not know about; a 50-line MCP server changes the agent from a generic assistant into one that knows your stack.
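A sketch of such a server using the official Python MCP SDK's FastMCP helper (pip install mcp); the admin API URL, token variable, and endpoint are hypothetical stand-ins for your internal tool.

```python
# internal_admin_mcp.py -- minimal MCP server exposing one internal tool.
# The admin API URL, token env var, and endpoint are hypothetical.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-admin")

ADMIN_API = os.environ.get("ADMIN_API_URL", "http://localhost:8080")


@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Fetch one order from the internal admin API by id."""
    resp = httpx.get(
        f"{ADMIN_API}/orders/{order_id}",
        headers={"Authorization": f"Bearer {os.environ['ADMIN_API_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text


if __name__ == "__main__":
    mcp.run()  # stdio transport by default; Claude Code launches it per session
```

Register it once with `claude mcp add internal-admin -- python internal_admin_mcp.py` and the tool shows up alongside the published ones.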

Tenth, validators before deploy, not after. One Bengaluru team shipped an article promising '10 ways' and delivering five (the familiar LLM count-versus-reality bug), and the same week they wired a build-time numeric-claim validator that aborts the publish when the headline says N and the body delivers fewer. The 'I cannot' / 'as an AI' detector is the same shape. Validators are cheap to write, and they convert a class of silent failure into a loud, fixable abort. Worth the hour.
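A sketch of that check; the item-counting heuristic assumes listicle items open with a number or an ordinal word, so tune the patterns to whatever your generator actually emits.

```python
# validate_article.py -- pre-publish checks; exits nonzero to abort the build.
import re
import sys

REFUSAL_MARKERS = ("as an AI", "I cannot", "I'm unable to")

# Items assumed to open with "1. ", "1) ", or "First, " style ordinals.
ITEM = re.compile(
    r"^(?:\d+[.)]\s+"
    r"|(?:First|Second|Third|Fourth|Fifth"
    r"|Sixth|Seventh|Eighth|Ninth|Tenth),\s+)",
    re.M,
)


def check(headline: str, body: str) -> None:
    claim = re.search(r"(\d+)\s+(?:ways|tips|steps|patterns)", headline, re.I)
    if claim:
        promised = int(claim.group(1))
        delivered = len(ITEM.findall(body))
        if delivered < promised:
            sys.exit(f"abort: headline promises {promised}, body delivers {delivered}")
    for marker in REFUSAL_MARKERS:
        if marker.lower() in body.lower():
            sys.exit(f"abort: refusal marker in body: {marker!r}")


if __name__ == "__main__":
    check(headline=sys.argv[1], body=open(sys.argv[2], encoding="utf-8").read())
```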

The shared property of all ten patterns is that they are about workflow discipline, not model choice. Pick the tool whose pricing matches your usage and whose ergonomics match your stack. Then put the disciplines in.
