50 AI Prompts for Coding
AI has become the most powerful coding assistant available — but only if you prompt it well. These prompts cover the full development lifecycle from scaffolding new projects to debugging production issues. They are designed to produce clean, production-ready code with proper error handling, typing, and documentation rather than toy examples.
Code Generation
Write a [language] function that [describe behavior]. Requirements: handle edge cases for [list them], include TypeScript types / type hints, add JSDoc / docstring comments, follow [framework] conventions, and return [expected output]. Include 3 usage examples.
Tip: Specifying edge cases upfront produces dramatically more robust code than asking for a basic implementation.
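Filled in, that template might yield something like the following sketch — the function, its edge cases, and its name are illustrative, not a fixed recipe:

```typescript
/**
 * Splits an array into chunks of at most `size` elements.
 * Edge cases: an empty input returns []; a non-positive or
 * non-integer size throws a RangeError instead of looping forever.
 */
function chunk<T>(items: T[], size: number): T[][] {
  if (!Number.isInteger(size) || size <= 0) {
    throw new RangeError(`size must be a positive integer, got ${size}`);
  }
  const result: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    result.push(items.slice(i, i + size));
  }
  return result;
}

// Usage examples:
chunk([1, 2, 3, 4, 5], 2); // [[1, 2], [3, 4], [5]]
chunk([], 3);              // []
chunk(["a", "b"], 10);     // [["a", "b"]]
```

Note how the edge cases named in the prompt (empty input, invalid size) become explicit branches rather than afterthoughts.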
Build a REST API endpoint in [framework] for [resource]. Include: route definition, request validation with [library], controller logic, service layer, database query with [ORM], error handling, and response formatting. Follow the repository pattern.
Tip: Asking for layered architecture (controller/service/repository) produces code that is testable and maintainable.
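A minimal, framework-agnostic sketch of that layering — the names (`UserRepo`, `UserService`, `getUserHandler`) and the resource are illustrative:

```typescript
interface User { id: number; email: string; }

// Repository: the only layer that knows about storage.
interface UserRepo {
  findById(id: number): Promise<User | null>;
}

// Service: business logic, depends on the repo interface, not the database.
class UserService {
  constructor(private repo: UserRepo) {}
  async getUser(id: number): Promise<User> {
    const user = await this.repo.findById(id);
    if (!user) throw new Error(`user ${id} not found`);
    return user;
  }
}

// Controller: translates HTTP concerns to service calls and back.
async function getUserHandler(svc: UserService, id: number) {
  try {
    return { status: 200, body: await svc.getUser(id) };
  } catch {
    return { status: 404, body: { error: "not found" } };
  }
}
```

Because the service depends on an interface rather than a concrete database, tests can inject an in-memory repo and exercise the logic without any infrastructure.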
Create a React component for [describe UI element]. Requirements: TypeScript with proper prop types, responsive design with [CSS approach], accessibility (ARIA labels, keyboard navigation), loading and error states, and Storybook story. Use [state management approach].
Tip: Always specify accessibility and loading/error states. AI will skip them unless told otherwise.
Generate a database schema for [application type] using [database]. Include: table definitions with proper data types, indexes for common queries, foreign key relationships, migration file, and seed data script. Optimize for [read-heavy/write-heavy] workload.
Tip: Specifying the workload pattern ensures the AI adds appropriate indexes and normalization level.
Debugging
Debug this error: [paste error message and stack trace]. Context: [language/framework], this happens when [describe trigger], expected behavior is [describe]. Explain: what is causing it, why it happens, the fix, and how to prevent it in the future.
Tip: Always include the full stack trace and the trigger condition — they are more useful than the error message alone.
This code produces the wrong output. Code: [paste code]. Input: [example input]. Expected output: [what it should return]. Actual output: [what it returns]. Walk through the execution step by step, identify where logic diverges from intent, and provide the corrected code.
Tip: Providing both expected and actual output helps the AI pinpoint the logical error faster.
I have a memory leak in my [language/framework] application. Symptoms: [describe]. Relevant code: [paste suspected code]. Identify potential sources of the leak, explain the memory lifecycle issue, and provide fixes with before/after code.
Tip: Memory leaks are often caused by uncleaned event listeners, unclosed connections, or growing caches without eviction.
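The cache-without-eviction case, for example, looks like this — the fix below uses a simple FIFO bound; the limit and names are illustrative, and a real app might reach for an LRU library instead:

```typescript
// Leak: a cache that only ever grows.
const leakyCache = new Map<string, string>();
function leakyGet(key: string, compute: () => string): string {
  if (!leakyCache.has(key)) leakyCache.set(key, compute());
  return leakyCache.get(key)!;
}

// Fix: evict the oldest entry once a size limit is reached.
// Map iteration order is insertion order, so the first key is the oldest.
const MAX_ENTRIES = 1000;
const boundedCache = new Map<string, string>();
function boundedGet(key: string, compute: () => string): string {
  if (!boundedCache.has(key)) {
    if (boundedCache.size >= MAX_ENTRIES) {
      const oldest = boundedCache.keys().next().value!;
      boundedCache.delete(oldest);
    }
    boundedCache.set(key, compute());
  }
  return boundedCache.get(key)!;
}
```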
This API endpoint returns [status code] intermittently. Code: [paste]. It works most of the time but fails under [conditions]. Analyze potential race conditions, timing issues, or resource contention. Provide a robust fix with proper error handling.
Tip: Intermittent bugs are almost always concurrency, timing, or resource exhaustion issues.
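One common concurrency culprit is a check-then-act race on async state: two concurrent callers both see "no cached value" and trigger duplicate work. Storing the in-flight promise synchronously is a standard fix — names here are illustrative:

```typescript
const inFlight = new Map<string, Promise<string>>();

let fetchCount = 0; // counted only to illustrate the deduplication
async function expensiveFetch(key: string): Promise<string> {
  fetchCount++;
  await new Promise((r) => setTimeout(r, 10)); // stand-in for real I/O
  return `value:${key}`;
}

// Fix: the promise is stored before any await, so concurrent callers
// share one in-flight request instead of racing to start their own.
function getOnce(key: string): Promise<string> {
  let p = inFlight.get(key);
  if (!p) {
    p = expensiveFetch(key).finally(() => inFlight.delete(key));
    inFlight.set(key, p);
  }
  return p;
}
```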
Refactoring
Refactor this function for readability and maintainability: [paste code]. Goals: reduce complexity, extract helper functions, improve naming, add types, and remove duplication. Show before/after with explanations for each change. Keep behavior identical.
Tip: Ask the AI to explain each change so you learn refactoring patterns, not just get cleaner code.
This [language] codebase uses callbacks everywhere. Refactor these 3 functions to use async/await: [paste code]. Maintain identical behavior, add proper error handling with try/catch, and handle the case where [specific edge case].
Tip: When refactoring async patterns, always verify error propagation behavior matches the original.
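Here is what that refactor looks like for a hypothetical error-first callback API — the key point is that the rejection path must carry the same errors the callback path did:

```typescript
// Callback style (error-first convention; the API is illustrative):
function readConfigCb(
  path: string,
  cb: (err: Error | null, data?: string) => void
): void {
  if (path === "") cb(new Error("empty path"));
  else cb(null, `config for ${path}`);
}

// Wrapped as a promise, then consumed with async/await.
function readConfig(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    readConfigCb(path, (err, data) => {
      if (err) reject(err); // errors must still propagate
      else resolve(data!);
    });
  });
}

async function loadConfig(path: string): Promise<string> {
  try {
    return await readConfig(path);
  } catch (err) {
    // same failure path the callback version had, now via try/catch
    throw new Error(`failed to load config: ${(err as Error).message}`);
  }
}
```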
Review this code for performance issues: [paste code]. It processes [X records] and takes [Y seconds]. Identify the likely bottlenecks, suggest optimizations with Big-O analysis, and rewrite the critical sections. Target: under [Z seconds].
Tip: Setting a specific performance target gives the AI a concrete optimization goal.
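A classic shape for the kind of rewrite to expect: deduplication dropping from O(n²) to O(n) by swapping repeated linear scans for constant-time Set lookups (illustrative code):

```typescript
// O(n^2): Array.includes rescans the whole output for every element.
function dedupSlow(items: number[]): number[] {
  const out: number[] = [];
  for (const x of items) {
    if (!out.includes(x)) out.push(x);
  }
  return out;
}

// O(n): a Set gives O(1) average-case membership checks.
function dedupFast(items: number[]): number[] {
  const seen = new Set<number>();
  const out: number[] = [];
  for (const x of items) {
    if (!seen.has(x)) {
      seen.add(x);
      out.push(x);
    }
  }
  return out;
}
```

Both preserve first-occurrence order, so the rewrite is behavior-identical — only the complexity changes.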
Convert this class-based [framework] component to a functional component with hooks: [paste code]. Preserve all behavior including lifecycle methods, state management, and ref usage. Add any TypeScript types the original was missing.
Tip: Map each lifecycle method to its hook equivalent explicitly to avoid subtle behavior changes.
Testing
Write unit tests for this function using [testing framework]: [paste code]. Cover: happy path, edge cases, error cases, boundary values, and null/undefined inputs. Use descriptive test names following the 'should [expected behavior] when [condition]' pattern.
Tip: Test names that describe behavior serve as documentation and make failures immediately understandable.
Create integration tests for this API endpoint using [testing framework]: [paste route code]. Test: successful requests, validation errors, authentication, authorization, database interactions, and error responses. Use factory functions for test data.
Tip: Integration tests should test the full request/response cycle, not mock away the parts that break in production.
Generate test cases for this complex business logic: [describe rules]. Create a decision table covering all rule combinations, then write parameterized tests implementing each row. Include edge cases where rules conflict.
Tip: Decision tables make complex business logic testable by enumerating every rule combination systematically.
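For instance, given a hypothetical free-shipping rule, the decision table becomes an array of rows and the parameterized test is a loop over them:

```typescript
// Hypothetical rule: free shipping if subtotal >= 50 OR the customer
// is a member — unless the item is oversized (oversized always pays).
function freeShipping(subtotal: number, member: boolean, oversized: boolean): boolean {
  if (oversized) return false;
  return subtotal >= 50 || member;
}

// Decision table: one row per rule combination.
const table: Array<[number, boolean, boolean, boolean]> = [
  // subtotal, member, oversized, expected
  [60, false, false, true],  // big order alone qualifies
  [10, true,  false, true],  // membership alone qualifies
  [10, false, false, false], // neither condition met
  [60, true,  true,  false], // oversized overrides both
];

// Parameterized test: each row is one assertion.
for (const [subtotal, member, oversized, expected] of table) {
  const got = freeShipping(subtotal, member, oversized);
  if (got !== expected) {
    throw new Error(
      `freeShipping(${subtotal}, ${member}, ${oversized}) = ${got}, want ${expected}`
    );
  }
}
```

The table doubles as documentation: a product manager can review the rows without reading the implementation.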
Write end-to-end tests using [Playwright/Cypress] for this user flow: [describe flow]. Cover: happy path, form validation errors, loading states, network failures (mock), and accessibility checks. Use page object pattern.
Tip: E2E tests should mirror real user behavior — interact through the UI, not by manipulating DOM directly.
Architecture and Design
Design the architecture for a [application type] that needs to handle [scale requirements]. Cover: system components, data flow, database choice with justification, API design, caching strategy, authentication approach, and deployment architecture. Include a diagram description.
Tip: State your scale requirements explicitly — architecture for 100 users differs dramatically from architecture for 100,000.
Design a database migration strategy for adding [feature] to a production [database] with [X million] rows. Requirements: zero downtime, backward compatible, rollback plan. Provide the migration steps, SQL, and deployment sequence.
Tip: Zero-downtime migrations require adding before removing — new column, backfill, deploy new code, then drop old column.
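Sketched as SQL for a hypothetical `orders` table (the table, column, and batch size are illustrative, and exact syntax and locking behavior vary by database):

```sql
-- Step 1: add the column as nullable (fast on most engines).
ALTER TABLE orders ADD COLUMN status VARCHAR(20) NULL;

-- Step 2: backfill in batches to avoid long-held locks.
UPDATE orders SET status = 'complete'
WHERE status IS NULL AND id BETWEEN 1 AND 100000;
-- ...repeat for each batch until no NULLs remain...

-- Step 3: deploy application code that reads and writes the new column.
-- (Old code still works: it simply ignores the nullable column.)

-- Step 4: only after all old code is retired, enforce the constraint.
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
```

Rollback at any step before 4 is just "stop using the column"; that is what makes the sequence backward compatible.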
Create a CI/CD pipeline configuration for a [language/framework] project using [CI tool]. Stages: lint, type check, unit tests, integration tests, build, deploy to staging, smoke tests, deploy to production. Include caching and parallelization.
Tip: Order pipeline stages by speed — fast checks first so slow builds are not wasted on code that fails linting.
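As a sketch in GitHub Actions-style syntax (job names and commands are illustrative), the ordering is expressed by chaining jobs so cheap checks gate expensive ones:

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - run: npm run lint          # seconds — fails fastest
  typecheck:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - run: npm run typecheck     # seconds
  unit-tests:
    needs: typecheck
    runs-on: ubuntu-latest
    steps:
      - run: npm test              # a few minutes
  build-and-integration:
    needs: unit-tests
    runs-on: ubuntu-latest
    steps:
      - run: npm run build
      - run: npm run test:integration   # slowest, runs last
```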
Design an error handling strategy for a [application type]. Cover: error classification (operational vs programmer), custom error classes, logging standards, user-facing messages, retry policies, circuit breaker patterns, and monitoring alerts.
Tip: Distinguish between errors you can recover from and errors you cannot. Handle them differently.
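One way to encode that distinction is a base error class with an `operational` flag — class names and the handling strings below are illustrative:

```typescript
class AppError extends Error {
  constructor(message: string, public readonly operational: boolean) {
    super(message);
    this.name = new.target.name; // subclass name survives in logs
  }
}

class NotFoundError extends AppError {
  constructor(resource: string) {
    super(`${resource} not found`, true); // operational: expected, recoverable
  }
}

class InvariantError extends AppError {
  constructor(message: string) {
    super(message, false); // programmer error: a bug, not a condition
  }
}

// Central handler branches on the classification, not the message.
function handle(err: unknown): string {
  if (err instanceof AppError && err.operational) {
    return "respond to the user and continue";
  }
  return "log, alert, and restart the process";
}
```

Retry policies and circuit breakers then apply only to the operational branch; retrying a programmer error just repeats the bug.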