Introduction
Large Language Models (LLMs) have revolutionized the way developers build software. Whether you’re starting a new project from scratch or improving existing code, using LLMs well can significantly boost your productivity, improve code quality, and streamline your process. This guide, based on practitioners’ experience, presents proven methods for applying LLMs effectively at every stage of the programming process.
Creating Projects from Scratch (Greenfield)
Step 1: Refining the Idea
Start by using a conversational LLM (e.g., GPT-4o or Claude) to thoroughly develop your idea:
- Begin brainstorming using the following prompt:
Ask me one question about my idea. Then, based on my answers, ask one question at a time to create a detailed technical specification. Goal: systematically cover all project aspects – architecture, data, API, UX, security. After each answer, ask the next question to deepen the specification.
Here’s my idea:
<YOUR_IDEA>
- Summarize the brainstorming once complete:
Create a comprehensive technical specification based on our conversation. Include:
– Functional and non-functional requirements
– System architecture (components, interfaces)
– Data model
– User flows
– Error handling strategy
– Testing plan
Format: technical documentation ready for implementation.
- Save the specification as `spec.md` in the project repository.
Step 2: Planning
Use a reasoning-focused model (Claude Opus, Claude 3.5 Sonnet, etc.) to create a detailed plan:
For the TDD (Test-Driven Development) approach:
Based on the attached specification:
1. Create an implementation plan divided into stages
2. Break down each stage into small, testable implementation tasks
3. For each task, prepare a prompt for the LLM that:
– Includes context from previous steps
– Defines the tests to be written
– Specifies the functionality to be implemented
– Explains how to integrate with existing code
Result format:
– Each prompt in a markdown code block
– Tasks numbered in logical implementation order
– Prompts optimized for generating TDD-style implementation
<SPECIFICATION>
For the non-TDD approach:
Based on the attached specification:
1. Create an implementation plan divided into stages
2. Break down each stage into small, executable programming tasks
3. For each task, prepare a prompt for the LLM that:
– Includes necessary context from previous steps
– Clearly defines the functionality to implement
– Specifies the expected code format and structure
– Explains how to integrate with previously generated code
Result format:
– Each prompt in a markdown code block
– Tasks numbered in logical implementation order
– Prompts focused on modular, testable implementation
<SPECIFICATION>
Task tracking checklist:
Create a file `todo.md` with a checklist that includes:
1. All tasks identified in the implementation plan
2. Grouped by stage/component
3. Checkboxes to track progress ([ ])
4. Complexity rating for each task (1–5)
5. Task dependencies
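For reference, here is a hypothetical fragment of the kind of `todo.md` this prompt might produce (the stages, tasks, ratings, and dependencies below are invented for illustration):

```markdown
## Stage 1: Data model
- [ ] 1.1 Define the database schema (complexity: 2, depends on: none)
- [ ] 1.2 Implement models with validation (complexity: 3, depends on: 1.1)

## Stage 2: API
- [ ] 2.1 Implement CRUD endpoints (complexity: 3, depends on: 1.2)
- [ ] 2.2 Add authentication middleware (complexity: 4, depends on: 2.1)
```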
Save the prompt plan as `prompt_plan.md` in the repository.
Step 3: Execution
Many tools are available to help carry out the plan. Success largely depends on the quality of the plan from Step 2.
Using Claude:
- Prepare initial boilerplate code and configure tools.
- Paste a prompt from the plan into Claude.
- Copy the generated code into your IDE.
- Run the code and tests.
- If it works, move on to the next prompt.
- If it doesn’t, use a tool like repomix to send the codebase to Claude for debugging (see the sketch after this list).
- Repeat the process.
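A minimal sketch of that debugging round trip, assuming repomix is run via npx (the output file name and the prompt text are illustrative; check `npx repomix --help` for the exact flags your version supports):

```sh
# Pack the repository into a single text file that fits in one Claude message.
npx repomix -o context.txt

# Paste the contents of context.txt into Claude together with a prompt such as:
#   "Here is my codebase. The tests for the current task fail with <ERROR>.
#    Find the cause and propose a fix."
```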
Using Aider:
- Prepare initial boilerplate code and configure tools.
- Launch Aider (always on a new branch; see the sketch after this list).
- Paste a prompt from the plan into Aider.
- Watch Aider generate the code.
- Aider will run tests, or you can launch the app to verify it.
- If it works, move on to the next prompt.
- If not, conduct a Q&A session with Aider to fix issues.
- Repeat the process.
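A minimal sketch of one Aider iteration (Aider’s `--model` flag is real; the model alias and file paths below are illustrative, so adjust them to your setup):

```sh
# Start on a fresh branch so Aider's auto-commits are easy to review or discard.
git checkout -b feature/task-03

# Launch Aider with the files relevant to the current task.
aider --model sonnet src/orders.py tests/test_orders.py

# Inside the session, paste the next prompt from prompt_plan.md.
# Aider edits the files and commits the result; run your tests between prompts.
```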
Working with Existing Code (Non-Greenfield)
When working with an existing codebase, a slightly different approach is required – one focused on specific tasks rather than the whole project.
- Choose a tool for code analysis:
  - Tools like repomix can extract and compress code
  - Alternatives: IDE extensions, Bash/Python scripts, or CI/CD tools
- Prepare code context:
  - Collect relevant code fragments in a text file
  - Remove irrelevant elements (external libraries, binary assets)
  - Define the importance hierarchy of files and components
- Use tools to process the context:
  - Tools like the llm CLI can help organize prompts
  - A template-based prompt generator can standardize code analysis (a pipeline sketch follows this list)
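A sketch of such a pipeline combining repomix with Simon Willison’s llm CLI (both tools are real; the model name, system prompt, and file names here are assumptions):

```sh
# 1. Compress the codebase into a single context file.
npx repomix -o context.txt

# 2. Pipe the context to the llm CLI with a reusable system prompt.
#    Which models are available depends on the llm plugins you have installed.
cat context.txt | llm -m claude-3.5-sonnet \
  -s "You are a senior engineer. Map this codebase: key modules, dependencies, risks." \
  > analysis.md
```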
Practical Examples
Missing Test Analysis with Claude:
- Navigate to the code directory.
- Use a tool to extract context (e.g., scripts or repomix).
  - The author uses custom commands in mise (e.g., `mise run LLM:generate_missing_tests`) configured in their environment; a reconstructed equivalent follows this list.
- Analyze the missing test report.
- Copy the code context and prompt for a specific test.
- Paste it into Claude and request the implementation.
- Copy the generated code into your IDE.
- Run the tests.
- Repeat for other missing tests.
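The author’s mise task definition is not shown, so the following is only a plausible reconstruction of what `mise run LLM:generate_missing_tests` might do, expressed as plain shell (the file names are hypothetical):

```sh
# Pack the code, prepend the "Missing Tests" prompt (see the Effective Prompts
# section below), and send everything to the llm CLI in one shot.
npx repomix -o context.txt
cat missing_tests_prompt.txt context.txt | llm > missing_tests_report.md
```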
Missing Test Analysis with Aider:
- Navigate to the code directory.
- Launch Aider (on a new branch).
- Use a code analysis tool to identify missing tests.
  - You can use scripts, IDE plugins, or test coverage tools (one possible sequence is sketched after this list).
- Review the report.
- Paste the prompt into Aider.
- Observe the generated test code.
- Run the tests.
- Repeat the process.
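One possible sequence for a Python project, assuming pytest-cov for coverage (the coverage tool, branch name, and file paths are assumptions; use whatever matches your stack):

```sh
# Work on a dedicated branch.
git checkout -b tests/fill-coverage-gaps

# List the lines that no test currently executes.
pytest --cov=src --cov-report=term-missing

# Hand the relevant files to Aider and ask it to cover the gaps, e.g.:
#   "Write tests covering the untested branches in src/orders.py."
aider src/orders.py tests/test_orders.py
```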
Effective Prompts
Code Review
Perform a detailed review of the code below:
1. Identify potential issues in:
– Logical correctness
– Performance
– Security
– Convention compliance
– Readability
2. For each issue, provide:
– Location (file/line)
– Description
– Suggested fix
– Priority (critical/high/medium/low)
3. Response format: markdown with sections by issue type, code excerpts, and suggested changes.
<CODE>
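This prompt (like the others in this section) can also be run non-interactively, for example with the llm CLI, where `review_prompt.txt` holds the prompt above (the file names are illustrative):

```sh
# Concatenate the prompt and the code under review, then save the response.
cat review_prompt.txt src/payments.py | llm > review.md
```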
Generating GitHub Issues
Analyze the code and create GitHub-style issues:
1. Identify problems such as:
– Bugs or logical flaws
– Technical debt
– Missing features
– Scalability/security concerns
2. For each issue include:
– Title
– Description
– File/line location
– Suggested fix
– Labels (bug, enhancement, etc.)
– Priority
3. Output format: markdown list of issues ready for GitHub.
<CODE>
Missing Tests
Analyze the code for missing test coverage:
1. Identify key components requiring tests
2. For each missing test, include:
– What to test (function/module/class)
– Test cases (edge cases, normal use)
– Suggested code or pseudocode
– Priority
3. If existing tests are lacking, highlight:
– What’s missing
– Weak spots
– Improvement suggestions
4. Output format: developer-ready GitHub-style task list.
<CODE>
Pitfalls and Challenges
- Losing control – it’s easy to lose sight of the project as a whole. Stick to the plan and test often.
- Wait times – LLMs take time to generate results. Use that time to brainstorm, review, or plan ahead.
- Solo workflow – LLMs are currently single-user tools, making collaboration harder.
Best Practices
- Plan before coding – even small tasks benefit from thoughtful planning.
- Work iteratively – break work into testable chunks.
- Test early and often – write unit tests for each feature.
- Keep documentation – store spec, prompt plan, and task list in your repo.
- Prepare your environment – have boilerplate and tools ready.
- Pace yourself – take breaks when you lose clarity.
- Experiment with tools – try different options (Claude, Aider, Cursor, GitHub Copilot Workspace) and find the one that best suits your work style.