How I Slashed 3 Story Point Tasks to 2 Hours with AI

6/1/2025


I always look for better ways to work, deliver faster, and solve tough problems. AI-assisted development promises a lot, but it can also be tricky. Sometimes, AI requests are like a black box – you send a big request and you get a big answer, but you don't know how it got there. This makes it hard to fix problems and trust the code.

I recently found a way to finish tasks that usually take three story points in just two hours. That’s a huge improvement!

My secret? A clear, step-by-step AI workflow in the Cursor editor, inspired by the clever claude-task-master project.

In this blog post, I'll show you how I did it. We'll see how I figured out the main ideas behind claude-task-master, changed them to fit a slimmer solution inside Cursor, and became much more efficient.

The Problem with AI-Assisted Development (and how claude-task-master helps)

Using AI in software development sounds great. Imagine an AI helper that writes code, fixes bugs, and even plans big features. But often, it doesn't work as well as we hope. Developers often face these problems when using AI:

  • Big Requests, Hidden Work: When you ask an AI to do a lot at once, it gives you a huge output. It's hard to understand how the AI thought, find mistakes, or make small changes. It's like asking a magic box for an answer and just hoping it's right.
  • Not Enough Control: If you can't guide the AI step-by-step, you lose control. The AI might go off track, write useless code, or miss important details.
  • Debugging Headaches: If AI-made code breaks, fixing it can be very hard. Because you don't see how the AI worked, it's tough to find out why something went wrong. This wastes time and causes frustration.
  • Uneven Quality: AI-generated code can be good or bad. If you just give the AI big, unclear instructions, the code might need a lot of fixing or even a complete rewrite.

These problems show we need a better, more controlled way to use AI in development. This is where projects like claude-task-master come in.

claude-task-master is an AI-powered task-management system, built to bring order to this messy process. It uses AI models like Claude (and now other LLMs too 🚀) to give you a structured way to develop with AI.


claude-task-master uses a few main ideas to solve these problems:

  • Product Requirement Documents (PRDs): Instead of vague requests, claude-task-master starts with a clear PRD. This document explains what needs to be built. It makes sure both you and the AI understand the goal.
  • Detailed Task Lists: The PRD is then broken down into small, clear tasks. This turns a big problem into many smaller, easier steps. This makes the AI's work more focused and predictable.
  • Step-by-Step Work: Instead of making the whole solution at once, claude-task-master works step-by-step. The AI does one task at a time. You can check its work, approve it, and give feedback before it moves on. This helps catch problems early and keeps the quality high.
  • Easy API Key and Model Use: The system is flexible. It works with different AI models (like Claude, OpenAI, Google Gemini, etc.) by using API keys. This lets you pick the best model for each task and switch between them easily.

By using these ideas, claude-task-master changes AI development from messy to organized and effective. This smart solution inspired me to build a similar—but even leaner—approach inside Cursor.

How I Rebuilt It for the Cursor Editor ⚡️


My own experience with AI development was like many others. While AI had great potential, using it in real projects was often hard. I liked claude-task-master's organized way of working, but I wanted something even simpler, tailored to my day-to-day workflow in Cursor. My goal was clear: gain more control over what the AI produced, understand its steps, and accelerate delivery.

Instead of copying claude-task-master line-for-line, I focused on its core principles: breaking down large problems, working iteratively, and keeping a transparent conversation with the AI. I distilled those ideas into a small set of Cursor rule files (`.mdc`) that slot directly into the editor. Here are the pieces:

create-prd.mdc

Helps the AI draft a Product Requirement Document. I provide a short feature description; the command expands it into a full PRD so everyone starts with the same understanding.

---
description:
globs:
alwaysApply: false
---
# Rule: Generating a Product Requirements Document (PRD)

## Goal

To guide an AI assistant in creating a detailed Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature.

## Process

1.  **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality.
2.  **Ask Clarifying Questions:** Before writing the PRD, the AI *must* ask clarifying questions to gather sufficient detail. The goal is to understand the "what" and "why" of the feature, not necessarily the "how" (which the developer will figure out).
3.  **Generate PRD:** Based on the initial prompt and the user's answers to the clarifying questions, generate a PRD using the structure outlined below.
4.  **Save PRD:** Save the generated document as `prd-[feature-name].md` inside the `/tasks` directory.

## Clarifying Questions (Examples)

The AI should adapt its questions based on the prompt, but here are some common areas to explore:

*   **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?"
*   **Target User:** "Who is the primary user of this feature?"
*   **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?"
*   **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)"
*   **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?"
*   **Scope/Boundaries:** "Are there any specific things this feature *should not* do (non-goals)?"
*   **Data Requirements:** "What kind of data does this feature need to display or manipulate?"
*   **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?"
*   **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?"

## PRD Structure

The generated PRD should include the following sections:

1.  **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal.
2.  **Goals:** List the specific, measurable objectives for this feature.
3.  **User Stories:** Detail the user narratives describing feature usage and benefits.
4.  **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements.
5.  **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope.
6.  **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable.
7.  **Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module").
8.  **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X").
9.  **Open Questions:** List any remaining questions or areas needing further clarification.

## Target Audience

Assume the primary reader of the PRD is a **junior developer**. Therefore, requirements should be explicit, unambiguous, and avoid jargon where possible. Provide enough detail for them to understand the feature's purpose and core logic.

## Output

*   **Format:** Markdown (`.md`)
*   **Location:** `/tasks/`
*   **Filename:** `prd-[feature-name].md`

## Final instructions

1. Do NOT start implementing the PRD
2. Make sure to ask the user clarifying questions
3. Take the user's answers to the clarifying questions and improve the PRD
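To make the structure concrete, here is an abbreviated example of what a PRD produced by this rule might look like. The feature, goals, and numbers below are entirely hypothetical, invented for illustration:

```markdown
# PRD: Profile Picture Upload

## Introduction/Overview
Users currently have no way to personalize their account. This feature adds
profile picture upload so users can identify themselves visually.

## Goals
1. Allow users to upload and replace a profile picture.
2. Keep upload time under 3 seconds for images up to 5 MB.

## User Stories
- As a registered user, I want to upload a profile picture so that my
  comments are easier to recognize.

## Functional Requirements
1. The system must allow users to upload a JPEG or PNG image.
2. The system must reject files larger than 5 MB with a clear error message.

## Non-Goals (Out of Scope)
- No image editing (cropping, filters) in this iteration.

## Success Metrics
- 30% of active users set a profile picture within the first month.

## Open Questions
- Should we support animated GIFs?
```

Per the rule, this would be saved as `/tasks/prd-profile-picture-upload.md`, ready to feed into the next command.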

generate-tasks.mdc

Turns the completed PRD into a granular task list. Big ideas become many bite-sized steps, eliminating guesswork and giving a clear execution roadmap.

---
description: 
globs: 
alwaysApply: false
---
# Rule: Generating a Task List from a PRD

## Goal

To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Product Requirements Document (PRD). The task list should guide a developer through implementation.

## Output

- **Format:** Markdown (`.md`)
- **Location:** `/tasks/`
- **Filename:** `tasks-[prd-file-name].md` (e.g., `tasks-prd-user-profile-editing.md`)

## Process

1.  **Receive PRD Reference:** The user points the AI to a specific PRD file
2.  **Analyze PRD:** The AI reads and analyzes the functional requirements, user stories, and other sections of the specified PRD.
3.  **Phase 1: Generate Parent Tasks:** Based on the PRD analysis, create the file and generate the main, high-level tasks required to implement the feature. Use your judgement on how many high-level tasks to use. It's likely to be about 5. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the PRD. Ready to generate the sub-tasks? Respond with 'Go' to proceed."
4.  **Wait for Confirmation:** Pause and wait for the user to respond with "Go".
5.  **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task and cover the implementation details implied by the PRD.
6.  **Identify Relevant Files:** Based on the tasks and PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable.
7.  **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure.
8.  **Save Task List:** Save the generated document in the `/tasks/` directory with the filename `tasks-[prd-file-name].md`, where `[prd-file-name]` matches the base name of the input PRD file (e.g., if the input was `prd-user-profile-editing.md`, the output is `tasks-prd-user-profile-editing.md`).

## Output Format

The generated task list _must_ follow this structure:

```markdown
## Relevant Files

- `path/to/potential/file1.ts` - Brief description of why this file is relevant (e.g., Contains the main component for this feature).
- `path/to/file1.test.ts` - Unit tests for `file1.ts`.
- `path/to/another/file.tsx` - Brief description (e.g., API route handler for data submission).
- `path/to/another/file.test.tsx` - Unit tests for `another/file.tsx`.
- `lib/utils/helpers.ts` - Brief description (e.g., Utility functions needed for calculations).
- `lib/utils/helpers.test.ts` - Unit tests for `helpers.ts`.

### Notes

- Unit tests should typically be placed alongside the code files they are testing (e.g., `MyComponent.tsx` and `MyComponent.test.tsx` in the same directory).
- Use `npx jest [optional/path/to/test/file]` to run tests. Running without a path executes all tests found by the Jest configuration.

## Tasks

- [ ] 1.0 Parent Task Title
  - [ ] 1.1 [Sub-task description 1.1]
  - [ ] 1.2 [Sub-task description 1.2]
- [ ] 2.0 Parent Task Title
  - [ ] 2.1 [Sub-task description 2.1]
- [ ] 3.0 Parent Task Title (may not require sub-tasks if purely structural or configuration)
```

## Interaction Model

The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details.

## Target Audience

Assume the primary reader of the task list is a **junior developer** who will implement the feature.
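Filled in for a concrete feature, the generated task list might look like this. The file paths and task names are hypothetical, made up purely to show the shape of the output:

```markdown
## Relevant Files

- `components/ProfileForm.tsx` - Form component for editing profile fields.
- `components/ProfileForm.test.tsx` - Unit tests for `ProfileForm.tsx`.
- `app/api/profile/route.ts` - API route handler for profile updates.

### Notes

- Run `npx jest components/ProfileForm.test.tsx` to test the form in isolation.

## Tasks

- [ ] 1.0 Build the profile editing form
  - [ ] 1.1 Create `ProfileForm.tsx` with name and bio fields
  - [ ] 1.2 Add client-side validation for required fields
- [ ] 2.0 Implement the profile update API
  - [ ] 2.1 Add a PUT handler in `app/api/profile/route.ts`
  - [ ] 2.2 Write unit tests for the handler
```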

process-task-list.mdc

Runs through that task list one item at a time. After each step the AI pauses so I can review, tweak, and approve before moving on—catching issues early and avoiding snowballing errors.

---
description: 
globs: 
alwaysApply: false
---
# Task List Management

Guidelines for managing task lists in markdown files to track progress on completing a PRD

## Task Implementation
- **One sub-task at a time:** Do **NOT** start the next sub‑task until you ask the user for permission and they say “yes” or "y"
- **Completion protocol:**  
  1. When you finish a **sub‑task**, immediately mark it as completed by changing `[ ]` to `[x]`.  
  2. If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed.  
- Stop after each sub‑task and wait for the user’s go‑ahead.

## Task List Maintenance

1. **Update the task list as you work:**
   - Mark tasks and subtasks as completed (`[x]`) per the protocol above.
   - Add new tasks as they emerge.

2. **Maintain the “Relevant Files” section:**
   - List every file created or modified.
   - Give each file a one‑line description of its purpose.

## AI Instructions

When working with task lists, the AI must:

1. Regularly update the task list file after finishing any significant work.
2. Follow the completion protocol:
   - Mark each finished **sub‑task** `[x]`.
   - Mark the **parent task** `[x]` once **all** its subtasks are `[x]`.
3. Add newly discovered tasks.
4. Keep “Relevant Files” accurate and up to date.
5. Before starting work, check which sub‑task is next.
6. After implementing a sub‑task, update the file and then pause for user approval.
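In practice, a task list mid-session looks something like this snapshot (task names are illustrative): the AI has finished sub-task 1.1, marked it off, and is now paused waiting for a "yes" before starting 1.2:

```markdown
## Tasks

- [ ] 1.0 Build the profile editing form
  - [x] 1.1 Create `ProfileForm.tsx` with name and bio fields
  - [ ] 1.2 Add client-side validation for required fields
- [ ] 2.0 Implement the profile update API
```

Once 1.1 and 1.2 are both `[x]`, the parent task 1.0 gets marked `[x]` as well, per the completion protocol.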

claude-task-master is a full-fledged system with server management and API-key rotation. My stripped-down toolkit stays inside Cursor and does just enough to keep the work flowing. Sometimes less really is more, especially when speed is the priority.

Why the speed-up is real

  • Clear goals first: create-prd.mdc forces crystal-clear requirements before any code is written. No wandering down wrong paths.
  • Tiny, actionable units: generate-tasks.mdc converts the PRD into well-defined micro-tasks, making progress visible and manageable.
  • Continuous feedback loop: With process-task-list.mdc, every chunk is reviewed immediately, so corrections are quick and cheap.
  • Focused AI assistance: The AI stays on-track because each prompt is narrow and precise, reducing re-prompting and manual fixes.

Organized AI Development is Powerful

The biggest lesson is that AI isn't a magic button. It works best when you use it in an organized way.

By breaking down big tasks into smaller ones, we give the AI clear instructions. This leads to better and more predictable results.