# Best Practices and Workflows
This guide covers recommended patterns for organizing tasks, structuring projects, and getting the most out of taskmd. Each recommendation includes a rationale so you can decide what fits your workflow.
## Task File Organization
### Flat vs. Grouped Directories
Start flat, group when it gets noisy.
A new project with 10-20 tasks works fine with everything in `tasks/`:

```
tasks/
├── 001-project-setup.md
├── 002-user-auth.md
├── 003-api-endpoints.md
└── 004-deploy-pipeline.md
```

Once you have 30+ tasks or distinct work areas, introduce subdirectories:
```
tasks/
├── 001-specification.md
├── backend/
│   ├── 010-database-schema.md
│   ├── 011-api-endpoints.md
│   └── 012-auth-middleware.md
├── frontend/
│   ├── 020-component-library.md
│   └── 021-dashboard-layout.md
└── infra/
    ├── 030-ci-pipeline.md
    └── 031-staging-deploy.md
```

Why? Groups reduce visual noise and make `taskmd list tasks/backend` useful for focusing on one area. But premature grouping adds friction when you're still figuring out the shape of the project. Let the structure emerge from the work.
> **Tip:** The `group` field is derived from the directory name automatically. You don't need to set it in frontmatter unless you want to override it.
### Naming Conventions
Use the `NNN-descriptive-slug.md` pattern:

- Zero-pad IDs so files sort naturally: `001`, `042`, not `1`, `42`
- Keep slugs short but meaningful: `012-auth-middleware.md`, not `012-implement-jwt-based-authentication-middleware-for-api.md`
- Use hyphens, not underscores or spaces
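The sorting point is easy to check with standard tools: in a plain lexicographic sort, an unpadded ID lands out of numeric order (the filenames below are made up):

```shell
# "9" sorts after "042" lexicographically, so task 9 lands last
printf '%s\n' 042-auth.md 001-setup.md 9-old-task.md | sort
```

With a zero-padded name (`009-old-task.md`), the same sort yields numeric order.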
When groups have independent ID sequences, prefix with the group name:
```
tasks/
├── cli/
│   ├── 015-graph-command.md     # ID: "015"
│   └── 016-validate-command.md  # ID: "016"
└── web/
    ├── 010-board-view.md        # ID: "010"
    └── 011-graph-view.md        # ID: "011"
```

Why? Consistent naming makes tasks easy to find with `ls`, `grep`, and tab completion. The slug in the filename gives you context without opening the file.
## Writing Good Tasks
A well-written task has three sections:

- **Objective** -- what and why (1-2 sentences)
- **Tasks** -- a checklist of concrete steps
- **Acceptance Criteria** -- how to know it's done
```md
---
id: "042"
title: "Add rate limiting to API"
status: pending
priority: high
effort: medium
type: feature
tags:
  - api
  - security
---

# Add Rate Limiting to API

## Objective

Protect the API from abuse by adding rate limiting to public endpoints.
Without this, a single client can exhaust server resources.

## Tasks

- [ ] Choose a rate limiting strategy (token bucket vs sliding window)
- [ ] Add rate limiting middleware
- [ ] Configure per-endpoint limits
- [ ] Add rate limit headers to responses
- [ ] Write integration tests

## Acceptance Criteria

- Public endpoints return 429 when rate limit is exceeded
- Rate limit headers (X-RateLimit-*) are present on all responses
- Authenticated endpoints have higher limits than anonymous ones
```

Why? The objective gives context to anyone (human or AI) picking up the task. The checklist makes progress visible. The acceptance criteria define "done" unambiguously.
> **TIP:** Keep tasks focused on a single deliverable. If a task has more than 8-10 subtasks, it's probably two tasks. Use dependencies to link them.
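As a sketch of that split, the follow-up task can declare a dependency on the first. The ID, title, and filename below are illustrative:

```yaml
# 043-rate-limit-headers.md (frontmatter sketch)
id: "043"
title: "Add rate limit headers to responses"
status: pending
dependencies: ["042"]  # blocked on the middleware task
```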
## Workflows

### Solo Developer
For a single developer working on a project, the default solo workflow is straightforward:
```yaml
# .taskmd.yaml
workflow: solo
```

Typical flow:

1. Run `taskmd next` to pick a task
2. Mark it `in-progress` with `taskmd set <ID> --status in-progress`
3. Do the work, checking off subtasks as you go
4. Mark it `completed` with `taskmd set <ID> --done`
Tips for solo workflows:
- Use `taskmd next --quick-wins` when you have limited time. It surfaces small, high-impact tasks.
- Don't over-specify dependencies. If you're the only one working, you naturally serialize work. Add dependencies only when the order truly matters (e.g., "build the schema before building the API").
- Use tags for planning horizons. Tags like `mvp`, `v2`, `nice-to-have` help you focus on what matters now.
- Keep the backlog groomed. Cancel tasks you'll never do. A long list of stale tasks makes `next` less useful.
### Team Workflow with PR Review
For teams where work goes through code review, use the pr-review workflow:
```yaml
# .taskmd.yaml
workflow: pr-review
```

Typical flow:

1. Pick a task from `taskmd next`
2. Create a feature branch
3. Mark the task `in-progress`
4. Do the work
5. Open a PR and associate it: `taskmd set <ID> --done --add-pr https://github.com/org/repo/pull/123`
6. The task moves to `in-review` (not `completed`)
7. After the PR is merged, mark it `completed`
Tips for team workflows:
- Use `owner` to assign tasks. Filter with `taskmd list --filter owner=alice` to see who's working on what.
- Use `touches` for scope awareness. When two people work on tasks that touch the same files, the `tracks` command warns you about potential merge conflicts.
- Establish a tag vocabulary. Agree on a consistent set of tags across the team (e.g., `frontend`, `backend`, `api`, `docs`). Inconsistent tags make filtering useless.
- Review tasks in PRs. Since task files live in the repo, changes to task status show up in pull requests. Reviewers can verify that task updates match the code changes.
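Put together, a team task's frontmatter might combine these fields -- a sketch using the field names from this guide, with illustrative values:

```yaml
owner: alice
touches:
  - cli/graph
tags:
  - backend
  - api
```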
### AI-Assisted Workflow
When working with AI coding assistants, taskmd becomes the coordination layer between you and the AI:
**With Claude Code (plugin):**

```
/taskmd:next-task            # AI picks the best task
/taskmd:do-task 042          # AI reads the task, plans, and executes
/taskmd:complete-task 042    # AI marks it done after verification
```

**With any AI assistant (direct file access):**

Point the AI at your task files and ask it to work through them:

> "Read the tasks in `tasks/` and work on the highest-priority pending task. Mark it in-progress, complete the subtasks, and mark it done."
Tips for AI workflows:
- Write detailed acceptance criteria. AI assistants use these to verify their own work. Vague criteria lead to vague results.
- Use the `verify` field to define automated checks. The AI can run these to confirm it did the work correctly:

  ```yaml
  verify:
    - type: bash
      run: "go test ./..."
      dir: "apps/cli"
    - type: assert
      check: "New endpoint returns paginated results"
  ```

- Use `context` to point the AI at relevant files. This saves it from searching the codebase:

  ```yaml
  context:
    - "apps/cli/internal/api/handlers.go"
    - "docs/api-design.md"
  ```

- Keep tasks small enough for a single session. AI assistants work best with focused, well-scoped tasks. A task that says "build the entire authentication system" will produce worse results than three tasks that say "design the auth flow", "implement JWT signing", and "add login endpoint".
## Dependencies

### When to Use Dependencies
Add a dependency when one task cannot start until another is completed. Common cases:
- Schema before API: You can't build endpoints without the data model
- API before UI: The frontend needs backend endpoints to call
- Setup before everything: Infrastructure tasks often block feature work
```yaml
# 012-api-endpoints.md
dependencies: ["010"]  # Needs the database schema from task 010
```

### When Not to Use Dependencies

Don't add dependencies for:

- **Preference ordering** -- "I'd like to do A before B" is not a dependency. Use priority instead.
- **Same-area work** -- Two frontend tasks don't necessarily depend on each other. Use `touches` to flag overlap instead.
- **Completed tasks** -- Dependencies on already-completed tasks add noise without value. If the task is done, the dependency is already satisfied.
Why? Over-linking creates artificial bottlenecks. The `next` command uses dependencies to filter recommendations -- if everything depends on everything, nothing gets recommended.
> **TIP:** A good rule of thumb: if you could realistically complete task B without task A being done, they're not dependent. They might be related, but that's what tags are for.
### Keeping the Graph Clean
Visualize your dependency graph periodically:
```shell
taskmd graph --format ascii --exclude-status completed
```

Look for:
- Long chains (A → B → C → D → E) -- can any steps be parallelized?
- Fan-in bottlenecks -- one task blocking five others might need to be split
- Orphan clusters -- disconnected groups of tasks might need a shared root
## Scopes and Touches

### What They're For

The `touches` field declares which code areas a task modifies. The `tracks` command uses this to detect overlap and organize tasks into parallel work streams.
```yaml
# Task that modifies the graph module
touches:
  - cli/graph
  - cli/output
```

### Setting Up Scopes
Define scopes in `.taskmd.yaml` to map abstract names to real paths:

```yaml
scopes:
  cli/graph:
    description: "Graph visualization and rendering"
    paths:
      - "apps/cli/internal/graph/"
      - "apps/cli/internal/cli/graph.go"
  cli/output:
    paths:
      - "apps/cli/internal/cli/format.go"
  web/board:
    paths:
      - "apps/web/src/components/board/"
```

Best practices:
- **Name scopes by area, not by task.** `cli/graph` is reusable; `graph-refactor` is not.
- **Keep scopes granular enough to be useful.** A scope called `cli` that covers the entire CLI codebase doesn't help detect conflicts. `cli/graph` and `cli/output` do.
- **Don't scope everything.** Only define scopes for areas where parallel work actually causes conflicts. A shared utility file that everyone imports doesn't need a scope.
- **Add descriptions** to make validation warnings helpful:

  ```yaml
  scopes:
    cli/graph:
      description: "Graph visualization and dependency rendering"
  ```
### Using Tracks for Parallel Work

Once tasks have `touches` annotations, the `tracks` command groups them into non-conflicting work streams:

```shell
taskmd tracks
```

This is especially useful for teams or when running multiple AI assistants in parallel. Each track can be assigned to a different developer or AI session without risking merge conflicts.
## Phases

### What Phases Are For
Phases group tasks by time-based targets — releases, sprints, project stages, or deadlines. Unlike tags (which categorize by topic), phases answer "when does this need to be done?"
Common uses:
- Releases: `v0.2`, `v1.0`
- Sprints: `sprint-3`, `2026-Q2`
- Project stages: `alpha`, `beta`, `launch`
### Configuring Phases

Define phases in `.taskmd.yaml` to enable validation and due-date-aware ranking:

```yaml
# .taskmd.yaml
phases:
  - name: "v0.2"
    description: "Core CLI features"
    due: 2026-04-01
  - name: "v0.3"
    description: "Web dashboard"
    due: 2026-06-01
```

When phases are configured, `taskmd validate` warns if a task references an undefined phase. Without configuration, all phase values are accepted silently.
### Assigning Tasks to Phases
Set a task's phase in frontmatter or via the CLI:
```yaml
# In the task file
phase: "v0.2"
```

```shell
# Via CLI
taskmd set 042 --phase v0.2
taskmd add --title "New feature" --phase v0.2
```

### Filtering and Viewing by Phase

```shell
# List tasks in a phase
taskmd list --phase v0.2

# Next task for a phase
taskmd next --phase v0.2

# Board grouped by phase
taskmd board --group-by phase

# Stats grouped by phase
taskmd stats --group-by phase
```

### Tips
- Start with phases when you have deadlines. If everything is "someday," tags work fine. Once you have a release date, phases make priorities concrete.
- Keep phase names short. They appear in tables and boards. `v0.2` is better than `Version 0.2 - Core CLI Features Release`.
- Add `due` dates to get phase-aware ranking in `taskmd next`. Tasks in phases with closer due dates score higher.
- Review phase progress with `taskmd board --group-by phase` to see which phases are on track.
## CI/CD Integration

### Pre-commit Validation

Add `taskmd validate` to your pre-commit hooks to catch issues before they're committed:
```shell
#!/bin/sh
# .githooks/pre-commit
taskmd validate
if [ $? -ne 0 ]; then
  echo "Task validation failed. Fix errors before committing."
  exit 1
fi
```

Enable the hooks directory:

```shell
git config core.hooksPath .githooks
```

Why? Catching invalid task files at commit time is cheaper than discovering them in CI or production. Common catches include duplicate IDs, missing required fields, and broken dependency references.
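The setup can be sketched end-to-end in a throwaway repository; the `taskmd` line is written into the hook but not executed here, so this runs anywhere `git` is installed:

```shell
# Demo in a temp dir: create the hook, mark it executable, wire up the path
tmp=$(mktemp -d) && cd "$tmp"
git init -q
mkdir -p .githooks
printf '#!/bin/sh\ntaskmd validate\n' > .githooks/pre-commit
chmod +x .githooks/pre-commit
git config core.hooksPath .githooks
git config --get core.hooksPath   # prints: .githooks
```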
### CI Pipeline Validation

Add validation to your CI pipeline for tasks that bypass local hooks:
```yaml
# GitHub Actions example
- name: Validate tasks
  run: |
    brew install driangle/tap/taskmd
    taskmd validate --strict
```

### Validation in Pull Requests

Use strict validation to catch warnings as well as errors:

```shell
taskmd validate --strict
```

Strict mode catches additional issues like:

- Tasks without a `created` date
- Missing markdown body
- Non-standard file naming
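A complete workflow file around that step might look like the following sketch. The trigger, runner, and checkout action are assumptions; the install and validate commands come from this guide:

```yaml
# .github/workflows/validate-tasks.yml (sketch)
name: validate-tasks
on: [pull_request]
jobs:
  validate:
    runs-on: macos-latest   # macOS runner because the install uses Homebrew
    steps:
      - uses: actions/checkout@v4
      - name: Validate tasks
        run: |
          brew install driangle/tap/taskmd
          taskmd validate --strict
```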
## Worklogs

### When to Write Entries
Worklogs are most valuable when they capture decisions and context, not just status updates.
Write a worklog entry when:
- Starting a task -- note your approach and initial plan
- Making a key decision -- document the options you considered and why you chose one
- Hitting a blocker -- describe the problem and what you've tried
- Finishing a task -- summarize what was done and any open items
### What to Include
Good worklog entries have:
```md
## 2026-02-15T10:30:00Z

Started implementing rate limiting.

**Approach:** Using token bucket algorithm with Redis for distributed
rate limiting. Considered sliding window but token bucket handles
burst traffic better for our use case.

**Completed:**
- [x] Added rate limit middleware
- [x] Configured per-route limits

**Open questions:**
- Should we expose rate limit config via API?
```

Why? Worklogs create an audit trail that's invaluable for debugging, onboarding, and understanding past decisions. When someone asks "why did we use token bucket instead of sliding window?", the answer is in the worklog.
### When to Enable Worklogs
Worklogs are disabled by default. For team projects or when you want an audit trail, enable them:
```yaml
# .taskmd.yaml
worklogs: true
```

Existing worklogs are always readable regardless of this setting.
## Common Pitfalls

### Over-engineering task structure
**Problem:** Creating deeply nested directories, complex tagging schemes, and dense dependency graphs before you have more than a handful of tasks.

**Fix:** Start simple. A flat `tasks/` directory with 5-10 tasks needs no groups, no scopes, and minimal dependencies. Add structure when the simplicity starts hurting.
### Stale backlogs
**Problem:** A long list of pending tasks that nobody intends to work on. This makes `taskmd next` recommendations noisy and project stats misleading.

**Fix:** Regularly review pending tasks. Cancel anything you won't do in the foreseeable future. A smaller, accurate backlog is more useful than a comprehensive, stale one.

```shell
# Find old pending tasks
taskmd list --filter status=pending --sort created
```

### Dependencies as wish lists
**Problem:** Adding dependencies that express preference rather than real blockers. "I'd like to do the API before the docs" becomes a dependency that blocks docs unnecessarily.

**Fix:** Only add a dependency when task B literally cannot be completed without task A's output. Use priority and effort to express ordering preferences instead.
### Huge tasks
**Problem:** Tasks with 15+ subtasks that take days to complete. Progress is invisible, and the task becomes a graveyard of half-checked items.

**Fix:** Split large tasks into 2-4 focused tasks with clear deliverables. Use dependencies to sequence them. Each task should be completable in a single work session.
### Inconsistent tags
**Problem:** Using `frontend`, `front-end`, `fe`, and `ui` interchangeably. Filtering by any single tag misses related tasks.

**Fix:** Establish a tag vocabulary early and document it. Use `taskmd list --format json | jq '.[].tags[]' | sort -u` to audit your current tags.
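A one-off normalization pass can be sketched with `sed`, using the tag spellings above. It's demonstrated on inline input rather than real task files:

```shell
# Rewrite "front-end" and "fe" list entries to "frontend"
printf '%s\n' '- front-end' '- fe' '- backend' \
  | sed -e 's/^- front-end$/- frontend/' -e 's/^- fe$/- frontend/'
```

Once the output looks right, the same expressions can be applied in place to your task files (note that `sed -i` syntax differs between GNU and BSD sed).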
### Ignoring validation
**Problem:** Task files with typos in status values, broken dependency references, or duplicate IDs. These cause silent failures in `next`, `graph`, and `tracks`.

**Fix:** Run `taskmd validate` before committing. Better yet, add it to your pre-commit hook.
## Project Organization Examples

### Small Personal Project
```
my-app/
├── .taskmd.yaml
├── tasks/
│   ├── 001-setup.md
│   ├── 002-core-feature.md
│   ├── 003-polish.md
│   └── 004-deploy.md
└── src/
```

```yaml
# .taskmd.yaml
dir: ./tasks
worklogs: false
```

Simple, flat, no groups. Worklogs off because it's just you.
### Medium Team Project
```
platform/
├── .taskmd.yaml
├── tasks/
│   ├── api/
│   │   ├── 010-auth.md
│   │   ├── 011-users-endpoint.md
│   │   └── 012-payments.md
│   ├── web/
│   │   ├── 020-dashboard.md
│   │   └── 021-settings-page.md
│   └── infra/
│       ├── 030-ci-pipeline.md
│       └── 031-monitoring.md
├── apps/
└── docs/
```

```yaml
# .taskmd.yaml
dir: ./tasks
workflow: pr-review
scopes:
  api/auth:
    paths: ["apps/api/auth/"]
  api/users:
    paths: ["apps/api/users/"]
  web/dashboard:
    paths: ["apps/web/src/dashboard/"]
```

Grouped by area, scopes defined for parallel work, PR-review workflow for code review.
### Large Project with AI Assistants
```
enterprise/
├── .taskmd.yaml
├── tasks/
│   ├── 001-specification.md
│   ├── backend/
│   │   ├── .worklogs/
│   │   │   ├── 010.md
│   │   │   └── 011.md
│   │   ├── 010-data-model.md
│   │   └── 011-api-layer.md
│   ├── frontend/
│   │   ├── .worklogs/
│   │   ├── 020-component-lib.md
│   │   └── 021-app-shell.md
│   └── devops/
│       ├── 030-terraform.md
│       └── 031-monitoring.md
├── CLAUDE.md     # AI assistant instructions
└── .mcp.json     # MCP server config
```

```yaml
# .taskmd.yaml
dir: ./tasks
workflow: pr-review
worklogs: true
scopes:
  backend/models:
    description: "Data models and database schema"
    paths: ["apps/backend/models/"]
  backend/api:
    description: "REST API handlers and routing"
    paths: ["apps/backend/api/"]
  frontend/components:
    paths: ["apps/frontend/src/components/"]
```

Full setup with worklogs for audit trail, scopes for parallel AI sessions, and PR-review workflow for team coordination.
## What's Next?
- CLI Guide -- full command reference
- Configuration -- all config options
- Task Specification -- complete format reference
- Claude Code Plugin -- AI assistant integration