Getting Started with MCAF
A practical guide to implementing MCAF in your repository.
Quick Start
Get MCAF running in your repository:
- Bootstrap AGENTS.md with AI analysis
- Create documentation structure in docs/
- Document existing features
- Create ADRs for existing decisions
- Write feature docs before coding (ongoing workflow)
- Set up test environment
- Configure CI pipeline
Step 1: Bootstrap AGENTS.md
Download the templates from the Templates page:
- AGENTS.md — copy to repository root
- CLAUDE.md — copy to repository root (for Claude Code users)
CLAUDE.md references AGENTS.md, so Claude reads the same rules as other AI agents.
The AI agent will analyze your project and fill in the template with actual commands, patterns, and conventions found in your codebase.
What you get: A customized AGENTS.md with your tech stack, build commands, code style, and workflow patterns.
Prompt:
Analyze this project and fill in AGENTS.md:
1. Detect tech stack — language, framework, versions from config files
2. Scan codebase — folders, modules, layers, architecture
3. Read existing code — patterns, conventions, naming styles
4. Check git history — commit message format, branch naming, team patterns
5. Find existing docs — README, comments with rules, ADRs if any exist
6. Analyze tests — structure, frameworks, how they are organized
Fill each AGENTS.md section:
- Project name and detected stack
- Commands — actual build/test/format commands
- Task Delivery — workflow based on git patterns
- Testing — rules based on test structure
- Code Style — conventions from existing code
- Boundaries — protected/critical areas
Keep Self-Learning section as-is.
Report what you found.
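To give a sense of the output, a filled-in excerpt might look like the sketch below. The stack, commands, and protected folder are invented placeholders, not recommendations; the agent records whatever it actually finds in your repository.

```markdown
# AGENTS.md (excerpt)

## Commands
- Build: `dotnet build`      (placeholder; your real build command goes here)
- Test: `dotnet test`        (placeholder)
- Format: `dotnet format`    (placeholder)

## Code Style
- Match the naming and layering conventions already present in src/.

## Boundaries
- Do not modify files under src/Payments/ without an approved plan.  (placeholder)
```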
Step 2: Create Documentation Structure
Create a docs/ folder with subfolders for different types of documentation. This gives AI agents and developers a clear place to find and add documentation.
What you get: Organized folder structure ready for feature specs, ADRs, and development guides.
Prompt:
Create documentation structure for this project:
1. Create docs/ folder with subfolders:
- docs/Features/ — for feature specifications
- docs/ADR/ — for architecture decisions
- docs/Testing/ — for test strategy
- docs/Development/ — for setup and workflow
- docs/API/ — for API documentation (if applicable)
2. Create docs/Development/setup.md with:
- How to clone and run the project
- Required tools and versions
- Environment setup steps
3. Create docs/Testing/strategy.md with:
- Test structure found in project
- How to run tests
- Test categories (unit/integration/e2e)
Report what you created.
Step 3: Document Existing Features
Scan the codebase for major features and modules, then create documentation for each. This captures current behavior so AI agents understand what already exists before making changes.
Use Feature-Template.md from templates.
What you get: Feature docs in docs/Features/ describing purpose, flows, components, and tests for each major feature.
Prompt:
Document existing features in this project:
1. Scan codebase for major features/modules
2. For each feature create docs/Features/{feature-name}.md with:
- Purpose — what it does
- Main flows — how it works
- Components — files/classes involved
- Tests — what tests exist for it
- Current behavior — how it behaves now
Use the template from docs/templates/Feature-Template.md if it exists.
List all features you documented.
Step 4: Create ADRs for Existing Decisions
Document architectural decisions that were already made in the project. This prevents AI agents from suggesting changes that conflict with existing architecture.
Use ADR-Template.md from templates.
What you get: ADRs in docs/ADR/ explaining why the database, framework, auth approach, and other technical choices were made.
Prompt:
Create ADRs for architectural decisions found in this project:
1. Analyze codebase for architectural patterns:
- Database choice
- Framework choice
- Authentication approach
- API structure
- Any significant technical decisions
2. For each decision create docs/ADR/{number}-{title}.md with:
- Status: Accepted (already implemented)
- Context: Why this decision was needed
- Decision: What was chosen
- Consequences: Trade-offs
Use the template from docs/templates/ADR-Template.md if it exists.
List all ADRs you created.
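As an illustration, an ADR capturing an existing database decision might look like the minimal sketch below. The specific choice and reasons are placeholders; the structure follows the fields listed in the prompt.

```markdown
# 001: Database Choice

Status: Accepted (already implemented)

Context: The service needs durable relational storage and the team
already operates PostgreSQL. (illustrative)

Decision: Use PostgreSQL as the primary datastore. (illustrative)

Consequences: Strong consistency and mature tooling; schema migrations
must be managed explicitly. (illustrative)
```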
Step 5: Write Feature Docs (Ongoing Workflow)
For new features, write documentation before coding. This is your ongoing workflow after bootstrap.
What you get: Clear specification that both humans and AI agents can implement without guessing.
Include:
- Feature name and purpose
- Business rules and constraints
- Main flow description
- Test flows (positive, negative, edge cases)
- Definition of Done
Feature docs should be precise enough that:
- A human can implement and verify the feature
- An AI agent can derive code and tests without inventing behaviour
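A minimal skeleton following the sections above might look like this; the feature, rules, and flows named here are placeholders for your own.

```markdown
# Feature: Password Reset   (placeholder feature name)

## Purpose
Allow users to reset a forgotten password via an emailed link. (placeholder)

## Business Rules
- Reset links expire after a fixed time window. (placeholder)

## Main Flow
1. User requests a reset, receives an email, and sets a new password.

## Test Flows
- Positive: a valid link sets a new password.
- Negative: an expired or reused link is rejected.
- Edge: a request for an unknown email does not reveal whether an account exists.

## Definition of Done
- All test flows are implemented and passing in CI.
```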
Step 6: Set Up Tests
Integration tests are the backbone of MCAF. Configure your test environment to use real dependencies instead of mocks.
What you get: Test infrastructure that catches real integration issues, not just unit-level bugs.
Principles:
- Use real dependencies, not mocks
- Internal systems (database, cache, queues) run in containers
- Test environment starts from documented scripts
- Same commands work locally and in CI
For .NET projects, consider:
- Aspire for container orchestration
- TUnit for test framework
- WebApplicationFactory for integration tests
- Playwright for UI tests
The specific tools matter less than the principle: test real behaviour with real dependencies.
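As a rough illustration of the principle in a .NET project, an integration test can start a real database in a container and exercise the API in-process. The sketch below is not part of MCAF; it assumes xUnit (rather than TUnit), Microsoft.AspNetCore.Mvc.Testing, Testcontainers.PostgreSql, a publicly accessible Program entry point, and placeholder endpoint and connection-string names.

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Testcontainers.PostgreSql;
using Xunit;

public class UsersApiTests : IAsyncLifetime
{
    // Real PostgreSQL in a container, not a mock.
    private readonly PostgreSqlContainer _db = new PostgreSqlBuilder().Build();
    private WebApplicationFactory<Program> _factory = null!;

    public async Task InitializeAsync()
    {
        await _db.StartAsync();
        // Point the app at the containerized database.
        // "ConnectionStrings:Default" is a placeholder configuration key.
        _factory = new WebApplicationFactory<Program>()
            .WithWebHostBuilder(b =>
                b.UseSetting("ConnectionStrings:Default", _db.GetConnectionString()));
    }

    [Fact]
    public async Task GetUsers_ReturnsOk()
    {
        var client = _factory.CreateClient();            // in-process test server
        var response = await client.GetAsync("/api/users"); // placeholder endpoint
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }

    public async Task DisposeAsync()
    {
        await _factory.DisposeAsync();
        await _db.StopAsync();
    }
}
```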
Step 7: Configure CI
Set up CI to run all tests automatically. This ensures every PR is verified before merge.
What you get: Automated quality gate that runs build, tests, and static analysis.
CI pipeline should:
- Build the solution
- Run all tests (unit, integration, API, UI)
- Run static analysis
- Fail on test failures or violations
Both GitHub Actions and Azure DevOps support containerized test environments.
Docker is available by default on hosted runners.
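For instance, a GitHub Actions workflow along these lines would cover the gate above. The script names are placeholders for whatever commands your AGENTS.md records; the job fails automatically if any step exits non-zero.

```yaml
# Sketch only: adapt job names and commands to your project.
name: ci
on: [push, pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest        # Docker is available on hosted runners
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: ./scripts/build.sh   # placeholder build command
      - name: Run all tests
        run: ./scripts/test.sh    # unit, integration, API, UI
      - name: Static analysis
        run: ./scripts/lint.sh    # placeholder analyzer command
```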
Working with AI Agents
Delegated Mode
Agent does most work. Human reviews and merges.
Best for:
- Bug fixes with clear reproduction
- Features with complete documentation
- Routine refactoring
Collaborative Mode
Agent and human work together throughout.
Best for:
- Complex features
- Architectural changes
- High-risk modifications
Consultative Mode
Agent advises. Human implements.
Best for:
- Security-sensitive code
- Learning new codebase
- Design exploration
FAQ
Does MCAF work with any programming language?
Yes. The workflow is language-agnostic; fill in the build, test, and format commands for your stack.
Do I need all documentation folders?
No. Start with Features/, ADR/, Testing/, and Development/. Add others as needed.
What if my team doesn't have a dedicated QA?
Why avoid mocking internal services?
Mocks hide real integration issues. Running internal dependencies (database, cache, queues) in containers tests real behaviour.
How much test coverage is enough?
Which AI agents work with MCAF?
Any agent that reads its instructions from AGENTS.md.
How does the agent learn my preferences?
The agent updates AGENTS.md when you give feedback. Chat is not memory — the file is.
Can I adopt MCAF gradually?
Yes. Start with AGENTS.md and one feature doc. Add structure as you go.
Common Mistakes
Writing code before docs
Write feature doc with test flows first. Then implement.
Mocking everything
Use real containers. Catch real integration issues.
Treating AGENTS.md as static
Update after every significant feedback or pattern discovery.
Skipping the plan step
Even small changes benefit from explicit planning.
Example Project Structure
my-project/
├── .github/
│   └── workflows/
│       └── ci.yml
├── docs/
│   ├── Features/
│   │   ├── user-auth.md
│   │   └── payment-flow.md
│   ├── ADR/
│   │   ├── 001-database-choice.md
│   │   └── 002-auth-strategy.md
│   ├── Testing/
│   │   └── strategy.md
│   ├── Development/
│   │   └── setup.md
│   └── API/
│       └── endpoints.md
├── src/
│   └── ...
├── tests/
│   ├── integration/
│   ├── api/
│   └── ui/
├── AGENTS.md
└── README.md
Next Steps
- Copy templates from the Templates page
- Read the full MCAF Guide for detailed specifications
- Set up your first feature using this workflow
- Iterate and improve your AGENTS.md as you learn
Need help? Open an issue on GitHub.