Getting Started with MCAF

A practical guide to implementing MCAF in your repository.

Quick Start

Get MCAF running in your repository:

  1. Bootstrap AGENTS.md with AI analysis
  2. Create documentation structure in docs/
  3. Document existing features
  4. Create ADRs for existing decisions
  5. Write feature docs before coding (ongoing workflow)
  6. Set up test environment
  7. Configure CI pipeline

Step 1: Bootstrap AGENTS.md

Download the templates from Templates.

CLAUDE.md references AGENTS.md, so Claude reads the same rules as other AI agents.
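If you use Claude, CLAUDE.md can be nothing more than a pointer to the shared rules. A minimal sketch (the exact wording is up to you):

  See AGENTS.md for all project rules, commands, and conventions.
  Read AGENTS.md before making any changes.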

The AI agent will analyze your project and fill in the template with actual commands, patterns, and conventions found in your codebase.

What you get: A customized AGENTS.md with your tech stack, build commands, code style, and workflow patterns.

Prompt:

Analyze this project and fill in AGENTS.md:

1. Detect tech stack — language, framework, versions from config files
2. Scan codebase — folders, modules, layers, architecture
3. Read existing code — patterns, conventions, naming styles
4. Check git history — commit message format, branch naming, team patterns
5. Find existing docs — README, comments with rules, ADRs if they exist
6. Analyze tests — structure, frameworks, how they are organized

Fill each AGENTS.md section:
- Project name and detected stack
- Commands — actual build/test/format commands
- Task Delivery — workflow based on git patterns
- Testing — rules based on test structure
- Code Style — conventions from existing code
- Boundaries — protected/critical areas

Keep Self-Learning section as-is.
Report what you found.
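As a rough illustration, a filled-in Commands section for a .NET project might end up looking like this (the commands are placeholders; yours come from whatever the agent detects in your repository):

  ## Commands

  - Build: dotnet build
  - Test: dotnet test
  - Format: dotnet format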

Step 2: Create Documentation Structure

Create a docs/ folder with subfolders for different types of documentation. This gives AI agents and developers a clear place to find and add documentation.

What you get: Organized folder structure ready for feature specs, ADRs, and development guides.

Prompt:

Create documentation structure for this project:

1. Create docs/ folder with subfolders:
   - docs/Features/ — for feature specifications
   - docs/ADR/ — for architecture decisions
   - docs/Testing/ — for test strategy
   - docs/Development/ — for setup and workflow
   - docs/API/ — for API documentation (if applicable)

2. Create docs/Development/setup.md with:
   - How to clone and run the project
   - Required tools and versions
   - Environment setup steps

3. Create docs/Testing/strategy.md with:
   - Test structure found in project
   - How to run tests
   - Test categories (unit/integration/e2e)

Report what you created.

Step 3: Document Existing Features

Scan the codebase for major features and modules, then create documentation for each. This captures current behavior so AI agents understand what already exists before making changes.

Use Feature-Template.md from templates.

What you get: Feature docs in docs/Features/ describing purpose, flows, components, and tests for each major feature.

Prompt:

Document existing features in this project:

1. Scan codebase for major features/modules
2. For each feature create docs/Features/{feature-name}.md with:
   - Purpose — what it does
   - Main flows — how it works
   - Components — files/classes involved
   - Tests — what tests exist for it
   - Current behavior — how it behaves now

Use template from docs/templates/Feature-Template.md if it exists.
List all features you documented.
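As an illustration, a generated doc for a hypothetical authentication feature might start like this (the feature name and details are examples, not part of the framework):

  docs/Features/user-auth.md (hypothetical example)

  Purpose: lets users register and sign in with email and password.
  Main flows: registration, login, password reset.
  Components: auth controller, user service, session store.
  Tests: API tests for login, integration test for registration.
  Current behavior: accounts lock after repeated failed login attempts.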

Step 4: Create ADRs for Existing Decisions

Document architectural decisions that were already made in the project. This prevents AI agents from suggesting changes that conflict with existing architecture.

Use ADR-Template.md from templates.

What you get: ADRs in docs/ADR/ explaining why the database, framework, auth approach, and other technical choices were made.

Prompt:

Create ADRs for architectural decisions found in this project:

1. Analyze codebase for architectural patterns:
   - Database choice
   - Framework choice
   - Authentication approach
   - API structure
   - Any significant technical decisions

2. For each decision create docs/ADR/{number}-{title}.md with:
   - Status: Accepted (already implemented)
   - Context: Why this decision was needed
   - Decision: What was chosen
   - Consequences: Trade-offs

Use template from docs/templates/ADR-Template.md if it exists.
List all ADRs you created.
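For example, a hypothetical docs/ADR/001-database-choice.md (names and details are placeholders) could read:

  docs/ADR/001-database-choice.md (hypothetical example)

  Status: Accepted (already implemented)
  Context: The service needs durable relational storage with transactional guarantees.
  Decision: Use PostgreSQL as the primary data store.
  Consequences: Mature tooling and strong consistency; the team must manage schema migrations and operate PostgreSQL.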

Step 5: Write Feature Docs (Ongoing Workflow)

For new features, write documentation before coding. This is your ongoing workflow after bootstrap.

What you get: Clear specification that both humans and AI agents can implement without guessing.

Include the same sections used when documenting existing features: purpose, main flows, components, and the test flows that will verify the behaviour.

Feature docs should be precise enough that a developer or an AI agent can implement the feature without guessing at the intended behaviour.
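For example, a flow for a hypothetical password-reset feature, written at that level of precision, might read:

  Flow: password reset
  Given a registered user requests a reset link,
  when they open the link within one hour and submit a new password,
  then the old password is rejected and the new one is accepted.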


Step 6: Set Up Tests

Integration tests are the backbone of MCAF. Configure your test environment to use real dependencies instead of mocks.

What you get: Test infrastructure that catches real integration issues, not just unit-level bugs.

Principles:

- Test against real dependencies (databases, queues, external services) running in containers, not mocks
- Give every significant behaviour at least one integration, API, or UI test
- Keep the test environment reproducible so the same tests run locally and in CI

For .NET projects, tools such as Testcontainers for .NET and ASP.NET Core's WebApplicationFactory are common choices.

The specific tools matter less than the principle: test real behaviour with real dependencies.
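As an illustration, here is a minimal sketch of such a test in Python, assuming pytest, SQLAlchemy, the testcontainers package, and a local Docker daemon; the same shape applies to any stack:

  # Integration test against a real database, not a mock.
  # Assumes: pytest, sqlalchemy, testcontainers[postgres], and Docker running locally.
  from sqlalchemy import create_engine, text
  from testcontainers.postgres import PostgresContainer


  def test_user_round_trip():
      # Start a throwaway PostgreSQL container for this test.
      with PostgresContainer("postgres:16") as postgres:
          engine = create_engine(postgres.get_connection_url())
          with engine.begin() as conn:
              conn.execute(text("CREATE TABLE users (id serial PRIMARY KEY, name text)"))
              conn.execute(text("INSERT INTO users (name) VALUES ('alice')"))
              name = conn.execute(text("SELECT name FROM users")).scalar_one()
      # Behaviour verified against a real Postgres instance.
      assert name == "alice"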


Step 7: Configure CI

Set up CI to run all tests automatically. This ensures every PR is verified before merge.

What you get: Automated quality gate that runs build, tests, and static analysis.

The CI pipeline should:

- Run on every pull request
- Build the project
- Run the full test suite, including integration tests against real containers
- Run static analysis
- Block merging when any step fails

Both GitHub Actions and Azure DevOps support containerized test environments.

Docker is available by default on hosted Linux runners.
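As a rough sketch, a GitHub Actions workflow in .github/workflows/ci.yml could look like this (step names and commands are placeholders; substitute the commands recorded in AGENTS.md):

  name: CI
  on: [push, pull_request]

  jobs:
    verify:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Build
          run: make build          # your build command from AGENTS.md
        - name: Test
          run: make test           # integration tests start containers via Docker
        - name: Static analysis
          run: make lint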


Working with AI Agents

Delegated Mode

Agent does most work. Human reviews and merges.

Best for: routine, well-specified tasks where the feature doc and tests already pin down the expected behaviour.

Collaborative Mode

Agent and human work together throughout.

Best for: larger or more ambiguous features where requirements are still taking shape.

Consultative Mode

Agent advises. Human implements.

Best for: critical or unfamiliar areas of the codebase where a human should stay hands-on.


FAQ

Does MCAF work with any programming language?
Yes. MCAF is language-agnostic. Define the build, test, and format commands for your stack.

Do I need all documentation folders?
Start with Features/, ADR/, Testing/, and Development/. Add others as needed.

What if my team doesn't have a dedicated QA?
Developers take the QA perspective. The point is ensuring test coverage, not having a specific role.

Why avoid mocking internal services?
Mocks hide integration bugs. Real containers catch issues that mocks miss.

How much test coverage is enough?
Every significant behaviour needs at least one integration/API/UI test. Focus on workflows, not percentages.

Which AI agents work with MCAF?
Any AI coding assistant that can read files. Point them to AGENTS.md.

How does the agent learn my preferences?
Update AGENTS.md when you give feedback. Chat is not memory — the file is.

Can I adopt MCAF gradually?
Yes. Start with AGENTS.md and one feature doc. Add structure as you go.

Common Mistakes

Writing code before docs

Write feature doc with test flows first. Then implement.

Mocking everything

Use real containers. Catch real integration issues.

Treating AGENTS.md as static

Update after every significant feedback or pattern discovery.

Skipping the plan step

Even small changes benefit from explicit planning.


Example Project Structure

my-project/
├── .github/
│   └── workflows/
│       └── ci.yml
├── docs/
│   ├── Features/
│   │   ├── user-auth.md
│   │   └── payment-flow.md
│   ├── ADR/
│   │   ├── 001-database-choice.md
│   │   └── 002-auth-strategy.md
│   ├── Testing/
│   │   └── strategy.md
│   ├── Development/
│   │   └── setup.md
│   └── API/
│       └── endpoints.md
├── src/
│   └── ...
├── tests/
│   ├── integration/
│   ├── api/
│   └── ui/
├── AGENTS.md
└── README.md

Next Steps

  1. Copy templates from Templates
  2. Read the full MCAF Guide for detailed specifications
  3. Set up your first feature using this workflow
  4. Iterate and improve your AGENTS.md as you learn

Need help? Open an issue on GitHub.