MCAF Concepts

Managed Code Coding AI Framework

Developed and sustained by Managed Code
March 2026



1. What MCAF Is

MCAF is a framework for building real software with AI coding agents.

It defines how to:

The goal of MCAF:

Use AI to build real products in a way that is predictable, safe, and repeatable.

MCAF has three core elements:

These concepts define the framework (the “what” and “why”).
TUTORIAL.md is the bootstrap procedure (the “how”).
Repository AGENTS.md files apply both to a specific solution.

1.1 Bootstrap Surface

v1.2 is skill-first.

Bootstrap stays minimal:

Canonical install entry point:

Optional direct shortcuts:

2. Context

Context is everything needed to understand, change, and run the system.

2.1 Repository Context

In MCAF, repository context includes:

Anything that materially affects development, verification, or operation belongs in the repo.

2.2 Documentation Layout

A typical MCAF repo keeps durable docs under docs/:

This is a reference layout, not a rigid folder law. The important part is that the repo has clear homes for architecture, behaviour, testing, development, and operations.
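As a hedged illustration, one possible mapping of those five areas onto docs/ might look like this (the folder names are illustrative, not mandated by MCAF):

```
docs/
  architecture/    # system structure, ADRs
  features/        # behaviour and feature specs
  testing/         # test strategy and evidence
  development/     # local setup and conventions
  operations/      # running, deploying, observing
```

Any layout works as long as each of these concerns has one obvious home.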

2.3 Bootstrap Templates

Public bootstrap templates are intentionally minimal:

Authoring scaffolds for architecture docs, feature specs, ADRs, governance, and maintainability do not live in docs/templates/. They live in skills under references/ or assets/.

2.4 Skills

Skills are small, versioned workflow packs that make repetitive agent work predictable.

A skill contains:

Recommended target locations in a consuming repo:

The public skill catalog lives on the Skills page:

Platform-specific bundles can stay small and still be explicit. For example, a typical .NET repo baseline can install:

  - mcaf-dotnet as the entry skill
  - mcaf-dotnet-features
  - mcaf-solution-governance
  - mcaf-testing
  - exactly one of mcaf-dotnet-xunit, mcaf-dotnet-tunit, or mcaf-dotnet-mstest
  - mcaf-dotnet-quality-ci
  - mcaf-dotnet-complexity
  - mcaf-solid-maintainability
  - mcaf-architecture-overview
  - mcaf-ci-cd

In that setup, mcaf-dotnet knows when to open the more specific .NET skills, the repo-root lowercase .editorconfig is the default source of truth for formatting and analyzer severity, and AGENTS.md records the exact dotnet build, dotnet test, dotnet format, analyze, and coverage commands. Nested .editorconfig files are allowed when they serve a clear subtree-specific purpose, such as stricter domain rules, generated-code handling, test-specific conventions, or legacy-code containment.

For .NET code changes, the task is not done when tests are green if the repo has also configured formatters, analyzers, coverage, architecture tests, or security gates. Agents should run the repo-defined post-change quality pass before completion.

If the repo standardizes on concrete tools, install the matching tool skills as well. Typical open or free .NET additions include mcaf-dotnet-format, mcaf-dotnet-code-analysis, mcaf-dotnet-analyzer-config, mcaf-dotnet-stylecop-analyzers, mcaf-dotnet-roslynator, mcaf-dotnet-meziantou-analyzer, mcaf-dotnet-cloc, mcaf-dotnet-coverlet, mcaf-dotnet-profiling, mcaf-dotnet-quickdup, mcaf-dotnet-reportgenerator, mcaf-dotnet-resharper-clt, mcaf-dotnet-stryker, mcaf-dotnet-netarchtest, mcaf-dotnet-archunitnet, and mcaf-dotnet-csharpier. mcaf-dotnet-codeql stays available, but should be chosen only when its hosting and licensing model fits the repository.

Every mcaf-dotnet* tool skill should include a Bootstrap When Missing section so agents can detect, install, verify, and first-run the tool without guessing.
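As a hedged sketch of the "AGENTS.md records the exact commands" idea, a repo's command section might look like the fragment below. The specific flags are one plausible choice, not MCAF requirements; the real commands live only in the consuming repo's AGENTS.md:

```markdown
## Commands
<!-- Illustrative starter set; replace with this repo's actual commands. -->
- Build: `dotnet build -c Release`
- Test: `dotnet test -c Release`
- Format check: `dotnet format --verify-no-changes`
- Coverage: `dotnet test --collect:"XPlat Code Coverage"`
```

Recording exact invocations, flags included, is what lets an agent run the quality pass without guessing.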

2.5 Context Rules

3. Verification

Verification is how the team proves that behaviour and code quality meet expectations.

3.1 Test Levels

MCAF expects layered verification:

The goal is not “one test per feature.”
The goal is enough automated evidence to trust the change.

3.2 Verification Rules

3.3 Verification Artifacts

Feature docs and ADRs should point to:

4. Instructions and AGENTS.md

Instructions define how AI agents behave in the repository and how they improve over time.

4.1 Root and Local AGENTS.md

Every MCAF repo has a solution-root AGENTS.md.

In multi-project solutions, each project or module root also has a local AGENTS.md.

Root AGENTS.md owns:

Local AGENTS.md owns:

4.2 Rule Precedence

Agents follow this order:

  1. Read the root AGENTS.md.
  2. Read the nearest local AGENTS.md.
  3. Apply the stricter rule if both apply.
  4. Do not silently weaken root policy in a local file.
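One way to picture rules 3 and 4 for a numeric maximum is the sketch below. The function name and the smaller-wins convention are illustrative, not part of MCAF; it only shows that a local file may tighten root policy but never loosen it:

```python
def effective_limit(root_limit, local_limit=None):
    """Resolve a numeric maximum under MCAF precedence:
    a local AGENTS.md may tighten the root policy, never weaken it."""
    if local_limit is None:
        return root_limit  # no local override: root policy applies
    if local_limit > root_limit:
        # Rule 4: a local file must not silently weaken root policy.
        raise ValueError("local AGENTS.md weakens root policy")
    return local_limit  # rule 3: the stricter (smaller) limit wins

# Root allows methods up to 40 lines; one module tightens this to 25.
print(effective_limit(40, 25))  # -> 25
print(effective_limit(40))      # -> 40
```

A local value above the root limit is treated as an error rather than quietly ignored, which keeps the precedence visible in review.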

4.3 Required Content

Root AGENTS.md stays current with:

Project-local AGENTS.md files stay current with:

4.4 Maintainability Limits

MCAF requires a Maintainability Limits section in AGENTS.md with stable keys:

These values are repo policy, not framework constants.

MCAF may show starter values, but the active limits live only in the consuming repo’s AGENTS.md.
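As a hedged example, such a section might look like the fragment below. The key names and numbers are illustrative starter values only; the stable keys and active limits are whatever the consuming repo's AGENTS.md defines:

```markdown
## Maintainability Limits
<!-- Starter values; the active limits are repo policy, not framework constants. -->
- max-file-lines: 400
- max-method-lines: 40
- max-cyclomatic-complexity: 10
- max-parameters: 5
```

Keeping the keys stable lets agents and tooling read the limits mechanically across revisions.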

4.5 Self-Learning

Chat is not memory.

Stable corrections, preferences, and recurring mistakes should become:

If the same mistake happens twice, the framework expects the rule to be made durable.
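A hypothetical example of making a twice-repeated correction durable (the rule wording and the TimeProvider suggestion are illustrative, not prescribed by MCAF):

```markdown
<!-- Learned rule: the same review comment appeared twice, so it is now policy. -->
- Do not call DateTime.Now in domain code; inject a clock abstraction
  (for example TimeProvider) so tests can control time.
```

Once recorded in AGENTS.md, the correction survives beyond the chat session that produced it.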

4.6 Hard Rules for Instructions

5. Coding and Testability

MCAF coding rules exist to keep systems changeable and testable.

5.1 Design Policy

5.2 Maintainability Policy

5.3 Constants and Configuration

Meaningful literals are not scattered through the codebase.

Extract shared values into:

Hardcoded values are forbidden.

String literals do not belong in implementation logic. If a string matters, define it once as a named constant, enum value, configuration entry, or dedicated type and reference that symbol everywhere else.
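A minimal sketch of that rule (names are illustrative): each meaningful string is defined once as an enum value or named constant, and implementation logic references the symbol:

```python
from enum import Enum

class OrderStatus(str, Enum):
    """Each status string exists exactly once, as a named symbol."""
    PENDING = "pending"
    SHIPPED = "shipped"

RETRY_HEADER = "X-Retry-Count"  # named once, referenced everywhere

def is_open(status: OrderStatus) -> bool:
    # Compare against the symbol, never a scattered "pending" literal.
    return status is OrderStatus.PENDING

print(is_open(OrderStatus.PENDING))  # -> True
```

If the wire value ever changes, only the single definition moves; every call site stays untouched.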

5.4 Hard Rules for Coding and Testability

6. Perspectives

MCAF describes responsibilities using four perspectives.

6.1 Product

6.2 Dev

6.3 QA

6.4 AI Agent

Humans still own approval and merge decisions.

7. Development Cycle

7.1 Describe

Before heavy coding:

  1. update or create feature docs
  2. update or create ADRs if architecture changes
  3. align test expectations
  4. identify the right skills

7.2 Plan

For non-trivial work, create a root-level <slug>.plan.md and keep it current. The plan records:

Before implementation starts, run the full relevant test baseline. If anything is already failing, add each failing test to the plan with its symptom, suspected or confirmed root cause, and intended fix path.
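As a hedged sketch, a plan file might be structured like this; the slug, headings, and test name are hypothetical placeholders, not an MCAF-mandated template:

```markdown
# fix-login-timeout.plan.md   <!-- hypothetical slug -->

## Scope

## Pre-change test baseline
- FAILING: LoginTests.Timeout_Resets
  - symptom:
  - suspected or confirmed root cause:
  - intended fix path:

## Steps

## Verification
```

Recording pre-existing failures up front keeps them from being mistaken later for regressions caused by the change.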

7.3 Implement

7.4 Verify

Run verification in layers:

  1. changed tests
  2. related suite
  3. broader required regressions and the full relevant suite
  4. analyzers, formatters, and any configured architecture, security, mutation, or other quality gates
  5. coverage comparison against the pre-change baseline
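Step 5 can be sketched as a simple comparison against the recorded baseline. The function name and the optional tolerance are illustrative; the actual gate is whatever the repo's quality pass defines:

```python
def coverage_regressed(baseline_pct: float, current_pct: float,
                       tolerance: float = 0.0) -> bool:
    """Return True when post-change coverage fell below the
    pre-change baseline (optionally allowing a small tolerance)."""
    return current_pct < baseline_pct - tolerance

print(coverage_regressed(87.5, 86.0))  # -> True: coverage dropped
print(coverage_regressed(87.5, 88.1))  # -> False: coverage held
```

The important part is that the baseline is captured before the change, so the comparison is against evidence rather than memory.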

7.5 Update Durable Context and Close the Task

After implementation:

8. AI Participation Modes

MCAF supports three common AI participation modes.

8.1 Delegated

The agent executes scoped work under current docs, skills, and AGENTS.md.

8.2 Collaborative

The agent and engineer iterate together on design, code, tests, and docs.

8.3 Consultative

The agent reviews, critiques, or drafts options while humans retain implementation control.

The repo may choose different modes per task, but the same verification and governance rules still apply.

9. Adopting MCAF in a Repository

Use the tutorial as the canonical install flow:

  1. Open the tutorial (TUTORIAL.md).
  2. Follow the tutorial flow to fetch templates and install the needed skills.
  3. In multi-project solutions, add project-local AGENTS.md files using the governance skill.
  4. Restart the agent so it reloads the installed skills.

Adoption is complete when: