Doc Linting

Master this essential documentation concept

Quick Definition

An automated quality-checking process that scans documentation for style inconsistencies, formatting errors, and structural problems, similar to how code linters check source code.

How Doc Linting Works

```mermaid
graph TD
    A["Documentation Source Files<br/>.md / .rst / .adoc"] --> B["Doc Linter Engine"]
    B --> C{"Rule Violations<br/>Detected?"}
    C -->|Style Errors| D["Heading Hierarchy Violations<br/>e.g. H1 → H3 skip"]
    C -->|Formatting Errors| E["Broken Links &<br/>Malformed Code Blocks"]
    C -->|Structural Issues| F["Missing Required Sections<br/>e.g. no Prerequisites"]
    C -->|Pass| G["✅ Docs Approved<br/>for Publishing"]
    D --> H["Linting Report<br/>with Line Numbers"]
    E --> H
    F --> H
    H --> I["Author Fixes Issues"]
    I --> B
```

Understanding Doc Linting

Doc linting applies the same principle as code linting to prose and markup. A linter parses documentation source files, evaluates them against a configurable ruleset, and reports each violation with the file and line number where it occurs. Rules typically cover style (heading capitalization, voice), formatting (broken links, malformed code blocks), and structure (missing required sections). Because the checks are automated, they run consistently on every commit instead of depending on a reviewer noticing problems.

Key Features

  • Automated enforcement of style rules (heading hierarchy, capitalization, voice)
  • Detection of broken links and malformed code blocks
  • Structural validation against required-section templates
  • CI/CD integration that blocks merges until violations are fixed

Benefits for Documentation Teams

  • Cuts time spent on style nitpicks during review
  • Keeps content consistent across many authors
  • Catches broken links and missing sections before publication
  • Moves quality feedback earlier in the writing workflow

Making Doc Linting Standards Searchable Across Your Team

Many documentation teams establish their doc linting rules and style enforcement workflows through recorded onboarding sessions, internal training videos, or meeting recordings where senior writers walk through linting configurations and explain why certain rules exist. This works well in the moment, but it creates a fragile knowledge structure over time.

The core problem with video-only approaches to doc linting is discoverability. When a new team member encounters a linting error they don't recognize — say, a flagged heading hierarchy or a prohibited passive-voice construction — they can't easily search a recording for the answer. They either interrupt a colleague or skip past the rule entirely, which defeats the purpose of having automated quality checks in the first place.

Converting those recordings into structured documentation changes this dynamic directly. Your doc linting conventions, the reasoning behind specific rule configurations, and examples of common violations become searchable, linkable, and referenceable at the exact moment someone needs them. A writer troubleshooting a failed lint check can find the relevant explanation in seconds rather than scrubbing through a forty-minute onboarding video.

If your team documents doc linting standards through recorded sessions or walkthroughs, there's a more practical way to make that knowledge accessible when it matters most.

Real-World Documentation Use Cases

Enforcing Consistent Heading Styles Across a Multi-Author API Reference

Problem

A team of 12 engineers each write API endpoint documentation independently, resulting in wildly inconsistent heading capitalization (Title Case vs. sentence case), skipped heading levels (H1 jumping to H3), and mixed use of bold vs. heading tags for section titles. Readers cannot predict document structure.

Solution

Doc Linting enforces a shared style ruleset — mandating sentence-case headings, sequential heading levels, and prohibiting bold text as a substitute for headings — catching violations before any PR is merged into the docs repository.

Implementation

  • Install Vale or markdownlint with a custom ruleset file (.vale.ini or .markdownlint.json) that encodes heading-level sequencing and capitalization rules.
  • Add a pre-commit hook using Husky or lefthook that runs the linter against only changed documentation files on every git commit.
  • Configure the CI pipeline (GitHub Actions or GitLab CI) to block PR merges when heading violations are reported, outputting the file path and line number of each error.
  • Publish the ruleset and a "Doc Style Guide Quick Reference" in the team wiki so authors understand why rules exist and how to fix flagged issues.
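The heading-sequencing rule in the first step can be sketched in a few lines of plain Python. This is a minimal illustration, not Vale's or markdownlint's actual implementation; it scans ATX headings with a regex and would need fence-awareness (to skip `#` inside code blocks) before real use.

```python
import re

def check_heading_sequence(markdown_text):
    """Flag headings that skip a level (e.g. H1 followed directly by H3)."""
    violations = []
    prev_level = 0
    for lineno, line in enumerate(markdown_text.splitlines(), start=1):
        # Match ATX headings: 1-6 '#' characters followed by whitespace.
        match = re.match(r"(#{1,6})\s", line)
        if not match:
            continue
        level = len(match.group(1))
        if prev_level and level > prev_level + 1:
            violations.append(
                f"line {lineno}: heading skips a level (H{prev_level} -> H{level})"
            )
        prev_level = level
    return violations
```

Reporting the line number alongside the skipped levels is what lets CI surface the error exactly where the author needs to fix it.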

Expected Outcome

Heading inconsistencies drop to near-zero within two sprint cycles; code review time spent on style comments decreases by roughly 40% as linting handles enforcement automatically.

Detecting Broken Internal Links Before a Docs Site Deployment

Problem

After a large documentation restructure, dozens of cross-reference links pointing to renamed or deleted pages go unnoticed. Users encounter 404 errors on a freshly published docs site, eroding trust and generating support tickets.

Solution

Doc Linting with a link-checking rule set scans every internal href and relative file reference during the build pipeline, flagging dead links before the deployment artifact is ever pushed to production.

Implementation

["Integrate a linting tool such as lychee, htmltest, or Sphinx's linkcheck builder into the documentation build step of the CI/CD pipeline.", 'Differentiate between internal link checks (run on every PR) and external link checks (run nightly, since external URLs can be flaky) to avoid false-positive build failures.', 'Configure the linter to output a structured report (JSON or JUnit XML) that the CI system can parse and display as annotated PR comments showing exactly which file and line contains the broken reference.', 'Set the pipeline to fail on any broken internal link but only warn on external link failures, keeping the build signal meaningful.']

Expected Outcome

Zero broken internal links reach the production docs site; the team catches an average of 15-20 broken references per major restructure before they ever affect end users.

Standardizing Code Block Language Tags in a Developer SDK Guide

Problem

Contributors annotate fenced code blocks inconsistently — using 'js', 'javascript', 'JavaScript', and 'node' interchangeably — causing syntax highlighting to fail in some renderers and making automated code-sample extraction scripts break unpredictably.

Solution

A custom Doc Linting rule enforces an approved list of language identifiers for fenced code blocks, rejecting ambiguous or non-standard tags and requiring every code block to have an explicit language annotation.

Implementation

["Define an allowed-languages list in the linting configuration (e.g., only 'javascript', 'python', 'bash', 'json', 'yaml' are permitted) and write a custom Vale rule or markdownlint custom rule that checks the opening fence tag.", 'Run the linter as part of the docs build in CI, surfacing each violation with the file name, line number, and the offending tag alongside a suggestion for the correct identifier.', 'Add a one-time bulk-fix script using sed or a Python AST parser to migrate existing non-standard tags across the entire repository before enabling the rule as a hard failure.', 'Document the approved language tag list in the contributing guide with a table mapping common aliases to their canonical form.']

Expected Outcome

Syntax highlighting works correctly across all renderers for 100% of code blocks; automated code-sample extraction pipelines run without errors after the tag standardization is enforced.

Validating Required Sections in Runbook Templates for an SRE Team

Problem

SRE runbooks are supposed to include mandatory sections — 'Impact', 'Detection', 'Mitigation Steps', and 'Escalation Path' — but authors frequently omit one or more sections under deadline pressure. During incidents, on-call engineers waste critical minutes discovering the escalation path is missing.

Solution

Doc Linting checks every runbook file against a structural schema that asserts required H2 sections exist in the correct order, blocking the runbook from being merged into the on-call rotation repository until all mandatory sections are present.

Implementation

["Define a Vale or custom Python linting rule that parses the markdown AST and verifies the presence and order of the required H2 headings: 'Impact', 'Detection', 'Mitigation Steps', and 'Escalation Path'.", 'Wire the linter into the GitHub Actions workflow triggered on pull requests to the runbooks repository, posting inline PR review comments that name exactly which required section is absent.', 'Create a runbook template file (TEMPLATE.md) in the repository root that pre-populates all required sections, reducing the chance of omission from the start.', 'Set the linting check as a required status check in branch protection rules so runbooks cannot be merged without passing structural validation.']

Expected Outcome

All runbooks in the on-call rotation contain the four required sections; during a major incident post-mortem, the team confirms that complete runbooks reduced mean-time-to-mitigate by an estimated 8 minutes per incident.

Best Practices

Start with a Minimal Ruleset and Expand Incrementally

Enabling every available linting rule at once on an existing documentation repository will generate thousands of violations, overwhelming authors and causing teams to disable linting entirely out of frustration. Begin with three to five high-impact rules — such as heading level sequencing, broken internal links, and missing code block language tags — and add new rules only after the team has resolved existing violations. This incremental approach builds trust in the tooling without creating an unmanageable backlog.

✓ Do: Enable a small, agreed-upon rule subset first, fix all violations, then hold a team retrospective to decide which rules to add next based on recurring review feedback.
✗ Don't: Don't activate the full default ruleset of a tool like Vale or markdownlint on day one of adoption; the flood of errors will cause the team to treat linting as noise rather than signal.
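As a concrete starting point, a minimal .markdownlint.json can disable everything and then opt in to a handful of high-impact rules. MD001 (heading increment) and MD040 (fenced code blocks must have a language) are real markdownlint rule IDs; treat the rest of this fragment as a sketch to adapt to your own tool and ruleset.

```json
{
  "default": false,
  "MD001": true,
  "MD040": true
}
```

Once the repository is clean under these rules, flip additional rule IDs to true one at a time as the team agrees on them.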

Separate Linting Failures from Linting Warnings in CI

Not all documentation quality issues carry equal severity — a missing required section in a runbook is a hard blocker, while a passive-voice sentence in a tutorial is an advisory suggestion. Configuring your linting pipeline to distinguish between errors that block merges and warnings that are surfaced as informational comments prevents overly strict gates from slowing down legitimate documentation updates. This tiered approach keeps the CI signal trustworthy and actionable.

✓ Do: Map structural and broken-link rules to error severity (blocking) and style-preference rules like sentence length or passive voice to warning severity (non-blocking but visible).
✗ Don't: Don't treat every linting rule as a hard build failure; doing so causes authors to add lint-ignore comments liberally just to unblock their PRs, defeating the purpose of linting.
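One way to implement the tiered gate is a small severity map consulted by the CI wrapper script before choosing an exit code. The rule names below are hypothetical placeholders, not identifiers from any particular linter.

```python
# Map rule IDs to severities; unknown rules default to non-blocking warnings.
SEVERITY = {
    "required-sections": "error",       # structural: blocks the merge
    "broken-internal-link": "error",    # correctness: blocks the merge
    "passive-voice": "warning",         # style preference: advisory only
    "sentence-length": "warning",
}

def ci_exit_code(violated_rules):
    """Return 1 (fail the build) only if any violated rule is error-severity."""
    blocking = [r for r in violated_rules if SEVERITY.get(r, "warning") == "error"]
    return 1 if blocking else 0
```

Warnings still appear in the report for authors to act on; they just never turn the build red.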

Version-Control Your Linting Configuration Alongside the Docs

Storing the linting ruleset file (.vale.ini, .markdownlint.json, or equivalent) in the same repository as the documentation ensures that rule changes are reviewed through the same pull request process as content changes. This prevents a single person from silently loosening rules to suppress violations, and it gives the team a clear audit trail of when and why specific rules were added or modified. It also ensures every contributor automatically gets the correct ruleset when they clone the repository.

✓ Do: Commit your linting configuration file to the root of the documentation repository and require a brief PR description explaining the rationale whenever the ruleset is modified.
✗ Don't: Don't store linting configuration only in a CI environment variable or a team wiki page that is decoupled from the actual documentation source, as this leads to configuration drift and inconsistent local developer experiences.

Provide Actionable Fix Guidance in Linting Error Messages

A linting error that says only 'heading violation on line 42' forces the author to look up the rule definition before they can fix the problem, adding unnecessary friction. Well-configured doc linters include a short explanation of the rule and a concrete example of the correct form directly in the error output. Investing time in writing clear rule descriptions — or choosing tools like Vale that support custom message templates — dramatically reduces the back-and-forth between authors and reviewers.

✓ Do: Write custom error message templates that include the rule rationale and a corrected example, such as: 'Heading skips a level (H1 → H3). Use H2 for this subsection.'
✗ Don't: Don't rely solely on cryptic rule identifiers like 'MD001' or 'heading-increment' without accompanying human-readable explanations; new contributors cannot act on error codes they have never encountered before.

Run Doc Linting Locally Before CI to Shift Feedback Left

Waiting for a CI pipeline to surface linting errors after a pull request is opened introduces a slow feedback loop — the author has already context-switched away from the writing task by the time the report arrives. Providing a simple local linting command (e.g., 'make lint-docs' or a pre-commit hook) lets authors catch and fix violations within seconds of writing them, when the context is still fresh. Faster feedback loops produce higher-quality first drafts and reduce the total number of CI pipeline runs consumed by fixup commits.

✓ Do: Configure a pre-commit hook using the pre-commit framework or Husky that runs the doc linter against staged documentation files, and document the one-line setup command prominently in the contributing guide.
✗ Don't: Don't make local linting setup complicated or optional; if running the linter locally requires more than two commands to set up, most contributors will skip it and rely entirely on CI feedback instead.
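As one way to wire this up with the pre-commit framework, a .pre-commit-config.yaml like the following runs markdownlint against staged markdown files; the repository shown is the commonly used markdownlint-cli hook source, but the pinned rev is illustrative, so check its releases for a current tag.

```yaml
# .pre-commit-config.yaml — lints staged .md files before each commit.
repos:
  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.41.0          # illustrative pin; update to your team's chosen release
    hooks:
      - id: markdownlint
        files: \.md$
```

After committing this file, the one-line setup for contributors is `pre-commit install`, which is exactly the kind of command to put at the top of the contributing guide.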
