Doc linting is an automated quality-checking process that scans documentation for style inconsistencies, formatting errors, and structural problems, much as code linters check source code.
Many documentation teams establish their doc linting rules and style enforcement workflows through recorded onboarding sessions, internal training videos, or meeting recordings where senior writers walk through linting configurations and explain why certain rules exist. This works well in the moment, but it creates a fragile knowledge structure over time.
The core problem with video-only approaches to doc linting is discoverability. When a new team member encounters a linting error they don't recognize — say, a flagged heading hierarchy or a prohibited passive-voice construction — they can't easily search a recording for the answer. They either interrupt a colleague or skip past the rule entirely, which defeats the purpose of having automated quality checks in the first place.
Converting those recordings into structured documentation changes this dynamic. Your doc linting conventions, the reasoning behind specific rule configurations, and examples of common violations become searchable, linkable, and referenceable at the exact moment someone needs them. A writer troubleshooting a failed lint check can find the relevant explanation in seconds rather than scrubbing through a forty-minute onboarding video.
If your team documents doc linting standards through recorded sessions or walkthroughs, turning them into searchable written references is the practical way to make that knowledge available when it matters most. The scenarios below show what doc linting looks like in practice.
Consider a team of 12 engineers who each write API endpoint documentation independently. The result is wildly inconsistent heading capitalization (Title Case vs. sentence case), skipped heading levels (an H1 jumping straight to an H3), and a mix of bold text and real heading tags for section titles. Readers cannot predict document structure.
Doc linting fixes this by enforcing a shared style ruleset (sentence-case headings, sequential heading levels, no bold text as a substitute for headings), catching violations before any PR is merged into the docs repository.
A typical rollout:

- Install Vale or markdownlint with a custom ruleset file (.vale.ini or .markdownlint.json) that encodes heading-level sequencing and capitalization rules; a minimal config sketch follows this list.
- Add a pre-commit hook using Husky or lefthook that runs the linter against only changed documentation files on every git commit.
- Configure the CI pipeline (GitHub Actions or GitLab CI) to block PR merges when heading violations are reported, outputting the file path and line number of each error.
- Publish the ruleset and a 'Doc Style Guide Quick Reference' in the team wiki so authors understand why rules exist and how to fix flagged issues.
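To make the first step concrete, here is a minimal sketch of the markdownlint half of such a ruleset, using markdownlint's built-in rule IDs; the file name and the choice to disable all other rules are illustrative assumptions:

```yaml
# .markdownlint.yaml: enable only the heading-structure rules discussed above
default: false   # opt out of all other rules while the team ramps up
MD001: true      # heading levels increment one at a time (no H1 -> H3 jumps)
MD003:
  style: "atx"   # one consistent heading syntax across every file
MD036: true      # flag bold text standing in for a real heading
```

Sentence-case capitalization is not a built-in markdownlint check; on the Vale side it is typically expressed as a rule with `extends: capitalization`, `scope: heading`, and `match: $sentence`.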
Heading inconsistencies drop to near-zero within two sprint cycles; code review time spent on style comments decreases by roughly 40% as linting handles enforcement automatically.
In another common failure mode, a large documentation restructure leaves dozens of cross-reference links pointing to renamed or deleted pages. Users hit 404 errors on a freshly published docs site, eroding trust and generating support tickets.
Doc linting with a link-checking ruleset scans every internal href and relative file reference during the build pipeline, flagging dead links before the deployment artifact is pushed to production.
["Integrate a linting tool such as lychee, htmltest, or Sphinx's linkcheck builder into the documentation build step of the CI/CD pipeline.", 'Differentiate between internal link checks (run on every PR) and external link checks (run nightly, since external URLs can be flaky) to avoid false-positive build failures.', 'Configure the linter to output a structured report (JSON or JUnit XML) that the CI system can parse and display as annotated PR comments showing exactly which file and line contains the broken reference.', 'Set the pipeline to fail on any broken internal link but only warn on external link failures, keeping the build signal meaningful.']
Zero broken internal links reach the production docs site; the team catches an average of 15-20 broken references per major restructure before they ever affect end users.
A third scenario: contributors annotate fenced code blocks inconsistently, using 'js', 'javascript', 'JavaScript', and 'node' interchangeably. Syntax highlighting fails in some renderers, and automated code-sample extraction scripts break unpredictably.
A custom doc linting rule enforces an approved list of language identifiers for fenced code blocks, rejecting ambiguous or non-standard tags and requiring every code block to carry an explicit language annotation.
["Define an allowed-languages list in the linting configuration (e.g., only 'javascript', 'python', 'bash', 'json', 'yaml' are permitted) and write a custom Vale rule or markdownlint custom rule that checks the opening fence tag.", 'Run the linter as part of the docs build in CI, surfacing each violation with the file name, line number, and the offending tag alongside a suggestion for the correct identifier.', 'Add a one-time bulk-fix script using sed or a Python AST parser to migrate existing non-standard tags across the entire repository before enabling the rule as a hard failure.', 'Document the approved language tag list in the contributing guide with a table mapping common aliases to their canonical form.']
Syntax highlighting works correctly across all renderers for 100% of code blocks; automated code-sample extraction pipelines run without errors after the tag standardization is enforced.
Finally, consider SRE runbooks that are supposed to include four mandatory sections ('Impact', 'Detection', 'Mitigation Steps', and 'Escalation Path') but whose authors frequently omit one or more under deadline pressure. During incidents, on-call engineers waste critical minutes discovering that the escalation path is missing.
Doc linting checks every runbook file against a structural schema that asserts the required H2 sections exist in the correct order, blocking the runbook from being merged into the on-call rotation repository until all mandatory sections are present.
["Define a Vale or custom Python linting rule that parses the markdown AST and verifies the presence and order of the required H2 headings: 'Impact', 'Detection', 'Mitigation Steps', and 'Escalation Path'.", 'Wire the linter into the GitHub Actions workflow triggered on pull requests to the runbooks repository, posting inline PR review comments that name exactly which required section is absent.', 'Create a runbook template file (TEMPLATE.md) in the repository root that pre-populates all required sections, reducing the chance of omission from the start.', 'Set the linting check as a required status check in branch protection rules so runbooks cannot be merged without passing structural validation.']
All runbooks in the on-call rotation contain the four required sections; during a major incident post-mortem, the team confirms that complete runbooks reduced mean-time-to-mitigate by an estimated 8 minutes per incident.
When adopting doc linting on an existing documentation repository, resist enabling every available rule at once: doing so generates thousands of violations, overwhelming authors and often causing teams to disable linting entirely out of frustration. Begin with three to five high-impact rules, such as heading-level sequencing, broken internal links, and missing code block language tags, and add new rules only after the team has resolved existing violations. This incremental approach builds trust in the tooling without creating an unmanageable backlog.
Not all documentation quality issues carry equal severity — a missing required section in a runbook is a hard blocker, while a passive-voice sentence in a tutorial is an advisory suggestion. Configuring your linting pipeline to distinguish between errors that block merges and warnings that are surfaced as informational comments prevents overly strict gates from slowing down legitimate documentation updates. This tiered approach keeps the CI signal trustworthy and actionable.
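With Vale, whose rules each carry a suggestion, warning, or error level, that tiering can be expressed as two CI steps; the step names and `docs/` path are illustrative:

```yaml
# CI steps sketch: surface warnings as information, gate only on errors
- name: Docs lint (advisory)
  run: vale --minAlertLevel=warning docs/ || true   # reports, never blocks
- name: Docs lint (blocking)
  run: vale --minAlertLevel=error docs/             # fails the PR on errors only
```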
Storing the linting ruleset file (.vale.ini, .markdownlint.json, or equivalent) in the same repository as the documentation ensures that rule changes are reviewed through the same pull request process as content changes. This prevents a single person from silently loosening rules to suppress violations, and it gives the team a clear audit trail of when and why specific rules were added or modified. It also ensures every contributor automatically gets the correct ruleset when they clone the repository.
A linting error that says only 'heading violation on line 42' forces the author to look up the rule definition before they can fix the problem, adding unnecessary friction. Well-configured doc linters include a short explanation of the rule and a concrete example of the correct form directly in the error output. Investing time in writing clear rule descriptions — or choosing tools like Vale that support custom message templates — dramatically reduces the back-and-forth between authors and reviewers.
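In Vale, for example, each rule carries its own `message` (and optionally a `link` to fuller documentation), so the explanation travels with the error. A sketch, with a hypothetical style name, pattern, and style-guide URL:

```yaml
# styles/DocStyle/NoBoldHeading.yml: an error message that explains the fix
extends: existence
message: "Bold text used as a section title. Use a real heading instead, e.g. '## Mitigation steps'."
link: https://wiki.example.com/doc-style-guide#headings  # hypothetical quick reference
level: error
scope: paragraph
tokens:
  - '(?m)^\*\*[^*\n]+\*\*$'
```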
Waiting for a CI pipeline to surface linting errors after a pull request is opened introduces a slow feedback loop — the author has already context-switched away from the writing task by the time the report arrives. Providing a simple local linting command (e.g., 'make lint-docs' or a pre-commit hook) lets authors catch and fix violations within seconds of writing them, when the context is still fresh. Faster feedback loops produce higher-quality first drafts and reduce the total number of CI pipeline runs consumed by fixup commits.
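A sketch of the pre-commit half using lefthook (one of the hook runners named earlier); the glob and commands should mirror whatever the CI pipeline runs:

```yaml
# lefthook.yml: run the same doc linters locally on staged files at commit time
pre-commit:
  commands:
    docs-lint:
      glob: "docs/**/*.md"
      run: vale {staged_files} && markdownlint {staged_files}
```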