Helm Charts

Master this essential documentation concept

Quick Definition

Pre-configured packages of Kubernetes resources that define, install, and manage complex applications on a Kubernetes cluster, commonly used for enterprise software deployment.

How Helm Charts Work

```mermaid
graph TD
    A[Developer] -->|helm package| B[Helm Chart .tgz]
    B -->|helm push| C[Chart Repository ArtifactHub / Harbor]
    C -->|helm repo add| D[Local Helm Client]
    D -->|helm install| E[Kubernetes API Server]
    E --> F[Deployment]
    E --> G[Service]
    E --> H[ConfigMap]
    E --> I[Ingress]
    F & G & H & I --> J[Running Application on Cluster]
    K[values.yaml] -->|overrides| D
    style A fill:#4A90D9,color:#fff
    style C fill:#F5A623,color:#fff
    style J fill:#7ED321,color:#fff
    style K fill:#9B59B6,color:#fff
```

Understanding Helm Charts

A Helm chart is a directory of Go-templated Kubernetes manifests plus chart metadata (Chart.yaml) and default configuration (values.yaml). At install time, Helm renders the templates with any user-supplied value overrides and submits the resulting manifests to the Kubernetes API as a named, versioned release that can be upgraded or rolled back as a unit. This turns a multi-resource application — Deployments, Services, ConfigMaps, Ingresses — into a single installable, distributable package.
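A chart's on-disk layout is standardized; the file names under templates/ below are illustrative, but the top-level structure is what Helm expects:

```
mychart/
  Chart.yaml          # chart name, version, appVersion, dependencies
  values.yaml         # default configuration values
  values.schema.json  # optional JSON Schema for validating supplied values
  charts/             # packaged subchart dependencies
  templates/          # Go-templated Kubernetes manifests
    deployment.yaml
    service.yaml
    _helpers.tpl      # named template definitions
    NOTES.txt         # rendered and printed after install/upgrade
```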

Key Features

  • Go-template-driven manifest generation, parameterized through values.yaml
  • Versioned releases with one-command upgrade and rollback (helm rollback)
  • Dependency management for composing applications from subcharts
  • Distributable packages (.tgz or OCI artifacts) published to chart repositories

Benefits for Documentation Teams

  • values.yaml comments and README.md ship inside the package, so configuration docs travel with the chart
  • NOTES.txt renders tailored post-install instructions automatically on every release
  • helm history provides a built-in, queryable record of what was deployed and when
  • A single values reference replaces hand-maintained per-environment manifest documentation

Keeping Helm Chart Knowledge Out of Video Silos

When your team onboards engineers to Kubernetes deployments, walkthroughs of Helm Charts often end up recorded as setup tutorials, architecture reviews, or sprint demos. Someone shares their screen, walks through the values.yaml overrides, explains the release naming conventions, and demonstrates how your organization customizes upstream charts — and then that knowledge lives exclusively in a video file that most engineers will never find when they actually need it.

The problem surfaces at the worst moments: a developer troubleshooting a failed Helm Chart deployment at 11pm, or a new team member trying to understand why certain default values were overridden for your production environment. Scrubbing through a 45-minute onboarding recording to find a two-minute explanation is not a workflow that scales.

Converting those recordings into structured documentation changes how your team interacts with that knowledge. Helm Chart configuration decisions, upgrade procedures, and environment-specific overrides become searchable, linkable, and version-trackable — the same qualities your charts themselves are valued for. A concrete example: a recorded architecture review explaining your chart dependency strategy becomes a reference doc engineers can search by chart name or flag, rather than a video they have to know exists.

If your team regularly captures Helm Chart knowledge on video, explore how converting those recordings into searchable documentation can close the gap between what your team knows and what your team can find.

Real-World Documentation Use Cases

Standardizing Multi-Environment Microservice Deployments Across Dev, Staging, and Production

Problem

Platform engineering teams manually maintain separate YAML manifests for each environment, leading to configuration drift where staging differs from production, causing bugs that only surface after release.

Solution

Helm Charts centralize all Kubernetes resource templates in a single chart, using values files (values-dev.yaml, values-staging.yaml, values-prod.yaml) to override environment-specific settings like replica counts, resource limits, and image tags without duplicating manifests.

Implementation

1. Create a base Helm chart with templated Deployment, Service, and Ingress manifests, using Go templating for dynamic values like {{ .Values.replicaCount }} and {{ .Values.image.tag }}.
2. Define environment-specific values files: values-prod.yaml sets replicaCount: 5 and resources.limits.memory: 2Gi, while values-dev.yaml sets replicaCount: 1.
3. Integrate helm upgrade --install myapp ./chart --values values-prod.yaml into the CI/CD pipeline (GitHub Actions or ArgoCD) so each environment deploys from the same chart version.
4. Run the helm-diff plugin before every upgrade to show a git-diff-style preview of what will change in the cluster, requiring team approval for production changes.
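A minimal sketch of the single-template, multiple-values pattern described above (the chart name myapp and the exact fields are illustrative):

```yaml
# templates/deployment.yaml (excerpt) -- one template shared by every environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: myapp
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          resources:
            limits:
              memory: {{ .Values.resources.limits.memory }}
```

With values-dev.yaml setting replicaCount: 1 and values-prod.yaml setting replicaCount: 5, both environments deploy from the same chart via helm upgrade --install myapp ./chart --values with the appropriate file.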

Expected Outcome

Configuration drift between environments is eliminated, rollback time drops from 45 minutes of manual YAML editing to under 2 minutes using helm rollback myapp 3, and audit trails are maintained via Helm release history.

Packaging and Distributing an Internal Platform SDK for 30+ Application Teams

Problem

A platform team maintains security policies, observability sidecars, and network policies that must be consistently applied across all microservices. Each app team re-implements these differently, creating security gaps and compliance failures.

Solution

Helm Charts with library charts allow the platform team to publish a company-internal base chart to a private Harbor registry. App teams use helm dependency on the library chart, inheriting standard PodDisruptionBudgets, NetworkPolicies, and Prometheus ServiceMonitor resources automatically.

Implementation

1. Create a library chart (type: library in Chart.yaml) containing named templates for security contexts, resource quotas, and Datadog sidecar injection that app teams can include via {{ include "platform-lib.securityContext" . }}.
2. Publish versioned releases (platform-lib-2.3.1.tgz) to the internal Harbor chart repository with semantic versioning and a CHANGELOG documenting breaking changes.
3. App teams add the library as a dependency in their Chart.yaml (name: platform-lib, version: "~2.3.0", repository: https://harbor.internal/chartrepo/platform) and run helm dependency update.
4. Enforce chart dependency compliance via a CI gate that runs helm lint and validates that platform-lib is listed as a dependency before any deployment PR is merged.
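A sketch of the library-chart wiring from these steps, reusing only the names from this section (file contents abridged):

```yaml
# platform-lib/Chart.yaml -- the shared library chart
apiVersion: v2
name: platform-lib
version: 2.3.1
type: library    # library charts export templates only; they install nothing themselves
---
# myservice/Chart.yaml -- an app chart consuming the library (name illustrative)
apiVersion: v2
name: myservice
version: 0.1.0
dependencies:
  - name: platform-lib
    version: "~2.3.0"
    repository: "https://harbor.internal/chartrepo/platform"
```

After helm dependency update, the app chart's templates pull in the shared definitions with {{ include "platform-lib.securityContext" . }}.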

Expected Outcome

100% of microservices inherit the approved security context and observability configuration. A security patch to the library chart propagates to all 30+ services within one sprint cycle instead of requiring 30 separate PRs.

Deploying Complex Third-Party Software Stacks Like Kafka and Elasticsearch in Air-Gapped Environments

Problem

Infrastructure teams in regulated industries (finance, healthcare) must deploy complex stateful systems like Apache Kafka with ZooKeeper or Elasticsearch clusters in air-gapped data centers with no internet access, making it impossible to pull from public chart repositories.

Solution

Helm Charts from Bitnami or Elastic can be pulled, mirrored, and re-published to an internal Nexus or Harbor registry. Container images referenced in the chart's values.yaml are retagged and pushed to an internal registry, with values overriding all image references to point inward.

Implementation

1. Pull the official Bitnami Kafka chart with helm pull bitnami/kafka --version 26.4.3 --untar, then inspect values.yaml to identify all image references (kafka.image.registry, zookeeper.image.registry).
2. Mirror all referenced Docker images to the internal registry using crane copy or skopeo, then push the chart package to the internal Harbor instance: helm push kafka-26.4.3.tgz oci://harbor.internal/chartrepo/bitnami.
3. Create an override values file (values-airgap.yaml) that sets global.imageRegistry: harbor.internal and disables any init containers that pull from the internet.
4. Document the mirroring runbook and automate it with a weekly CI job that checks for new upstream chart versions and triggers the mirror pipeline for security-patched releases.
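A sketch of the values-airgap.yaml override from step 3. Bitnami charts honor a global.imageRegistry value that cascades to subcharts; the exact per-image keys vary by chart version, and harbor.internal is the example hostname used throughout this section:

```yaml
# values-airgap.yaml -- point every image reference at the internal mirror
global:
  imageRegistry: harbor.internal   # cascades to the chart and its subcharts
# explicit per-component overrides as a belt-and-braces measure
kafka:
  image:
    registry: harbor.internal
zookeeper:
  image:
    registry: harbor.internal
```

The install then runs entirely against internal infrastructure, e.g. helm install kafka oci://harbor.internal/chartrepo/bitnami/kafka --version 26.4.3 --values values-airgap.yaml.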

Expected Outcome

Kafka clusters deploy reliably in air-gapped environments with zero manual YAML editing. The mirroring pipeline reduces the time to adopt upstream security patches from 3 weeks of manual work to 4 hours of automated pipeline execution.

Implementing GitOps-Driven Helm Releases with ArgoCD for Audit Compliance

Problem

Financial services teams face SOC 2 and PCI-DSS audit requirements mandating that every production change is traceable to an approved pull request, but imperative helm upgrade commands run by engineers leave no Git-based audit trail.

Solution

ArgoCD Application custom resources declaratively define the desired Helm chart version and values in a Git repository. Every production change requires a PR, and ArgoCD continuously reconciles the cluster state to match Git, providing a complete, immutable audit log.

Implementation

1. Create a GitOps repository with an ArgoCD Application manifest that pins a specific chart version: spec.source.chart: myapp, spec.source.targetRevision: 1.4.2, spec.source.helm.valueFiles: [values-prod.yaml].
2. Configure branch protection on the GitOps repo requiring two approvals and a passing helm lint CI check before merging any change to the main branch that ArgoCD watches.
3. Enable ArgoCD's sync history and resource tracking so every deployment records the Git commit SHA, the deploying user, and the diff of changed Kubernetes resources, exportable to Splunk or Datadog for audit ingestion.
4. Set the ArgoCD sync policy to automated with selfHeal: true so manual kubectl edits to the cluster are automatically reverted, ensuring the cluster always matches the Git-approved state.
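Assembled into a single manifest, the Application described in step 1 might look like this (repo URL, namespaces, and chart name are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.internal.example.com   # hypothetical chart repo
    chart: myapp
    targetRevision: 1.4.2        # pinned chart version; changed only via PR
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true
      selfHeal: true             # revert manual kubectl edits automatically
```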

Expected Outcome

Audit evidence for SOC 2 Type II is generated automatically from ArgoCD's sync history. The team passes their annual audit without manual evidence collection, and unauthorized cluster changes are detected and reverted within 3 minutes.

Best Practices

Version Helm Charts with Semantic Versioning Tied to Application Changes

Every change to a Helm chart—whether a new Kubernetes resource, a template fix, or a default value update—should increment the chart version in Chart.yaml following SemVer (MAJOR.MINOR.PATCH). This allows teams to pin to stable versions in production while adopting new features in lower environments, and helm history shows exactly which chart version produced each release.

✓ Do: Increment appVersion when the application Docker image changes, increment version (chart version) when the chart templates or defaults change, and document both in a CHANGELOG.md inside the chart directory.
✗ Don't: Don't reuse the same chart version after making changes and re-publishing to the repository. Overwriting a published chart version breaks reproducibility and makes it impossible to roll back to a known-good state.
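The two version fields live side by side in Chart.yaml (numbers illustrative):

```yaml
# Chart.yaml
apiVersion: v2
name: myapp
version: 1.4.2        # chart version: bump whenever templates or defaults change
appVersion: "2.7.0"   # application version: bump when the shipped image changes
```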

Validate All Chart Templates with helm lint and helm template Before Every Commit

Running helm lint catches structural errors in Chart.yaml and template syntax issues, while helm template --debug renders the full Kubernetes manifests locally so developers can inspect exactly what will be applied to the cluster. Integrating both commands into a pre-commit hook or CI pipeline gate prevents broken charts from reaching the chart repository.

✓ Do: Add helm lint ./chart && helm template myapp ./chart --values values-prod.yaml | kubeval to your CI pipeline to catch both Helm syntax errors and invalid Kubernetes resource schemas before merging.
✗ Don't: Don't rely solely on helm install to discover template errors. Deploying an untested chart to a staging cluster wastes time and can cause partial deployments that leave the cluster in a degraded state.
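A sketch of that CI gate as a GitHub Actions step (the paths and the kubeval invocation mirror the Do above and are illustrative):

```yaml
# .github/workflows/chart-ci.yaml (excerpt)
- name: Lint and render chart
  run: |
    helm lint ./chart
    helm template myapp ./chart --values values-prod.yaml | kubeval --strict
```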

Use a Dedicated values.yaml Schema (values.schema.json) to Enforce Input Validation

Helm supports JSON Schema validation of values.yaml, allowing chart authors to declare required fields, valid types, and allowed enum values. When a user provides an invalid value (e.g., a string where an integer is expected for replicaCount), Helm rejects the install with a clear error message before any Kubernetes resources are created.

✓ Do: Define a values.schema.json file in the chart root that marks critical fields like image.repository and service.port as required with correct types, and use enum constraints for fields like service.type to restrict to ClusterIP, NodePort, or LoadBalancer.
✗ Don't: Don't leave values.yaml undocumented and unvalidated. Without schema enforcement, users pass invalid configurations that only fail at runtime inside the pod, making debugging significantly harder and slower.
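A minimal values.schema.json expressing the constraints above (the exact field set is illustrative):

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["image", "service"],
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "image": {
      "type": "object",
      "required": ["repository"],
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      }
    },
    "service": {
      "type": "object",
      "required": ["port"],
      "properties": {
        "port": { "type": "integer" },
        "type": { "enum": ["ClusterIP", "NodePort", "LoadBalancer"] }
      }
    }
  }
}
```

With this file in the chart root, helm install fails fast with a schema error instead of creating resources from invalid input.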

Store Sensitive Configuration in Kubernetes Secrets Referenced by Values, Not Embedded in Charts

Helm charts should never contain raw secret values in values.yaml or templates, as chart packages stored in a chart repository are not encrypted. Instead, charts should reference pre-existing Kubernetes Secrets by name (e.g., secretKeyRef.name: {{ .Values.database.secretName }}) and use external secret management tools like External Secrets Operator or Vault Agent to populate those secrets.

✓ Do: Design chart templates to accept a secretName value that references an externally managed Kubernetes Secret, and document in README.md that the secret must be created before helm install using a tool like External Secrets Operator with an AWS Secrets Manager backend.
✗ Don't: Don't embed secret values in chart templates or pass --set database.password=mysecretpassword on the command line in CI pipelines. These values end up stored unencrypted in the Helm release history (kubectl get secret sh.helm.release.v1.myapp.v1 -o yaml) and are accessible to anyone with cluster read access.
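A sketch of the template pattern described above, where the chart consumes a pre-created Secret by name and the secret value itself never enters the chart:

```yaml
# templates/deployment.yaml (excerpt)
env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Values.database.secretName }}   # Secret created externally, e.g. by External Secrets Operator
        key: password                             # key name is illustrative
```

values.yaml then carries only the reference, e.g. database.secretName: myapp-db-credentials (a hypothetical name), which the README documents as a prerequisite.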

Write Comprehensive README.md and NOTES.txt for Every Chart

A chart's README.md is the primary documentation surface for users discovering it on ArtifactHub or an internal registry, and should include a prerequisites section, a complete values reference table, and example install commands for common scenarios. The templates/NOTES.txt file is rendered after every helm install or upgrade and should provide actionable next steps like how to get the application URL or verify the deployment is healthy.

✓ Do: Include a values table in README.md generated by helm-docs (a documentation generator that reads values.yaml comments) and write NOTES.txt to output the LoadBalancer IP retrieval command: kubectl get svc {{ include "myapp.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}'.
✗ Don't: Don't leave NOTES.txt as the default boilerplate or omit README.md entirely. Users who install undocumented charts from an internal registry frequently misconfigure values or don't know how to verify a successful deployment, leading to repeated support requests to the platform team.
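A short templates/NOTES.txt along these lines gives users verification steps immediately after install (the commands assume a LoadBalancer Service and the myapp naming helper used in this section):

```
1. Get the application URL:
   kubectl get svc {{ include "myapp.fullname" . }} \
     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

2. Verify the rollout:
   kubectl rollout status deployment/{{ include "myapp.fullname" . }}
```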

How Docsie Helps with Helm Charts

Build Better Documentation with Docsie

Join thousands of teams creating outstanding documentation

Start Free Trial