Helm 4 & Kubernetes Package Management: Best Practices for Production 2026




Helm remains the de facto standard for Kubernetes package management, and the release of Helm 4 has brought significant improvements in security, OCI support, and developer experience. This guide covers everything you need to know about modern Helm practices — from chart architecture to GitOps integration and production hardening.

[Image: container ship representing Kubernetes deployments. Photo by Ian Taylor on Unsplash]


What’s New in Helm 4

Breaking Changes from Helm 3

Helm 4 introduces several breaking changes that require migration:

Feature              | Helm 3             | Helm 4
---------------------|--------------------|---------------------------
Tiller               | Removed            | N/A
Release storage      | ConfigMaps/Secrets | Secrets only (more secure)
Push to OCI          | Plugin required    | Built-in
Templating           | Sprig only         | Lua + Sprig
Validation           | Schema optional    | Schema recommended
Hook deletion policy | Manual             | Automatic with TTL

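Before upgrading, it helps to confirm that no releases are still stored in ConfigMaps and that the client binary is new enough. A minimal pre-flight sketch; the `owner=helm` label selector and the version parsing are assumptions, not official migration tooling:

```shell
#!/bin/sh
# Pre-flight sketch before moving to Helm 4 (assumptions noted above).

# Legacy release ConfigMaps carry an "owner=helm" label, so any hits here
# would need migrating to the Secrets driver first:
#   kubectl get configmaps --all-namespaces -l owner=helm

# Extract the major version from a "vX.Y.Z" string, as printed by
# `helm version --short`.
helm_major() {
  v="${1#v}"                 # drop the leading "v"
  printf '%s\n' "${v%%.*}"   # keep everything before the first dot
}

current="$(helm version --short 2>/dev/null || echo v0.0.0)"
if [ "$(helm_major "$current")" -lt 4 ]; then
  echo "Helm client $current is older than 4.x" >&2
fi
```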
New in Helm 4: Key Features

1. Native OCI Support (First-Class)

# Log in to the OCI registry first
helm registry login registry.example.com

# Helm 4: Push chart to OCI registry
helm push ./mychart-0.1.0.tgz oci://registry.example.com/charts

# Pull from OCI
helm pull oci://registry.example.com/charts/mychart --version 0.1.0

# Install directly from OCI
helm install myapp oci://registry.example.com/charts/mychart \
  --version 0.1.0 \
  --namespace production

2. Improved Values Merging

# Helm 4: New --values-from-secret flag
helm install myapp ./mychart \
  --values values.yaml \
  --values-from-secret prod-secrets  # Reads from K8s Secret

3. Post-Renderer Templates

# Apply Kustomize patches after Helm rendering
helm install myapp ./mychart \
  --post-renderer kustomize \
  --post-renderer-args ./kustomize-overlay
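Helm pipes the fully rendered manifests to the post-renderer's stdin and reads the patched result from stdout, so in practice `kustomize` is invoked through a small wrapper script that writes stdin to a file the overlay references. A hedged sketch of such an overlay (the label, patch target, and paths are illustrative):

```yaml
# kustomize-overlay/kustomization.yaml (hypothetical overlay)
resources:
  - all.yaml            # the manifests Helm rendered, captured by the wrapper
patches:
  - target:
      kind: Deployment
    patch: |-
      - op: add
        path: /metadata/labels/team
        value: platform
```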

Chart Structure Best Practices

Modern Chart Layout

mychart/
├── Chart.yaml           # Chart metadata and dependencies
├── Chart.lock           # Locked dependency versions
├── values.yaml          # Default values
├── values.schema.json   # JSON Schema validation (strongly recommended)
├── .helmignore          # Files to ignore when packaging
├── templates/
│   ├── _helpers.tpl     # Template helpers
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── hpa.yaml
│   ├── pdb.yaml         # PodDisruptionBudget
│   ├── serviceaccount.yaml
│   ├── NOTES.txt        # Post-install notes
│   └── tests/
│       └── test-connection.yaml
├── crds/                # Custom Resource Definitions
└── charts/              # Dependency charts (bundled)

Chart.yaml: Full Example

apiVersion: v2
name: myapp
description: A production-ready web application
type: application
version: 1.2.0        # Chart version
appVersion: "2.5.1"   # App version it deploys

keywords:
  - web
  - api
  - microservice

maintainers:
  - name: Platform Team
    email: platform@mycompany.com
    url: https://platform.mycompany.com

dependencies:
  - name: postgresql
    version: "15.x.x"
    repository: oci://registry-1.docker.io/bitnamicharts
    condition: postgresql.enabled
    
  - name: redis
    version: "19.x.x"
    repository: oci://registry-1.docker.io/bitnamicharts
    condition: redis.enabled

annotations:
  # Artifact Hub annotations
  artifacthub.io/license: Apache-2.0
  artifacthub.io/links: |
    - name: Documentation
      url: https://docs.mycompany.com
  artifacthub.io/changes: |
    - kind: added
      description: HPA support for all deployments
    - kind: fixed
      description: Resource limits now properly applied

Template Best Practices

The _helpers.tpl Pattern

{{/* templates/_helpers.tpl */}}

{{- define "myapp.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "myapp.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/* Common labels */}}
{{- define "myapp.labels" -}}
helm.sh/chart: {{ include "myapp.chart" . }}
{{ include "myapp.selectorLabels" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/part-of: myapp
environment: {{ .Values.environment }}
{{- end }}

{{/* Selector labels */}}
{{- define "myapp.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

Production Deployment Template

# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
  {{- with .Values.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: {{ .Values.rollingUpdate.maxSurge | default "25%" }}
      maxUnavailable: {{ .Values.rollingUpdate.maxUnavailable | default 0 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
      annotations:
        # Force pod restart when config changes
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
    spec:
      serviceAccountName: {{ .Values.serviceAccount.name | default (include "myapp.fullname" .) }}

      # Security context (required for production)
      securityContext:
        runAsNonRoot: true
        runAsUser: {{ .Values.podSecurityContext.runAsUser | default 1000 }}
        runAsGroup: {{ .Values.podSecurityContext.runAsGroup | default 1000 }}
        fsGroup: {{ .Values.podSecurityContext.fsGroup | default 1000 }}
        seccompProfile:
          type: RuntimeDefault

      # Topology spread for HA
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              {{- include "myapp.selectorLabels" . | nindent 14 }}
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              {{- include "myapp.selectorLabels" . | nindent 14 }}

      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}

          # Container security context
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL

          ports:
            - name: http
              containerPort: {{ .Values.containerPort | default 8080 }}
              protocol: TCP
            - name: metrics
              containerPort: 9090
              protocol: TCP

          # Environment from ConfigMap and Secrets
          envFrom:
            - configMapRef:
                name: {{ include "myapp.fullname" . }}-config
            - secretRef:
                name: {{ include "myapp.fullname" . }}-secrets

          # Health checks (critical for rolling updates)
          livenessProbe:
            httpGet:
              path: /health/live
              port: http
            initialDelaySeconds: 15
            periodSeconds: 10
            failureThreshold: 3

          readinessProbe:
            httpGet:
              path: /health/ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 3

          startupProbe:
            httpGet:
              path: /health/live
              port: http
            failureThreshold: 30
            periodSeconds: 10

          # Resource management
          resources:
            {{- toYaml .Values.resources | nindent 12 }}

          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: cache
              mountPath: /app/.cache

      volumes:
        - name: tmp
          emptyDir: {}
        - name: cache
          emptyDir: {}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

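For reference, a values.yaml fragment matching the template above might look like this; the key names are illustrative and must match whatever your templates actually reference:

```yaml
# values.yaml (illustrative defaults)
replicaCount: 3

image:
  registry: docker.io
  repository: mycompany/myapp
  tag: ""              # empty means "fall back to the chart's appVersion"
  pullPolicy: IfNotPresent

resources:
  requests:
    cpu: "100m"
    memory: "256Mi"
  limits:
    memory: "512Mi"

nodeSelector: {}
affinity: {}
tolerations: []
```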
Values Schema Validation

JSON Schema validation is essential for catching configuration errors early:

// values.schema.json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["image", "resources"],
  "properties": {
    "replicaCount": {
      "type": "integer",
      "minimum": 1,
      "maximum": 100,
      "default": 3
    },
    "image": {
      "type": "object",
      "required": ["repository"],
      "properties": {
        "registry": {
          "type": "string",
          "default": "docker.io"
        },
        "repository": {
          "type": "string",
          "description": "Container image repository"
        },
        "tag": {
          "type": "string",
          "description": "Image tag (defaults to chart appVersion)"
        },
        "pullPolicy": {
          "type": "string",
          "enum": ["Always", "IfNotPresent", "Never"],
          "default": "IfNotPresent"
        }
      }
    },
    "resources": {
      "type": "object",
      "required": ["limits", "requests"],
      "properties": {
        "limits": {
          "type": "object",
          "required": ["memory"],
          "properties": {
            "memory": {"type": "string"},
            "cpu": {"type": "string"}
          }
        },
        "requests": {
          "type": "object",
          "required": ["memory", "cpu"],
          "properties": {
            "memory": {"type": "string"},
            "cpu": {"type": "string"}
          }
        }
      }
    },
    "autoscaling": {
      "type": "object",
      "properties": {
        "enabled": {"type": "boolean", "default": false},
        "minReplicas": {"type": "integer", "minimum": 1},
        "maxReplicas": {"type": "integer", "minimum": 1},
        "targetCPUUtilizationPercentage": {
          "type": "integer",
          "minimum": 1,
          "maximum": 100
        }
      }
    }
  }
}

GitOps Integration with ArgoCD

Helm 4 + ArgoCD is the gold standard for GitOps deployments:

# argocd/apps/myapp-production.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-production
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: production
  
  source:
    repoURL: oci://registry.mycompany.com/charts
    chart: myapp
    targetRevision: "1.2.0"
    
    helm:
      releaseName: myapp
      
      # Inline values; keep real secrets out and inject them via
      # External Secrets or the ArgoCD Vault Plugin instead
      valuesObject:
        replicaCount: 3
        image:
          repository: mycompany/myapp
          tag: "2.5.1"
        resources:
          limits:
            memory: "512Mi"
          requests:
            cpu: "100m"
            memory: "256Mi"
        autoscaling:
          enabled: true
          minReplicas: 3
          maxReplicas: 20
          targetCPUUtilizationPercentage: 70
  
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  
  syncPolicy:
    automated:
      prune: true          # Remove resources not in chart
      selfHeal: true       # Revert manual changes
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
      - RespectIgnoreDifferences=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
  
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas  # Managed by HPA

Progressive Delivery with Argo Rollouts

# Instead of a Deployment, use a Rollout for canary deployments
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: {{ include "myapp.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    # Same as the Deployment pod spec...

  strategy:
    canary:
      canaryService: {{ include "myapp.fullname" . }}-canary
      stableService: {{ include "myapp.fullname" . }}-stable

      trafficRouting:
        nginx:
          stableIngress: {{ include "myapp.fullname" . }}

      steps:
        - setWeight: 5     # 5% canary traffic
        - pause: {}        # Manual approval gate
        - setWeight: 20
        - pause: {duration: 10m}  # Auto-promote after 10 min
        - setWeight: 50
        - pause: {duration: 5m}
        - setWeight: 100

      analysis:
        templates:
          - templateName: error-rate
        startingStep: 2
        args:
          - name: service-name
            value: {{ include "myapp.fullname" . }}-canary

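The `error-rate` analysis template referenced above has to exist as its own resource. A hedged sketch using the Prometheus provider; the metric name, query, Prometheus address, and the 5% success threshold are assumptions about your monitoring setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate
spec:
  args:
    - name: service-name
  metrics:
    - name: error-rate
      interval: 1m
      failureLimit: 3
      successCondition: result[0] < 0.05
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090
          query: |
            sum(rate(http_requests_total{service="{{args.service-name}}",status=~"5.."}[5m]))
            /
            sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))
```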
Helm Testing

Always include Helm tests:

# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "myapp.fullname" . }}-test-connection"
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  restartPolicy: Never
  containers:
    - name: connection-test
      image: busybox:1.36
      command:
        - sh
        - -c
        - |
          echo "Testing connection to {{ include "myapp.fullname" . }}..."
          if wget -qO- http://{{ include "myapp.fullname" . }}:{{ .Values.service.port }}/health/live; then
            echo "Connection test PASSED"
          else
            echo "Connection test FAILED"
            exit 1
          fi

# Run tests after deployment
helm test myapp --namespace production --logs

Useful Helm Commands for Production

# Check what would change before upgrading (requires the helm-diff plugin)
helm diff upgrade myapp ./mychart \
  --namespace production \
  --values values-prod.yaml

# Upgrade with automatic rollback on failure:
# --atomic rolls back the release if the upgrade fails;
# --wait blocks until all resources are ready (up to --timeout)
helm upgrade myapp ./mychart \
  --namespace production \
  --values values-prod.yaml \
  --atomic \
  --timeout 5m \
  --wait

# View deployment history
helm history myapp --namespace production

# Rollback to previous version
helm rollback myapp 0 --namespace production  # 0 = previous

# Debug template rendering
helm template myapp ./mychart \
  --values values-prod.yaml \
  --debug 2>&1 | head -100

# Lint chart for errors
helm lint ./mychart --strict --values values-prod.yaml

# Package and push to OCI registry
helm package ./mychart
helm push mychart-1.2.0.tgz oci://registry.mycompany.com/charts

Common Pitfalls and Solutions

1. Secret Management

❌ Never store secrets in values files committed to Git

✅ Use one of:

# Option A: External Secrets Operator
# Option B: ArgoCD Vault Plugin  
# Option C: Sealed Secrets
# Option D: --set-string from CI/CD environment variables
helm upgrade myapp ./mychart \
  --set "env.DATABASE_PASSWORD=$DATABASE_PASSWORD"

2. Resource Limits Are Required

# Always set both requests AND limits
resources:
  requests:
    cpu: "100m"
    memory: "256Mi"
  limits:
    memory: "512Mi"  # CPU limit is optional (throttling vs OOMKilled)

3. Version Pinning for Dependencies

# Chart.lock (auto-generated, always commit it!)
dependencies:
- name: postgresql
  repository: oci://registry-1.docker.io/bitnamicharts
  version: 15.5.17   # Pinned exact version
digest: sha256:abc123...

Conclusion

Helm 4 brings production-ready package management to Kubernetes with improved security, better OCI registry support, and enhanced validation. Combined with GitOps tooling like ArgoCD and progressive delivery tools like Argo Rollouts, you can achieve truly reliable, auditable Kubernetes deployments.

Key takeaways:

  • Always use values.schema.json for early error detection
  • OCI registries are now the recommended distribution method
  • GitOps with ArgoCD provides full auditability and self-healing
  • Progressive delivery with canary deployments reduces production risk
  • Helm tests are your first line of defense for deployment validation

The Helm ecosystem has matured significantly — treat your Helm charts like production code: test them, version them, and review them carefully.


Tags: #Helm #Kubernetes #DevOps #CloudNative #GitOps #ArgoCD #PackageManagement #CNCF
