Kubernetes has a reputation for being complex. Some of that reputation is deserved, but a lot of it comes from trying to learn everything at once instead of starting with what matters.
Here’s how we recommend teams get started.
Start with why
Before diving into YAML files, ask yourself: do you need Kubernetes? If you have a single application with predictable traffic, a managed container service like AWS ECS or Google Cloud Run might be simpler.
Kubernetes makes sense when you have:
- Multiple services that need to communicate
- Workloads that scale independently
- A team that deploys frequently
- Requirements for self-healing and automated rollouts
The basics you need to know
Kubernetes runs your containers across a cluster of machines. The key concepts:
- Pods are the smallest deployable unit: one or more containers that run together and share a network namespace
- Deployments manage replicas of your pods and handle rolling updates
- Services give your pods a stable network endpoint
- ConfigMaps and Secrets store configuration separately from your container images
That’s all you need to get started. Ignore Helm charts, service meshes, and custom operators for now.
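As a quick illustration of the last concept, here is a minimal ConfigMap sketch (the name and keys are made up for this example); a container can pull every key in as an environment variable with envFrom:

```yaml
# Hypothetical ConfigMap holding plain-text settings
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"
  GREETING: "hello"
---
# Referenced from a Deployment's container spec, so each key
# becomes an environment variable inside the container:
# containers:
# - name: my-app
#   envFrom:
#   - configMapRef:
#       name: my-app-config
```

Secrets work the same way, but their values are base64-encoded and intended for credentials rather than plain settings.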
Your first deployment
A basic deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        ports:
        - containerPort: 8080

Then expose it with a Service:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer

Apply both with kubectl apply -f and you have a running, load-balanced application.
Where to run it
For most teams, we recommend a managed Kubernetes service:
- EKS (AWS): best if you’re already in the AWS ecosystem
- GKE (Google Cloud): the smoothest managed experience
- AKS (Azure): solid if you’re a Microsoft shop
All three handle the control plane for you, which removes the hardest part of running Kubernetes.
Common pitfalls
- Over-engineering from day one. You don’t need a service mesh with three pods.
- Not setting resource limits. One runaway container can starve everything else.
- Ignoring health checks. Add liveness and readiness probes to every deployment.
- Skipping namespaces. Separate your environments from the start.
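The resource-limit and health-check pitfalls are cheap to avoid up front. A container spec with both might look like the sketch below; the paths, ports, and thresholds are illustrative, not prescriptive:

```yaml
# Fragment of a Deployment's container spec (values are illustrative)
containers:
- name: my-app
  image: my-app:latest
  ports:
  - containerPort: 8080
  resources:
    requests:            # what the scheduler reserves for the pod
      cpu: 100m
      memory: 128Mi
    limits:              # hard cap; exceeding the memory limit kills the container
      cpu: 500m
      memory: 256Mi
  livenessProbe:         # restart the container if this keeps failing
    httpGet:
      path: /healthz     # hypothetical health endpoint
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 15
  readinessProbe:        # keep the pod out of Service traffic until it passes
    httpGet:
      path: /readyz      # hypothetical readiness endpoint
      port: 8080
    periodSeconds: 5
```

The distinction matters: a failed liveness probe restarts the container, while a failed readiness probe only removes the pod from the Service's endpoints.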
Next steps
Once you’re comfortable with basic deployments, the natural progression is:
- Add health checks and resource limits
- Set up a CI/CD pipeline that deploys automatically
- Introduce Helm for templating once you have multiple similar services
- Consider monitoring with Prometheus and Grafana
You do not need to learn everything at once. Get a deployment running, add health checks, then build from there. Most of the teams we work with are productive within a few days of starting.