Key Concepts & Terminology in Kubernetes
- Cluster
- A cluster is a group of machines (nodes) managed by Kubernetes.
- It consists of:
- Control Plane — the brain
- Nodes (Workers) — machines that run applications
- Kubernetes ensures the cluster always matches the “desired state” you declare in YAML manifests.
- Node
- A node is a machine (VM or physical) that runs your containers.
- Each node runs:
- kubelet — talks to the control plane
- container runtime — containerd, CRI-O, etc.
- kube-proxy — manages networking rules
- Nodes are the worker machines powering workloads.
- Pod
- A pod is the smallest deployable unit in Kubernetes.
- It contains:
- one container
- or multiple tightly-coupled containers
- Pods share:
- network namespace
- IP address
- volumes
- Pods are ephemeral — they can be destroyed and recreated anytime.
- Deployment
- A Deployment manages stateless applications.
- It defines:
- how many pods should exist
- how pods update (rolling updates)
- how to roll back
- Kubernetes ensures the Deployment maintains the correct number of pods at all times.
- ReplicaSet
- A ReplicaSet ensures that the specified number of pod replicas are running.
- Deployments use ReplicaSets internally.
- You rarely interact with ReplicaSets directly.
- Service
- A Service exposes pods to other pods or to the outside world.
- Types of services:
- ClusterIP — internal access only
- NodePort — exposed on a port on each node
- LoadBalancer — cloud provider load balancer
- ExternalName — DNS alias
- Services provide stable networking even if pods constantly restart.
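- As a sketch, a minimal ClusterIP Service manifest looks like this (the names my-service and my-app are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP        # internal access only (the default type)
  selector:
    app: my-app          # routes traffic to Pods carrying this label
  ports:
    - port: 80           # port the Service listens on
      targetPort: 8080   # port the target container listens on
```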
- Ingress
- An Ingress exposes HTTP/HTTPS routes from outside the cluster to services inside the cluster.
- Example routing features:
- domain routing
- path-based routing
- TLS termination
- Requires an Ingress Controller (e.g., NGINX Ingress Controller).
- ConfigMap
ConfigMaps store non-sensitive configuration data such as:
- environment variables
- application configs
- files
- They help avoid hardcoding configuration inside images.
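- A minimal ConfigMap sketch (names and keys are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"     # a simple key-value pair
  app.properties: |          # an entire config file stored as one value
    log.level=info
    cache.enabled=true
```

- Pods can consume these values as environment variables or as mounted files.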
- Secret
Secrets store sensitive data such as:
- passwords
- API keys
- tokens
- Unlike ConfigMaps, Secret values are base64-encoded and can be protected with stricter RBAC rules and encryption at rest; note that base64 encoding by itself is not encryption.
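- A minimal Secret sketch (values are illustrative; the stringData field lets you write plain text, which Kubernetes base64-encodes on save):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:
  DB_PASSWORD: "s3cr3t"   # stored base64-encoded under .data.DB_PASSWORD
```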
- PersistentVolume (PV)
- A PV is storage that exists independently of pods.
- PVs represent actual storage resources:
- local disk
- NFS
- cloud disk (AWS EBS, GCP Persistent Disk)
- PersistentVolumeClaim (PVC)
PVCs are storage requests made by pods.
- A PVC asks:
- “Give me 10GB of storage”
- “Give me ReadWriteOnce access”
- Kubernetes binds PVCs to PVs automatically if requirements match.
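- A PVC matching the requests above might look like this sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi        # "give me 10GB of storage"
```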
- StatefulSet
- A StatefulSet manages stateful applications like:
- databases
- message queues
- storage systems
- Features:
- stable network identity
- stable persistent storage
- ordered pod startup/shutdown
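- A condensed StatefulSet sketch (image and sizes are illustrative; a real Postgres container would also need environment configuration):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless     # headless Service that gives Pods stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each replica gets its own PVC (data-db-0, data-db-1, ...)
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```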
- DaemonSet
- A DaemonSet ensures that one copy of a pod runs on every node (or on every node matching a selector).
- Used for:
- log collectors
- node monitoring (Prometheus Node Exporter)
- storage drivers
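- A DaemonSet sketch running the Prometheus Node Exporter on every node (the image tag is illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter:v1.8.1   # illustrative tag
          ports:
            - containerPort: 9100
```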
- Job
- A Job runs a task until it completes successfully.
- Example use cases:
- database migrations
- data processing tasks
- one-time scripts
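- A Job sketch for a one-time migration (the image and script are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 3              # retry the Pod up to 3 times on failure
  template:
    spec:
      restartPolicy: Never     # Job Pods must use Never or OnFailure
      containers:
        - name: migrate
          image: my-app:1.0    # hypothetical image containing the migration script
          command: ["./migrate.sh"]
```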
- CronJob
- A CronJob runs a Job on a time-based schedule.
- Example:
- clean temporary data every hour
- run backups daily
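- A CronJob sketch for a daily backup (the image and script are hypothetical):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"            # every day at 02:00, standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: my-backup:1.0   # hypothetical backup image
              command: ["./backup.sh"]
```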
- Labels and Selectors
Labels are key-value metadata attached to objects.
Selectors are queries used to match labels.
- Used for:
- service-to-pod matching
- deployment-to-pod matching
- organizational grouping
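- As a sketch, here is how labels on a Pod pair with a workload selector (values are illustrative); selectors can be equality-based (matchLabels) or set-based (matchExpressions):

```yaml
# labels attached to a Pod (in its metadata)
metadata:
  labels:
    app: web
    environment: staging
---
# a Deployment/ReplicaSet selector that matches the Pod above
selector:
  matchLabels:
    app: web                     # equality-based: app=web
  matchExpressions:
    - key: environment
      operator: In               # set-based: environment in (dev, staging)
      values: ["dev", "staging"]
```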
- Namespace
- A namespace logically divides cluster resources.
- Common uses:
- separate teams or projects
- isolate environments (dev, staging, prod)
- apply resource quotas
- Default namespaces:
- default
- kube-system
- kube-public
- kube-node-lease
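- A Namespace sketch, paired with an illustrative ResourceQuota scoped to it:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    pods: "20"               # at most 20 Pods in this namespace
    requests.cpu: "4"        # total CPU requests capped at 4 cores
    requests.memory: 8Gi     # total memory requests capped at 8 GiB
```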
- The Control Plane (Master Components)
- Core components controlling the cluster:
- kube-apiserver — central API interface
- etcd — persistent key-value store
- kube-scheduler — decides which node to place pods on
- kube-controller-manager — manages controllers (jobs, nodes, replicas)
- cloud-controller-manager — integrates cloud provider logic
- Users normally interact only with the API server, via kubectl.
How to Write a Kubernetes Deployment YAML (Step by Step)
- What Is a Deployment YAML?
- A Deployment in Kubernetes is a higher-level object that:
- manages Pods (your running containers)
- ensures a certain number of replicas (copies) are running
- handles rolling updates and rollbacks
- A Deployment is described in a YAML file (often called deployment.yaml or similar).
- You then apply this file with:
kubectl apply -f deployment.yaml
- Basic Skeleton of a Deployment
- Every Deployment YAML has 4 main top-level keys:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
# deployment details go here
- Explanation:
apiVersion: apps/v1 is the API group & version for Deployments (modern standard).
kind: Deployment tells Kubernetes this file defines a Deployment.
metadata: identifies information like name, labels, annotations.
spec: is the desired state (what this Deployment should do).
- Step 1: Choose a Name and Labels in metadata
metadata describes the Deployment object itself, not the Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deploy
labels:
app: nginx-app
- Explanation:
name: nginx-deploy is the name you use to reference this Deployment via kubectl.
labels: are key-value metadata, often used for grouping and selection.
- Common pattern:
- Deployment label:
app: nginx-app
- Pods inside will also have
app: nginx-app label, so Services can target them.
- Step 2: Define Replicas and the Pod Selector in spec
- The spec section of a Deployment has two critical parts:
replicas — how many Pods
selector — which Pods belong to this Deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx-app
- Explanation:
replicas: 3 tells the Deployment to maintain 3 identical Pods.
selector.matchLabels tells the Deployment to manage all Pods with the label app=nginx-app.
- Important rule: The labels defined here must match the labels inside the Pod template (next step).
- Step 3: Define the Pod Template in spec.template
spec.template describes the Pod that will be created.
spec:
replicas: 3
selector:
matchLabels:
app: nginx-app
template:
metadata:
labels:
app: nginx-app
spec:
containers:
- name: nginx-container
image: nginx:1.25
ports:
- containerPort: 80
- Breakdown of template:
template: is the Pod template definition.
template.metadata.labels sets the labels applied to each Pod.
template.spec.containers is the list of containers in the Pod.
- Container details:
name: nginx-container is the internal name for this container.
image: nginx:1.25 specifies which container image to run.
ports.containerPort: 80 declares which port the container listens on.
- Step 4: Add Environment Variables with env
- You often need to pass configuration to containers via environment variables.
spec:
template:
spec:
containers:
- name: nginx-container
image: nginx:1.25
env:
- name: NGINX_ENV
value: "production"
- name: API_URL
value: "https://api.example.com"
ports:
- containerPort: 80
- Explanation:
env is a list of name / value pairs.
- These become environment variables inside the container.
- For Secrets or ConfigMaps, you'd later replace value with valueFrom (advanced topic).
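- For illustration, the valueFrom pattern looks like this (app-config and db-secret are hypothetical objects):

```yaml
env:
  - name: APP_MODE
    valueFrom:
      configMapKeyRef:
        name: app-config     # hypothetical ConfigMap
        key: APP_MODE
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret      # hypothetical Secret
        key: DB_PASSWORD
```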
- Step 5: Set Resource Requests and Limits
- To control CPU and memory usage, use resources:
spec:
template:
spec:
containers:
- name: nginx-container
image: nginx:1.25
resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "500m"
memory: "256Mi"
- Explanation:
requests declares the minimum resources the container needs (used for scheduling decisions).
limits caps the maximum CPU/memory the container may use.
"100m" means 0.1 CPU core (100 millicores).
"128Mi" means 128 mebibytes of memory.
- Good practice: always define at least requests for production workloads.
- Step 6: Add Liveness and Readiness Probes
- Probes let Kubernetes know whether your app is:
- alive (liveness)
- ready to receive traffic (readiness)
spec:
template:
spec:
containers:
- name: nginx-container
image: nginx:1.25
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 10
periodSeconds: 15
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
periodSeconds: 10
- Explanation:
livenessProbe — if it fails repeatedly, Kubernetes restarts the container.
readinessProbe — if it fails, the Pod is removed from Service load-balancing.
initialDelaySeconds — wait this long before first check.
periodSeconds — how often to check.
- Step 7: Set the Update Strategy
- Deployments support rolling updates, controlled by strategy:
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
- Explanation:
type: RollingUpdate updates Pods gradually (default).
maxUnavailable: 1 means at most 1 Pod can be unavailable during update.
maxSurge: 1 means Kubernetes may create 1 extra Pod above desired replicas during update.
- Step 8: Add Annotations (Optional but Useful)
- Annotations store extra metadata (not used for selection):
metadata:
name: nginx-deploy
labels:
app: nginx-app
annotations:
app.kubernetes.io/owner: "Junzhe"
app.kubernetes.io/description: "Demo nginx deployment"
- Step 9: Full Example of a Well-Structured Deployment
- Here is a complete Deployment that combines all core concepts:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deploy
labels:
app: nginx-app
annotations:
app.kubernetes.io/owner: "Junzhe"
app.kubernetes.io/description: "Simple Nginx web server"
spec:
replicas: 3
selector:
matchLabels:
app: nginx-app
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: nginx-app
spec:
containers:
- name: nginx-container
image: nginx:1.25
ports:
- containerPort: 80
env:
- name: NGINX_ENV
value: "production"
- name: API_URL
value: "https://api.example.com"
resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "500m"
memory: "256Mi"
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 10
periodSeconds: 15
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
periodSeconds: 10
- What this Deployment guarantees:
- Exactly 3 Pods with Nginx are running (unless you scale).
- Pods have the label app=nginx-app so Services can route to them.
- Deployment updates Pods gradually with rolling update rules.
- Resources are constrained and requested properly.
- Health checks ensure traffic only goes to healthy Pods.
- Step 10: Using and Inspecting Your Deployment
kubectl apply -f nginx-deploy.yaml
- See the Deployment:
kubectl get deployments
- See the Pods it created:
kubectl get pods -l app=nginx-app
- Detailed info:
kubectl describe deployment nginx-deploy
- Update the image:
kubectl set image deployment/nginx-deploy \
nginx-container=nginx:1.26
- Roll back if something goes wrong:
kubectl rollout undo deployment nginx-deploy
Introduction to kubectl (Kubernetes Command-Line Tool)
- What Is kubectl?
kubectl is the official command-line interface used to interact with Kubernetes clusters.
- Under the hood:
kubectl communicates with the Kubernetes API Server
- authentication is based on
~/.kube/config
- commands produce YAML or JSON outputs
kubectl is the primary tool used by developers, DevOps engineers, and SREs.
- Where Does kubectl Connect?
kubectl communicates with the Kubernetes API Server via HTTP(S).
- The connection details are stored in:
~/.kube/config
- This config file defines:
clusters → the API server endpoints
users → authentication credentials
contexts → cluster + namespace + user combinations
- List contexts:
kubectl config get-contexts
- Switch context:
kubectl config use-context minikube
- The Basic Syntax of kubectl
- The general structure is:
kubectl [COMMAND] [TYPE] [NAME] [FLAGS]
- Example:
kubectl get pods -n kube-system
- Meaning:
get → list resources
pods → resource type
-n kube-system → in this namespace
- Command groups:
- View → get, describe, logs, events
- Modify → apply, edit, scale, patch
- Delete → delete
- Debug → exec, port-forward, attach
- Common Resource Types
- Some common Kubernetes objects you will interact with:
| Resource Type | Description |
| --- | --- |
| pods / po | Smallest deployable units (one or more containers) |
| deployments / deploy | Manages replicas and updates |
| services / svc | Networking entry points |
| nodes | Worker machines |
| configmaps / cm | Configuration as key-value data |
| secrets | Base64-encoded sensitive data |
| ingress | HTTP routing gateway |
| namespaces | Logical grouping of resources |
- Essential Commands Every Developer Uses
- 1. List resources:
kubectl get pods
kubectl get deployments
kubectl get services
- 2. Describe resources:
kubectl describe pod mypod
- 3. Get logs from a container:
kubectl logs mypod
- 4. Execute a command inside a container:
kubectl exec -it mypod -- bash
- 5. Apply a YAML file:
kubectl apply -f deployment.yaml
- 6. Delete resources:
kubectl delete pod mypod
- 7. Port-forward for local testing:
kubectl port-forward deployment/myapp 8080:80
- Using -o for Output Formatting
kubectl can output data in multiple formats:
kubectl get pods -o wide
kubectl get pod mypod -o yaml
kubectl get nodes -o json
- Common formats:
-o wide → more details
-o yaml → show full YAML spec
-o json → machine-parsable JSON
-o name → only return resource names
- Editing Resources with kubectl edit
- You can modify resources live:
kubectl edit deployment nginx-deploy
- This opens the YAML in your system editor (usually vi or nano).
- When saved, Kubernetes applies the change automatically.
- Scaling Applications
- Increase or decrease the number of Pods:
kubectl scale deployment nginx-deploy --replicas=5
- Kubernetes will automatically create or remove Pods.
- Rollouts and Rollbacks
- Check rollout status:
kubectl rollout status deployment nginx-deploy
- Undo a failed rollout:
kubectl rollout undo deployment nginx-deploy
- This is extremely useful when updating images.
- Using Namespaces
- List all namespaces:
kubectl get namespaces
- Specify a namespace:
kubectl get pods -n testing
- Set a default namespace for your context:
kubectl config set-context --current --namespace=testing
- Debugging with kubectl
- Run an interactive shell inside a container:
kubectl exec -it mypod -- bash
- Check events:
kubectl get events --sort-by=.metadata.creationTimestamp
- Inspect failing pods:
kubectl describe pod mypod
Choosing a Managed Kubernetes Provider
- What Is a Managed Kubernetes Provider?
- A managed Kubernetes provider is a cloud service that runs and maintains the Kubernetes control plane for you.
- This means you do not manage:
- API server
- scheduler
- etcd database
- controller manager
- control-plane nodes
- Instead, you only manage:
- worker nodes (or sometimes not even these)
- deployments, services, autoscaling
- network policies
- storage and secrets
- Managed Kubernetes is essential for:
- production workloads
- high availability
- enterprise deployments
- teams without deep Kubernetes expertise
- Why Choose a Managed Provider?
- Running Kubernetes manually requires:
- patching control plane
- maintaining etcd backups
- managing API certificates
- configuring HA/replicas for control plane
- debugging cluster internals
- A managed provider removes most of that operational work.
- Popular Managed Kubernetes Providers
| Provider | Service Name |
| --- | --- |
| Amazon AWS | Amazon Elastic Kubernetes Service (EKS) |
| Google Cloud | Google Kubernetes Engine (GKE) |
| Microsoft Azure | Azure Kubernetes Service (AKS) |
| DigitalOcean | DigitalOcean Kubernetes (DOKS) |
| Linode | Linode Kubernetes Engine (LKE) |
| Oracle Cloud | Oracle Container Engine for Kubernetes (OKE) |
| IBM Cloud | IBM Cloud Kubernetes Service |
| Vultr | Vultr Kubernetes Engine (VKE) |
Installing a Local Kubernetes Cluster
- What Is a Local Kubernetes Cluster?
- A local Kubernetes cluster is a Kubernetes environment that runs entirely on your local machine.
- It allows you to:
- practice Kubernetes without paying for cloud resources
- test deployments locally before pushing to production
- experiment with YAML manifests
- learn core concepts (pods, services, deployments, volumes, etc.)
- You do NOT need real cloud nodes. Everything runs as:
- Docker containers, or
- local virtual machines
- Popular Tools for Running Kubernetes Locally
- There are four major options:
| Tool | Description | Best For |
| --- | --- | --- |
| Minikube | Single-node cluster using VM or Docker | Beginners |
| kind (Kubernetes in Docker) | Runs cluster entirely inside Docker containers | Fast local setups, CI pipelines |
| Docker Desktop Kubernetes | K8s cluster built into Docker Desktop | Mac/Windows users |
| MicroK8s | Lightweight single-node K8s from Canonical | Linux developers |
- Installing Kubernetes Locally Using Minikube
- Minikube is the most popular tool.
- Step 1: Install Minikube (check the install instructions for your specific machine)
- Step 2: Start the cluster
minikube start
- By default:
- 1 node cluster
- uses Docker driver if installed
- You can choose the VM driver explicitly:
minikube start --driver=docker
- Step 3: Check the cluster status
kubectl get nodes
- Step 4: Access the Kubernetes Dashboard
minikube dashboard
- This opens a full graphical Kubernetes dashboard in your browser.
- Installing Kubernetes Locally Using kind (Kubernetes in Docker)
- Kind creates clusters inside Docker containers.
- It is extremely fast and widely used in CI pipelines.
- Step 1: Install kind
- Step 2: Create a cluster
kind create cluster
- By default, this creates a single control-plane node running as a Docker container (worker nodes can be added with a config file, as in the multi-node example below).
- Step 3: Verify
kubectl get nodes
- Optional: Create a multi-node cluster
# kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
kind create cluster --config kind-cluster.yaml
- Installing Kubernetes Using Docker Desktop (Mac / Windows)
- If you use Mac or Windows, Docker Desktop includes Kubernetes support.
- Step 1: Install Docker Desktop
- Step 2: Enable Kubernetes
- Open Docker Desktop
- Go to: Settings → Kubernetes
- Check:
Enable Kubernetes
- Click Apply & Restart
- After a few minutes, check:
kubectl get nodes
- You now have a single-node Kubernetes cluster running inside Docker Desktop.
- Testing Your Local Cluster
- After installing any of the above, test with a simple workload:
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=NodePort
kubectl get svc
- Access application:
- Minikube → minikube service hello
- Kind / MicroK8s → use kubectl port-forward
Deploying Your First Application in Kubernetes
- Step 1: Ensure Your Kubernetes Cluster Is Ready
- You need a running cluster. Examples:
- minikube (local development)
- Docker Desktop Kubernetes
- kind (Kubernetes-in-Docker)
- Verify cluster status:
kubectl get nodes
- You should see at least one node in Ready state.
- Step 2: Create Your First Deployment
- Create a file named nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deploy
spec:
replicas: 2
selector:
matchLabels:
app: nginx-app
template:
metadata:
labels:
app: nginx-app
spec:
containers:
- name: nginx-container
image: nginx:latest
ports:
- containerPort: 80
Explanation:
replicas: 2 → run two identical pods
selector → Deployment uses this label to manage pods
template → describes the pod
containerPort: 80 → nginx listens on port 80
Apply the manifest:
kubectl apply -f nginx-deployment.yaml
- Step 3: Verify Deployment
- Check deployments:
kubectl get deployments
Check pods:
kubectl get pods
You should see two nginx pods running.
Describe more details:
kubectl describe deployment nginx-deploy
- Step 4: Expose Your Application with a Service
- Create nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
app: nginx-app
ports:
- port: 80
targetPort: 80
nodePort: 32000
Explanation:
type: NodePort → expose service on all nodes
selector → connect service to pods
port: 80 → service port
targetPort: 80 → container port
nodePort: 32000 → external port (30000–32767)
Apply the service:
kubectl apply -f nginx-service.yaml
- Step 5: Access the Application
- Get node IPs:
kubectl get nodes -o wide
Check INTERNAL-IP or EXTERNAL-IP.
Now visit the app in your browser:
http://NODE-IP:32000
If you use minikube:
minikube service nginx-service
- Step 6: Scaling Your Application
- Kubernetes makes scaling easy:
kubectl scale deployment nginx-deploy --replicas=5
Verify the new pods:
kubectl get pods
Your application now runs with 5 replicas.
- Step 7: Updating Your Application (Rolling Update)
- Update the image:
kubectl set image deployment/nginx-deploy nginx-container=nginx:1.25
Kubernetes will:
- start new pods using new image
- stop old pods gradually
Check rollout status:
kubectl rollout status deployment nginx-deploy
- Step 8: Rolling Back to a Previous Version
kubectl rollout undo deployment nginx-deploy
Kubernetes automatically returns to the last stable version.
- Step 9: Deleting the Application
- Delete the service:
kubectl delete -f nginx-service.yaml
Delete deployment:
kubectl delete -f nginx-deployment.yaml
Pods and Service are cleanly removed.
Deploying Your First Real Application in Minikube
- Overview
- After installing Minikube, the next step is deploying a real workload.
- We will now try to deploy a complete real-world application consisting of:
- a backend API (Node.js)
- a frontend (Nginx static site)
- a Kubernetes Service to expose the app
- Minikube behaves like a real Kubernetes cluster:
- Pods, Deployments, Services, Ingress all work the same
- Perfect for local development before pushing to cloud
- Project Structure
myapp/
├── backend/
│ └── Dockerfile
│ └── server.js
├── frontend/
│ └── Dockerfile
│ └── index.html
└── k8s/
├── backend-deploy.yaml
├── backend-svc.yaml
├── frontend-deploy.yaml
├── frontend-svc.yaml
└── ingress.yaml
- We will deploy backend → service → frontend → ingress.
- Step 1: Write the Backend API
- backend/server.js:
const express = require('express');
const app = express();
const PORT = 5000;
app.get('/api/hello', (req, res) => {
res.json({ message: "Hello from Minikube Backend!" });
});
app.listen(PORT, () => console.log("Backend running on port", PORT));
- Create a simple Dockerfile for the backend:
FROM node:18
WORKDIR /app
COPY . .
RUN npm install express
CMD ["node", "server.js"]
- Step 2: Write the Frontend
- frontend/index.html:
<!DOCTYPE html>
<html>
<body>
<h1>Frontend served in Minikube</h1>
<button id="btn">Call API</button>
<pre id="result"></pre>
<script>
document.getElementById('btn').onclick = async () => {
const res = await fetch('/api/hello');
const data = await res.json();
document.getElementById('result').innerText = data.message;
}
</script>
</body>
</html>
- Frontend Dockerfile:
FROM nginx:latest
COPY . /usr/share/nginx/html/
- Step 3: Build Docker Images Inside Minikube
- Minikube uses its own Docker daemon.
- Connect your terminal to Minikube's Docker environment:
eval $(minikube docker-env)
- Build images:
docker build -t my-backend:1.0 ./backend
docker build -t my-frontend:1.0 ./frontend
- These images remain inside Minikube’s Docker daemon, NOT your system Docker.
- Step 4: Write the Backend Deployment + Service
k8s/backend-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-deploy
spec:
replicas: 1
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: backend
image: my-backend:1.0
ports:
- containerPort: 5000
k8s/backend-svc.yaml:
apiVersion: v1
kind: Service
metadata:
name: backend-svc
spec:
selector:
app: backend
ports:
- port: 5000
targetPort: 5000
type: ClusterIP
- This exposes the backend internally as http://backend-svc:5000.
- Step 5: Write the Frontend Deployment + Service
k8s/frontend-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-deploy
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: my-frontend:1.0
ports:
- containerPort: 80
k8s/frontend-svc.yaml:
apiVersion: v1
kind: Service
metadata:
name: frontend-svc
spec:
selector:
app: frontend
ports:
- port: 80
targetPort: 80
type: ClusterIP
- This exposes the frontend internally as http://frontend-svc.
- Step 6: Configure Ingress for External Access
- Enable the Minikube ingress addon:
minikube addons enable ingress
k8s/ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myapp-ingress
spec:
rules:
- host: myapp.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend-svc
port:
number: 80
- path: /api
pathType: Prefix
backend:
service:
name: backend-svc
port:
number: 5000
- Explanation:
myapp.local will point to Minikube
/ serves frontend
/api routes to backend
- Add an entry to your local hosts file (/etc/hosts). Point myapp.local at the address reported by minikube ip; with the Docker driver, running minikube tunnel and using 127.0.0.1 also works:
127.0.0.1 myapp.local
- Step 7: Apply All Kubernetes Manifests
kubectl apply -f k8s/backend-deploy.yaml
kubectl apply -f k8s/backend-svc.yaml
kubectl apply -f k8s/frontend-deploy.yaml
kubectl apply -f k8s/frontend-svc.yaml
kubectl apply -f k8s/ingress.yaml
- Check resources:
kubectl get all
- Check ingress:
kubectl get ingress
- Step 8: Access the Application
http://myapp.local
- Click button → backend API is called.
- You now have a complete full-stack application deployed inside Minikube.
- This mirrors real production deployments using Kubernetes + Ingress.
- Step 9: Debugging Tips
- List pods:
kubectl get pods
- Describe pod (very useful):
kubectl describe pod backend-deploy-xxxxx
- See logs:
kubectl logs backend-deploy-xxxxx
- Restart pods:
kubectl rollout restart deployment backend-deploy