Hands-On: Deploy Microservices to Minikube
This section provides a step-by-step practical workflow to deploy multiple microservices on a local Kubernetes cluster using Minikube. This setup helps simulate a production-like Kubernetes environment for testing and learning purposes.
Step 1: Start Minikube Cluster
First, start a local Kubernetes cluster using Minikube. This will spin up a single-node cluster on your machine.
minikube start
What happens here:
- Minikube provisions a lightweight Kubernetes cluster locally
- Installs core node components such as the kubelet inside the cluster, and configures kubectl's context to point at it
- Prepares a Docker environment for your microservices
Tip: Use minikube status to verify the cluster is running properly.
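For example, you can confirm both the cluster and the node are healthy with:

```shell
minikube status     # should report host, kubelet, and apiserver as Running
kubectl get nodes   # should list the single minikube node in Ready state
```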
Step 2: Build Docker Images
Next, containerize each microservice with Docker. Each service should have its own Dockerfile.
docker build -t user-service:1.0 .
docker build -t order-service:1.0 .
Why this is important:
- Containers ensure consistent environments across development, testing, and production
- Docker images package your microservice code with dependencies
Make sure to tag your images properly for versioning (e.g., 1.0, latest).
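As a sketch, a Dockerfile for one of these services might look like the following (the Node.js base image, port 8080, and server.js entrypoint are assumptions; adapt them to your own stack):

```dockerfile
# Minimal example Dockerfile -- base image and entrypoint are assumptions
FROM node:20-alpine
WORKDIR /app
# Install dependencies first to take advantage of Docker layer caching
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the rest of the service source
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```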
Step 3: Load Images into Minikube
Docker images built on your local machine are not automatically available inside the Minikube cluster. Load them manually:
minikube image load user-service:1.0
minikube image load order-service:1.0
Purpose:
- Ensures Kubernetes can find and run the Docker images inside the cluster
- Avoids errors like ImagePullBackOff
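Alternatively, when using Minikube's Docker driver you can point your local Docker CLI at the daemon inside Minikube before building, so images land in the cluster directly and no separate load step is needed:

```shell
# Reuse Minikube's internal Docker daemon for subsequent builds
eval $(minikube docker-env)
docker build -t user-service:1.0 .
docker build -t order-service:1.0 .
```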
Step 4: Apply Deployment YAML
Deploy your microservices using Kubernetes Deployment manifests. These YAML files define:
- Number of replicas (pods)
- Docker image to use
- Ports to expose
- Labels for service discovery
kubectl apply -f user-deployment.yaml
kubectl apply -f order-deployment.yaml
Kubernetes automatically creates the pods and maintains the desired number of replicas.
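A minimal user-deployment.yaml covering the fields above might look like this (the container port 8080 and the app: user-service label are assumptions; mirror the pattern in order-deployment.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 2                         # desired number of pods
  selector:
    matchLabels:
      app: user-service               # must match the pod template labels
  template:
    metadata:
      labels:
        app: user-service             # label used for service discovery
    spec:
      containers:
        - name: user-service
          image: user-service:1.0     # image loaded in Step 3
          imagePullPolicy: IfNotPresent   # use the local image, don't pull
          ports:
            - containerPort: 8080     # assumed application port
```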
Step 5: Create Services
Expose your deployments so that they can communicate with each other or be accessed externally:
kubectl apply -f user-service.yaml
kubectl apply -f order-service.yaml
Service types:
- ClusterIP: Internal communication only
- NodePort: External access from browser or tools
Tip: Always use services for inter-service communication rather than hardcoding IPs.
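A matching user-service.yaml might look like this (port numbers are assumptions; the selector must match the Deployment's pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  type: NodePort             # ClusterIP (the default) for internal-only traffic
  selector:
    app: user-service        # routes traffic to pods with this label
  ports:
    - port: 80               # port other services call inside the cluster
      targetPort: 8080       # assumed container port from the Deployment
      nodePort: 30080        # optional; must fall in the 30000-32767 range
```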
Step 6: Verify Pods & Services
Check the status of your pods and services:
kubectl get pods
kubectl get services
Expected outcome:
- Pods should show Running status
- Services should display an assigned ClusterIP or NodePort
- Replica counts should match the numbers defined in the deployment YAML
This ensures your microservices are up and running correctly.
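To wait for a rollout to finish rather than polling by hand, you can use:

```shell
kubectl rollout status deployment/user-service
```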
Step 7: Access Application
Access your services via Minikube:
minikube service user-service
Alternative:
Use the NodePort URL in your browser:
http://<minikube-ip>:<nodePort>
This opens your deployed application locally, allowing you to test functionality.
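If you only need the URL rather than an opened browser tab, these commands print the pieces directly:

```shell
minikube service user-service --url   # prints the full http://<ip>:<port> URL
minikube ip                           # prints just the cluster's IP address
```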
Step 8: Debug Issues (Optional)
If something isn’t working, inspect logs and pod details:
kubectl logs <pod-name>
kubectl describe pod <pod-name>
Why this is useful:
- Helps detect errors like missing environment variables or image issues
- Provides insight into pod events and container status
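Two more commands that often help: streaming logs live, and listing recent cluster events in chronological order:

```shell
kubectl logs -f <pod-name>                                 # follow logs as they arrive
kubectl get events --sort-by=.metadata.creationTimestamp   # most recent events last
```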
Step 9: Scale Application
You can scale microservices dynamically without downtime:
kubectl scale deployment user-service --replicas=3
Benefits of scaling:
- Handles higher traffic
- Tests your microservices’ ability to scale horizontally
- Useful for performance testing
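You can watch the new replicas come up as they are scheduled (assuming the pods carry the app: user-service label):

```shell
kubectl scale deployment user-service --replicas=3
kubectl get pods -l app=user-service -w   # -w watches for status changes; Ctrl+C to stop
```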
Step 10: Clean Up Resources
After testing, remove all deployed resources and stop the Minikube cluster:
kubectl delete -f .
minikube stop
Why clean-up matters:
- Frees up local system resources
- Prevents conflicts with future deployments
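If you want to remove the cluster entirely rather than just stopping it, minikube delete tears down the VM or container and all its state:

```shell
minikube delete
```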
Conclusion
This workflow covers the complete lifecycle of microservices deployment on Minikube:
- Start Minikube cluster
- Build Docker images
- Load images into Minikube
- Deploy applications using Deployment YAML
- Expose services
- Verify pods and services
- Access applications
- Debug if required
- Scale microservices
- Clean up resources
In simple terms, this guide provides a real production-like Kubernetes workflow locally, enabling you to test, debug, and scale microservices efficiently.
