How to Use Kubernetes for Microservices

Bravin Wasike
April 28, 2025

Microservices have changed the way developers build and deploy applications, making it easier to scale and manage complex systems. Instead of relying on a single, monolithic codebase, microservices break applications into smaller, independent services that work together. This approach improves flexibility, but it also introduces challenges—like managing service communication, scaling efficiently, and handling failures.

Kubernetes addresses these challenges. As a container orchestration platform, it helps teams deploy, scale, and manage microservices efficiently. It takes care of scheduling, networking, load balancing, and failover, allowing developers to focus on building features rather than managing infrastructure.

In this guide, you'll learn how to deploy microservices in Kubernetes, set up networking between services, scale them effectively, and ensure high availability.

What Is Kubernetes?

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Google initially developed it, and it's now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes allows developers to manage complex application infrastructure while maintaining reliability, scalability, and high availability.

Key Features of Kubernetes

  • Automated deployment and scaling: Dynamically manages workloads based on resource demands.
  • Load balancing and service discovery: Automatically distributes traffic across microservices.
  • Self-healing capabilities: Automatically restarts containers that fail.
  • Declarative configuration: Uses YAML files to define desired states for applications.
  • Secret and configuration management: Securely stores sensitive information, such as API keys and credentials.

What Are Microservices?

Microservices is an architectural style in which developers build applications as a collection of small, independent services that communicate via APIs. Each microservice focuses on a specific business capability, allowing teams to develop, scale, and maintain applications more efficiently. Key characteristics include:

  • Independently deployable: Teams can deploy and update each microservice separately.
  • Loosely coupled: Services interact with minimal dependencies, increasing flexibility.
  • Technology agnostic: Developers can use different programming languages and frameworks to build microservices.
  • Resilient and scalable: Microservices help isolate faults and simplify scaling.

How to Use Kubernetes for Microservices

Step 1: Setting Up a Kubernetes Cluster

Before you can deploy microservices, you need a Kubernetes cluster to orchestrate and manage them.

There are several ways to set up a Kubernetes cluster, depending on your development and production needs:

  • Minikube (for local development)
  • Kubernetes on Docker Desktop
  • Managed Kubernetes Services such as Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Azure Kubernetes Service (AKS).

To install and start Minikube:
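A sketch for Linux x86-64 (see the Minikube docs for other platforms and installers):

```shell
# Download the Minikube binary and install it on the PATH
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start a local single-node cluster
minikube start
```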

Using Kind (Kubernetes in Docker)
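Kind runs each cluster node as a Docker container, which makes it a lightweight alternative for local work. A sketch (Homebrew install shown; see the Kind docs for other platforms):

```shell
# Install kind (macOS via Homebrew; other install methods are documented upstream)
brew install kind

# Create a local cluster; the name is arbitrary
kind create cluster --name microservices
```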

To check if your cluster is running:
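Either setup can be verified with kubectl:

```shell
# Show the cluster's control-plane endpoint
kubectl cluster-info

# List the nodes; they should report STATUS "Ready"
kubectl get nodes
```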

Step 2: Containerizing Microservices for Kubernetes

Before you can deploy microservices to Kubernetes, you need to containerize them using Docker. Start by creating a simple Node.js microservice and containerizing it.

After that, create a Dockerfile for the Node.js microservice. The following Dockerfile defines how to package a user-service microservice:
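A sketch, assuming the dependency-free server.js above (no package.json or npm install step is needed since there are no external dependencies):

```dockerfile
# Dockerfile for the user-service microservice
FROM node:18-alpine

WORKDIR /app

# Copy only the application code; there are no npm dependencies to install
COPY server.js .

# The service listens on port 3000
EXPOSE 3000

CMD ["node", "server.js"]
```

Build and push the image with `docker build -t <your-registry>/user-service:1.0 .` and `docker push <your-registry>/user-service:1.0`, substituting your own registry.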

Step 3: Creating a Kubernetes Deployment With Microservices Containers

Once your microservice is containerized and available in a container registry, create a Kubernetes Deployment, which ensures that the required number of replicas (instances) of the microservice is always running. If a pod fails, the Deployment automatically replaces it to maintain availability.

Below is a deployment YAML file (microservice-deployment.yaml) that runs a Node.js-based microservice in Kubernetes:
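A sketch of that manifest; the image name is a placeholder for wherever you pushed the container, and the resources block is optional here but required later if you want CPU-based autoscaling:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
        - name: my-microservice
          image: <your-registry>/user-service:1.0  # replace with your image
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 100m      # needed for CPU-based autoscaling (Step 5)
              memory: 128Mi
```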

Breaking Down the Deployment YAML:

  • replicas: 3: Ensures that three instances of the microservice run at all times. If one fails, Kubernetes will automatically start a replacement.
  • selector.matchLabels: Ensures that only pods with the label app: my-microservice are managed by this deployment.
  • template.metadata.labels: Applies labels to the pods created by this deployment.
  • containers.image: Specifies the container image that Kubernetes will pull from a registry.
  • ports.containerPort: Defines the port (3000) exposed by the container inside the cluster.

Apply the deployment:
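Assuming the manifest is saved as microservice-deployment.yaml:

```shell
kubectl apply -f microservice-deployment.yaml
```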

Verify the deployment:
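Using kubectl:

```shell
kubectl get deployments
```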

You should see an output similar to the following:
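For example (the AGE value will differ):

```
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
my-microservice   3/3     3            3           45s
```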

To check the status of the running pods, use:
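Filtering by the deployment's label:

```shell
kubectl get pods -l app=my-microservice
```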

This lists the three pod instances managed by the deployment:
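The pod name suffixes are generated by Kubernetes, so yours will differ:

```
NAME                               READY   STATUS    RESTARTS   AGE
my-microservice-7d9f8b6c4d-2xkqp   1/1     Running   0          60s
my-microservice-7d9f8b6c4d-8zjwl   1/1     Running   0          60s
my-microservice-7d9f8b6c4d-p5t7n   1/1     Running   0          60s
```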

Step 4: Creating a Kubernetes Service

After you deploy a microservice, it needs a way to communicate with other services or to be accessed within the cluster. In Kubernetes, a service provides a stable network endpoint that routes traffic to the appropriate set of pods even if pod instances are restarted or replaced.

Below is a service YAML file (my-microservice-service.yaml) that exposes the previously deployed microservice:
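A sketch of that manifest, matching the labels and ports used by the deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-microservice
spec:
  type: ClusterIP
  selector:
    app: my-microservice
  ports:
    - port: 80        # port the service exposes inside the cluster
      targetPort: 3000  # port the container listens on
```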

Breaking Down the Service YAML:

  • selector.app: my-microservice: Makes sure that traffic is routed to the pods labeled app: my-microservice.
  • port: 80: The port on which the service is accessible within the cluster.
  • targetPort: 3000: The port where the microservice is listening inside the container.
  • type: ClusterIP: Exposes the microservice internally within the Kubernetes cluster.

Note:

  • If you want to expose the service externally, change type: ClusterIP to type: LoadBalancer (for cloud environments) or type: NodePort (for testing).
  • With LoadBalancer, the cloud provider will assign an external IP automatically.

Apply the service:
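Assuming the manifest is saved as my-microservice-service.yaml:

```shell
kubectl apply -f my-microservice-service.yaml
```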

Checking the Service Status

To confirm the service is running and get its details, use:
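Assuming the service name used above:

```shell
kubectl get service my-microservice
```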

Output for a ClusterIP service:
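The cluster IP is assigned by Kubernetes, so the value will differ:

```
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
my-microservice   ClusterIP   10.96.142.87   <none>        80/TCP    30s
```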

If using a LoadBalancer service, the external IP will be assigned (may take a few minutes):
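A representative example (203.0.113.10 is a documentation address, not a real one):

```
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
my-microservice   LoadBalancer   10.96.142.87   203.0.113.10   80:31234/TCP   2m
```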

To test the service, use curl (for ClusterIP, test from within the cluster):
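One way to test from inside the cluster is a temporary pod running the curlimages/curl image, addressing the service by its DNS name:

```shell
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl http://my-microservice
```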

For LoadBalancer, you can access the microservice directly using the assigned external IP:
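Substituting the EXTERNAL-IP value from the service listing:

```shell
curl http://<EXTERNAL-IP>
```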

Step 5: Scaling Microservices

Kubernetes lets you scale microservices by adjusting the number of replicas in a deployment, either manually or automatically. To handle increased traffic, Kubernetes can scale microservices with the Horizontal Pod Autoscaler (HPA).

Scale Up to 5 Instances
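Manual scaling is a single kubectl command against the deployment created earlier:

```shell
kubectl scale deployment my-microservice --replicas=5

# Confirm: five pods should now be listed
kubectl get pods -l app=my-microservice
```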

Auto-scaling Based on CPU Usage
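A sketch using the autoscaling/v2 API (saved here under the hypothetical filename hpa.yaml). Note that CPU-based autoscaling requires the metrics-server add-on in the cluster and CPU requests set on the deployment's containers:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-microservice-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-microservice
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when average CPU exceeds 70% of requests
```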

Apply the autoscaler:
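Assuming the autoscaler manifest is saved as hpa.yaml (a hypothetical filename):

```shell
kubectl apply -f hpa.yaml
```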

To check the autoscaler status, use:
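Using kubectl:

```shell
kubectl get hpa
```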

The output will look similar to this:
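The TARGETS column shows current versus target CPU utilization; if it reads `<unknown>`, metrics-server is likely missing or the containers lack CPU requests:

```
NAME                  REFERENCE                    TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
my-microservice-hpa   Deployment/my-microservice   cpu: 8%/70%   3         10        3          1m
```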

Testing Autoscaling

To test autoscaling, simulate CPU load using stress-ng in a pod:
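The load must land on the deployment's own pods for the HPA to react, so one option is to run stress-ng inside one of them. This sketch assumes an Alpine-based image running as root (so apk is available); adjust for your image:

```shell
# Exec into one of the deployment's pods and burn CPU for two minutes
kubectl exec -it deploy/my-microservice -- sh -c \
  "apk add --no-cache stress-ng && stress-ng --cpu 2 --timeout 120s"
```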

Then monitor the pod count:
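Watching both the HPA and the pods shows replicas being added as utilization climbs:

```shell
kubectl get hpa my-microservice-hpa --watch
kubectl get pods -l app=my-microservice --watch
```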

As CPU usage increases, the HPA will automatically scale up pods to handle the load.

Step 6: Load Balancing

Load balancing ensures efficient distribution of traffic across multiple instances of a microservice, preventing overload on any single pod. In Kubernetes, you can expose microservices externally using a LoadBalancer or an Ingress Controller.

Using an Ingress Controller (Recommended for Multiple Microservices)

If you need to manage multiple microservices under a single domain, an Ingress Controller is more efficient. The Ingress resource routes external traffic to the appropriate microservice based on the request URL.

Example Ingress configuration (ingress.yaml):
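A sketch using the networking.k8s.io/v1 API; the hostname is the example domain used later in this step, and the backend is the service created in Step 4:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-microservice-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: mymicroservice.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-microservice
                port:
                  number: 80
```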

Applying the Ingress Resource

To enable Ingress, install an Ingress Controller like NGINX:
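The ingress-nginx project publishes ready-made manifests; check its documentation for the URL matching your Kubernetes version (the controller version below is an example):

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
```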

Then deploy the Ingress rule:
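Assuming the resource above is saved as ingress.yaml:

```shell
kubectl apply -f ingress.yaml
```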

Check the Ingress status:
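The ADDRESS column stays empty until the controller has picked up the rule:

```shell
kubectl get ingress my-microservice-ingress
```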

If you’re using Minikube, expose the Ingress:
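With Minikube you can use the bundled NGINX ingress addon and map the test hostname to the cluster IP locally (sudo is needed to edit /etc/hosts):

```shell
minikube addons enable ingress

# Resolve the example hostname to the Minikube node so the host rule matches
echo "$(minikube ip) mymicroservice.example.com" | sudo tee -a /etc/hosts
```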

Now your microservice is accessible at http://mymicroservice.example.com.

Step 7: Configuring Microservices With Environment Variables

Microservices often require configuration values such as database URLs, API keys, and service endpoints. Instead of hardcoding these values in application code, Kubernetes provides ConfigMaps and secrets to manage them securely.

Using ConfigMaps for Non-Sensitive Data

A ConfigMap stores non-sensitive configuration data, such as database connection strings, feature flags, and environment-specific variables.

Example: Creating a ConfigMap (configmap.yaml)
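A sketch with hypothetical values; the ConfigMap name my-microservice-config is referenced again when wiring it into the deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-microservice-config
data:
  DATABASE_URL: "postgres://db.example.com:5432/users"
  FEATURE_FLAG_NEW_UI: "true"
```

Apply it with `kubectl apply -f configmap.yaml`.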

Using Secrets for Sensitive Data

For sensitive data such as API keys, passwords, and credentials, use secrets instead of ConfigMaps. Secrets store data Base64-encoded; note that Base64 is an encoding, not encryption, so you should still restrict access to secrets (for example, with Kubernetes RBAC).

Example: Creating a Secret for Database Credentials (secret.yaml)
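A sketch with hypothetical credentials; values under `data:` must be Base64-encoded (the plain-text equivalents are shown in comments):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-microservice-secret
type: Opaque
data:
  DB_USER: YWRtaW4=          # "admin", Base64-encoded
  DB_PASSWORD: cGFzc3dvcmQ=  # "password", Base64-encoded
```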

Apply the secret:
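Assuming the manifest is saved as secret.yaml:

```shell
kubectl apply -f secret.yaml
```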

Using ConfigMaps and Secrets in a Deployment

Now update the microservice deployment file (microservice-deployment.yaml) to inject these values as environment variables.
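An excerpt of the containers section, assuming the ConfigMap and Secret names (my-microservice-config, my-microservice-secret) used in this guide's examples:

```yaml
# excerpt: spec.template.spec.containers in microservice-deployment.yaml
containers:
  - name: my-microservice
    image: <your-registry>/user-service:1.0
    env:
      - name: DATABASE_URL
        valueFrom:
          configMapKeyRef:
            name: my-microservice-config
            key: DATABASE_URL
      - name: DB_USER
        valueFrom:
          secretKeyRef:
            name: my-microservice-secret
            key: DB_USER
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: my-microservice-secret
            key: DB_PASSWORD
```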

Apply the updated deployment:
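Reapplying the manifest triggers a rolling restart of the pods so they pick up the new environment:

```shell
kubectl apply -f microservice-deployment.yaml
kubectl rollout status deployment/my-microservice
```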

Verifying Configuration

Check if the ConfigMap and secret are correctly applied:
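Assuming the resource names used above:

```shell
kubectl get configmap my-microservice-config -o yaml
kubectl get secret my-microservice-secret -o yaml
```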

Finally, verify that environment variables are correctly set in a running pod:
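A quick check via kubectl exec; note that secret values appear here in plain text, since the pod receives them decoded:

```shell
kubectl exec deploy/my-microservice -- env | grep -E 'DATABASE_URL|DB_USER|DB_PASSWORD'
```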

Final Thoughts: Simplifying Kubernetes for Microservices With DevZero

Deploying microservices on Kubernetes can be complex, requiring infrastructure setup, monitoring, and scaling. DevZero simplifies Kubernetes development by providing a preconfigured cloud development environment that allows teams to focus on building applications rather than managing infrastructure. With DevZero, developers can spin up microservices environments instantly, collaborate seamlessly, and deploy confidently.

Ready to streamline your Kubernetes microservices workflow? Sign up for DevZero today!

Cut Kubernetes Costs with Smarter Resource Optimization

DevZero helps you unlock massive efficiency gains across your Kubernetes workloads—through live rightsizing, automatic instance selection, and adaptive scaling. No changes to your app, just better bin packing, higher node utilization, and real savings.