How to create a k3s cluster with Nginx Ingress controller

Linux · Jul 23, 2021

One of the easiest ways to install a Kubernetes distro for personal projects is k3s, but you may not want to use some of its built-in features, like Traefik as the default Ingress controller. Here you will learn how to create a k3s cluster with Nginx as the Ingress controller instead.

Why use k3s with Nginx Ingress controller?

The k3s project was created by Rancher Labs (https://k3s.io/) with the goal of being a lightweight Kubernetes distro. It may not be the best distro for a production environment, but it's a good fit for personal projects. It's also compatible with the ARM architecture if you want to run it on a Raspberry Pi, for example.

k3s ships with some internal components installed by default, like Traefik, CoreDNS, and Service LB (also created by Rancher Labs). You may want to keep the defaults for a first try, but one of those components gave me a lot of trouble: Traefik.

In my experience, it's hard to find good-quality documentation and examples for using Traefik with Kubernetes, so I prefer to use Nginx instead, because there is very good documentation in the official Kubernetes docs, as you can check here: https://kubernetes.github.io/ingress-nginx/.

So how do you remove Traefik and install Nginx as the default Ingress controller?

How to create a k3s cluster?

Deploy the cluster without Traefik installed:

# Disable traefik
export INSTALL_K3S_EXEC="server --no-deploy traefik"

# Create k3s cluster
curl -sfL https://get.k3s.io | sh -s -
controller server
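
Note: on newer k3s releases the --no-deploy flag has been deprecated in favor of --disable, so if the command above complains about an unknown flag, the equivalent would be:

# Disable traefik (newer k3s releases)
export INSTALL_K3S_EXEC="server --disable traefik"

# Create k3s cluster
curl -sfL https://get.k3s.io | sh -s -
controller server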

You can get the admin user credentials at:

# admin credentials
# (the server address probably points to 127.0.0.1,
#  so you'll need to change it to the server's
#  public/private IP)
cat /etc/rancher/k3s/k3s.yaml
controller server
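
If you want to run kubectl from your own machine instead of the controller server, a minimal sketch (assuming root SSH access and 1.2.3.4 as the controller IP) would be:

# Copy the kubeconfig from the controller server
scp root@1.2.3.4:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Replace the loopback address with the server IP
sed -i 's/127.0.0.1/1.2.3.4/' ~/.kube/config

# Test the access
kubectl get nodes
local machine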

How to add another k3s node?

On controller server (where you created the k3s cluster), get the node token:

cat /var/lib/rancher/k3s/server/node-token
controller server

This token is the "user and password" that will be used by any additional node joining the k3s cluster.

Now, on the new server, run:

# Set variables
export CONTROLLER_SERVER_IP="1.2.3.4"
export K3S_TOKEN="CONTROLLER_TOKEN_HERE!!!"

# Add server as a worker node
curl -sfL https://get.k3s.io | K3S_URL=https://${CONTROLLER_SERVER_IP}:6443 sh
worker node
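
After the installation finishes, you can confirm from the controller server that the new node joined the cluster:

# The new worker should appear with STATUS "Ready"
# after a few moments
kubectl get nodes -o wide
controller server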

How to install Nginx Ingress Controller

The version or even the installation method may have changed by the time you read this post, so it's better to check the official documentation first:

  • Check the latest version in the "Bare-metal" section of the Installation Guide - NGINX Ingress Controller: https://kubernetes.github.io/ingress-nginx/deploy/

Now, install the Nginx Ingress controller:

# Install Nginx Ingress controller, version 0.47.0
# (change the version for the newest one)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml
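
The manifest above creates the ingress-nginx namespace; you can watch the controller pod until it reaches the Running state:

# Wait for the Nginx Ingress controller to be up
kubectl -n ingress-nginx get pods --watch
controller server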

Your Ingress controller currently has no entry point, so let's create a load balancer Service to expose the ingress ports:

---

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-loadbalancer
  namespace: ingress-nginx
spec:
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  type: LoadBalancer
ingress-controller-load-balancer.yaml
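
Save the manifest above as ingress-controller-load-balancer.yaml and apply it. Since k3s ships with Service LB, the new Service should get the node's own IP as its external address, exposing ports 80 and 443 directly on the node:

# Create the LoadBalancer Service for the Ingress controller
kubectl apply -f ingress-controller-load-balancer.yaml

# Check the external IP assigned to it
kubectl -n ingress-nginx get service ingress-nginx-controller-loadbalancer
controller server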

Create an example for testing

The example below will create a Deployment and expose it using the Nginx Ingress controller. It's important to notice the annotation nginx.ingress.kubernetes.io/ssl-redirect: "false": by default SSL redirection is enabled, which raises an error because no certificate is configured.

Another important point is the domain name. I'm using test.w1.thenets.org, but you must change it to your own domain name and point it to your k3s node.
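
If you don't want to configure real DNS records just for this test, one workaround (assuming 1.2.3.4 is your k3s node IP) is to add an entry to /etc/hosts on the machine you'll run the test from:

# Resolve the test domain to the k3s node locally
echo "1.2.3.4  test.w1.thenets.org" | sudo tee -a /etc/hosts
local machine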

To deploy this example for Ingress testing, create a file called my-example.yaml and apply it using:

# Create the test namespace, if it doesn't exist yet
kubectl create namespace test

# Apply the example file
kubectl -n test apply -f my-example.yaml
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx-app
  namespace: test
spec:
  selector:
    matchLabels:
      name: test-nginx-backend
  template:
    metadata:
      labels:
        name: test-nginx-backend
    spec:
      containers:
        - name: backend
          image: docker.io/nginx:alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-nginx-service
  namespace: test
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    name: test-nginx-backend
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx-ingress
  namespace: test
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: test.w1.thenets.org
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: test-nginx-service
              port:
                number: 80

my-example.yaml
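
Once everything is applied, you can check whether the Ingress answers for the test domain. A minimal check (again assuming 1.2.3.4 as the node IP, in case you prefer to fake the Host header instead of using DNS) would be:

# Request through the domain name
curl -i http://test.w1.thenets.org/

# Or hit the node IP directly, faking the Host header
curl -i -H "Host: test.w1.thenets.org" http://1.2.3.4/

# Both should return the default Nginx welcome page
local machine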
