THE LANDSCAPE — K3S | Lightweight Kubernetes

K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

https://k3s.io/

Key Characteristics

  1. No Tunnels Needed: K3s includes a built-in lightweight load balancer (ServiceLB, formerly known as Klipper LB). When a Kubernetes Service of type LoadBalancer is created, ServiceLB launches a small svclb pod that uses hostPort to open the requested ports (e.g., 80/443) directly on the node and forwards traffic to the Service, so those ports are served from the server's public IP with no tunnel required.
  2. Production Defaults: K3s is designed for edge and production environments. APIs are not left unauthenticated by default.
  3. UFW Bypass Behavior: Like Docker, K3s manipulates iptables directly, installing rules that take precedence over what UFW configures. If a Kubernetes LoadBalancer exposes a port, that port becomes publicly accessible regardless of UFW status. Access control must therefore be managed with Kubernetes NetworkPolicies or the hosting provider's external firewall.
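A minimal sketch of the ServiceLB behavior described above. The Service name, selector, and port numbers are illustrative, not part of this setup:

```yaml
# Hypothetical Service: because its type is LoadBalancer, K3s ServiceLB
# starts a svclb-* pod that opens hostPort 80 on the node and forwards
# the traffic to this Service, which in turn load-balances to its pods.
apiVersion: v1
kind: Service
metadata:
  name: demo-lb
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80          # opened on the node's public IP by ServiceLB
      targetPort: 8080  # the port the demo pods listen on
```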

Control & Data Plane

Plane           What it includes                                 Relation to YAML
Control Plane   kube-apiserver, scheduler, controllers, etcd     Reads YAML and ensures the cluster matches it
Data Plane      Nodes, kubelet, kube-proxy, pods, containers     Actually runs the workloads described by YAML
  
        ┌─────────────┐
        │  kubectl    │
        │  (CLI/API)  │
        └─────┬───────┘
              │
              ▼
     ┌───────────────────┐
     │  Control Plane    │
     │───────────────────│
     │ kube-apiserver    │
     │ kube-scheduler    │
     │ kube-controller-  │
     │ manager           │
     │ etcd              │
     └────────┬──────────┘
              │
              ▼
        ┌───────────┐
        │ Data Plane│
        │ (Nodes)   │
        │───────────│
        │ kubelet   │
        │ kube-proxy│
        │ Pods &    │
        │ Containers│
        └───────────┘
        

Architecture & Routing Flow

Phase 1: The Background Setup (Control Plane)

  1. SSL Prep: Cert-manager fetches a certificate from Let's Encrypt and saves it as a Kubernetes Secret.
  2. Rule Reading: The NGINX Controller watches the Ingress resource to learn that yourdomain.com should route to webapp-service.
  3. Endpoint Discovery: NGINX queries the Kubernetes API to ask, "What are the actual Pod IPs behind webapp-service?" It saves those IPs in its internal configuration.

Phase 2: The Actual Request (Data Plane)

  1. The Entry: A client requests https://yourdomain.com.
  2. The L4 Funnel: The external LoadBalancer blindly forwards this TCP traffic into the cluster to the NGINX Pod.
  3. SSL Termination: NGINX intercepts the traffic, reads the TLS Secret, and decrypts the HTTPS request into plain HTTP.
  4. The Direct Hop (The Bypass): Because NGINX already memorized the Pod IPs in Phase 1, it routes the plain HTTP traffic directly to the Webapp Pod, completely bypassing kube-proxy and the Service IP.
           
                          [ Client ]
                                |
                                |
                                v
                +--------------------------------+
                |   ServiceLB / LoadBalancer     |
                | L4 forwarding · binds 80/443   |
                +--------------------------------+
                                |
- - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - - - - - - - - -
 K3s cluster                    |
                                |
  [ NAMESPACE: ingress-nginx ]  |           [ NAMESPACE: cert-manager ]
  +--------------------------+  |           +--------------------------+     external
  |                          |  |           |                          |   +----------------+
  |        NGINX pod         |<--           |       cert-manager       |<->| Let's Encrypt  |
  | L7: SSL term + routing   |              |       ACME solver        |   | (Internet CA)  |
  |                          |              |                          |   +----------------+
  +--------------------------+              +--------------------------+
       |       :          :                       :              :         
  [ NAMESPACE: default ] -------------------------------------------------------+
       |       :          :                       :              :              |
       |       :          : watches               : watches      :              | 
       |       :          v                       v              : stores       |            
       |       :    +--------------------------------+           : cert         |            
       |       :    |        Ingress resource        |           :              |                          
       |       :    |  webapp-ingress (declarative)  |           :              |                          
       |       :    +--------------------------------+           :              |
       |       :                                                 :              |
       |       : reads                  +------------------------v-------+      |
       |       .......................> |           TLS Secret           |      |
       |                                |    webapp-tls · tls.crt/key    |      |
       |                                +--------------------------------+      |
       |                                                                        |
       |                                +--------------------------------+      |
       |                                |         webapp-service         |      |
       |..............................> | (Endpoints / Pod IP discovery) |      |
       |   watches for Pod IPs          +--------------------------------+      |
       |                                                                        |
       | plain HTTP                                                             |
       v                                                                        |
  +--------------------------+                                                  |                                                            
  |        Webapp pod        |                                                  |
  |   receives plain HTTP    |                                                  |
  +--------------------------+                                                  |
                                                                                |
--------------------------------------------------------------------------------+
                                                                        
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

 LEGEND:
 -----> Data plane (actual packet flow)
 .....> Control plane (watches API / reads config / writes secrets)

PHASE 1 — Virtual Server and Domain

  1. Rent a virtual private server (VPS) with a static IP.
  2. Purchase a domain (Example: yourdomain.com).
  3. Configure DNS: Set the domain's A record to the static IP of the VPS.
  4. SSH into the server (Operating system: Ubuntu).
  5. Configure UFW for SSH access:
    sudo ufw allow ssh
    sudo ufw enable
    Note: Kubernetes services exposed via LoadBalancer will bypass UFW rules.

PHASE 2 — Install K3s

K3s includes Traefik as the default Ingress Controller. Because this setup uses NGINX, Traefik should be disabled during installation.

Install K3s:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -s -

Verify cluster status:

sudo k3s kubectl get nodes

Explanation: K3s runs directly as a systemd service on the host machine. There are no virtual machines or hidden Docker networking layers. kubectl automatically uses K3s's built-in configuration at /etc/rancher/k3s/k3s.yaml.

PHASE 3 — Private Registry Access and NGINX Ingress

K3s uses containerd instead of Docker. Therefore, do not use docker login. Instead, create a Kubernetes secret for registry authentication.

Create GitHub Container Registry secret:

sudo k3s kubectl create secret docker-registry ghcr-login \
--docker-server=ghcr.io \
--docker-username=<your-username> \
--docker-password=<your-token> \
--docker-email=<your-email>

Install the NGINX Ingress Controller (bare-metal version):

sudo k3s kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/ingress-nginx/REPLACE_WITH_ACTUAL_VERSION/deploy/static/provider/baremetal/deploy.yaml

Patch the service to use a LoadBalancer:

sudo k3s kubectl patch svc ingress-nginx-controller \
-n ingress-nginx \
-p '{"spec": {"type": "LoadBalancer"}}'

Explanation:

Changing the service type to LoadBalancer allows K3s ServiceLB to bind the host ports 80 and 443 directly. No minikube tunnel process is required.

Traffic flow: Internet → VPS public IP → NGINX Ingress Controller → Kubernetes Service → Pods

Troubleshooting Check:

sudo k3s kubectl get svc -n ingress-nginx

If there is no external IP, check ServiceLB:

sudo k3s kubectl get pods -n kube-system | grep svclb

If Traefik wasn't disabled correctly, NGINX will be stuck in Pending because Traefik is holding ports 80/443. To fix:
sudo k3s kubectl delete helmchart traefik -n kube-system
sudo k3s kubectl delete helmchart traefik-crd -n kube-system
printf 'disable:\n  - traefik\n' | sudo tee -a /etc/rancher/k3s/config.yaml
sudo systemctl restart k3s

PHASE 4 — Application Deployment

The imagePullSecrets reference in the deployment YAML assumes the secret name ghcr-login matches what was created in Phase 3.

Deployment configuration (deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      imagePullSecrets:
        - name: ghcr-login
      containers:
        - name: webapp
          image: ghcr.io/your-org/your-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000

Service configuration (service.yaml)

apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: ClusterIP
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 3000

Apply resources:

sudo k3s kubectl apply -f deployment.yaml
sudo k3s kubectl apply -f service.yaml

PHASE 5 — SSL with Cert-Manager

Install Cert-Manager:

sudo k3s kubectl apply -f \
https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

Wait for the pods:

sudo k3s kubectl get pods -n cert-manager

What is actually installed? Cert-manager adds its CRDs (Certificate, Issuer, ClusterIssuer, and related types) plus three deployments: the cert-manager controller, the cainjector, and the webhook.

ClusterIssuer configuration (cluster_issuer.yaml)

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: you@yourdomain.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-key
    solvers:
      - http01:
          ingress:
            class: nginx

Ingress configuration (ingress.yaml)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - yourdomain.com
      secretName: webapp-tls
  rules:
    - host: yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80

Apply SSL and routing resources:

sudo k3s kubectl apply -f cluster_issuer.yaml
sudo k3s kubectl apply -f ingress.yaml
Traffic Resolution:
  1. The Ingress: User visits https://yourdomain.com. NGINX intercepts, encrypts/decrypts traffic using the webapp-tls secret, and looks at routing rules.
  2. Direct Name Reference: Ingress sends traffic to the internal webapp-service on port 80.
  3. The Service: Receives traffic safely inside the cluster and uses its selector (app: webapp) to forward it to healthy Pods.
  4. The Deployment: Ensures those labeled Pods are alive and running.

PHASE 6 — Verification

Check certificate issuance:

sudo k3s kubectl get certificate
sudo k3s kubectl describe certificate

View all cluster resources:

sudo k3s kubectl get all -A
sudo k3s kubectl get all -A --sort-by='.kind' -o custom-columns='NAMESPACE:.metadata.namespace,TYPE:.kind,NAME:.metadata.name,AGE:.metadata.creationTimestamp'

Security How-To

Even if your web app is simple, your VPS is exposed to the internet. Here’s how to check for open ports and secure your system.

1. Check for open ports

Run the following command to see which services are listening on TCP/UDP ports:

$ ss -tulpen

To specifically check if the Kubernetes API server (port 6443) is exposed:

$ ss -tulpen | grep 6443

Warning: If you see *:6443 or 0.0.0.0:6443, your Kubernetes API is reachable from the public internet, which is a critical risk.

2. Use your VPS provider firewall

Even though UFW is active, K3s may bypass it. Use the cloud firewall to restrict access: allow inbound TCP 22 (SSH), 80, and 443, and block 6443 and everything else unless you need remote kubectl access.

3. Restrict SSH to your IP (optional but recommended)

Instead of allowing SSH from anywhere (0.0.0.0/0), allow only your home/work IP:

Source IP / CIDR: YOUR.IP.ADD.RESS/32

This prevents attackers from brute-forcing your SSH login.

4. Verify after applying firewall rules

Check that 6443 and other sensitive ports are no longer publicly reachable:

$ ss -tulpen | grep 6443
$ ss -tulpen | grep -E "22|80|443"

5. Keep your system updated

Apply OS security patches regularly (Ubuntu):

$ sudo apt update && sudo apt upgrade

6. Monitor logs

Check logs for suspicious activity:

$ sudo journalctl -u k3s
$ sudo k3s kubectl logs -n ingress-nginx <nginx-pod>

7. Container Security Contexts & Oversized Images

Container workloads should avoid default root execution and unnecessary operating system utilities.

The Risk: Containers that run as root increase the impact of an application compromise. If a vulnerability is exploited, elevated privileges inside the container can make lateral movement and post-exploitation easier. Using full base images also introduces shells, package managers, and extra binaries that expand the attack surface.

The Fix: Enforce a non-root runtime with a defined securityContext and use minimal or distroless images to reduce available tooling and shrink the attack surface.
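A sketch of such a securityContext for the webapp Deployment's pod template. The UID and filesystem settings are assumptions; the image must actually support running as a non-root user:

```yaml
# Pod template fragment for the webapp Deployment (illustrative values)
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001          # assumption: the image works with a non-root UID
      containers:
        - name: webapp
          image: ghcr.io/your-org/your-app:latest
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true   # assumption: the app writes nothing locally
            capabilities:
              drop: ["ALL"]              # remove all Linux capabilities
```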

8. Resource Requests & Limits

Workloads should define CPU and memory boundaries.

The Risk: Without resource requests and limits, unexpected traffic spikes, memory leaks, or inefficient processes can consume excessive node resources and destabilize other workloads or the cluster itself.

The Fix: Define resources.requests and resources.limits to ensure predictable scheduling, prevent resource starvation, and allow unhealthy containers to be restarted safely.
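A container fragment for the webapp Deployment showing both fields. The numbers are illustrative starting points, not measured values for any real workload:

```yaml
# Container fragment for the webapp Deployment
resources:
  requests:
    cpu: 100m        # reserved share used by the scheduler for placement
    memory: 128Mi
  limits:
    cpu: 500m        # CPU is throttled above this
    memory: 256Mi    # the container is OOM-killed and restarted above this
```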

9. Unrestricted Internal Networking (NetworkPolicies)

The Risk: Kubernetes defaults to a "flat" network where any pod can communicate with any other pod. If your webapp is compromised, the attacker can scan and communicate with internal K3s components or future backend databases you deploy.

The Fix: Implement a default-deny NetworkPolicy that only allows traffic from your NGINX ingress controller to your webapp pods.
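A sketch of such a policy for the webapp pods from Phase 4. It assumes the standard kubernetes.io/metadata.name namespace label and the container port 3000 used in deployment.yaml:

```yaml
# Deny all ingress to webapp pods except from the ingress-nginx namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-from-ingress-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: webapp        # targets the pods from deployment.yaml
  policyTypes:
    - Ingress            # everything not explicitly allowed below is denied
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 3000     # the webapp containerPort
```

K3s ships an embedded network policy controller, so this is enforced even with the default Flannel CNI.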

Following these steps will help ensure your VPS remains secure even if it’s hosting a public web service.

-----

K3s Stop & Restart Instructions

1. Stop k3s completely

# Stop the k3s systemd service
sudo systemctl stop k3s

# Kill leftover containerd-shim processes (K3s also ships /usr/local/bin/k3s-killall.sh for this)
sudo pkill -f "containerd-shim"

# Confirm no k3s or containerd processes remain
ps aux | grep -E '[k]3s|[c]ontainerd-shim'

# Optionally, prevent k3s from starting on boot
sudo systemctl disable k3s
sudo systemctl reset-failed k3s

2. Restart k3s

# Start the k3s service
sudo systemctl start k3s

# Verify the service is active
sudo systemctl status k3s

# Check node and pods
sudo k3s kubectl get nodes
sudo k3s kubectl get pods -A

3. Optional: Full Uninstall

# Full uninstall on server node
sudo /usr/local/bin/k3s-uninstall.sh

# Full uninstall on agent node
sudo /usr/local/bin/k3s-agent-uninstall.sh
