K3s (https://k3s.io/) is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances. When a Service of type LoadBalancer is created, K3s automatically binds it to the server's public IP on the requested ports (typically 80 and 443).
K3s runs a small ServiceLB pod that uses hostPort to open the requested ports (e.g., 80/443) on the node and
forwards traffic to the Service.
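As a sketch, any Service of type LoadBalancer triggers this behavior; the name, labels, and ports below are illustrative placeholders, not part of this setup:

```yaml
# Illustrative only: a Service of type LoadBalancer that ServiceLB would
# pick up, opening port 80 on the node and forwarding to the matching Pods.
apiVersion: v1
kind: Service
metadata:
  name: demo-lb            # assumed name, for illustration
spec:
  type: LoadBalancer
  selector:
    app: demo              # assumed label on the backing Pods
  ports:
    - port: 80             # host port ServiceLB opens via hostPort
      targetPort: 8080     # container port traffic is forwarded to
```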
┌─────────────┐
│ kubectl │
│ (CLI/API) │
└─────┬───────┘
│
▼
┌───────────────────┐
│ Control Plane │
│────────────────── │
│ kube-apiserver │
│ kube-scheduler │
│ kube-controller- │
│ manager │
│ etcd │
└────────┬──────────┘
│
▼
┌───────────┐
│ Data Plane│
│ (Nodes) │
│───────────│
│ kubelet │
│ kube-proxy│
│ Pods & │
│ Containers│
└───────────┘
The goal: requests to https://yourdomain.com should route to webapp-service. Inside the cluster, kube-proxy handles the forwarding between the Service IP and the Pods.
[ Client ]
|
|
v
+--------------------------------+
| ServiceLB / LoadBalancer |
| L4 forwarding · binds 80/443 |
+--------------------------------+
|
- - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - - - - - - - - -
K3s cluster |
|
[ NAMESPACE: ingress-nginx ] | [ NAMESPACE: cert-manager ]
+--------------------------+ | +--------------------------+ external
| | | | | +----------------+
| NGINX pod |<-- | cert-manager |<->| Let's Encrypt |
| L7: SSL term + routing | | ACME solver | | (Internet CA) |
| | | | +----------------+
+--------------------------+ +--------------------------+
| : : : :
[ NAMESPACE: default ] -------------------------------------------------------+
| : : : : |
| : : watches : watches : |
| : v v : stores |
| : +--------------------------------+ : cert |
| : | Ingress resource | : |
| : | webapp-ingress (declarative) | : |
| : +--------------------------------+ : |
| : : |
| : reads +------------------------v-------+ |
| .......................> | TLS Secret | |
| | webapp-tls · tls.crt/key | |
| +--------------------------------+ |
| |
| +--------------------------------+ |
| | webapp-service | |
|..............................> | (Endpoints / Pod IP discovery) | |
| watches for Pod IPs +--------------------------------+ |
| |
| plain HTTP |
v |
+--------------------------+ |
| Webapp pod | |
| receives plain HTTP | |
+--------------------------+ |
|
--------------------------------------------------------------------------------+
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
LEGEND:
-----> Data plane (actual packet flow)
.....> Control plane (watches API / reads config / writes secrets)
Point your domain's DNS A record at the VPS public IP (e.g., yourdomain.com).
Allow SSH before enabling the firewall:
sudo ufw allow ssh
sudo ufw enable
Note: Kubernetes services exposed via LoadBalancer will bypass UFW rules.
K3s includes Traefik as the default Ingress Controller. Because this setup uses NGINX, Traefik should be disabled during installation.
Install K3s:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -s -
Verify cluster status:
sudo k3s kubectl get nodes
Because K3s bundles kubectl, running sudo k3s kubectl will automatically use K3s's built-in
configuration (/etc/rancher/k3s/k3s.yaml); no separate kubeconfig setup is needed.
K3s uses containerd instead of Docker. Therefore, do not use docker login. Instead, create a
Kubernetes secret for registry authentication.
Create GitHub Container Registry secret:
sudo k3s kubectl create secret docker-registry ghcr-login \
--docker-server=ghcr.io \
--docker-username=<your-username> \
--docker-password=<your-token> \
--docker-email=<your-email>
Install the NGINX Ingress Controller (bare-metal version):
sudo k3s kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/ingress-nginx/REPLACE_WITH_ACTUAL_VERSION/deploy/static/provider/baremetal/deploy.yaml
Patch the service to use a LoadBalancer:
sudo k3s kubectl patch svc ingress-nginx-controller \
-n ingress-nginx \
-p '{"spec": {"type": "LoadBalancer"}}'
Changing the service type to LoadBalancer allows K3s ServiceLB to bind the host ports 80 and 443
directly. No minikube tunnel process is required.
Traffic flow: Internet → VPS public IP → NGINX Ingress Controller → Kubernetes Service → Pods
Troubleshooting Check:
sudo k3s kubectl get svc -n ingress-nginx
If there is no external IP, check ServiceLB:
sudo k3s kubectl get pods -n kube-system | grep svclb
If Traefik wasn't disabled correctly, NGINX will be stuck in Pending because Traefik is holding ports 80/443.
To fix:
sudo k3s kubectl delete helmchart traefik -n kube-system
sudo k3s kubectl delete helmchart traefik-crd -n kube-system
echo "disable: traefik" | sudo tee -a /etc/rancher/k3s/config.yaml
sudo systemctl restart k3s
The imagePullSecrets reference in the deployment YAML assumes the secret name
ghcr-login matches what was created in Phase 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      imagePullSecrets:
        - name: ghcr-login
      containers:
        - name: webapp
          image: ghcr.io/your-org/your-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: ClusterIP
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 3000
Apply resources:
sudo k3s kubectl apply -f deployment.yaml
sudo k3s kubectl apply -f service.yaml
Install Cert-Manager:
sudo k3s kubectl apply -f \
https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
Wait for the pods:
sudo k3s kubectl get pods -n cert-manager
Cert-manager introduces new resource types (CRDs): Certificate,
Issuer, and ClusterIssuer. It runs three components: cert-manager-controller (the
brain), webhook (validates YAML syntax), and cainjector.

Create a ClusterIssuer for Let's Encrypt:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: you@yourdomain.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-key
    solvers:
      - http01:
          ingress:
            class: nginx
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - yourdomain.com
      secretName: webapp-tls
  rules:
    - host: yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80
Apply SSL and routing resources:
sudo k3s kubectl apply -f cluster_issuer.yaml
sudo k3s kubectl apply -f ingress.yaml
A user's browser requests https://yourdomain.com. NGINX intercepts,
encrypts/decrypts traffic using the webapp-tls secret, and consults the routing rules, which send the request to webapp-service on port 80. The Service then uses its label selector (app: webapp) to forward it to healthy Pods.

Check certificate issuance:
sudo k3s kubectl get certificate
sudo k3s kubectl describe certificate
View all cluster resources:
sudo k3s kubectl get all -A
sudo k3s kubectl get all -A --sort-by='.kind' -o custom-columns='NAMESPACE:.metadata.namespace,TYPE:.kind,NAME:.metadata.name,AGE:.metadata.creationTimestamp'
Even if your web app is simple, your VPS is exposed to the internet. Here’s how to check for open ports and secure your system.
Run the following command to see which services are listening on TCP/UDP ports:
$ ss -tulpen
To specifically check if the Kubernetes API server (port 6443) is exposed:
$ ss -tulpen | grep 6443
If the output shows a listener on *:6443, your Kubernetes API is reachable from the public
internet, which is a critical risk.
Even though UFW is active, K3s may bypass it. Use the cloud firewall to restrict access:
Instead of allowing SSH from anywhere (0.0.0.0/0), allow only your home/work IP:
Source IP / CIDR: YOUR.IP.ADD.RESS/32
This prevents attackers from brute-forcing your SSH login.
Check that 6443 and other sensitive ports are no longer publicly reachable:
$ ss -tulpen | grep 6443
$ ss -tulpen | grep -E "22|80|443"
$ sudo apt update && sudo apt upgrade -y
$ sudo k3s kubectl version
Check logs for suspicious activity:
$ sudo journalctl -u k3s
$ sudo k3s kubectl logs -n ingress-nginx <nginx-pod>
Container workloads should avoid default root execution and unnecessary operating system utilities.
The Risk: Containers that run as root increase the impact of an application compromise. If a vulnerability is exploited, elevated privileges inside the container can make lateral movement and post-exploitation easier. Using full base images also introduces shells, package managers, and extra binaries that expand the attack surface.
The Fix: Enforce a non-root runtime with a defined securityContext and use
minimal or distroless images to reduce available tooling and shrink the attack surface.
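As a minimal sketch, assuming the webapp Deployment from this guide and an image built to run as UID 1000 (both assumptions to adjust for your image), the securityContext could look like:

```yaml
# Sketch: hardened security settings for the webapp Deployment.
# UID 1000 and the read-only root filesystem are assumptions about the image.
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true       # refuse to start containers as root
        runAsUser: 1000          # assumed non-root UID baked into the image
      containers:
        - name: webapp
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]      # drop all Linux capabilities
```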
Workloads should define CPU and memory boundaries.
The Risk: Without resource requests and limits, unexpected traffic spikes, memory leaks, or inefficient processes can consume excessive node resources and destabilize other workloads or the cluster itself.
The Fix: Define resources.requests and resources.limits to ensure
predictable scheduling, prevent resource starvation, and allow runaway containers to be killed and restarted safely.
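A minimal sketch for the webapp container; the numbers are placeholder assumptions to be tuned against observed usage:

```yaml
# Sketch: resource boundaries for the webapp container (values are guesses;
# measure real usage before settling on numbers).
containers:
  - name: webapp
    resources:
      requests:            # what the scheduler reserves for the Pod
        cpu: 100m
        memory: 128Mi
      limits:              # hard ceiling; exceeding memory triggers OOM-kill
        cpu: 500m
        memory: 256Mi
```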
NetworkPolicies
The Risk: Kubernetes defaults to a "flat" network where any pod can communicate with any other pod. If your webapp is compromised, the attacker can scan and communicate with internal K3s components or future backend databases you deploy.
The Fix: Implement a default-deny NetworkPolicy that only allows traffic from your NGINX ingress controller to your webapp pods.
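A sketch of such a policy, assuming the webapp runs in the default namespace and that the ingress-nginx namespace carries the standard kubernetes.io/metadata.name label:

```yaml
# Sketch: only allow inbound traffic to webapp Pods from the ingress-nginx
# namespace; all other Pod-to-Pod ingress to webapp is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-allow-nginx-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: webapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 3000       # the container port from the Deployment
```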
Following these steps will help ensure your VPS remains secure even if it’s hosting a public web service.
# Stop the k3s systemd service
sudo systemctl stop k3s
# Kill leftover containerd processes
sudo pkill -f "containerd-shim"
# Confirm no k3s or containerd processes remain
ps aux | grep -E 'k3s|containerd-shim'
# Optionally, prevent k3s from starting on boot
sudo systemctl disable k3s
sudo systemctl reset-failed k3s
# Start the k3s service
sudo systemctl start k3s
# Verify the service is active
sudo systemctl status k3s
# Check node and pods
sudo k3s kubectl get nodes
sudo k3s kubectl get pods -A
# Full uninstall on server node
sudo /usr/local/bin/k3s-uninstall.sh
# Full uninstall on agent node
sudo /usr/local/bin/k3s-agent-uninstall.sh
Notes:
containerd-shim processes are not always stopped by systemctl stop k3s and must be killed manually.
Run sudo k3s kubectl get nodes to confirm cluster status after a start/stop cycle.