Container Orchestration

Kubernetes Basics on RamNode

Master container orchestration with Kubernetes on RamNode's high-performance VPS hosting. Learn to deploy, scale, and manage containerized applications with industry-standard orchestration tools.

Ubuntu 20.04/22.04
K3s/kubeadm
⏱️ 60-90 minutes

Prerequisites

Before starting with Kubernetes, ensure you have:

Server Requirements

  • RamNode VPS (2GB+ RAM minimum)
  • Ubuntu 20.04/22.04 or CentOS 7+
  • At least 2 CPU cores
  • 20GB+ disk space
  • Docker installed

Knowledge Requirements

  • Docker fundamentals
  • Basic Linux administration
  • YAML syntax understanding
  • Networking concepts
  • Command line proficiency

What is Kubernetes?

Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.

🎯 Purpose

While Docker manages individual containers, Kubernetes orchestrates entire fleets of containers across multiple servers, ensuring high availability, scalability, and efficient resource utilization.

🏗️ Architecture

Kubernetes uses a master-worker architecture with a control plane managing worker nodes that run your containerized applications in pods.

⚙️ Key Features

Automatic scaling, load balancing, self-healing, rolling updates, service discovery, and configuration management out of the box.


Why Use Kubernetes on RamNode?

RamNode's infrastructure provides an excellent foundation for Kubernetes deployments:

🏎️ Resource Efficiency

KVM-based virtualization with dedicated resources ensures consistent performance, eliminating "noisy neighbor" issues common in shared hosting.

📈 Scalability

Easily add more RamNode instances to expand your Kubernetes cluster as your applications grow.

💰 Cost-Effectiveness

Competitive pricing allows running multiple nodes without breaking the budget, perfect for both learning and production use.

🌐 Network Performance

Low-latency network infrastructure ensures fast communication between cluster nodes, crucial for optimal performance.


Core Kubernetes Concepts

Understanding these fundamental building blocks is essential for working with Kubernetes:

🏗️ Pods

The smallest deployable unit. Contains one or more containers sharing storage and network resources.
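
As a concrete illustration, here is a minimal Pod manifest (the names and image below are arbitrary examples, not part of this guide's deployment):

```yaml
# pod-example.yaml — a single-container pod (illustrative names)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
```

In practice you rarely create bare Pods; Deployments (below) create and replace them for you.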

🖥️ Nodes

Worker machines in your cluster. Each RamNode VPS can serve as a Kubernetes node running pods.

🌐 Cluster

A set of nodes grouped together, allowing Kubernetes to distribute applications across multiple machines.

🚀 Deployments

Describe desired state for applications, managing replicas, updates, and rollbacks automatically.

🔗 Services

Provide stable network endpoints and load balancing for accessing applications.

🔧 ConfigMaps & Secrets

Manage configuration data and sensitive information separately from application code.
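
For example, a ConfigMap and a Secret can be declared side by side (names and values here are illustrative placeholders):

```yaml
# Illustrative ConfigMap and Secret — values are examples only
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "change-me"   # stored base64-encoded, not encrypted by default
```

Pods consume these through environment variables (`env`/`envFrom`) or mounted volumes.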


Choose Your Kubernetes Distribution

For RamNode deployments, you have several excellent options:

K3s (Recommended for Beginners)

Lightweight, easy to install, includes everything in a single binary. Perfect for development or small production workloads.

kubeadm (Standard Kubernetes)

Official Kubernetes installer providing full upstream compatibility. Best for learning standard Kubernetes or multi-node clusters.

MicroK8s

Canonical's lightweight distribution with useful addons. Great for development and testing environments.


Install K3s (Recommended)

K3s is perfect for RamNode deployments due to its simplicity and low resource requirements:

Install K3s Single Command
curl -sfL https://get.k3s.io | sh -
Verify Installation
sudo kubectl get nodes
Get Kubeconfig for External Access
sudo cat /etc/rancher/k3s/k3s.yaml
# For remote access, copy this file and replace 127.0.0.1 with your server's public IP
Optional: Set Up kubectl for Regular User
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

✅ K3s is now running! This single command sets up a complete Kubernetes cluster with sensible defaults.
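
To dig a little deeper, you can check the bundled system components (CoreDNS, Traefik, and friends) and the K3s service itself once the install above has completed:

```shell
# List K3s's built-in system pods
sudo kubectl get pods -n kube-system
# Confirm the K3s systemd service is active
sudo systemctl status k3s
```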


Install kubeadm (Multi-Node Clusters)

For multi-node clusters across multiple RamNode instances:

Add Kubernetes Repository
# The legacy apt.kubernetes.io repository has been shut down; use pkgs.k8s.io
# (replace v1.30 with the minor version you want to track)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install Kubernetes Components
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Initialize Cluster (Master Node Only)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Set Up kubectl for Your User
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install Pod Network (Flannel)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

💡 Tip: Save the join command from kubeadm init output to add worker nodes later.
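
If you lose that output, the join command can be regenerated at any time on the control-plane node:

```shell
# Prints a fresh join command with a newly created token (run on the master)
sudo kubeadm token create --print-join-command
```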


Deploy Your First Application

Let's deploy a simple web application to demonstrate Kubernetes basics:

Create a Deployment

nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
Apply the Deployment
kubectl apply -f nginx-deployment.yaml

Expose the Application

nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
Apply the Service
kubectl apply -f nginx-service.yaml
Check Your Deployment
# View all pods
kubectl get pods
# Check service status
kubectl get services
# Get service details
kubectl describe service nginx-service
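
To reach the application from outside the cluster, look up the port Kubernetes assigned to the NodePort service (a sketch; substitute your own VPS address for the placeholder):

```shell
# NodePorts are assigned from the 30000-32767 range by default
kubectl get service nginx-service -o jsonpath='{.spec.ports[0].nodePort}'
# Then browse to http://<your-server-ip>:<node-port>
```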

Essential Kubernetes Commands

Master these commands to effectively manage your Kubernetes cluster:

Cluster Information

Cluster Status Commands
# Cluster information
kubectl cluster-info
# Node information
kubectl get nodes -o wide
# Resource usage (requires metrics-server)
kubectl top nodes
kubectl top pods

Managing Applications

Application Management Commands
# Create resources from files
kubectl apply -f deployment.yaml
# Delete resources
kubectl delete -f deployment.yaml
# Scale deployments
kubectl scale deployment nginx-deployment --replicas=5
# Rolling updates
kubectl set image deployment/nginx-deployment nginx=nginx:1.21
# Rollback
kubectl rollout undo deployment/nginx-deployment

Debugging & Inspection

Debugging Commands
# Describe resources for debugging
kubectl describe pod <pod-name>
kubectl describe service <service-name>
# View logs
kubectl logs <pod-name>
kubectl logs -f <pod-name> # Follow logs
# Execute commands in pods
kubectl exec -it <pod-name> -- /bin/bash
# Port forwarding for testing
kubectl port-forward pod/<pod-name> 8080:80

Best Practices

Follow these best practices for optimal Kubernetes deployments on RamNode:

Resource Management

Always set resource requests and limits:

Resource Limits Example
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"

Security Considerations

  • Keep Kubernetes and Docker updated with latest security patches
  • Use Kubernetes Secrets for sensitive data, not ConfigMaps
  • Implement Network Policies to control pod-to-pod communication
  • Configure RBAC (Role-Based Access Control) for production deployments
  • Run containers with non-root users when possible
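
As a sketch of the last point, a Pod spec can enforce a non-root user through `securityContext` (the image name is a placeholder; the fields shown are standard Kubernetes API fields):

```yaml
# Illustrative Pod spec fragment — image is a placeholder for your own
spec:
  securityContext:
    runAsNonRoot: true    # reject containers that would start as root
    runAsUser: 1000
  containers:
  - name: app
    image: your-registry/app:1.0
    securityContext:
      allowPrivilegeEscalation: false
```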

Health Checks

Implement readiness and liveness probes:

Health Check Example
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

Monitoring Your Cluster

Set up monitoring to track cluster health and performance:

Install Metrics Server

Deploy Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Basic Monitoring Commands

Monitoring Commands
# View resource usage
kubectl top nodes
kubectl top pods --all-namespaces
# Monitor events
kubectl get events --sort-by=.metadata.creationTimestamp
# Watch resource changes
kubectl get pods -w

Backup Strategy

Backup Commands
# Backup cluster configuration
kubectl get all --all-namespaces -o yaml > cluster-backup.yaml
# For etcd backup (kubeadm clusters)
sudo ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
 --endpoints=https://127.0.0.1:2379 \
 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
 --cert=/etc/kubernetes/pki/etcd/server.crt \
 --key=/etc/kubernetes/pki/etcd/server.key
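
To recover from such a snapshot, etcdctl can restore it into a fresh data directory (a sketch only; a complete restore also involves stopping the API server and pointing the etcd static pod at the new directory):

```shell
# Restore the snapshot into a new data directory
sudo ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restored
```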

Scaling Your Applications

Learn to scale your Kubernetes applications effectively:

Manual Scaling

Scale Deployments
# Scale up
kubectl scale deployment nginx-deployment --replicas=5
# Scale down
kubectl scale deployment nginx-deployment --replicas=2

Horizontal Pod Autoscaling

Configure automatic scaling based on resource utilization:

hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
Apply HPA
kubectl apply -f hpa.yaml
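
Once applied, confirm the autoscaler is reading metrics (it depends on metrics-server, installed in the Monitoring section above):

```shell
# Show current/target utilization and replica counts
kubectl get hpa nginx-hpa
# Watch scaling decisions in real time
kubectl get hpa nginx-hpa -w
```

If the TARGETS column shows `<unknown>`, metrics-server is not running or not yet scraping.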

Adding Worker Nodes

Scale your cluster by adding more RamNode instances:

  1. Provision additional RamNode VPS instances
  2. Install Docker and Kubernetes components on new nodes
  3. Use the join token from your master node to add workers
  4. Verify new nodes with kubectl get nodes
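
Step 3 looks like this in practice (all values are placeholders; use the exact command printed by `kubeadm init` or regenerated with `kubeadm token create --print-join-command`):

```shell
# Run on each new worker node — placeholder values shown
sudo kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```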

Troubleshooting Common Issues

Common issues and their solutions when running Kubernetes on RamNode:

Recommended: Kubernetes Ingress Controllers

For production Kubernetes deployments, consider using an Ingress controller to manage external access to your services:

Skipper

Powerful HTTP router with advanced routing, custom filter plugins, and native Kubernetes Ingress support.

View Skipper Guide →

Traefik

Cloud-native edge router with automatic service discovery and Let's Encrypt integration.

View Traefik Guide →

🎉 Congratulations!

You've successfully learned Kubernetes basics on RamNode! You can now deploy, scale, and manage containerized applications with industry-standard orchestration tools.
