
Getting Started With Rancher Cheatsheet

1. Introduction

Rancher is an open-source platform designed to simplify container management in Kubernetes environments. It provides a comprehensive suite of tools and features that streamline the deployment, orchestration, and scaling of containers, making it an ideal solution for DevOps teams, system administrators, and developers.

At its core, Rancher focuses on Kubernetes cluster management, allowing users to create, configure, and oversee multiple clusters across diverse cloud providers or on-premises infrastructure. This multi-cluster management capability is a key strength of Rancher, offering a centralized point of control for various environments and workloads.

One of Rancher’s standout features is its robust user authentication and access control system. This ensures secure access to the platform and its resources, essential for maintaining the integrity of your containerized applications.

Rancher comes equipped with an application catalog that contains a library of pre-defined application templates. These templates simplify the deployment process for complex applications, enabling users to launch them with minimal configuration. Additionally, Rancher supports infrastructure provisioning, allowing users to set up the underlying infrastructure for their clusters through integrations with different cloud providers.

Monitoring and logging are crucial aspects of container management, and Rancher addresses this with seamless integration with popular tools like Prometheus and Grafana. This integration provides valuable insights into application performance, enabling proactive management and issue resolution.

Scalability is a key consideration for modern applications, and Rancher facilitates this by managing infrastructure resources and load balancing traffic. This ensures that applications can handle increased workloads without compromising performance.

Networking capabilities are also integral to Rancher’s offering. It allows users to manage communication between containers and services effectively. This includes support for overlay networks, which simplify complex network topologies.

Security is a paramount concern in container environments, and Rancher addresses this through features like integration with external authentication systems and encryption for data at rest. These security measures enhance the overall robustness of the platform.

Rancher’s versatility extends to its support for custom workloads. While it emphasizes Kubernetes orchestration, it also accommodates other container runtimes and the deployment of unique application types.

1.1 RKE

Rancher Kubernetes Engine (RKE) is a powerful open-source tool provided by Rancher Labs that simplifies the process of deploying, managing, and maintaining Kubernetes clusters. RKE enables users to create production-ready Kubernetes clusters with a user-friendly approach and a focus on security, flexibility, and ease of use.

Cluster Deployment: RKE automates the provisioning and configuration of infrastructure and Kubernetes components for clusters. It’s valuable for setting up clusters on various infrastructure providers and on-premises environments.
Configuration Management: RKE employs a declarative YAML configuration file to define cluster properties, including node roles, network settings, and authentication. This file ensures consistency and reproducibility when creating clusters.
Node Roles: RKE supports different node roles, like control plane nodes (hosting API server and control plane components) and worker nodes (for application workloads). This customization enables cluster architecture tailored to specific needs.
Customization and Flexibility: RKE offers flexibility in customizing aspects like networking, authentication providers, and plugins. This adaptability suits a wide range of use cases, enhancing cluster compatibility.
Security: RKE prioritizes security, supporting encryption for communication and data at rest. Integration with external authentication providers enhances secure access to the Kubernetes API server.
Kubernetes Version Control: RKE allows specifying the desired Kubernetes version for deployment. This is essential for application compatibility and staying up-to-date with Kubernetes releases.
High Availability (HA): RKE enables HA clusters by deploying multiple control plane nodes. This ensures continuous availability of the control plane, even in the presence of node failures.
Upgrades and Maintenance: RKE simplifies Kubernetes version upgrades. It permits node replacement during upgrades while maintaining cluster availability, streamlining maintenance tasks.
Stateful Services: RKE supports the deployment of stateful services, requiring persistent storage. This is crucial for running databases and other applications needing data persistence.
External Services: RKE integrates with external services like load balancers for ingress, enhancing application security and efficient traffic management.
Backup and Restore: RKE provides options for backing up and restoring cluster components and configurations. This capability ensures recovery from potential failures, safeguarding against data loss.

1.2 K3s

K3s is a lightweight, certified Kubernetes distribution developed by Rancher Labs. It’s designed to be efficient, easy to install, and suitable for resource-constrained environments. K3s is a perfect solution for scenarios where a full-scale Kubernetes cluster might be overkill or where simplicity and rapid deployment are essential.

Lightweight and Fast: K3s is optimized for performance and memory efficiency, making it suitable for constrained environments like edge computing and IoT. It uses fewer resources compared to standard Kubernetes clusters.
Single Binary Deployment: K3s is packaged as a single binary containing the Kubernetes server and components (e.g., etcd, kube-proxy). This simplifies installation and reduces management complexity.
Easy Installation: K3s offers straightforward installation using a single command. This accessibility benefits users of all Kubernetes expertise levels.
Components: K3s includes essential Kubernetes components: API server, controller manager, scheduler, kubelet, and a lightweight container runtime (e.g., containerd).
Security Focus: Despite its lightweight nature, K3s emphasizes security. It supports Docker and containerd runtimes, auto-generates TLS certificates, and provides simplified RBAC, ensuring a secure environment.
Automated Operations: K3s integrates automated tools for updates and upgrades, reducing manual intervention. Clusters remain up-to-date efficiently.
External Services Integration: Like standard Kubernetes, K3s supports external service integration, enabling load balancing, storage, and networking features.
Local Path Provisioner: K3s features a local-path provisioner for dynamic local storage provisioning. This benefits single-node or small clusters needing persistent storage.
Process Isolation: K3s uses process isolation to minimize attack vectors. Enhanced security is achieved without compromising functionality.
Support for ARM Architectures: K3s excels on ARM-based architectures, making it a top choice for edge devices, IoT platforms, and ARM-powered servers.
Use Cases: K3s suits diverse scenarios, including edge computing, development environments, testing, and instances with resource constraints or a focus on simplicity.
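
As a sketch of that simplicity and flexibility, the installer script accepts environment variables and extra server flags on the same one-line install; the version string and flags below are illustrative, so adjust them to your needs:

# Install a pinned K3s version with the bundled Traefik ingress controller disabled
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.27.4+k3s1" sh -s - --disable traefik

# Verify the node came up, using the kubectl bundled with K3s
sudo k3s kubectl get nodes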

2. Setting up Rancher

2.1 Installing Rancher

Rancher can be installed using Docker on a Linux server. Here’s a step-by-step guide:

Install Docker if it’s not already installed:

sudo apt update
sudo apt install docker.io

Run the Rancher Docker container:

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest

Access Rancher’s web interface by navigating to http://your-server-ip in a web browser.
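
A common variant of the command above (shown here as a hedged sketch, not a replacement for the official instructions) persists Rancher’s data on the host and, for Rancher v2.6 and later, retrieves the generated bootstrap password needed for the first login. The container name and host path are placeholders:

# Run Rancher with persistent data; recent releases also require the --privileged flag
sudo docker run -d --restart=unless-stopped --privileged \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  --name rancher rancher/rancher:latest

# Rancher v2.6+ prints a bootstrap password for the initial login
sudo docker logs rancher 2>&1 | grep "Bootstrap Password:"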

2.2 Adding a Host

After installing Rancher, you’ll need to add a host to start deploying Kubernetes clusters:

  1. In the Rancher dashboard, click on “Add Cluster.”
  2. Choose the type of cluster you want to create (e.g., Amazon EKS, Google GKE, Custom).
  3. Follow the on-screen instructions to set up the cluster configuration.

In Rancher, a “host” typically refers to a physical or virtual machine (node) that is part of a Kubernetes cluster. These hosts provide the underlying compute resources and infrastructure where your Kubernetes workloads, applications, and services run. Hosts are essential components of your cluster, and Rancher helps you manage them efficiently.
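
For a “Custom” cluster, Rancher generates a registration command that you run on each host you want to add; the command below is only a rough sketch of its shape, since the agent version, server URL, token, and checksum are all generated by Rancher for your installation:

# Run on each host; the trailing role flags decide what the host becomes
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.7.5 \
  --server https://rancher.example.com --token <registration-token> \
  --ca-checksum <checksum> \
  --etcd --controlplane --worker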

3. Managing Clusters

3.1 Creating a RKE Cluster

Creating a Rancher Kubernetes Engine (RKE) cluster involves a series of steps to set up and configure the Kubernetes cluster using Rancher’s RKE tool. RKE simplifies the process of deploying and managing Kubernetes clusters. Below is a step-by-step guide on how to create an RKE cluster:

3.1.1 Install RKE

Before creating a cluster, you need to install the RKE tool. You can download the binary from the official GitHub releases page or install it with a package manager such as Homebrew on macOS.

For example, to install RKE on Linux using curl:

curl -LO https://github.com/rancher/rke/releases/download/v1.3.10/rke_linux-amd64
chmod +x rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
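
A quick sanity check after the install: print the RKE version and, in recent RKE releases, list the Kubernetes versions it can deploy:

rke --version
# List supported Kubernetes versions for this RKE release
rke config --list-version --all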

3.1.2 Create a Cluster Configuration File

Create a YAML configuration file that describes the cluster’s nodes, services, and other settings. This file specifies the cluster topology, authentication, network, and other important details. Here’s a basic example:

nodes:
  - address: your-master-node-ip
    user: ubuntu
    role:
      - controlplane
      - etcd
  - address: your-worker-node-ip
    user: ubuntu
    role:
      - worker
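
The same file can also carry cluster-wide options. The sketch below rewrites it with a cluster name, the SSH key RKE uses to reach the nodes, a CNI plugin, and a pinned Kubernetes version; every value is a placeholder to adapt to your environment:

# Hypothetical cluster.yaml with a few commonly used cluster-wide settings
cat > cluster.yaml <<'EOF'
cluster_name: demo-cluster
ssh_key_path: ~/.ssh/id_rsa
kubernetes_version: v1.24.10-rancher4-1
network:
  plugin: canal
nodes:
  - address: your-master-node-ip
    user: ubuntu
    role: [controlplane, etcd]
  - address: your-worker-node-ip
    user: ubuntu
    role: [worker]
EOF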

3.1.3 Generate the Cluster Configuration (Optional)

Instead of writing the configuration file by hand, you can let RKE build it for you. The following command starts an interactive prompt that asks about nodes, roles, SSH access, and network settings, and writes the answers to a cluster configuration file (cluster.yml by default):

rke config

Note that this step does not create the kubeconfig file; kube_config_cluster.yml is generated when you bring the cluster up in the next step.

3.1.4 Deploy the Cluster

Once you have the cluster configuration, deploy the Kubernetes cluster using the following command:

rke up --config cluster.yaml

This command will provision the cluster’s infrastructure, install Kubernetes components, and set up the cluster according to your configuration.

3.1.5 Configure Kubeconfig

After the cluster is deployed, RKE writes a kubeconfig file named kube_config_cluster.yml (the name is derived from your cluster configuration file) into the working directory. Point kubectl at it, for example by making it your default kubeconfig:

mv kube_config_cluster.yml ~/.kube/config
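
If you would rather not overwrite an existing kubeconfig, pointing the KUBECONFIG environment variable at the generated file works just as well:

# Keep the generated kubeconfig where it is and reference it explicitly
export KUBECONFIG=$PWD/kube_config_cluster.yml
kubectl get nodes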

3.1.6 Verify the Cluster

You can verify the cluster’s status using the kubectl command:

kubectl get nodes

This will display the list of nodes in your newly created RKE cluster.

3.1.7 Access the Rancher UI

Optionally, you can use Rancher’s UI to manage your cluster. Install Rancher (if not already done) and create a new cluster using the “Custom” option. Enter the details from your cluster configuration, and Rancher will import and manage the cluster.

Keep in mind that these steps provide a basic overview of creating an RKE cluster. Depending on your requirements, you might need to adjust the configuration to include additional settings such as network plugins, authentication methods, and more. Always refer to the official RKE documentation for the most up-to-date and detailed instructions.

3.2 Rancher on a K3s cluster

Deploying Rancher on a K3s cluster allows you to manage and orchestrate Kubernetes clusters through Rancher’s user-friendly interface. Rancher simplifies the management of clusters, applications, and infrastructure, making it an excellent choice for centralizing Kubernetes operations. Here’s how you can deploy Rancher on a K3s cluster:

3.2.1 Creating a K3s cluster

Creating a K3s cluster involves several steps to deploy a lightweight Kubernetes cluster using the K3s tool. K3s is designed to be simple and efficient, making the process relatively straightforward. Here’s a step-by-step guide on how to create a K3s cluster:

Install K3s on the Master Node

You’ll need at least one node to act as the master node. On the master node, execute the following command to install K3s:

curl -sfL https://get.k3s.io | sh -

This command downloads and installs K3s. Upon installation, K3s starts automatically and sets up the Kubernetes components.
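
A quick way to confirm the installation succeeded is to check the service and query the cluster directly from the master node (K3s bundles its own kubectl):

# Confirm the k3s service is running
sudo systemctl status k3s --no-pager

# Query the cluster with the bundled kubectl
sudo k3s kubectl get nodes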

Retrieve the Cluster Configuration

After installation, the K3s cluster configuration can be found at /etc/rancher/k3s/k3s.yaml on the master node. Copy this configuration file to your local machine:

scp username@master-node-ip:/etc/rancher/k3s/k3s.yaml ~/.kube/config

Replace username with your SSH user and master-node-ip with the IP address of your master node. In the copied file, also change the server field from 127.0.0.1 to the master node’s address so kubectl can reach the cluster from your machine.
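
A one-line way to make that substitution, assuming the master’s address is 203.0.113.10 (replace it with your own):

# The copied kubeconfig points at 127.0.0.1; rewrite it to the master's reachable address
sed -i 's/127.0.0.1/203.0.113.10/' ~/.kube/config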

Access the K3s Cluster

You can now use kubectl to interact with your K3s cluster. For example, to list nodes:

kubectl get nodes

Join Worker Nodes

To add worker nodes to your K3s cluster, you’ll need to install K3s on each worker node. On each worker node, run the following command:

curl -sfL https://get.k3s.io | K3S_URL=https://<master-node-ip>:6443 K3S_TOKEN=<node-token> sh -

Replace <master-node-ip> with the IP address of your master node and <node-token> with the token generated during the initial master node installation. This command connects the worker node to the master node.
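
The node token is created on the master during installation; as a sketch, you can read it there and paste it into the join command:

# On the master node: print the token workers use to join the cluster
sudo cat /var/lib/rancher/k3s/server/node-token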

Verifying Nodes

Back on the master node, you can verify that the worker nodes have joined the cluster:

kubectl get nodes

You should see both the master and worker nodes listed.

Using K3s

You can now use kubectl commands to manage your K3s cluster, just like with any other Kubernetes cluster.

3.2.2 Install Rancher

Rancher provides a Helm chart for deploying Rancher on a Kubernetes cluster. Since K3s is a lightweight Kubernetes distribution, you can use Helm to install Rancher.

Helm is like a manager for Kubernetes applications. It helps you easily find, install, upgrade, and manage pre-configured packages (called charts) of Kubernetes applications. Think of it as a convenient way to package and deploy software on your Kubernetes clusters without having to manually write all the configuration files. It makes deploying complex applications a lot simpler.
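
If Helm is not installed yet, the official installer script is the quickest route; this is a sketch, so check the Helm documentation for the currently recommended method:

# Install Helm 3 using the official convenience script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh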

# Add the Helm repository for Rancher
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

# Create a namespace for Rancher (optional)
kubectl create namespace cattle-system

# Install Rancher using Helm
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com

Replace rancher.example.com with your actual domain name or IP address. This command will deploy Rancher into the cattle-system namespace.
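
One caveat: with the chart’s default certificate handling, Rancher expects cert-manager to be running in the cluster. A hedged sketch of installing it with Helm before installing Rancher (the chart pulls whatever version the jetstack repository currently serves):

# Install cert-manager, which Rancher's default TLS configuration relies on
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true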

3.2.3 Access Rancher UI

After the installation is complete, you can access the Rancher UI using the domain name or IP address you provided. Open your web browser and navigate to https://rancher.example.com (use the domain you configured); Rancher serves the UI over HTTPS.

3.2.4 Set Up Rancher

Follow the on-screen instructions to set up Rancher. You’ll need to set an admin password, choose an authentication provider, and configure other settings.

3.2.5 Add K3s Cluster

In the Rancher UI, navigate to the “Global” view and click “Add Cluster.” Choose “Import an existing cluster” and provide the kubectl configuration file for your K3s cluster (usually located at ~/.kube/config). Rancher will use this configuration to connect to your K3s cluster.
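
Depending on the Rancher version, the import flow may instead present a registration command to run against the K3s cluster with kubectl; it looks roughly like the following, where the URL and token are generated by Rancher and are shown here only as placeholders:

# Run with KUBECONFIG pointing at the K3s cluster to register it with Rancher
kubectl apply -f https://rancher.example.com/v3/import/<generated-token>.yaml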

3.2.6 Access Managed Clusters

Once you’ve added the K3s cluster to Rancher, you can manage it through the Rancher UI. This includes deploying applications, managing workloads, monitoring, and more.


 

4. Working with Environments

4.1 Creating Environments

Environments in Rancher allow you to logically organize your applications and services. To create an environment:

  1. In the Rancher dashboard, go to the cluster where you want to create the environment.
  2. Click on “Environments” and then “Add Environment.”
  3. Provide a name and description for the environment.
  4. Click “Create.”

4.2 Switching Between Environments

Switching between environments in Rancher is straightforward:

  1. In the Rancher dashboard, locate the current environment’s name at the top.
  2. Click on the environment’s name, then select the desired environment from the dropdown.

5. Deploying Applications

5.1 Deploying Using Catalog Templates

Rancher provides a catalog of predefined application templates that you can deploy. To deploy an application:

  1. In the Rancher dashboard, navigate to the desired cluster and environment.
  2. Click on “Catalogs” in the main menu.
  3. Browse available templates, select the desired one, and click “View Details.”
  4. Click “Launch” and follow the prompts to configure the application.

5.2 Deploying Using Kubernetes Manifests

If you have custom Kubernetes manifests, you can deploy them in Rancher (a minimal example manifest is shown after the steps):

  1. In the Rancher dashboard, navigate to the desired cluster and environment.
  2. Click on “Workloads” in the main menu.
  3. Click “Deploy.”
  4. Choose “Deploy YAML” and paste your Kubernetes manifest.
  5. Click “Continue” and adjust any settings as needed before deploying.
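
A minimal test manifest you could paste in step 4 is sketched below; the same YAML can also be applied straight from the command line, as shown. The Deployment name and image tag are arbitrary:

# Equivalent from the CLI: apply a small throwaway nginx Deployment
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
EOF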

6. Monitoring and Scaling

6.1 Monitoring Applications

Rancher provides monitoring capabilities through integration with tools like Prometheus and Grafana. To set up monitoring:

  1. In the Rancher dashboard, navigate to the desired cluster and environment.
  2. Click on “Monitoring” in the main menu.
  3. Follow the prompts to enable monitoring for your applications.

6.2 Scaling Workloads

Scaling workloads in Rancher is easy; an equivalent kubectl command is shown after the steps:

  1. In the Rancher dashboard, go to the cluster and environment containing the workload.
  2. Click on “Workloads” in the main menu.
  3. Find the workload you want to scale and click on it.
  4. Click “Scale.”
  5. Adjust the number of replicas and click “Save.”
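
The same result from the command line, assuming the workload is a Deployment named my-app in the default namespace (both names are placeholders):

# Scale a Deployment to 5 replicas without going through the UI
kubectl scale deployment my-app --replicas=5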

7. ETCD Backups

Backing up etcd is crucial for maintaining the integrity and recoverability of your Kubernetes cluster’s state. etcd is the distributed key-value store that stores all the configuration data and metadata for your cluster. In Rancher, etcd backups are essential for disaster recovery scenarios. Here’s how you can perform etcd backups in a Rancher-managed Kubernetes cluster:

7.1 Access the Master Node

SSH into one of the master nodes of your K3s cluster where etcd is running.

7.2 Identify etcd Data Directory

The etcd data directory typically resides at /var/lib/rancher/k3s/server/db.

7.3 Stop K3s

Before creating a backup, it’s recommended to stop K3s to ensure that etcd is in a consistent state:

sudo systemctl stop k3s

7.4 Create the Backup

To create a backup of the etcd data directory, you can use the rsync command to copy the directory and its contents to a backup location. Replace /path/to/backup with the actual path to your backup destination:

sudo rsync -avz /var/lib/rancher/k3s/server/db /path/to/backup
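
If your cluster uses the embedded etcd datastore, K3s also provides an on-demand snapshot command that can run while K3s is up, as an alternative to the stop-and-copy approach above; note this only applies when etcd (rather than the default single-node SQLite backend) is in use:

# Take an on-demand etcd snapshot; by default snapshots are stored under
# /var/lib/rancher/k3s/server/db/snapshots
sudo k3s etcd-snapshot save --name pre-maintenance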

7.5 Start K3s

After the backup is created, start K3s again:

sudo systemctl start k3s

7.6 Test the Backup

It’s a good practice to periodically test your etcd backups to ensure they are valid and can be restored successfully. You can set up a test environment to restore the etcd backup and verify the cluster’s functionality.

8. Additional Resources

Rancher Official Documentation: Comprehensive documentation for Rancher, including installation, configuration, and usage guides.
Kubernetes Official Documentation: Official documentation for Kubernetes, covering all aspects of cluster management and application deployment.
K3s Documentation: Documentation for K3s, the lightweight Kubernetes distribution from Rancher.
Helm Official Documentation: Helm is a package manager for Kubernetes; its documentation provides insights into application packaging and deployment.
Kubernetes RBAC: Understand Kubernetes Role-Based Access Control for managing user access to resources.
Kubernetes Networking Guide: Learn about Kubernetes networking concepts and best practices.
Velero: An open-source tool to back up and restore Kubernetes resources and volumes.
Prometheus: An open-source monitoring and alerting toolkit for Kubernetes.
Grafana: A popular open-source analytics and monitoring solution that integrates well with Prometheus.
Kubernetes Namespace Guide: Learn how to use Kubernetes namespaces to create isolated environments.
Kubernetes Storage Guide: Understand Kubernetes storage concepts and how to manage persistent data.
Kubernetes Ingress Controllers: Learn about exposing services to the internet using Kubernetes ingress controllers.

Odysseas Mourtzoukos

Mourtzoukos Odysseas is studying to become a software engineer at Harokopio University of Athens. Alongside his studies, he is involved in various projects in game development and web applications. He looks forward to sharing his knowledge and experience with the world.