C7 Multi Node Cluster Deployment (Recommended) #
This guide provides a detailed walkthrough for deploying a multi-node K3s Kubernetes cluster to support Cloud7. The setup includes three master nodes for high availability and three worker nodes for resource-intensive workloads. Additional components such as MariaDB, MongoDB, and a private container registry are included to ensure a comprehensive, scalable environment.
Pre-Requisites #
This guide assumes the following:
- You have SSH access to all nodes and are working in a Linux environment.
- All nodes can communicate within the same private network.
- Each node must run Ubuntu 22.04 as the operating system.
Networking and Firewall Configuration
Ensure the following ports are open to allow proper communication between cluster components:
| Port | Usage | Nodes |
|---|---|---|
| 6443 | Kubernetes API server | Master nodes only |
| 8472 | VXLAN (Flannel networking) | All nodes |
| 2379-2380 | etcd (cluster state management) | Master nodes only |
| 10250-10252 | Kubelet and other components | All nodes |
Domain Requirements
- Two Sub-Domains:
  - Use internal/private domain names for testing environments (e.g., internal.example.com).
  - For production, public sub-domains are recommended if external access is needed.
- Public IP Address:
  - Required if the Cloud7 platform or any application needs to be accessed from the internet.
System Requirements #
Below are the hardware requirements for each node. Ensure that all nodes meet the minimum specifications to guarantee performance and stability.
| Node | CPU (Cores) | RAM (GB) | Disk (GB) |
|---|---|---|---|
| Master Node 1 | 4 | 8 | 80 |
| Master Node 2 | 4 | 8 | 80 |
| Master Node 3 | 4 | 8 | 80 |
| Worker Node 1 | 16 | 32 | 150 |
| Worker Node 2 | 16 | 32 | 150 |
| Worker Node 3 | 16 | 32 | 150 |
| MariaDB Node 1 | 4 | 8 | 120 |
| MariaDB Node 2 (Optional) | 4 | 8 | 120 |
| MariaDB Node 3 (Optional) | 4 | 8 | 120 |
| MongoDB Node | 4 | 8 | 60 |
| Container Registry | 2 | 4 | 100 |
| Load Balancer | 2 | 4 | 50 |
Setup Kubernetes Cluster #
Follow this step-by-step process to establish a Kubernetes cluster using the defined master and worker nodes. This guide uses k3s, a lightweight Kubernetes distribution, and k3sup to simplify installation and joining nodes to the cluster.
1. Install k3sup Tool on Your Local Machine
Begin by installing the k3sup tool, which streamlines the k3s setup:
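A typical installation on a Linux local machine might look like the following (check the k3sup releases page for other platforms and the latest version):

```shell
# Download the k3sup binary and move it onto the PATH
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/

# Confirm the tool is available
k3sup version
```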
2. Install k3s on Master Nodes
Step 1: Setup Master Node 1
On the first master node (IP: 10.82.0.2), run the following command to install k3s:
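A minimal sketch, assuming SSH access as the ubuntu user and that Traefik is disabled because the NGINX Ingress Controller is installed later (both assumptions, adjust for your environment):

```shell
# Bootstrap the first k3s server with embedded etcd (--cluster enables HA)
k3sup install \
  --ip 10.82.0.2 \
  --user ubuntu \
  --cluster \
  --k3s-extra-args '--disable traefik'
```

k3sup writes a kubeconfig file into the current directory; point kubectl at it with `export KUBECONFIG=$PWD/kubeconfig`.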
Step 2: Join Master Node 2
To join the second master node (IP: 10.82.0.3) to the cluster, use:
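A sketch of the join command, under the same SSH-user assumption as above:

```shell
# Join the second node as an additional server (control-plane) member
k3sup join \
  --ip 10.82.0.3 \
  --server-ip 10.82.0.2 \
  --user ubuntu \
  --server \
  --k3s-extra-args '--disable traefik'
```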
Step 3: Join Master Node 3
Repeat the process for the third master node with its specific IP address.
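The same join command applies; the third master's IP is not stated in this guide, so a placeholder is used:

```shell
k3sup join \
  --ip <master3_ip> \
  --server-ip 10.82.0.2 \
  --user ubuntu \
  --server \
  --k3s-extra-args '--disable traefik'
```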
3. Prevent Workloads from Running on Master Nodes
To ensure that workloads are only scheduled on worker nodes, taint each master node:
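One way to do this, assuming the node names reported by `kubectl get nodes` (the names below are placeholders):

```shell
# Repeat for each master; NoSchedule keeps new workloads off the node
kubectl taint nodes <master1_node_name> node-role.kubernetes.io/control-plane=true:NoSchedule
kubectl taint nodes <master2_node_name> node-role.kubernetes.io/control-plane=true:NoSchedule
kubectl taint nodes <master3_node_name> node-role.kubernetes.io/control-plane=true:NoSchedule
```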
4. Add Worker Nodes to the Cluster
Step 1: Add Worker Node 1 (IP: 10.82.0.7)
Use the following command to join the first worker node to the cluster:
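A sketch, again assuming the ubuntu SSH user:

```shell
# Join the worker as an agent (no --server flag)
k3sup join \
  --ip 10.82.0.7 \
  --server-ip 10.82.0.2 \
  --user ubuntu
```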
Step 2: Add Worker Node 2 (IP: 10.82.0.6)
Join the second worker node with this command:
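Under the same assumptions:

```shell
k3sup join \
  --ip 10.82.0.6 \
  --server-ip 10.82.0.2 \
  --user ubuntu
```

Worker Node 3 can be joined the same way by substituting its IP address.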
5. Verify Cluster Setup
Once all nodes have been added, confirm the cluster is operational by listing all nodes:
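For example:

```shell
# All masters and workers should report a Ready status
kubectl get nodes -o wide
```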
Setup Helm #
This section guides you through the installation and configuration of Helm, the package manager for Kubernetes. Helm simplifies the deployment and management of applications, such as Vault and Kafka, on your cluster.
Step 1: Download and Install Helm
Use the following commands to download and install the latest version of Helm:
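The standard Helm 3 install script can be used for this:

```shell
# Fetch and run the official Helm 3 install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```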
- Explanation:
  - curl: Downloads the Helm installation script.
  - chmod 700: Grants the necessary permissions to execute the script.
  - ./get_helm.sh: Executes the script to install Helm.
Step 2: Verify Helm Installation
After installation, confirm that Helm is correctly installed by running:
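```shell
helm version
```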
This command should display the installed Helm version, indicating a successful setup.
Step 3: Configure Helm for Your Cluster
To ensure Helm interacts with your Kubernetes cluster, use:
- Add repositories: Adds the official Helm repository to pull charts.
- Update: Syncs the local cache with available Helm charts.
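As a sketch, using the Bitnami repository (used later in this guide for Kafka) as the example:

```shell
# Register a chart repository and refresh the local chart index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```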
Step 4: Deploy Applications with Helm
You are now ready to deploy applications like Vault or Kafka using Helm charts. For example, deploy Vault with:
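For example (a minimal sketch; the Setup Vault section of this guide covers a customized installation in detail):

```shell
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault
```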
Setup Ingress Controller #
This section covers the deployment of the NGINX Ingress Controller using a static manifest. The NGINX controller will act as the gateway for HTTP(S) traffic, handling external requests and routing them to services within the cluster. We will configure it as the default ingress class, enable SSL passthrough for secure connections, and modify the service type to NodePort to allow external access.
Step 1: Install NGINX Ingress Controller
Use the following command to deploy the NGINX Ingress Controller using a static manifest:
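A sketch, pinned to an example release (check the ingress-nginx releases page for a current version; the baremetal variant exposes the controller via NodePort, matching the setup described above):

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml
```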
- Explanation:
This command pulls the NGINX Ingress Controller manifest from the official Kubernetes repository and applies it to the cluster. The controller will run in the ingress-nginx namespace.
Step 2: Set NGINX as the Default Ingress Class
To configure NGINX as the default ingress class, run:
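This is done by annotating the IngressClass resource created by the manifest:

```shell
kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class="true"
```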
- Explanation:
This ensures that any ingress resource created without specifying a class will use the NGINX Ingress Controller by default.
Step 3: Enable SSL Passthrough
To allow encrypted traffic (HTTPS) to pass directly through to backend services, enable SSL passthrough in the NGINX deployment. Use the following command:
- How to Edit: Add the required argument under the args section in the deployment.
- Explanation:
This step ensures that the ingress controller can forward encrypted HTTPS traffic to backend services without terminating it.
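The edit described above might look like this (the deployment name matches the default ingress-nginx manifest):

```shell
kubectl edit deployment ingress-nginx-controller -n ingress-nginx
# In the editor, add the flag to the container args, e.g.:
#   args:
#     - /nginx-ingress-controller
#     - --enable-ssl-passthrough
```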
Step 4: Verify Deployment
Confirm the successful deployment and configuration of the ingress controller:
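```shell
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
```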
- Ensure all pods are running, and the services are correctly exposed.
Setup Vault (HashiCorp) #
This section details how to install and configure HashiCorp Vault in a Kubernetes cluster using Helm. We will configure Vault to use MySQL as the storage backend, enable High Availability (HA), and modify the service type to NodePort for external access.
Step 1: Add the HashiCorp Helm Repository
Begin by adding the official HashiCorp Helm repository:
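```shell
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
```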
Step 2: Download the Vault Helm Chart
Pull the latest Vault Helm chart and extract it:
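```shell
# --untar extracts the chart into a ./vault directory
helm pull hashicorp/vault --untar
cd vault
```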
Step 3: Configure Vault Using the values.yaml File
Edit the values.yaml to customize Vault’s configuration:
- Enable High Availability (HA):
  - Navigate to line 825 and ensure HA is enabled.
- Enable the Vault UI:
  - Navigate to line 812 and configure the Vault UI.
- Set the MySQL Storage Backend:
  - Add the storage settings under the config: section.
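A sketch of the relevant values.yaml fragments; the MariaDB host and credentials are placeholders, and Vault's mysql storage backend is also compatible with MariaDB:

```yaml
ui:
  enabled: true

server:
  ha:
    enabled: true
    replicas: 3
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address     = "[::]:8200"
      }
      storage "mysql" {
        address    = "<mariadb_host>:3306"
        username   = "<db_user>"
        password   = "<db_password>"
        database   = "vault"
        ha_enabled = "true"
      }
```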
Step 4: Deploy Vault to the c7 Namespace
Create the c7 namespace and install Vault using the modified values.yaml file:
- Note: The dot (.) at the end of the helm install command is mandatory; it specifies the current directory.
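The two steps above can be sketched as:

```shell
kubectl create namespace c7
# Run from inside the extracted chart directory; the trailing dot is the chart path
helm install vault -f values.yaml -n c7 .
```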
Step 5: Initialize Vault
After deploying, initialize Vault to make it operational:
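A sketch, assuming the release was named vault so the first pod is vault-0:

```shell
# Initialize Vault and print the unseal keys and root token
kubectl exec -n c7 vault-0 -- vault operator init

# Then unseal the pod with three of the generated keys, e.g.:
kubectl exec -n c7 vault-0 -- vault operator unseal <unseal_key>
```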
- Note: Save the unseal keys and root token provided during initialization. These are essential for managing and accessing Vault.
Step 6: Change the Service Type to NodePort
Edit the Vault service to expose it via NodePort:
- In the spec section, change type to NodePort.
Verify the service configuration:
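Equivalently, the type can be changed with a patch and then verified (service name assumes the vault release):

```shell
kubectl patch svc vault -n c7 -p '{"spec":{"type":"NodePort"}}'
kubectl get svc vault -n c7
```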
Access the Vault UI, go to Secrets Engines, and enable a secrets engine named c7:

Select the KV type for the secrets engine and enable it.

Create a secret with the path named backend:

Switch to JSON mode and enter the following keys and values:
{
  "encrypt_key": "&m7y#JfwVHkPEjymyjwwa$#bjyxL86GkDg7DQUbM",
  "keycloak_admin_password": "letmein2C7!!",
  "keycloak_admin_username": "admin",
  "keycloak.credentials.secret": "aa8346f4-19ab-44e3-b032-a12ff8b4c725",
  "spring.datasource.password": "letmein2C7!!",
  "spring.datasource.username": "prod"
}
Update the value of "keycloak.credentials.secret" to match your environment, then click Save.

Setup Kafka #
This section provides step-by-step instructions to deploy Apache Kafka in a Kubernetes cluster using the Bitnami Helm chart. To ensure ephemeral data handling, we’ll configure Kafka without persistence by disabling disk storage. The installation will take place within the designated c7 namespace.
Step 1: Add the Bitnami Helm Repository
Start by adding the Bitnami repository to Helm:
Verify that the repository was added successfully:
Update your Helm repositories to ensure you have the latest version of the chart:
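The three steps above can be sketched as:

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list
helm repo update
```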
Step 2: Download and Extract the Kafka Helm Chart
Pull the Kafka Helm chart and extract the contents:
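```shell
# --untar extracts the chart into a ./kafka directory
helm pull bitnami/kafka --untar
cd kafka
```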
Step 3: Modify values.yaml to Disable Persistence
Edit the values.yaml file to disable Kafka’s disk-based persistence:
- Set the following configuration to disable persistence:
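A sketch of the values.yaml fragment; depending on the chart version, the persistence block may sit at the top level or under keys such as controller or broker:

```yaml
persistence:
  enabled: false
```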
This ensures that Kafka only stores data in memory and avoids writing to disk.
Step 4: Deploy Kafka to the c7 Namespace
Use Helm to install Kafka with the modified settings in the c7 namespace:
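Run from inside the extracted chart directory:

```shell
helm install kafka . -f values.yaml -n c7
```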
- Explanation:
  - kafka: The release name of your Kafka deployment.
  - .: The current directory containing the extracted chart files.
  - -f values.yaml: The customized values file with persistence disabled.
  - -n c7: Deploys Kafka in the c7 namespace.
Step 5: Verify the Kafka Deployment
Check if the Kafka pods and services are running properly:
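For example (the label selector assumes the Bitnami chart's standard labels):

```shell
kubectl get pods -n c7 -l app.kubernetes.io/name=kafka
kubectl get svc -n c7
```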
Optional: Configure NodePort or Ingress for Kafka Access
If external access to Kafka is required, modify the service type to NodePort or configure an ingress resource.
To change the service type to NodePort:
Modify the spec section:
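One way to do this, assuming the kafka release name used above:

```shell
# Switch the service from ClusterIP to NodePort for external access
kubectl patch svc kafka -n c7 -p '{"spec":{"type":"NodePort"}}'
```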
Setup Databases (MariaDB & MongoDB) #
Setup Load Balancer (NGINX) #
Implement an NGINX Load Balancer to distribute traffic effectively across services. Ensure that it is configured to handle incoming requests and route them to the appropriate backend services.
sudo apt install -y nginx
hostname -I | awk '{print $1}'  # Command to get the private IP
Copy the configuration below into /etc/nginx/nginx.conf (requires sudo):
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
#include /etc/nginx/sites-enabled/*;
#include /etc/nginx/conf.d/*;

events {}

stream {
    upstream k3s_http_ingress_backend {
        server 10.101.8.15:32301;
    }
    upstream k3s_https_ingress_backend {
        server 10.101.8.15:30683;
    }
    server {
        listen 10.101.8.15:80;
        proxy_pass k3s_http_ingress_backend;
    }
    server {
        listen 10.101.8.15:443;
        proxy_pass k3s_https_ingress_backend;
    }
}
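After saving the file, the configuration can be validated and applied (this assumes the stream module is available in your NGINX build, as it is in the default Ubuntu package):

```shell
sudo nginx -t
sudo systemctl restart nginx
```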
C7 Single Node Deployment #
This guide provides step-by-step instructions to deploy Cloud7 on a standalone node. It covers pre-requisites, system requirements, the deployment process, and the components involved. By following these steps, you can deploy Cloud7 efficiently and begin managing your multi-cloud infrastructure.
Pre-requisites #
Ensure that the following pre-requisites are fulfilled before initiating the deployment:
- VM Specifications
- OS: Ubuntu 20.04 or later
- CPU: 8 vCPUs or higher
- Memory: Minimum 24 GB RAM
- Disk: 300 GB available storage
- Network: Internet connectivity (for package downloads)
- Permissions: Ensure that you have sudo/root access to the VM.
- SSH Access: Verify SSH access to the VM for remote operations.
Components Overview #
The deployment script installs and configures the following essential components:
- Docker: Container engine to run Cloud7 services
- Kubernetes: Orchestrates Cloud7 containers
- Helm: Package manager for Kubernetes
- Ingress Controller: Manages incoming requests to services
- NGINX Load Balancer: Distributes traffic among Cloud7 services
- HashiCorp Vault: Secures passwords and secrets
- Keycloak: Identity and access management
- MariaDB & MongoDB: Databases for Cloud7 data
These components will be installed via an automated script that ensures everything is configured correctly.
System Preparation & Deployment Steps #
Below is a step-by-step walkthrough of deploying Cloud7. Make sure to run these commands in sequence.
Step 1: Connect to the VM
Use the following command to SSH into the VM where Cloud7 will be deployed.
ssh ubuntu@<ip_address>
- Replace <ip_address> with the IP of your VM.
- Use a private key if SSH authentication requires it (e.g., ssh -i <key_file> ubuntu@<ip_address>).
Step 2: Install Unzip Tool
Install unzip to extract downloaded packages. Run the following command:
sudo apt update && sudo apt install unzip -y
Step 3: Install AWS CLI
The AWS CLI is required to download Cloud7 images and deployment scripts from the Wasabi Object Storage.
- Download the AWS CLI installation package:
wget https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
- Extract the downloaded package:
unzip awscli-exe-linux-x86_64.zip
- Install the AWS CLI:
sudo ./aws/install
- (Optional) To install to a custom location, specify the install and bin directories:
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin
- Confirm AWS CLI installation:
aws --version
Step 4: Configure AWS CLI Profile
To access the required files from Wasabi Object Storage, configure an AWS CLI profile.
aws configure
When prompted, enter the following:
- AWS Access Key ID: RERV1WS7Y73ISSPQMMSM
- AWS Secret Access Key: NrVsgBBefJmjYfCnNzZZLufTGKeUiz6JSKYnI4BC
- Default region name: Leave this blank and press Enter.
- Default output format: Leave this blank and press Enter.
Note: Keep your credentials secure. These should be stored in a secure location and not shared.
Step 5: List Files in Object Storage
Verify that the necessary deployment files are present in the Wasabi Object Storage.
aws --endpoint-url https://s3.wasabisys.com s3 ls s3://cloud7-automation --human-readable --summarize --no-verify-ssl
This command lists all available files within the cloud7-automation bucket.
Step 6: Download Deployment Files
Download the Cloud7 deployment files from the Wasabi Object Storage.
aws --endpoint-url https://s3.wasabisys.com s3 cp s3://cloud7-automation cloud7-deployment-1.5 --recursive --no-verify-ssl
- This command will download the complete deployment package into a local directory named cloud7-deployment-1.5.
Step 7: Execute the Deployment Script
Navigate to the downloaded directory and execute the deployment script.
cd cloud7-deployment-1.5
./run-all-script.sh
- The script will install Docker, Kubernetes, Helm, and other Cloud7 components automatically.
- It might take several minutes for the installation to complete, depending on the network speed and system resources.
Step 8: Monitor the Deployment Process
During the deployment, the script will provide logs indicating the status of each component. Ensure that all steps complete successfully.
If there are any errors, consult the log files located within the logs/ directory inside the deployment folder.
Post-Deployment Checks #
Once the installation completes, perform the following checks to confirm that Cloud7 is deployed successfully:
- Check Kubernetes Cluster Status
  Verify that all pods are running: kubectl get pods --all-namespaces
- Access Cloud7 Web Interface
  Open your browser and enter the VM's IP address or domain with the relevant port number (e.g., http://<ip>:<port>).
- Verify Services
  Use the following commands to check if the essential services are up and running:
  - Docker Services: docker ps
  - Kubernetes Services: kubectl get svc
- Access Vault and Keycloak
  - Vault: Confirm the status of HashiCorp Vault using: vault status
  - Keycloak: Visit http://<ip>:<keycloak_port> to manage users and authentication settings.
Troubleshooting Tips #
- Port Conflicts: If any service fails to start, ensure that the required ports (like 80, 443, etc.) are not being used by other applications.
- Firewall Rules: Ensure that necessary ports are open to allow access to Cloud7 services.
- Resource Constraints: If services fail due to resource issues, try increasing the VM’s CPU and memory.
- Kubernetes Debugging: Use the following to view logs of any failed pod:
kubectl logs <pod_name> -n <namespace>
Recommended Ports for Cloud7 Internet Access #
- API Gateway (Frontend Communication)
  - Port: 443 (HTTPS)
  - Purpose: Secure access to Cloud7's web frontend, API, and admin portal.
  - Note: Block port 80 (HTTP) or redirect it to 443 to enforce encrypted communication.
- Keycloak (Identity Layer for Authentication & SSO)
  - Port: 443 (HTTPS)
  - Purpose: User authentication and identity management via Keycloak.
- Help Desk Service (Support and Ticketing System)
  - Port: 443 (HTTPS)
  - Purpose: Allow customers to access the Cloud7 Help Desk system.
- HashiCorp Vault (Secrets Management)
  - Port: 8200 (HTTPS)
  - Purpose: Access the Vault API remotely (if required for key management via secure channels).
- Payment Gateway Integration
  - Port: 443 (HTTPS)
  - Purpose: Secure payment processing and communication with external payment services.
- Kubernetes Node Communication (Optional for Admins)
  - Port: 6443 (HTTPS)
  - Purpose: Access to the Kubernetes control plane for managing Cloud7's Kubernetes deployment.
  - Note: This should be restricted to specific IPs or VPN connections only.
Backend and Database Ports (Restricted to Internal Communication) #
- Backend Service Communication: 8080 (ensure internal access only; not open to the internet)
- Billing Cron Service: Managed internally through Kubernetes cron jobs and the backend. No external port required.
Firewall Best Practices for Cloud7 #
- Allow traffic only on necessary ports mentioned above.
- Restrict Vault (8200) access to specific IP addresses (e.g., Cloud7 admins or automation scripts).
- Deny inbound traffic on unused or unnecessary ports.
- Use IP whitelisting or VPN tunnels for privileged services like Kubernetes (6443) or Vault.
- Enable TLS/SSL certificates on all services exposed over the internet to ensure secure communication.
