This repository is designed for bare-metal nodes with very specific goals and requirements (see README). While some components may work under different circumstances, the entire stack is unlikely to function in a drastically different environment. Additionally, please note that this project is not primarily intended for use by anyone besides its owner. A fully functional deployment is not guaranteed.
- Two or more physical servers with Ubuntu 22.04 installed. This project was tested on x86-64 architecture.
- Passwordless SSH access to all servers.
- A user account with `sudo` privileges.
- The control node (for Ansible, i.e. your local machine) must have Ansible 8.0+ (ansible-core 2.15+).
- The designated control-plane node (for Kubernetes, i.e. one of your servers) must have an empty storage device available, which will be used to provide persistent storage for Kubernetes.
- Networking between the physical servers. A subnet for internal communication is recommended but not required.
- It is recommended that all managed nodes disable firewalls and swap. See K3s Requirements for more information.
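The passwordless SSH requirement above can be satisfied with `ssh-keygen` and `ssh-copy-id`; the user name and host below are placeholders for your own:

```shell
# Generate a key pair if you don't have one yet (no passphrase here for automation).
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Copy the public key to each managed node (placeholder user/host).
ssh-copy-id ubuntu@node1.example.com

# Verify that login now works without a password prompt.
ssh ubuntu@node1.example.com hostname
```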
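To find a suitable empty storage device on the control-plane node, `lsblk` shows attached disks and any existing partitions or mountpoints; the device name `/dev/sdb` below is an example only:

```shell
# List block devices; a disk with no partitions, filesystem, or mountpoint is a candidate.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# Double-check that the chosen device (example: /dev/sdb) carries no filesystem signature.
sudo blkid /dev/sdb || echo "no filesystem signature found"
```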
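On Ubuntu, disabling the firewall and swap as recommended above typically looks like the following sketch (adapt it to your setup):

```shell
# Disable the firewall (K3s manages its own rules; see the K3s requirements docs).
sudo ufw disable

# Turn off swap for the current boot...
sudo swapoff -a

# ...and keep it off across reboots by commenting out swap entries in /etc/fstab.
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
```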
This stack was developed and tested with the following software. Older versions are untested, but are likely to work.
- Ansible Core: Version 8.0+.
- Kubectl: Version v1.30.2+.
- Helm: Version v3.16.2+, initialized.
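You can sanity-check the installed versions on the control node with the tools' standard version commands (assuming they are on your PATH):

```shell
ansible --version         # look for ansible-core 2.15+
kubectl version --client  # look for v1.30.2+
helm version              # look for v3.16.2+
```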
```bash
git clone --recurse-submodules -j8 git@github.com:casparwackerle/PowerStack.git
cp ansible/configs/inventory_example.yml ansible/configs/inventory.yml
vi ansible/configs/inventory.yml
```

Edit the Ansible inventory file to match your desired configuration, specifically:
- Internal and external IP addresses of each node. These may be the same if you are not using an internal network.
- Ansible user for SSH server access.
- Rancher hostname, which will expose the Rancher Kubernetes management platform.
- NFS network, path, and disk to be used. It is assumed that the specified disk belongs to the control node to enable the dynamic shutoff of worker nodes.
- Size of Kubernetes PV and PVC, ensuring it fits within the NFS disk capacity.
Disclaimer: The selected disk for NFS will be reformatted, resulting in the loss of any existing data.
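As an orientation aid, a node entry in the inventory might look roughly like the sketch below. This is illustrative only: the actual keys and layout are defined in `ansible/configs/inventory_example.yml`, and every name and value shown here is a placeholder.

```yaml
# Hypothetical sketch; consult inventory_example.yml for the real schema.
all:
  hosts:
    node1:
      internal_ip: 10.0.0.11      # placeholder; may equal external_ip
      external_ip: 203.0.113.11   # placeholder
  vars:
    ansible_user: ubuntu          # SSH user placeholder
    rancher_hostname: rancher.example.com
    nfs_disk: /dev/sdb            # disk that will be REFORMATTED
```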
```bash
cp ansible/configs/vault-template.yml ansible/configs/vault.yml
vi ansible/configs/vault.yml
```

Edit the vault file to replace the placeholder tokens and passwords with your own. You can generate secure tokens using:
```bash
pwgen -s 64 1
# OR
openssl rand -base64 64
```

After updating the tokens, encrypt the vault file:
```bash
ansible-vault encrypt ansible/configs/vault.yml
```

⚠ DISCLAIMER: This process will reformat several disks and may result in data loss. Proceed with caution. At the very least, double-check the Ansible inventory file.

Run the deploy-all script to initiate the installation process:
```bash
. scripts/deploy_all.sh
```

Disclaimer: You will be prompted multiple times for the Ansible Vault password during the installation process.
NOTE: In the event of failure, logs can be found in the /logs directory.
After the cluster is successfully deployed, the kubeconfig file will be copied to your local machine at `~/.kube/config.new` with the `powerstack` context. Assuming you have kubectl installed, follow these steps:
- Copy the kubeconfig file:

```bash
cp ~/.kube/config.new ~/.kube/config
```

- Switch to the `powerstack` context:

```bash
kubectl config use-context powerstack
```

- Verify cluster access:

```bash
kubectl get nodes -o wide
```

Ensure that kubectl is using the correct configuration file:

```bash
export KUBECONFIG=~/.kube/config
```

To access Rancher, update your local DNS or hosts file:

```bash
echo "<control_node_external_IP> <rancher_hostname>" | sudo tee -a /etc/hosts
```

When accessing the Rancher interface for the first time, you will be asked for the bootstrap password, which you defined and encrypted in the Ansible vault.
- This installation process assumes familiarity with basic Linux commands and networking.
- Use a testing environment to experiment before deploying on production servers.
NOTE: All Helm charts are installed as root. However, kubectl is configured for the user `ubuntu`, which means Helm cannot be used from a remote machine. Log into the target machine, then run Helm with sudo, e.g. `sudo helm repo list`.