Create a cluster
You can create a local or cloud cluster to deploy a Besu network using Kubernetes.
Prerequisites
- Clone the Quorum-Kubernetes repository
- Install kubectl
- Install Helm 3
- For cloud clusters, install the cloud-specific CLI: the AWS CLI and eksctl for AWS EKS clusters, or the Azure CLI for Azure AKS clusters
Local clusters
Use one of several options to create a local cluster. Select one listed below, or another that you're comfortable with.
Minikube
Minikube is one of the most popular options to spin up a local Kubernetes cluster for development. You can install a version based on your architecture.
We recommend at least 2 CPUs and 16GB of RAM.
To start the cluster, run the following command:
```bash
minikube start --cpus 2 --memory 16384 --cni auto
```
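Once Minikube is up, a quick sanity check (assuming kubectl is installed and pointed at the new cluster) confirms the node is ready before you deploy any charts:

```bash
minikube status
kubectl get nodes   # the single node should report STATUS "Ready"
```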
kind
kind (Kubernetes in Docker) is a lightweight tool for running local Kubernetes clusters. The installation is similar to Minikube.
To start the cluster, run the following command:
```bash
kind create cluster
```
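kind also accepts a declarative config file if you want more than the default single node. A minimal sketch, where the cluster name besu, the file name, and the one-worker topology are illustrative choices:

```bash
# write a two-node cluster config (hypothetical file name)
cat > kind-besu.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF
kind create cluster --name besu --config kind-besu.yaml
```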
Rancher
Rancher Desktop is a lightweight open source desktop application for Mac, Windows, and Linux. It provides Kubernetes and container management, and allows you to choose the version of Kubernetes to run.
It can build, push, pull, and run container images. Built container images can be run without needing a registry.
Rancher Desktop does not support the official Docker CLI; instead it uses nerdctl, a Docker-CLI-compatible tool for containerd that is installed automatically with Rancher Desktop.
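In practice this means you run the familiar Docker verbs through nerdctl; the image name below is hypothetical:

```bash
nerdctl build -t my-app:dev .            # build an image from a local Dockerfile
nerdctl run --rm -p 8080:8080 my-app:dev # run it locally, no registry required
nerdctl images                           # list locally built images
```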
For Windows, you must install Windows Subsystem for Linux (WSL) to install Rancher Desktop.
Refer to the official Rancher Desktop documentation for system requirements and installation instructions.
Cloud clusters
AWS EKS
Amazon Elastic Kubernetes Service (EKS) is one of the most popular platforms for deploying Hyperledger Besu.
To create a cluster in AWS, you must install the AWS CLI and eksctl.
The template comprises the base infrastructure used to build the cluster and other resources in AWS. We also use some AWS-native services with the cluster for performance and best practices. These include:
- Dynamic storage classes backed by AWS EBS. The volume claims are fixed sizes that can be increased via Helm updates as you grow, without re-provisioning the underlying storage class (a sketch of such a class follows this list).
- CNI networking mode for EKS. By default, EKS clusters use kubenet to create a virtual network and subnet. Nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP.

  Note: This approach reduces the number of IP addresses that you must reserve in your network space for pods, but constrains what can connect to the nodes from outside the cluster (for example, on-premise nodes or those on another cloud provider).
- AWS Container Networking Interface (CNI) provides each pod with an IP address from the subnet, so pods can be accessed directly. The IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports, and the equivalent number of IP addresses per node is reserved up front for that node. This approach requires more planning and can lead to IP address exhaustion as your application demands grow, but it makes it easier for external nodes to connect to your cluster.
EKS clusters must not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range.
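For context, a dynamic storage class backed by EBS might look like the following sketch. This assumes the EBS CSI driver (installed in the provisioning steps below) is present; the class name is hypothetical, and the Helm charts in this repository provision their own classes:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: besu-gp3                  # hypothetical name for illustration
provisioner: ebs.csi.aws.com      # the EBS CSI driver
parameters:
  type: gp3                       # cheaper and faster than gp2
allowVolumeExpansion: true        # lets PVC sizes grow via Helm updates
volumeBindingMode: WaitForFirstConsumer
EOF
```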
To provision the cluster:
1. Update `cluster.yml`.

2. Deploy the template:

```bash
eksctl create cluster -f ./templates/cluster.yml
```
3. Your `.kube/config` should be connected to the cluster automatically, but if not, run the commands below, replacing `AWS_REGION` and `CLUSTER_NAME` with details specific to your deployment:

```bash
aws sts get-caller-identity
aws eks --region AWS_REGION update-kubeconfig --name CLUSTER_NAME
```

4. After the deployment completes, provision the EBS drivers for the volumes. While it is possible to use the in-tree `aws-ebs` driver that's natively supported by Kubernetes, it is no longer being updated and does not support newer EBS features such as the cheaper and better gp3 volumes. The `cluster.yml` file (from the steps above) included in this folder automatically deploys the cluster with the EBS IAM policies, but you need to install the EBS CSI drivers yourself. You can do this through the AWS Management Console for simplicity, or via the CLI commands below. Replace `CLUSTER_NAME`, `AWS_REGION`, and `AWS_ACCOUNT` with details specific to your deployment:

```bash
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster CLUSTER_NAME --region AWS_REGION --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve --role-only --role-name AmazonEKS_EBS_CSI_DriverRole
eksctl create addon --name aws-ebs-csi-driver --cluster CLUSTER_NAME --region AWS_REGION --service-account-role-arn arn:aws:iam::AWS_ACCOUNT:role/AmazonEKS_EBS_CSI_DriverRole --force
```

5. Once the deployment completes, provision the Secrets Manager IAM and CSI driver. Use `besu` (or equivalent) for `NAMESPACE`, and replace `CLUSTER_NAME`, `AWS_REGION`, and `AWS_ACCOUNT` with details specific to your deployment:

```bash
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install --namespace kube-system --create-namespace csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
POLICY_ARN=$(aws --region AWS_REGION --query Policy.Arn --output text iam create-policy --policy-name quorum-node-secrets-mgr-policy --policy-document '{
  "Version": "2012-10-17",
  "Statement": [ {
    "Effect": "Allow",
    "Action": ["secretsmanager:CreateSecret","secretsmanager:UpdateSecret","secretsmanager:DescribeSecret","secretsmanager:GetSecretValue","secretsmanager:PutSecretValue","secretsmanager:ReplicateSecretToRegions","secretsmanager:TagResource"],
    "Resource": ["arn:aws:secretsmanager:AWS_REGION:AWS_ACCOUNT:secret:besu-node-*"]
  } ]
}')

# If you have deployed the above policy before, you can acquire its ARN:
POLICY_ARN=$(aws iam list-policies --scope Local \
  --query 'Policies[?PolicyName==`quorum-node-secrets-mgr-policy`].Arn' \
  --output text)

eksctl create iamserviceaccount --name quorum-node-secrets-sa --namespace NAMESPACE --region=AWS_REGION --cluster CLUSTER_NAME --attach-policy-arn "$POLICY_ARN" --approve --override-existing-serviceaccounts
```

Important: The above command creates a service account called `quorum-node-secrets-sa`, which is preconfigured in the Helm charts' override `values.yml` files for ease of use. A sketch of how the driver consumes these secrets follows these steps.

6. Optionally, deploy the Kubernetes dashboard.
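For reference, workloads consume Secrets Manager entries through the Secrets Store CSI driver via a SecretProviderClass object. The following is a minimal sketch for the AWS provider; the metadata name and the secret name are hypothetical (the Besu Helm charts define their own), shown only to illustrate the shape of the resource:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: besu-node-secrets          # hypothetical name for illustration
  namespace: besu
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "besu-node-example-key"   # hypothetical Secrets Manager secret
        objectType: "secretsmanager"
EOF
```

A pod then mounts this class through a csi volume (driver secrets-store.csi.k8s.io) while running as the quorum-node-secrets-sa service account, so the AWS provider can authenticate.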
You can now use your cluster and deploy Helm charts to it.
Azure Kubernetes Service
Azure Kubernetes Service (AKS) is another popular cloud platform that you can use to deploy Besu. To create a cluster in Azure, you must install the Azure CLI and have admin rights on your Azure subscription to enable some preview features on AKS.
The template comprises the base infrastructure used to build the cluster and other resources in Azure. We also make use of Azure-native services and features after the cluster is created. These include:
- Dynamic storage classes backed by Azure Files. The volume claims are fixed sizes that can be increased via Helm updates as you grow, without re-provisioning the underlying storage class.
- CNI networking mode for AKS. By default, AKS clusters use kubenet to create a virtual network and subnet. Nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP.

  Note: This approach reduces the number of IP addresses you must reserve in your network space for pods, but constrains what can connect to the nodes from outside the cluster (for example, on-premise nodes or those on another cloud provider).
- AKS Container Networking Interface (CNI) provides each pod with an IP address from the subnet, so pods can be accessed directly. These IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports, and the equivalent number of IP addresses per node is reserved up front for that node. This approach requires more planning and can lead to IP address exhaustion as your application demands grow, but it makes it easier for external nodes to connect to your cluster (you can check which mode a cluster uses with the query after this list).
Do not create more than one AKS cluster in the same subnet. AKS clusters may not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range.
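If you're unsure which networking mode an existing cluster uses, you can query it with the Azure CLI; `AKS_RESOURCE_GROUP` and `AKS_CLUSTER_NAME` are placeholders for your own values:

```bash
# returns "azure" for CNI mode, or "kubenet" for the default
az aks show --resource-group AKS_RESOURCE_GROUP --name AKS_CLUSTER_NAME \
  --query networkProfile.networkPlugin -o tsv
```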
To provision the cluster:
1. Enable the preview features that allow you to use AKS with CNI and a managed identity to authenticate and run cluster operations with other services. We also enable AAD pod identities, which use the managed identity. This is in preview, so you must enable this feature by registering the `EnablePodIdentityPreview` feature:

```bash
az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
```
This takes a little while; you can check on progress by running:

```bash
az feature list --namespace Microsoft.ContainerService -o table
```
Then install or update your local Azure CLI with the preview extension:

```bash
az extension add --name aks-preview
az extension update --name aks-preview
```

2. Create a resource group if you don't already have one:
```bash
az group create --name BesuGroup --location "East US"
```
3. Deploy the template:

   - Navigate to the Azure portal and select + Create a resource in the upper left corner.
   - Search for `Template deployment (deploy using custom templates)` and select Create.
   - Select Build your own template in the editor.
   - Remove the contents (JSON) in the editor and paste in the contents of `azuredeploy.json`.
   - Select Save.
   - Input provisioning parameters in the displayed user interface.
4. Provision the drivers: run the bootstrap script. Use `besu` for `AKS_NAMESPACE`, and update `AKS_RESOURCE_GROUP`, `AKS_CLUSTER_NAME`, and `AKS_MANAGED_IDENTITY` in the command below to match your settings and the resources deployed in step 3:

```bash
./scripts/bootstrap.sh "AKS_RESOURCE_GROUP" "AKS_CLUSTER_NAME" "AKS_MANAGED_IDENTITY" "AKS_NAMESPACE"
```
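If the bootstrap script hasn't already configured your kubeconfig, you can fetch credentials and verify connectivity yourself; the placeholders match those used above:

```bash
az aks get-credentials --resource-group AKS_RESOURCE_GROUP --name AKS_CLUSTER_NAME
kubectl get nodes   # nodes should report STATUS "Ready"
```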
You can now use your cluster and deploy Helm charts to it.