*Requirements*
- Extend Kubespray with Ansible roles to set up a fully functioning GlusterFS cluster, including Heketi, on a configurable subset of cluster nodes,
- Ansible variables must be provided to configure the important aspects of the deployment (e.g. the block device on each cluster node used by GlusterFS as the storage device),
- Include functionality to set up a GlusterFS storage class in the deployed Kubernetes cluster,
- Ansible roles must support at least CentOS 7,
- Ansible roles must not break an existing deployment if run against it,
- GlusterFS deployment should follow Kubernetes and GlusterFS best practices and must be production ready,
- Ansible roles should follow Ansible best practices and Kubespray conventions,
- Ansible roles can be based on existing contrib roles but need to satisfy all criteria laid out here,
- Quick response time regarding communication with us (within 24 h)
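To make the "configurable aspects" requirement concrete, the deployment-specific settings could be exposed as group variables for the GlusterFS node group. The variable names and the `gfs-cluster` group below are illustrative assumptions, not an existing Kubespray interface:

```yaml
# inventory/sample/group_vars/gfs-cluster.yml
# Illustrative sketch -- variable names are hypothetical, not part of Kubespray.

# Block device on each GlusterFS node used as the storage device.
glusterfs_daemon_device: /dev/vdb

# Name of the StorageClass to create in the Kubernetes cluster.
glusterfs_storage_class_name: glusterfs

# Gluster volume replication factor; 3 matches the 3-node acceptance cluster.
glusterfs_replica_count: 3
```

The subset of nodes running GlusterFS would then be selected by inventory group membership (e.g. an inventory group such as `[gfs-cluster]` listing the chosen hosts).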
*Acceptance criteria*
The base for the following operations is a VM cluster set up using Vagrant and the Kubespray Vagrantfile with the following parameters:
$num_instances = 3
$os = 'centos'
$network_plugin = 'calico'
$kube_node_instances_with_disks = true
$kube_node_instances_with_disks_number = 1
$vm_memory = 6144
$vm_cpus = 2
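To my understanding of Kubespray conventions, these parameters are placed in a `vagrant/config.rb` override file that the Kubespray Vagrantfile loads before `vagrant up`; the file path is an assumption worth verifying against the Kubespray version in use:

```ruby
# vagrant/config.rb -- overrides read by the Kubespray Vagrantfile
# (the exact path is an assumption based on Kubespray conventions)
$num_instances = 3
$os = "centos"
$network_plugin = "calico"
$kube_node_instances_with_disks = true
$kube_node_instances_with_disks_number = 1
$vm_memory = 6144
$vm_cpus = 2
```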
Then:
1. Run Kubespray to provision Kubernetes including GlusterFS on each VM; the additional VM disk is configured as the GlusterFS storage device
2. Create a persistent volume claim in Kubernetes
3. Create at least 2 different pods mounting the created persistent volume in read-write mode
4. Create some files in the mounted volume in one of the pods
5. Soft reboot the cluster node VMs
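The claim and pod steps could be exercised with manifests along these lines; the StorageClass name `glusterfs` and the `busybox` image are assumptions for illustration, not something the roles are known to create under exactly these names:

```yaml
# pvc-and-pod.yaml -- illustrative sketch for the PVC and pod steps.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gfs-claim
spec:
  storageClassName: glusterfs    # assumed name of the created StorageClass
  accessModes:
    - ReadWriteMany              # shared read-write across both pods
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: gfs-pod-1
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /mnt/gfs
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: gfs-claim
```

A second pod, identical apart from its name, would mount the same claim. Files for the file-creation step could then be written via `kubectl exec` in one pod (e.g. writing under `/mnt/gfs`) and read back from the other.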
Expected result:
Kubernetes, including GlusterFS, should be fully functional after the VMs reboot
Created files (step 4 above) are accessible and readable in both pods
Changes done to the files in one pod are visible in the other pod