To achieve this, the K3s developers removed many extra drivers that don't need to be part of the core and are easily replaced with add-ons. K3s is designed as a single binary of less than 40MB that completely implements the Kubernetes API.

For persistent storage, K3s ships with a default Local Path Provisioner that allows creating a PersistentVolumeClaim backed by host-based storage. This means the volume uses storage on the host where the pod is located.

For distributed storage, you can use Ceph Block Device images with Kubernetes v1.13 and later through ceph-csi, which dynamically provisions RBD images to back Kubernetes volumes and maps these RBD images as block devices (optionally mounting a file system contained within the image) on worker nodes running pods that reference an RBD-backed volume. In this stack, ceph-csi sits between the Kubernetes storage layer and the Ceph cluster.

**Important:** ceph-csi uses the RBD kernel modules by default, which may not support all Ceph CRUSH tunables or RBD image features.

To use Ceph Block Devices with Kubernetes v1.13 and higher, you must install and configure ceph-csi within your Kubernetes environment, and create a Ceph pool to hold the RBD images. Then create a specification for a PersistentVolumeClaim whose storageClassName references the StorageClass you set up for ceph-csi. Because the StorageClass is not included in the chart, it needs to be created separately.

When installing k3s, I used the following exec argument: `INSTALL_K3S_EXEC="server --write-kubeconfig-mode 644 --cluster-init --token secret"`. After the node is Ready and all pods are Running, deploy the service.

KubeKey add-on configurations can also install Ceph CSI RBD via Helm charts. For details about compatibility, see the Ceph CSI Support Matrix.
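As a sketch of what a ceph-csi StorageClass might look like (the `clusterID`, pool name, and secret names below are placeholders, not values from this article, and must match your own cluster):

```yaml
# Hypothetical ceph-csi RBD StorageClass; clusterID, pool, and
# secret names are placeholders for your environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
```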
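A PersistentVolumeClaim against that StorageClass could then look like the following (the claim name and size are illustrative; `storageClassName` must match whatever StorageClass you created):

```yaml
# Hypothetical PVC backed by a ceph-csi StorageClass named csi-rbd-sc.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```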
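For comparison, a claim against K3s's built-in Local Path Provisioner uses the `local-path` StorageClass; the volume then lives on the node's local disk (the claim name and size here are illustrative):

```yaml
# Hypothetical PVC using K3s's bundled local-path provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-path
```

Note that such a volume is tied to one host, so a pod using it cannot be rescheduled to another node without losing access to its data.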
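A KubeKey add-on entry for installing the Ceph CSI RBD Helm chart might be sketched as follows (the add-on name, namespace, chart repository, and values file are assumptions for illustration, not taken from this article):

```yaml
# Hypothetical KubeKey add-on fragment; names and repo URL
# should be checked against your KubeKey and chart versions.
addons:
- name: ceph-csi-rbd
  namespace: ceph-csi-rbd
  sources:
    chart:
      name: ceph-csi-rbd
      repo: https://ceph.github.io/csi-charts
      valuesFile: ceph-csi-rbd.yaml
```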