Kubernetes probe
With the proliferation of custom resources and operators, especially in the case of stateful applications, the steady state is manifested as status parameters or flags within Kubernetes resources. K8s Probe addresses verification of the desired resource state by allowing users to define the Kubernetes GVR (group-version-resource) with appropriate filters (field selectors or label selectors). The experiment makes use of the Kubernetes Dynamic Client to achieve this. The probe supports the following CRUD operations:
- create: Creates a Kubernetes resource based on the data provided inside the probe.k8sProbe/inputs.data field.
- delete: Deletes the matching Kubernetes resources via GVR and filters (field selectors/label selectors).
- present: Checks for the presence of a Kubernetes resource based on GVR and filters (field selectors or label selectors).
- absent: Checks for the absence of a Kubernetes resource based on GVR and filters (field selectors or label selectors).
The Kubernetes probe is fully declarative in the way it is conceived.
Probe definition
You can define the probes at the .spec.experiments[].spec.probe path inside the chaos engine.
kind: Workflow
apiVersion: argoproj.io/v1alpha1
spec:
  templates:
    - inputs:
        artifacts:
          - raw:
              data: |
                apiVersion: litmuschaos.io/v1alpha1
                kind: ChaosEngine
                spec:
                  experiments:
                    - spec:
                        probe:
                          ####################################
                          Probes are defined here
                          ####################################
Schema
Listed below is the probe schema for the Kubernetes probe, with properties shared across all the probes and properties unique to the Kubernetes probe.
| Field | Description | Type | Range | Notes |
| ----- | ----------- | ---- | ----- | ----- |
| group | Flag to hold the group of the Kubernetes resource for the k8sProbe | Optional | N/A (type: string) | The group contains the group of the Kubernetes resource on which the k8sProbe performs the specified operation. |
| version | Flag to hold the apiVersion of the Kubernetes resource for the k8sProbe | Mandatory | N/A (type: string) | The version contains the apiVersion of the Kubernetes resource on which the k8sProbe performs the specified operation. |
| resource | Flag to hold the Kubernetes resource name for the k8sProbe | Mandatory | N/A (type: string) | The resource contains the Kubernetes resource name on which the k8sProbe performs the specified operation. |
| namespace | Flag to hold the namespace of the Kubernetes resource for the k8sProbe | Optional | N/A (type: string) | The namespace contains the namespace of the Kubernetes resource on which the k8sProbe performs the specified operation. |
| fieldSelector | Flag to hold the fieldSelectors of the Kubernetes resource for the k8sProbe | Optional | N/A (type: string) | The fieldSelector contains the field selectors used to select the Kubernetes resources on which the k8sProbe performs the specified operation. |
| labelSelector | Flag to hold the labelSelectors of the Kubernetes resource for the k8sProbe | Optional | N/A (type: string) | The labelSelector contains the label selectors used to select the Kubernetes resources on which the k8sProbe performs the specified operation. |
| operation | Flag to hold the operation type for the k8sProbe | Mandatory | N/A (type: string) | The operation contains the operation to be applied on the Kubernetes resource as part of the k8sProbe. It supports four operations: create, delete, present, and absent. |
| resourceNames | Flag to hold the resourceNames of the Kubernetes resource | Optional | N/A (type: string) | The resourceNames contains a comma-separated list of specific resource names on which the k8sProbe performs the specified operation. |
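When the resources to be checked are already known by name, the resourceNames field can be used instead of the selectors. Below is a minimal sketch, assuming resourceNames accepts a comma-separated list of names; the probe name and PVC names are illustrative only:

probe:
  - name: "check-named-pvcs" # hypothetical probe name, for illustration only
    type: "k8sProbe"
    k8sProbe/inputs:
      group: ""
      version: "v1"
      resource: "persistentvolumeclaims"
      namespace: "default"
      # assumption: comma-separated resource names; selectors are then not required
      resourceNames: "percona-mysql-claim,percona-data-claim"
      operation: "present"
    mode: "EOT"
    runProperties:
      probeTimeout: 5s
      interval: 2s
      attempt: 1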
Run properties
| Field | Description | Type | Range | Notes |
| ----- | ----------- | ---- | ----- | ----- |
| probeTimeout | Flag to hold the timeout of the probe | Mandatory | N/A (type: string) | The probeTimeout represents the time limit for the probe to execute the specified check and return the expected data. |
| attempt | Flag to hold the attempt of the probe | Mandatory | N/A (type: integer) | The attempt contains the number of times a check is re-run upon failure before declaring the probe status as failed. |
| interval | Flag to hold the interval of the probe | Mandatory | N/A (type: string) | The interval contains the time the probe waits between subsequent retries. |
| probePollingInterval | Flag to hold the polling interval for the probes (applicable for all modes) | Optional | N/A (type: string) | The probePollingInterval contains the time interval for which the Continuous and OnChaos probes sleep after each iteration. |
| initialDelaySeconds | Flag to hold the initial delay interval for the probes | Optional | N/A (type: integer) | The initialDelaySeconds represents the initial waiting time interval for the probes. |
| stopOnFailure | Flag to stop or continue the experiment on probe failure | Optional | N/A (type: boolean) | The stopOnFailure can be set to true/false to stop or continue the experiment execution after the probe fails. |
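For illustration, a runProperties block that sets the optional fields alongside the mandatory ones might look like the sketch below; the values are arbitrary and only meant to show the expected types:

runProperties:
  probeTimeout: 5s           # time limit for a single check
  interval: 2s               # wait between retries
  attempt: 2                 # number of retries on failure
  probePollingInterval: 2s   # sleep between iterations (Continuous/OnChaos modes)
  initialDelaySeconds: 5     # initial wait before the first check
  stopOnFailure: true        # abort the experiment run if the probe fails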
Definition
probe:
  - name: "check-app-status"
    type: "k8sProbe"
    k8sProbe/inputs:
      group: ""
      version: "v1"
      resource: "pods"
      namespace: "default"
      fieldSelector: "status.phase=Running"
      labelSelector: "app=nginx"
      operation: "present" # it can be present, absent, create, delete
    mode: "EOT"
    runProperties:
      probeTimeout: 5s
      interval: 2s
      attempt: 1
Create operation
It creates the Kubernetes resources based on the data specified in the probe.k8sProbe/inputs.data field.
Use the following example to tune this:
# create the given resource provided inside data field
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete
      spec:
        probe:
          - name: "create-percona-pvc"
            type: "k8sProbe"
            k8sProbe/inputs:
              # group of the resource
              group: ""
              # version of the resource
              version: "v1"
              # name of the resource
              resource: "persistentvolumeclaims"
              # namespace where the instance of resource should be created
              namespace: "default"
              # type of operation
              # supports: create, delete, present, absent
              operation: "create"
              # contains manifest, which can be used to create the resource
              data: |
                kind: PersistentVolumeClaim
                apiVersion: v1
                metadata:
                  name: percona-mysql-claim
                  labels:
                    target: percona
                spec:
                  storageClassName: standard
                  accessModes:
                    - ReadWriteOnce
                  resources:
                    requests:
                      storage: 100Mi
            mode: "SOT"
            runProperties:
              probeTimeout: 5s
              interval: 2s
              attempt: 1
Delete operation
It deletes matching Kubernetes resources via GVR and filters (field selectors or label selectors) provided at probe.k8sProbe/inputs.
Use the following example to tune this:
# delete the resource matched with the given inputs
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete
      spec:
        probe:
          - name: "delete-percona-pvc"
            type: "k8sProbe"
            k8sProbe/inputs:
              # group of the resource
              group: ""
              # version of the resource
              version: "v1"
              # name of the resource
              resource: "persistentvolumeclaims"
              # namespace of the instance, which needs to be deleted
              namespace: "default"
              # label selectors for the k8s resource, which needs to be deleted
              labelSelector: "openebs.io/target-affinity=percona"
              # field selector for the k8s resource, which needs to be deleted
              fieldSelector: ""
              # type of operation
              # supports: create, delete, present, absent
              operation: "delete"
            mode: "EOT"
            runProperties:
              probeTimeout: 5s
              interval: 2s
              attempt: 1
Present operation
It checks for the presence of Kubernetes resources based on GVR and filters (field selectors or label selectors) provided at probe.k8sProbe/inputs.
Use the following example to tune this:
# verify the existence of the resource matched with the given inputs inside the cluster
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete
      spec:
        probe:
          - name: "check-percona-pvc-presence"
            type: "k8sProbe"
            k8sProbe/inputs:
              # group of the resource
              group: ""
              # version of the resource
              version: "v1"
              # name of the resource
              resource: "persistentvolumeclaims"
              # namespace where the instance of the resource should be present
              namespace: "default"
              # label selectors for the k8s resource
              labelSelector: "openebs.io/target-affinity=percona"
              # field selector for the k8s resource
              fieldSelector: ""
              # type of operation
              # supports: create, delete, present, absent
              operation: "present"
            mode: "SOT"
            runProperties:
              probeTimeout: 5s
              interval: 2s
              attempt: 1
Absent operation
It checks for the absence of Kubernetes resources based on GVR and filters (field selectors or label selectors) provided at probe.k8sProbe/inputs.
Use the following example to tune this:
# verify that no resource matching the given inputs is present in the cluster
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  appinfo:
    appns: "default"
    applabel: "app=nginx"
    appkind: "deployment"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete
      spec:
        probe:
          - name: "check-percona-pvc-absence"
            type: "k8sProbe"
            k8sProbe/inputs:
              # group of the resource
              group: ""
              # version of the resource
              version: "v1"
              # name of the resource
              resource: "persistentvolumeclaims"
              # namespace where the instance of the resource should be absent
              namespace: "default"
              # label selectors for the k8s resource
              labelSelector: "openebs.io/target-affinity=percona"
              # field selector for the k8s resource
              fieldSelector: ""
              # type of operation
              # supports: create, delete, present, absent
              operation: "absent"
            mode: "EOT"
            runProperties:
              probeTimeout: 5s
              interval: 2s
              attempt: 1