Configure Cluster Monitoring with the Operator
HTTPS Connections
Use the following instructions to configure monitoring with HTTPS connections.
Create an SSL Secret
Create a Secret containing the SSL certificates that will be used for HTTPS connections. The Secret must be named `<cluster-name>-additional-secrets` to be automatically mounted to each pod of the cluster.
Option 1: Use kubectl

Use `kubectl` to create the Secret.

```bash
kubectl create secret generic <cluster-name>-additional-secrets \
  --from-file=ssl-crt=<path_to_server-cert.pem> \
  --from-file=ssl-key=<path_to_server-key.pem> \
  --from-file=ssl-ca=<path_to_ca-cert.pem>
```
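To confirm that the Secret was created with the expected keys, you can describe it; this check is a suggestion, not part of the original procedure.

```bash
# The ssl-crt, ssl-key, and ssl-ca keys should appear in the Data section
kubectl describe secret <cluster-name>-additional-secrets
```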
Option 2: Declare an SSL Secret in a YAML File
The `data` section of the Secret must have the following key/value pairs:

- `ssl-crt`: The Base64-encoded server certificate
- `ssl-key`: The Base64-encoded server private key
- `ssl-ca`: The Base64-encoded Certificate Authority (CA) certificate
For example:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <cluster-name>-additional-secrets
type: Opaque
data:
  ssl-ca: ...WdNQWtOQk1SWXdGQ...
  ssl-crt: ...U5wYzJOdk1ROHdEU...
  ssl-key: ...HaVBOTytQaEh2QSt...
```
Note: Replace `<cluster-name>` with your SingleStore cluster name.
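If you build this YAML by hand, you must Base64-encode each value yourself (unlike `kubectl create secret`, which encodes file contents automatically). A minimal sketch, assuming GNU coreutils `base64` and the certificate filenames used earlier:

```bash
# -w 0 disables line wrapping so each value stays on a single line
# (on macOS, use `base64 -i <file>` instead)
base64 -w 0 server-cert.pem   # value for ssl-crt
base64 -w 0 server-key.pem    # value for ssl-key
base64 -w 0 ca-cert.pem       # value for ssl-ca
```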
Confirm that the Keys are Mounted to the Cluster

1. Exec into the Master Aggregator (MA) pod.

   ```bash
   kubectl exec -it node-<cluster-name>-master-0 -c node -- bash
   ```

2. Confirm that the following files are present in the `/etc/memsql/extra-secret` directory.

   ```
   ssl-crt
   ssl-key
   ssl-ca
   ```
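Alternatively, you can list the mounted files without starting an interactive shell:

```bash
# One-shot check of the mounted Secret contents on the MA pod
kubectl exec node-<cluster-name>-master-0 -c node -- ls /etc/memsql/extra-secret
```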
Refer to SSL Secure Connections for more information.
Add the Exporter SSL Args

1. In the `sdb-operator.yaml` file on the Source cluster, add the following argument to the `args` list in the `sdb-operator` section. Note that this is a single `--master-exporter-parameters` argument and the remainder is its value.

   ```
   "--master-exporter-parameters",
   "--config.ssl-cert=/etc/memsql/extra-secret/ssl-crt --config.ssl-key=/etc/memsql/extra-secret/ssl-key --config.use-https --config.user=root --no-cluster-collect.info_schema.tables --no-cluster-collect.info_schema.tablestats --no-collect.info_schema.tables --no-collect.info_schema.tablestats"
   ```

   If the cluster is configured to use the `root` user with SSL, an additional `--config.ssl-ca=/etc/memsql/ssl/ca-cert.pem` argument must be added to the `--master-exporter-parameters` value.

   When modified, the file will resemble the following.

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: sdb-operator
     labels:
       app.kubernetes.io/component: operator
   spec:
     replicas: 1
     selector:
       matchLabels:
         name: sdb-operator
     template:
       metadata:
         labels:
           name: sdb-operator
       spec:
         serviceAccountName: sdb-operator
         containers:
         - name: sdb-operator
           image: operator_image_tag
           imagePullPolicy: Always
           args: [
             # Cause the operator to merge rather than replace annotations on services
             "--merge-service-annotations",
             # Allow the process inside the container to have read/write access to the `/var/lib/memsql` volume.
             "--fs-group-id", "5555",
             "--cluster-id", "sdb-cluster",
             "--master-exporter-parameters",
             "--config.ssl-cert=/etc/memsql/extra-secret/ssl-crt --config.ssl-key=/etc/memsql/extra-secret/ssl-key --config.use-https --config.user=root --no-cluster-collect.info_schema.tables --no-cluster-collect.info_schema.tablestats --no-collect.info_schema.tables --no-collect.info_schema.tablestats"
           ]
           env:
           - name: WATCH_NAMESPACE
             valueFrom:
               fieldRef:
                 fieldPath: metadata.namespace
           - name: POD_NAME
             valueFrom:
               fieldRef:
                 fieldPath: metadata.name
           - name: OPERATOR_NAME
             value: "sdb-operator"
   ```
2. Apply the changes to the cluster.

   ```bash
   kubectl apply -f sdb-operator.yaml
   ```
3. Confirm that the Operator pod is running.

   ```bash
   kubectl get pods
   ```

   ```
   memsql-operator-758ffb66c8-5sn4l   1/1   Running
   ```
4. Run the following command to force a restart of the `memsql_exporter` container on the master pod.

   ```bash
   kubectl exec -it node-<memsql-cluster-name>-master-0 -c exporter -- /bin/sh -c "kill 1"
   ```
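To verify that the exporter restarted with the new parameters, you can check its logs and probe the HTTPS endpoint. This is a sketch, assuming the exporter listens on port 9104 (the port used by the exporter service later in this guide), that `curl` is available in the node container, and using `-k` because the server certificate may not be trusted by the client:

```bash
# Inspect the exporter container's logs for startup errors
kubectl logs node-<memsql-cluster-name>-master-0 -c exporter

# Probe the metrics endpoint over HTTPS from inside the node container
kubectl exec -it node-<memsql-cluster-name>-master-0 -c node -- \
  curl -sk https://localhost:9104/metrics | head
```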
Configure the Exporter Process
The monitoring exporter should already be running in a container in the Master node Pod on the Source cluster.
If Metrics and Source clusters are the same or are located in the same Kubernetes cluster (in different namespaces, for example), no further action is required, and you may skip to the next step.
If Metrics and Source clusters are located in different Kubernetes clusters, the exporter process must be exposed outside of the cluster as a service (such as a LoadBalancer service), and this service must be accessible from all nodes in the Metrics cluster.
For example:
1. Retrieve the `ownerReferences` UID.

   ```bash
   kubectl get svc svc-<cluster-name>-ddl -o jsonpath='{.metadata.ownerReferences}'
   ```

2. Modify the `svc-k8s-cluster-exporter.yaml` file using the UID value retrieved in the above step.

   ```yaml
   apiVersion: v1
   kind: Service
   metadata:
     annotations:
       custom: annotations
       service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"
     labels:
       app.kubernetes.io/component: master
       app.kubernetes.io/instance: <memsql-cluster-name>
       app.kubernetes.io/name: memsql-cluster
       custom: label
     name: svc-<memsql-cluster-name>-exporter
     namespace: default
     ownerReferences:
     - apiVersion: memsql.com/v1alpha1
       controller: true
       kind: MemsqlCluster
       name: <memsql-cluster-name>
       uid: <ownerReferences-UID> # Update with ownerReferences UID
   spec:
     externalTrafficPolicy: Cluster
     ipFamilies:
     - IPv4
     ipFamilyPolicy: SingleStack
     ports:
     - name: prometheus
       port: 9104
       protocol: TCP
     selector:
       app.kubernetes.io/instance: <memsql-cluster-name>
       app.kubernetes.io/name: memsql-cluster
       statefulset.kubernetes.io/pod-name: node-<memsql-cluster-name>-master-0
     sessionAffinity: None
     type: LoadBalancer
   ```
3. Create the exporter service.

   ```bash
   kubectl create -f svc-k8s-cluster-exporter.yaml
   ```
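Before moving on, you can confirm that the LoadBalancer service received an external address and is reachable; the exact output depends on your cloud provider, and this check is a suggested sketch rather than part of the original procedure.

```bash
# EXTERNAL-IP must be populated (provisioning can take a few minutes)
kubectl get svc svc-<memsql-cluster-name>-exporter

# From a host that can reach the Metrics cluster's network, probe port 9104
# (use http:// instead if the exporter is not configured for HTTPS)
curl -sk https://<external-ip-or-hostname>:9104/metrics | head
```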
Configure the Metrics Database
Determine the Hostname of the Exporter
From the previous step, if the Metrics and Source clusters are the same or are located in the same Kubernetes cluster, then `<name of the master pod>` can be used as the exporter hostname in this section.
However, if the Metrics and Source clusters are located in different Kubernetes clusters, then a hostname/IP address of the created service that can be reached by each node of the Metrics cluster can be used as the exporter hostname in this section.
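If you need to look up the master pod's name for this value, the naming used throughout this guide follows the pattern `node-<cluster-name>-master-0`; you can confirm it by listing the pods:

```bash
# The Master Aggregator pod typically appears as node-<cluster-name>-master-0
kubectl get pods | grep master
```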
Automatically Configure the Metrics Database
Create and Apply the Tools RBAC
Either use an existing account with sufficient permissions or create a service account that can be used for running the configuration Pod.
1. Save the following to a `tools-rbac.yaml` file.

   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: tools
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     namespace: default
     name: tools-role
   rules:
   - apiGroups:
     - ""
     resources:
     - pods
     - services
     - namespaces
     verbs:
     - get
     - list
   - apiGroups: [ "" ]
     resources: [ "pods/exec" ]
     verbs: [ "create" ]
   - apiGroups:
     - apps
     resources:
     - statefulsets
     verbs:
     - get
     - list
   - apiGroups:
     - memsql.com
     resources:
     - '*'
     verbs:
     - get
     - list
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: tools
     namespace: default
   subjects:
   - kind: ServiceAccount
     name: tools
   roleRef:
     kind: Role
     name: tools-role
     apiGroup: rbac.authorization.k8s.io
   ```
2. Apply the `tools-rbac.yaml` file to the cluster. This creates a `tools` service account with the required permissions.

   ```bash
   kubectl apply -f tools-rbac.yaml
   ```
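You can verify that the binding took effect with `kubectl auth can-i`, which evaluates permissions on behalf of the service account:

```bash
# Both commands should print "yes"
kubectl auth can-i list pods --as=system:serviceaccount:default:tools
kubectl auth can-i create pods/exec --as=system:serviceaccount:default:tools
```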
Create and Apply the Start Monitoring Job
Note
Existing cluster monitoring instances can be configured to collect event traces after upgrading a cluster to SingleStore v8.
1. Add `--collect-event-traces` to your existing `start-monitoring-job.yaml` file.

   HTTP Connections

   ```yaml
   [...]
   command: ["sdb-admin",
             "start-monitoring-kube",
             "--user=<database-user>",
             "--password=<database-user-password>",
             "--collect-event-traces",
             "--exporter-host=<exporter-hostname>",
             "--yes"
             <other options…>]
   [...]
   ```

   HTTPS Connections

   ```yaml
   [...]
   command: ["sdb-admin",
             "start-monitoring-kube",
             "--user=<database-user>",
             "--password=<database-user-password>",
             "--collect-event-traces",
             "--exporter-host=<exporter-hostname>",
             "--ssl-ca=/etc/memsql/extra-secret/ssl-ca",
             "--yes"
             <other options…>]
   [...]
   ```

2. Restart monitoring.

   ```bash
   kubectl apply -f start-monitoring-job.yaml
   ```
The following YAML creates a job that sets up the metrics
database and the associated pipelines.
With Internet Access
1. Modify the `start-monitoring-job.yaml` file so that it resembles the following. Note that:

   - `<database-user>` must be replaced with the desired database user, such as the admin user
   - `<database-user-password>` must be replaced with this database user's password
   - `<exporter-hostname>` must be replaced with the exporter hostname from the Determine the Hostname of the Exporter step
   - `<other-options…>` must be removed or replaced with the options available in `sdb-admin start-monitoring-kube`
   HTTP Connections

   ```yaml
   apiVersion: batch/v1
   kind: Job
   metadata:
     name: toolbox-start-monitoring
   spec:
     template:
       spec:
         serviceAccountName: tools
         containers:
         - name: toolbox-start-monitoring
           image: singlestore/tools:alma-v1.11.6-1.17.2-cc87b449d97fd7cde78fdc4621c2aec45cc9a6cb
           imagePullPolicy: IfNotPresent
           command: ["sdb-admin",
                     "start-monitoring-kube",
                     "--user=<database-user>",
                     "--password=<database-user-password>",
                     "--collect-event-traces",
                     "--exporter-host=<exporter-hostname>",
                     "--yes"
                     <other options…>]
         restartPolicy: Never
     backoffLimit: 2
   ```

   HTTPS Connections
   Update the following lines from the above definition:

   ```yaml
   command: ["sdb-admin",
             "start-monitoring-kube",
             "--user=<database-user>",
             "--password=<database-user-password>",
             "--collect-event-traces",
             "--exporter-host=<exporter-hostname>",
             "--yes"
             <other options…>]
   ```

   to:

   ```yaml
   command: ["sdb-admin",
             "start-monitoring-kube",
             "--user=<database-user>",
             "--password=<database-user-password>",
             "--collect-event-traces",
             "--exporter-host=<exporter-hostname>",
             "--ssl-ca=/etc/memsql/extra-secret/ssl-ca",
             "--yes"
             <other options…>]
   ```
2. Run the following command to apply the changes in the `start-monitoring-job.yaml` file.

   ```bash
   kubectl apply -f start-monitoring-job.yaml
   ```
Confirm that the Start Monitoring Job is Running
Run the following command to confirm that the job has finished successfully: `COMPLETIONS` should show `1/1` for `toolbox-start-monitoring`.

```bash
kubectl get jobs
```

```
NAME                       COMPLETIONS   DURATION   AGE
toolbox-start-monitoring   1/1           13s        21s
```
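If the job does not complete, its pod logs usually indicate the cause (for example, an unreachable exporter host or bad credentials):

```bash
# View the output of the start-monitoring job's pod
kubectl logs job/toolbox-start-monitoring
```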
You may terminate this job by running the following command.
```bash
kubectl delete -f start-monitoring-job.yaml
```
As of Kubernetes 1.23, when the TTL-after-finished feature became stable, `ttlSecondsAfterFinished: <seconds>` may be added to the job spec to automatically remove the finished job after the specified number of seconds.
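For example, an illustrative job spec that is cleaned up 100 seconds after finishing:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: toolbox-start-monitoring
spec:
  ttlSecondsAfterFinished: 100   # delete the Job and its pods 100s after completion
  template:
    ...
```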
Last modified: September 20, 2024