Troubleshoot Your Monitoring Setup

Pipelines

Check the Monitoring Tables for Data

  1. Connect to the database.

  2. Run the following SQL. The default database name is metrics. If your database name is different from the default name, replace metrics with your database name.

    USE metrics;
    SELECT * FROM metrics LIMIT 10;

    Optionally, run SELECT * FROM queries on all of the monitoring tables (see the sketch after this list).

    If these queries return an empty set, review the monitoring pipelines and the pipelines_errors table in the following steps.

  3. Review the monitoring pipelines.

    SHOW PIPELINES;
  4. If a monitoring pipeline (with a name resembling *_metrics or *_blobs) is in a state other than Running, start the pipeline.

    START PIPELINE <pipeline-name>;
  5. Check the information_schema.pipelines_errors table for errors (a narrowed version of this query is sketched after this list).

    SELECT * FROM information_schema.pipelines_errors;
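
If the monitoring tables do contain rows and you want to confirm that both default tables are being populated, a minimal check along these lines can be run. This is a sketch assuming the default database name metrics and the default blobs table that the blobs pipeline loads into; adjust the names if your setup differs.

    USE metrics;
    -- Row counts are a quick signal that both pipelines are ingesting data.
    SELECT COUNT(*) FROM metrics;
    SELECT COUNT(*) FROM blobs;
    -- Sample a few rows from each table.
    SELECT * FROM metrics LIMIT 10;
    SELECT * FROM blobs LIMIT 10;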
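
Similarly, SHOW PIPELINES and the full pipelines_errors query return rows for every database on the cluster. The following sketch narrows both checks to the monitoring database; it assumes the default database name metrics and the standard information_schema column names (pipeline_name, state, error_type, error_message, error_unix_timestamp), which may vary between versions.

    -- State of the monitoring pipelines only.
    SELECT pipeline_name, state
    FROM information_schema.pipelines
    WHERE database_name = 'metrics';

    -- Most recent errors reported for those pipelines.
    SELECT pipeline_name, error_type, error_message
    FROM information_schema.pipelines_errors
    WHERE database_name = 'metrics'
    ORDER BY error_unix_timestamp DESC
    LIMIT 20;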

Resolve Pipeline Errors

If you receive a Cannot extract data for the pipeline error in the pipelines_errors table, perform the following steps.

  1. Confirm that port 9104 is accessible from all hosts in the cluster. This is the default port used for monitoring. To test this, run the following command at the Linux command line and review the output.

    curl http://<endpoint>:9104/cluster-metrics

    For example:

    curl http://192.168.1.100:9104/cluster-metrics
  2. If the hostname of the Master Aggregator is localhost and the pipelines were created using localhost, use the following SQL commands to recreate them with the Master Aggregator host's IP address. For example:

    metrics pipeline:

    CREATE OR REPLACE PIPELINE `metrics` AS LOAD DATA prometheus_exporter
    "http://<host-ip-address>:9104/cluster-metrics" CONFIG '{"is_memsql_internal":true}'
    INTO PROCEDURE `load_metrics` FORMAT JSON;
    START PIPELINE IF NOT RUNNING metrics;

    blobs pipeline:

    CREATE OR REPLACE PIPELINE `blobs` AS LOAD DATA prometheus_exporter
    "http://<host-ip-address>:9104/samples" CONFIG '{"is_memsql_internal":true, "download_type":"samples"}'
    INTO PROCEDURE `load_blobs` FORMAT JSON;
    START PIPELINE IF NOT RUNNING blobs;
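
For instance, if the Master Aggregator host's IP address were 192.168.1.100 (the address used in the earlier curl example), the recreated pipelines would look like the following:

    CREATE OR REPLACE PIPELINE `metrics` AS LOAD DATA prometheus_exporter
    "http://192.168.1.100:9104/cluster-metrics" CONFIG '{"is_memsql_internal":true}'
    INTO PROCEDURE `load_metrics` FORMAT JSON;
    START PIPELINE IF NOT RUNNING metrics;

    CREATE OR REPLACE PIPELINE `blobs` AS LOAD DATA prometheus_exporter
    "http://192.168.1.100:9104/samples" CONFIG '{"is_memsql_internal":true, "download_type":"samples"}'
    INTO PROCEDURE `load_blobs` FORMAT JSON;
    START PIPELINE IF NOT RUNNING blobs;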
