
Troubleshoot Your Monitoring Setup

Pipelines

Check the Monitoring Tables for Data

  1. Connect to the database.

  2. Run the following SQL. The default database name is metrics. If your database name differs from the default, replace metrics with your database name.

    USE metrics;
    SELECT * FROM metrics LIMIT 10;
    

    Optionally, run SELECT * FROM on each of the monitoring tables.

    If these queries return an empty set, check the pipelines for errors by following the next steps.

  3. Review the monitoring pipelines.

    SHOW PIPELINES;
    
  4. If a monitoring pipeline (with a name resembling *_metrics or *_blobs) is in a state other than Running, start the pipeline.

    START PIPELINE <pipeline-name>;
    
  5. Check the information_schema.pipelines_errors table for errors.

    SELECT * FROM information_schema.pipelines_errors;
    

Resolve Pipeline Errors

If you receive a Cannot extract data for the pipeline error in the pipelines_errors table, perform the following steps.

  1. Confirm that port 9104 is accessible from all hosts in the cluster. This is the default port used for monitoring. To test this, run the following command at the Linux command line and review the output.

    curl http://<endpoint>:9104/cluster-metrics
    

    For example:

    curl http://192.168.1.100:9104/cluster-metrics
    
  2. If the hostname of the Master Aggregator is localhost, and a pipeline was created using localhost, recreate the pipeline using the Master Aggregator host’s IP address. For example:

    metrics pipeline:

    CREATE OR REPLACE PIPELINE `metrics` AS LOAD DATA PROMETHEUS_EXPORTER 
    "http://<host-ip-address>:9104/cluster-metrics" 
    CONFIG '{"is_memsql_internal":true}' 
    INTO PROCEDURE `load_metrics` FORMAT JSON;
    
    START PIPELINE IF NOT RUNNING metrics;
    

    blobs pipeline:

    CREATE OR REPLACE PIPELINE `blobs` AS LOAD DATA PROMETHEUS_EXPORTER 
    "http://<host-ip-address>:9104/samples" 
    CONFIG '{"is_memsql_internal":true, "download_type":"samples"}' 
    INTO PROCEDURE `load_blobs` FORMAT JSON;
    
    START PIPELINE IF NOT RUNNING blobs;
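Before recreating the pipelines, the port check from step 1 can also be done without curl. The sketch below uses a plain TCP connection test; the host list and function name are illustrative assumptions, not part of any SingleStore tooling.

```python
import socket

# Sketch: verify that the default monitoring port (9104) is reachable on
# each host in the cluster. A TCP connect only confirms the port is open;
# use the curl command from step 1 to confirm the endpoint returns metrics.

def port_open(host, port=9104, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

hosts = ["192.168.1.100"]  # replace with the hosts in your cluster
for host in hosts:
    status = "reachable" if port_open(host) else "UNREACHABLE"
    print(f"{host}:9104 {status}")
```

Any host reported as UNREACHABLE needs its firewall or network configuration reviewed before the pipelines will ingest data.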