MV_EVENTS
This view contains information about events and is useful for monitoring events across clusters over time.
Information on the most recent 1,028 events is stored in the view.
Events from all nodes are stored in the view, and when a node is restarted, events specific to that node are removed from the view.
Learn more about interpreting this view in the Components to Monitor guide.
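The retention rules above can be illustrated with a small sketch. This toy model only mirrors the described semantics (a bounded buffer of recent events, with a restarted node's events evicted); it is not the engine's implementation, and the node IDs and event names used are placeholders:

```python
from collections import deque

MAX_EVENTS = 1028  # retention limit described above


class EventLog:
    """Toy model of the view's retention semantics."""

    def __init__(self):
        # A bounded deque drops the oldest entries once the limit is hit,
        # matching "information on the most recent 1,028 events".
        self.events = deque(maxlen=MAX_EVENTS)

    def record(self, node_id, event_type):
        self.events.append((node_id, event_type))

    def node_restarted(self, node_id):
        # When a node restarts, events specific to that node are removed.
        kept = [ev for ev in self.events if ev[0] != node_id]
        self.events = deque(kept, maxlen=MAX_EVENTS)


log = EventLog()
log.record(1, "NODE_ONLINE")   # placeholder event name
log.record(2, "NODE_ONLINE")
log.node_restarted(1)          # node 1's events are evicted
```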
| Column Name | Description |
|---|---|
| | The |
| | The timestamp of a given event. |
| | The severity of a given event: |
| | The type of event that occurred. |
| | Additional information about a given event in JSON format. |
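As a sketch of how rows from this view might be consumed by a monitoring script, the following snippet filters hypothetical event rows by severity and decodes the JSON details payload. The lowercase field names and event-type strings here are illustrative assumptions, not the view's actual identifiers:

```python
import json

# Hypothetical rows shaped like the columns described above: an origin
# node, a timestamp, a severity, an event type, and JSON details.
sample_events = [
    {"origin_node": 1, "event_time": "2024-05-01 12:00:00",
     "severity": "NOTICE", "event_type": "NODE_ONLINE", "details": "{}"},
    {"origin_node": 2, "event_time": "2024-05-01 12:05:00",
     "severity": "ERROR", "event_type": "NODE_OFFLINE",
     "details": '{"node": "leaf-2"}'},
]


def events_at_severity(rows, severity):
    """Return (event_type, decoded details) pairs for one severity level."""
    return [(r["event_type"], json.loads(r["details"]))
            for r in rows if r["severity"] == severity]


errors = events_at_severity(sample_events, "ERROR")
```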
MV_EVENTS.EVENT_TYPE
Provides descriptions for each potential result of SELECT DISTINCT EVENT_TYPE.
| Event Type | Description |
|---|---|
| | An aggregator is being added to the cluster. |
| | An aggregator is being removed from the cluster. |
| | Logged when a cluster node's hostname or port is changed. This tracks changes to node network configuration in the cluster topology. |
| | Marks the end of a specific asynchronous upgrade step on a cluster. |
| | Marks the start of a specific asynchronous upgrade step on a cluster. |
| | The related database, and therefore the given node, has been backed up. |
| | The previous cause of the |
| | The blob cache on a node is at or near its disk usage limit. |
| | An event triggered by a periodic background task involved in bottomless storage management and, where supported, migration to the Bottle Service. |
| | Incoming writes have been slowed or halted because the system cannot offload data to remote storage fast enough, or local resources (disk and cache) are under pressure. |
| | The system has activated rate limiting for API calls to the bottomless storage system due to excessive requests or throttling signals. |
| | Operations involving the database's remote storage cannot proceed because a required lock is held by another session. This frequently arises after PITR or when certain processes are not cleaned up properly. |
| | Triggered when the system detects that uploads (of log chunks and blobs) are taking longer than expected. |
| | A failure occurred during the verification step of an upload to bottomless storage. |
| | Marks the end of a database attach operation. The database has been attached and is accessible. |
| | Marks the start of a database attach operation. The system has started attaching an existing database to the cluster. |
| | Marks the end of a database creation operation. The database has been created and is ready for use. |
| | Marks the start of a database creation operation. The system has started processing the creation of a new database. |
| | Marks the end of a database detach operation. The database has been detached and is no longer accessible. |
| | Marks the start of a database detach operation. The system has started detaching a database from the cluster. |
| | Marks the end of a database drop operation. The database has been removed from the system. |
| | Marks the start of a database drop operation, logged on the Master Aggregator. The system has started removing the database. |
| | Replication of the database has started. |
| | Replication of the database has stopped. |
| | The related database is being reprovisioned. |
| | A heartbeat query failed. |
| | Ingest is failing due to low available disk space. |
| | A leaf node is being added to the cluster. |
| | A leaf node is being removed from the cluster. |
| | SingleStore ran out of cache space while attempting to create a blob, stopping replay. |
| | Maximum server memory has been reached. |
| | A node has run out of available memory during a replay operation. Replay refers to processes where the system must load, reconstruct, or reapply database state from persisted logs or snapshots. |
| | Maximum table memory has been reached. |
| | The related node is attaching. |
| | The related node is detaching. |
| | A node has encountered a disk-related error or has exceeded a disk usage threshold. This can result in error conditions, warnings, or operational limitations for the node. |
| | A node has encountered a memory-related error or has exceeded a memory usage threshold. This can result in error conditions, warnings, or operational limitations for the node. |
| | The related node is offline. |
| | The related node is online. |
| | A node failed to respond to a ping/heartbeat check within the expected timeframe, which can indicate a connectivity or health issue. |
| | The related node has become reachable. |
| | The related node is starting up. |
| | The related node has become unreachable. |
| | The related node has been notified of an aggregator being promoted from child to master. |
| | A partition is lost due to failure and can no longer be recovered. |
| | A pipeline stopped. |
| | A partition rebalance has finished. |
| | The rebalance process, which redistributes data partitions to optimize resource usage, recover from failures, or adjust for topology changes, has begun. |
| | A repair job, a background process responsible for maintaining data consistency, integrity, or recoverability, has stalled, meaning it is taking longer to progress than expected. |
| | A backup of the related database, and therefore the given node, has been restored. |
| | An engine variable was reconfigured. |
| | Emitted when write workload throttling starts or stops because a sync replica's replay falls behind. |
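Because the view only retains recent events, a monitoring job typically polls it periodically and aggregates occurrences per event type. A minimal sketch of that aggregation, using hypothetical event-type strings (the real identifiers come from the table above, via SELECT DISTINCT on the event-type column):

```python
from collections import Counter

# Placeholder event-type names for illustration only.
sample_types = [
    "NODE_ONLINE", "NODE_OFFLINE", "REBALANCE_STARTED",
    "NODE_OFFLINE", "DATABASE_REPLICATION_START",
]

# Event types a monitoring job might treat as alert-worthy; this set is
# an assumption, chosen to match the node-failure rows described above.
ALERT_TYPES = {"NODE_OFFLINE", "PARTITION_UNRECOVERABLE"}

counts = Counter(sample_types)
# Keep only the alert-worthy types that actually occurred.
alerts = {t: n for t, n in counts.items() if t in ALERT_TYPES}
```

A real job would feed the result of a query against the view into `counts` instead of a hard-coded list.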