

This view shows the state of partition instances relevant to the node, which are listed in cluster metadata.

This view is also available as an LMV view (a per-node local view).




The ID of the node that produced this replication management row, and for which this partition instance is relevant (that is, the instance is either hosted on that node or replicates from an instance on that node).


Internal ID for replication management. Useful for cross-referencing with tracelogs.


The ID of this instance in the partition instances table of cluster database metadata.


The ID in cluster metadata of the master instance for this partition. It is 0 if there is no master in metadata.


The ID of this instance's distributed database in the sharded database table of cluster database metadata.


This partition's ordinal. Database partitions are assigned numeric identifiers. For example, a database named testdb would have partitions named testdb_0, testdb_1, and so on; in this case, 0 and 1 are the ordinals.
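The naming convention above can be sketched as follows. This is an illustrative helper, not part of any product API; the function names are hypothetical.

```python
# Illustrative sketch (hypothetical helpers, not a product API): a partition
# database name is the sharded database name plus "_<ordinal>".
def partition_name(db_name: str, ordinal: int) -> str:
    """Build a partition database name from its ordinal."""
    return f"{db_name}_{ordinal}"

def partition_ordinal(partition_db_name: str) -> int:
    """Recover the ordinal from a partition database name."""
    return int(partition_db_name.rsplit("_", 1)[1])

print(partition_name("testdb", 0))    # testdb_0
print(partition_ordinal("testdb_1"))  # 1
```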


The node ID of the node where this partition instance is hosted.


The role of this instance in cluster metadata. Possible values: Non-existent, Master, Sync Replica, Ready Replica, Async Replica, Unrecoverable, Paused Replica.


When NODE_ID is the same as INSTANCE_NODE_ID, possible values include: Master, Replica, Transitioning, Dropping Or Reprovisioning, Unrecoverable, Missing (disconnected).

When NODE_ID is not the same as INSTANCE_NODE_ID, possible values include: Missing (disconnected), Async Connected, Sync Connected, Disconnected (blocking commits).
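The two cases above can be summarized in a small sketch: the set of possible state values for a row depends on whether the row describes a local instance (NODE_ID equal to INSTANCE_NODE_ID) or a remote one. The names and structure here are illustrative assumptions, not an actual API.

```python
# Hypothetical sketch: which state values are possible for a row depends on
# whether NODE_ID == INSTANCE_NODE_ID (local) or not (remote), per the text.
LOCAL_STATES = {
    "Master", "Replica", "Transitioning",
    "Dropping Or Reprovisioning", "Unrecoverable", "Missing (disconnected)",
}
REMOTE_STATES = {
    "Missing (disconnected)", "Async Connected",
    "Sync Connected", "Disconnected (blocking commits)",
}

def expected_states(node_id: int, instance_node_id: int) -> set:
    """Return the valid state values for a row, per the rules above."""
    return LOCAL_STATES if node_id == instance_node_id else REMOTE_STATES

print("Sync Connected" in expected_states(1, 2))  # True: remote row
print("Master" in expected_states(1, 1))          # True: local row
```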


Whether the database is a DR (disaster recovery) replica and, if so, whether it is paused. Possible values: Not DR, Active DR, Paused DR.


Whether there is an incomplete asynchronous task still pending.


Whether this partition's sharded database is configured to use synchronous replication.


The term of this partition's master. It is only defined if there is an instance that is a non-DR master; otherwise it is 0.


A term uniquely identifies a period during which an instance of a partition is the master. It is only non-zero when NODE_ID is equal to INSTANCE_NODE_ID.


The outcome of the last action performed on this partition instance.


The elapsed time since the last failure.


Counts the number of times there has been a metadata state change for this partition instance.


If this is a non-local instance (INSTANCE_NODE_ID differs from NODE_ID, meaning the partition is on a different node and replicates from an instance on this node), this contains the metadata count at the time the instance started replicating from this node. It is used to detect whether a synchronously replicated partition may need to be reprovisioned.
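One plausible reading of the check above, sketched as code: compare the metadata state-change count recorded when the remote instance started replicating against the current count, and treat a mismatch as a signal that reprovisioning may be needed. The comparison mechanism is an assumption here, not confirmed by the text.

```python
# Hypothetical sketch of the reprovision check described above. The exact
# mechanism is an assumption: a metadata count that changed since replication
# started is taken to mean the instance may need to be reprovisioned.
def may_need_reprovision(current_metadata_count: int,
                         count_at_replication_start: int) -> bool:
    """True if metadata changed since this instance began replicating."""
    return current_metadata_count != count_at_replication_start

print(may_need_reprovision(7, 7))  # False
print(may_need_reprovision(9, 7))  # True
```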


The last iteration counter on which this state was updated; used to cross-check against the other replication management tables.


Only applies to masters. Indicates whether all cluster-metadata sync replicas have been added to this master.


Indicates whether the local master of this partition is ready for replicas to be added.


Only applies to remote replicas. Indicates how many milliseconds remain before the node is connectable again (nodes become non-connectable after past connection failures).


Only applies to local replicas. Indicates whether a master is connected or pending.


Only applies to replicas, and only relevant for DR replicas. If a master is connected, this indicates its node ID.


Only applies to local DR replicas, and only relevant for DR masters. Indicates the remote reference database name.


Only applies to local DR replicas, and only relevant for DR masters. Indicates the remote cluster ID.


Name of the partition database.


Only applies to replicas in other nodes. Indicates whether the replica has acknowledged receipt of the snapshot and logs sent when the replica was added.


Whether the master instance for this partition is online in metadata.


Whether this is an unlimited storage database whose remote storage has been dropped, making it unusable.


The action the system will take on this partition to move it to a success state; nothing if the partition is already in a success state, or if its state cannot be corrected.


The success states of all partitions are aggregated to form a global success state. Possible values: Success, Soft Failure, Hard Failure.

Success: the most recent iteration of the replication management thread ended successfully, with local state matching metadata state.

Soft Failure: the local state does not match the metadata state, but no mismatch impacts the correct functioning of the cluster, so the advancement of SUCCESSFUL_CLUSTER_LSN is not blocked. For example, if all that is wrong is a disconnected async replica, the cluster is technically healthy, although it remains replication management's responsibility to reconnect it.

Hard Failure: the local state does not match the metadata state in a way that impacts the correct functioning of the cluster (for example, a disconnected sync replica), and this blocks the advancement of SUCCESSFUL_CLUSTER_LSN.
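The aggregation described above can be sketched as follows, assuming the natural severity ordering Hard Failure over Soft Failure over Success (this precedence is an assumption drawn from the definitions, not stated explicitly in the text).

```python
# Illustrative sketch of aggregating per-partition success states into a
# global state, assuming the precedence Hard Failure > Soft Failure > Success:
# any hard failure makes the global state Hard Failure; otherwise any soft
# failure makes it Soft Failure; otherwise it is Success.
SEVERITY = {"Success": 0, "Soft Failure": 1, "Hard Failure": 2}

def global_success_state(partition_states):
    """Return the worst state among all partitions (Success if none)."""
    return max(partition_states, key=SEVERITY.__getitem__, default="Success")

print(global_success_state(["Success", "Soft Failure", "Success"]))  # Soft Failure
print(global_success_state(["Success", "Success"]))                  # Success
```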