Tables

Note: Review this section in conjunction with the Appendix.

You can configure tables to ingest data from a source database to a destination database. To select a table for transfer to a SingleStore database, navigate to Dashboard > Tables, and then select the gear icon.

Note: Views cannot be selected in the Ingest UI. For full extract scenarios, you can configure a view in config.xml and load data from it. Test this configuration in a non-production environment before using it in production. For detailed steps, refer to Load Data from Views.

  1. The Tables page contains two views:

    1. Tree view

    2. List view

  2. In the List view, expand default to view the schemas under the connected database.

  3. Expand the schema to view the tables.

  4. Select the checkbox next to the table to extract it using Ingest.

  5. You can view the details for the selected table in the middle column of the page.

  6. Select the With History checkbox to create a history of records in the destination table. Leave the checkbox unselected if you want a mirror copy of the table instead.

  7. Partitioning is not applicable to a SingleStore destination.

  8. Enable Skip Initial Extract to capture changes to a table without performing the initial bulk load.

  9. Enable Redo Initial Extract to perform the initial bulk load of a previously extracted table.

  10. You can add a simple WHERE clause to filter records during the initial bulk load. This option does not apply to delta extracts. In the Where field, enter only the condition, without the WHERE keyword. For example, enter:

    order_date >= '2024-01-01'
  11. Select the primary key column or unique indexed column by enabling PKey under Primary Key & Masking. If your table does not have a primary key or a unique indexed column, select multiple columns to form a natural key. Refer to Initial Load for Tables without Primary Keys for more information.

  12. To prevent the extraction of values from a specific column, you can mask it by enabling Mask under Primary Key & Masking. Ingest does not extract values for a masked column, and the column is created as varchar(1) in the SingleStore table.

  13. If necessary, select the TChange checkbox next to columns that require a data type conversion.

  14. Select Apply to confirm and save the details.

Repeat this process for each table. Once completed, proceed with the next steps:

  1. Navigate to the Operations tab.

  2. Select Full Extract to initiate the initial extract and load process.

Column Type Change

This feature is commonly used in SAP environments to allow data type changes for columns or fields, such as converting from character or numeric formats to Integer, Long, Float, Date, or Timestamp.

Ingest automatically converts data types during data replication or CDC to the appropriate destination formats. The destination data types and their conversion codes are:

  • INTEGER: @I

  • LONG: @L

  • FLOAT: @F

  • DATE: @D(format), including a format clause, for example @D(yyyyMMdd)

  • TIMESTAMP: @T(format), including a format clause, for example @T(yyyy-MM-dd HH:mm:ss)

Note: The (format) part can vary based on the value in the source column.
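The conversions above can be illustrated with a short sketch. Ingest applies these conversions internally during replication; the helper names below and the Java-style-to-strptime pattern mapping are assumptions for illustration only, not the product's actual implementation.

```python
from datetime import datetime

# Assumed mapping from Java-style pattern letters (as used in the @D/@T
# format clauses) to Python strptime directives. Illustration only.
_PATTERN_MAP = {"yyyy": "%Y", "MM": "%m", "dd": "%d",
                "HH": "%H", "mm": "%M", "ss": "%S"}

def _to_strptime(fmt):
    # Translate a pattern such as yyyyMMdd into %Y%m%d.
    for java, py in _PATTERN_MAP.items():
        fmt = fmt.replace(java, py)
    return fmt

def convert(value, code):
    """Apply a TChange-style conversion code to a source string value."""
    if code in ("@I", "@L"):
        return int(value)
    if code == "@F":
        return float(value)
    if code.startswith("@D(") and code.endswith(")"):
        return datetime.strptime(value, _to_strptime(code[3:-1])).date()
    if code.startswith("@T(") and code.endswith(")"):
        return datetime.strptime(value, _to_strptime(code[3:-1]))
    raise ValueError("unknown conversion code: " + code)

print(convert("20240115", "@D(yyyyMMdd)"))                      # 2024-01-15
print(convert("2024-01-15 09:30:00", "@T(yyyy-MM-dd HH:mm:ss)"))
```

For example, a source column holding `20240115` with a TChange code of `@D(yyyyMMdd)` lands in the destination as the date 2024-01-15.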

Initial Load for Tables without Primary Keys

If the source table does not have a primary key or a uniquely indexed column, select a combination of multiple columns on the destination table to form a natural key.

Example: Full extract for a table without a primary key

Consider the following source table that does not define a primary key:

CREATE TABLE orders
(
order_id INT,
customer_id INT,
order_date DATE,
status VARCHAR(20),
amount NUMERIC(12, 2)
);

To perform a full extract for this table:

  1. Navigate to Dashboard > Tables and select the table.

  2. In the Primary Key & Masking section, select one or more columns that uniquely identify each row in combination (for example, order_id, or a combination such as customer_id and order_date).

  3. Select Apply to save the configuration.

  4. Navigate to the Operations tab.

  5. Select Full Extract to initiate the initial extract for the table.

Ingest uses this configured key to uniquely identify rows in the destination table, even if the source table does not define a primary key.
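Before configuring a column combination as a natural key, it is worth confirming that it actually identifies each row uniquely. The following sketch (an assumed pre-check workflow, not part of the Ingest UI; sqlite3 stands in for the real source database) runs a duplicate check against the `orders` table above:

```python
import sqlite3

# Build the example orders table from the documentation in an in-memory
# database, with a few sample rows (sample data is illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        order_id INT, customer_id INT, order_date DATE,
        status VARCHAR(20), amount NUMERIC(12, 2)
    );
    INSERT INTO orders VALUES
        (1, 100, '2024-01-01', 'NEW',     10.00),
        (2, 101, '2024-01-02', 'NEW',     20.00),
        (3, 100, '2024-01-03', 'SHIPPED', 30.00);
""")

# Any rows returned here share the candidate key, so the combination
# would NOT be a valid natural key.
duplicates = conn.execute("""
    SELECT customer_id, order_date, COUNT(*) AS n
    FROM orders
    GROUP BY customer_id, order_date
    HAVING n > 1
""").fetchall()
print(duplicates)  # an empty list means the column pair is a valid natural key
```

The same `GROUP BY ... HAVING COUNT(*) > 1` query can be run directly on the source database for any candidate column combination.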

Schedule an Ingest Job

You can schedule an Ingest job after setting up the tables. To schedule an Ingest job, navigate to Dashboard > Schedule, enable the scheduler, and then select the gear icon.

  1. Create a schedule that meets your requirements.

    • In Automatic mode, replication is triggered when a new log file is detected.

    • In Periodic mode, replication occurs at fixed time intervals. You can configure the frequency in days, hours, minutes, and seconds, along with an offset.

    If you choose a file-based driver (for example, MySQL or Oracle LogMiner), this setting is locked to Automatic and cannot be changed. For other drivers, Automatic is disabled and you can only choose Periodic.

  2. Select Apply to save the changes.
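Periodic mode's interval-plus-offset behavior can be sketched as follows. The function name and the midnight-anchored interpretation of the offset are assumptions for illustration, not the product's actual scheduler API:

```python
from datetime import datetime, timedelta

def next_run(now, interval, offset):
    """Assumed Periodic-mode behavior: runs recur every `interval`,
    anchored at midnight plus `offset`. Returns the next run time."""
    t = now.replace(hour=0, minute=0, second=0, microsecond=0) + offset
    while t <= now:
        t += interval
    return t

# Every 4 hours with a 30-minute offset: runs at 00:30, 04:30, 08:30, ...
print(next_run(datetime(2024, 1, 1, 10, 5),
               timedelta(hours=4), timedelta(minutes=30)))
# 2024-01-01 12:30:00
```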

Add a New Table to Existing Extracts

If replication is running and you need to add a new table to the extraction process, perform the following:

  1. Disable the scheduler (top right of the screen under the Schedule tab).

  2. Navigate to Dashboard > Tables.

    1. Select the new table(s) by browsing to the database instance name, schema name, and table name(s).

    2. Configure the table with the following options:

      1. Transfer type

      2. Partitioning folder (refer to the Partitioning section for details)

      3. Primary key column(s)

      4. Columns to be masked (optional; masked columns are excluded from replication, e.g., salary data)

    3. Select Apply.

    4. Repeat the process for each additional table.

  3. Navigate to the Operations tab and select Sync New Tables.

This initiates the full extract for the new table(s). Once completed, Ingest automatically resumes processing deltas for both the new and previously configured tables.

Resync Data for Existing Tables

To resync data from the source, follow these steps depending on your requirements.

For Primary Key with History:

  1. Disable the scheduler (top right of the screen under the Schedule tab).

  2. For resyncing data for all configured tables:

    1. Navigate to the Operations tab.

    2. Select Full Extract.

  3. For resyncing data for selected tables:

    1. Navigate to Dashboard > Tables.

    2. Select the table(s) by browsing to the database instance name, schema name, and table name(s).

    3. Enable Redo Initial Extract.

    4. Repeat the process for each table that requires resyncing.

  4. Navigate to the Operations tab and select Sync New Tables to resume processing deltas.

Last modified: March 26, 2026
