INFER TABLE

Infers a table definition (DDL) from input files and returns a CREATE TABLE statement suitable for storing the data in those files. The CREATE TABLE statement returned by INFER TABLE can be reviewed, edited, and subsequently executed to create the required table.

Syntax

INFER TABLE AS LOAD DATA {input_configuration}
[FORMAT {CSV | JSON | AVRO | PARQUET | ICEBERG}]
[AS JSON]
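
For example, the following minimal sketch infers a table definition from a CSV file on the local filesystem; the path is a placeholder. Because CSV is the default format, the FORMAT CSV clause could be omitted.

INFER TABLE AS LOAD DATA FS '/path/to/books.csv'
FORMAT CSV;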

Remarks

  • The input_configuration specifies the configuration for loading files from Apache Kafka, Amazon S3, a local filesystem, Microsoft Azure, HDFS, or Google Cloud Storage. Refer to CREATE PIPELINE for more information on configuration specifications.

  • All options supported by CREATE PIPELINE are supported by INFER TABLE.

  • CSV, JSON, Avro, Parquet, and Iceberg formats are supported.

  • The default format is CSV.

  • TEXT and ENUM types use utf8mb4 charset and utf8mb4_bin collation by default.

  • The AS JSON clause produces the pipeline and table definitions in JSON format, as shown in the example after this list.
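
As a minimal sketch of the AS JSON form, the following statement returns the inferred pipeline and table definitions as JSON rather than as a CREATE TABLE statement. The S3 path, region, and credentials are placeholders, and CSV (the default format) is assumed.

INFER TABLE AS LOAD DATA S3 's3://data_folder/books.csv'
CONFIG '{"region":"<region_name>"}'
CREDENTIALS '{"aws_access_key_id":"<your_access_key_id>",
"aws_secret_access_key":"<your_secret_access_key>"}'
AS JSON;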

Example

The following example demonstrates how to use the INFER TABLE command to infer the schema of an Avro-formatted file in an Amazon S3 bucket.

This example uses data that conforms to the schema of the books table, shown in the following Avro schema:

{"namespace": "books.avro",
"type": "record",
"name": "Book",
"fields": [
{"name": "id", "type": "int"},
{"name": "name", "type": "string"},
{"name": "num_pages", "type": "int"},
{"name": "rating", "type": "double"},
{"name": "publish_timestamp", "type": "long",
"logicalType": "timestamp-micros"} ]}

Refer to Generate an Avro File for an example of generating an Avro file that conforms to this schema.

The following command reads the specified Avro file, infers the table definition, and returns the inferred schema as a CREATE TABLE statement:

INFER TABLE AS LOAD DATA S3 's3://data_folder/books.avro'
CONFIG '{"region":"<region_name>"}'
CREDENTIALS '{
"aws_access_key_id":"<your_access_key_id>",
"aws_secret_access_key":"<your_secret_access_key>",
"aws_session_token":"<your_session_token>"}'
FORMAT AVRO;
"CREATE TABLE `infer_example_table` (
    `id` int(11) NOT NULL,
    `name` longtext CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
    `num_pages` int(11) NOT NULL,
    `rating` double NULL,
    `publish_date` bigint(20) NOT NULL)"
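
As described above, the returned statement can be edited before it is executed. The following sketch (the table name books is illustrative) creates the table from the inferred definition; a CREATE PIPELINE statement with the same input configuration can then be used to load the Avro data into it.

CREATE TABLE `books` (
    `id` int(11) NOT NULL,
    `name` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
    `num_pages` int(11) NOT NULL,
    `rating` double NULL,
    `publish_timestamp` bigint(20) NOT NULL);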

Refer to Schema and Pipeline Inference - Examples for more examples.

Last modified: November 14, 2025
