
MotherDuck Reference

Reference material for MotherDuck Publishing: data type mappings, CLI arguments, querying, and troubleshooting.

Data Type Mapping

Source types map to DuckDB-compatible types automatically.

PostgreSQL to DuckDB/MotherDuck

| PostgreSQL Type | DuckDB Type |
| --- | --- |
| INTEGER, INT4 | INTEGER |
| BIGINT, INT8 | BIGINT |
| SMALLINT, INT2 | SMALLINT |
| NUMERIC(p,s) | DECIMAL(p,s) |
| REAL, FLOAT4 | REAL |
| DOUBLE PRECISION | DOUBLE |
| VARCHAR(n), TEXT | VARCHAR |
| DATE | DATE |
| TIMESTAMP | TIMESTAMP |
| TIMESTAMPTZ | TIMESTAMPTZ |
| TIME | TIME |
| BOOLEAN | BOOLEAN |
| BYTEA | BLOB |
| JSON, JSONB | JSON |
| UUID | UUID |
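After publishing, you can verify how columns were actually mapped by describing the table in MotherDuck. A minimal sketch, using the example database and table names that appear later on this page:

```sql
-- Inspect the resulting DuckDB column types of a published table.
DESCRIBE my_analytics.lx_tpch_1.customer;

-- Or query the information schema for specific columns:
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'lx_tpch_1' AND table_name = 'customer';
```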

SQL Server to DuckDB/MotherDuck

| SQL Server Type | DuckDB Type |
| --- | --- |
| INT | INTEGER |
| BIGINT | BIGINT |
| SMALLINT | SMALLINT |
| TINYINT | TINYINT |
| DECIMAL(p,s) | DECIMAL(p,s) |
| MONEY | DECIMAL(19,4) |
| FLOAT | DOUBLE |
| REAL | REAL |
| VARCHAR(n), NVARCHAR(n) | VARCHAR |
| DATE | DATE |
| DATETIME, DATETIME2 | TIMESTAMP |
| DATETIMEOFFSET | TIMESTAMPTZ |
| TIME | TIME |
| BIT | BOOLEAN |
| VARBINARY | BLOB |
| UNIQUEIDENTIFIER | UUID |

Oracle to DuckDB/MotherDuck

| Oracle Type | DuckDB Type |
| --- | --- |
| NUMBER | DECIMAL(38,9) |
| NUMBER(p,s) | DECIMAL(p,s) |
| FLOAT | DOUBLE |
| VARCHAR2, NVARCHAR2 | VARCHAR |
| CLOB, NCLOB | VARCHAR |
| DATE | TIMESTAMP |
| TIMESTAMP | TIMESTAMP |
| RAW, BLOB | BLOB |

CLI Reference

MotherDuck Publishing Arguments

| Option | Type | Description |
| --- | --- | --- |
| --publish_target ID | String | Credential ID for MotherDuck publishing (required) |
| --publish_schema_pattern PATTERN | String | Dynamic schema naming pattern (default: {schema}) |
| --publish_table_pattern PATTERN | String | Dynamic table naming pattern (default: {table}) |
| --publish_method TYPE | String | Table type: external (default) or internal |
| --n_jobs N | Integer | Number of parallel workers for table creation (default: 1) |
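The `{schema}` and `{table}` defaults suggest the naming patterns use format-style placeholders. A minimal sketch of how such a pattern might expand; the placeholder syntax here is an assumption inferred from the documented defaults, not a confirmed implementation detail:

```python
# Sketch: expanding publish naming patterns with format-style placeholders.
# Assumption: patterns substitute {schema} and {table}, as the defaults imply.

def expand_pattern(pattern: str, schema: str, table: str) -> str:
    """Substitute {schema} and {table} placeholders in a naming pattern."""
    return pattern.format(schema=schema, table=table)

# The default patterns leave names unchanged.
assert expand_pattern("{schema}", "lx_tpch_1", "customer") == "lx_tpch_1"
assert expand_pattern("{table}", "lx_tpch_1", "customer") == "customer"

# A custom pattern can prefix or combine names.
print(expand_pattern("prod_{schema}", "lx_tpch_1", "customer"))      # prod_lx_tpch_1
print(expand_pattern("{schema}__{table}", "lx_tpch_1", "customer"))  # lx_tpch_1__customer
```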

Querying MotherDuck Tables

Once published, query tables via any of these methods.

MotherDuck Web UI:

Log in to app.motherduck.com and run:

SELECT * FROM my_analytics.lx_tpch_1.customer LIMIT 10;

DuckDB CLI:

duckdb "md:my_analytics?motherduck_token=your_token"
SELECT * FROM lx_tpch_1.customer LIMIT 10;

Python (duckdb):

import duckdb

conn = duckdb.connect("md:my_analytics?motherduck_token=your_token")
df = conn.execute("SELECT * FROM lx_tpch_1.customer LIMIT 10").df()
print(df)

Python with environment variable:

import os
import duckdb

os.environ["motherduck_token"] = "your_token"
conn = duckdb.connect("md:my_analytics")
df = conn.execute("SELECT * FROM lx_tpch_1.customer LIMIT 10").df()
print(df)

Troubleshooting

Common Issues

"Authentication failed":

  • Verify your token is valid and not expired
  • Check token permissions for the target database
  • For env mode, ensure the motherduck_token environment variable is set

"Database not found":

  • Verify the database name matches an existing MotherDuck database
  • Create the database in MotherDuck UI before publishing
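As an alternative to the UI, the database can be created from any connected DuckDB session before publishing. A minimal sketch; the database name is a placeholder:

```sql
-- Run from the MotherDuck UI or a connected DuckDB session.
CREATE DATABASE my_analytics;

-- Confirm it exists:
SHOW DATABASES;
```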

"IO Error: No files found that match the pattern" or "Permission error: Missing or invalid credentials":

  • This means MotherDuck cannot access your private S3/GCS/Azure bucket
  • Create a secret in MotherDuck with your cloud storage credentials (see Storage Access)
  • Verify secrets are configured: SELECT * FROM duckdb_secrets();
  • Test access: SELECT count(*) FROM read_parquet('s3://your-bucket/path/*.parquet');

"Cannot read file" (external views):

  • Verify cloud storage credentials in MotherDuck
  • Check that the S3/GCS/Azure path contains valid Parquet files
  • Ensure the bucket allows access from MotherDuck
  • Verify the files exist: aws s3 ls s3://your-bucket/path/ --recursive | head

"Connection timeout":

  • Check network connectivity
  • MotherDuck requires outbound HTTPS
  • Verify no firewall is blocking the connection

Verifying Setup

Test MotherDuck connectivity:

duckdb "md:my_analytics?motherduck_token=your_token" -c "SELECT 1"

Test cloud storage access from MotherDuck:

SELECT * FROM read_parquet('s3://your-bucket/path/to/file.parquet') LIMIT 1;

Configuring Storage Secrets

See Storage Access from MotherDuck in Prerequisites for detailed instructions on configuring S3, GCS, or Azure secrets.

Comparison with Other Targets

| Feature | MotherDuck | BigQuery | Snowflake | DuckLake |
| --- | --- | --- | --- | --- |
| Query Engine | DuckDB | BigQuery | Snowflake | DuckDB |
| External Tables | Views with read_parquet() | External tables | External tables | DuckLake catalog |
| Native Tables | CREATE TABLE AS SELECT | Load jobs | COPY INTO | Not supported |
| Serverless | Yes | Yes | No (warehouse required) | N/A (local) |
| SQL Dialect | DuckDB/PostgreSQL | BigQuery SQL | Snowflake SQL | DuckDB/PostgreSQL |
| Best For | Analytics, exploration | Large-scale analytics | Enterprise data warehouse | Local development |

Copyright © 2026 Architecture & Performance.