# Microsoft Fabric Reference

Reference material for Microsoft Fabric Publishing: data type mappings, querying, troubleshooting, and CLI arguments.
## Data Type Mapping

Source types are mapped to Fabric-compatible Delta Lake types.
### PostgreSQL to Fabric
| PostgreSQL Type | Delta Lake Type |
|---|---|
| INTEGER, INT4 | INT |
| BIGINT, INT8 | BIGINT |
| SMALLINT, INT2 | SMALLINT |
| NUMERIC(p,s) | DECIMAL(p,s) |
| REAL, FLOAT4 | FLOAT |
| DOUBLE PRECISION | DOUBLE |
| VARCHAR(n), TEXT | STRING |
| DATE | DATE |
| TIMESTAMP | TIMESTAMP |
| TIMESTAMPTZ | TIMESTAMP |
| BOOLEAN | BOOLEAN |
| BYTEA | BINARY |
### SQL Server to Fabric
| SQL Server Type | Delta Lake Type |
|---|---|
| INT | INT |
| BIGINT | BIGINT |
| SMALLINT | SMALLINT |
| TINYINT | TINYINT |
| DECIMAL(p,s) | DECIMAL(p,s) |
| FLOAT | DOUBLE |
| REAL | FLOAT |
| VARCHAR(n), NVARCHAR(n) | STRING |
| DATE | DATE |
| DATETIME, DATETIME2 | TIMESTAMP |
| DATETIMEOFFSET | TIMESTAMP |
| BIT | BOOLEAN |
| VARBINARY | BINARY |
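The mappings above are essentially lookup tables, with a little extra handling for parameterized types. A minimal sketch of the PostgreSQL side, using a hypothetical `pg_to_delta` helper that is not part of LakeXpress itself:

```python
import re

# Lookup table distilled from the PostgreSQL mapping above.
PG_TO_DELTA = {
    "INTEGER": "INT", "INT4": "INT",
    "BIGINT": "BIGINT", "INT8": "BIGINT",
    "SMALLINT": "SMALLINT", "INT2": "SMALLINT",
    "REAL": "FLOAT", "FLOAT4": "FLOAT",
    "DOUBLE PRECISION": "DOUBLE",
    "VARCHAR": "STRING", "TEXT": "STRING",
    "DATE": "DATE", "TIMESTAMP": "TIMESTAMP", "TIMESTAMPTZ": "TIMESTAMP",
    "BOOLEAN": "BOOLEAN", "BYTEA": "BINARY",
}

def pg_to_delta(pg_type: str) -> str:
    """Map a PostgreSQL type name to its Delta Lake equivalent."""
    t = pg_type.strip().upper()
    # NUMERIC(p,s) keeps its precision and scale: NUMERIC(10,2) -> DECIMAL(10,2)
    m = re.fullmatch(r"NUMERIC\((\d+),\s*(\d+)\)", t)
    if m:
        return f"DECIMAL({m.group(1)},{m.group(2)})"
    # VARCHAR(n) drops its length: Delta Lake strings are unbounded
    if re.fullmatch(r"VARCHAR\(\d+\)", t):
        return "STRING"
    return PG_TO_DELTA[t]

print(pg_to_delta("NUMERIC(10,2)"))  # DECIMAL(10,2)
print(pg_to_delta("varchar(255)"))   # STRING
print(pg_to_delta("timestamptz"))    # TIMESTAMP
```

The SQL Server table works the same way; only the key set and the parameterized-type rules differ.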
## Querying Tables

### SQL Analytics Endpoint
```sql
-- Query a Delta table (the SQL analytics endpoint speaks T-SQL, so use TOP rather than LIMIT)
SELECT TOP (10) * FROM your_lakehouse.dbo.customer;

-- Aggregation
SELECT
    c_nationkey,
    COUNT(*) AS customer_count,
    SUM(c_acctbal) AS total_balance
FROM your_lakehouse.dbo.customer
GROUP BY c_nationkey
ORDER BY customer_count DESC;
```
### Spark Notebooks
```python
# Read a Delta table
df = spark.read.table("customer")
df.show(10)

# Query with SQL and keep the result for writing back
result_df = spark.sql("""
    SELECT c_nationkey, COUNT(*) AS cnt
    FROM customer
    GROUP BY c_nationkey
""")
result_df.show()

# Write results back as a new Delta table
result_df.write.mode("overwrite").saveAsTable("customer_summary")
```
## Troubleshooting

### Common Issues

#### Authentication Errors
```text
Error: AADSTS7000215: Invalid client secret provided
```

Regenerate your client secret in Azure AD and update `credentials.json`.
```text
Error: The user or service principal does not have access to the workspace
```
- Verify the Service Principal is added to the workspace
- Ensure it has Member or Contributor role
- Wait a few minutes for permissions to propagate
#### OneLake Connection Issues

```text
Error: Unable to connect to OneLake storage
```

- Verify the directory format: `onelake://workspace-name/lakehouse-name/`
- Check that the Service Principal has the Storage Blob Data Contributor role
- Ensure workspace and lakehouse names are correct (case-sensitive)
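The first check above, verifying the directory format, can be sketched with a small helper. This is an illustrative validator, not part of the LakeXpress CLI; it only checks the shape of the URI, not whether the workspace or lakehouse actually exists:

```python
from urllib.parse import urlparse

def parse_onelake_path(path: str) -> tuple[str, str]:
    """Split an onelake://workspace-name/lakehouse-name/ path into its parts."""
    parsed = urlparse(path)
    if parsed.scheme != "onelake":
        raise ValueError(f"expected onelake:// scheme, got {path!r}")
    workspace = parsed.netloc
    lakehouse = parsed.path.strip("/")
    if not workspace or not lakehouse or "/" in lakehouse:
        raise ValueError(f"expected onelake://<workspace>/<lakehouse>/, got {path!r}")
    return workspace, lakehouse

print(parse_onelake_path("onelake://my-workspace/my-lakehouse/"))
# ('my-workspace', 'my-lakehouse')
```

Because names are case-sensitive, the returned parts should match the Fabric portal exactly, including case.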
#### Table Creation Failures

```text
Error: Failed to create table in Lakehouse
```
- Verify the SQL analytics endpoint is correct
- Check that the Lakehouse is active (not paused)
- Ensure sufficient capacity units are available
- Confirm Parquet files were exported to OneLake
#### SQL Endpoint Connection Issues

```text
Error: Cannot connect to SQL analytics endpoint
```
- Verify the SQL endpoint hostname
- Ensure the SQL analytics endpoint is enabled
- Check firewall rules if connecting from on-premises
### Verifying Export Success

Check files in OneLake:

1. Open your Lakehouse in the Fabric portal
2. Go to the Files section
3. Verify Parquet files exist in the expected folders

Check tables:

1. Open the SQL analytics endpoint
2. Expand Tables in the object explorer
3. Verify your tables appear
### Debug Mode

Run the sync with verbose logging enabled:

```shell
./LakeXpress sync --log_lev DEBUG
```
## CLI Reference

### Fabric Publishing Arguments
| Option | Type | Description |
|---|---|---|
| `--publish_target ID` | String | Credential ID for Fabric publishing (required) |
| `--publish_method METHOD` | Enum | `internal` (Delta tables) or `external` (SQL views) |
| `--publish_table_pattern PATTERN` | String | Table naming pattern (default: `{table}`) |
| `--n_jobs N` | Integer | Parallel workers for table creation (default: 1) |
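The `--publish_table_pattern` placeholders behave like Python format fields. A minimal sketch of the expansion, assuming `{schema}` and `{table}` are the supported placeholders (the full placeholder set may differ) and using a hypothetical `expand_pattern` helper:

```python
def expand_pattern(pattern: str, schema: str, table: str) -> str:
    """Expand a table naming pattern such as "{schema}_{table}"."""
    return pattern.format(schema=schema, table=table)

print(expand_pattern("{table}", "tpch_1", "customer"))           # customer
print(expand_pattern("{schema}_{table}", "tpch_1", "customer"))  # tpch_1_customer
```

With the default `{table}`, two source schemas containing a table of the same name would collide in the Lakehouse; a pattern like `{schema}_{table}` keeps the published names distinct.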
### Full Example
```shell
./LakeXpress config create \
  -a credentials.json \
  --lxdb_auth_id lxdb_postgres \
  --source_db_auth_id source_postgres \
  --source_db_name tpch \
  --source_schema_name tpch_1 \
  --fastbcp_dir_path ./FastBCP_linux-x64/latest/ \
  --fastbcp_p 2 \
  --n_jobs 4 \
  --target_storage_id onelake_storage \
  --publish_target fabric_lakehouse \
  --publish_method internal \
  --publish_table_pattern "{schema}_{table}" \
  --generate_metadata

./LakeXpress sync
```
## See Also
- Microsoft Fabric Publishing - Setup and usage guide
- Intermediate Storage - OneLake configuration
- CLI Reference - All command-line options