Microsoft Fabric Publishing

LakeXpress creates Microsoft Fabric Lakehouse tables from exported Parquet files. Query data in Fabric using SQL, Spark, or Power BI via managed Delta tables or SQL analytics views.

Prerequisites

1. Microsoft Fabric Setup

Required Resources:

  • Microsoft Fabric workspace with a Lakehouse
  • Service Principal with appropriate permissions
  • OneLake storage access

Required Permissions:

  • Workspace: Member or Contributor role
  • Lakehouse: Read and Write access
  • OneLake: Storage Blob Data Contributor

2. Azure AD App Registration

Create a Service Principal:

  1. Azure Portal > Azure Active Directory > App registrations
  2. Click New registration
  3. Name the app (e.g., "LakeXpress-Fabric")
  4. Select Accounts in this organizational directory only
  5. Click Register
  6. Note the Application (client) ID and Directory (tenant) ID
  7. Go to Certificates & secrets > New client secret
  8. Create a secret and save the Value (shown only once)

3. Grant Fabric Permissions

Add the Service Principal to your workspace:

  1. Open your Fabric workspace
  2. Click Manage access
  3. Click Add people or groups
  4. Search for your Service Principal
  5. Assign Member or Contributor role
  6. Click Add

4. Find Fabric Configuration Values

Workspace ID and Lakehouse ID:

  1. Open your Lakehouse in the Fabric portal
  2. Extract from the URL:
    https://app.fabric.microsoft.com/groups/{workspace_id}/lakehouses/{lakehouse_id}
  3. Copy workspace_id (GUID after /groups/)
  4. Copy lakehouse_id (GUID after /lakehouses/)
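
As a sanity check, both GUIDs can be pulled out of the portal URL programmatically. This is a plain-Python sketch (not part of LakeXpress), assuming the URL shape shown above with a placeholder workspace and Lakehouse:

```python
import re

# Hypothetical example URL in the shape shown above.
url = ("https://app.fabric.microsoft.com/groups/"
       "11111111-2222-3333-4444-555555555555/lakehouses/"
       "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")

# One 36-character GUID after /groups/, one after /lakehouses/.
m = re.search(r"/groups/([0-9a-fA-F-]{36})/lakehouses/([0-9a-fA-F-]{36})", url)
workspace_id, lakehouse_id = m.groups()
print(workspace_id)   # GUID after /groups/
print(lakehouse_id)   # GUID after /lakehouses/
```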

Lakehouse Name:

The display name shown in the Fabric portal.

SQL Analytics Endpoint:

  1. In your Lakehouse, click SQL analytics endpoint in the bottom pane
  2. Click Copy SQL connection string
  3. Extract the hostname (e.g., abc123xyz.datawarehouse.fabric.microsoft.com)

5. Credentials Configuration

Both OneLake storage and Fabric publishing credentials go in credentials.json:

{
  "onelake_storage": {
    "ds_type": "onelake",
    "auth_mode": "service_principal",
    "info": {
      "directory": "onelake://your-workspace-name/your-lakehouse-name/",
      "azure_client_id": "your-application-client-id",
      "azure_tenant_id": "your-directory-tenant-id",
      "azure_client_secret": "your-client-secret"
    }
  },
  "fabric_lakehouse": {
    "ds_type": "fabric",
    "auth_mode": "service_principal",
    "fabric_target": "lakehouse",
    "info": {
      "workspace_id": "your-workspace-id",
      "lakehouse_id": "your-lakehouse-id",
      "lakehouse_name": "your-lakehouse-name",
      "sql_endpoint": "your-sql-endpoint.datawarehouse.fabric.microsoft.com",
      "azure_client_id": "your-application-client-id",
      "azure_tenant_id": "your-directory-tenant-id",
      "azure_client_secret": "your-client-secret"
    }
  }
}

Configuration Fields:

Field                 Description                       Required
ds_type               Must be "fabric"                  Yes
auth_mode             Must be "service_principal"      Yes
fabric_target         Must be "lakehouse"               Yes
workspace_id          Fabric workspace GUID             Yes
lakehouse_id          Lakehouse GUID                    Yes
lakehouse_name        Lakehouse display name            Yes
sql_endpoint          SQL analytics endpoint hostname   Yes
azure_client_id       Application (client) ID           Yes
azure_tenant_id       Directory (tenant) ID             Yes
azure_client_secret   Client secret value               Yes
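
Since every field is required, a quick pre-flight check can catch a missing key before a sync run. This is a generic sketch (the function name and check are illustrative, not a LakeXpress API):

```python
REQUIRED = [
    "workspace_id", "lakehouse_id", "lakehouse_name", "sql_endpoint",
    "azure_client_id", "azure_tenant_id", "azure_client_secret",
]

def missing_fabric_fields(creds: dict, block: str = "fabric_lakehouse") -> list:
    """Return the names of required info fields absent (or empty) in the block."""
    info = creds.get(block, {}).get("info", {})
    return [f for f in REQUIRED if not info.get(f)]

# Example: a credentials dict missing its client secret.
creds = {"fabric_lakehouse": {"ds_type": "fabric", "info": {
    "workspace_id": "w", "lakehouse_id": "l", "lakehouse_name": "n",
    "sql_endpoint": "host", "azure_client_id": "c", "azure_tenant_id": "t",
}}}
print(missing_fabric_fields(creds))  # ['azure_client_secret']
```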

Authentication Setup

Fabric authenticates via Service Principal (Azure AD).

Service Principal Authentication

{
  "fabric_lakehouse": {
    "ds_type": "fabric",
    "auth_mode": "service_principal",
    "fabric_target": "lakehouse",
    "info": {
      "workspace_id": "your-workspace-id",
      "lakehouse_id": "your-lakehouse-id",
      "lakehouse_name": "your-lakehouse-name",
      "sql_endpoint": "your-sql-endpoint.datawarehouse.fabric.microsoft.com",
      "azure_client_id": "your-application-client-id",
      "azure_tenant_id": "your-directory-tenant-id",
      "azure_client_secret": "your-client-secret"
    }
  }
}

OneLake Directory Format

The OneLake directory must match your Fabric workspace and Lakehouse:

onelake://workspace-name/lakehouse-name/

Or using the Files path:

onelake://workspace-name/lakehouse-name.Lakehouse/Files/
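
Both accepted shapes can be validated with a simple pattern before they go into credentials.json. A minimal sketch, assuming only the two directory forms shown above are valid:

```python
import re

# Matches the two directory shapes shown above:
#   onelake://workspace-name/lakehouse-name/
#   onelake://workspace-name/lakehouse-name.Lakehouse/Files/
PATTERN = re.compile(r"^onelake://[^/]+/(?:[^/]+/|[^/]+\.Lakehouse/Files/)$")

def is_valid_onelake_dir(path: str) -> bool:
    return bool(PATTERN.match(path))

print(is_valid_onelake_dir("onelake://ws/lh/"))                  # True
print(is_valid_onelake_dir("onelake://ws/lh.Lakehouse/Files/"))  # True
print(is_valid_onelake_dir("onelake://ws/lh"))                   # False (missing trailing slash)
```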

Table Types

LakeXpress supports two Fabric Lakehouse table modes.

Managed Tables (Delta)

Managed Delta tables are created in the Lakehouse Tables section. Data is converted to Delta Lake format, with ACID transactions, time travel, and automatic availability through the SQL analytics endpoint.

  • Full Delta Lake features (time travel, ACID, versioning)
  • Optimized query performance
  • Native Power BI integration
  • Best for production workloads
./LakeXpress config create \
  -a credentials.json \
  --lxdb_auth_id lxdb \
  --source_db_auth_id postgres_prod \
  --target_storage_id onelake_storage \
  --publish_target fabric_lakehouse \
  --publish_method internal \
  ...

./LakeXpress sync

External Tables (SQL Analytics Views)

External tables are SQL views that reference the Parquet files in the Files section. The data stays in Parquet format and queries read the files directly. These views are available only via the SQL analytics endpoint.

  • No data conversion overhead
  • Available immediately after export
  • Suited for exploration and ad-hoc analysis
  • Preserves original Parquet format
./LakeXpress config create \
  -a credentials.json \
  --lxdb_auth_id lxdb \
  --source_db_auth_id postgres_prod \
  --target_storage_id onelake_storage \
  --publish_target fabric_lakehouse \
  --publish_method external \
  ...

./LakeXpress sync

Configuration Options

Option                             Description                                  Default
--publish_target ID                Credential ID for Fabric target (required)   -
--publish_method METHOD            internal (Delta) or external (views)         internal
--publish_schema_pattern PATTERN   Not applicable for Fabric (uses Lakehouse)   -
--publish_table_pattern PATTERN    Table naming pattern                         {table}
--n_jobs N                         Parallel workers for table creation          1

Dynamic Naming Patterns

Table names support token-based patterns.

Supported Tokens

Token        Description                           Example Output
{schema}     Source schema name                    tpch_1
{table}      Source table name                     customer
{database}   Source database name                  tpch
{date}       Current date (YYYYMMDD)               20251210
{timestamp}  Current timestamp (YYYYMMDD_HHMMSS)   20251210_143022
{uuid}       UUID4 (consistent per run)            a1b2c3d4-...
{subpath}    CLI --sub_path parameter              staging
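
The substitution behaves like straightforward token replacement. A minimal illustration in plain Python using str.format-style tokens (this mimics the behavior described above; it is not the actual LakeXpress implementation):

```python
import uuid
from datetime import datetime

def expand_pattern(pattern, schema, table, database="", subpath="", run_uuid=None):
    """Expand the naming tokens listed above into a concrete table name."""
    now = datetime.now()
    tokens = {
        "schema": schema,
        "table": table,
        "database": database,
        "date": now.strftime("%Y%m%d"),
        "timestamp": now.strftime("%Y%m%d_%H%M%S"),
        "uuid": run_uuid or str(uuid.uuid4()),  # generated once, reused per run
        "subpath": subpath,
    }
    return pattern.format(**tokens)

print(expand_pattern("src_{schema}_{table}", "tpch_1", "customer"))
# src_tpch_1_customer
```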

Common Patterns

Prefixed Tables

./LakeXpress config create \
  --publish_table_pattern "src_{schema}_{table}" \
  --publish_target fabric_lakehouse \
  --publish_method internal \
  ...

# Result: src_tpch_1_customer, src_tpch_1_orders

Schema-Based Naming

./LakeXpress config create \
  --publish_table_pattern "{schema}_{table}" \
  --publish_target fabric_lakehouse \
  --publish_method internal \
  ...

# Result: tpch_1_customer, tpch_1_orders

Date-Stamped Tables

./LakeXpress config create \
  --publish_table_pattern "{table}_{date}" \
  --publish_target fabric_lakehouse \
  --publish_method internal \
  ...

# Result: customer_20251210, orders_20251210

SQL Views with Prefix

./LakeXpress config create \
  --publish_table_pattern "VW_{schema}_{table}" \
  --publish_target fabric_lakehouse \
  --publish_method external \
  ...

# Result: VW_tpch_1_customer, VW_tpch_1_orders

Usage Examples

Example 1: PostgreSQL to Fabric Delta Tables

./LakeXpress config create \
  -a credentials.json \
  --lxdb_auth_id lxdb \
  --source_db_auth_id postgres_prod \
  --source_db_name tpch \
  --source_schema_name tpch_1 \
  --fastbcp_dir_path ./FastBCP_linux-x64/latest/ \
  --fastbcp_p 2 \
  --n_jobs 4 \
  --target_storage_id onelake_storage \
  --publish_target fabric_lakehouse \
  --publish_method internal \
  --generate_metadata

./LakeXpress sync

Tables (customer, orders, lineitem, etc.) appear in the Lakehouse Tables section, queryable via SQL analytics endpoint and Power BI.

Example 2: SQL Server to Fabric with Custom Naming

./LakeXpress config create \
  -a credentials.json \
  --lxdb_auth_id lxdb \
  --source_db_auth_id mssql_prod \
  --source_db_name adventureworksdw \
  --source_schema_name dbo \
  --fastbcp_dir_path ./FastBCP_linux-x64/latest/ \
  --fastbcp_p 2 \
  --n_jobs 4 \
  --target_storage_id onelake_storage \
  --publish_target fabric_lakehouse \
  --publish_method internal \
  --publish_table_pattern "dw_{table}"

./LakeXpress sync

Creates tables dw_DimCustomer, dw_DimProduct, dw_FactSales, etc.

Example 3: SQL Analytics Views (External)

./LakeXpress config create \
  -a credentials.json \
  --lxdb_auth_id lxdb \
  --source_db_auth_id postgres_prod \
  --source_db_name tpch \
  --source_schema_name tpch_1 \
  --fastbcp_dir_path ./FastBCP_linux-x64/latest/ \
  --fastbcp_p 2 \
  --n_jobs 4 \
  --target_storage_id onelake_storage \
  --publish_target fabric_lakehouse \
  --publish_method external \
  --publish_table_pattern "VW_{schema}_{table}"

./LakeXpress sync

Parquet files land in the Files section; views VW_tpch_1_customer, VW_tpch_1_orders, etc. appear in the SQL analytics endpoint.

Example 4: Parallel Table Creation

./LakeXpress config create \
  -a credentials.json \
  --lxdb_auth_id lxdb \
  --source_db_auth_id postgres_prod \
  --source_db_name tpch \
  --source_schema_name tpch_1 \
  --fastbcp_dir_path ./FastBCP_linux-x64/latest/ \
  --fastbcp_p 4 \
  --n_jobs 8 \
  --target_storage_id onelake_storage \
  --publish_target fabric_lakehouse \
  --publish_method internal

./LakeXpress sync

Example 5: MySQL to Fabric

./LakeXpress config create \
  -a credentials.json \
  --lxdb_auth_id lxdb \
  --source_db_auth_id mysql_prod \
  --source_db_name ecommerce \
  --source_schema_name ecommerce \
  --fastbcp_dir_path ./FastBCP_linux-x64/latest/ \
  --fastbcp_p 2 \
  --n_jobs 4 \
  --target_storage_id onelake_storage \
  --publish_target fabric_lakehouse \
  --publish_method internal \
  --publish_table_pattern "mysql_{table}"

./LakeXpress sync

See Also

Copyright © 2026 Architecture & Performance.