
Migrate to Fusion in Fabric from Fusion v3.8

Overview

This guide walks through:

  1. Installing the Fusion in Fabric workload
  2. Redeploying Terraform in migration mode
  3. Connecting the Fabric workload to Fusion
  4. Migrating historical data to Eventhouse
  5. Removing ADX when ready

Note: Keep ADX online until Fabric ingestion and validation are complete.

Support Coordination

To minimize temporary duplicate costs during migration (running ADX and Fabric in parallel), contact Fusion Data Hub Support to coordinate with your team and Microsoft on timing and cutover planning.

Step 1: Install the Fabric workload

Install the Fusion in Fabric workload in the customer's Fabric capacity.

  1. Register the customer in the Fabric workload administration site.
  2. Connect the workload to an Eventhouse (existing or newly created).
  3. Record the Eventhouse values needed later (fabricDatabase, query URI, and ingest URI).

Step 2: Redeploy Terraform in migration mode

The Fusion 3.8 Terraform configuration introduces these variables:

  • kqlMode
  • fabricIngestUri
  • fabricQueryUri
  • fabricDatabase

Existing deployments only

kqlMode = "migration" is supported only for existing Fusion deployments (first_run=false). Do not use migration mode during an initial deployment (first_run=true).

  1. Update Terraform variables (for example, in main.tf):
  • kqlMode = "migration"
  • Set fabricIngestUri, fabricQueryUri, and fabricDatabase to the values recorded in Step 1.
  2. Run:
terraform apply

This adds migration-related resources (including Azure Data Factory and event frame relay) and reconfigures existing resources.
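
A minimal sketch of the variable settings in main.tf, assuming plain top-level assignments; substitute the Eventhouse values recorded in Step 1 (the placeholder names are illustrative, not guaranteed formats):

# main.tf (sketch) — enable migration mode; requires an existing deployment (first_run=false)
kqlMode         = "migration"
# Eventhouse connection values recorded in Step 1
fabricIngestUri = "<EVENTHOUSE_INGEST_URI>"
fabricQueryUri  = "<EVENTHOUSE_QUERY_URI>"
fabricDatabase  = "<EVENTHOUSE_DATABASE>"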

Step 3: Connect Fabric workload to Fusion

  1. In the Fabric workload, use Connect to Fusion.
  2. Select the Azure resources for the Fusion 3.8 instance and confirm.
  3. Trigger a model read.
  4. Validate that data from Elevate flows to both ADX and Fabric.

Step 4: Migrate historical data

By default, Fusion continuously exports ADX data to backup storage (typically ADLS). If backup data is unavailable, complete the Export Data section (below) first.

Use Fusion.LightIngest.exe to load the backed-up Parquet files into the Fabric Eventhouse.

Example:

Fusion.LightIngest.exe "<EVENTHOUSE_INGEST_URI>" -db:<EVENTHOUSE_DATABASE> -table:ProcessedIngestion -source:<BACKUP_STORAGE_SAS_URI> -pattern:*.parquet -format:parquet -creationTimePattern:"'data-lake/'yyyy/MM/dd'/'" -mappingPath:mapping.json -sort -report:ingest.csv -resume:true

Set the following values for your environment:

  • <EVENTHOUSE_INGEST_URI>: Eventhouse ingestion URI
  • <EVENTHOUSE_DATABASE>: Eventhouse database name
  • <BACKUP_STORAGE_SAS_URI>: SAS URI to backed-up data
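
For illustration only, a fully substituted invocation might look like the following (every value is hypothetical, including the database name and the full -report output path discussed in the note below):

Fusion.LightIngest.exe "https://ingest-<eventhouse>.kusto.fabric.microsoft.com" -db:FusionHistory -table:ProcessedIngestion -source:"https://stcontoso.blob.core.windows.net/backup?<SAS_TOKEN>" -pattern:*.parquet -format:parquet -creationTimePattern:"'data-lake/'yyyy/MM/dd'/'" -mappingPath:mapping.json -sort -report:C:\logs\ingest.csv -resume:true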

SAS guidance:

  • Allowed services: blob
  • Allowed resource types: service, container, object
  • Allowed permissions: read, list
  • Include the data-lake segment in the path when the backup files are stored under that folder
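
As a sketch, an account-level SAS matching these settings can be generated with the Azure CLI (account name, key, and expiry are placeholders):

az storage account generate-sas --account-name <BACKUP_STORAGE_ACCOUNT> --account-key <ACCOUNT_KEY> --services b --resource-types sco --permissions rl --expiry <EXPIRY_UTC> --https-only

Append the returned token to the backup container URL (including the data-lake path segment when applicable) to form <BACKUP_STORAGE_SAS_URI>.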
Note: If the runtime directory is not writable, provide a full output path for -report.

Behavior notes:

  • File discovery and sorting can take significant time for large backups.
  • Progress is reported every 10 seconds.
  • Re-running with -resume:true imports only files not already ingested successfully.
  • If recovery is required, you can delete and recreate the Eventhouse and restart the migration.
  • ADX continues serving production workloads until cutover is complete.

Step 5: Remove ADX (after cutover)

When Fabric is fully validated and ADX is no longer required:

  1. Set in main.tf:
kqlMode = "fabric"
  2. In the Fusion Azure resource group, delete the following locks (see the Azure CLI sketch below):
  • ADXFollowerClusterLock
  • ADXLeaderClusterLock
  • st{company}
  3. Run:
terraform apply

Resources used only for ADX will be removed, including the ADX instance.
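
A sketch of the lock deletion with the Azure CLI, assuming a placeholder resource group name and the lock names listed above (st{company} uses your company identifier):

az lock delete --resource-group <FUSION_RESOURCE_GROUP> --name ADXFollowerClusterLock
az lock delete --resource-group <FUSION_RESOURCE_GROUP> --name ADXLeaderClusterLock
az lock delete --resource-group <FUSION_RESOURCE_GROUP> --name st{company}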

Warning: This action removes ADX and the ADLS-backed data associated with that ADX deployment. Confirm backup and retention requirements before running this step.

Export Data (if backup data is unavailable)

If continuous export/backup data is unavailable, export directly from ADX before migration.

Use Fusion.LightExtract.exe with a target Azure Storage account (ideally in the same region as ADX; the cool access tier is recommended).
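
As a sketch, a suitable cool-tier target account could be created with the Azure CLI (name, resource group, and region are placeholders):

az storage account create --name <EXPORT_STORAGE_ACCOUNT> --resource-group <RESOURCE_GROUP> --location <ADX_REGION> --kind StorageV2 --sku Standard_LRS --access-tier Cool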

Example:

Fusion.LightExtract.exe --adx-endpoint <ADX_QUERY_URI> --database <DATABASE_NAME> --storage-connection <BLOB_SAS_URL_WITH_CONTAINER>

Additional options are available for authentication and execution control (see the example after this list):

  • --tenant-id
  • --client-id
  • --client-secret
  • --interactive
  • End-date and parallelism controls
  • Table selection controls
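
For example, non-interactive authentication with a service principal might look like this (a sketch using only the authentication flags listed above; all values are placeholders):

Fusion.LightExtract.exe --adx-endpoint <ADX_QUERY_URI> --database <DATABASE_NAME> --storage-connection <BLOB_SAS_URL_WITH_CONTAINER> --tenant-id <TENANT_ID> --client-id <CLIENT_ID> --client-secret <CLIENT_SECRET>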

Fusion.LightExtract queues ADX export commands and writes data by creation date to storage, with progress logged in both human-readable and CSV formats.

If the export is interrupted, restart it using the same configured end date to continue safely.