# Fusion 3.6 to Fusion 3.8 in Fabric

## Overview
This guide walks through:
- Upgrading Fusion and Elevate to 3.8
- Installing the Fusion in Fabric workload
- Redeploying terraform in migration mode
- Connecting the Fabric workload to Fusion
- Migrating historical data to Eventhouse
- Removing ADX when ready
Keep ADX online until Fabric ingestion and validation are complete.
To minimize temporary duplicate costs during migration (running ADX and Fabric in parallel), contact Fusion Data Hub Support to coordinate with your team and Microsoft on timing and cutover planning.
## Step 1: Upgrade to Fusion and Elevate 3.8

Follow the standard 3.8 upgrade process for Fusion and Elevate.

- In `main.tf`, set:

  ```hcl
  kqlMode = "adx"
  ```

- In the Fusion Azure resource group, delete the ADX locks:
  - `ADXFollowerClusterLock`
  - `ADXLeaderClusterLock`
- Run terraform initialization (Terraform 1.14.3 is recommended):

  ```shell
  terraform init
  ```

- Refresh state for moved resources:

  ```shell
  terraform apply -refresh-only
  ```

- Stop Elevate StoreAndForward before apply:

  ```shell
  Configuration.Console.exe stop -service StoreAndForward
  ```

  If Elevate will be stopped for more than one day, stop all Elevate services:

  ```shell
  Configuration.Console.exe stop
  ```

- Deploy/update resources:

  ```shell
  terraform apply -var lock=false
  ```

- Re-add the ADX locks:

  ```shell
  terraform apply
  ```

- Run the Elevate 3.8 installer. If Elevate cannot be upgraded immediately, Fusion 3.8 is backward compatible with Elevate 3.7.
- Start StoreAndForward:

  ```shell
  Configuration.Console.exe start -service StoreAndForward
  ```

- Validate the upgrade before continuing.
## Step 2: Install the Fabric workload

Install the Fusion in Fabric workload in the customer Fabric capacity.

- Register the customer in the Fabric workload administration site.
- Connect the workload to an Eventhouse (existing or newly created).
- Record the Eventhouse values needed later: `fabricDatabase`, the query URI, and the ingest URI.
## Step 3: Redeploy terraform in migration mode

Fusion 3.8 terraform introduces these variables:

- `kqlMode`
- `fabricIngestUri`
- `fabricQueryUri`
- `fabricDatabase`

`kqlMode = "migration"` is only supported for existing Fusion deployments (`first_run = false`). Do not use migration mode during an initial deployment (`first_run = true`).

- Update the terraform variables (for example in `main.tf`), setting the Fabric values recorded in Step 2:

  ```hcl
  kqlMode         = "migration"
  fabricIngestUri = "<EVENTHOUSE_INGEST_URI>"
  fabricQueryUri  = "<EVENTHOUSE_QUERY_URI>"
  fabricDatabase  = "<EVENTHOUSE_DATABASE>"
  ```

- Run:

  ```shell
  terraform apply
  ```

This adds migration-related resources (including Azure Data Factory and the event frame relay) and reconfigures existing resources.
## Step 4: Connect the Fabric workload to Fusion
- In the Fabric workload, use Connect to Fusion.
- Select the Azure resources for the Fusion 3.8 instance and confirm.
- Trigger a model read.
- Validate that data from Elevate flows to both ADX and Fabric.
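One way to picture the validation in the last step is a per-table row-count comparison between the two stores. The sketch below is illustrative only: `validate_dual_ingestion` and its tolerance handling are hypothetical, standing in for whatever count queries you run against the ADX and Eventhouse endpoints (e.g. via a KQL client).

```python
# Illustrative sketch of dual-store validation during migration.
# The function name, tolerance logic, and sample counts are invented
# for illustration; substitute real count queries per table.

def validate_dual_ingestion(adx_counts, fabric_counts, tolerance=0.01):
    """Compare per-table row counts from ADX and Fabric.

    Returns the tables whose counts diverge by more than `tolerance`
    (as a fraction of the ADX count); a small gap is expected while
    ingestion is in flight.
    """
    mismatched = []
    for table, adx_rows in adx_counts.items():
        fabric_rows = fabric_counts.get(table, 0)
        allowed = max(1, int(adx_rows * tolerance))
        if abs(adx_rows - fabric_rows) > allowed:
            mismatched.append(table)
    return mismatched

# Made-up illustration values:
adx = {"ProcessedIngestion": 1_000_000, "EventFrames": 5_000}
fabric = {"ProcessedIngestion": 999_800, "EventFrames": 4_200}
print(validate_dual_ingestion(adx, fabric))  # ['EventFrames']
```

A small in-flight lag between the stores is normal while both are receiving data, which is why an exact-match check would produce false alarms.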
## Step 5: Migrate historical data

By default, Fusion continuously exports ADX data to backup storage (typically ADLS). If backup data is unavailable, complete the Export Data section first.

Use `Fusion.LightIngest.exe` to load the backed-up parquet files into the Fabric Eventhouse.

Example:

```shell
Fusion.LightIngest.exe "<EVENTHOUSE_INGEST_URI>" -db:<EVENTHOUSE_DATABASE> -table:ProcessedIngestion -source:<BACKUP_STORAGE_SAS_URI> -pattern:*.parquet -format:parquet -creationTimePattern:"'data-lake/'yyyy/MM/dd'/'" -mappingPath:mapping.json -sort -report:ingest.csv -resume:true
```

Set the following values for your environment:

- `<EVENTHOUSE_INGEST_URI>`: Eventhouse ingestion URI
- `<EVENTHOUSE_DATABASE>`: Eventhouse database name
- `<BACKUP_STORAGE_SAS_URI>`: SAS URI to the backed-up data
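The `-creationTimePattern` argument derives each file's creation time from its blob path. A rough Python illustration of that mapping for the `'data-lake/'yyyy/MM/dd'/'` pattern used above (not LightIngest's actual implementation):

```python
import re
from datetime import datetime, timezone

def creation_time_from_path(blob_path):
    """Derive a creation time from a path shaped like
    data-lake/<yyyy>/<MM>/<dd>/..., mirroring the pattern
    'data-lake/'yyyy/MM/dd'/' in the LightIngest example."""
    match = re.search(r"data-lake/(\d{4})/(\d{2})/(\d{2})/", blob_path)
    if match is None:
        return None
    year, month, day = (int(part) for part in match.groups())
    return datetime(year, month, day, tzinfo=timezone.utc)

print(creation_time_from_path("data-lake/2024/03/15/part-0001.parquet"))
# 2024-03-15 00:00:00+00:00
```

This is why the `data-lake` segment must be present in the source path: the pattern anchors on it to find the date components.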
SAS guidance:

- Allowed services: blob
- Allowed resource types: service, container, object
- Allowed permissions: read, list
- Include `data-lake` in the path when needed
You may want to provide a full output path for `-report` if the runtime directory is not writable.
Behavior notes:

- File discovery and sorting can take significant time for large backups.
- Progress is reported every 10 seconds.
- Re-running with `-resume:true` imports only the files that were not already ingested successfully.
- If recovery is required, you can delete and recreate the Eventhouse and restart the migration.
- ADX continues serving production workloads until cutover is complete.
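The `-resume:true` behavior can be pictured as skipping every file already marked successful in a prior ingest report. The sketch below is a simplified illustration; the report columns and status values are invented, and LightIngest's real `ingest.csv` schema may differ.

```python
import csv
import io

def files_to_ingest(all_files, report_csv_text):
    """Return the files not yet marked 'Succeeded' in a prior report.

    The (file, status) report format here is invented for
    illustration of resume semantics only.
    """
    done = set()
    for row in csv.DictReader(io.StringIO(report_csv_text)):
        if row["status"] == "Succeeded":
            done.add(row["file"])
    return [f for f in all_files if f not in done]

report = "file,status\na.parquet,Succeeded\nb.parquet,Failed\n"
print(files_to_ingest(["a.parquet", "b.parquet", "c.parquet"], report))
# ['b.parquet', 'c.parquet']
```

This is also why keeping the same `-report` file across re-runs matters: it is the record of what can be skipped.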
## Step 6: Remove ADX (after cutover)

When Fabric is fully validated and ADX is no longer required:

- In `main.tf`, set:

  ```hcl
  kqlMode = "fabric"
  ```

- In the Fusion Azure resource group, delete the locks:
  - `ADXFollowerClusterLock`
  - `ADXLeaderClusterLock`
  - `st{company}`
- Run:

  ```shell
  terraform apply
  ```

Resources used only for ADX will be removed, including the ADX instance.

This action removes ADX and the ADLS-backed data associated with that ADX deployment. Confirm backup and retention requirements before running this step.
## Export Data (if backup data is unavailable)

If continuous export/backup data is unavailable, export directly from ADX before migration.

Use `Fusion.LightExtract.exe` with a target Azure Storage account (ideally in the same region as ADX; cool tier recommended).

Example:

```shell
Fusion.LightExtract.exe --adx-endpoint <ADX_QUERY_URI> --database <DATABASE_NAME> --storage-connection <BLOB_SAS_URL_WITH_CONTAINER>
```

Additional options are available for authentication and execution control:

- `--tenant-id`
- `--client-id`
- `--client-secret`
- `--interactive`
- End-date and parallelism controls
- Table selection controls

Fusion.LightExtract queues ADX export commands and writes data to storage partitioned by creation date, with progress logged in both human-readable and CSV formats.

If the export is interrupted, restart it using the same configured end date to continue safely.
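Why the same end date matters: when exports are partitioned by creation date up to a fixed end date, a restart can re-enumerate the same stable date range and skip the dates already completed. The sketch below illustrates that idea only; the real tool's bookkeeping may differ.

```python
from datetime import date, timedelta

def remaining_export_dates(start, end, completed):
    """Enumerate creation dates in [start, end) that still need export.

    Keeping `end` fixed across restarts keeps the date range stable,
    so already-exported dates can simply be skipped. Changing `end`
    mid-run would shift the range and risk gaps or duplicates.
    """
    pending = []
    current = start
    while current < end:
        if current not in completed:
            pending.append(current)
        current += timedelta(days=1)
    return pending

done = {date(2024, 1, 1), date(2024, 1, 2)}
print(remaining_export_dates(date(2024, 1, 1), date(2024, 1, 5), done))
# [datetime.date(2024, 1, 3), datetime.date(2024, 1, 4)]
```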