Fusion 3.4 and 3.5 to Fusion 3.6
Upgrades can be performed from Fusion 3.4 or 3.5. To upgrade from an earlier version, select the version you are currently on from the version drop-down and view the upgrade paths.
Resource changes:
func-adxrunner-... was added during this release. This Function App performs scheduled data aggregations and sorting of indeterminate tags on data stored in the Fusion ADX database.
Overview
This upgrade guide contains all the necessary steps to upgrade both Fusion and the Elevate services.
Upgrade Steps
- Fusion upgrade and data migration
- Elevate upgrade
Pre-upgrade tasks:
Run the following two queries on the Fusion ADX database and take note of the results:
TimeSeriesData | count
.show capacity ingestions
Plan on the migration taking at least (Rows / 100,000 / 3600 / IngestThreads) x 2 hours, where IngestThreads is configured in the ADX Migration Tool. The ADX cluster does not need to be offline during this time, but it will be under heavy load. Note that this is the minimum time required: you may not want to dedicate all ingestion capacity to the migration. Consider checking whether you can scale out the cluster somewhat during the migration.
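The rule of thumb above can be turned into a quick calculation. This is a sketch only: the 100,000 divisor comes from the formula above (roughly rows ingested per second per thread), and real throughput will vary with the cluster.

```python
# Minimum-time estimate from the documented rule of thumb:
# (Rows / 100,000 / 3600 / IngestThreads) x 2 hours.
def estimate_migration_hours(rows: int, ingest_threads: int) -> float:
    """Rows / 100,000 (rows/sec/thread) / 3600 (sec/hour) / threads, doubled."""
    return rows / 100_000 / 3600 / ingest_threads * 2

# Example: 10 billion rows (from "TimeSeriesData | count") and 4 ingest threads.
print(round(estimate_migration_hours(10_000_000_000, 4), 1))  # ~13.9 hours
```

Plugging in the row count you noted in the pre-upgrade tasks gives a floor for scheduling the maintenance window.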
Fusion Upgrade Steps
The Terraform for Fusion includes three KQL scripts in the modules/Fusion/kql directory. These are needed as part of the data migration to the new database schema:
- fusion3.6_migration-step1.kql
- fusion3.6_migration-step2.kql
- fusion3.6_migration-step3.kql
Perform the following steps against the Fusion ADX database - ADX Script 1
- Copy the contents of fusion3.6_migration-step1.kql to the clipboard
- Open a query window in ADX on the database you are migrating
- In the query window, type .execute database script <|, press Enter, and paste the contents of fusion3.6_migration-step1.kql
- Highlight the whole query and execute it.
- The result table should say completed for each row.
Fusion ADX Migration Tool
- Run the Fusion ADX Migration Tool installer found in the installation bundle.
- Navigate to the installation directory for the tool, C:\Program Files\Fusion Data Hub\ADX Data Migration
- Run ADXDataMigration.exe from a PowerShell command window with the following parameters:
  ADXDataMigration.exe tenantID <tenant id>
- Copy the URL of the ADX instance from the Azure resource group and paste it in the Instance URI field.
- Click the Database dropdown and wait for the list of database instances to be populated, then select the database to migrate.
  Note: If you haven't logged in to Azure on the machine yet, this will open a browser window asking you to provide your Azure credentials.
- Click the Migrate button.
  Info: The progress bar will advance (slowly) as it migrates the TimeSeriesData table, and log entries will appear in the window. You can change the number of simultaneous ingestions using the +/- buttons, up to the number of ingestion nodes available in the cluster. More ingestion threads will make the migration go faster, but will also increase the load on the ADX cluster. To find the ingestion capacity of the cluster, execute the following command:
  .show capacity ingestions
  You can cancel the migration at any time and it will save its progress. After cancelling, you can hit the Finish button to return to the previous screen and pause the program for a bit, or the Wipe button to clear the data that has been ingested so far and start over. When the program completes, only the Finish button appears. Logs are also written to the hard drive, by default in C:\ProgramData\Fusion Data Hub\ADX Data Migration\ADXMigrationTool.log. To change where the logs are written or their detail level, modify the appsettings.json file in the same directory as the executable.
- Click Finish
Monitoring the migration progress
During the migration, you can run the following query to monitor progress in ADX, independently of the progress bar in the Migration utility. These counts will increase as the migration runs.
union
(datatable(Count: long, Source: string) []),
(TimeSeriesDataTemp | summarize Count = count() | project Count, Source = "TimeSeriesDataTemp"),
(TimeSeriesData | summarize Count = count() | project Count, Source = "TimeSeriesData"),
(ProcessedIngestion | summarize Count = count() | project Count, Source = "ProcessedIngestion"),
(TimeSeriesDataStaging | summarize Count = count() | project Count, Source = "TimeSeriesDataStaging"),
(OneHourAggregationsRaw | summarize Count = count() | project Count, Source = "OneHourAggregationsRaw"),
(CurrentValuesRaw | summarize Count = count() | project Count, Source = "CurrentValuesRaw"),
(CurrentValueTemp | summarize Count = count() | project Count, Source = "CurrentValueTemp"),
(OneHourAggregationTemp | summarize Count = count() | project Count, Source = "OneHourAggregationTemp")
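If you noted the "TimeSeriesData | count" result during the pre-upgrade tasks, the counts returned by this query can be turned into a rough completion estimate. A minimal sketch, under my assumption (not stated in this guide) that the migrated table's count grows toward the original row count as the migration proceeds:

```python
# Rough progress estimate: compare a table count from the monitoring query
# against the pre-upgrade "TimeSeriesData | count" result.
def migration_progress(original_rows: int, migrated_rows: int) -> float:
    """Approximate completion fraction in [0, 1]."""
    if original_rows <= 0:
        return 1.0
    return min(migrated_rows / original_rows, 1.0)

print(f"{migration_progress(2_000_000_000, 500_000_000):.0%}")  # 25%
```

This is only a coarse check; the Migration utility's progress bar and log remain the authoritative view.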
Perform the following steps against the Fusion ADX database - ADX Script 2
- Copy the contents of fusion3.6_migration-step2.kql to the clipboard
- Open a query window in ADX on the database you are migrating
- In the query window, type .execute database script <|, press Enter, and paste the contents of fusion3.6_migration-step2.kql
- Highlight the whole query and execute it.
- The result table should say completed for each row.
Fusion Upgrade
Before running the following terraform commands, an upgrade of terraform packages is required. The upgrade can be performed by running the following command.
terraform init -upgrade
The Fusion Terraform upgrade was performed with Terraform version 1.5.7.
Connect to Azure
Run the following commands in PowerShell to connect to Azure:
$env:ARM_TENANT_ID = "<Tenant ID>"
$env:ARM_SUBSCRIPTION_ID = "<Subscription ID>"
az login --tenant $env:ARM_TENANT_ID
az account set --subscription $env:ARM_SUBSCRIPTION_ID
Connect-AzureAD -TenantId $env:ARM_TENANT_ID
Connect-AzAccount -TenantId $env:ARM_TENANT_ID
Set-AzContext -Subscription $env:ARM_SUBSCRIPTION_ID -Tenant $env:ARM_TENANT_ID
- If upgrading a "dev" environment and preserving SQL Server is desired, set "tagControllerDevSqlServer = false" (in the main tf file; see Configuration Instructions above)
terraform refresh -var pre_install=true
- If upgrading an Elevate system, to prevent data loss, stop all Elevate services.
terraform apply -var lock=false
terraform apply
- Run the following commands in PowerShell to restart all Azure WebApps
$allSites = Get-AzWebApp -ResourceGroupName <RESOURCE_GROUP_NAME>
@($allSites).GetEnumerator() | Restart-AzWebApp
Validate Upgrade
If the Elevate Store and Forward service was stopped, restart it to resume streaming data for ingestion. If the dpp function app was stopped, restart it as well to resume data ingestion.
- Data should be flowing into the RawIngestion table, from there to the ProcessedIngestion table, and from there to TimeSeriesData.
- The functions ProcessCurrentValues and ProcessOneHourAggregates should be taking values from TimeSeriesData, aggregating them, and pushing them into CurrentValuesRaw and OneHourAggregationsRaw for eventual ingestion into the CurrentValue and OneHourAggregates MVs.
- By default ProcessCurrentValues is run every 5 minutes, and ProcessOneHourAggregates is run every hour at 2 minutes past the hour. These schedules are configurable on the new func-adxrunner function app in the OneHourAggregateSchedule and CurrentValueSchedule configuration settings.
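Azure Functions timer triggers use six-field NCRONTAB expressions (second, minute, hour, day, month, day-of-week). The exact values stored in OneHourAggregateSchedule and CurrentValueSchedule are not shown in this guide; the strings below are my reconstruction of the documented defaults, with a cheap format check:

```python
# NCRONTAB expressions matching the documented default schedules.
# These specific strings are assumptions; confirm against the actual
# func-adxrunner configuration settings before changing anything.
CURRENT_VALUE_SCHEDULE = "0 */5 * * * *"       # every 5 minutes
ONE_HOUR_AGGREGATE_SCHEDULE = "0 2 * * * *"    # hourly, 2 minutes past the hour

def looks_like_ncrontab(expr: str) -> bool:
    """Sanity check: NCRONTAB uses exactly six whitespace-separated fields."""
    return len(expr.split()) == 6

for expr in (CURRENT_VALUE_SCHEDULE, ONE_HOUR_AGGREGATE_SCHEDULE):
    assert looks_like_ncrontab(expr), f"bad schedule: {expr}"
```

Note that a five-field cron expression (without the leading seconds field) is not valid for Azure Functions timer triggers.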
Perform the following steps against the Fusion ADX database - ADX Script 3
If everything is looking good and the client has approved it, you can perform the steps below with fusion3.6_migration-step3.kql. Make sure the client approves first, as this DROPS THE OLD TIMESERIESDATA TABLE (renamed to TimeSeriesDataToDelete). Migration should have moved all the data to the new TimeSeriesData MV and ProcessedIngestion table; however, the client may not want to keep three copies of their data in ADX. TimeSeriesData will retain all undeleted / unrefreshed data. Data removed from the PI archive using delete / refresh will remain in ProcessedIngestion in raw format, but as that table largely duplicates TimeSeriesData, the client may want a retention policy that removes it after a certain amount of time.
- Copy the contents of fusion3.6_migration-step3.kql to the clipboard
- Open a query window in ADX on the database you are migrating
- In the query window, type .execute database script <|, press Enter, and paste the contents of fusion3.6_migration-step3.kql
- Highlight the whole query and execute it.
- The result table should say completed for each row.
Elevate Upgrade Steps
To upgrade Elevate, run the Elevate.exe contained in the installation bundle.