Version: 3.2

Changelog

3.2

Features/Improvements

  • FUSION-2687 Minor ADX changes. Scope:
    • Use names (the DataStreamMetaData table's Name column = the historian tag name) instead of IDs for all functions
      • For example, pass FT-001 to GetCurrentValue
    • Update the GetEvents and Trend functions to deduplicate with summarize arg_max(ingestion_time(), *)
    • Source the CurrentValue view from the TSD data table
      • Remove the materialized_view operator from the GetCurrentValue function
    • Move the DeviceQualityCodeMap table to the Ingestion folder and fully cache it (36500d hot cache)
      • Users never need to interact with this table.
    • Disable streaming ingestion on the RawIngestion table
    • Delete the Device table
  • FUSION-3039, FUSION-3084 Re-add StarSync to Fusion; StarSync now syncs new tags on model reader changes rather than on an independent schedule
  • FUSION-3086, FUSION-3148 Replace references to TSID with Tag Name in the Trender. Scope:
    • Propagate the Property Name from the Hierarchy pane on the left to the table below.
    • Rename the columns to Property Name, Tag Name, and Tag Description.
    • Update the KQL function to return the Property Description.
    • The Hierarchy tooltip now shows Tag Name, Tag Description, and Property Description.
    • Remove the "Manage Columns" label.
  • FUSION-3108, FUSION-3051 Export KQL using a standardized KQL template (replaces the old DataBroker URL Generator)
  • FUSION-2876, FUSION-2878 New Aggregate functions available for KQL queries
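
As a rough illustration of the ADX changes above, the deduplication pattern and the caching policy might look like the following KQL sketch (the RawIngestion and DeviceQualityCodeMap names come from these notes; the column names and exact schema are assumptions):

```kql
// Sketch: keep only the most recently ingested row per tag and timestamp,
// the pattern the updated GetEvents and Trend functions are expected to use.
// Column names (Name, Timestamp) are assumed for illustration.
RawIngestion
| where Name == "FT-001"   // tag name, as now passed to functions instead of an ID
| summarize arg_max(ingestion_time(), *) by Name, Timestamp

// Sketch: fully cache the DeviceQualityCodeMap table (36500d ≈ 100 years of
// hot cache), per the Ingestion-folder change above.
.alter table DeviceQualityCodeMap policy caching hot = 36500d
```

Control commands such as .alter must be run separately from queries; the two snippets are shown together only for compactness.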

Bugs

  • FUSION-2560 Finish moving the Licensing service to go through the IoT Hub
  • FUSION-3075 The Tags Service has been removed from Fusion Elevate; it now exists fully within the Azure cloud.
  • FUSION-3038 Logging and metrics are now sent exclusively via the IoT hub

Known Issues

  • FUSION-3001 Upload Service: Large file uploads outlast the security token's lifetime and fail. Workaround: in the IoT Hub File Upload settings, increase the SAS TTL and Default TTL from 1 hour to 2 hours.
  • FUSION-3006, FUSION-3007, FUSION-3009 Some parts of the Fusion Upload Service may fail when handling very large models. Workaround: for large source systems, we currently recommend increasing the RAM on the Elevate virtual machine from 8 GB to 16 GB.