Version: 3.4

Elevate Installation

About Elevate

Elevate retrieves data from PI and uploads it to the Azure cloud. Some Elevate components run in the cloud; others run on a VM or an on-premises computer. This article provides instructions only for the computer-based components, which are installed via a single install executable file.

The install file includes eleven components:

  • Configuration Console client creates devices and configuration for services in an Elevate instance. It also migrates configuration from older versions.
  • Model Reader fetches an asset model from PI, transforms it into something Fusion can understand, and uploads it into Fusion (via a storage account). This happens on a scheduled basis so that any changes to the asset model in PI are automatically reflected in Fusion. Tags are saved to a file that is read by Tags Service.
  • Geo SCADA Collector uploads real-time and historical data from Geo SCADA servers into ADX. Short requests are uploaded via Store and Forward (IoT hub); long requests are uploaded as parquet files to a storage account.
  • IP.21 Collector uploads real-time and historical data from IP.21 servers into ADX. Short requests are uploaded via Store and Forward (IoT hub); long requests are uploaded as parquet files (via Upload Service) to a storage account.
  • OPCUA Collector uploads real-time data from OpcUA servers into ADX. Requests are uploaded via Store and Forward (IoT hub).
  • PI Collector uploads historical time series data from PI into ADX. Short requests are uploaded via Store and Forward (IoT hub); long requests are uploaded as parquet files to a storage account.
  • PI Data Pipe Collector uploads real-time series data from PI into ADX via Store and Forward (IoT hub).
  • PI Event Frames Collector uploads event frame data from PI into ADX via Store and Forward (IoT hub).
  • PI Outage Handler fills in any gaps in real-time data that may have been created by an outage of some kind.
  • Store and Forward uploads data (received from a collector) to an IoT hub. If the IoT hub is unavailable or unable to handle the current data flow rate, Store and Forward will store the data until it can be uploaded.
  • Upload Service uploads files (received from a collector or Model Reader) to an IoT hub. The files are then routed to a storage account for ingestion by ADX.

Software Prerequisites

  • .NET Framework 4.8
  • .NET 6.0
  • For PI: PI client software (e.g., PI System Explorer); must be PI 2018 (2.10.8) or later
  • For Geo SCADA: DLLs from Geo SCADA Expert 2020
  • For IP.21 over ODBC: install AspenTech SQLplus V12.0 or V12.2, or be prepared to manually copy the AspenTech SQLplus data provider DLL (file version 17.0.0.266 or 17.2.0.235)

Prerequisites

  • A service account that will run the services, and, if applicable, has read access to PI.
  • 450 GB of free hard-drive space (though the exact amount will vary depending on the number of tags and the duration of high event rate cycles)
  • Fusion has been deployed to Azure (probably via Terraform with the deploymentType set to "Elevate", and tagControllerCount and duplicatorCount both set to 1).

Instructions

Installing Elevate involves first installing the application files (by running Elevate installer), followed by installing and configuring an instance of the Elevate services.

Install Elevate files

  1. Copy the Elevate installer (Elevate.exe) to the machine on which Elevate will be installed.
  2. Double-click Elevate.exe.
  3. Accept the license agreement and click Install.
  4. In rare circumstances, grpc.core.api.dll is not installed. To fix this, re-run the Elevate installer and select the Repair option, or uninstall and re-install.
  5. If using the IP.21 Collector over ODBC and SQLplus hasn’t been installed, there are two ways to copy the AspenTech DLLs:
  • Method 1:
    • Install AspenTech SQLplus (V12.0 or V12.2) on the Elevate machine.
    • Open PowerShell as an administrator and run the ip21binaryfix.ps1 script found in the install folder (e.g., C:\Program Files\Elevate\IP21 Collector\ip21binaryfix.ps1)
    • Restart Model Reader and IP21 Collector.
  • Method 2:
    • Find AspenTech.SQLplus.DataProvider.dll on an IP.21 server machine (V12.0 or V12.2). E.g., C:\Windows\Microsoft.NET\assembly\GAC_MSIL\AspenTech.SQLplus.DataProvider
    • Copy the DLL to the Elevate machine in the Elevate install folder for the collector and Model Reader. E.g., C:\Program Files\Elevate\IP21 Collector\ and C:\Program Files\Elevate\Model Reader\
    • If using IP.21 V12.2, also copy AspenTech.ADSA.Locator.dll. E.g., from C:\Windows\Microsoft.NET\assembly\GAC_MSIL\AspenTech.ADSA.Locator to C:\Program Files\Elevate\IP21 Collector\ and C:\Program Files\Elevate\Model Reader\
    • Restart Model Reader and IP21 Collector.
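The Method 2 copy steps can be sketched in PowerShell (run as administrator). This is an illustrative sketch only: the GAC and install paths are taken from the examples above and may differ on your system.

```powershell
# Source folders in the GAC (on an IP.21 server) and the Elevate target folders.
$sources = @(
    "C:\Windows\Microsoft.NET\assembly\GAC_MSIL\AspenTech.SQLplus.DataProvider",
    "C:\Windows\Microsoft.NET\assembly\GAC_MSIL\AspenTech.ADSA.Locator"   # V12.2 only
)
$targets = @(
    "C:\Program Files\Elevate\IP21 Collector",
    "C:\Program Files\Elevate\Model Reader"
)
foreach ($s in $sources) {
    # Each GAC folder contains a versioned subfolder holding the DLL.
    $dlls = Get-ChildItem -Path $s -Recurse -Filter *.dll
    foreach ($t in $targets) { $dlls | Copy-Item -Destination $t }
}
# Then restart Model Reader and IP21 Collector.
```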

Install a new instance of Elevate

Running the Configuration Console requires a user with rights to create new devices in the IoT hub, send data to the IoT hub, and modify the configuration of existing devices in the IoT hub.

  1. Open a command prompt as administrator and go to the Elevate\Configuration directory in Program Files.
  2. Run Configuration.Console.exe add -host <IoT hub name or hostname> (see Configuration Console authentication)
  • For example: Configuration.Console.exe add -host iot-fusion-acme-prod -browser -tenantId 12345678-1234-1234-1234-123456789abc
Note

In the above example, "-browser" forces Azure authentication through the default web browser. This command creates multiple IoT hub devices, each with a name beginning with "Elevate". To use a different name, specify -deviceroot <name>. When configuring multiple Elevate instances to push data into a single IoT hub, each instance must use a unique deviceroot.
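As a sketch, two Elevate instances sharing one IoT hub would each register with a distinct deviceroot. The hub name, tenant ID, and device root names below are illustrative placeholders:

```
Configuration.Console.exe add -host iot-fusion-acme-prod -deviceroot ElevateSiteA -browser -tenantId 12345678-1234-1234-1234-123456789abc
Configuration.Console.exe add -host iot-fusion-acme-prod -deviceroot ElevateSiteB -browser -tenantId 12345678-1234-1234-1234-123456789abc
```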

  3. For PI, run Configuration.Console.exe install -user <service user> -password <service user password> -start -service ModelReader,StoreAndForward,PICollector,PipesCollector,PIOutageHandler,EventFramesCollector,UploadService
  • For example: Configuration.Console.exe install -user .\elevateserv -password abc -browser -tenantId 12345678-1234-1234-1234-123456789abc -start -service Model,Store,PICol,PipesCol,PIOut,EventFramesCol,Upload
Note

In the above example, you can replace the “.” in “.\elevateserv” with the network domain “networkdomain\elevateserv”.

Note

In the above example, "-browser" forces Azure authentication through the default web browser.

Note

Some characters in usernames and passwords may cause issues. Use the equals-sign form of the option and wrap the value in quotes. E.g., -password="x = y"

Note

If any services fail to start, try starting them from Windows Services or run Configuration.Console.exe start again. If a service fails to start by timing out, try again. If a service fails immediately, try re-running the install with the Repair option.
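Restarting from Windows Services can also be scripted. The following PowerShell sketch assumes the Elevate service display names begin with "Elevate" (check Windows Services for the actual names on your machine):

```powershell
# Start any Elevate services that are not currently running.
Get-Service -DisplayName "Elevate*" |
    Where-Object { $_.Status -ne "Running" } |
    ForEach-Object { Start-Service -InputObject $_ }
```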

  4. For OPCUA, run Configuration.Console.exe install -user <service user> -password <service user password> -start -service ModelReader,StoreAndForward,OPCUA,UploadService

  5. For IP.21, run Configuration.Console.exe install -user=<service user> -password=<service user password> -start -service ModelReader,StoreAndForward,IP21,UploadService

  6. For Geo SCADA, run Configuration.Console.exe install -user <service user> -password <service user password> -start -service ModelReader,StoreAndForward,GeoSCADACollector,UploadService

  7. Navigate to the storage account's configuration container and edit the configuration.json blob (see Cloud Configuration). At a minimum, fill in PIServers, PIAFServers, and/or GeoScadaServers and save the blob. For example:

"PIServers": [
{
"Server": "hostname, IP address, or GUID",
"Alias": "PI1"
}
],
"PIAFServers": [
{
"Server": "hostname or IP address",
"PIServer": "hostname, IP address, or GUID of the PI archive server (the same as a PIServers[].Server)"
}
],
"IP21Servers": [
{
"Server": "hostname or IP address",
"Type": "ODBC",
"Alias": "IP211"
}
],
"OPCUAServers": [
{
"Server": "hostname or IP address",
"Type": "ODBC",
"Alias": "OPCUA1"
}
],
"GeoScadaServers": [
{
"Server": "hostname or IP address",
"Alias": "GEO1",
"Username": "user1",
"Password": "password1",
"Roots": [ "$Root" ]
}
],

For more information and configuration options, see Historian Configuration.