Configure Elevate
Configuration Console Authentication
There are three methods to authenticate the Configuration Console with Azure: managed identity, client secret, or interactive. When using managed identity, no further information is required.
To use a client secret, specify the tenant ID, client ID, and client secret on the Configuration Console command line. E.g.:
Configuration.Console.exe add -host iot-fusion-abc-dev -tenantId 12345678-1234-1234-1234-123456789abc -clientId 12345678-1234-1234-1234-123456789abc -clientSecret abc
To use interactive authentication, specify "-browser" on the command line. If the Azure account has access to multiple tenants, also specify the tenant ID. E.g.:
Configuration.Console.exe add -host iot-fusion-abc-dev -browser -tenantId 12345678-1234-1234-1234-123456789abc
Not all commands require authentication, but the following do: add, install, remove, and uninstall.
| Example Managed Identity | Example Client Secret | Example Interactive |
|---|---|---|
| add -host iot-fusion-abc-dev | add -host iot-fusion-abc-dev -tenantId 12345678-1234-1234-1234-123456789abc -clientId 12345678-1234-1234-1234-123456789abc -clientSecret abc | add -host iot-fusion-abc-dev -browser -tenantId 12345678-1234-1234-1234-123456789abc |
| install -user piact -password secret -start -service Model,Tags,Store,PICol,PipesCol,PIOut,EventFramesCol | install -user piact -password secret -start -service Model,Tags,Store,PICol,PipesCol,PIOut,EventFramesCol -tenantId 12345678-1234-1234-1234-123456789abc -clientId 12345678-1234-1234-1234-123456789abc -clientSecret abc | install -user piact -password secret -start -service Model,Tags,Store,PICol,PipesCol,PIOut,EventFramesCol -browser -tenantId 12345678-1234-1234-1234-123456789abc |
| remove -host iot-fusion-abc-dev -device x1 | remove -host iot-fusion-abc-dev -device x1 -tenantId 12345678-1234-1234-1234-123456789abc -clientId 12345678-1234-1234-1234-123456789abc -clientSecret abc | remove -host iot-fusion-abc-dev -device x1 -browser -tenantId 12345678-1234-1234-1234-123456789abc |
| uninstall | uninstall -tenantId 12345678-1234-1234-1234-123456789abc -clientId 12345678-1234-1234-1234-123456789abc -clientSecret abc | uninstall -browser -tenantId 12345678-1234-1234-1234-123456789abc |
Cloud configuration
Configure Elevate services by editing the configuration.json file in Elevate's storage account.
- Open the Azure portal (portal.azure.com) and navigate to the storage account for your Elevate instance. Its name will typically start with stmodels and it will be in your Elevate resource group.
- Under the Data Storage heading in the menu at the left of the screen, click Containers, and then in the list of containers in the center of the screen, click the one named "configuration".
- From the list of blobs in the container, click on "configuration.json" and a pop-over tab will appear.
- Click the "Edit" sub tab and an edit screen will appear showing you the contents of the configuration.json file and allowing you to make changes.
- Follow the steps below to make changes you want for various services. When you're done, click the Save button (or Discard to discard your changes). Important: any time you click Save, before making further changes, close the editor, refresh the page, and re-open the editor.
Logging
Most of the time you won't need to change logging details and can skip this section.
To configure logging for all of the Elevate services on the machine at once:
- Scroll down until you find the section starting with "Logging": {
- Immediately under that, there will be a heading of "Default": {
- Under that, there is a "WriteTo" section to let you control where logging will be written, and a "MinimumLevel" section to let you control how much to log.
- Change the value of "Default" under "MinimumLevel" to "Debug", "Warning", or "Error" to make logging more or less verbose. The normal value is "Information".
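Putting the steps above together, lowering the default level to "Debug" would look like this sketch (the contents of "WriteTo" are a placeholder here; keep whatever your configuration.json already contains for it):

```json
"Logging": {
  "Default": {
    "WriteTo": [ "<existing sink configuration, unchanged>" ],
    "MinimumLevel": {
      "Default": "Debug"
    }
  }
}
```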
To configure logging for a particular instance:
- There is an "Overrides" section under "Logging". Look under "Overrides" to see if there is a section whose name is the name of the instance of the service you want to change. For example, if you were looking for the store and forward service instance named "1", you would look for "storeandforward-1" under "Overrides".
- If you find a matching section, the entries under it will look like the entries under the "Logging" section above. Modify them in a similar way to change the logging for that particular instance.
- If you don't find a matching section, copy the entire "Default" section, and paste it as a new section under the "Overrides" section, changing "Default" to the name of your service instance. Service instance names are one of "geoscadacollector", "modelreader", "pipipescollector", "pieventframescollector", "pioutagehandler", "storeandforward", or "uploadservice" followed by a dash, then followed by the instance name.
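For example, to turn on "Debug" logging only for the store and forward instance named "1", the copied section under "Overrides" might look like this sketch (the "WriteTo" contents are whatever you copied from "Default"):

```json
"Overrides": {
  "storeandforward-1": {
    "WriteTo": [ "<copied from Default, unchanged>" ],
    "MinimumLevel": {
      "Default": "Debug"
    }
  }
}
```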
Historians
You will need to configure at least one PI, PI AF, Geo SCADA, or IP.21 server from which Elevate will fetch data.
To configure a new server, look for the appropriate section ("PIServers": [ for PI servers, "PIAFServers": [ for PI AF servers, or "GeoScadaServers": [ for Geo SCADA servers) and add an entry under it that looks like this:
{
"Server": "<server name or identifier, e.g. PIServer.CustomerNetwork.com>",
"Alias": "<alias for the server, e.g. PIServer>",
"Username": "<user name to log in to the server as>",
"Password": "<password for the user>",
"MaxTransactions": <maximum number of transactions that may be run simultaneously>,
"MonitoringTags": [
{
"Tag": "<name of tag to monitor>",
"HistoricalLimit": <tag value above which historical data retrieval should stop>,
"RealtimeLimit": <tag value above which real time data retrieval should stop>,
"HistoricalDelay": <minimum number of seconds to pause after the HistoricalLimit is exceeded, default 60>,
"RealtimeDelay": <minimum number of seconds to pause after the RealtimeLimit is exceeded, default 60>
}
]
}
Username and password are optional for servers that support Windows authentication, e.g. PI and PI AF. The Alias will be used to name tags from this server, and should be globally unique within the set of all Elevate instances feeding into the ADX database in Azure. You can specify as many monitoring tags as you like, including specifying none at all.
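A filled-in PI server entry based on the template above, using hypothetical names and values, might look like the following. Username and Password are omitted on the assumption that the server accepts Windows authentication:

```json
{
  "Server": "PIServer.CustomerNetwork.com",
  "Alias": "PlantA-PI",
  "MaxTransactions": 4,
  "MonitoringTags": [
    {
      "Tag": "SINUSOID",
      "HistoricalLimit": 80,
      "RealtimeLimit": 80,
      "HistoricalDelay": 60,
      "RealtimeDelay": 60
    }
  ]
}
```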
In addition to the above, PI AF and Geo SCADA servers support additional configuration for reading model information.
Geo SCADA Servers
Geo SCADA servers support two additional settings at the same level as Server and Alias: Roots and Port. Port specifies which port to connect to the Geo SCADA server on; if it's not specified, services will attempt to connect on port 5481. Roots specifies the paths to elements in the Geo SCADA hierarchy: the hierarchy under those elements will be extracted as the model for this server, e.g. "Roots": [ "Path1/Path2/Path3", "Area1/Area2/Area3" ]. Roots is required; use "Roots": [ "$Root" ] to fetch the entire server.
For Geo SCADA servers, the Server field should be the network or machine name of the server to connect to.
{
…
"Port": 5481,
"Roots": [ "$Root" ],
…
}
IP.21 Servers
IP.21 servers support several additional settings at the same level as Server and Alias. Roots specifies the root folders within IP.21 from which to fetch the hierarchy: everything under these elements will be extracted as the model for this server. E.g. "Roots": [ "RootFolder" ]; if not specified, all folders are retrieved.
For IP.21 servers, the Server field should be the network or machine name, or IP address of the server to connect to.
{
…
"OdbcPort": <port when using ODBC type connections, defaults to 10014>,
"RestApiPort": <port when using REST type connections, defaults to 443>,
"UseHttps": <determines whether to use https (true) or http (false) when calling the REST service, defaults to true>,
"DataSource": <the name of the IP.21 data source to connect to>,
"Type": <the type of connection: "ODBC" or "REST", defaults to REST>,
"RequestTimeoutSeconds": <the number of seconds before a REST request will time out, defaults to 120>,
"RowsPerFetch": <the number of rows to fetch at once from IP.21, defaults to 30000>,
"GroupSize": <the number of tags to query at once from IP.21, defaults to 50>,
"RequestRetries": <the number of times the Model Reader will retry a request before failing, defaults to 3>,
"RequestRetryDelayInSeconds": <the number of seconds Model Reader will wait before retrying a failed request, defaults to 30>,
"RequestsBeforeThrottling": <the number of REST requests the Model Reader will make before throttling subsequent requests, defaults to 20>,
"ThrottleDelayInMilliseconds": <the amount of time to wait between throttled requests, defaults to 500>,
"TagNameLimit": <the max length of a tag name, defaults to 256>,
"Roots": [ "RootFolder" ],
"Alias": <uniquely identifies the IP.21 system>
…
}
PI AF Servers
PI AF servers support a PIServer setting to specify the server that their monitoring tags live on (if any), and Except to provide a list of databases to exclude from scanning.
{
…
"PIServer": "hostname, IP address, or GUID",
"Except": [ "NotThisDatabase", "AlsoNotThisDatabase" ]
}
PI AF servers also support configuration for individual PI AF databases to read models from. If no databases are provided, it's assumed that all databases should be read.
{
…
"Databases": [
{
"Name": "<name of the database to scan>",
"Roots": [ "Optional list", "Of", "Elements", "To", "Start", "Scanning", "At", "or leave out to start scanning at the root of the database" ],
"StartAtRoots": <true or false; true indicates that only the roots (and possibly descendants) will be output; false indicates that root ancestors will also be included>,
"AddAllDescendants": <true or false; true indicates that all descendants of roots will be output; false indicates that only the roots (and possibly the ancestors) will be included>
},
{
"Name": "<name of the next database to scan>"
}
]
}
Storage Account
A storage account will receive model and bulk time-series data from Elevate. It is configured automatically during the initial configuration, but the following is provided for reference.
- Scroll down until you find the section starting with "StorageAccount"
- Specify the configuration as appropriate.
| ConnectionString | The Storage Account connection string. After it's saved in the configuration.json, the value will be moved to the key vault. |
|---|---|
| Retries | The number of times to retry storage account uploads. Defaults to 6. |
| TimeoutSeconds | The number of seconds to wait before timing out on a storage account upload. Defaults to 100. |
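A sketch of the section with the defaults above. The connection string shown is a placeholder; the real value comes from your storage account's Access keys:

```json
"StorageAccount": {
  "ConnectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net",
  "Retries": 6,
  "TimeoutSeconds": 100
}
```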
License
Elevate collectors require a license key. It is configured (in the key vault) during Fusion deployment, but the following is provided for reference.
- Scroll down until you find the section starting with "License"
- Specify the configuration as appropriate.
| KeygenLicense | The license key. After it's saved in the configuration.json, the value will be moved to the key vault. |
|---|---|
Drive Space Monitoring
Elevate services may stop processing when disk space falls below a threshold.
- Scroll down until you find the section starting with "DriveSpaceMonitoring"
- Specify the configuration as appropriate.
| MinFreeSpace | The minimum free space required on all drives in MinFreeSpaceDrives. This is an integer followed by %, T, G, or M. %: the value must be between 0 and 100 and represents the percentage of the total drive space that must be free on each drive. T: the number of terabytes that must be free on each drive. G: gigabytes. M: megabytes. Defaults to 5%. Can be disabled by setting this value to null. |
|---|---|
| MinFreeSpaceDrives | The drives which will be monitored for free space. Defaults to ["C"]. If this is set to null or an empty list, the default will be used. |
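For example, to require at least 10 GB free on drives C and D (hypothetical values):

```json
"DriveSpaceMonitoring": {
  "MinFreeSpace": "10G",
  "MinFreeSpaceDrives": [ "C", "D" ]
}
```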
Queues
Queues are staging directories on the Elevate server where data will live until it can be transmitted to Azure. Typically there will be three queues defined for a given Elevate instance, but you can define more to change where data is sent.
- Scroll down until you find the section starting with "Queues": [
- Specify the configuration as appropriate.
| Identifier | A unique name for the queue. |
|---|---|
| IOTHubConnectionString | The connection string to the IoT hub device that data will be sent to. After it's saved in the configuration.json, the value will be moved to the key vault. |
| QueueDirectory | Where the data will live on the Elevate server's hard drives. The larger the server, the more space you should ensure exists on the drive for this directory. |
| ContentEncoding | The encoding that has been applied to the message body. |
| ContentType | The original type of the message body. |
| Properties | A set of custom properties with values. E.g., { "Color": "blue", "Destination": "1985" } |
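A sketch of a single queue entry. The identifier, path, encoding, and properties are hypothetical, the IoT hub connection string is a placeholder, and the name/value shape shown for Properties is an assumption:

```json
"Queues": [
  {
    "Identifier": "primary",
    "IOTHubConnectionString": "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>",
    "QueueDirectory": "D:\\ElevateQueues\\primary",
    "ContentEncoding": "gzip",
    "ContentType": "application/json",
    "Properties": { "Color": "blue", "Destination": "1985" }
  }
]
```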
Store and Forward
The Store and Forward service receives data from other services and uploads it to IoT hub devices.
- Scroll down until you find the section starting with "StoreAndForwardInstances"
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkingDirectory | The directory in which Store and Forward will store its working files (of which it currently has none). Defaults to C:\ProgramData\Uptake\StoreAndForward |
|---|---|
| MaxConcurrentThreads | The number of threads that may be used to pull messages from the queue and send them to the IoT hub. Defaults to 8. |
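Taken together, the section might look like this sketch; the instance name "1" and the raised thread count are illustrative:

```json
"StoreAndForwardInstances": {
  "Default": {
    "WorkingDirectory": "C:\\ProgramData\\Uptake\\StoreAndForward",
    "MaxConcurrentThreads": 8
  },
  "1": {
    "WorkingDirectory": "C:\\ProgramData\\Uptake\\StoreAndForward",
    "MaxConcurrentThreads": 16
  }
}
```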
PI Event Frames Collector
- Scroll down until you find the section starting with "EventFramesInstances".
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkingDirectory | The directory in which PI Event Frames Collector will store its data. Databases will be created here, as will directories for each instance to store working files. Defaults to C:\ProgramData\Uptake\PIEventFrames |
|---|---|
| PacketFrequency | The maximum number of seconds a piece of data will have to wait before being transmitted. Defaults to 10. |
| ExpiryDays | How long to wait (in days) after a transaction has been completed before removing it. Defaults to 7. |
| SmallWindowSize | For historical queries, event frames will be read in batches of SmallWindowSize minutes until either the LargeWindowSize is reached or the number of event frames read is larger than MaxBatchSize. Defaults to 60. |
| LargeWindowSize | For historical queries, the largest window size (in minutes) that batches will grow to. Defaults to 1440. |
| RealtimeWindowSize | Real-time queries update event frames on a scheduled basis, every few minutes. This sets the number of minutes between scans. Defaults to 10. |
| MaxTransactions | The maximum number of transactions which can be running at once. Note that all real-time transactions are merged into a single running transaction. If set to null, there is no limit to the maximum number of transactions that can be run at once. Defaults to null. |
| MaxTransactionsPerServer | The maximum number of transactions which can be running at once on any one server. If set to null, there is no limit to the maximum number of transactions which can be run at once on any one server. Defaults to null. |
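Collecting the defaults above into one instance entry gives this sketch (the instance name "1" is illustrative):

```json
"EventFramesInstances": {
  "1": {
    "WorkingDirectory": "C:\\ProgramData\\Uptake\\PIEventFrames",
    "PacketFrequency": 10,
    "ExpiryDays": 7,
    "SmallWindowSize": 60,
    "LargeWindowSize": 1440,
    "RealtimeWindowSize": 10,
    "MaxTransactions": null,
    "MaxTransactionsPerServer": null
  }
}
```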
PI Data Pipe Outage Handler
- Scroll down until you find the section starting with "PIOutageHandlerInstances".
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkingDirectory | Ensure that the drive used has a large amount of free space. The exact amount needed will vary depending on the number of tags and the duration of high-rate event cycles, but 450 GB may be enough in most cases. |
|---|---|
| DataRetrievingThreads | The number of threads that will be used to retrieve data. Defaults to 8. |
| DataSendingThreads | The number of threads that will be used to send data. Defaults to 2. |
| ReconciliationThreads | The number of threads that will be used to reconcile data. Defaults to 2. |
| QueueProcessingThreads | The number of threads that will be used to process the suspect data queue. Defaults to 2. |
| MaxMessageSize | The size (in bytes) of the largest message to send to the IoT hub. Defaults to 200000 bytes. Cannot be set to larger than 230000 or less than 4000. |
| MaximumLatency | How often reconciled data should be sent. If an event for data reconciliation is received, it will be sent in at most this many milliseconds. Defaults to 5000 (5 seconds). |
| PointsPerMessage | The number of events that should be included in a data reconciliation message. Events will be buffered until this many are received, and then they will be sent in a single message to the IoT hub. |
| EventsPerBatch | The number of events that should be read at a time when backfilling an outage. Every time this many events are read, they will be flushed to disk. The higher this value is, the more memory will be consumed during an outage fetch, but the faster it will go. Defaults to 1 million. Minimum is 50k. |
| CompressOutput | Enable or disable gzip compression of IoT hub messages. |
| OutageMaximumAge | The maximum number of hours it should take to fill a gap. Defaults to 12. |
| OutageGracePeriod | The longest number of hours that PI Data Pipe Collector might be down for. Defaults to 24. |
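As a sketch, an instance entry with the defaults above might look like the following. The working directory path is hypothetical (no default is documented); pick a drive with ample free space as noted for WorkingDirectory:

```json
"PIOutageHandlerInstances": {
  "1": {
    "WorkingDirectory": "D:\\Uptake\\PIOutageHandler",
    "DataRetrievingThreads": 8,
    "DataSendingThreads": 2,
    "ReconciliationThreads": 2,
    "QueueProcessingThreads": 2,
    "MaxMessageSize": 200000,
    "MaximumLatency": 5000,
    "EventsPerBatch": 1000000,
    "OutageMaximumAge": 12,
    "OutageGracePeriod": 24
  }
}
```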
PI Data Pipe Collector
- Scroll down until you find the section starting with "PIDataPipesInstances".
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkingDirectory | The directory PI Data Pipes Collector will create instance databases in. Defaults to %PROGRAMDATA%\Uptake\PIDataPipes |
|---|---|
| OutageTimeout | |
| PipeInactivityLimit | The amount of time, in minutes, after which no data received on a pipe results in an outage. Defaults to 60. |
| PipeAutoSendTimer | The number of seconds the pipe will wait before sending a batch. Set this to about 30-60 seconds for large data throughput to ensure maximum compression. For smaller data throughput, use a smaller interval to reduce latency in batch building. |
| PipeType | The type of pipe to collect data from. Can be Snapshot, Archive or TimeSeries. Defaults to TimeSeries. |
| MaxPipeSize | The number of tags to put in each PI data pipe. |
| MaxPullSize | The number of messages to pull from the PI buffer each cycle. |
| MessageBatchSize | The number of PI events to batch before sending. For large throughput, configure 45000-60000 message batches to get the best compression. If not using compression, set to 1350. |
| OutputPipeTagsToFile | Set to false |
| RPCPingFrequency | |
| RPCTimeout | |
| ResetWait | Number of seconds to wait for data to come through a pipe before deciding it needs resetting. Defaults to 120. |
| MaximumResetWait | Pipe resets get exponentially longer while waiting for the server to respond. This is the maximum number of minutes to wait for any pipe to reset before flagging it as unresponsive. If all pipes reach a state where they've been flagged as unresponsive, then the connection to the server will be reset and all pipes will be recreated. |
| UnresponsiveResetMax | If a given pipe reaches the unresponsive level this number of times, the collector will reset its connection to that PI server regardless of the connection state of other pipes. Defaults to 5. |
| CompressOutput | Enable or disable gzip compression of IoT hub messages. |
PI Collector
- Scroll down until you find the section starting with "PICollectorInstances".
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkingDirectory | Directory in which the PI collector will store its data. Databases will be created here, as will directories for each instance to store working files. Defaults to C:\ProgramData\Uptake\PICollector |
|---|---|
| MaxTransactions | Maximum number of transactions that will be allowed to be run simultaneously. Can be set to null if no limit should be imposed. Defaults to 50. |
| MaxTransactionsPerServer | Maximum number of transactions that should be allowed to be run simultaneously on a single server. Can be set to null if no limit should be imposed. Defaults to 4. |
| MessageSize | Maximum message size when sending data to the IoT hub, in bytes. Can be at most 230,000, must be at least 4,000. Any values outside that range will be ignored and the min/max used as appropriate. |
| WindowSize | The default window size for historical transactions sent to the IoT hub. This controls how much data is read at a time, in minutes. Defaults to 60. |
| LongWindowSize | The default window size for bulk history transactions. This controls how much data is read at a time, in minutes. Defaults to 1440. |
| ExpiryDays | How long to wait (in days) after a transaction has been completed before removing it. Defaults to 7 days. |
| MaxParquetSize | Parquet files will be allowed to grow until they have exceeded this value, at which point they'll be uploaded. This value is in MB. The value can also be set to null, meaning parquet files will be allowed to grow as large as needed to accommodate all the data from the request in one file. The default value is 500. |
| MemoryFootprint | Determines the amount of memory the program should attempt to limit itself to when MemoryStrategy is anything other than None. Can be either #M, #G, #%T, or #%A, where # is an integer greater than 0. #M: # is the number of megabytes of memory to limit the program to. #G: # is the number of gigabytes of memory to limit the program to. #%T: # is the percentage of the total physical memory on the system to limit the program to. #%A: # is the percentage of the available memory on the system to limit the program to. The program will not permit more than 90% of the total physical memory to be used and will always leave at least 1G of physical memory free. It will always request at least 200M of memory. It will not run on systems with 1G of RAM or less. Defaults to "50%A". Note that memory monitoring is more what you call "guidelines" than actual rules. |
| MemoryStrategy | Which memory guarding strategy should be used. Can be either None, Guard, or Tweak. None: no memory guarding strategy is used. Transactions will use as much or as little memory as the configured window size and parquet group row counts require for the data that the transaction retrieves. Guard: transactions that would cause the collector to use too much memory with the configured settings will have their window size and/or parquet group row counts reduced. Data spikes which look like they'll push the transaction over the estimated limit will cause re-evaluation of window size and/or parquet group row counts. Tweak: as with Guard, but transactions may also have their window size and/or parquet group row counts increased if the memory ceiling allows it. Defaults to Guard. |
| ParquetGroupRowCount | Determines how many rows of data to include in parquet row groups. Larger parquet row groups will provide better performance to systems reading the parquets, but will require more memory to write. Defaults to 100,000. Values less than 50,000 or greater than 1,000,000 will be adjusted to those limits. |
| CompressOutput | Enable or disable gzip compression of IoT hub messages. |
| ParquetContainer | The container in the storage account that bulk history parquets will be uploaded to. Defaults to bulk-history-parquets. |
| GapFillContainer | The container in the storage account that gap fill data will be uploaded to. Defaults to gap-fill-data. |
| GapFillQueue | The storage queue that will be used to signal uploaded gap fill data. Defaults to gap-fill-queue. |
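As one hypothetical tuning, capping instance "1" at 8 GB with the Guard strategy would look like this sketch (any settings not listed keep their defaults):

```json
"PICollectorInstances": {
  "1": {
    "WorkingDirectory": "C:\\ProgramData\\Uptake\\PICollector",
    "MemoryFootprint": "8G",
    "MemoryStrategy": "Guard",
    "ParquetGroupRowCount": 100000,
    "CompressOutput": true
  }
}
```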
Geo SCADA Collector
- Scroll down until you find the section starting with "GeoScadaCollectorInstances".
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkingDirectory | Directory in which Geo SCADA Collector will store its data. Databases will be created here, as will directories for each instance to store working files. Defaults to C:\ProgramData\Uptake\GeoScadaCollector |
|---|---|
| MaxTransactions | Maximum number of transactions that will be allowed to be run simultaneously. Can be set to null if no limit should be imposed. Defaults to 50. |
| MaxTransactionsPerServer | Maximum number of transactions that should be allowed to be run simultaneously on a single server. Can be set to null if no limit should be imposed. Defaults to 2. |
| ExpiryDays | How long to wait (in days) after a transaction has been completed before removing it. Defaults to 7 days. |
| MaxParquetSize | Parquet files will be allowed to grow until they have exceeded this value, at which point they'll be uploaded. This value is in MB. The value can also be set to null, meaning parquet files will be allowed to grow as large as needed to accommodate all the data from the request in one file. The default value is 500. |
| RealtimeWindowSize | The number of minutes between scans for real-time data collection. Defaults to 1. Minimum is 1. |
| TagBatchSize | The maximum number of tags to include in each read request to Geo SCADA. Defaults to 100. |
| CompressOutput | Enable or disable gzip compression of IoT hub messages. |
| ParquetContainer | The container in the storage account that bulk history parquets will be uploaded to. Defaults to bulk-history-parquets. |
| GapFillContainer | The container in the storage account that gap fill data will be uploaded to. Defaults to gap-fill-data. |
| GapFillQueue | The storage queue that will be used to signal uploaded gap fill data. Defaults to gap-fill-queue. |
IP.21 Collector
- Scroll down until you find the section starting with "IP21CollectorInstances".
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkingDirectory | Directory in which IP.21 Collector will store its data. Databases will be created here, as will directories for each instance to store working files. Defaults to C:\ProgramData\Uptake\IP21Collector |
|---|---|
| MaxTransactionsPerServer | Maximum number of transactions that should be allowed to be run simultaneously against a single IP.21 server. Defaults to 2. |
| MaxTransactions | Maximum total number of transactions that will be allowed to run simultaneously. Defaults to 50. |
| ExpiryDays | After a transaction has completed, the number of days to wait before removing it from the collector’s database. Defaults to 7. |
| MaxParquetSize | Parquet files will be allowed to grow until they have exceeded this value, at which point they’ll be uploaded. This value is in MB. Defaults to 500. |
| RealtimePollingRateInSeconds | Minimum number of seconds between polls of the IP.21 server for real-time data. Defaults to 10. |
| HistoricWindowSize | How many minutes of data will be read at one time when processing a historical transaction. Defaults to 60. |
| CompressOutput | Enable or disable gzip compression of IoT hub messages. Defaults to true. |
Model Reader
- Scroll down until you find the section starting with "ModelReaderInstances"
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkDirectory | Directory in which Model Reader will store its working files. Defaults to C:\ProgramData\Uptake\ModelReader |
|---|---|
| Schedule | Pick one schedule at which to run the Model Reader. Once per day is recommended. |
- Open a Command Prompt as the user that is running the Tags Service, and go to the Model Reader directory. By default, this is C:\Program Files\Elevate\Model Reader.
- Run ModelReader.exe -r <instance> to begin fetching the model and uploading it to Fusion (or wait for a scheduled run to happen).
Upload Service
- Scroll down until you find the section starting with "UploadServiceInstances"
- Within this section, find the instance to modify. Typically, "1". The "Default" section is the configuration given to any new instances that get created.
- Specify the configuration as appropriate.
| WorkingDirectory | Directory in which Upload Service will store its databases. Defaults to C:\ProgramData\Uptake\UploadService |
|---|---|
| ScanFrequency | The number of seconds between retrying failed upload attempts. |
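A sketch of an instance entry; the 60-second scan frequency shown is a hypothetical value:

```json
"UploadServiceInstances": {
  "1": {
    "WorkingDirectory": "C:\\ProgramData\\Uptake\\UploadService",
    "ScanFrequency": 60
  }
}
```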