The Client Gateway is an executable created by T6 Planning to be installed in the client's environment. It enables remote data load tasks. It acts as an intermediary between the server and the client, fetching data according to the configuration provided by the server and returning the requested data.
The Client Gateway is available for Windows and Linux. Install the version that matches your operating system; no additional packages are required as prerequisites.
The Client Gateway runs in the background in a terminal. To terminate the application, switch to the window where it is running and press CTRL+C.
Before starting the installation, download the Client Gateway file for your operating system, Linux or Windows:
- Client Gateway for version v11.11:
- Client Gateway for version v12.0:

The operating system must also support one of the TLS 1.2 or 1.3 ciphers used by T6 Planning:
- TLS 1.2:
- TLS 1.3:
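To confirm that the machine can negotiate one of these protocol versions against the T6 server, openssl offers a quick check (a minimal sketch; yourserver.example.com stands in for your actual T6 server address):

# Attempt a TLS 1.2 handshake (look for the negotiated cipher in the output)
openssl s_client -connect yourserver.example.com:443 -tls1_2 < /dev/null
# Attempt a TLS 1.3 handshake
openssl s_client -connect yourserver.example.com:443 -tls1_3 < /dev/null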
It is necessary to allow port 443 through the firewall.
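How the rule is created depends on the environment. As an illustration, on a Linux host with ufw or a Windows host using netsh, it could look like this (the gateway opens outbound HTTPS connections to the server; adjust the direction to your topology):

# Linux (ufw): allow outbound HTTPS
sudo ufw allow out 443/tcp
# Windows (elevated prompt): allow outbound HTTPS
netsh advfirewall firewall add rule name="T6 Client Gateway" dir=out action=allow protocol=TCP remoteport=443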
In the unzipped folder containing the Client Gateway files, look for the appsettings.json file. Open it in a text editor, preferably one with JSON support.
Below is a listing of all the parameters currently available in the Client Gateway, based on the version used in this manual.
{
  "DataLoadGatewayConfig": {
    "LogFile": "ppGateway.log",
    "HealthFile": "ppHealth.log",
    "HealthInterval": 20,
    "Url": "http://localhost/Sysphera/dataloadhub",
    "Key": "",
    "Issuer": "https://sysphera.com/dataloadhub/issuer",
    "Audience": "https://sysphera.com/dataloadhub/audience",
    "ClientId": "",
    "DataSources": [
      {
        "DataSourceName": "",
        "DataSourceType": "",
        "StringConnection": ""
      }
    ]
  }
}
Parameter Descriptions:
Both LogFile and HealthFile accept full paths where the files are expected to be saved. Relative paths are also accepted; in that case, the folder from which the Client Gateway was started is used to form the absolute path.
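While testing, watching these files is a quick way to confirm that the gateway is alive. Assuming the default file names from the configuration above and a Linux shell:

# Follow the gateway log in real time
tail -f ppGateway.log
# The health file is refreshed at each HealthInterval tick
cat ppHealth.log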
Url: this parameter must contain the /dataloadhub/ endpoint. It informs where the server-side component with which the Client Gateway communicates is located. Three of the parameters do not need to be changed from their default values.
For the next parameters, we will need to access T6 and create a Remote Data Load object.
Multiple data sources can be configured; simply add them to the DataSources array, respecting the JSON format. Connections are supported through SQL Server, Oracle, REST, MySQL, PostgreSQL, and ODBC.
To obtain the data to fill in the parameters mentioned above, follow these steps:
- When the Remote Data Load object is first opened, the ClientId is shown as 00000000-0000-0000-0000-000000000000 because no datasource has been saved yet;
- The Name field corresponds to the DataSourceName parameter of the appsettings.json file;
- The connection data entered in T6 corresponds to the StringConnection parameter of the appsettings.json file.

We will assign these values to the mentioned parameters. Below is an example using SQLSERVER, ORACLE, POSTGRESQL, MYSQL, COSMOSDB, and DATALOADREST data sources:
{
  "ClientId": "A1A1A1A1-B2B2-C3C3-D4D4-E5E5E5E5E5E5",
  "DataSources": [
    {
      "DataSourceName": "SQLServer",
      "DataSourceType": "SQLSERVER",
      "StringConnection": "Data Source=SQLServerConnection;Initial Catalog=DataBaseName;Trusted_Connection=false;Persist Security Info=True;User ID=User;Password=pwd"
    },
    {
      "DataSourceName": "Oracle",
      "DataSourceType": "ORACLE",
      "StringConnection": "Data Source=OracleConnection;User ID=user;Password=pwd"
    },
    {
      "DataSourceName": "PostgreSQL",
      "DataSourceType": "POSTGRESQL",
      "StringConnection": "Server=host.docker.internal;Port=5432;Database=mydb;User Id=user;Password=pwd;"
    },
    {
      "DataSourceName": "MySQL",
      "DataSourceType": "MYSQL",
      "StringConnection": "Server=host.docker.internal;Database=myDataBase;Uid=user;Pwd=pwd;"
    },
    {
      "DataSourceName": "CosmosDB",
      "DataSourceType": "COSMOSDB",
      "StringConnection": "AccountEndpoint=https://cosmodb-example.documents.azure.com:443/;AccountKey=bpuRR4tLLQO6M2RB1Jop1KBJbByDm1zLGp2LJHeZJLaSSkft5aCApY9T2tUKx9qg591XyaSLZ0hRDDiACDb44k1vkA==",
      "Configuration": {
        "Database": "SampleDB",
        "Container": "SampleContainer"
      }
    },
    {
      "DataSourceName": "Data Load Rest",
      "DataSourceType": "DATALOADREST"
    }
  ]
}
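Since a malformed appsettings.json (a single trailing comma is enough) will keep the gateway from reading its configuration, it is worth validating the file after editing. A quick check, assuming the jq utility is installed:

# Pretty-prints the file if it is valid JSON; reports the parse error otherwise
jq . appsettings.json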
Note: To configure CosmosDB in T6, the configurations remain the same. You only need to specify the Name field (the same as the DataSourceName parameter).
Using remote data loading with a connection to a NoSQL database (CosmosDB), we can ensure the existence of a column, even if it contains no data:
To illustrate, let’s create a Workflow process:
Create a Workflow object and add a task;
Assign any name to the task;
In Task Type, select Remote Data Load and click Next;
In Remote Data Load, select the previously created remote data load object (CosmosDB);
In SQL Command, insert the desired SQL statement to execute, for example:
SELECT
    Container.ID AS itemId,
    Container.price,
    Container.tags,
    (IS_DEFINED(Container.custom) ? Container.custom : null) AS custom
FROM Container
This syntax will query the database and include a column named "custom," whether it exists in the database or not. If it exists and contains data, the data will be returned. If not, the column will return a value of "null."
In Destination, select New:
Click Finish to create a Data Table with the name of the connection established in the remote data load object.
You can also select an existing data table in the system as the Destination, but the number of columns and their headers must match those of the source table.
By checking the Automatically Restructure Data Table checkbox, the destination table will be automatically adjusted without requiring the same number and names of columns as the source table.
Save and publish the workflow process.
First, we need to prepare a container for the gateway:
- Navigate to the src folder of the PowerPlanning project: cd C:...\powerplanning\src;
- Identify the public Client Gateway image (PublicImage/ClientGateway:latest). Replace PublicImage/ClientGateway:latest with the correct name of the public image you are using;
- Copy the .env sample file into the src folder: cp ClientGateway/.env.sample ./;
- Edit the .env.sample file with your configurations: code .env.sample. The clientID is obtained through T6: when we create a "Remote Data Load" object, upon opening it we will have a code that should be inserted in the configuration entry DataLoadGatewayConfig__ClientId=********-****-****-****-***********;
- Run the container:

docker run --env-file .env.sample --name ClientGateway -d PublicImage/ClientGateway:latest
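To confirm that the container came up correctly, the standard Docker commands apply (the container name matches the --name used above):

# Check that the ClientGateway container is running
docker ps --filter name=ClientGateway
# Inspect its output for connection or configuration errors
docker logs ClientGateway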
A Data Table will be created within T6 with the name of the connection created in the Remote Data Load object (Postgres, in this example);
We can check the Automatically restructure data table checkbox, which will automatically adjust the destination table without needing to have the same number and name of columns as the source table;
By adding data to the table externally (through the database) and executing the trigger again, these changes will be reflected in the data table created in T6.
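For example, a row inserted directly through the database (a sketch assuming the PostgreSQL datasource from the example configuration; sample_table and its columns are hypothetical):

# Hypothetical table and column names, shown only to illustrate an external change
psql -h localhost -p 5432 -d mydb -U user -c "INSERT INTO sample_table (id, name) VALUES (1, 'external row');"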
Open the appsettings.json file inside the ClientGateway folder. In the configuration, there are 2 main parameters that we must pay attention to:
- Url: it must necessarily contain the endpoint /dataloadhub;
- ClientId: this is the code generated when we save the Remote Data Load object in T6.
In the appsettings.json file, within the DataSources array, we will insert another block of information:

{
  "DataSourceName": "ODBC MSSQL",
  "DataSourceType": "ODBC",
  "StringConnection": "Driver={ODBC Driver 17 for SQL Server};Server=myServerAddress;Database=myDataBase;UID=myUsername;PWD=myPassword;"
}
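The connection string above assumes that the named ODBC driver is actually installed on the machine running the gateway. This can be checked from the command line (odbcinst ships with unixODBC on Linux; on Windows, the PowerShell cmdlet Get-OdbcDriver serves the same purpose):

# Linux (unixODBC): list installed ODBC drivers
odbcinst -q -d
# Windows (PowerShell): list installed ODBC drivers
Get-OdbcDriver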
After configuring, we will return to T6, where we will create a Workflow Process to visualize the operation.
We can select in Destination a data table already present in our system, but it is necessary that the number of columns and the names in their headers are the same as the source table;
We can check the Automatically restructure data table checkbox, which will automatically adjust the destination table without needing to have the same number and name of columns as the source table;
We will create a Remote Data Load object in T6 (or add another connection if you already have such an object);
Open the appsettings.json file inside the ClientGateway folder; within the DataSources array, we will insert another block of information:

{
  "DataSourceName": "ODBC EXCEL",
  "DataSourceType": "ODBC",
  "StringConnection": "Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=C:\\MyExcel.xls;"
}
In StringConnection, DBQ is the path where the Excel file is saved.
After configuring, we will return to T6, where we will create a Workflow Process to visualize the operation.
An error will occur if there are changes to the existing columns in the Excel file. The same column names must be maintained in the header, and the number of columns should not be altered.
In the DataLoadGatewayConfig section of the configuration file, it is necessary to include a new entry:

{
  "DataSourceName": "Data Load Rest",
  "DataSourceType": "DATALOADREST"
}
Steps to register the Client Gateway as a Windows service.
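A minimal sketch of the registration, assuming the gateway was unzipped to C:\ClientGateway and using ClientGateway as the service name (both are illustrative):

# Run from an elevated command prompt; SC.exe requires the space after each "="
sc.exe create ClientGateway binPath= "C:\ClientGateway\ClientGateway.exe" start= auto
# Start the newly registered service
sc.exe start ClientGateway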
SC.exe has other parameters that can be used when creating a service; the step-by-step described above uses only the minimum required parameters to register a service. You can learn more about the other parameters at the link: Microsoft Learn - SC.exe create
To remove the created service, the command is: sc delete ServiceName, where ServiceName is the name of the service we want to remove.
After downloading the ClientGateway file for Linux in section 2.1 of this manual, we will need to create a service in the /etc/systemd/system/ directory, which we will name ClientGateway.service.
Text editors such as nano, vi, or vim can be used; in this manual, we will use nano:

sudo nano /etc/systemd/system/ClientGateway.service
# ClientGateway.service
[Unit]
Description=T6 Enterprise Client Gateway

[Service]
ExecStart=dotnet /home/azureuser/ClientGateway/ClientGateway.dll
SyslogIdentifier=T6ClientGateway
# Name of the Linux user who will run the service
User=azureuser
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Save the file (in nano, press CTRL + O) and exit the editor (in nano, press CTRL + X);
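Before the service can report a status, systemd must load and start it. A typical sequence, using standard systemctl commands and the unit name defined above:

# Reload systemd so it picks up the new unit file
sudo systemctl daemon-reload
# Start the service now and enable it at boot
sudo systemctl start ClientGateway
sudo systemctl enable ClientGateway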
To ensure that the service is working correctly, we can check its status by running the following command: sudo systemctl status ClientGateway.
The incremental data load replaces the data in the destination database table by first deleting the records in the selected period and then inserting the new data.
To use the incremental data load, we will follow these steps:
- In the Workflow, create text parameters named TableName, StartDate, and EndDate (for additional information on Workflow, Triggers, and Parameters, visit: Workflow BPM);
- We can select in Destination a data table already present in our system, but it is necessary that the number of columns and the names in their headers are the same as the source table;
- We can check the Automatically restructure data table checkbox, which will automatically adjust the destination table without needing to have the same number and name of columns as the source table;
- Fill in the StartDate parameter, referring to the start date of the records we want, and the EndDate parameter, referring to the end date of the selected records.

These settings will only define the records that will be deleted from the destination table. The insertion part will depend on what is defined in the SQL command (if it only does the "SELECT", all records will be loaded, regardless of the quantity).
Example of the SQL command applying the parameters:
SELECT * FROM REP_EXPLORER_OBJECT
WHERE
    datModified >= COALESCE(NULLIF('Instance(StartDate)', ''), '1970-01-01')
AND
    datModified < COALESCE(NULLIF('Instance(EndDate)', ''), '5000-01-01')
In the trigger, fill the StartDate and EndDate parameters with the start date and end date, respectively.

Note: Whenever the T6 version is updated, it is strictly necessary to update the Client Gateway version as well.
To use data loading for dynamically creating a data table, follow these steps:
Create a Workflow Process object:
Create a text parameter named TableName (this parameter will be used in step 8 of this topic).
Create a new task and assign any name to it.
In Task Type, select Remote Data Load and click Next.
In Remote Data Load, select the previously created remote data load object.
In SQL Command, insert the desired SQL query to execute.
In Destination, select any option from the dropdown menu (regardless of the data table selected, it will only be created when the process trigger is fired).
Check the Automatically Restructure Data Table checkbox to enable automatic adjustments to the destination table.
In Parameter for Dynamic Table Name, select the previously created TableName text parameter. The TableName parameter will define the name of the destination data table.
If no data table exists with the defined name, it will be dynamically created and saved in the Root folder of T6.
Warning: If a data table already exists with the name defined in the parameter, its data will be overwritten.
Once the process configuration is complete, click Save and publish the process.
In the process trigger, set the TableName parameter to the desired name of the destination data table.
Finally, after defining the parameter value, fire the trigger.