Deployment and Initial Usage
This guide describes how to set up and invoke the Model9 transform service from the mainframe, using JCL. The service transforms a Model9 data set backup copy, archive, or import into a readable file in the cloud. Once transformed, the readable file can be accessed directly or via data analytics tools.
If you are using the Model9 transform service as an on-premises service, refer to the Transform On-Premises Deployment section below.

Using the Transformation Service

Step 1: Prerequisites

Verify that Model9 Cloud Backup and Recovery for z/OS is installed

Model9 is responsible for delivering the data set from the mainframe to cloud or on-premises storage. The data set is delivered as a backup copy, an archive, or an imported tape data set, and provides the input to the transform service.

Download z/OS cURL

This free tool allows you to invoke the transform service from z/OS. If cURL is not installed under /usr/bin, edit the PATH statement on line 4 of the script in Step 2 to add the directory where the cURL module resides.
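For example, if cURL were installed under /usr/local/bin (an assumed location; adjust to your environment), line 4 of the script in Step 2 would become:
export PATH=/usr/local/bin:/usr/bin:/bin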

Step 2: Copy the script

Copy the following script to /usr/lpp/model9/transformService.sh:
#!/bin/sh
json=$1
url='https://<transform-service-url>:<port>/transform'
export PATH=/usr/bin:/bin
export _EDC_ADD_ERRNO2=1
cnvto="$(locale codeset)"
headers="Content-Type:application/json"
echo "Running Model9 transform service"
output=$(curl -H "$headers" -s -k -X POST --data "$json" $url)
if ! [ -z "$output" ]; then
    echo "Transform ended with the following output:"
    # If the answer is in ASCII then convert to EBCDIC
    firstChar=$(echo $output | cut -c1)
    if [ "$firstChar" = "#" ]; then
        convOutput="$(echo $output | iconv -f ISO8859-1 -t $cnvto)"
    else
        convOutput=$output
    fi
    echo "$convOutput"
fi
status=$(echo $convOutput | tr -s " " | cut -d, -f1 | cut -d" " -f3)
echo "Transform ended with status: $status"
if [ "$status" = '"OK"' ]; then
    exit 0
elif [ "$status" = '"WARNING"' ]; then
    exit 4
else
    exit 8
fi

Step 3: Copy the JCL

Copy the following JCL to a local library and update the job card according to your site standards:
//M9TRNSFM JOB 'ACCT#',REGION=0M,CLASS=A,NOTIFY=&SYSUID
//EXTRACT EXEC PGM=BPXBATCH
//STDOUT DD SYSOUT=*
//STDERR DD SYSOUT=*
//STDPARM DD *
SH /usr/lpp/model9/transformService.sh
// DD *,SYMBOLS=EXECSYS
'{
"input": {
"name" : "<DATA-SET>",
"complex": "group-&SYSPLEX",
"type": "<BACKUP|ARCHIVE|IMPORT>"
},
"output": {
"prefix" : "/transform/&LYR4/&LMON/&LDAY",
"compression" : "none",
"format" : "text"
},
"source": {
"url" : "<URL>",
"api" : "<API>",
"bucket" : "<BUCKET>",
"user" : "<USER>",
"password": "<PASSWORD>"
}
}'
/*
//

Step 4: Customize the JCL

Update the object storage details

Copy the following object storage variables from the Model9 agent configuration file:
    <URL>
    <API>
    <BUCKET>
    <USER>
    <PASSWORD>
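As an illustration only (the actual values, including the "api" string, must be copied from your Model9 agent configuration file), a filled-in "source" section for an S3-compatible bucket might look like:
"source": {
    "url" : "https://s3.amazonaws.com",
    "api" : "<API>",
    "bucket" : "my-model9-bucket",
    "user" : "my-access-key-id",
    "password": "my-secret-access-key"
}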

Update the complex name

The “complex” name represents the group of resources that the Model9 agent can access. By default, this group is named group-<SYSPLEX> and it is shared by all the agents in the same sysplex. The transform JCL specifies the default, using the z/OS system symbol “&SYSPLEX”.
Note
    If the default was kept for “complex” in the Model9 agent configuration file, no change is needed.
    If the “complex” name was changed in the Model9 agent configuration file, change “complex” in the JCL accordingly.
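For example, if the agent configuration defines a complex named production-group (a hypothetical name), the corresponding line in the JCL would read:
"complex": "production-group",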

Update the transform prefix

By default, the JCL creates a transformed copy of your input data set in the same bucket, with the prefix /transform/&LYR4/&LMON/&LDAY. The prefix uses the following z/OS system symbols:
    &LYR4 - The year in 4 digits, e.g. 2019
    &LMON - The month in 2 digits, e.g. 08
    &LDAY - The day in the month in 2 digits, e.g. 10
You can change the prefix according to your needs.
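With the default prefix, for example, a job that runs on August 10, 2019 writes its output under /transform/2019/08/10. As an illustrative alternative (not a default), the following prefix would group output by month only:
"prefix" : "/transform/&LYR4/&LMON",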

Step 5: Choose the data set to be transformed

The data set to be transformed should be a backup copy, an archive, or an imported tape data set delivered by the Model9 agent:
    <DATA-SET> - the name of the data set
    <BACKUP|ARCHIVE|IMPORT> - whether the data set is a Model9 backup, archive, or import
To change the attributes of the input and the output, and for a full description of the service parameters, see Service parameters.
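For example, a hypothetical "input" section requesting transformation of a backup copy of data set PROD.SALES.DATA (an illustrative name) would look like:
"input": {
    "name" : "PROD.SALES.DATA",
    "complex": "group-&SYSPLEX",
    "type": "BACKUP"
},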

Step 6: Run the JCL

Submit the job and view the output. See Service response and log samples for sample output.
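For example, if the JCL was saved as member M9TRNSFM in a library named HLQ.JCL.CNTL (a hypothetical library name), it can be submitted from TSO with:
SUBMIT 'HLQ.JCL.CNTL(M9TRNSFM)'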

Step 7: Access the transformed data

In the returned response, the outputName field points to the path inside the bucket where the transformed data resides. See Service response and log samples.
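For example, assuming the output was written to an S3-compatible bucket and the AWS CLI is configured for that bucket, the transformed file could be downloaded as follows (the bucket name and object path are placeholders; use the actual path returned in outputName):
aws s3 cp "s3://<BUCKET>/<outputName>" ./transformed-output.txt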

Transform On-Premises Deployment

This guide describes how to set up the Model9 transform service as an on-premises service.
Once the service is up and running, see Using the Transformation Service for instructions on how to invoke the service from the mainframe, using JCL.
Note
This guide describes how to implement the Model9 transform service on a Model9 Cloud Data Manager installation only.

Step 1: Upload the Model9 transform service installation file

The transform service installation file will be provided by your Model9 representative according to your environment. Upload the relevant file to the server using binary mode.
Table: Available installation files:
    x86        - model9-app-transform_<version>_build_<id>.docker
    Linux on Z - model9-app-transform_<version>_build_<id>-s390x.docker
Note
<version> represents the version number. <id> represents the specific release ID.
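For example, assuming SSH access to the server (the host name and target path are illustrative), the installation file can be uploaded with scp, which transfers it in binary mode:
scp model9-app-transform_<version>_build_<id>.docker root@model9-server:/data/model9/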
Create a work directory under $MODEL9_HOME. The directory should be able to hold at least 20 GB of data:
# Change user to root
sudo su -
# If you haven't done so already, set the model9 target installation path
export MODEL9_HOME=/data/model9
# Change the directory to $MODEL9_HOME
cd $MODEL9_HOME
mkdir $MODEL9_HOME/extract-work

Step 2: Deploy the Model9 transform service component

Deploy the application component using the command that matches your environment:
# On Linux (x86) issue:
docker load -i $MODEL9_HOME/model9-app-transform_<version>_build_<id>.docker

# On Linux on Z issue:
docker load -i $MODEL9_HOME/model9-app-transform_<version>_build_<id>-s390x.docker
Note
<version> represents the version number. <id> represents the specific release ID.

Step 3: Start the service

Start the service using the following command:
docker run -d -p 8443:8443 -v $MODEL9_HOME/extract-work:/data/model9/extract-work:z \
  -v $MODEL9_HOME/keys:/data/model9/keys:z \
  -e "JAVA_OPTS=-Xmx8g" \
  -e "SECURE=true" \
  -e "WORK_DIRECTORY=/data/model9/extract-work" \
  -e "LICENSE_KEY=<license-key>" \
  --restart unless-stopped \
  --name model9cg-v<version> model9/transform:<version>.<id>
Note
<version> represents the version number. <id> represents the specific release ID. <license-key> represents the license key obtained from the Model9 representative or support team.
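To verify that the service started successfully, you can check the container state and review its startup log; these are standard Docker commands shown here as an optional sanity check:
docker ps --filter "name=model9cg"
docker logs model9cg-v<version>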
Table: Supported environment variables:
    JAVA_OPTS - JVM options for the transform service process. Default: none.
    PORT - The port on which the service listens for requests. Default: 5000, or 8443 when SECURE=true.
    WORK_DIRECTORY - The work directory used by the service. Default: /tmp.
    SECURE - Whether the service listens for TLS connections. Default: false.
    KEYSTORE_FILE_PATH - The path to the keystore file used by the service in secure mode. Default: /data/model9/keys/model9-backup-server.p12.
    KEYSTORE_PASSWORD - Keystore password. Default: model9.
    TRUSTSTORE_FILE_PATH - The path to the truststore file used by the service in secure mode. Default: /data/model9/keys/model9-backup-truststore.jks.
    TRUSTSTORE_PASSWORD - Truststore password. Default: model9.
    LICENSE_KEY - The Model9 license key. Default: none.
    PROXY_HOST - Proxy host for the service to use for outgoing connections. Default: none.
    PROXY_PORT - Proxy port. Default: none.
    PROXY_EXCLUDE_LIST_FILE_PATH - The path to a file containing URI regular expressions (one per line) that will not be routed via the proxy. Default: none.
    CHUNK_READAHEAD_BUFFER_SIZE - The size of a single chunk readahead buffer. Default: 209715200 (200 MB).
    CHUNK_READAHEAD_TOTAL_BUFFERS_SIZE - The total size of all chunk readahead buffers. Default: 3221225472 (3 GB).
    CHUNK_READAHEAD_THREAD_POOL_SIZE - Chunk readahead thread pool size. Default: 15.
    COMPRESSION_MANAGER_THREAD_POOL_SIZE - Compression manager thread pool size. Default: 15.
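As an illustrative sketch only (the port, proxy host, and proxy port values below are hypothetical), the following variation of the Step 3 command overrides the listening port and routes outgoing connections through a proxy:
docker run -d -p 9443:9443 -v $MODEL9_HOME/extract-work:/data/model9/extract-work:z \
  -v $MODEL9_HOME/keys:/data/model9/keys:z \
  -e "JAVA_OPTS=-Xmx8g" \
  -e "SECURE=true" \
  -e "PORT=9443" \
  -e "PROXY_HOST=proxy.example.com" \
  -e "PROXY_PORT=3128" \
  -e "WORK_DIRECTORY=/data/model9/extract-work" \
  -e "LICENSE_KEY=<license-key>" \
  --restart unless-stopped \
  --name model9cg-v<version> model9/transform:<version>.<id>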