Performance considerations
Parallelism-related parameters
The model9-local.yml file, residing in the $MODEL9_HOME/conf/ path, contains all default parameters. The main section is ‘model9’ (lower-case letters), and all parameters should be indented under the model9 title, as shown in the following example:

model9.parallelism.datasets.numberOfThreads: 10
model9.parallelism.volumes.numberOfThreads: 10
model9.parallelism.unix.numberOfThreads: 10
model9.parallelism.numOfFailuresPerAgent: 5
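Assuming the configuration is parsed as standard YAML where dotted keys and nested keys are equivalent (as the "indented under the model9 title" instruction suggests), the same defaults can be written in nested form. This is an illustrative sketch of model9-local.yml, not an additional required file:

```yaml
model9:
  parallelism:
    datasets:
      numberOfThreads: 10
    volumes:
      numberOfThreads: 10
    unix:
      numberOfThreads: 10
    numOfFailuresPerAgent: 5
```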
| Parameter | Description | Default |
| --- | --- | --- |
| model9.parallelism.datasets.numberOfThreads | Number of parallel threads running during dataset backup or archive | 10 |
| model9.parallelism.volumes.numberOfThreads | Number of parallel threads running during volume full dumps | 10 |
| model9.parallelism.unix.numberOfThreads | Number of parallel threads running during z/OS UNIX file backups | 10 |
| model9.parallelism.numOfFailuresPerAgent | Number of tolerated failures before removing an agent from a policy run | 5 |
Linux Server Resources
Linux server resources, such as the number of cores and memory, can affect policy run times, as the management server orchestrates the policy run.
For high performance, the following resources are recommended:

| Linux Resource | Value |
| --- | --- |
| Number of CPU Cores | 8+ |
| Memory | 16GB |
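A quick way to check whether a Linux server meets these recommendations is a short script using the standard nproc and free utilities (a minimal sketch; adjust the thresholds if your sizing differs):

```shell
#!/bin/sh
# Report CPU cores and total memory (GB) and compare against the
# recommended sizing of 8+ cores and 16GB memory.
cores=$(nproc)
mem_gb=$(free -g | awk '/^Mem:/ {print $2}')
echo "CPU cores: $cores, memory: ${mem_gb}GB"
if [ "$cores" -ge 8 ] && [ "$mem_gb" -ge 16 ]; then
    echo "meets recommended sizing"
else
    echo "below recommended sizing"
fi
```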
Docker Container Memory
For the Model9 management docker container to be able to utilize the Linux server memory, make sure to run the container with 8GB of memory.
This can be done using the following parameter when starting the server:
CATALINA_OPTS=-Xmx8g
For example:
docker run -d -p 0.0.0.0:443:443 -p 0.0.0.0:80:80 \
-v $MODEL9_HOME:/model9:z -h $(hostname) --restart unless-stopped \
-e "TZ=America/New_York" \
-e "CATALINA_OPTS=-Xmx8g -Djdk.nativeCBC=false" \
--link model9db:model9db \
--name model9-vx.y.z model9:vx.y.z.bbbbbbbb
If the container is already up with a lower value, stop and remove the current container, then use the "docker run" command above to start it again:
docker stop model9-vx.y.z
docker rm model9-vx.y.z
Simultaneous Multithreading (SMT)
Working in multi-threading (MT) mode allows you to run multiple threads per zIIP, where a thread is comparable to the definition of a CP core in a pre-multi-threading environment, resulting in increased zIIP processing capacity. To enable zIIP MT mode, define the PROCVIEW parameter in the LOADxx member of SYS1.IPLPARM in order to utilize the SMT function of z/OS. It defines a processor view of the core, which supports from 1 to ‘n’ threads. Related parameters are MT_ZIIP_MODE and HIPERDISPATCH in IEAOPTxx
. See z/OS MVS Initialization and Tuning Reference for more information.
WLM service class considerations
- The agent utilizes zIIP engines. If the production workload also utilizes zIIP, associate the agent with a service class of a lower priority than the production workload service class, to avoid slowing down the production workload.
- When issuing CLI commands in a highly-constrained CPU environment, verify that the issuer - whether it is a TSO userid or a batch job - has at least the same priority as the agent.
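The SMT settings described under Simultaneous Multithreading (SMT) can be sketched as the following member fragments. These are illustrative: the member suffixes are placeholders, and MT_ZIIP_MODE=2 (two threads per zIIP core, which requires HIPERDISPATCH=YES) is an example value, not a site recommendation:

```
LOADxx member of SYS1.IPLPARM:
PROCVIEW CORE,CPU_OK

IEAOPTxx member of SYS1.PARMLIB:
MT_ZIIP_MODE=2
HIPERDISPATCH=YES
```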
zIIP on CP reporting
Turning on zIIP-on-CP monitoring provides information on zIIP-eligible work that overflowed to a CP. The monitoring is enabled by default only when zIIP processors are configured on the system. If no zIIP processors are configured and you would like to see how much CP time would be saved by configuring zIIP processors in the system, set the PROJECTCPU parameter to YES in IEAOPTxx. This enables monitoring and causes the zIIP on CP chart to be displayed in the agent screen. See z/OS MVS Initialization and Tuning Reference for more information.
Number of zIIP engines
Model9 performance scales linearly: the more zIIP engines available, the higher the throughput.
System wide settings
The system-wide setting of whether to allow zIIP-eligible work to spill over to a CP is defined by the IIPHONORPRIORITY parameter of IEAOPTxx. The default is YES, allowing standard CPs to execute zIIP-eligible and non-zIIP-eligible work in priority order. See z/OS MVS Initialization and Tuning Reference for more information.
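Stated explicitly as an IEAOPTxx fragment (illustrative only; YES is already the default, and NO would prevent standard CPs from running zIIP-eligible work ahead of its priority):

```
IIPHONORPRIORITY=YES
```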
Individual service class settings
The "honor priority" parameter allows limiting individual work from overflowing to a CP, regardless of the system-wide setting. Setting this parameter to NO ensures that zIIP-eligible work does not overflow to a CP.
In some cases (such as sub-capacity CPs, CP capping, etc.), overflowing to CPs can result in degraded performance due to a lack of CP resources.
See z/OS MVS Planning: Workload Management for more information.
Reusing TCPIP connections to Cloud storage when using HTTPS
Connection reuse happens automatically for HTTP sessions (unencrypted sessions).
In order to enable connection reuse for HTTPS sessions (encrypted sessions), you must enable SSL trust between the agent and the object storage.
Once the Object Storage CA certificates are installed on the mainframe, you can use the following parameter in the agent.yml file to enable the trust and connection reuse:
objstore.endpoint.no.verify.ssl: false
When using this parameter, the agent will fail any connection to the object storage if the proper CA certificates are not installed or trusted on z/OS.
For more information, see Enabling trust between the Model9 agent and object storage when working with HTTPS.
Segmentation offloading
TCPIP supports offloading segmentation work to the OSA-Express card. This feature reduces CPU usage and increases network throughput. It can be enabled via the IPCONFIG SEGMENTATIONOFFLOAD statement in the TCPIP profile.
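In the TCPIP profile, this might look like the following fragment (the IPCONFIG6 statement applies only if IPv6 traffic is in use):

```
IPCONFIG  SEGMENTATIONOFFLOAD
IPCONFIG6 SEGMENTATIONOFFLOAD
```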
Maximum and Default Send/Receive Buffer
TCPIP send/receive buffers are used to improve general write/read throughput.
This is especially important for public cloud or any far object storage with high latency.
Use 2M+ buffer sizes for your send/receive buffers:
TCPCONFIG
TCPMAXRCVBUFRSIZE 2M
TCPMAXSENDBUFRSIZE 2M
TCPRCVBUFRSIZE 2M
TCPSENDBFRSIZE 2M
MTU Maximum Transmission Unit size
Every TCPIP frame is broken down into the MTU defined by the system. The z/OS default MTU value of 512 is very small and introduces unnecessary TCPIP CPU overhead. The minimum value to use as the MTU when writing to object storage should be 1492.
Check with your network administrator whether jumbo frames can be utilized to further reduce the CPU overhead and improve throughput. Display the current MTU value using the commands:
| Command | Description |
| --- | --- |
| TSO NETSTAT GATE | The “Pkt Sz” column represents the MTU size for each configured route. Verify the MTU size used by the route to the object storage. If no specific route to your object storage exists, the “Default” route value is used. This value should be equal to or greater than 1492. |
| TSO PING <object-storage-ip> (PMTU YES LENGTH 1400 | This command verifies whether the entire path from this TCPIP stack to the object storage supports frames of at least 1400 bytes. If the output of this command includes “Ping #1 needs fragmentation”, contact your network administrator in order to resolve this issue. |
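To illustrate why a small MTU adds CPU overhead, the following sketch estimates how many packets are needed to move 100MB of payload at different MTU sizes. It assumes roughly 40 bytes of TCP/IP header per packet; 8992 represents a typical jumbo-frame MTU:

```shell
#!/bin/sh
# Estimate packet counts for 100MB of payload at several MTU sizes.
# Each packet carries roughly (MTU - 40) bytes of payload once
# TCP/IP headers are accounted for.
payload=$((100 * 1024 * 1024))
for mtu in 512 1492 8992; do
    per_pkt=$((mtu - 40))
    pkts=$(( (payload + per_pkt - 1) / per_pkt ))
    echo "MTU $mtu: $pkts packets"
done
```

Since each packet implies fixed per-packet processing, moving from MTU 512 to 1492 cuts the packet count, and with it the associated CPU overhead, roughly threefold.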