Tuning the WAS proxy server for long poll
The proxy server must be tuned to allow for the high number of connections in the long poll test scenario.
Before you begin
Make sure you have completed the steps in Setting up a WAS proxy server for long polling.
About this task
If the WebSphere® Application Server (WAS) proxy server is used without additional configuration, requests that are passed to the IBM HTTP Server (IHS) can cause the browser URL to change to the address of the IHS server instead of remaining that of the proxy server. To prevent this issue and other issues from occurring, you must set some custom properties within the WAS proxy server settings. You must also configure the WAS proxy Java™ Virtual Machine (JVM) settings to provide enough memory to handle the high number of connections required for long poll testing.
Procedure
- Stop the proxy server via the Deployment Manager (DM) console.
- Set custom properties. A scripted equivalent is sketched after these steps.
  - On your DM console, navigate to the custom properties panel for the proxy server.
  - Click New....
  - In the fields provided, enter true for the cache.query.string property.
  - Click Apply and save the configuration file if prompted.
  - Repeat this procedure for the http.forwarded.as.was.managed and http.routing.sendReverseProxyNameInHost properties.
  - Stop, synchronize, and restart your proxy nodes.
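If you prefer to script this step, the following wsadmin (Jython) sketch sets all three properties in one pass. The node name proxyNode01, the server name proxy1, and the assumption that the properties hang off the server's ProxySettings object are placeholders to verify against your own cell before running anything.

```
# Sketch: create the three long-poll custom properties on the proxy server.
# proxyNode01 and proxy1 are placeholder names; adjust for your topology.
server = AdminConfig.getid('/Node:proxyNode01/Server:proxy1/')
proxySettings = AdminConfig.list('ProxySettings', server)

for name in ['cache.query.string',
             'http.forwarded.as.was.managed',
             'http.routing.sendReverseProxyNameInHost']:
    AdminConfig.create('Property', proxySettings,
                       [['name', name], ['value', 'true']])

AdminConfig.save()  # same effect as saving the configuration in the console
```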
- Configure WAS proxy JVM settings.
  - Disable automatic restart. A scripted equivalent is sketched after these steps.
    - On your DM console, navigate to the monitoring policy settings for the proxy server.
    - Remove the check from the Automatic restart check box.
    - Synchronize your proxy node.
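The automatic restart flag maps to the autoRestart attribute of the server's monitoring policy, so the same change can be scripted. This wsadmin (Jython) sketch again uses placeholder node and server names.

```
# Sketch: disable automatic restart via the proxy server's monitoring policy.
server = AdminConfig.getid('/Node:proxyNode01/Server:proxy1/')
policy = AdminConfig.list('MonitoringPolicy', server)
AdminConfig.modify(policy, [['autoRestart', 'false']])
AdminConfig.save()
```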
  - Set the WAS proxy thread pool values. A scripted equivalent is sketched after these steps.
    - On your DM console, navigate to the thread pool settings for the proxy server.
    - Click WebContainer and change the values of the minimum size and maximum size to 100. Click Apply and save the configuration.
    - Click Proxy and change the values of the minimum size and maximum size to 100. Click Apply and save the configuration.
    - Stop, synchronize, and restart your proxy nodes.
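The two pool edits can also be made in one wsadmin (Jython) pass, assuming the pools are named exactly WebContainer and Proxy in your configuration.

```
# Sketch: pin the WebContainer and Proxy thread pools at 100/100.
server = AdminConfig.getid('/Node:proxyNode01/Server:proxy1/')
for poolId in AdminConfig.list('ThreadPool', server).splitlines():
    if AdminConfig.showAttribute(poolId, 'name') in ['WebContainer', 'Proxy']:
        AdminConfig.modify(poolId, [['minimumSize', '100'],
                                    ['maximumSize', '100']])
AdminConfig.save()
```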
  - Increase the Transmission Control Protocol (TCP) transport values for the WAS proxy to allow for increased connections. A scripted equivalent is sketched after these steps.
    - On your DM console, navigate to the ports list for the proxy server.
    - Adjacent to PROXY_HTTP_ADDRESS, click View associated transports.
    - In the Transport Chain section, click the name of the setting, which in this case is PROXY_HTTP_ADDRESS.
    - Click TCP inbound channel (TCP #), where # is an arbitrary value.
    - Change the value of Maximum open connections to 50000.
    - Click Apply and save the configuration if prompted.
    - Repeat this procedure for the PROXY_HTTPS_ADDRESS, WC_defaulthost, and WC_defaulthost_secure port list values.
    - Stop, synchronize, and restart your proxy nodes.
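These console clicks map to the maxOpenConnections attribute of each TCP inbound channel, so all four endpoints can be raised at once. The endpoint names below come from the step above; node and server names are placeholders.

```
# Sketch: raise maximum open connections to 50000 on the listed endpoints.
targets = ['PROXY_HTTP_ADDRESS', 'PROXY_HTTPS_ADDRESS',
           'WC_defaulthost', 'WC_defaulthost_secure']
server = AdminConfig.getid('/Node:proxyNode01/Server:proxy1/')
for chan in AdminConfig.list('TCPInboundChannel', server).splitlines():
    if AdminConfig.showAttribute(chan, 'endPointName') in targets:
        AdminConfig.modify(chan, [['maxOpenConnections', '50000']])
AdminConfig.save()
```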
- Set the WAS proxy logging.
What to do next
Read the information in this section for additional tuning guidance. These steps are not mandatory and are provided for reference only.
- A proxy server creates a connection to a PUSH application server to process a push request. This connection is persistent: it lasts for 90 seconds, is then re-created, and uses a unique port on the proxy server. The number of ports on any Linux™ server is finite and can be configured to a maximum of approximately 64000. This means that the absolute theoretical maximum is approximately 64000 concurrent connections to the proxy, which equates to approximately 64000 NC/FS push notification users per proxy server. In practice the limit is lower because Transmission Control Protocol (TCP) connections remain unusable for a period of time after they are disconnected, so at any given time a number of ports sit in this TIME_WAIT state and cannot be used for fresh connections. A rough capacity estimate is sketched after this item.
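The effect of the TIME_WAIT tail on capacity can be estimated with simple arithmetic. The 90-second reconnect interval and 64000-port ceiling come from the paragraph above; the 60-second TIME_WAIT hold is an assumption based on the common Linux default, not a figure from this document.

```
# Back-of-envelope sketch of effective port capacity per proxy server.
ephemeral_ports = 64000      # approximate configurable maximum (from the text)
reconnect_interval = 90.0    # seconds a long-poll connection lives (from the text)
time_wait = 60.0             # seconds a closed port stays unusable (assumed default)

# Each user ties up a port for the live interval plus the TIME_WAIT tail,
# but only gets reconnect_interval seconds of service per cycle.
effective_users = ephemeral_ports * reconnect_interval / (reconnect_interval + time_wait)
print('Approximate concurrent users per proxy: %d' % effective_users)
# => about 38400, which is why the 64000 ceiling is never reached in practice
```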
- Thread pool growth was observed in the various application servers. When the proxy-related thread pools (default and proxy) were allowed to grow unbounded, they appeared to grow whenever WAS could not create more connections to the appropriate JVMs, and that failure was silent. To work around this, the number of TCP connections on the WC_adminhost, WC_adminhost_secure, WC_defaulthost, and WC_defaulthost_secure ports for the Push, Files, and News JVMs was increased to 50000.
  To view the relevant ports and raise the TCP connection limit to 50000, open the ports list under Communications for each JVM and edit the required ports, as shown in the sketch after this item.
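To locate the WC_* ports on a node before editing them, a small wsadmin (Jython) listing such as the following can help; the node name appNode01 is a placeholder.

```
# Sketch: list named endpoints on a node so the WC_* ports can be located.
node = AdminConfig.getid('/Node:appNode01/')
for ep in AdminConfig.list('NamedEndPoint', node).splitlines():
    name = AdminConfig.showAttribute(ep, 'endPointName')
    endPoint = AdminConfig.showAttribute(ep, 'endPoint')
    print('%s -> port %s' % (name, AdminConfig.showAttribute(endPoint, 'port')))
```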
- The ulimit setting for open files on every machine was increased to 500000. The following updates were added to /etc/security/limits.conf and a reboot was performed when finished (a verification sketch follows):

```
*    soft    nofile    500000
*    hard    nofile    500000
root soft    nofile    500000
root hard    nofile    500000
```
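A quick way to confirm the limit took effect is to query it from a process on the tuned host. This sketch uses only the Python standard library and should run under CPython on the Linux machine.

```
# Sketch: verify the effective open-file limits after the reboot.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('soft nofile limit: %s' % soft)
print('hard nofile limit: %s' % hard)
# Both values should report 500000 after the limits.conf change.
```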
- The following Linux™ kernel settings were applied to various machines in the deployment. It is unclear whether these make any appreciable difference (a spot-check sketch follows the settings).
  - Connections machines:

```
net.core.somaxconn = 8192
net.ipv4.tcp_max_orphans = 200000
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 262144
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

# Discourage Linux from swapping idle processes to disk (default = 60)
vm.swappiness = 10

# Increase Linux autotuning TCP buffer limits
# Set max to 16MB for 1GE and 32M (33554432) or 54M (56623104) for 10GE
# Don't set tcp_mem itself! Let the kernel scale it based on RAM.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Make room for more TIME_WAIT sockets due to more clients,
# and allow them to be reused if we run out of sockets
# Also increase the max packet backlog
net.core.netdev_max_backlog = 50000
net.ipv4.tcp_max_syn_backlog = 30000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10

# Disable TCP slow start on idle connections
net.ipv4.tcp_slow_start_after_idle = 0

# If your servers talk UDP, also up these limits
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192

# Disable source routing and redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0

# Log packets with impossible addresses for security
net.ipv4.conf.all.log_martians = 1
```
  - Client machines:

```
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 1024 4096 16384
net.ipv4.tcp_wmem = 1024 4096 16384
```
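Whether or not these settings make an appreciable difference, it is worth confirming what is actually in effect. This sketch reads a few of the values back through /proc/sys; the key list is illustrative and can be extended.

```
# Sketch: spot-check kernel settings by reading them back from /proc/sys.
def read_sysctl(name):
    # sysctl dots map to path separators, e.g. net.core.somaxconn
    with open('/proc/sys/' + name.replace('.', '/')) as f:
        return f.read().strip()

for key in ['net.core.somaxconn',
            'net.ipv4.tcp_fin_timeout',
            'net.core.rmem_max',
            'vm.swappiness']:
    print('%s = %s' % (key, read_sysctl(key)))
```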
- The proxy JVM heap was increased to 16 GB. It is unclear whether this made any appreciable difference. A scripted sketch of the heap change follows this item.
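If you want to make the heap change without the console, the JVM heap sizes live on the server's JavaVirtualMachine configuration object and are specified in megabytes. Node and server names in this wsadmin (Jython) sketch are placeholders.

```
# Sketch: set the proxy JVM maximum heap to 16 GB (16384 MB).
server = AdminConfig.getid('/Node:proxyNode01/Server:proxy1/')
jvm = AdminConfig.list('JavaVirtualMachine', server).splitlines()[0]
AdminConfig.modify(jvm, [['maximumHeapSize', '16384']])
AdminConfig.save()
```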
- For WAS tuning, the authentication cache was set to 300000. This was done because failures were observed that coincided with messages relating to the growth and subsequent overflow of this cache.