Windows MultiPoint takes port 80

On Windows Server 2016 Standard, I had to install MultiPoint to add licenses, but MultiPoint takes port 80.
With the following command:
netsh http show servicestate
you can see:
Server session ID: FF00000520000001
    Version: 2.0
    State: Active
    Properties:
        Max bandwidth: 4294967295
        Timeouts:
            Entity body timeout (secs): 120
            Drain entity body timeout (secs): 120
            Request queue timeout (secs): 120
            Idle connection timeout (secs): 120
            Header wait timeout (secs): 120
            Minimum send rate (bytes/sec): 150
    URL groups:
    URL group ID: FE00000540000001
        State: Active
        Request queue name: Request queue is unnamed.
        Properties:
            Max bandwidth: inherited
            Max connections: inherited
            Timeouts:
                Timeout values inherited
            Number of registered URLs: 1
            Registered URLs:
                HTTP://+:80/MULTIPOINT/IMULTIPOINTCERTIFICATEREQUEST/
How can I change the port number?
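Two related checks may help confirm what is holding port 80 (standard Windows commands, not MultiPoint-specific): netsh http show urlacl lists the URL reservations registered with http.sys, and netstat shows which process owns the listener.

netsh http show urlacl
netstat -ano | findstr :80

Because the MultiPoint endpoint is registered through the kernel HTTP listener (http.sys) rather than a user-mode socket, the port 80 listener will typically show up under PID 4 (the System process).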

Related

Postgres connect time delay on Windows

There is a long delay between "forked new backend" and "connection received", from about 200 to 13000 ms. Postgres 12.2, Windows Server 2016.
During this delay the client is waiting for the network packet to start the authentication. Example:
14:26:33.312 CEST 3184 DEBUG: forked new backend, pid=4904 socket=5340
14:26:33.771 CEST 172.30.100.238 [unknown] 4904 LOG: connection received: host=* port=56983
This was discussed earlier here:
Postegresql slow connect time on Windows
But I have not found a solution.
After rebooting the server the delay is much shorter, about 50 ms. Then it gradually increases in the course of a few hours. There are about 100 clients connected.
I use ip addresses only in "pg_hba.conf". "log_hostname" is off.
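For reference, an address-only pg_hba.conf entry of the kind described would look like the line below (the subnet and auth method are placeholders), so no reverse DNS lookup should be involved:

host    all    all    172.30.100.0/24    md5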
There is BitDefender running on the server but switching it off did not help. Further, Postgres files are excluded from BitDefender checks.
I used Process Monitor, which revealed the following: forking the postgres.exe process takes 3 to 4 ms. Then, after loading DLLs, postgres.exe looks up custom and extended locale info for 648 locales and finds none of them. This locale search takes 560 ms (there is a gap of 420 ms, though). Perhaps this step can be skipped by setting a connection parameter. After reading some TCP/IP parameters, there are no events for 388 ms; this period overlaps the 420 ms mentioned above. Then postgres.exe creates a thread. The total connection time measured by the client was 823 ms.
Locale example, performed 648 times:
"02.9760160","RegOpenKey","HKLM\System\CurrentControlSet\Control\Nls\CustomLocale","REPARSE","Desired Access: Read"
"02.9760500","RegOpenKey","HKLM\System\CurrentControlSet\Control\Nls\CustomLocale","SUCCESS","Desired Access: Read"
"02.9760673","RegQueryValue","HKLM\System\CurrentControlSet\Control\Nls\CustomLocale\bg-BG","NAME NOT FOUND","Length: 532"
"02.9760827","RegCloseKey","HKLM\System\CurrentControlSet\Control\Nls\CustomLocale","SUCCESS",""
"02.9761052","RegOpenKey","HKLM\System\CurrentControlSet\Control\Nls\ExtendedLocale","REPARSE","Desired Access: Read"
"02.9761309","RegOpenKey","HKLM\System\CurrentControlSet\Control\Nls\ExtendedLocale","SUCCESS","Desired Access: Read"
"02.9761502","RegQueryValue","HKLM\System\CurrentControlSet\Control\Nls\ExtendedLocale\bg-BG","NAME NOT FOUND","Length: 532"
"02.9761688","RegCloseKey","HKLM\System\CurrentControlSet\Control\Nls\ExtendedLocale","SUCCESS",""
No events for 388 ms:
"03.0988152","RegCloseKey","HKLM\System\CurrentControlSet\Services\Tcpip6\Parameters\Winsock","SUCCESS",""
"03.4869332","Thread Create","","SUCCESS","Thread ID: 2036"

Does HikariCP support a command timeout in a Spring Boot application, similar to C#?

I am using the Hikari connection pool in my Spring Boot application. I have enabled connectionTimeout using the following configuration:
spring.datasource.hikari.connectionTimeout: 30000
If I increase the number of concurrent users, I get the following exception in the logs:
Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection;
nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
I am perfectly fine with the above exception; I can increase the number of connections. But my concern is that a few endpoints take more than 2 minutes to respond. Those endpoints get a DB connection from the pool but take a long time to process. Is there a setting where I can specify a timeout so that if a database operation takes more than a certain time (say 40 seconds) an SQL exception is thrown, similar to the command timeout in C#?
application.properties
# A list of all Hikari parameters with a good explanation is available on https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby
# This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. Default: same as maximumPoolSize
spring.datasource.hikari.minimumIdle: 10
# This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections. Basically this value will determine the maximum number of actual connections to the database backend.
# Default: 10
spring.datasource.hikari.maximumPoolSize: 20
#This property controls the maximum number of milliseconds that a client (that's you) will wait for a connection from the pool. If this time is exceeded without a connection becoming available, a SQLException will be thrown.
#Lowest acceptable connection timeout is 250 ms. Default: 30000 (30 seconds)
spring.datasource.hikari.connectionTimeout: 30000
# This property controls the maximum amount of time that a connection is allowed to sit idle in the pool. This setting only applies when minimumIdle is defined to be less than maximumPoolSize
# Default: 600000 (10 minutes)
spring.datasource.hikari.idleTimeout: 600000
# This property controls the maximum lifetime of a connection in the pool. An in-use connection will never be retired, only when it is closed will it then be removed.
# Default: 1800000 (30 minutes)
spring.datasource.hikari.maxLifetime: 1800000
# This property sets a SQL statement that will be executed after every new connection creation before adding it to the pool. Default: none
spring.datasource.hikari.connectionInitSql: SELECT 1 FROM DUAL
As explained in this comment, it seems the connectionTimeout value is used as the connection acquisition timeout.
You can try setting it to a value higher than 30 seconds and see if that helps.
The Druid connection pool has a setting that lets you configure a query timeout:
spring.datasource.druid.query-timeout=10000
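HikariCP itself does not expose a per-statement timeout; its connectionTimeout only covers acquiring a connection from the pool. The closest equivalent to C#'s CommandTimeout is a JDBC-level query timeout, set per statement (or via Spring's jdbcTemplate.setQueryTimeout(40)). A minimal sketch, assuming a Hikari-backed DataSource and a placeholder query:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class QueryWithTimeout {
    // dataSource would be the Hikari-backed DataSource managed by Spring Boot
    static void runWithTimeout(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.setQueryTimeout(40); // seconds; an SQLTimeoutException is thrown if exceeded
            try (ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) { // placeholder query
                while (rs.next()) {
                    // process results
                }
            }
        }
    }
}

Whether the timeout is actually enforced depends on the JDBC driver and database, so it is worth verifying against your backend.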

Samples failing when increasing the number of users and decreasing the Ramp-up Period

For a JMeter script, when I use:
Number of Threads = 10
Ramp-up Period = 40
Loop Count = 1
Then 6 out of 40 samples failed.
When I increase the Ramp-up Period to 60, all the samples pass.
For the failed requests, the response code returned is 522:
Sampler result
Thread Name: Liberty Insight 1-4
Sample Start: 2018-02-23 20:43:12 IST
Load time: 1
Connect Time: 0
Latency: 1
Size in bytes: 112
Sent bytes:584
Headers size in bytes: 112
Body size in bytes: 0
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""):
Response code: 522
Response message:
Response headers:
HTTP/1.1 522
Server: nginx
Date: Fri, 23 Feb 2018 15:13:12 GMT
Content-Length: 0
Connection: keep-alive
HTTPSampleResult fields:
ContentType:
DataEncoding: null
I am unable to figure out the reason for this behaviour. Any pointers as to what could cause it?
If you choose Ramp-up Period = 40 with 10 threads, JMeter starts a new thread roughly every 4 seconds (with a 60-second ramp-up, one every 6 seconds), so the request rate against the server is higher in the failing case.
When you use Cloudflare, one of its features is protecting the origin server from overload.
There are a few main causes of Error 522 (source: cloudflare.com):
The origin server was too overloaded to respond.
The origin web server has a firewall that is blocking the requests, or packets are being dropped within the host's network.
If you need to load test your server, use a route that bypasses Cloudflare; consult your IT department about such an option.
If that is not possible, reduce the transactions-per-second rate.
Ensure that the origin server isn't overloaded. If it is, it could be dropping requests.

WebSphere MQ Connection Tuning

I have an application which uses MDBs, activation specifications and queue connection factories to get/put messages from WMQ. The application expects a max load of 80 tps. Both WebSphere Application Server and WMQ are clustered, and each application server connects to a separate host and channel. The application's onMessage method is implemented so that both the session and the connection are closed after a message is consumed and the response is sent (a minimal sketch of this pattern follows below).
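A minimal sketch of that close-in-onMessage pattern (connectionFactory and replyQueue are placeholders for the resources injected in the real application, not the actual code):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class ResponderBean implements MessageListener {
    private ConnectionFactory connectionFactory; // injected queue connection factory (placeholder)
    private Queue replyQueue;                    // injected reply destination (placeholder)

    @Override
    public void onMessage(Message message) {
        Connection connection = null;
        Session session = null;
        try {
            connection = connectionFactory.createConnection();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(replyQueue);
            producer.send(session.createTextMessage("response")); // send the reply
        } catch (JMSException e) {
            // log / handle
        } finally {
            // close session and connection after the message is processed
            try { if (session != null) session.close(); } catch (JMSException ignored) { }
            try { if (connection != null) connection.close(); } catch (JMSException ignored) { }
        }
    }
}

Note that, as the answer below explains, close() here returns the resources to the WAS free pools rather than closing the underlying MQ channel instance.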
As per our configuration, WAS is at version 8.5 and the IBM MQ queue manager is at version 7. Max server sessions for the activation specification is set to 40 for each node, max connection count in the Connection Factory to 40 for each node, and max sessions in the session pool of the connection factory to 10.
Now, on peak load we expect at most 80 MQ channel instances, but our investigation shows the count goes above 200, which causes a problem because the max instance limit is reached.
Is this happening because the max sessions in the session pool of the connection factory is set to 10?
Is it possible that, even though we are closing the session and connection in onMessage, one connection can still have more than one session? If that is the case, is it wise to set this property to 1?
Also, could there be some property set on the WMQ side that could cause this increase in MQ channel instances?
There could be known problems at specific versions of WAS and MQ that change the behavior, but in general it should work as described below.
IBM has a nice Technote "TCP/IP Connection usage between WebSphere Application Server V7 and V8, and WebSphere MQ V7 (and later) explained" which goes into detail on this subject.
You do not mention what the SVRCONN channel's SHARECNV is set to; as shown below, this will impact the number of channel instances observed. I'll assume the default of 10 for the calculations.
Note that the block quotes below are from the Technote.
we have set max server sessions for act spec to 40 for each node
The link above states:
Maximum number of conversations = Maximum server sessions + 1
Maximum number of conversations = 40 + 1 = 41
The link also states:
Maximum number of TCP/IP channel instances = Maximum number of conversations / SHARECNV for the channel being used
Maximum number of TCP/IP channel instances = 41 / 10 = 5 (rounded up to nearest connection)
max connection count in Connection Factory to 40 for each node
max session in session pool of connection factory to 10.
Maximum number of conversations = Connection Pool Maximum Connections + (Connection Pool Maximum Connections * Session Pool Maximum Connections)
Maximum number of conversations = 40 + (40 * 10) = 440
Maximum number of TCP/IP channel instances = Maximum number of conversations / SHARECNV for the channel being used
Maximum number of TCP/IP channel instances = 440 / 10 = 44
If your MQ SVRCONN channel's SHARECNV was set to 10, then you should have no more than 49 channel instances for each channel based on each node connecting to a separate channel.
If you are hitting 200 channel instances I would suspect your SHARECNV is less than 10. If it were 1, the maximum number of channel instances WAS would try to create would go up to 481, which would be limited to 200 by the channel's MAXINST.
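Putting the Technote formulas together with the numbers from this question, a small sketch of the arithmetic (just the calculation, not an IBM API):

public class ChannelInstanceEstimate {

    // Channel instances = conversations / SHARECNV, rounded up
    static int channelInstances(int conversations, int sharecnv) {
        return (int) Math.ceil((double) conversations / sharecnv);
    }

    public static void main(String[] args) {
        int maxServerSessions = 40; // activation specification, per node
        int connPoolMax = 40;       // connection factory max connections
        int sessionPoolMax = 10;    // session pool max connections

        int actSpecConversations = maxServerSessions + 1;                 // 41
        int cfConversations = connPoolMax + connPoolMax * sessionPoolMax; // 440

        System.out.println(channelInstances(actSpecConversations, 10));   // 5
        System.out.println(channelInstances(cfConversations, 10));        // 44
        System.out.println(channelInstances(actSpecConversations + cfConversations, 1)); // 481
    }
}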
After an application has finished with a JMS Connection and closed it off, it is moved from the Active Pool to the Free Pool, where it is available for reuse. The Connection Pool property Unused timeout defines how long a JMS Connection will stay in the Free Pool before it is disconnected. This property has the default value of 1800 seconds, which is 30 minutes.
Every JMS Connection that is created from a WebSphere MQ messaging provider Connection Factory has an associated JMS Session Pool, which work in the same way as Connection Pools. The maximum number of JMS Sessions that can be created from a single JMS Connection is determined by the Connection Factory Session Pool property Maximum connections. The default value of this property is 10.
A conversation is started when a JMS Session is first created, and will remain active until the JMS Session is closed because it has remained in the Free Pool for longer than the value of the Session Pool's Unused timeout property.
When your app closes the session and connection in onMessage, both are simply moved to their free pools for reuse; the MQ channel instance will not be closed until the respective unused timeout is hit.
If you want to keep your maximum channel count below 200, you could tune your Session Pool Maximum Connections to 1, which combined with your activation specifications and a SHARECNV(1) would max out at 121 channel instances.
You can also increase the channel's SHARECNV value, which divides the number of channel instances by that factor.
It is possible that your connections or sessions are not getting closed properly and you have a "leak".

Response code: Non HTTP response code: java.net.ConnectException Response message: Non HTTP response message: Connection timed out: connect

I am executing a script using JMeter for load testing. I get errors partway through the run; for example, if I apply a load of 500 users, the threads run successfully up to roughly 250 users, then a connection timed out error occurs. After that some threads run successfully again, and then the error returns.
The sampler result is as follows:
Thread Name: Thread Group 1-1274
Sample Start: 2016-09-15 15:02:13 IST
Load time: 21004
Connect Time: 21004
Latency: 0
Size in bytes: 2206
Headers size in bytes: 0
Body size in bytes: 2206
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""): text
Response code: Non HTTP response code: java.net.ConnectException
Response message: Non HTTP response message: Connection timed out: connect
Response headers:
HTTPSampleResult fields:
ContentType:
DataEncoding: null
I need to stress the server to its breaking point.
Can anyone help me with this?
The issue might be due to the server hanging, so check the health of the server components.
Otherwise you could be consuming all ephemeral ports, a limit you can extend by tuning the TCP stack.
As per Kiril S.'s answer:
On Windows:
Follow this guide to check if ports might be an issue:
https://msdn.microsoft.com/en-us/library/aa560610(v=bts.20).aspx.
Check 2 parameters:
MaxUserPort: increase it to its maximum of 65534.
TcpTimedWaitDelay: defines how long ports stay in the TIME_WAIT state after use. Change that value to 30 (seconds).
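A sketch of how those two values could be set from an elevated command prompt (standard TCP/IP parameters key; a reboot is required for the changes to take effect):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f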
On Linux:
Set in sysctl.conf:
net.ipv4.ip_local_port_range=1025 65000
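To apply the range without a reboot (assuming root), it can also be set at runtime, or the whole file reloaded:

sysctl -w net.ipv4.ip_local_port_range="1025 65000"
sysctl -p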
