I have set up a simple proxy on an EC2 instance using Tinyproxy (default config, listening for and allowing all incoming connections). This works well. If, for debugging, I put the IP address and port into my browser's proxy settings, I can browse through the proxy without any issues. Everything works. However, if I create an EC2 load balancer in front of the instance (making sure to forward the HTTP port correctly), it just hangs when I browse through the load balancer's IP. This seems like a puzzle to me. The instance is running, the load balancer reports "in service", and going around the load balancer works, but going through it just hangs. What am I missing, and where should I look for the error?
UPDATE
I have now had a look at the Tinyproxy logs. When I try to access google.com directly through the instance's proxy, I see logs like this:
CONNECT Apr 30 20:41:33 [1862]: Request (file descriptor 6): GET http://google.com/ HTTP/1.1
INFO Apr 30 20:41:33 [1862]: No upstream proxy for google.com
CONNECT Apr 30 20:41:33 [1862]: Established connection to host "google.com" using file descriptor 7.
INFO Apr 30 20:41:33 [1862]: Closed connection between local client (fd:6) and remote client (fd:7)
CONNECT Apr 30 20:41:33 [1901]: Connect (file descriptor 6): x1-6-84-1b-ADDJF-20-07-92.fsdfe [430.12327.65117.615]
CONNECT Apr 30 20:41:33 [1901]: Request (file descriptor 6): GET http://www.google.ie/?gws_rd=cr&ei=_V9hU8DeFMTpPJjygIgC HTTP/1.1
INFO Apr 30 20:41:33 [1901]: No upstream proxy for www.google.ie
CONNECT Apr 30 20:41:33 [1901]: Established connection to host "www.google.ie" using file descriptor 7.
However, if I try to access Google through the load balancer, which then forwards the request to the instance, I see logs like this:
CONNECT Apr 30 20:42:54 [1860]: Request (file descriptor 6): GET / HTTP/1.1
CONNECT Apr 30 20:42:54 [1869]: Connect (file descriptor 6): ip-432-2383245-53.eu-west-1.compute.internal [10.238.155.237]
CONNECT Apr 30 20:42:54 [2037]: Connect (file descriptor 6): ip-432-2383245-53.eu-west-1.compute.internal [10.238.155.237]
INFO Apr 30 20:42:54 [1860]: process_request: trans Host GET http://google.com:8888/ for 6
INFO Apr 30 20:42:54 [1860]: No upstream proxy for google.com
CONNECT Apr 30 20:43:12 [1861]: Connect (file descriptor 6): ip-432-2383245-53.eu-west-1.compute.internal [1230.23845.515.2537]
CONNECT Apr 30 20:43:12 [2035]: Connect (file descriptor 6): ip-432-2383245-53.eu-west-1.compute.internal [143.238.12345.117]
ERROR Apr 30 20:43:12 [2035]: read_request_line: Client (file descriptor: 6) closed socket before read.
ERROR Apr 30 20:43:12 [1861]: read_request_line: Client (file descriptor: 6) closed socket before read.
ERROR Apr 30 20:43:12 [2035]: Error reading readble client_fd 6
ERROR Apr 30 20:43:12 [1861]: Error reading readble client_fd 6
WARNING Apr 30 20:43:12 [2035]: Could not retrieve request entity
WARNING Apr 30 20:43:12 [1861]: Could not retrieve request entity
From what I can see, the ELB is trying to send the request through port 8888.
You can get ELB access logs. These access logs can help you determine the time taken for a request at different stages, e.g.:
request_processing_time: Total time elapsed (in seconds) from the time the load balancer receives the request until it sends the request to a registered instance.
backend_processing_time: Total time elapsed (in seconds) from the time the load balancer sends the request to a registered instance until the instance begins sending the response headers.
response_processing_time: Total time elapsed (in seconds) from the time the load balancer receives the response headers from the registered instance until it starts sending the response to the client. This processing time includes both the queuing time at the load balancer and the connection acquisition time from the load balancer to the backend.
...and a lot more information. You need to configure access logging first. Please follow the articles below to get a better understanding of ELB access logs:
Access Logs for Elastic Load Balancers
Access Logs
These logs may or may not solve your problem, but they are certainly a good starting point. Besides, you can always check with AWS Technical Support for a more in-depth analysis.
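As a minimal sketch (not from the question, and assuming the documented field order of Classic ELB access log lines: timestamp, ELB name, client:port, backend:port, then the three timing fields), those timings can be extracted like this:

def elb_timings(line):
    # Split a Classic ELB access log line on spaces; the three timing fields
    # follow the client and backend address fields (indexes are assumptions).
    fields = line.split(" ")
    return {
        "request_processing_time": float(fields[4]),
        "backend_processing_time": float(fields[5]),
        "response_processing_time": float(fields[6]),
    }

A value of -1 in any of these fields usually means the load balancer could not complete that phase (for example, the backend closed the connection), which is a useful signal when requests appear to hang.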
It sounds like you're trying to use ELB in HTTP mode as, essentially, something resembling a forward proxy, or at least a gateway to a forward proxy that you're running behind it. That's not ELB's intended application, so it isn't surprising that it wouldn't work in that configuration.
ELB in HTTP listener mode is intended to be used as a reverse proxy in front of a web endpoint/origin server.
Configuring your ELB listener to use TCP mode instead of HTTP mode should allow your intended configuration to work, but this is somewhat outside the optimum application of ELB.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
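For illustration, here is a rough boto3 sketch of switching the listener to TCP mode, assuming a Classic ELB; the load balancer name, region, and port 8888 are placeholders, and the same change can be made in the console:

import boto3

elb = boto3.client("elb", region_name="eu-west-1")  # Classic ELB API; placeholder region

# Drop the existing HTTP listener on the proxy port...
elb.delete_load_balancer_listeners(
    LoadBalancerName="tinyproxy-elb",  # placeholder name
    LoadBalancerPorts=[8888],
)

# ...and recreate it as a plain TCP listener, so the ELB forwards the bytes
# unmodified instead of parsing and rewriting the HTTP request before it
# reaches Tinyproxy.
elb.create_load_balancer_listeners(
    LoadBalancerName="tinyproxy-elb",
    Listeners=[{
        "Protocol": "TCP",
        "LoadBalancerPort": 8888,
        "InstanceProtocol": "TCP",
        "InstancePort": 8888,
    }],
)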
Related
I am using JMeter with the MQTT JMeter Plugin to do load testing.
Here is my use case:
Start 8000 users (threads) over 30 minutes
Each user sends one MQTT connect message
Each user runs 720 loops, publishing a message with a 5-second timer
Here is my JMeter test plan (screenshots of the thread group, loop controller, and timer).
After starting JMeter, everything is good:
But after 20 minutes, I am getting many errors for my publish messages:
Here is the error message:
My MQTT server is up and there is no problem with it.
JMeter logs:
Aug 01, 2021 3:04:33 PM java.util.Optional ifPresent
INFO: MQTT client is not connected.
Aug 01, 2021 3:04:33 PM net.xmeter.samplers.PubSampler sample
INFO: ** [clientId: ps303411a2200c4e1ca4f34, topic: /test/, payload: 1627830273593ts Publish failed for connection HiveMQTTConnection{clientId='ps303411a2200c4e1ca4f34'}.
Aug 01, 2021 3:04:33 PM java.util.Optional ifPresent
INFO: MQTT client is not connected.
What is the problem? Is it related to my JMeter test plan, or to my local machine? I am using an EC2 x3 large machine to run JMeter in the background.
Since your ramp-up period is 1800 seconds, you have roughly 5.3k threads running by the 20th minute, which, I think, is where your server starts to saturate. The 501 return code may indicate that some kind of fallback mechanism could give more details about the error, but I'm not sure...
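A back-of-the-envelope check of that number (plain arithmetic, not part of the test plan itself):

# Threads started by a given elapsed time, assuming JMeter's linear ramp-up.
total_threads = 8000
ramp_up_s = 30 * 60   # 30-minute ramp-up
elapsed_s = 20 * 60   # errors start around the 20-minute mark

active = min(total_threads, total_threads * elapsed_s // ramp_up_s)
print(active)         # ~5333 threads when the publish errors begin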
MQTT client is not connected.
It indicates that the connection is down. If you don't see anything suspicious in the JMeter logs, it most probably means that your server is overloaded and cannot handle that many concurrent connections/messages.
Use a combination of listeners like Active Threads Over Time and Response Codes per Second to see the exact number of users at which the problems start occurring.
Monitor resource usage (CPU, RAM, network sockets, disk I/O, etc.) to ensure that the MQTT server has enough headroom to operate; you can use the JMeter PerfMon Plugin for this.
Check your server logs
Increase JMeter logging verbosity for the MQTT plugin by adding the following line to the log4j2.xml file:
<Logger name="net.xmeter" level="debug" />
I'm creating a VPN using strongSwan. It's my first time using this tool. I followed a tutorial to set it up, but I've hit a blocker: the peer connection times out. The status is 0 up, 1 connecting.
I have tried on different servers; the same issue happens.
ipsec.conf
conn conec-example
    authby=secret
    left=%defaultroute
    leftid=<public_IP_1>
    leftsubnet=<private_ip_1>/20
    right=<public_IP_2>
    rightsubnet=<private_ip_2>/20
    ike=aes256-sha2_256-modp1024!
    esp=aes256-sha2_256!
    keyingtries=0
    ikelifetime=1h
    lifetime=8h
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart
    auto=start
ipsec.secrets
public_IP_1 public_IP_2 : PSK "randomprivatesharedkey"
Here is part of the logs:
Aug 18 17:29:01 ip-x charon: 10[IKE] retransmit 2 of request with message ID 0
Aug 18 17:29:01 ip-x charon: 10[NET] sending packet: from x.x[500] to x.x.x.x[500] (334 bytes)
Aug 18 17:30:19 ip-x charon: 13[IKE] retransmit 5 of request with message ID 0
Aug 18 17:30:19 ip-x charon: 13[NET] sending packet: from x.x[500] to x.x.x.129[500] (334 bytes)
Aug 18 17:31:35 charon: 16[IKE] giving up after 5 retransmits
Aug 18 17:31:35 charon: 16[IKE] peer not responding, trying again (2/0)
I expected a successful connection after setting this up, but had no success. How can I resolve this? Any ideas?
Based on the log excerpt, strongSwan is unable to reach the other peer.
There is too little information to provide an exact answer; the topology and addressing plan, the relevant AWS security group settings, and both VPN peers' configurations would be needed.
Still, let me offer a few hints on what to do in order to connect successfully via VPN:
UDP ports 500 and 4500 must be open on both VPN peers. In AWS, this means the security group associated with the EC2 instance running strongSwan must contain explicit rules allowing incoming UDP traffic on ports 500 and 4500 (see the sketch after these hints). An EC2 instance is always behind NAT, so ESP/AH packets will be encapsulated in UDP packets.
Any firewall on both VPN peers has to allow the UDP traffic mentioned in the previous point.
Beware that the UDP encapsulation affects the MTU of the traffic going through the VPN connection.
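As a rough sketch of the first hint (the security group ID, region, and peer address are placeholders; the same rules can be added in the console), the ingress rules could be created with boto3 like this:

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

# Allow IKE (UDP 500) and NAT-T / UDP-encapsulated ESP (UDP 4500)
# from the remote VPN peer's public address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {"IpProtocol": "udp", "FromPort": 500, "ToPort": 500,
         "IpRanges": [{"CidrIp": "<public_IP_2>/32"}]},
        {"IpProtocol": "udp", "FromPort": 4500, "ToPort": 4500,
         "IpRanges": [{"CidrIp": "<public_IP_2>/32"}]},
    ],
)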
I have an AWS EC2 machine running a Laravel 5.2 application that connects to a PostgreSQL 9.6 database running in RDS. While most of the connections work, some of them are rejected while being established, which causes a timeout and consequently an error in my API. I don't know what is causing them to be rejected. It is also very random: when it happens, it may be at any API endpoint, and within the endpoint at any query.
When the timeout is handled by PHP, it shows a message like:
SQLSTATE[08006] [7] timeout expired (SQL: ...)
Sometimes Nginx handles the timeout and replies with a 504 error. When Nginx handles the timeout, I get an error like:
2019/04/24 09:48:18 [error] 20657#20657: *3236 upstream timed out (110: Connection timed out) while reading response header from upstream, client: {client-ip-here}, server: {my-url-here}, request: "GET {my-endpoint-here} HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "{}", referrer: "https://app.cartoriovirtual.com/"
All usage charts on RDS and EC2 seem OK; I have plenty of RAM, storage, CPU, and available connections for RDS. I also checked the inner VPC flows and they seem alright; however, I have many IPs (listed as attackers) scanning my network interfaces, most of them being rejected. Some (to port 22) are accepted but stopped at authentication; I use a .pem key file for auth.
The RDS network interface only accepts requests from machines inside the VPC. In its logs, every 5 minutes there is a checkpoint like this:
2019-04-25 01:05:29 UTC::#:[22595]:LOG: checkpoint starting: time
2019-04-25 01:05:34 UTC::#:[22595]:LOG: checkpoint complete: wrote 43 buffers (0.1%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=4.393 s, sync=0.001 s, total=4.404 s; sync files=19, longest=0.001 s, average=0.000 s; distance=16515 kB, estimate=16515 kB
Does anyone have tips on how to find a solution? I have looked at every log that came to mind and fixed a few small issues, but the error persists. I am running out of ideas.
I created a JMeter test plan with 2000 threads and a 10-second ramp-up time.
When I ran the test against the Apache server, some of my test results gave a connection refused error.
The connection refused error occurred after 21 seconds.
So, my question is: does this 21 seconds originate from JMeter or from the Apache web server?
As far as I know, the Apache server's default timeout is 30 seconds, and I didn't change that.
This means your Apache server is refusing connections, which means it could be overloaded or misconfigured.
I was wondering if someone could point me in the right direction.
Right now our IHS / WebSphere server is unable to handle more than 170 concurrent users.
We have tuned IHS, the WebSphere thread pools, the datasource properties, the JVM heap, and kernel parameters.
Under heavy load we are seeing this in the IHS plugin log:
[Mon Jun 27 10:42:15 2011] 00e90070 00002f30 - ERROR: ws_common: websphereGetStream: Failed to connect to app server on host 'XXXXXX', OS err=79
[Mon Jun 27 10:42:15 2011] 00e90070 00002f30 - ERROR: ws_common: websphereExecute: Failed to create the stream
[Mon Jun 27 10:42:15 2011] 00e90070 00002f30 - ERROR: ws_common: websphereHandleRequest: Failed to execute the transaction to 'XXXXXXNode01_YYYYYY'on host 'XXXXXX'; will try another one
Error 79 is connection refused! The strange thing is that both IHS and WebSphere are on the same server...
Checking the thread pools in WAS, we don't see them reaching their maximum. Monitoring the heap, it seems OK...
Any ideas?
What is the maximum number of concurrent connections specified at the web containers in the WAS cluster?
Can you make a direct call to XXXXXXNode01_YYYYYY (bypassing the IHS) via your browser when this error occurs? If it still gives you errors, that simply validates the message provided by the plug-in.
HTH
Manglu