According to the official docs - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html
only an inbound TCP rule has to be added to the security group.
But how does the response get out? What protocol and port does the response use when I type my commands in the CLI terminal?
Or do I need only one inbound rule to simply ESTABLISH the connection, and it then works both ways - sending and receiving requests and responses over SSH through this one inbound rule?
Security Groups are stateful. They track the originating request and automatically allow responses. Per the official documentation:
Security groups are stateful—if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules. For more information, see Security group connection tracking.
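Concretely, statefulness means one ingress rule is enough for SSH; the return traffic of the same connection needs no egress rule. A sketch with the AWS CLI (the group ID and CIDR below are illustrative placeholders, not real values):

```shell
# Allow inbound SSH (TCP/22) from a single workstation IP.
# sg-0123456789abcdef0 and 203.0.113.10/32 are placeholder values.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.10/32
# No matching outbound rule is needed for the SSH responses:
# connection tracking lets the reply packets of this session out.
```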
I can't clearly understand how HAProxy performs health checks in HTTP mode.
I need to resend an HTTP request to another server (the next in the list of backend servers) if the first one returned an error status code (for example, 503).
I need the following behaviour from HAProxy:
1) I receive some HTTP request
2) I send it to the first server
3) If I get a 503 (or some other error code), this HTTP request must be sent to the next server
4) If it returns a 200 code, the next HTTP requests of this TCP session go to the first server
I know it's easy to implement in nginx (using proxy_next_upstream, I suppose), but I need to use HAProxy, because the software I need to connect to works at layer 4 and I can't change it, so I need to keep groups of HTTP messages in the same TCP session. I can keep them in the same session in HAProxy, but not in nginx.
I know about httpchk and observe, but they are not what I need.
The first allows me to send some fixed HTTP request, not the HTTP request I received (I need to analyse the HTTP traffic to decide which HTTP status I will answer).
The second marks my servers as dead and doesn't send messages to them anymore, but I need these messages to be analysed.
I really need behaviour like nginx's, but with the ability to keep the HTTP messages in the same TCP session.
Probably there is some nice way to implement it with ACLs?
Could anyone please give me a detailed explanation of how HAProxy handles load balancing in HTTP mode, or offer some solution to my problem?
UPDATE:
For example, when I tried to do it with observe, I used this configuration:
global
    log 127.0.0.1 local0
    maxconn 10000
    user haproxy
    group haproxy
    daemon
defaults
    log global
    option dontlognull
    retries 3
    maxconn 10000
    contimeout 10000
    clitimeout 50000
    srvtimeout 50000
listen zti 127.0.0.1:1111
    mode http
    balance roundrobin
    server zti_1 127.0.0.1:4444 check observe layer7 error-limit 1 on-error mark-down
    server zti_2 127.0.0.1:5555 check observe layer7 error-limit 1 on-error mark-down
Thanks,
Dmitry
You can use the option httpchk.
When "option httpchk" is specified, a complete HTTP request is sent
once the TCP connection is established, and responses 2xx and 3xx are
considered valid, while all other ones indicate a server failure,
including the lack of any response.
listen zti 127.0.0.1:1111
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.0
    server zti_1 127.0.0.1:4444 check inter 5s rise 1 fall 2
    server zti_2 127.0.0.1:5555 check inter 5s rise 1 fall 2
Source: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20httpchk
All the examples for node-http-proxy show a {target: <URL>} option, but I don't want to proxy to a single target. How do I set it up for outbound requests (to any URL)?
Since node-http-proxy seems to be a proxy for inbound connections to a server farm, not for outbound connections to arbitrary websites, I found another tool instead:
http://newspaint.wordpress.com/2012/11/05/node-js-http-and-https-proxy/
This allows me to customize the requests and responses as needed.
One of my clients has received a mail saying that PayPal is upgrading their policy. When I did some R&D, I found a similar thread on the Magento forum, but no one has replied so far. Following is the link to that thread:
PAYPAL SERVICE UPGRADES
So my question is: what modifications do I have to make to the current configuration, or are there huge changes required in the current Magento payment gateway code?
Any help or suggestion is appreciated.
The major thrust of the quoted document is "don't use hard-coded IP addresses".
Magento uses the server name, so DNS will resolve automatically to the newly assigned netblocks.
Your hosting provider may have to modify their firewalls if they're filtering traffic, but it's probably unlikely. They also need to be running a web server that uses HTTP 1.1.
Viewing your web server's response headers will tell you that; look for HTTP/1.1 200 OK.
Quoted PayPal notice of service upgrade
If your site is:
Calling our APIs with a hardcoded PayPal API endpoint IP address, rather than using DNS resolution: Impact of upgrade: API calls will time out or you will encounter an internal error from your system.
You need to: Use DNS resolution to access our API endpoints and/or open your firewall to the new IP addresses which will be communicated later.
Using HTTP methods other than GET, POST, DELETE and PUT: Impact of upgrade: API calls will return HTTP/1.0 400 Bad Request or HTTP Error 405 Method not allowed.
You need to: Send the API requests using one of the allowed methods. Heartbeat calls using the HEAD method won’t be allowed.
Using the HTTP 1.0 protocol: Impact of upgrade: API calls will return HTTP/1.0 400 Bad Request.
You need to: Update your code to HTTP 1.1 and include the Host header in the API request.
Needing firewall changes to allow new IP addresses: Impact of upgrade: API calls will error out if your system responsible for making API calls to PayPal is behind a firewall that uses Access Control List (ACL) rules and limits outbound traffic to a limited number of IP addresses.
You need to: Update your firewall ACL to allow outbound access to a new set of IP addresses we will be publishing. Test your integration on Sandbox (see the IP addresses for Sandbox API endpoints). The list of new IP addresses for our Live API endpoints will be posted here when available in January.
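On the HTTP 1.1 point, the required change is small: the request line must say HTTP/1.1 and a Host header must be present. A hedged sketch of what a compliant raw request looks like (the host, path, and NVP body below are illustrative placeholders, not values from the notice):

```python
def build_api_request(host: str, path: str, body: str) -> str:
    # host/path/body are hypothetical; check PayPal's documentation for
    # the actual endpoints and parameters. The two essentials for the
    # upgrade are the HTTP/1.1 request line and the Host header.
    return (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
        f"{body}"
    )
```

Most HTTP clients (cURL, and the Zend/Varien HTTP clients Magento uses) already speak HTTP/1.1 and set Host automatically, so this mainly matters for hand-rolled integrations.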
I am writing a small HTTP server under Windows. Access to the server is secured with the usual HTTP auth mechanisms (I use the Windows HTTP API). But I want to have no auth for localhost, i.e. local users should be able to access the server without a password.
The question is: is that safe? More precisely, is it safe to trust the remote address of a TCP connection without further auth?
Assume for a moment that an adversary (Charly) is trying to send a single malicious HTTP GET to my server. Furthermore, assume that all Windows/router firewalls ingress checks for localhost addresses let source addresses of 127.0.0.1 and [::1] pass.
So the remote address could be spoofed, but for a TCP connection we need a full three-way handshake. Thus, a SYN-ACK is sent by Windows upon reception of the SYN. This SYN-ACK goes nowhere, but Charly might just send an ACK shortly afterwards. This ACK would be accepted if it acknowledged the correct SEQ of the SYN-ACK. Afterwards, Charly can send the malicious payload, since he knows the correct TCP SEQ and ACK numbers.
So all security hinges on the unpredictability of Windows' outgoing TCP initial sequence number (ISN). I'm not sure how secure that is, i.e. how hard it is to predict the next session's ISN.
Any insight is appreciated.
In the scenario you are describing, an attacker wouldn't get any packets back from your web server. If you can use something like digest auth (where the server first sends the client a short random nonce string and the client then uses that nonce to create an authentication hash), you'd be fine.
If installing a firewall on the system is an option, you could use a simple rule like "don't accept packets with source IP 127.0.0.1 from any interface other than loopback".
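The nonce idea works here precisely because a blind spoofer never sees the server's replies: the nonce goes to the real 127.0.0.1, not to Charly, so he cannot compute the response. An illustrative sketch (the pre-shared key and the HMAC-SHA256 choice are assumptions, not part of any specific digest scheme):

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"example-pre-shared-secret"  # assumption: known to both sides

def issue_nonce():
    # Server sends this to the client first. A spoofed-source attacker
    # never receives it, because it travels to the genuine 127.0.0.1.
    return secrets.token_hex(16)

def client_response(nonce, key=SHARED_KEY):
    # Client proves it saw the nonce (and knows the key).
    return hmac.new(key, nonce.encode(), hashlib.sha256).hexdigest()

def server_verify(nonce, response, key=SHARED_KEY):
    expected = hmac.new(key, nonce.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids a timing side channel.
    return hmac.compare_digest(expected, response)
```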
Here is the thing:
We've implemented a C++ RESTful API server with a built-in HTTP parser and no standard HTTP server like Apache or anything of the kind
It has been in use for several months on Amazon infrastructure, using both plain and SSL communications, and no problems related to the Amazon infrastructure have been identified
We are deploying our first backend using Amazon ELB
Amazon ELB has a customizable health check system but also an automatic one, as stated here
We've found no documentation of what data is sent by the health check system
The backend simply hangs on the socket read instruction and, eventually, the connection is closed
I'm not looking for a solution to the problem, since the backend is not based on a standard web server; I just want to know what kind of message is sent by the ELB health check system, since we've found no documentation about this anywhere.
Help is much appreciated. Thank you.
Amazon ELB has a customizable health check system but also an automatic one, as stated here
With customizable you are presumably referring to the health check configurable via the AWS Management Console (see Configure Health Check Settings) or via the API (see ConfigureHealthCheck).
The requirements to pass health checks configured this way are outlined in field Target of the HealthCheck data type documentation:
Specifies the instance being checked. The protocol is either TCP,
HTTP, HTTPS, or SSL. The range of valid ports is one (1) through
65535.
Note
TCP is the default, specified as a TCP: port pair, for example
"TCP:5000". In this case a healthcheck simply attempts to open a TCP
connection to the instance on the specified port. Failure to connect
within the configured timeout is considered unhealthy.
SSL is also specified as SSL: port pair, for example, SSL:5000.
For HTTP or HTTPS protocol, the situation is different. You have to
include a ping path in the string. HTTP is specified as a
HTTP:port;/;PathToPing; grouping, for example
"HTTP:80/weather/us/wa/seattle". In this case, a HTTP GET request is
issued to the instance on the given port and path. Any answer other
than "200 OK" within the timeout period is considered unhealthy.
The total length of the HTTP ping target needs to be 1024 16-bit
Unicode characters or less.
[emphasis mine]
With automatic you are presumably referring to the health check described in paragraph Cause within Why is the health check URL different from the URL displayed in API and Console?:
In addition to the health check you configure for your load balancer,
a second health check is performed by the service to protect against
potential side-effects caused by instances being terminated without
being deregistered. To perform this check, the load balancer opens a
TCP connection on the same port that the health check is configured to
use, and then closes the connection after the health check is
completed. [emphasis mine]
The paragraph Solution clarifies that no payload is sent here, i.e. it is similar to the non-HTTP/HTTPS method described for the configurable health check above:
This extra health check does not affect the performance of your
application because it is not sending any data to your back-end
instances. You cannot disable or turn off this health check.
Summary / Solution
Assuming your RESTful API server with built-in HTTP parser is indeed supposed to serve HTTP only, you will need to handle two health checks:
The first one you configured yourself as an HTTP:port;/;PathToPing target - you'll receive an HTTP GET request and must answer with 200 OK within the specified timeout period to be considered healthy.
The second one is configured automatically by the service - it will open a TCP connection on the HTTP port configured above, send no data, and then close the connection after the health check is completed.
In conclusion it seems that your server might be behaving perfectly fine already and you are just irritated by the 2nd health check's behavior - does ELB actually consider your server to be unhealthy?
As far as I know, it's just an HTTP GET request looking for a 200 OK HTTP response.