I haven't been able to find a way for hours, so I'm trying to get help from Stack Overflow. First, I added the application I want to block to the inbound and outbound rules of the firewall, and traffic to and from the application was successfully blocked. However, I want to receive one specific kind of traffic from the application, so I added an additional allow rule. I am still not receiving the traffic I want from that IP. After looking into this in detail, it seems that the firewall gives block rules higher priority than allow rules. Do you know how to solve this problem? In short: allow only specific traffic for an application that has blocking rules applied.
Create inbound and outbound blocking rules for the application.
I want to receive only one specific kind of traffic for the application.
Create an allow rule for the IP the traffic is coming from.
But the allow rule still doesn't work, because the block rule has a higher priority than the allow rule.
How do I fix this, or otherwise receive only some of the traffic for that application?
I've been under too much stress trying to figure this out. Any advice would be appreciated. Thank you.
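For reference, here is roughly what I set up, sketched as netsh commands driven from Python (the application path, the peer IP, and the rule names are placeholders, not my real values):

    # Sketch of the rules described above, issued through netsh (Windows only).
    import subprocess

    APP = r"C:\ExampleApp\app.exe"      # placeholder path
    ALLOWED_IP = "203.0.113.10"         # placeholder for the wanted peer

    def rule(*args):
        subprocess.run(["netsh", "advfirewall", "firewall", "add", "rule", *args],
                       check=True)

    # Block the application in both directions...
    rule("name=BlockAppIn",  "dir=in",  "action=block", f"program={APP}")
    rule("name=BlockAppOut", "dir=out", "action=block", f"program={APP}")
    # ...then try to allow a single peer. This is the part that fails,
    # because Windows Firewall evaluates block rules ahead of allow rules.
    rule("name=AllowPeer", "dir=in", "action=allow",
         f"program={APP}", f"remoteip={ALLOWED_IP}")

Since block beats allow here, one workaround I have seen suggested is to narrow the block rule itself (scope its remote addresses so they exclude the peer I want) instead of relying on a separate allow rule, but I don't know if that is the intended solution.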
I'm investigating some issues with Stripe webhooks not reaching our test server.
According to their docs they submit requests from the following IPs: https://stripe.com/docs/ips#webhook-notifications
I have added these IPs to iptables:
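Roughly, the rules have this shape (sketched here via Python's subprocess; every address except 54.187.216.72 is a documentation-range placeholder, not Stripe's actual published list):

    # Illustrative whitelist-style rules: accept webhook traffic from a
    # list of source IPs. Only 54.187.216.72 is real here; the others are
    # placeholders standing in for the rest of the documented list.
    import subprocess

    STRIPE_IPS = ["54.187.216.72", "198.51.100.1", "198.51.100.2"]

    for ip in STRIPE_IPS:
        subprocess.run(
            ["iptables", "-A", "INPUT", "-p", "tcp", "-s", ip,
             "--dport", "443", "-j", "ACCEPT"],
            check=True,
        )
    # A single mistyped or missing -s address in a list like this would
    # produce exactly the symptom described: one IP matches, the others
    # time out if a later rule or the chain policy drops them.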
I'm not an iptables expert, but looking at this it seems that only 54.187.216.72 is being matched. Other requests from Stripe, which I assume come from other IPs, fail with a timeout error.
I can see only the working IP in my Apache logs. I think I can rule out ufw / firewall issues because I have also tried temporarily disabling them during testing.
My question: how do I investigate this issue further? Is my iptables setup correct? Is there anything else that could block IPs other than iptables and ufw?
Stripe could not tell me which IP was used on their requests.
I hope I'm providing the correct information here, if not please let me know!
Thanks a lot!
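On the "investigate further" part, here is a hedged sketch of the two checks worth running first, assuming root on the server (the port is a placeholder for wherever the webhooks arrive):

    import subprocess

    # 1. Per-rule packet/byte counters: if an ACCEPT rule for a Stripe IP
    #    stays at zero packets while a webhook fires, those packets never
    #    reached the filter table on this host.
    subprocess.run(["iptables", "-L", "INPUT", "-v", "-n", "--line-numbers"])

    # 2. Capture below the firewall: tcpdump taps packets before netfilter
    #    drops them, so "seen by tcpdump but missing from the Apache log"
    #    points at a local drop, while "not seen at all" points upstream.
    subprocess.run(["tcpdump", "-ni", "any",
                    "src", "54.187.216.72", "and", "tcp", "port", "443"])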
The problem is that you do not have information from network diagnostic tools and cannot get it. A timeout error in the Stripe dashboard does not necessarily mean that your server is blocking incoming requests or failing to respond. Such error messages are generated by the Stripe backend, not by network tools. Where are the packets lost? Are they really lost? It is quite possible that Stripe has strict time limits for resolving its request Promises and simply rejects them regardless of the result.
You can't run traceroute from their IPs. Those IPs don't respond to ping. You can talk to your hosting provider and get a "no problem here" response. You can turn off the firewall and still have errors. I'm pretty sure your filter rules are not the root of this issue.
P.S.: We had the same one. Without any filtering rules, our endpoint was accessible from around the world except from those IPs. The problem appeared without any changes to the configuration. The problem disappeared without any changes to the configuration. Neither Stripe nor the hosting provider found any problem.
I have several servers handling the same requests and several clients sending requests. The servers are ROUTER sockets, to keep/track the identity of the clients, and the clients are DEALER sockets which round-robin across the servers. Does this dealer/router pair without a broker make sense? It works and fits my need, but I don't see this pattern in the official guides.
If you are trying to balance your request load between servers at the client, then yes, you are right. This pattern is called client-side load balancing. You can read about that here.
But I'm not sure about your implementation.
You can see more in the Google SRE book.
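A minimal pyzmq sketch of the brokerless pattern (ports and payloads are arbitrary). A DEALER socket connected to several endpoints round-robins its outgoing messages among them, which is exactly this client-side balancing:

    import threading
    import zmq

    ctx = zmq.Context()

    def server(port):
        # ROUTER receives [identity, empty, body]; sending with the same
        # identity frame routes the reply back to the right client.
        sock = ctx.socket(zmq.ROUTER)
        sock.bind("tcp://127.0.0.1:%d" % port)
        while True:
            ident, empty, body = sock.recv_multipart()
            sock.send_multipart([ident, empty, b"%d: " % port + body])

    for port in (5555, 5556):
        threading.Thread(target=server, args=(port,), daemon=True).start()

    client = ctx.socket(zmq.DEALER)
    for port in (5555, 5556):
        client.connect("tcp://127.0.0.1:%d" % port)   # connect to every server

    for i in range(4):
        # A DEALER with several connections round-robins its sends.
        client.send_multipart([b"", b"request %d" % i])
        _empty, reply = client.recv_multipart()
        print(reply.decode())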
I am looking for a specific kind of proxy that is meant to operate in a rendezvous mode, such that two nodes can make an outgoing connection to the same proxy, send a routing token, and have their packets relayed to each other from that point.
Proxy servers like HAProxy would be perfect, but AFAIK they do not offer anything like this: an ordinary proxy makes another outgoing connection and routes packets to that destination. Here, instead, I want two nodes to connect to the proxy and have their packets relayed between them through the proxy, after sending a routing token that can be used to associate the two nodes.
I could write my own server to perform such type of relaying, but I am wondering if something already exists to do something like this. I am looking for such a solution as a fallback for cases where NAT traversal protocols like ICE/STUN/TURN are not feasible due to a highly restricted network environment that does not allow UDP traffic. The base protocol for the proxy could be TCP, HTTP or WebSocket, which would be easier to allow in a firewall with a simple rule.
Any ideas or recommendations?
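To make the "write my own server" option concrete, here is a hedged sketch of the kind of relay I mean (the newline-terminated token framing and the port are made-up choices, not an existing protocol): each node connects out over TCP, sends a token, and the server splices together the two connections that presented the same token.

    import socket
    import threading

    PENDING = {}                 # token -> socket of the first peer to arrive
    LOCK = threading.Lock()

    def read_token(conn):
        # Read one byte at a time so no bytes after the newline are lost.
        token = b""
        while (ch := conn.recv(1)) and ch != b"\n":
            token += ch
        return token

    def pump(src, dst):
        # Copy one direction until EOF, then tear both sockets down.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        finally:
            src.close()
            dst.close()

    def handle(conn):
        token = read_token(conn)
        with LOCK:
            peer = PENDING.pop(token, None)
            if peer is None:
                PENDING[token] = conn        # first arrival: wait for a match
                return
        # Second arrival with the same token: splice the two connections.
        threading.Thread(target=pump, args=(conn, peer), daemon=True).start()
        threading.Thread(target=pump, args=(peer, conn), daemon=True).start()

    srv = socket.create_server(("0.0.0.0", 9000))   # port is an arbitrary choice
    while True:
        c, _addr = srv.accept()
        threading.Thread(target=handle, args=(c,), daemon=True).start()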
I believe SOCKSv5 has everything you are asking for.
two nodes can make an outgoing connection to the same proxy, send a routing token, and have their packets relayed to each other from that point.
The routing token in this case would be the endpoint address and/or the user credentials. I would first look at the super simple implementation built into the 'ssh' utility; this guide goes over how to get everything set up. If you need something more granular, then look into Dante.
The only tricky part is when you try to use the user credential option with SOCKSv5, as it is not as well supported in browsers, but it is possible with addons.
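For the client side, a hedged sketch with the PySocks library (proxy address, target endpoint, and credentials are all placeholders):

    import socks  # pip install PySocks

    s = socks.socksocket()                     # drop-in subclass of socket.socket
    s.set_proxy(socks.SOCKS5, "proxy.example.com", 1080,
                username="alice", password="secret")
    # From the proxy's perspective, the requested endpoint plus the
    # credentials act as the "routing token" that decides where this
    # connection is relayed.
    s.connect(("peer.example.com", 9000))
    s.sendall(b"hello through the relay\n")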
An XMPP server sends push notifications to a client behind a NAT using a public endpoint (IP + port) that the NAT assigns to the client. But how long does the NAT keep this endpoint assigned to that specific client? What happens if the NAT assigns the same endpoint to another client? How can this problem be solved?
XMPP uses a standard TCP connection. NATs will keep the association for as long as the connection is alive (unless they are horribly broken).
Update: The last part of my statement could have been expanded a bit. Horribly broken NAT implementations do exist. Generally these are a small percentage, but many (most?) popular XMPP clients do ensure they send some kind of keepalive over idle connections.
There are three kinds of keepalive you can use; I'll list them here in order of bandwidth/processing requirements:
TCP keepalives are a good lightweight option, especially as once they are enabled, they are automatically handled by the OS. How to enable them will depend on your language and framework, but at the lowest level, you need to enable the SO_KEEPALIVE option on the socket.
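For example, at the raw socket level in Python (the portable part is SO_KEEPALIVE; the tuning options below are Linux-specific, and the server name is a placeholder):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Portable part: turn keepalives on.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tuning: first probe after 60s idle, then every 10s,
    # drop the connection after 5 unanswered probes. Names differ on
    # other platforms.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
    sock.connect(("xmpp.example.com", 5222))   # placeholder server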
There are two problems with TCP keepalives. One is that you can't control them from your application (unless you write platform-specific code). The second problem is that some NAT implementations are so broken that they will ignore TCP keepalives too! But you're hopefully down to a very small percentage now.
So another option is whitespace keepalives. Since these involve data going across the stream, you should be safe from even the broken NATs that ignore keepalives.
Whitespace keepalives simply involve sending the space character (' ') across the XMPP stream at any time it is idle. XML and XMPP allow unlimited whitespace between elements, and it is simply ignored by the recipient.
Finally, you can use fully-fledged XMPP pings (XEP-0199). These involve sending an actual <iq/> 'get' stanza to the server, which then must reply. This ensures data flows in both directions, and should make even the most broken NAT implementations keep your connection alive.
Ok, I should mention that there is an even worse class of NAT. I have seen NATs that will simply 'forget' about your mapping for a range of reasons, including their mapping table being full, or just after a timer. There is nothing you can do to work around these; they don't work with any long-lived TCP connections. The best you could probably do at that point is use BOSH (essentially XMPP over HTTP).
Conclusion: If you are concerned that your application may run behind some of these devices, I suggest something like the following algorithm (exact times may be tweaked, but I recommend these as minimum values; a code sketch follows the list):
If you have not sent any data for 60s, send a single space character.
If you have not received any data for 120s, send an XMPP ping to your server.
If the server doesn't reply to the ping within a reasonable amount of time, reconnect.
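A hedged Python sketch of that algorithm around a raw stream socket. Here `sock`, `reconnect()`, and the receive path that refreshes `last_recv` and clears `awaiting_pong_since` on the ping reply are assumed to exist in the surrounding client; the ping stanza is the XEP-0199 one:

    import time

    IDLE_SEND, IDLE_RECV, PING_TIMEOUT = 60, 120, 30   # seconds

    # Assumed from the surrounding client: `sock` (connected XMPP stream),
    # `reconnect()`, and code that updates `last_recv` / resets
    # `awaiting_pong_since` whenever data (including the pong) arrives.
    last_send = last_recv = time.monotonic()
    awaiting_pong_since = None

    def tick():
        # Call this periodically (e.g. once a second) from the client loop.
        global last_send, awaiting_pong_since
        now = time.monotonic()
        if awaiting_pong_since and now - awaiting_pong_since > PING_TIMEOUT:
            reconnect()                       # the ping went unanswered
        elif now - last_recv > IDLE_RECV and awaiting_pong_since is None:
            # Nothing received for 120s: send a full XMPP ping (XEP-0199).
            sock.sendall(b"<iq type='get' id='ka1'>"
                         b"<ping xmlns='urn:xmpp:ping'/></iq>")
            awaiting_pong_since = now
            last_send = now
        elif now - last_send > IDLE_SEND:
            sock.sendall(b" ")                # whitespace keepalive
            last_send = now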
Because the behaviour of broken NAT devices is beyond any standard protocol specification, it is naturally impossible to devise a perfect solution that will work with all of them, all of the time. You just have to accept that these are a small minority, and none of this matters for working NAT devices (though there are other kinds of network breakages that may make regular keepalives/pings a good idea, depending on the needs of your application).
The solution is to send keepalive messages to maintain the NAT entry. XMPP whitespace keepalives are typically used. Send one, e.g. every ten minutes, to preserve reachability of the NATed client.
You have to keep in mind that NAT is not a standardized technique, so there are different implementations. The RFCs provided in the comment above are from the BEHAVE working group.
Windows NLB works great and removes a computer from the cluster when that computer is dead.
But what happens if the application dies but the server still works fine? How have you solved this issue?
Thanks
By not using NLB.
Hardware load balancers often have configurable "probe" functions to determine if a server is responding to requests. This can be by accessing the real application port/URL, or some specific "healthcheck" URL that returns only if the application is healthy.
Other options on these look at the queue depth or the time taken to respond to requests.
Cisco put it like this:
The Cisco CSM continually monitors server and application availability using a variety of probes, in-band health monitoring, return code checking, and the Dynamic Feedback Protocol (DFP). When a real server or gateway failure occurs, the Cisco CSM redirects traffic to a different location. Servers are added and removed without disrupting service—systems easily are scaled up or down.
(from here: http://www.cisco.com/en/US/products/hw/modules/ps2706/products_data_sheet09186a00800887f3.html#wp1002630)
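A minimal sketch of such a "healthcheck" URL, which a probe like this would hit (app_is_healthy() is a placeholder for whatever "healthy" means for your application, and the port is arbitrary):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def app_is_healthy():
        # Placeholder: substitute a real end-to-end check (database
        # reachable, work queue draining, etc.).
        return True

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The probe gets 200 only while the application really works.
            ok = self.path == "/healthz" and app_is_healthy()
            self.send_response(200 if ok else 503)
            self.end_headers()
            self.wfile.write(b"OK" if ok else b"UNHEALTHY")

    HTTPServer(("0.0.0.0", 8081), HealthHandler).serve_forever()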
Presumably with Windows NLB there is some way to programmatically set the weight of nodes? The nodes should self-monitor and, if there is some problem (e.g. a particular node is low on disk space), set their weight to zero so they receive no further traffic.
However, this needs to be carefully engineered and have further human monitoring to ensure that you don't end up with a situation where one fault causes the entire cluster to announce itself down.
You can't really hope to deal with a "Byzantine general" situation in network load balancing; an appropriately broken node may think it's fine and appear fine, while being completely unable to do any actual work. The trick is to try to minimise the possibility of these situations happening in production.
There are multiple levels of health check for a network application.
is the server machine up?
is the application (service) running?
is the service accepting network connections?
does the service respond appropriately to a "are you ok" request?
does the service perform real work? (this will also check back-end systems behind the service you are probing)
My experience with NLB may be incomplete, but I'll describe what I know. NLB can do 1 and 2. With custom coding you can add the other levels with varying difficulty. With some network architectures this can be very difficult.
Most hardware load balancers from vendors like Cisco or F5 can be easily configured to do 3 or 4. Level 5 testing still requires custom coding.
We start in the situation where all nodes are part of the cluster but inactive.
We run a custom service monitor which makes a request on the service locally via the external interface. If the response was successful we start the node (allow it to start handling NLB traffic). If the response failed we stop the node from receiving traffic.
All the intermediate steps described by Darron are irrelevant: whether it worked or not is the only thing we care about. If the machine is inaccessible, then the rest of the NLB cluster will treat it as failed.
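A hedged sketch of such a monitor, assuming an HTTP service and command-line control of the node's NLB membership (the health URL is a placeholder, and the nlb.exe commands are assumptions — older systems use wlbs.exe — so adjust to your environment):

    import subprocess
    import time
    import urllib.request

    HEALTH_URL = "http://203.0.113.20/healthz"   # node's external address (placeholder)
    node_active = False

    def probe():
        # Request the service locally, but via the external interface,
        # so we exercise the same path NLB traffic would take.
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    while True:
        healthy = probe()
        if healthy and not node_active:
            subprocess.run(["nlb.exe", "start"])      # rejoin the cluster
            node_active = True
        elif not healthy and node_active:
            # drainstop finishes in-flight work, then stops new traffic.
            subprocess.run(["nlb.exe", "drainstop"])
            node_active = False
        time.sleep(10)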