I am trying to eliminate errors and failed connection attempts when using Azure performance load testing. I believe I need to find the correct IP addresses to whitelist: the issues appear when the Web App is restricted to specific IPs and do not occur when IPs are unrestricted.
We see 403 Forbidden errors, and almost exactly half of the connections show as failed attempts during the tests. We use a whitelist to allow access to our dev web site from only certain IPs. When I remove the IP restrictions, the tests work fine.
If I'm on track, I need to know which IPs to whitelist. This is for the South Central US region.
Azure does provide a Datacenter IP Ranges article; however, there are two problems: 1) there is no indication of which IPs are used for performance load testing, and 2) I see no way of importing the entire article of IPs, which I would happily do at this time.
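For reference, here is a rough sketch of the bulk import I would attempt, assuming the downloadable "Azure IP Ranges and Service Tags" JSON and the Azure CLI; the file name, resource group, web app, and service-tag names are placeholders, and this still would not isolate the load-testing IPs:

```sh
# Sketch only: bulk-allow every prefix published for a region from the
# downloaded "Azure IP Ranges and Service Tags" JSON. File, resource group,
# app, and tag names below are hypothetical placeholders.
priority=300
for cidr in $(jq -r '.values[]
                     | select(.name == "AzureCloud.southcentralus")
                     | .properties.addressPrefixes[]' ServiceTags_Public.json); do
  az webapp config access-restriction add \
    --resource-group my-rg --name my-dev-webapp \
    --rule-name "azure-$priority" --action Allow \
    --ip-address "$cidr" --priority "$priority"
  priority=$((priority + 1))
done
```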
I am attempting an installation of OKD 4.5 in a restricted (i.e. air-gapped) environment. I am running into an issue during the installation process wherein, as far as I can tell, the bootstrap machine is attempting and failing to access the mirrored registry I have running.
Based on my research, I believe this issue stems from a lack of proxy settings within the install-config.yaml file, as described in the documentation here; however, I am having trouble wrapping my brain around what functions I'm trying to accommodate by adding this proxy information to the configuration, and exactly what information I should be adding. I haven't been able to find any other segments of the documentation that go into detail about this either (if someone can simply point me in the direction of such documentation, that would be extremely helpful).
Would anyone be willing to explain to me what values should go into the proxy lines in this file and why? Does this information replace, complement, or require changes in any way to the networking segment of the configuration?
As a related question, do I need to change any of the networking subnet values to reflect my local network? In all the examples I've seen, the clusterNetwork.cidr and serviceNetwork subnets are the same as in the documentation (cidr: 10.128.0.0/14, serviceNetwork: - 172.30.0.0/16), and some include an additional machineNetwork field. Is this a field I should be adding, and if so, should I just use my own subnet for it?
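For concreteness, here is a minimal sketch of how I currently understand these two blocks would look; the proxy host and the 10.0.0.0/24 machine subnet below are placeholders, not my real values:

```yaml
# install-config.yaml excerpt (my current understanding; values are placeholders)
proxy:
  httpProxy: http://proxy.okd.local:3128    # hypothetical forward proxy for HTTP
  httpsProxy: http://proxy.okd.local:3128   # hypothetical forward proxy for HTTPS
  noProxy: .okd.local,10.0.0.0/24           # names/CIDRs that should bypass the proxy
networking:
  clusterNetwork:               # pod overlay network, internal to the cluster
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:               # cluster-internal service IPs, also virtual
  - 172.30.0.0/16
  machineNetwork:               # the field I suspect should match my real node subnet
  - cidr: 10.0.0.0/24
```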
As context for my specific scenario, here are my environment specifications as well as the specific errors I am getting:
OKD Release: `4.5.0-0.okd-2020-10-15-235428`
Environment: Virtualized bootstrap, master, and worker nodes in virt-manager, running on CentOS 7 in an air-gapped environment. The host machine contains the install directory and also provides DNS, an Apache server, HAProxy for load balancing, and the mirrored registry.
Errors:
From <log-bundle>/bootstrap/journals/release-image.log:
localhost.localdomain release-image-download.sh[114151]: Error: Error initializing source docker://okd-services.okd.local:5000/okd@sha256:<.....>:
error pinging docker registry okd-services.okd.local:5000: Get "https://okd-services.okd.local:5000/v2/":
dial tcp <okd-services.okd.local ip>:5000: connect: connection refused
From systemctl status named (several requests to IPs I don't recognize, which seem to be NTP-related lookups):
network unreachable resolving '2.fedora.pool.ntp.org.okd/AAAA..
network unreachable resolving './NS/IN': 199.7.91.13#53
etc
I have ensured that host-node and node-node communication is present, and that the registry is accessible from the nodes (to test, I netcat the certificate pem to a node and update its trusts, then curl -u the registry at https://fqdn:5000/v2/_catalog), so I am fairly certain all the connections are established properly.
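Roughly, the check I run looks like this (the file name, credentials, and registry host are placeholders for my actual values):

```sh
# On the node: trust the registry certificate, then query the catalog.
cp registry.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
curl -u testuser:testpassword https://okd-services.okd.local:5000/v2/_catalog
```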
To conclude: since I'm fairly sure the proxy/network settings in the install-config.yaml file are to blame, and since I am unable to find more elaboration on these settings in the official docs or elsewhere, I would very much appreciate an in-depth explanation of how to configure this for an air-gapped environment. Additionally, if anyone believes another issue is the cause, any input on that would be great.
We have an App Service hosting some OData APIs in Azure. We run an instance in Central US and another in East US 2, with a Traffic Manager profile set up so a single URL is balanced between the two instances. There is an intermittent issue; is there a way to hit a specific server as the endpoint so we can test each one individually?
If you want to hit a specific server, you can access each instance directly using its own domain name. Since Azure Traffic Manager works at the DNS level, you can verify the Traffic Manager settings using tools like nslookup or dig to resolve the DNS names. To effectively test a Performance traffic-routing method, you must have clients located in different parts of the world.
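For example, assuming placeholder names for the profile and the two App Service instances, something like this lets you compare what Traffic Manager resolves to against each instance hit directly:

```sh
# See which endpoint the Traffic Manager profile currently resolves to:
nslookup myapi.trafficmanager.net

# Bypass Traffic Manager and hit each regional instance directly:
curl -i https://myapi-centralus.azurewebsites.net/odata/
curl -i https://myapi-eastus2.azurewebsites.net/odata/
```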
About performance, please note that the only performance impact Traffic Manager can have on your website is the initial DNS lookup. Traffic does NOT flow through Traffic Manager. Once the DNS lookup completes, the client has an IP address for an instance of your web site and connects directly to that address, without passing through Traffic Manager. The Traffic Manager policy you choose has no influence on DNS performance. However, a Performance routing method can negatively impact the application experience: for example, if your policy redirects traffic from North America to an instance hosted in Asia, the network latency for those sessions may be a performance issue.
You may set the DNS TTL value low so that changes propagate quickly (for example, 30 seconds).
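If you manage the profile with the Azure CLI, one way to lower the TTL (the profile and resource group names below are placeholders) is:

```sh
# Lower the profile's DNS TTL to 30 seconds so routing changes propagate quickly
az network traffic-manager profile update \
  --name my-tm-profile --resource-group my-rg --ttl 30
```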
In addition, there are sample tools to measure DNS performance.
To troubleshoot a probe failure, you need a tool that shows the HTTP status code returned from the probe URL. There are many tools available that show you the raw HTTP response (see the curl example below):
Fiddler
curl
wget
Also, you can use the Network tab of the F12 Debugging Tools in Internet Explorer to view the HTTP responses.
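For instance, curl can print just the status code the probe sees; the endpoint and probe path below are placeholders:

```sh
# Print only the HTTP status code returned from the probe URL
curl -s -o /dev/null -w "%{http_code}\n" https://myapi-centralus.azurewebsites.net/health
```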
Hope this information helps you.
Since Mozilla and Google have announced that they intend to activate DNS over HTTPS in their default settings in the future, and the IETF has officially approved the draft (https://datatracker.ietf.org/wg/doh/about/), I tried to understand the impact on our corporate network. It is now possible for every application to bypass the internal DNS server (assigned via DHCP) and connect directly to a public DNS service. There is no easy way for an administrator to prevent applications and users from doing this, since all traffic is routed through HTTPS.
In most corporations I know, there is a split-DNS setup in place, allowing internal (intranet) and external (internet) name and IP resolution for the same domain name (e.g. mail.mycorp.example) with different resolved values. It also allows adding internal-only services like wiki.intra.mycorp.example that would not be resolvable or accessible from the internet. The same goes for infrastructure names like server01.eq.mycorp.example.
The problem I see is that if an application prefers DNS over HTTPS and does not correctly fall back to the system-assigned DNS servers, internal-only domains would no longer be accessible.
I made an experiment with Firefox 61.0.1 (64-Bit) on Windows 10. I have set:
network.trr.bootstrapAddress = 1.1.1.1
network.trr.uri = https://mozilla.cloudflare-dns.com/dns-query
network.trr.mode = 2
network.trr.mode = 2 should prefer DNS over HTTPS but fall back to system DNS if no value is received; mode = 1, which I also tried, should run a race and use the first valid result Firefox gets back.
Unfortunately, after activating DNS over HTTPS in Firefox, all internal-only websites no longer worked. All requests end in a timeout and therefore fail.
What am I missing?
Is there a better way to handle internal-only DNS entries in future setups?
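One option I am considering for future setups, assuming a newer Firefox that supports excluding domains from TRR (the pref below did not exist in 61, and the domain list is only an example of our internal zones):

network.trr.mode = 2
network.trr.excluded-domains = intra.mycorp.example,eq.mycorp.example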
The exact configuration you describe works in my corporate network: Firefox tries DoH first, falls back to the local DNS for names it cannot resolve that way, and internal-only sites resolve and load correctly.
I need to do performance testing with Blazemeter for my site, which is set up in a QA environment.
I access the QA site over a VPN connection, and I wonder how to run Blazemeter performance tests against it and how to establish connectivity.
Please help me with this.
Check this link.
Did you check with Blazemeter first?
You also need to check with your network team! You might not want to do your load testing through a VPN connection, as that is not going to give you accurate results. It is better to expose the staging/PROD-clone environment outside the network for load testing; this is the process we follow.
Solution:
Got an update from Blazemeter:
"First, you'll need to whitelist our IPs, so please refer to our IP ranges list:
https://guide.blazemeter.com/hc/en-us/articles/207420645-What-IPs-are-sending-the-traffic
Then, as long as you have the respective credentials to access the VPN, and run your test locally in Jmeter against the target server successfully, you may run your tests in Blazemeter as desired"
We had to buy a dedicated IP from them and have it whitelisted by the network team. With that done, the connection is established and we can start performance testing.
Kindly refer to the link: https://www.blazemeter.com/blog/top-3-options-running-performance-tests-behind-your-corporate-firewall
Okay, so we implement Recaptcha in production. We get errors because it can't reach the IP address it needs to use the service. We open a port for the IP address to reach Google. No problem. We do that and configure that IP address explicitly to work. It works great. Then, the next day, we start getting errors again because Recaptcha is using a different IP address. I can allow requests from that IP address, too, but now I'm unsettled. Where are these addresses coming from? How do I configure this to work reliably?
Recaptcha from Google can use any Google IP address, and there are lots of them.
Ran this from Windows:

nslookup -type=TXT _netblocks.google.com

_netblocks.google.com text =
"v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"
Those are all the networks Google currently uses. They can change, so check them often.
Google suggests allowing outbound port 80 to all IPs, but this is highly insecure. They recommend going through a proxy server, but again, that is highly insecure if your web server is in a DMZ. Proxy-aware trojans do exist. All an attacker needs to do is exploit a vulnerability to execute arbitrary code and create a reverse connection on port 80 through the proxy server to download the payload. Then it is trivial to escalate privileges and own the box. I don't mean just Windows servers, but Linux as well. I've done it in a lab environment with security turned on. It's really easy to do.
This is the Google website I got this from:
http://code.google.com/p/recaptcha/wiki/FirewallsAndRecaptcha
I wanted to append to this answer with more recent information. The documentation that Chris points to does not include all of the TXT records necessary to dig (thanks, Google):
_netblocks2.google.com (IPv6 subnets)
_netblocks3.google.com (Additional IPv4 subnets)
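To pull all three record sets in one go from a Unix shell, a quick loop works (a sketch using the same nslookup query as the earlier answer):

```sh
# Query every TXT record set Google publishes for its netblocks
for rec in _netblocks.google.com _netblocks2.google.com _netblocks3.google.com; do
  nslookup -type=TXT "$rec"
done
```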
In my particular case, the _netblocks3 entry contained two large /19s that made my initial rule ineffective.
(I found additional references here: https://support.google.com/a/answer/60764?hl=en)
Perhaps you should use a hostname rather than an IP address.