SiteMinder bypass authentication for specific IP address range

I have a resource/URL that is protected by SiteMinder. I would like to allow access to the URL for anyone coming from a specific IP address range.
If a browser is in that IP address range, I don't want the user to authenticate; they should see the URL without being prompted to log in.
Is there a way in SiteMinder to allow open access for specific IP address ranges, but require login for anyone outside of that range?
Thanks.

This is probably a bad idea security-wise, since an IP address can be spoofed, leaving your application completely open. If you still want to continue, I would look into using a load balancer/proxy to forward requests from that range to another agent with a separate configuration.
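If you do go the proxy route, the routing decision itself is just a source-IP check against the trusted range. A minimal, framework-agnostic sketch in Python (the CIDR range and backend names are placeholders, not anything SiteMinder-specific):

import ipaddress

# Placeholder CIDR for the trusted network; replace with the real range.
TRUSTED_RANGE = ipaddress.ip_network("203.0.113.0/24")

def choose_backend(client_ip: str) -> str:
    """Pick which backend (agent configuration) should serve the request:
    trusted addresses go to the open backend, everything else stays protected."""
    if ipaddress.ip_address(client_ip) in TRUSTED_RANGE:
        return "backend-open"        # configuration without an auth challenge
    return "backend-protected"       # normal SiteMinder-protected configuration

print(choose_backend("203.0.113.42"))   # backend-open
print(choose_backend("198.51.100.7"))   # backend-protected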


How does a DNS proxy work? (smart DNS)

I am trying to build a new DNS server that will act as a proxy for certain domain names and use a public DNS server as its upstream.
My understanding of DNS:
The client asks the DNS server (x.x.x.x) about example.com.
The DNS server looks it up in its zones (or asks the parent and root servers) and finds that example.com can be reached at i.i.i.i.
The DNS server sends i.i.i.i to the client.
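For reference, that normal path is just an ordinary lookup; a quick Python sketch (example.com stands in for any unrestricted domain):

import socket

# Ordinary resolution: ask the configured resolver and use whatever it returns.
addr = socket.gethostbyname("example.com")
print(addr)  # the i.i.i.i from the steps above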
Now, the client asks for the IP address of restricted.test, and the DNS server knows it is a restricted website, so instead of returning the website's real IP it returns its own proxy address p.p.p.p to the client.
Please correct me if I'm wrong so far, but when the client tries to connect to p.p.p.p, how does the proxy server know which website the client wants to reach?
I really want to know how these work under the hood.
Thanks in advance.
The mechanism you are asking about is the Proxy Auto-Configuration (PAC) file.
Read more about it here:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Proxy_servers_and_tunneling/Proxy_Auto-Configuration_PAC_file
And here:
https://www.websense.com/content/support/library/web/v76/pac_file_best_practices/PAC_explained.aspx
Essentially, in corporate networks a PAC file is pushed out to every computer and browsers are configured to use it, but it can also be set up manually. Check your browser's proxy settings to see the location of the PAC file it points to.

URL shortener API

I am using the Google URL Shortener API (https://www.googleapis.com/) to shorten our mobile app download link.
We have restrictions on our server such that we don't allow access from unrecognized IPs.
So I would like to know what IP range Google uses when a URL is shortened using this API.
This will help us configure our security settings to allow access from those IPs.
When you say "using the URL Shortener API", are you referring to making calls to this API from your server (as in outbound traffic is IP restricted) or using the short URL to reach your server (as in inbound traffic is IP restricted)? I'll go ahead and answer both possibilities, but please clarify if these weren't what you meant.
If you're trying to allow calls to this API from your server with outbound traffic IP restricted
The URL shortener API can be called through any of Google's IP addresses. There's no way to get a list of these because they will vary by location, load balancing, etc. Plus, you wouldn't want to attempt to restrict by IP this way because whitelisting even one of Google's IP addresses would allow calls from your server to all of Google's services. This likely includes any service hosted on Google Cloud, which could be a proxy, meaning literally anything in the world could be called this way; you'd be entirely eliminating IP restrictions on your server.
If you're trying to shorten your server's URLs using this API and your server has inbound traffic IP restricted
You shouldn't need to do anything. These URLs are just domain redirects. In the end, the user ends up visiting your website (server) using its actual long URL (there's no proxying), so just whitelist the allowed users' IPs and it should work.
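You can confirm the redirect behaviour yourself by following a short link and checking where you land. A small sketch using Python with the third-party requests library (the short URL is a placeholder):

import requests

# Placeholder short link; substitute one generated by the API.
short_url = "https://goo.gl/abc123"

resp = requests.get(short_url, allow_redirects=True)

# The redirect hops issued by the shortener...
for hop in resp.history:
    print(hop.status_code, hop.url)

# ...and the final URL, which is your site's real long URL. The request to
# your server arrives from the user's own IP address, not from Google's.
print(resp.status_code, resp.url)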

Restrict public web application access to specific dynamic source IP addresses

I'm developing a web application using Laravel, hosted on a public cloud. The application can currently be accessed publicly on the internet via its domain. However, I want to restrict it so that only users connecting from the organization's networks can use it, since we do not want the application used at home or elsewhere.
At the moment, the organization has 2 locations (2 public internet connections) from which the application must be accessible. Both use home-grade internet where the IP address changes every time the connection is re-established. As we do not have static IP addresses, I cannot filter users with an IP address filter; the filter rule would have to be changed every time the organization's network reconnects.
My application already has a solid authentication and authorization mechanism, and of course the users know their credentials since they must access the app for work. However, that alone doesn't meet the requirement.
I have thought about a VPN, but it (probably) doesn't work either: if we give users access to the VPN, they can still connect to it from anywhere and use the application outside the workplaces. If we restrict VPN clients to specific IP addresses, then the same problem occurs when the IP changes.
To sum up, I would like advice on how to restrict access to a web application, hosted on the public internet, to users connecting from public IP addresses that can change every time the connection is re-established. The requirement may sound strange, but it is what it is. Please feel free to ask for more details and to discuss the suggestions.
Thank you in advance.
You could set up a client for a dynamic DNS service (e.g. DynDNS) on the client side.
Then, on the server side, you can always check the request's source IP against the current address that DNS name resolves to.
As an alternative, you could bind the website to localhost only and let it be accessed solely via a pubkey-enforced SSH tunnel (auto-established by a script/scheduler on the client side, running at a permission level outside the users' reach, so that they can't take the private key needed for the connection anywhere).
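The dynamic DNS check amounts to resolving the organization's hostnames on each request and comparing the result with the client address. A minimal sketch in Python (the hostnames are placeholders; in Laravel the equivalent check would live in a middleware):

import socket

# Placeholder dynamic DNS names kept up to date by the router at each site.
OFFICE_HOSTNAMES = ["site-a.example-dyndns.org", "site-b.example-dyndns.org"]

def is_office_ip(client_ip: str) -> bool:
    """Return True if client_ip matches the current address of an office hostname."""
    for hostname in OFFICE_HOSTNAMES:
        try:
            # Resolve the hostname to all of its current IPv4 addresses.
            _, _, addresses = socket.gethostbyname_ex(hostname)
        except socket.gaierror:
            continue  # temporarily unresolvable; skip it
        if client_ip in addresses:
            return True
    return False

# Example: reject any request that does not come from an office connection.
print(is_office_ip("198.51.100.23"))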
You can use PHP methods and variables to detect where a request originated. Whitelist your organization's addresses and allow only those by adding a middleware.
Additionally, you can generate a token using Laravel Passport (or your own mechanism) and use that token to verify whether the request is valid.
Since the IP changes, you can set up dynamic DNS, as suggested in the answer above.

Recaptcha IP addresses

Okay, so we implement Recaptcha in production. We get errors because it can't reach the IP address it needs to use the service. We open a port for the IP address to reach Google. No problem. We do that and configure that IP address explicitly to work. It works great. Then, the next day, we start getting errors again because Recaptcha is using a different IP address. I can allow requests from that IP address, too, but now I'm unsettled. Where are these addresses coming from? How do I configure this to work reliably?
Recaptcha from Google can use any Google IP address, and there are lots of them.
Ran this from Windows:
nslookup -type=TXT _netblocks.google.com
_netblocks.google.com text =
"v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"
Those are all the netblocks Google currently uses. They can change, so check them often.
Google suggests allowing outbound port 80 to all IPs, which is highly insecure. They also recommend going through a proxy server, but again that is risky if your web server is in a DMZ. Proxy-aware trojans do exist: all an attacker needs to do is exploit a vulnerability to execute arbitrary code, open a reverse connection on port 80 through the proxy server to download a payload, and then it is trivial to escalate privileges and own the box. I don't mean just Windows servers but Linux as well; I've done it in a lab environment with security controls enabled. It's really easy to do.
This is the Google website I got this from:
http://code.google.com/p/recaptcha/wiki/FirewallsAndRecaptcha
I wanted to append more recent information to this answer. The documentation that Chris is pointing to does not include all of the TXT records you need to query (thanks, Google):
_netblocks2.google.com (IPv6 subnets)
_netblocks3.google.com (Additional IPv4 subnets)
In my particular case, the _netblocks3 entry contained 2 large /19s that made my initial rule ineffective.
(I found additional references here: https://support.google.com/a/answer/60764?hl=en)
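To pull all three record sets and extract the CIDR ranges in one pass, something like the following works (a sketch using the third-party dnspython 2.x package, installed with pip install dnspython):

import dns.resolver  # third-party: pip install dnspython

# The three TXT records that together list Google's published netblocks.
NETBLOCK_RECORDS = [
    "_netblocks.google.com",
    "_netblocks2.google.com",   # IPv6 subnets
    "_netblocks3.google.com",   # additional IPv4 subnets
]

def google_netblocks():
    """Collect the ip4:/ip6: entries from Google's SPF netblock records."""
    blocks = []
    for name in NETBLOCK_RECORDS:
        for rdata in dns.resolver.resolve(name, "TXT"):
            text = b"".join(rdata.strings).decode()
            for token in text.split():
                if token.startswith(("ip4:", "ip6:")):
                    blocks.append(token.split(":", 1)[1])
    return blocks

for cidr in google_netblocks():
    print(cidr)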
Perhaps you should be using a hostname rather than an IP address.

How does a proxy bypass a firewall filter?

I am wondering how a proxy bypasses the content filter in a firewall.
For example, if you are in China and try to connect to Facebook, the GFW will block it. But if you use a proxy server, you can get through. What is the logic here?
Thanks,
The firewall blocks the website's address from being accessed directly. A proxy has a different address and is therefore reachable. The proxy, sitting outside the firewall, is able to access the blocked address itself and sends the page's HTML back to your computer.
Think of the proxy as a middleman. It gets you what you want and then sends it to you, without you ever accessing the webpage directly and alerting the firewall.
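In practice the client simply sends all of its traffic to the proxy instead of to the destination. A minimal sketch with Python and the third-party requests library (the proxy address is a placeholder):

import requests

# Placeholder proxy that sits outside the filtering firewall.
proxies = {
    "http": "http://proxy.example.net:3128",
    "https": "http://proxy.example.net:3128",
}

# The firewall only sees a connection to proxy.example.net, not to the blocked
# site; the proxy fetches the page and relays the response back.
resp = requests.get("https://www.example.com/", proxies=proxies, timeout=10)
print(resp.status_code)
print(resp.text[:200])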
