Decide a different 'main' IP for a specific process - Windows

I have 3 IPs set on my server (Windows Server 2008).
One particular application needs to work on the 3rd IP; unfortunately the 'bind' parameter of that application is only half working, and it uses my main IP to communicate instead.
The application uses "GetHostByName" and/or "GetAddrInfo" to get my main IP. I know this because I reverse engineered it.
I would like to "spoof" (I think) another main IP for this application. I would like to keep my IP settings as they are, because everything else is working and I feel I shouldn't touch it.
So basically, I would like "GetHostByName" and "GetAddrInfo" to return another IP of my choice, but only for THIS particular application.
I am aware that this probably can't be done exactly as described. Maybe it can, but if not, I would like to know what you think the best solution would be to achieve my goal.
Thank you,
Yanick

Well, I continued my research with different search terms and finally found this:
https://r1ch.net/projects/forcebindip
Note: the only thing that did not work was accessing services on 127.0.0.1 (MySQL, for example); I had to use my main IP instead. I'm not sure whether that was my mistake, my specific application's problem, or a common problem.
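For anyone wondering how a tool like ForceBindIP can do this at all: the usual technique is to inject a DLL into the target process and patch its import address table (IAT), so the process calls a replacement function instead of the real Winsock one. Below is a rough C sketch of that idea, not ForceBindIP's actual code: the address 192.0.2.30 is a placeholder for the third IP, only getaddrinfo is hooked (gethostbyname would need the same treatment), and real tools also handle delay-loaded imports, the Unicode GetAddrInfoW, and 32/64-bit mismatches.

```c
/* Sketch of an IAT hook: build as a DLL, inject into the target process.
 * Link against ws2_32.lib and dbghelp.lib. */
#include <winsock2.h>
#include <ws2tcpip.h>
#include <windows.h>
#include <dbghelp.h>
#include <string.h>

typedef int (WSAAPI *getaddrinfo_fn)(const char *, const char *,
                                     const struct addrinfo *,
                                     struct addrinfo **);
static getaddrinfo_fn real_getaddrinfo;

/* Redirect lookups of the local host name; pass everything else through. */
static int WSAAPI hooked_getaddrinfo(const char *node, const char *service,
                                     const struct addrinfo *hints,
                                     struct addrinfo **res)
{
    char local[256] = "";
    gethostname(local, sizeof(local)); /* the app has Winsock running here */
    if (node && _stricmp(node, local) == 0)
        node = "192.0.2.30";           /* placeholder: the IP we want used */
    return real_getaddrinfo(node, service, hints, res);
}

/* Find ws2_32.dll in the main module's import table and swap the
 * getaddrinfo pointer for ours. */
static void patch_iat(void)
{
    BYTE *base = (BYTE *)GetModuleHandle(NULL);
    ULONG size = 0;
    PIMAGE_IMPORT_DESCRIPTOR imp = (PIMAGE_IMPORT_DESCRIPTOR)
        ImageDirectoryEntryToData(base, TRUE, IMAGE_DIRECTORY_ENTRY_IMPORT,
                                  &size);
    for (; imp && imp->Name; imp++) {
        if (_stricmp((const char *)(base + imp->Name), "ws2_32.dll") != 0)
            continue;
        if (!imp->OriginalFirstThunk)
            continue;
        PIMAGE_THUNK_DATA names = (PIMAGE_THUNK_DATA)(base + imp->OriginalFirstThunk);
        PIMAGE_THUNK_DATA iat   = (PIMAGE_THUNK_DATA)(base + imp->FirstThunk);
        for (; names->u1.AddressOfData; names++, iat++) {
            if (IMAGE_SNAP_BY_ORDINAL(names->u1.Ordinal))
                continue;
            PIMAGE_IMPORT_BY_NAME byname =
                (PIMAGE_IMPORT_BY_NAME)(base + names->u1.AddressOfData);
            if (strcmp((const char *)byname->Name, "getaddrinfo") != 0)
                continue;
            DWORD old;
            VirtualProtect(&iat->u1.Function, sizeof(iat->u1.Function),
                           PAGE_READWRITE, &old);
            real_getaddrinfo = (getaddrinfo_fn)iat->u1.Function;
            iat->u1.Function = (ULONG_PTR)hooked_getaddrinfo;
            VirtualProtect(&iat->u1.Function, sizeof(iat->u1.Function),
                           old, &old);
        }
    }
}

BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
{
    (void)inst; (void)reserved;
    if (reason == DLL_PROCESS_ATTACH)
        patch_iat();
    return TRUE;
}
```

However the DLL is injected (the classic route is CreateRemoteThread calling LoadLibrary), the patch only affects that one process, which is exactly the "only THIS application" requirement.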

Related

Are there any ways to monitor all HTTP protocols and block certain ones using a single script on Windows?

I want to write a program that can monitor all system HTTP/HTTPS requests used to open the default browser, block certain ones, and automatically change certain requested URLs into others. The process of changing a URL is simple, but the monitoring and blocking part is quite puzzling.
e.g. When clicking the URL 'https://example.com/asdf.htm', the request will be blocked by the program, the Windows system will receive the command 'http://www.example2.org/asdf.htm' instead, and the latter URL will be opened by the default browser.
I am an amateur developer and student who does not have much experience solving such problems.
I searched the web and found that someone asked a similar question years ago:
https://superuser.com/questions/554668/block-specific-http-request-from-windows
However, I didn't find any useful coding advice on that page. Maybe I could use an antivirus program to block certain URLs, or change the hosts file to block them, but then the URL replacement cannot be done. Certainly, pointing the hosts file at a server that redirects certain requests might work, but that's too complex. I hope someone can help me solve the problem by giving me a simple method of monitoring the Windows system itself. Thanks!
To summarize our conversation in the comments: in order to redirect or restrict traffic, whether to sites or to ports (protocols are actually "mapped" to ports), the main solutions usually are:
a software firewall - keep in mind that software firewalls don't usually redirect; they just permit or deny traffic on given ports
a hardware firewall (or an advanced router - not a consumer model, but enterprise grade) - these do what you want, but they are very expensive and not worth it for a home experiment
a proxy server - this can do what you want
Other alternatives that might or might not work include editing the hosts file, as you said, but as stated earlier I don't recommend it: it's a system file, and if you forget about it, it can become a hindrance later (also keep in mind that you normally should not use a Windows account with admin rights, even at home, but that is another story). A browser extension is another option, though I would guess it only changes content on pages, not the way the browser works (such as changing URLs).
I think a proxy server is the best pick here; a rough sketch of the idea follows below. Try it and let me know.
Keep in mind I still recommend you read about networking in order to get a better idea of what you can and can't do in each setup.
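To make that concrete, here is a rough C sketch of the smallest version of the idea: a listener on 127.0.0.1:8080 that the browser is configured to use as its HTTP proxy, which answers matching requests with a 302 redirect to a substitute URL. The port, the match rule, and the URLs are all placeholders, it speaks plain HTTP only (intercepting HTTPS needs a real proxy such as Squid or Fiddler), and a real proxy would forward unmatched requests instead of refusing them.

```c
/* Toy "redirecting proxy": answers matching requests with a 302.
 * Windows/Winsock; link against ws2_32.lib. */
#include <winsock2.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                   /* placeholder port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* 127.0.0.1 only */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, SOMAXCONN);

    for (;;) {
        SOCKET c = accept(srv, NULL, NULL);
        char req[2048] = "";
        recv(c, req, sizeof(req) - 1, 0);

        /* Rewrite rule: any request mentioning example.com is redirected.
         * A real proxy would forward everything else; we just refuse it. */
        const char *reply;
        if (strstr(req, "example.com"))
            reply = "HTTP/1.1 302 Found\r\n"
                    "Location: http://www.example2.org/asdf.htm\r\n"
                    "Content-Length: 0\r\n\r\n";
        else
            reply = "HTTP/1.1 403 Forbidden\r\n"
                    "Content-Length: 0\r\n\r\n";
        send(c, reply, (int)strlen(reply), 0);
        closesocket(c);
    }
    /* not reached */
}
```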

Inter-Gear Communication for OpenShift?

I'm trying to create an app such that gear 2, according to this model, can be accessed by gears 3, 4, ... n when using the --scaling option.
The idea is for this structure to be the head of a chain of relays. I'm trying to find where the relevant information is, so that all the following gears have the same behavior.
I've found no documentation that describes how to reach gear 2 (the Primary DNAS) with a URL (internal/external ip:port) or otherwise, so I'm a little lost as to how to let the app scale properly.
I should mention that so far I've only used bash scripting, but I'm not worried about writing the program in other languages, as long as it follows that structure in OpenShift.
The end result would hopefully be a scalable instance of SHOUTcast on OpenShift.
To Be Clear:
I'm developing a cartridge, not using the DIY cartridge; all I understand of OpenShift is in this guide, but of course I'm limited because I'm new.
I'm stuck trying to figure out how to have the cartridge handle additional gears using the first gear as a relay. I am not confused about how OpenShift routes requests externally to the gears and load-balances them, and I'm not lost on how to use port forwarding to connect to my app; the goal is to design the cartridge so that port forwarding isn't a requirement at all and only external routes are used.
The problem, as described above, is that the additional gears need some extra configuration: they need an available source (what better than the first gear?). In fact, the solution to my issue might be to somehow set up this cartridge to bypass HAProxy with an external route that only goes to the first gear.
GitHub, for those interested; pass it around, it'll remain public. Currently this works only standalone; scaling it (what I'd like to fix) causes issues. I've been working on this too long by myself, so have at it :)
There's a great KB article that explains how routing works on OpenShift gears here: https://help.openshift.com/hc/en-us/articles/203263674-What-external-ports-are-available-on-OpenShift-.
On a scalable application, HAProxy handles all the traffic routing to your gears. The only way to access your gears is through the ports mentioned in the article above. rhc does, however, provide a port-forwarding option that allows you to access things like MySQL directly from your local machine.
Please note: we don't allow arbitrary binding of ports on the externally accessible IP address.
It is possible to bind to the internal IP within the port range 15000-35530. All other ports are reserved for specific processes to avoid conflicts. Since you're binding to the internal IP, you will need to use port forwarding to access it: https://openshift.redhat.com/community/blogs/getting-started-with-port-forwarding-on-openshift
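To illustrate that last point, binding inside the allowed range against the gear's internal IP looks roughly like the sketch below. The OPENSHIFT_DIY_IP variable name is an assumption for a DIY-style cartridge; OpenShift exposes the internal IP under a per-cartridge environment variable, so substitute whichever one your cartridge defines.

```c
/* Sketch: bind a relay-head listener to the gear's internal IP on a port
 * inside the allowed 15000-35530 range (Linux gear, POSIX sockets). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *ip = getenv("OPENSHIFT_DIY_IP");   /* assumed variable name */
    if (!ip) {
        fprintf(stderr, "internal IP variable not set\n");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(18000);                  /* inside 15000-35530 */
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");             /* expect EACCES outside the range */
        return 1;
    }
    listen(fd, 16);
    printf("relay head listening on %s:18000\n", ip);
    /* accept() loop would go here */
    close(fd);
    return 0;
}
```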

Is there a way to find out what the local IP of a website is?

Like I said in the title: is there a way to find out what the local IP of a website is? If there is not, is it possible to make one?
So it must start with 192......
Thanks
I'm not sure I fully understood your question, but if you want to know the IP of a website, simply resolve the domain (for example using dig, nslookup, etc.).
If you want to know the internal network IP of a webserver that sits behind NAT, I think there's no simple solution - unless you have access to the machine.
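For completeness, the resolution step can also be done in code; here is a minimal sketch using getaddrinfo. Note it returns the same public-facing address that dig or nslookup would show, not the internal 192.168.x.x address behind the NAT.

```c
/* Resolve a domain name the way dig/nslookup do (POSIX). */
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *host = argc > 1 ? argv[1] : "example.com";
    struct addrinfo hints = {0}, *res, *p;
    hints.ai_family = AF_INET;            /* IPv4 only, for brevity */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, NULL, &hints, &res) != 0) {
        fprintf(stderr, "could not resolve %s\n", host);
        return 1;
    }
    for (p = res; p; p = p->ai_next) {
        char buf[INET_ADDRSTRLEN];
        struct sockaddr_in *sa = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &sa->sin_addr, buf, sizeof(buf));
        printf("%s -> %s\n", host, buf);  /* the public address only */
    }
    freeaddrinfo(res);
    return 0;
}
```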

Is it possible to hide the IP address in a PHP connection?

Long story short, I want to test my site's anti-bot systems ("bot" here referring to players of the game cheating with programs, not spiders etc.).
I've written my own bot using PHP's CLI. Most of the time, my site is able to detect the bot activity and block it.
However, I need to test dealing with dynamic IPs, and since I have a static one, this is no easy task as far as I can tell. There are other things I'd like to be able to test that involve multiple IPs.
So, bottom line: is it possible to hide/change the IP address seen by the server when my PHP script connects to it and, if so, how do I do it? (I've never really used proxies before, so I don't know much about them.)
You can write test code that substitutes $_SERVER['REMOTE_ADDR'] at the very beginning of your script and then run whatever tests you like.
No, the IP is one of the few things a client can't camouflage.
You can definitely use proxy servers. There are many open proxy servers available, but those are unreliable and slow. You can use a paid proxy solution, something like proxy.lc.
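To show what using a proxy means at the socket level, here is a rough C sketch (POSIX sockets; Winsock is analogous) of sending a request through a forward HTTP proxy, so the target server logs the proxy's IP rather than yours. The proxy address 203.0.113.10, port 3128, and the target URL are placeholders; PHP's cURL bindings can achieve the same thing via their proxy option.

```c
/* Issue a plain-HTTP request through a forward proxy. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in proxy;
    memset(&proxy, 0, sizeof(proxy));
    proxy.sin_family = AF_INET;
    proxy.sin_port = htons(3128);                        /* placeholder port */
    inet_pton(AF_INET, "203.0.113.10", &proxy.sin_addr); /* placeholder proxy */

    if (connect(fd, (struct sockaddr *)&proxy, sizeof(proxy)) < 0) {
        perror("connect");
        return 1;
    }

    /* With a forward proxy, the request line carries the absolute URL. */
    const char *req =
        "GET http://example.com/anti-bot-test HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n\r\n";
    send(fd, req, strlen(req), 0);

    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);               /* dump the response */

    close(fd);
    return 0;
}
```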

1 A-record for every subdomain (10000+); any potential issues? Any other solution?

Most solutions I've read here for supporting subdomain-per-user at the DNS level are to point everything to one IP using *.domain.com.
It is an easy and simple solution, but what if I want to point the first 1000 registered users to serverA, and the next 1000 registered users to serverB? This is our preferred approach for keeping our clustering software and hardware costs down.
(diagram from the MS IIS site: http://learn.iis.net/file.axd?i=1101)
The most logical solution seems to be one A-record per subdomain in the zone data files. BIND doesn't seem to have any size limit on zone data files; they are only restricted by available memory.
However, my team is worried about the latency of getting a new subdomain up and ready, since creating a new subdomain consists of inserting a new A-record and restarting the DNS server.
Is the performance of restarting the DNS server something we should worry about?
Thank you in advance.
UPDATE:
It seems most of you suggest that I use a reverse proxy setup instead:
(diagram: http://learn.iis.net/file.axd?i=1102 - ARR is IIS7's reverse proxy solution)
However, here are the CONS I can see:
single point of failure
cannot strategically set up servers in different locations based on IP geolocation.
Use the wildcard DNS entry, then use load balancing to distribute the load between the servers, regardless of which client it is.
While you're at it, skip the URL-rewriting step and have your application determine which account it is based on the URL as entered - you can just as easily determine what X is in X.domain.com as in domain.com?user=X (see the sketch below).
EDIT:
Based on your additional info, you may want to develop a "broker" that stores which clients are to access which servers. Make that public-facing, then draw from the resources associated with the client stored with the broker. Your front end can be load-balanced, and you can then pull from the file/db servers based on who the user is.
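Extracting the X from X.domain.com is only a few lines in any language; here is a small C sketch, with domain.com standing in for the real base domain:

```c
/* Pull the account name out of a Host header value such as
 * "alice.domain.com" - the alternative to URL rewriting suggested above. */
#include <stdio.h>
#include <string.h>

/* Writes the subdomain part of `host` into `out`; returns 0 on success. */
static int account_from_host(const char *host, char *out, size_t outlen)
{
    const char *base = ".domain.com";          /* placeholder base domain */
    size_t hlen = strlen(host), blen = strlen(base);

    if (hlen <= blen || strcmp(host + hlen - blen, base) != 0)
        return -1;                             /* not a user subdomain */

    size_t sub = hlen - blen;                  /* length of the X part */
    if (sub >= outlen || memchr(host, '.', sub) != NULL)
        return -1;                             /* too long, or a nested label */

    memcpy(out, host, sub);
    out[sub] = '\0';
    return 0;
}

int main(void)
{
    char user[64];
    if (account_from_host("alice.domain.com", user, sizeof(user)) == 0)
        printf("account: %s\n", user);         /* prints "account: alice" */
    return 0;
}
```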
The front-end proxy with a wildcard DNS entry really is the way to go with this. It's how big sites like LiveJournal work.
Note that this is not just a TCP-layer load balancer - there are plenty of solutions that will examine the host part of the URL to figure out which back-end server to forward the query to. You can easily do it with Apache running on a low-spec server with a suitable configuration.
The proxy ensures that each user's session always goes to the right back-end server, and almost any session-handling method will just keep working.
Also the proxy needn't be a single point of failure. It's perfectly possible and pretty easy to run two or more front-end proxies in a redundant configuration (to avoid failure) or even to have them share the load (to avoid stress).
I'd also second John Sheehan's suggestion that the application just look at the left-hand part of the URL to determine which user's content to display.
If using Apache for the back-end, see this post too for info about how to configure it.
If you use tinydns, you don't need to restart the nameserver when you modify its database, and it should not be a bottleneck because it is generally very fast. I don't know whether it performs well with 10000+ entries, though (it would surprise me if it didn't).
http://cr.yp.to/djbdns.html
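For illustration, with a stock djbdns layout a new subdomain is a one-line append to tinydns's data file followed by a recompile; tinydns-data rebuilds data.cdb atomically, so the running server picks the change up without a restart. The name, IP, and TTL below are placeholders:

```
# in /service/tinydns/root (the default layout; its Makefile runs tinydns-data)
echo '+user1001.domain.com:10.1.0.2:300' >> data   # add an A record, 300s TTL
make                                               # rebuild data.cdb atomically
```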
