Dnsmasq: Consider all responses

I have set up a dnsmasq server and enabled the --all-servers flag to query all upstream servers. My problem is that, by default, dnsmasq won't consider all responses: it stops after the first one, whether it is positive or negative. I know that this is the intended behavior for e.g. public nameservers which all know the same hosts, but I have a different setup.
Is there any way to enable this and evaluate all responses?
Thanks in advance!
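For context, the configuration being described boils down to something like this in dnsmasq.conf (the server addresses are placeholders):

    # forward each query to every upstream; the first reply wins
    all-servers
    server=192.168.1.10
    server=192.168.1.20

Worth noting: if each upstream is authoritative for different names, dnsmasq's per-domain routing (e.g. server=/lan.example/192.168.1.10) sidesteps the race entirely, since queries for a domain go only to the server responsible for it.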

Rate Limit/Throttling with Tomcat (and Spring REST)

I have read a lot of threads now, but my problem could not be solved sufficiently:
When running a Tomcat web server with a Spring REST backend, there must be a way to limit the possible requests per second/minute/... based on, let's say, the IP of a requestor.
My investigations led to the following possibilities so far:
Use Guava RateLimiter or https://github.com/weddini/spring-boot-throttling and check all requests in the preHandle. But since this does not take into account which IPs made requests at what time, something like a Redis store of IP/last-access timestamps would make more sense to check against (see the sketch after this list).
Put a more advanced web server in front of tomcat which offers this functionality (e.g. apache2 or nginx)
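The Redis check mentioned in the first option is language-agnostic; here is a minimal fixed-window sketch in Python with redis-py (the key scheme, limits, and local Redis instance are illustrative, not from the question):

    # A minimal fixed-window rate limit with redis-py (pip install redis).
    import time
    import redis

    r = redis.Redis()

    def allow_request(ip, limit=30, window=60):
        """Allow at most `limit` requests per `window` seconds per IP."""
        # One counter key per IP per time window
        key = f"ratelimit:{ip}:{int(time.time() // window)}"
        count = r.incr(key)          # atomic increment; creates the key at 1
        if count == 1:
            r.expire(key, window)    # let each window's key clean itself up
        return count <= limit

The same pattern translates directly to a Spring HandlerInterceptor: call the check in preHandle and return HTTP 429 when it fails.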
Now I don't like the first solution, since the requests already hit the application itself, and the second solution builds up an additional layer, which I can't really believe is necessary for such a basic problem.
So my question is, what methods and solutions am I missing here? I read something about Tomcat's SemaphoreValve, but it seems to just limit the overall rate of requests.
Would it be most efficient/possible to filter already with some basic functionality like iptables or fail2ban on port 8443, and simply drop requests from the same IP in a given time frame?
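To make that last idea concrete, here is one possible sketch using iptables' recent module; the port matches the question, but the thresholds are illustrative and the rules would need to fit into your existing chain:

    # Drop a new connection if this source already opened 10 in the last 60 s
    iptables -A INPUT -p tcp --dport 8443 -m conntrack --ctstate NEW \
        -m recent --name api --update --seconds 60 --hitcount 10 -j DROP
    # Otherwise record the attempt and accept it
    iptables -A INPUT -p tcp --dport 8443 -m conntrack --ctstate NEW \
        -m recent --name api --set -j ACCEPT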

Possible to configure SonarQube shortened url alias?

Currently our users have to enter http://biglongservername:9000/sonar in order to access our site. Can it be configured to correspond to http://sonar? Our DNS guys say they can't do any more than change the CNAME so that pinging "sonar" takes you to biglongservername.domainname.org, which doesn't help our users much, but might be a start. Is this possible?
There are 3 parts to this:
DNS configuration to alias http://biglongservername to http://sonar. Your DNS guys have already said they can make this happen for you, so take them up on it. The address becomes http://sonar:9000/sonar
Dropping the :9000 from the address. This is a matter of SonarQube configuration. In $SONARQUBE_HOME/conf/sonar.properties, set sonar.web.port to 80, the default port. Restart. The address becomes http://sonar/sonar
Dropping the "/sonar" from the end of the address. This is again a matter of configuration. In $SONARQUBE_HOME/conf/sonar.properties (yes, the same file) comment out sonar.web.context. Restart. The address becomes http://sonar.
Note that I would test each of these steps before moving on to the next one. And while step #1 can happen transparently to your users, they will certainly notice steps #2 and #3. You may want to set up a brief outage window.
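Taken together, the relevant lines in $SONARQUBE_HOME/conf/sonar.properties end up looking roughly like this:

    # Serve on the default HTTP port instead of 9000
    sonar.web.port=80
    # Commented out, so the app is served from the root context instead of /sonar
    #sonar.web.context=/sonar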

How to start a server on a free port from within a client script?

I have a (python) script that must start a server. The server should obviously use a free port, and the python script must know this port in order to communicate with it.
Question is, how do I make sure this is the case?
You cannot determine the free port in the python script and pass it to the server, because in the meantime another application could have taken the port.
You cannot let the server choose a port, because then the port is unknown to the script.
This looks like a pretty common problem, so I suppose it has been tackled before.
What is the neatest way to do this?
Use a list of preferred ports, and try them in the order of preference. This list will of course be known to both client and server.
I suspect you are picking a low port. Since most of the lower ports (close to 1024 or below) already have dedicated applications, you want to avoid these.
If you are using a higher port, the likelihood of a collision is negligible, which I think is the common solution.
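A minimal sketch of that preferred-port approach in Python (the port list and addresses are illustrative):

    import socket

    # Agreed on by both client and server
    PREFERRED_PORTS = [8000, 8001, 8002]

    def bind_first_free(ports):
        """Server side: bind and listen on the first free port in the list."""
        for port in ports:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                sock.bind(("127.0.0.1", port))
                sock.listen(1)
                return sock, port
            except OSError:
                sock.close()
        raise RuntimeError("none of the preferred ports are free")

    def connect_first_listening(ports):
        """Client side: connect to the first port that accepts."""
        for port in ports:
            try:
                return socket.create_connection(("127.0.0.1", port), timeout=1)
            except OSError:
                continue
        raise RuntimeError("server not reachable on any preferred port")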

How do I get a collection (or a stream) of distinct IP addresses using Tor?

I'm writing a web crawler for academic research. This crawler makes millions of requests that I want to distribute over ten or so IP addresses.
My machine has one IP address already. I can get a second by using Tor. Can I get even more IP addresses out of Tor? Here are the ideas (and the questions surrounding them) that I have for doing this:
Run multiple instances of Tor; each provides an IP address. (But will Tor map more than one or two anonymized IP addresses to my machine?)
Run one instance but for each request change its identity. (But will Tor rate-limit this behavior, as mentioned here?)
Would either of these ideas work, or do the bits in parentheses make them fail? Any other ideas?
Tor relays have rate limits, and NEWNYM is limited to 5-second intervals.
If they're not fast enough, a willing botnet or app engine should work.
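To make the second idea concrete: the usual way to request a new identity programmatically is Tor's control port. A minimal sketch, assuming the stem library (pip install stem) and a ControlPort enabled on 9051 — both assumptions, not part of the question:

    import time
    from stem import Signal
    from stem.control import Controller

    # Assumes cookie or no-password auth; pass a password to authenticate()
    # if your torrc uses HashedControlPassword.
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        for _ in range(3):
            controller.signal(Signal.NEWNYM)            # request a new circuit
            time.sleep(controller.get_newnym_wait())    # honor the rate limit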

Multiple connections in a single SSH SOCKS 5 Proxy

My first question here on Stack Overflow: what do I need to do so that the SSH SOCKS 5 proxy (SSH2) will allow multiple connections?
What I have noticed is that when I load a page in Firefox (already configured to use the SOCKS 5 proxy), it loads everything one by one. It is noticeable with the naked eye, and I also confirmed it through Firebug's NET tab, which logs the connections that have been made.
I have already configured some of the directives in the about:config page, like pipelining, persistent proxy connections, and a few other things. But I still get this kind of sequential load of resources, which is noticeably very slow.
network.http.pipelining;true
network.http.pipelining.maxrequests;8
network.http.pipelining.ssl;true
network.http.proxy.pipelining;true
network.http.max-persistent-connections-per-proxy;100
network.proxy.socks_remote_dns;true
My ISP sucks: during the day, it intentionally breaks connections on a random basis. So it is impossible to actually accomplish meaningful work without a lot of browser refreshes or hitting the F5 key. That is why I started looking for solutions to this.
SSH's dynamic port forwarding is the best solution I have found to date, because it has pretty good compression, which saves a lot of useless traffic, and is also secure. The only thing remaining is to get multiple connections running through it.
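For reference, the tunnel being described is OpenSSH's standard dynamic forward, something like the following (user, host, and port are placeholders):

    # -D 1080: dynamic (SOCKS) forward on localhost:1080
    # -C: compress traffic; -N: no remote shell, just the tunnel
    ssh -C -D 1080 -N user@remote-host

Firefox's SOCKS host would then be localhost, port 1080.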
Thanks for all the inputs.
I have had the same thoughts, and my conclusion is that it should already have multiple connections going through the SOCKS proxy. This is because if you view the ssh connection with the -vvv flag, you'll notice it opening up different ports for the different requests.
I think it may have something to do with SSH-over-TCP itself; plus, perhaps, some extra inefficiencies and/or bugs in the implementations. Are you using only OpenSSH on Mac OS X / *BSD / Linux, or is this PuTTY on Windows?
Your situation is actually pretty much exactly why SCTP was developed (as a TCP replacement), which has a notion of multiple streams from within a single connection.
Hopefully, we'll have SSH over SCTP readily available one day. The best part about SCTP is that it would still work over IPv4; it is supposedly mostly a matter of the end hosts having support for it, so, unlike IPv6, you wouldn't have to wait for your lazy ISP (at least, theoretically).
