How to set up a RethinkDB proxy server - rethinkdb

We have two client machines; how do we connect both of them using a proxy server? As you said earlier:
"To start a RethinkDB proxy on the client:
rethinkdb proxy -j -j ..."
only one of the clients can connect this way, since the ports will already be in use.

As mentioned elsewhere, you can avoid the port conflict by passing the -o 1 argument, which shifts all the ports the proxy uses by an offset of 1.
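For example, a minimal sketch assuming both proxies end up on the same machine and the cluster node is reachable at server1:29015 (the address is illustrative):

rethinkdb proxy -j server1:29015
rethinkdb proxy -j server1:29015 -o 1

The second proxy's driver, cluster, and web UI ports all shift by one (e.g. the default driver port 28015 becomes 28016), so clients pointing at the second proxy should connect to 28016.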

Related

binding rethinkdb webUI to 'localhost' and proxy: refuses connections or exposes to full network

I am not able to successfully bind and secure the RethinkDB web UI: it either ends up exposed to the whole network or refuses connections behind the proxy.
I am thus left with no choice but to restart the RethinkDB daemon with bind-http=all each
time I want to access it...
RethinkDB starts with systemctl under Arch Linux. Three configurations I tried:
# /etc/rethinkdb/instances.d/mydb.conf
bind-http=localhost #(1)
bind-http=127.0.0.1 #(2)
bind-http=1.2.3.4 #(3)
Resulting in:
(1) Fails to parse 'localhost'
(2) Refuses connections behind the proxy
(3) Equivalent to bind-http=all
Firefox 59 uses a SOCKS proxy, working OK,
as the browser's IP address does become 1.2.3.4:
$ ssh -TND 8080 user@1.2.3.4
I am quite convinced that I had secured the web UI as expected,
and the problems started after I updated both FF and RethinkDB
(FF59 fails to parse 'localhost' as well, for example).
I don't know if this is a bug or a feature, or if I am missing something;
any help is most welcome. Many thanks
Beware of the "localhost" string.
Configuring the rethinkdb server with:
# /etc/rethinkdb/instances.d/mydb.conf
bind-http=127.0.0.1
http-port=8084
and binding some local port with SSH:
[client]$ ssh -L 8080:127.0.0.1:8084 server
is enough to access the web interface at 127.0.0.1:8080, as suggested by @jishi.
Configuring the browser to use a SOCKS proxy as per the rdb docs is not at all necessary.
For some reason localhost:8080 is not understood by FF59 (gets invisibly prefixed by www or something).

kamailio proxy server can't find other server

We are running two proxy servers, and we need to connect server A to server B,
but server A can't find server B, because server B's SIP domain is only registered over TCP:
dig _sip._tcp.serveraddress SRV returns an answer, but dig _sip._udp.serveraddress SRV does not.
Our DNS server admin says: sorry, we can't support UDP.
How can we make Kamailio support TCP DNS lookups?
The latest version (at this time 4.1.2) should try all the protocols.
You can try playing with the DNS proto preference global parameters, giving a higher priority to TCP; see:
http://www.kamailio.org/wiki/cookbooks/4.1.x/core#dns_parameters
As an example:
dns_tcp_pref=50
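A slightly fuller sketch of the relevant kamailio.cfg lines (the parameter names are from the 4.1.x core cookbook linked above; the values are illustrative, and whether NAPTR lookups need to be enabled depends on your setup):

# try NAPTR records first, then rank protocols by preference
dns_try_naptr=yes
# prefer TCP over UDP when choosing a transport
dns_tcp_pref=50
dns_udp_pref=10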

Proxify an application via loopback adapters and SSH

This is part programming, part sysadmin, so please excuse me if you feel that this should be over on serverfault.
I have an application that is not SOCKS aware and that we need to use through a firewall. We cannot modify the application to have SOCKS support either.
At the moment, we do this by aliasing the IPs the application talks to onto the loopback adapter on the host, then creating SSH tunnels out to another host. The IPs the application uses are hardcoded. Our SSH connections look like:
ssh -L 1.2.3.4:9999:1.2.3.4:9999 user@somehost
Where 1.2.3.x are aliases on the loopback.
So the application connects to the open port on the loopback, which gets sent out to the SSH host and onto the real 1.2.3.4.
It works, but the problem is that this application connects to quite a few IPs (50+), so we end up with 50 SSH connections out from the box.
We've tried several 'proxifying' apps, like tsocks and others, but have had a lot of issues with them (the app is running on OS X and tsocks doesn't work so well, even with the patches).
Our idea was to write a daemon that listened on all interfaces on the specified port - it would then take the incoming packets from the application, scrape the packet info (dst IP, port, payload), recreate the packet, and proxify it through a single SSH SOCKS connection (ssh -D 1080 user@somehost). That way, we only have one SSH connection that all the ports are being proxied through.
My question is - is this feasible? Is there something that I'm missing here? I've been combing through the pfctl, ipfw, and iptables docs, but I don't see any option to do it through those, and this doesn't seem like it'd be the most difficult thing to code. It would recreate the packet based on the original destination IP and port, connect to the local SOCKS proxy, and resend the packet as if it were the original application, but now with SOCKS support.
If I'm missing something that someone knows about that already does this, please let me know. I don't know socket programming or SOCKS too well, but this doesn't seem like it'd be too big of a project to tackle; still, I'd like some opinions on whether I'm biting off way more than I should.
Thanks
If your application could add SOCKS client support, you could simply run ssh -D local_socks_port remote_machine, which opens up local_socks_port as a SOCKS server on localhost; that server can then connect to any host accessible from the remote machine.
Example: imagine you are using an untrusted wifi network without encryption. You can simply launch ssh -D 1080 home, and then configure your web browser to use localhost:1080 as its SOCKS server. Of course, you need a SOCKS-enabled client. All the traffic would appear to come from your gateway, and the connection would be opaque to anyone snooping the wifi.
You can also open a single ssh client with an indefinite number of LocalForward requests, which would be tunneled on top of a single ssh session.
Moreover, you can add ssh connections to an already-established ssh connection by using the ControlMaster and ControlPath options of ssh.
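A sketch of an ~/.ssh/config entry combining those last two points (the host alias, user, and addresses are illustrative; the option names are standard ssh_config):

Host tunnelhost
    HostName somehost
    User user
    # one SOCKS listener plus per-IP forwards, all over a single session
    DynamicForward 1080
    LocalForward 1.2.3.4:9999 1.2.3.4:9999
    LocalForward 1.2.3.5:9999 1.2.3.5:9999
    # let later ssh invocations multiplex onto this connection
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p

Once ssh tunnelhost is up, further forwards can even be added to the running session with ssh -O forward -L ... tunnelhost.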

Windows Tool or utility to validate remote TCP / UDP ports are accessible over the network?

I am trying to find some Windows-based tools that can help me validate TCP and UDP connections to remote machines.
My Problem (just one use case):
At work, I manage many clustered servers that I run load tests against. In order to get a rich test, I use JMeter Plugins, which provides a server agent that opens a TCP socket on port 4444 on a target remote machine: http://code.google.com/p/jmeter-plugins/wiki/PerfMonAgent
There are many times when I set up a new load test farm where either the network, the server configuration, or the ServerAgent itself has issues, thus not allowing a load test client to access that TCP connection.
The issue I have is that I don't know which part of the system is broken.
What I think I need:
I would like to know how I can open a raw TCP connection (not HTTP with curl) to a remote server, to validate both that the network allows the connection and that the server firewall allows the given TCP port to be accessed remotely.
What I have looked at:
These are some of the tools I have looked at so far:
Nmap http://nmap.org
Ncat http://sourceforge.net/projects/nmap-ncat/
TCP/IP Builder http://www.drk.com.ar
Zenmap 6.01 and nmap might do the job I want, but some machines were not accessible to Zenmap when I knew 100% that the server was accessible via HTTP, so that was strange.
I have looked at many tools and either they:
Don't allow remote connections
Don't seem to want to connect to a TCP socket
Or I don't understand how to use them to accomplish the validation I stated above.
I would greatly appreciate any comments and suggestions to help with this recurring problem I face.
Mick,
Firebind.com can do what you'd like to do. Firebind is an Internet-based server that can listen on any of the 65535 UDP or TCP ports. It uses a Java-based client to send traffic to and from the server from your machine.
Carl
www.firebind.com
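For a quick check without a third-party service, the Ncat tool already on your list can exercise a single port from the load test client (flags per the Ncat man page; targethost is a placeholder, and 4444 is the ServerAgent port from the question):

ncat -vz targethost 4444
echo ping | ncat -vu targethost 4444

The first command reports whether a TCP connection to port 4444 succeeds; the second fires a single UDP datagram at it (bear in mind that with UDP, getting no error back does not prove the port is open). To separate network problems from ServerAgent problems, you can stop the agent and listen on the port yourself with ncat -vl 4444 on the server, then repeat the client-side test.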

Ports with C++ Server/Client applications

If I create a C++ server/client application, does the port I use to communicate need to be open on the routers of both the server and the client machine?
Or what other approach could I take? The client computer needs to receive information from the server, but I am not able to have any ports opened, because it is on a school network...
[edit]
Hmm. My setup is a PHP page running on a server: when I press hello, the server makes an SSH connection through PHP and sends shell commands to the machine. The server runs on a school server which I do have SSH access to, and I run all my things from there. The client computer will be my PC running off the school wifi, which is not connected to the server. The server will try to make an SSH connection to the public IP of my computer on the school wifi (no ports open; it can SSH out but not accept SSH in). Will the methods you mention make this possible, in particular connect.c, since I can't run PuTTY off the server, and connect.c I could call from the PHP?
The choice of language is highly irrelevant here.
There don't need to be ports 'open' on any router, unless your traffic must pass through it. On normal peer hosts in the same network (or subnet) there would hardly be any firewall policy, not even in schools.
Technically it is possible for the switch to block peer-to-peer traffic (meaning traffic not destined for the outgoing gateway), but that is not very usual.
Of course, if the school doesn't allow outbound (WAN) traffic on most ports, tough luck, and they're absolutely right :)
You can look at:
ssh (with tunnels: the -L, -D and -R options, perhaps with -o GatewayPorts on; see the reverse-tunnel sketch below)
stunnel
connect.c
http-tunnel
All very readily googled.
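Given your edit (the PC can SSH out but cannot accept inbound SSH), the -R option covers exactly that case. A sketch, with host names and ports invented for illustration:

On the PC, behind the school wifi:
ssh -R 2222:localhost:22 user@schoolserver

This keeps an outbound SSH session open and makes port 2222 on schoolserver forward back to the PC's own sshd. The PHP page on the server can then reach the PC with:
ssh -p 2222 pcuser@localhost 'some shell command'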
To establish a TCP/IP connection, only the server port needs to be accessible by the client. The connection is full-duplex, therefore data can flow from the client to the server and vice-versa.
If you are using UDP for your application, which is a connection-less protocol, what happens depends heavily on the firewall or router and whether it performs connection tracking for your service or not.
Unless you provide some additional information on your service and the network setup on both the client and the server side, we cannot provide more concrete information.
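A quick way to convince yourself of the TCP case, using ncat purely as an illustration: run ncat -l 5555 on the server and ncat server-address 5555 on the client. Only the server's port 5555 has to be reachable, yet text typed at either end shows up at the other, demonstrating the full-duplex flow over a single client-initiated connection.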
