I am trying to cache my requests using requests_cache and redis like so:
requests_cache.install_cache(
    'requests_cache', backend='redis', expire_after=600
)
and when Redis is running on localhost:6379, everything works out of the box.
However, when I deploy my app to Heroku, where there is a REDIS_URL environment variable, the above command fails because REDIS_URL obviously does not point to localhost:
Error 111 connecting to localhost:6379. Connection refused.
So the question is: how do I make this work on Heroku? The docs aren't clear on the subject.
You have to pass an additional argument to install_cache called connection, which should be a StrictRedis instance. So I guess create it like this (from_url parses the host, port, and password out of the URL):
r = redis.StrictRedis.from_url(os.environ['REDIS_URL'])
requests_cache.install_cache(
    'requests_cache', backend='redis', expire_after=600, connection=r
)
Or something similar, depending on how much information REDIS_URL contains (protocol, port etc.)
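Putting it together, a minimal self-contained sketch (the redis://localhost:6379 fallback is only there for local development):
import os

import redis
import requests_cache

# On Heroku, REDIS_URL holds the full connection URL (scheme, host, port,
# password); from_url parses all of it. Fall back to localhost for dev.
redis_url = os.environ.get('REDIS_URL', 'redis://localhost:6379')
connection = redis.StrictRedis.from_url(redis_url)

requests_cache.install_cache(
    'requests_cache', backend='redis', expire_after=600, connection=connection
)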
This feels like a basic question; I'm sure other people have needed something like this at some point. However, I couldn't find anything clear on the topic, and I'm not very familiar with networking, so I hope the following makes sense (and sorry if I'm butchering the terminology).
I often need to connect to a VPN server at work. At the moment I am using Cisco AnyConnect, which upon connection asks me for the host server, my username, and my password, and afterwards routes all my traffic through the VPN.
The problem is that, depending on what I'm doing, I often need to jump back and forth to the VPN (some applications need the local network and others don't).
What would be perfect is to create one VPN connection and just keep it on a port without routing anything to it by default. Then I could use it as a proxy to selectively route my traffic through the VPN (e.g. I override http_proxy locally in one terminal instance and run the applications that need the VPN from there, without having to jump back and forth). Furthermore, if I create this connection from the terminal, I can automate most of the process with something like:
function callExecutableThroughVPN() {
    if ! is_connected_to_vpn; then
        echo "couldn't find the vpn connection, will attempt to connect. enter password:"
        # get password input here
        setup_vpn_on_port_9876 # pass password input here
        echo "setting proxy to 127.0.0.1:9876"
        export http_proxy=127.0.0.1:9876
        export https_proxy=127.0.0.1:9876
    fi
    ./executable_that_needs_vpn
}
Then I can simply stay on my local network and use a wrapper like the above for the few processes that need their traffic re-routed.
So in summary, my question is: is it possible to create a single VPN process from the terminal that listens on a local port, so that I don't have to route all my traffic at once and can simply kill the process when I'm done?
I recommend using an SSH tunnel/SOCKS proxy (see ssh -D) together with the tsocks wrapper. For HTTP(S) proxies I recommend the proxychains tool.
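A minimal sketch of that workflow, assuming you can SSH into a gateway host inside the work network (vpn-gateway.example.com, user, and port 9876 are placeholders):
# Open a SOCKS5 proxy on local port 9876: -f backgrounds ssh, -N runs no remote command.
ssh -f -N -D 9876 user@vpn-gateway.example.com

# Option 1: tools that understand SOCKS proxy URLs (e.g. curl) pick it up from the environment.
export http_proxy=socks5://127.0.0.1:9876
export https_proxy=socks5://127.0.0.1:9876

# Option 2: force an arbitrary program through it with proxychains,
# after adding "socks5 127.0.0.1 9876" to the [ProxyList] section of /etc/proxychains.conf.
proxychains ./executable_that_needs_vpn

# Kill the tunnel when done.
pkill -f 'ssh -f -N -D 9876'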
I am not able to successfully bind and secure the RethinkDB HTTP admin client: it is either exposed to the whole network or refuses connections behind the proxy.
I am thus left with no choice but to restart the rdb daemon with bind-http=all each time I want to access it...
Rdb starts with systemctl under Arch Linux. Three configurations I tried:
# /etc/rethinkdb/instances.d/mydb.conf
bind-http=localhost #(1)
bind-http=127.0.0.1 #(2)
bind-http=1.2.3.4 #(3)
Resulting in:
1. Fails to parse 'localhost'
2. Refuses connections behind the proxy
3. Equivalent to bind-http=all
Firefox 59 uses a SOCKS proxy, which works OK, as the browser's IP address does become 1.2.3.4:
$ ssh -TND 8080 user@1.2.3.4
I am quite convinced that I had secured the HTTP client as expected, and that the problems started after I updated both FF and rdb (FF59 fails to parse 'localhost' as well, for example).
I don't know if this is a bug or a feature or if I am missing something; any help is most welcome. Many thanks.
Beware of the "localhost" string.
Configuring the rethinkdb server with:
#/etc/rethinkdb/instances.d/mydb.conf
bind-http=127.0.0.1
http-port=8084
and binding some local port with SSH:
[client]$ ssh -L 8080:127.0.0.1:8084 server
is enough to access the web interface at 127.0.0.1:8080, as suggested by @jishi.
Configuring the browser to use a SOCKS proxy as per the rdb docs is not at all necessary.
For some reason localhost:8080 is not understood by FF59 (it gets invisibly prefixed with www or something).
We are attempting to connect to a WebDAV server using net use over SSL. On some servers we're seeing an issue in which this connection only succeeds if we specify port 443 in the URL.
Does Map
net use * "https://example.com:443/folder"
net use * "\\example.com#SSL#443\folder"
and, bizarrely, so does this:
net use * "\\example.com#SSLasdf\folder"
Does Not Map
net use * "https://example.com/folder"
net use * "\\example.com#SSL\folder"
In the non-working cases we consistently receive the following error:
System error 67 has occurred.
The network name cannot be found.
We have noticed some things that might be useful information:
We have a test server that's configured the same way as the prod server and it works as expected.
In the non-working cases, no incoming requests are ever seen at the prod server from the failing host.
All clients are based on the same image.
The problem does not manifest uniformly on all clients -- some work, some don't.
There is an existing, valid entry for example.com in the client DNS cache.
Flushing the client DNS cache of the affected servers does not resolve the problem.
Once the problem appears, it seems to stick. That is, if I execute one of the working mappings, delete it, and then immediately execute one of the non-working mappings, the problem persists.
We are utterly stumped. Any theories?
You are seeing different behaviors because you are connecting using different names. Once a name has been attempted and failed, the WebClient (this is the service that enables WebDAV) will cache the response for a period. To clear the cache, locate the WebClient service in the Services console and restart it. Or from an administrative command prompt execute the following command:
net.exe stop webclient && net.exe start webclient
We ultimately determined that we were misinterpreting the System Error 67 that net use was returning. We discovered two interesting things:
In the event that the WebDAV server returns a 404 or a 50x on the initial, root-folder PROPFIND, net use will (rightly) interpret this as the root folder being unavailable. The fact that it says the network name could not be found led us to believe that the problem was with name resolution, but it was really just saying, 'hey, I couldn't find anything at this path.'
If net use fails due to a 404/50x, it appears that for a brief period it will automatically fail any additional mappings for that same host without issuing a request. For example, if net use http://me.com/foo returns a 404, then net use http://me.com/bar will instantly fail if made in rapid succession after that first call, and no request record will be seen in the WebDAV server logs.
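One way to confirm the root-folder response independently of net use and the WebClient cache is to issue the PROPFIND yourself, e.g. with curl (example.com stands in for the real host):
curl -i -X PROPFIND -H "Depth: 0" "https://example.com/folder"
A 404 or 50x here would explain the System Error 67 without any name-resolution problem being involved.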
My best guess is that appending the #443 port didn't make any real difference by itself. What it perhaps did do was trick net use into thinking it was talking to a different host, at least for the purposes of its 'auto-fail' caching. But that's just a guess.
My aim is to do some automated testing over HTTP and HTTPS/SSL, via Rack, without recourse to a proxy server setup or anything like that. I have a gem that I want to test, and I'd like others to be able to run the tests too, so I'd like it to be as self-contained as possible.
The code for App runs fine on its own, so it's not included here; the problem is with the Rack part.
I'd like to do something like this:
app = Rack::Builder.app do
  map "/" do
    Rack::Handler::WEBrick.run App, Port: 3000
  end
  map "/ssl" do
    Rack::Handler::WEBrick.run App, Port: 3001 # more options for SSL here...
  end
end
run app
I've tried several combinations of the code above, like:
http = Rack::Builder.app do
  map "/" do
    run App
  end
end

https = Rack::Builder.app do
  map "/ssl" do
    run App
  end
end

Rack::Handler::WEBrick.run http, Port: 3000
Rack::Handler::WEBrick.run https, Port: 3001 # more options for SSL here...
With the two-server setup I tend to get one server running on the first port listed; then, on interrupt, the second server starts on the next port listed, and on the next interrupt either another server starts on 9292 or everything shuts down.
I'm obviously doing something not quite right.
This is quite close, but it ends up running the two servers via two separate command-line invocations:
Starting thin server on different ports
Any help is much appreciated.
Current Thin does not support this -- I checked the source code.
Thin v2's config code looks like it supports this via declaring multiple listeners in the config file, but Thin v2 is still pre-release alpha software.
You can also switch to another server, such as Unicorn, which does explicitly support binding to multiple ports or addresses.
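For example, a minimal Unicorn config can declare several listeners (a sketch; note that Unicorn does not terminate TLS itself, so the HTTPS side would still need an SSL-terminating proxy such as nginx in front of one of the ports):
# config/unicorn.rb
listen 3000 # plain HTTP
listen 3001 # second listener; put an SSL-terminating proxy in front for HTTPS
worker_processes 2
Started with: unicorn -c config/unicorn.rb config.ru
Alternatively, staying with WEBrick, one process can serve both ports by running each blocking handler in its own thread. This sketch uses WEBrick's built-in self-signed certificate options (SSLEnable, SSLCertName), so it is only suitable for testing:
require 'rack'
require 'webrick'
require 'webrick/https' # pulls in OpenSSL and the SSL server options

App = lambda { |env| [200, { 'Content-Type' => 'text/plain' }, ['ok']] }

# Each WEBrick.run call blocks, so give each listener its own thread.
http = Thread.new { Rack::Handler::WEBrick.run App, Port: 3000 }
https = Thread.new do
  Rack::Handler::WEBrick.run App, Port: 3001,
    SSLEnable: true, SSLCertName: [['CN', 'localhost']] # auto-generates a self-signed cert
end
[http, https].each(&:join)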
I'm using the bert-rpc gem in Ruby 1.9.3 to make calls to an Ernie server that is not on my local network:
BERTRPC::Service.new("www.someurl.com", 9998)
Now I want that connection to be secured via SSH. I was thinking about using a local Unix socket, but that would mean opening up the bert-rpc gem code and replacing the TCPSocket calls with UnixSocket calls. Isn't there another way?
Isn't it possible to just forward localhost port 9998 to www.someurl.com port 9998, so I can do this:
BERTRPC::Service.new("localhost", 9998)
I've tried the local-to-remote net/ssh examples, but I can't really wrap my head around them, and I can't find any good documentation. Would anybody be so kind as to show me an example of how to do the port forwarding?
Thanks
The solution to this was pretty simple. Create an SSH gateway:
gateway = Net::SSH::Gateway.new('www.someurl.com', 'myuser', :password => "somepass")
gateway.open('www.someurl.com', 9998, 9998)
This routes localhost:9998 to www.someurl.com:9998. This WILL NOT work on Heroku, as Heroku doesn't allow binding to ports other than the assigned $PORT.
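For reference, the same local-to-remote forward can also be written with plain net/ssh (a sketch reusing the placeholder host and credentials from above):
require 'net/ssh'

Net::SSH.start('www.someurl.com', 'myuser', :password => 'somepass') do |ssh|
  # Forward local port 9998 to port 9998 as seen from the remote host.
  ssh.forward.local(9998, 'localhost', 9998)
  ssh.loop { true } # keep the session, and with it the forward, alive
end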
Does anyone have an idea on how to make this work on Heroku with a Unix Socket in /tmp?