Proxychains routes differently with and without sudo - proxy

I am seeing some very strange behavior from proxychains that I am not sure how to troubleshoot. When I access a box, I ssh in with -R to open a reverse tunnel back out to my local machine:
ssh -R 1234 user@host
On this remote host, I have proxychains configured as follows (/etc/proxychains4.conf)
strict_chain
proxy_dns
remote_dns_subnet 224
tcp_read_time_out 15000
tcp_connect_time_out 8000
socks4 127.0.0.1 1234
I have two machines, BadHost and GoodHost. I am using the exact same tunneling technique on both of these hosts. On GoodHost, everything works as expected. Proxychains on the remote host sends traffic to port 1234, which is carried by ssh back to my local machine, where it reaches out to the internet.
$ proxychains curl www.example.com
[proxychains] config file found: /etc/proxychains4.conf
[proxychains] preloading /usr/lib/x86_64-linux-gnu/libproxychains.so.4
[proxychains] DLL init: proxychains-ng 4.14
[proxychains] Strict chain ... 127.0.0.1:1234 ... www.example.com:80 ... OK
<!doctype html>
<html>
...
On BadHost, I get the following
$ proxychains curl www.example.com
[proxychains] config file found: /etc/proxychains4.conf
[proxychains] preloading /usr/lib/x86_64-linux-gnu/libproxychains.so.4
[proxychains] DLL init: proxychains-ng 4.14
[proxychains] Strict chain ... 127.0.0.1:1234 ... 159.x.x.7:8080 <--socket error or timeout!
curl: (7) Couldn't connect to server
On both hosts, nslookup returns the same results
$ nslookup www.example.com
Server: 159.x.x.156
Address: 159.x.x.156#53
Non-authoritative answer:
Name: www.example.com
Address: 93.184.216.34
Name: www.example.com
Address: 2606:2800:220:1:248:1893:25c8:1946
It especially confuses me that using sudo seems to solve the problem on BadHost:
$ sudo proxychains curl www.example.com
[proxychains] config file found: /etc/proxychains4.conf
[proxychains] preloading /usr/lib/x86_64-linux-gnu/libproxychains.so.4
[proxychains] DLL init: proxychains-ng 4.14
[proxychains] Strict chain ... 127.0.0.1:1234 ... www.example.com:80 ... OK
<!doctype html>
<html>
...
Both proxychains ... and sudo proxychains ... on BadHost are using the same configuration file, as shown in my outputs...
Why is proxychains routing to this unknown IP on the subnet over port 8080?
159.x.x.7:8080
How can I troubleshoot what is happening?
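One thing worth checking (a troubleshooting sketch, not a confirmed diagnosis): proxychains only intercepts connections, while curl independently honours the standard http_proxy/HTTPS_PROXY environment variables, and sudo strips most environment variables by default (env_reset). A stray proxy variable in the user's environment, absent under sudo, would explain an unexpected first hop like 159.x.x.7:8080:

```shell
# List any proxy-related variables in the interactive shell's environment.
# curl honours http_proxy/HTTPS_PROXY on its own, before proxychains ever
# intercepts the connection, so a stray variable here would explain an
# unexpected first hop.
env | grep -i proxy || echo "no proxy variables set"

# Compare with the environment sudo passes along (env_reset usually strips
# these variables, which could explain why "sudo proxychains curl" works):
#   sudo env | grep -i proxy
```

If the two runs differ, unsetting the variable (or using `curl --noproxy '*'`) for the non-sudo case would confirm it.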


cURL error 28: Failed to connect to systemb port 80: Timed out for http://systemb/api/push_products [duplicate]

I have found that cURL is not working properly on one domain.
When I run the command curl -I https://www.example.com/sitemap.xml I get
curl: (7) Failed to connect
It fails to connect on every port. The error occurs only on this one domain; all other domains work fine:
curl: (7) Failed to connect to port 80, and 443
Thanks...
First, check your /etc/hosts file entries; the URL you are requesting may be pointing to your localhost.
If the URL is not listed in your /etc/hosts file, run the following command to see how cURL handles that URL:
curl --ipv4 -v "https://example.com/";
After much searching, I found that my hosts settings were not correct.
I checked the hosts file with nano /etc/hosts
and found the domain pointing to the wrong IP.
I changed the wrong IP and it is working fine.
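The hosts-file mismatch above can be confirmed without editing anything: getent resolves through /etc/hosts first (per nsswitch.conf), while dig or nslookup query DNS directly, so comparing the two shows whether a stale hosts entry is overriding DNS. A sketch (the hostname is a placeholder):

```shell
# What curl and most other tools will actually use (files first, then DNS):
getent hosts www.example.com || echo "no local entry, would fall through to DNS"

# What DNS alone says (needs network):
#   dig +short www.example.com
# If the two answers differ, the /etc/hosts entry is the culprit.
```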
This is a new error related to curl: (7) Failed to connect:
curl: (7) Failed to connect
The above error message means that no web server is answering at all: nothing is listening on the specified host and the specified (or implied) port. (So the XML file doesn't have anything to do with it.)
You can download the key with a browser,
then open a terminal in the Downloads directory,
then run sudo apt-key add <key_name>.asc
Mine is a Red Hat Enterprise Linux (RHEL) virtual machine and I was getting something like the following:
curl: (7) Failed to connect to localhost port 80: Connection refused
I stopped the firewall by running the following commands and it started working:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
If the curl is to the outside world, like:
curl www.google.com
I have to restart my cntlm service:
systemctl restart cntlm
If it's within my network:
curl inside.server.local
Then a Docker network is overlapping with my CNTLM proxy, and I just remove all Docker networks to fix it. You could also remove only the last network you created, but I'm lazy:
docker network rm $(docker network ls -q)
And then I can work again.

Why would apache refuse connection to localhost 127.0.0.1 on OSX?

When trying to access sites on my localhost the connection is refused. Two days ago the set up was working without issues with multiple virtual hosts configured. I'm not aware of any changes that could have affected the set up. I spent all day yesterday trying to troubleshoot the issue but have been going around in circles.
OS: OSX 10.11.16
httpd -V returns this:
Server version: Apache/2.4.18 (Unix)
Server built: Feb 20 2016 20:03:19
Server's Module Magic Number: 20120211:52
Server loaded: APR 1.4.8, APR-UTIL 1.5.2
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture: 64-bit
Server MPM: prefork
threaded: no
forked: yes (variable process count)
Server compiled with....
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_FLOCK_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D DYNAMIC_MODULE_LIMIT=256
-D HTTPD_ROOT="/usr"
-D SUEXEC_BIN="/usr/bin/suexec"
-D DEFAULT_PIDLOG="/private/var/run/httpd.pid"
-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="/private/etc/apache2/mime.types"
-D SERVER_CONFIG_FILE="/private/etc/apache2/httpd.conf"
httpd.conf is configured to allow virtual hosts and nothing has changed in httpd-vhosts.conf file.
LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so
...
# Virtual hosts
Include /private/etc/apache2/extra/httpd-vhosts.conf
apachectl configtest returns:
Syntax OK
I've tried running a port scan for 127.0.0.1 and http port 80 does not show. This and the connection being refused makes me think this is where the issue is but I don't know why. The OSX firewall is turned off. I've tried the solution posted here but it did not fix it.
My /etc/hosts file looks like this:
#
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
#
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
127.0.0.1 site.local
127.0.0.1 othersite.local
...
I can ping 127.0.0.1. I previously had homebrew installed to run different PHP versions but I've removed that to try and bring the system back to stock. I really don't know what to try next, any help would be really appreciated.
It happened to me while upgrading PHP. The steps below got me back on track.
Generally, macOS creates a backup before upgrading, so we'll use the pre-update version of httpd.conf:
cd /etc/apache2/
sudo mv httpd.conf httpd.conf-afterupdate
sudo mv httpd.conf.pre-update httpd.conf
sudo apachectl configtest
sudo apachectl restart
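Before swapping the files wholesale, it can help to see what actually differs between the live config and the backup the OS left behind; a sketch (the backup filename follows the steps above; adjust to whatever ls shows on your machine):

```shell
# List any backups macOS left next to the live config:
ls -l /etc/apache2/httpd.conf* 2>/dev/null || true

# Show what the upgrade changed (unified diff):
diff -u /etc/apache2/httpd.conf.pre-update /etc/apache2/httpd.conf 2>/dev/null || true
```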
You could check to ensure you have Listen 80 in your /usr/local/etc/apache2/2.4/httpd.conf configuration file.
After several days of trying to debug this, I resolved it by overwriting my httpd.conf file with an older one that was created when upgrading to OS X El Capitan.
sudo cp httpd.conf~elcapitan httpd.conf
After doing this localhost was accessible again. I don't know what was wrong with my previous httpd.conf file. I'd been through it many times looking for issues and never found any. I even diffed the two files to try and see where the problem was and found no reason why it would fail in the way it was.
Once I had localhost responding again I went through the process of enabling the modules I require and reconfiguring my virtual hosts.
Again, I don't know what was wrong with the other httpd.conf file. Perhaps it was corrupted in some way. Regardless, it was failing silently, with apachectl configtest not reporting any problems.
So if others have a similar issue, it may be worth reverting to an older httpd.conf file. OS X usually creates a backup when upgrades are done.
In my case, DocumentRoot in the httpd.conf file was wrong.
I figured it out after typing httpd in the terminal. Try the same; it may give you some hints to solve the problem.
➜ ~ httpd
AH00526: Syntax error on line 255 of /usr/local/etc/httpd/httpd.conf:
DocumentRoot '/Users/xx/Desktop/sites' is not a directory, or is not readable
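That error is easy to check by hand: httpd needs the DocumentRoot to exist and be readable by the server. A quick sketch (the path is the one from the error message above):

```shell
# Verify a DocumentRoot exists and is readable before digging through httpd.conf:
DOCROOT="/Users/xx/Desktop/sites"
if [ -d "$DOCROOT" ] && [ -r "$DOCROOT" ]; then
    echo "DocumentRoot looks fine"
else
    echo "DocumentRoot is missing or unreadable"
fi
```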

Setting up a chef server - chef-server-ctl commands not working(404 not found)

I'm trying to set up my own Chef server on a hosted VM in a cloud environment. The problem is that whenever I try to execute one of the chef-server-ctl commands, like user-create or user-list, I get the following error:
ERROR: The object you are looking for could not be found
Response: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /users was not found on this server.</p>
<hr>
<address>HTTP_Server at 127.0.0.1 Port 443</address>
</body></html>
I noticed that the <address> tag points to 127.0.0.1, but my server is at another IP.
First I edited /etc/opscode/chef-server.rb and set server_name to the fully-qualified domain name (FQDN); nothing changed.
Then, on /etc/hosts file I had two lines:
127.0.0.1 localhost
999.999.999.999 mydomain.com <- this is the ip I use for ssh
So, following this response, I replaced localhost with mydomain.com, without changing the IP address.
Both hostname and hostname -f output mydomain.com.
Now, when I try to run sudo chef-server-ctl reconfigure I get:
FATAL: SocketError: getaddrinfo: Name or service not known
I don't know what else to try...
127.0.0.1 is localhost, meaning it is always reachable from the machine itself. If you are trying to get Chef Server to listen only on the public interface and not on localhost, I suspect that will break many things and would be unsupported.
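The getaddrinfo failure after the edit suggests that replacing the localhost line broke local name resolution, which the reconfigure step depends on. A hosts layout that keeps both entries (the IP and domain are the placeholders from the question) avoids that:

```
127.0.0.1        localhost
999.999.999.999  mydomain.com
```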

Privoxy/TOR not working with Iceweasel

I installed tor and privoxy on my linux 64-bit box. And uncommented the following line in /etc/privoxy/config file.
forward-socks5 / 127.0.0.1:9050 .
Then I started the services for both. Now, if I run either of the following commands, I get the same IP address, which is not the real IP of the PC, so I conclude both Tor and Privoxy are running.
curl -x 127.0.0.1:8118 curlmyip.com
curl --socks5 127.0.0.1:9050 curlmyip.com
If I use chrome with --proxy-server localhost:8118 switch, I again get the same anonymized IP address.
The problem is, I cannot use the http proxy, localhost 8118, with firefox/iceweasel. I go to Edit -> Preferences -> Advanced -> Network -> Settings and set HTTP and SSL proxies to localhost 8118. Iceweasel says "The proxy server is refusing connections"
Any solutions?
The use of browsers other than Tor Browser is recommended against. The use of Privoxy/Polipo was likewise deprecated by The Tor Project a long time ago. The current advice is to use only Tor Browser, because only Tor Browser gives you a uniform web fingerprint, so you won't stand out.
I encountered a similar error while trying to use a combination of Tor and Privoxy on my home PC.
The OS used was Kali Linux 2.0.
Steps to replicate issue
Installed tor
sudo apt-get install tor
Started Tor relay
tor
Validated that Tor was working:
netstat -atnp | egrep tor
The output showed tor running; great:
tcp 0 0 127.0.0.1:9050 0.0.0.0:* LISTEN 2401/tor
tcp 0 0 192.168.x.x:44278 xx.xxx.xx.xx:443 ESTABLISHED 2401/tor
Installed privoxy
sudo apt-get install privoxy
Modified the default Privoxy config file /etc/privoxy/config as per the instructions here under "How do I use privoxy together with tor" and included the following lines:
forward-socks4a / 127.0.0.1:9050 .
forward 192.168.*.*/ .
forward 10.*.*.*/ .
forward 127.*.*.*/ .
Then started privoxy
privoxy /etc/privoxy/config
Ran the command to check if privoxy was working:
netstat -atnp | egrep privoxy
Output showed that Privoxy was running (notice the tcp6, which is IPv6; I didn't pay attention to that initially, but this was the problem):
tcp6 0 0 ::1:8118 :::* LISTEN 3881/privoxy
Then I set the SSL and HTTP proxies to 127.0.0.1:8118 and got the error "The proxy chosen is refusing connections" when browsing internet sites.
Fix:
On reading the Privoxy config file carefully, the listen-address section gives the following information:
Some operating systems will prefer IPv6 to IPv4 addresses even
if the system has no IPv6 connectivity which is usually not
expected by the user. Some even rely on DNS to resolve
localhost which mean the "localhost" address used may not
actually be local.
**It is therefore recommended to explicitly configure the
intended IP address instead of relying on the operating
system, unless there's a strong reason not to.**
It appears that Kali was preferring to bind to the IPv6 localhost [::1] rather than the IPv4 localhost 127.0.0.1, even though I had no IPv6 connectivity.
So I changed listen-address line from
listen-address localhost:8118
to
listen-address 127.0.0.1:8118
and restarted privoxy...
pkill privoxy # kills all processes with privoxy in their name
privoxy /etc/privoxy/config
I then set the SSL and HTTP proxies to 127.0.0.1:8118 and the SOCKS proxy to 127.0.0.1:9050 (SOCKS 4) in Iceweasel. And voila! I was able to connect to internet sites.
For verification, I ran netstat and nmap, which showed that Privoxy was binding to the IPv4 localhost IP:
> netstat -atnp | grep privoxy
tcp 0 0 127.0.0.1:8118 0.0.0.0:* LISTEN 3934/privoxy
> nmap 127.0.0.1 -p 8118
PORT STATE SERVICE
8118/tcp open privoxy
> nmap -6 localhost -p 8118
PORT STATE SERVICE
8118/tcp closed privoxy
Note:
My /etc/hosts file also has the entry for the localhost:
127.0.0.1 localhost
It works for me. Please try downloading a binary version of Firefox:
ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/17.0.8esr/linux-i686/en-US/
Following your exact instructions with this binary on Gentoo worked for me. I'd surmise that you have an off build of Firefox.

Proxy built with netcat not allowing http basic authentication

I made a simple proxy server using nc; here's the one-liner:
mkfifo queueueue
nc -l 8080 <queueueue | nc $JENKINS_HOSTNAME 80 >queueueue
It listens on port 8080 and then forwards the data to a connection to our Jenkins server. Jenkins is behind a VPN, and the machine I am running this proxy on has VPN access.
On my other machine (no VPN access), I would like to curl the Jenkins server, here's the command to initiate the request through the proxy:
http_proxy=10.1.10.10:8080 curl --user $JENKINS_USERNAME:$JENKINS_PASSWORD http://$JENKINS_HOSTNAME/api/json
Both the client and the proxy machine are on the same network; I can ping and ssh between them. I also know that the client is connecting to the proxy server. I think the failure arises when the client tries to authenticate. Here's the output when I try to curl:
$ http_proxy=10.1.10.10:8080 curl --user $JENKINS_USERNAME:$JENKINS_PASSWORD http://$JENKINS_HOSTNAME/api/json
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
<hr>
<address>Apache Server at $JENKINS_HOSTNAME Port 80</address>
</body></html>
How can I curl through a proxy like this with HTTP Basic Authentication?
I would use ssh for this instead of netcat.
To get some confusion out of the way, I will refer to the node with VPN access as the "server" and the node without VPN access as the "client".
On the server side you should only need an ssh server installed and running (in my test I have OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011).
On the client side you will need to do the following:
1) In your /etc/hosts file, add the address that your target URL resolves to on the server. I wasn't able to get curl to run DNS lookups through the proxy, which is why this is necessary.
2) Set up ssh keys between the server and the client. While this is not necessary, it makes life easier.
3) Run the following ssh command to have ssh act as a SOCKS proxy:
user@host$ ssh -vND 9999 <server>
-v is there so you can see what is going on with ssh,
-N tells ssh not to execute a remote command, which is useful for simple port forwarding,
-D is what actually forwards your local requests through the server
4) Now you should be able to run the curl command you have above, but add in:
--socks5 localhost:9999
Your full command will look like this:
curl --user $USER:$PASSWORD --socks5 localhost:9999 http://$JENKINS/api/json
If I can figure out how to forward the DNS requests from curl through ssh I'll update the ticket.
edit: formatting, awful grammar.
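On the DNS point: curl can delegate name resolution to the SOCKS proxy itself with --socks5-hostname (in place of --socks5), which would make the /etc/hosts entry from step 1 unnecessary. A sketch, assuming the same tunnel and variables as above:

```shell
# Check that the local curl supports proxy-side name resolution:
curl --help all 2>/dev/null | grep socks5-hostname

# Then the request from step 4 becomes (name resolved on the server end):
#   curl --user "$JENKINS_USERNAME:$JENKINS_PASSWORD" \
#        --socks5-hostname localhost:9999 "http://$JENKINS_HOSTNAME/api/json"
```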