I am trying to set up a proxy server on Amazon EC2 (Singapore), but I always get a "302 Moved Temporarily" response when I run "squidclient http://www.google.com" on the EC2 instance, and I cannot reach any site through that proxy from my local desktop either. I am new to Squid. Can anyone help me figure out what is wrong?
My EC2 instance is a t2.micro running Ubuntu Server 14.04 LTS. I created a test security group with all inbound/outbound traffic enabled from/to anywhere. The Squid version is 3.3.8.
The squid.conf is shown below; it is essentially the default one, except that I commented out all the deny lines and allowed all http_access.
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
#http_access deny !Safe_ports
#http_access deny CONNECT !SSL_ports
#http_access allow localhost manager
#http_access deny manager
#http_access allow localhost
http_access allow all
http_port 3128
cache_dir ufs /var/spool/squid3 100 16 256
coredump_dir /var/spool/squid3
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
refresh_pattern . 0 20% 4320
The output from "squidclient http://www.google.com" is:
HTTP/1.1 302 Moved Temporarily
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Location: http://www.google.com.sg/?gfe_rd=cr&ei=AM2WVORc0YagA8zFgoAI
Content-Length: 260
Date: Sun, 21 Dec 2014 13:37:04 GMT
Server: GFE/2.0
Alternate-Protocol: 80:quic,p=0.02
X-Cache: MISS from ip-172-31-15-144
X-Cache-Lookup: MISS from ip-172-31-15-144:3128
Via: 1.1 ip-172-31-15-144 (squid/3.3.8)
Connection: close
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.com.sg/?gfe_rd=cr&ei=AM2WVORc0YagA8zFgoAI">here</A>.
</BODY></HTML>
The squidclient result looks reasonable to me. Your EC2 instance is located in Singapore, and Google is redirecting you to www.google.com.sg.
Note the output from squidclient:
<A HREF="http://www.google.com.sg/?gfe_rd=cr&ei=AM2WVORc0YagA8zFgoAI">here</A>
The reason you can't reach any site via the Squid proxy from your local desktop is that you have to explicitly allow your desktop's public IP in Squid's config, like below:
acl home_network src <your desktop public IP>
http_access allow home_network
Then restart Squid.
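On Ubuntu 14.04 the stock package installs the service as squid3, so the restart would look something like this (a sketch; adjust the service name if your package differs):
sudo service squid3 restart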
Finally, consider configuring Squid to listen on a higher port number instead of 3128; more detail is in "How can I use Squid from the internet".
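For example (port 8080 is an arbitrary choice here; it must also be opened in the security group):
http_port 8080
From the desktop you can then verify with squidclient -h <your-ec2-public-ip> -p 8080 http://www.google.com/.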
I work on a project where there are multiple Nexus registries behind different proxies.
How can I make sure that Gradle (or any repository-related tool, such as npm, Maven, etc.) can handle three or more different proxies at the same time to reach multiple Nexus instances?
Until now, we were using a workaround: one Nexus was accessed through an HTTP proxy and one through an HTTPS proxy. But now we have three proxies to handle!
I think it must be possible to add a machine (a Squid instance?) that would redirect proxy requests to the correct proxy, based on the domain name.
I'm not familiar with Squid and I still haven't managed to achieve this. Can anyone confirm whether this is possible using Squid? Does anyone have another solution to suggest?
For background, this network setup is due to multiple partner companies being involved in the project. We have access to each company's Nexus through dedicated VPNs and proxies.
OK, so I managed to run Squid in a Docker container with the following config:
acl host src 10.0.0.0/8
acl host src 172.16.0.0/12 # RFC 1918 range (covers Docker's default bridge networks)
http_access allow host
maximum_object_size 256 MB
maximum_object_size_in_memory 256 MB
dns_nameservers 8.8.8.8 8.8.4.4
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny all
http_port 3128
cache_peer 10.1.1.1 parent 8080 0 proxy-only no-query name=proxy1
cache_peer 10.2.1.1 parent 8080 0 proxy-only no-query name=proxy2
cache_peer 10.3.1.1 parent 8080 0 proxy-only no-query name=proxy3
acl sites_proxy1 dstdomain .domain1.com
acl sites_proxy2 dstdomain .domain2.com
acl sites_proxy3 dstdomain .domain3.com
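# each parent may only be used for its own partner's domains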
cache_peer_access proxy1 deny !sites_proxy1
cache_peer_access proxy2 deny !sites_proxy2
cache_peer_access proxy3 deny !sites_proxy3
coredump_dir /var/spool/squid3
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
refresh_pattern . 0 20% 4320
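# never connect directly; always go through one of the parents above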
never_direct allow all
Then I run the Docker container with this command:
docker run --rm --volume ~/DevTools/squid/squid.conf:/etc/squid/squid.conf -v ~/DevTools/squid/logs:/var/log/squid -p 3128:3128 datadog/squid
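With that in place, every build tool only has to know about the single Squid instance. A quick way to check the domain-based routing (the hostname below is a stand-in for one of the real partner Nexus domains):
curl -v -x http://localhost:3128 http://nexus.domain1.com/
Gradle, npm, Maven, etc. can then all be pointed at localhost:3128 through their usual proxy settings (for Gradle, the systemProp.http.proxyHost and systemProp.http.proxyPort properties in gradle.properties).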
I am configuring Squid as a proxy to forward requests, and I intend to allow requests only to example.com. This is the request as made from my browser: http://111.222.333.444/http://example.com, where 111.222.333.444 is the IP of my proxy server, where Squid is installed.
But I am getting an "Invalid URL" error while the server is trying to fetch /http://example.com (note the leading slash); here is the access log record:
1505858815.396 0 12.34.56.78 NONE/400 3687 GET /http://example.com - NONE/- text/html
Here is the configuration that I am using:
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src all # Allow access from everywhere
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
acl safe_url url_regex example\.com.*
http_access allow safe_url
http_access deny all
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 80
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
What am I doing wrong?
Got it! Squid is a proxy, so the browser has to make a proxy-style connection to the server. In Firefox this is done by configuring a proxy in the connection settings and entering the proxy server's IP and port number. Once that is done, URLs can be used directly; in this case, enter this in the navigation bar: http://example.com
More on how to set up a proxy in Firefox and Chrome.
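For a quick check without touching browser settings, a command-line client can make the proxy-style request explicitly (using the question's proxy IP and the configured http_port 80):
curl -x http://111.222.333.444:80 http://example.com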
I have a product from Symantec, and their help is... less than helpful, including a nice message that says "Contact your reseller" in the "Contact Us" link. My reseller says to contact them. How? Anyway, it's a repackaged version of Squid for Windows. When I point IE to the proxy running locally, I get "Access Denied. Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect." However, when I point IE on another machine to the server running Squid, everything works fine.
I have zero experience with Squid or proxies. I tried some different configs based on searches here but nothing worked. I'm sure it's something simple. Here is the config:
digest_generation off
hierarchy_stoplist cgi-bin ?
acl all src 0.0.0.0/0.0.0.0
cache deny all
maximum_object_size 0 KB
emulate_httpd_log on
debug_options ALL,1
cache_store_log none
access_log none
useragent_log none
auth_param ntlm program c:/clientsiteproxy/libexec/mswin_ntlm_auth.exe
auth_param ntlm children 80
auth_param ntlm keep_alive on
auth_param negotiate children 80
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
authenticate_ip_shortcircuit_ttl 30 seconds
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
read_timeout 15 minutes
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Smartconnect dstdomain ned.webscanningservice.com
acl CONNECT method CONNECT
acl authproxy proxy_auth REQUIRED
acl our_networks src 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16
acl HEAD method HEAD
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow HEAD
http_access deny !our_networks
http_access allow Smartconnect
http_access allow authproxy
http_access deny all
icp_access allow all
httpd_suppress_version_string on
visible_hostname ClientSiteProxy
forwarded_for off
header_access Via deny all
never_direct allow all
cache_dir null c:/ClientSiteProxy
coredump_dir c:/clientsiteproxy/var/cache
http_port 3128
This is most likely the culprit: http_access deny !our_networks. This statement denies proxy access to all source IPs apart from 192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8, and 169.254.0.0/16.
When you browse from the same machine, the browser connects from the loopback address, so you can try expanding the our_networks definition with 127.0.0.1.
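A minimal sketch of that change (the question's ACL with loopback appended; restart the Squid service afterwards so it takes effect):
acl our_networks src 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16 127.0.0.1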
I have managed to get my HTTP proxy connection to work, but on most HTTPS connections I get the error:
Connection to failed.
The system returned: (111) Connection refused
Here is the config file I am currently using in Squid:
http_port 3128
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC 1918 possible internal network
acl myhost src <myip>
http_access allow myhost
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
via off
forwarded_for off
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
dns_nameservers 8.8.8.8 8.8.4.4
This is running on an Ubuntu 12.04 VPS; I connect remotely via the browser's proxy settings.
All I want is to be able to connect to my server and browse HTTP and HTTPS sites. HTTP works with the config above; HTTPS does not.
Remove the following line:
http_access deny CONNECT !SSL_ports
I would recommend keeping
http_access deny CONNECT !SSL_ports
but explicitly allowing the listed ports by adding:
http_access allow SSL_ports
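Together with the question's ACLs, the relevant block would look something like this (a sketch; it keeps the deny so CONNECT to arbitrary ports stays blocked, while requests to the ports in SSL_ports are allowed):
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
http_access allow SSL_ports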
I would like to force a Squid proxy to use a cache peer for everything related to YouTube.
My main Squid server is on server A; it is a really simple setup with nearly no rules, and everything is authorized.
Server A is linked to server B via OpenVPN. Server B also has a Squid proxy set up, and it works well (tested with an SSH tunnel, no problem).
On server A I have added these rules:
acl youtube_videos_regx url_regex -i ^http://[^/]+\.youtube\.com/videoplayback\?
acl youtube_videos_regx url_regex ^http://(.*?)/get_video\?
acl youtube_videos_regx url_regex ^http://(.*?)/videodownload\?
acl youtube_videos_regx url_regex ^http://(.*?)/videoplayback\?
acl youtube_videos dstdomain .youtube.com
acl youtube_videos dstdomain .youtube-nocookie.com
acl youtube_videos dstdomain .googlevideo.com
acl youtube_videos dstdomain .ytimg.com
cache_peer 10.4.0.1 parent 3128 0 proxy-only no-query connect-timeout=5
cache_peer_access 10.4.0.1 allow youtube_videos
cache_peer_access 10.4.0.1 allow youtube_videos_regx
cache_peer_access 10.4.0.1 deny all
But this doesn't seem to work:
1383861430.377 578 192.168.0.103 TCP_MISS/200 192976 GET http://r9---sn-5hn7ym7e.googlevideo.com/videoplayback? - HIER_DIRECT/208.117.250.14 application/octet-stream
1383861430.636 935 192.168.0.103 TCP_MISS/200 238032 GET http://r9---sn-5hn7ym7e.googlevideo.com/videoplayback? - HIER_DIRECT/208.117.250.14 application/octet-stream
1383861430.642 2353 192.168.0.103 TCP_MISS/200 238032 GET http://r9---sn-5hn7ym7e.googlevideo.com/videoplayback? - HIER_DIRECT/208.117.250.14 application/octet-stream
1383861432.467 617 192.168.0.103 TCP_MISS/200 192976 GET http://r9---sn-5hn7ym7e.googlevideo.com/videoplayback? - HIER_DIRECT/208.117.250.14 application/octet-stream
Sometimes it works:
1383860987.725 125 192.168.0.103 TCP_MISS/204 353 GET http://r20---sn-5hn7ym7r.googlevideo.com/generate_204 - FIRSTUP_PARENT/10.4.0.1 text/html
Could it be because of the content type (application/octet-stream)?
If so, I don't know what kind of rule to add.
Thanks in advance.
OK, I found out how to solve my problem.
I just have to add this to my config file:
never_direct allow youtube_videos
never_direct allow youtube_videos_regx
These two lines force Squid not to use a direct connection for requests matching those two ACLs. Without them, the parent is only optional, which is why the log above shows HIER_DIRECT entries instead of FIRSTUP_PARENT.
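Putting it together, the relevant part of server A's config looks like this (a sketch using the peer address and dstdomain ACL from the question; the url_regex ACLs work the same way):
acl youtube_videos dstdomain .youtube.com .youtube-nocookie.com .googlevideo.com .ytimg.com
cache_peer 10.4.0.1 parent 3128 0 proxy-only no-query connect-timeout=5
cache_peer_access 10.4.0.1 allow youtube_videos
cache_peer_access 10.4.0.1 deny all
# force matching requests through the parent instead of going direct
never_direct allow youtube_videos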