Can't pass the Tor and Privoxy page test on CentOS 7

I have successfully started Tor and Privoxy, but when I ran the page test, it always said "Privoxy is not being used".
I followed the answer to Question 4.10, "How do I use Privoxy together with Tor?", on this page, but it failed.
I'm working on CentOS 7 and used wget to fetch the test page http://config.privoxy.org/show-status .
Any help would be really appreciated!
This is what I typed on the command line:
(myapp)[hadoop@kaiyuandao myapp]$ sudo service privoxy start
/etc/init.d/privoxy: line 97: kill: (24849) - No such process
Starting Privoxy, OK.
(myapp)[hadoop@kaiyuandao myapp]$ sudo service tor start
Starting tor...done.
(myapp)[hadoop@kaiyuandao myapp]$ wget http://config.privoxy.org/show-status
--2015-09-08 05:43:08-- http://config.privoxy.org/show-status
Resolving config.privoxy.org (config.privoxy.org)... 198.199.92.59, 162.243.226.87
Connecting to config.privoxy.org (config.privoxy.org)|198.199.92.59|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://privoxy.org/config/show-status [following]
--2015-09-08 05:43:08-- http://privoxy.org/config/show-status
Resolving privoxy.org (privoxy.org)... 216.34.181.97
Connecting to privoxy.org (privoxy.org)|216.34.181.97|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://www.privoxy.org/config/ [following]
--2015-09-08 05:43:09-- http://www.privoxy.org/config/
Resolving www.privoxy.org (www.privoxy.org)... 216.34.181.97
Reusing existing connection to privoxy.org:80.
HTTP request sent, awaiting response... 200 OK
Length: 3832 (3.7K) [text/html]
Saving to: 'show-status.1'
100%[=======================================================================================================>] 3,832 --.-K/s in 0s
2015-09-08 05:43:09 (82.2 MB/s) - 'show-status.1' saved [3832/3832]
(myapp)[hadoop@kaiyuandao myapp]$ vi show-status
And this is the content I got from the test page:
Privoxy is not being used
The fact that you are reading this page shows that Privoxy was not used in the process of accessing it. Had the request been made through Privoxy, it would have been intercepted and you would be looking at Privoxy's web-based user interface now.
So what went wrong? Chances are (in this order) that:
- This page is in your browser's cache. You've been here once before starting to use Privoxy, and now your browser thinks that it already knows the content of this page, hence it doesn't request a fresh copy. Force your browser to do that: with most browsers, clicking "reload" while holding down the shift key (shift-reloading) should suffice, but you might need to manually clear the browser's cache (both memory and disk cache).
- Your browser is not set up to use Privoxy. Check your browser's proxy settings and make sure that it uses 127.0.0.1, port 8118 (or, if you did a custom configuration, whatever different values you used).
- When using multiple proxies in a chain, either the chain is broken at some point before Privoxy, or an earlier proxy serves this page from its cache. Shift-reload, clear all caches, and if the problem still persists, trace the proxy chain starting with your browser's settings. Please refer to the forwarding chapter of the user manual for details.
Until version 2.9.13, Privoxy was also known as Internet Junkbuster. If you recently upgraded, then the web-based interface has moved - it is now at http://config.privoxy.org/ (Short form: p.p [Privoxy Proxy]).
If you have read the user manual and still have trouble, feel free to submit a support request to get help.

My main problem was that I forgot to set the HTTP proxy.
Since I use wget to fetch pages, I changed /etc/wgetrc and set http_proxy=127.0.0.1.
FYI
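For reference, a minimal /etc/wgetrc sketch; it assumes Privoxy is listening on its default address, 127.0.0.1 port 8118 (the port mentioned in the test page above):

# /etc/wgetrc - send wget's traffic through Privoxy
use_proxy = on
http_proxy = http://127.0.0.1:8118/
https_proxy = http://127.0.0.1:8118/

With this in place, wget http://config.privoxy.org/show-status should return Privoxy's web-based interface instead of the "Privoxy is not being used" page.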

Related

Typo3 behind Proxy

I'm trying to get a TYPO3 (6.2) instance running behind a (forwarding) proxy (Squid). I have set

'HTTP' => array(
    'adapter' => 'curl',
    'proxy_host' => 'my.local.proxy.ip',
    'proxy_port' => '8080',
)

as well as

'SYS' => array(
    'curlProxyServer' => 'http://my.local.proxy.ip:8080',
    'curlUse' => '1'
)
The proxy doesn't ask for credentials.
When I try to update the extension list, I get the error message
Update Extension List
Could not access remote resource http://repositories.typo3.org/mirrors.xml.gz.
If I try Get preconfigured distribution, it says
1342635425
Could not access remote resource http://repositories.typo3.org/mirrors.xml.gz.
According to the proxy log, the server doesn't even try to connect to the proxy.
I can easily download the file using wget on the command line.
OK, I've investigated the issue a bit more, and from what I can tell, TYPO3 doesn't even try to connect anywhere.
I used tcpdump and Wireshark to analyze the network traffic. The site claims to have tried sending an HTTP request to repositories.typo3.org, so I'd expect to find either a proxy connection attempt or a DNS query followed by an attempt to connect to that IP. (Of course, the latter is known not to work.) However, none of this happens.
I've tried some slight changes in the variable curlProxyServer. The documentation clearly states
String: Proxyserver as http://proxy:port/. Deprecated since 4.6 - will be removed in TYPO3 CMS 7. See below for http options.
So I tried adding the trailing "/" and removing the "http://" - no change. I'm confident there's no problem whatsoever with the proxy itself, as the proxy isn't even contacted and has been working perfectly fine for everything else for years.
The error message comes from \TYPO3\CMS\Extensionmanager\Utility\Repository\Helper::fetchFile(). This one uses \TYPO3\CMS\Core\Utility\GeneralUtility::getUrl() to get the actual file content.
According to your settings, it should use the first part of the function, because curlUse is set and the URL starts with http or https.
So what you need to do now is add some debug lines to the code and check at what point the request goes wrong.
Looking at the source code, three possibilities come to mind:
- The curl proxy parameter does not support a scheme, thus it should be 'curlProxyServer' => 'my.local.proxy.ip:8080', (see the sketch below).
- Some redirect does not work.
- Your proxy has problems with https, because the TYPO3 TER should be queried over https.
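A minimal sketch of the first possibility, applied to typo3conf/LocalConfiguration.php; it assumes your TYPO3 6.2 install still honors the deprecated curl options alongside the HTTP settings:

'HTTP' => array(
    'adapter' => 'curl',
    'proxy_host' => 'my.local.proxy.ip',
    'proxy_port' => '8080',
),
'SYS' => array(
    // host:port only - no scheme, no trailing slash
    'curlProxyServer' => 'my.local.proxy.ip:8080',
    'curlUse' => '1',
),

If that changes nothing, the debug lines suggested above (in Helper::fetchFile() and GeneralUtility::getUrl()) are the next step.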

Warning status issue in JMeter result table

I have to do load testing for a web-based application. I am getting a Warning status in the results table. My request contains the URL, with the path set to /. I have passed the username and password in the parameters section. Even after many tries, it still shows a Warning status.
I have also tried using the proxy server address, port, username and password... still no luck.
Please help me with this.
If you get a Warning status, this means that JMeter detected a response code > 399.
There can be a lot of reasons for this, for example:
- Wrong URL: 404
- Error: 500
To get more details, add a View Results Tree listener and inspect all tabs to see:
- Request: what you are sending (headers / cookies / body)
- Response: what you are getting (headers / cookies / body)
Then fix your HTTP request by comparing the request made by the browser with the one you have built (a curl sketch for this follows below).
Alternatively, use JMeter's recording feature.
To see all the ways to debug a script, have a look at this book, whose sample chapter explains a lot of them.
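One quick way to capture what the browser-equivalent request looks like outside JMeter is curl's verbose mode; the URL and credentials below are placeholders:

# show full request and response headers for comparison with JMeter's View Results Tree
curl -v -u username:password http://example.com/

Comparing this output with the Request tab usually reveals the missing header, cookie, or parameter.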
If you get a Warning status, that may mean the URL you entered contains an https:// part while the given URL actually works over the plain HTTP protocol.
Remove the https:// part from:
- HTTP Request Defaults
- HTTP Request
So replace the https:// part with www and try again.

Is it possible to do cache busting with HTTP/2?

Has anybody tried?
Here is the use case. In a first request-response cycle, this would happen:
Request 1:
GET / HTTP/1.1
...
Response 1
HTTP/1.0 200 OK
Etag: version1
Cache-control: max-age=1
... angly html here
....<link href="mycss.css" >
...
Request 2:
GET /mycss.css HTTP/1.1
...
Response 2 (probably pushed):
Etag: version1
Cache-control: max-age=<duration-of-the-universe>
...
... brackety css ...
...
and then, when the browser goes to the same page a second time, it will of course fetch the "/" resource again because of the very short max-age:
GET / HTTP/1.1
...
If-None-Match: version1
But it won't fetch mycss.css if it has it in cache. However, the server can use the validator present in the If-None-Match header of the request for "/" to get an idea of the client's cache age, and may conclude that the browser's version of mycss.css is too old. In that case, before even answering the previous request, the server can "promise" (push) a new version of mycss.css.
By the specs, should the browser accept and use it?
Overview:
I still don't know what the answer to my question is from a purely theoretical standpoint, but at least today it doesn't seem possible in practice to do cache busting this way :-(, with neither Google Chrome nor Firefox. Both reject or ignore the pushed stream if they believe that the resource they have in cache is fresh.
I also got this from somebody who prefers to remain anonymous:
Browsers will typically put resources received through push in a
"demilitarized zone" and only once the client asks for that resource
it will be moved into the actual cache. So just pushing random
things will not make them end up in the browser cache even if the
browser accepts them at the push moment.
Update
As of early 2016, it is still not possible, mainly due to a lack of consensus on how this should be handled, and on whether it should be allowed at all.
As this page shows, even with HTTP/2, the way to solve the stale-assets issue is to create a unique URL for each asset version, and then ensure that the user receives that new URL when they revisit the page.
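For illustration, the unique-URL approach expressed in the same request/response style as above (the versioned file name is hypothetical):

Request:
GET /mycss.v2.css HTTP/1.1
...
Response:
HTTP/1.1 200 OK
Etag: version2
Cache-control: max-age=<duration-of-the-universe>
... brackety css ...

The page at "/" keeps its very short max-age, and every redeploy changes the asset URL it references, so the browser's immortal cached copy of the old URL simply stops being requested.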

Nginx will not stop rewriting

I am attempting to configure an ownCloud server that rewrites all incoming requests and ships them back out at the exact same domain and request URI, but changes the scheme from http to https.
This failed miserably. I tried:
redirect 301 https://$hostname/$request_uri
and
rewrite ^ https://$hostname/$request_uri
Anyway, after removing that, just to make sure the basic nginx configuration would work as it had prior to adding the SSL redirects/rewrites, it will NOT stop changing the scheme to https.
Is there a cached list somewhere in nginx's configuration that keeps hold of redirect/rewrite protocols? I cleared my browser cache completely and it will not stop.
AH HA!
In config/config.php there was the line
'forcessl' => true,
The stupid line got switched on when ownCloud received a request on port 443.
Turned it off, and standard HTTP ownCloud works; neither Apache nor nginx is redirecting to SSL.
Phew.
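For reference, a minimal sketch of the http-to-https redirect the question was originally after, using nginx's return directive (the server_name is a placeholder):

server {
    listen 80;
    server_name cloud.example.com;
    # permanent redirect to the same host and URI over https
    return 301 https://$host$request_uri;
}

return is generally preferred over rewrite for a blanket scheme change, since it doesn't need to re-evaluate the URI.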

How to set nginx cache headers to never expire?

Right now I'm using this:
location ~* \.(js|css)$ { # |png|jpg|jpeg|gif|ico
    expires max;
    #log_not_found off; # what's this for?
}
And this is what I see in Firebug (screenshot not included):
Did it work? If I'm not mistaken, my browser is asking for the file again, and nginx is answering "not modified", so my browser uses the cache. But I thought the browser shouldn't even ask for the file, since it already knows it will never expire.
Any thoughts?
Do not use F5 to reload the page. Instead, click in the URL bar and press Enter, or click a link. That's how I got only one request.
Clearly, your file is not stale: its max-age and expiry date are still valid, and hence the browser will not communicate with the server. The browser doesn't ask for the file unless it is stale, i.e. its Cache-Control max-age has elapsed or its Expires date has passed. In that case it will ask the server whether the cached copy is still valid; if yes, it will serve the same copy, else it will get a new one.
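For context, that revalidation exchange looks like this on the wire (the file name is hypothetical):

GET /style.css HTTP/1.1
If-Modified-Since: <date the cached copy was last served>

HTTP/1.1 304 Not Modified

The 304 response carries no body, so the browser keeps serving its cached copy.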
Update:
Here is the thing: F5/refresh always makes the browser ask the server whether anything has been modified; the request will carry an If-Modified-Since header. That is different from just navigating the site, coming back to pages, and clicking links, in which case the browser will not ask the server and loads from cache silently (no server call). Also, if you are testing with Firefox's Live HTTP Headers, it will show you exactly what is requested, while Firebug will always show you If-Modified-Since. Safari's developer menu should show the load time as 0. Hope it helps.
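If you want to be explicit instead of relying on expires max, a minimal sketch (one year plus immutable is a common choice, not the only one):

location ~* \.(js|css)$ {
    # cache for a year; immutable tells supporting browsers not to revalidate even on reload
    add_header Cache-Control "public, max-age=31536000, immutable";
}

Note that immutable only helps if the asset URLs are versioned, because the browser will stop checking back for changes.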
