Non User Agent Denied - web-hosting

I'm really stuck and way over my head.
I've got an e-commerce site set up with WooCommerce and Realex (the merchant company).
I'm having problems with the response URL: it is being denied when programmatic (non-user-agent) requests are made.
try:
$ curl "http://fifty2printsolutions.com/?wc-api=WC_Gateway_Realex_Redirect"
But when you set a valid user agent:
try:
$ curl -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.112 Safari/534.30" "http://fifty2printsolutions.com/?wc-api=WC_Gateway_Realex_Redirect"
it works.
This points to a server-configuration issue: it seems programmatic (non-user-agent) requests are being denied by the server with a 406 response.
At least this is what the extension author has told me.
How can I go about fixing this? I'm with Host Papa, and their support is a bit lackluster.
Any help would be greatly appreciated.

I think you might be dealing with an Apache mod_security issue here. From my experimentation, as long as I have something in the User-Agent it will process OK. However, if I include the words libwww or perl in the User-Agent, or leave it blank, it doesn't work.
Realex ePage Redirect uses "epage.cgi / libwww-perl" (or similar, I don't have it in front of me) as the User-Agent. This hits both of those rules, so it is getting blocked. We've come across this before, and the only solution is to ask the administrators of the site to modify the mod_security rules to allow this. We once changed the User-Agent string to something else but, incredibly, several shopping carts suddenly stopped working and we had to roll the change back.
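For reference, the kind of mod_security exception the site's administrators might add can be sketched like this (ModSecurity 2.x syntax; the rule IDs here are assumptions, since the ID of the User-Agent rule actually being tripped depends on the rule set the host runs and would need to be read from the audit log):

```apache
# Hypothetical ModSecurity 2.x exception: let the Realex/WooCommerce
# callback through even when the User-Agent contains "libwww" or "perl".
# 990002 stands in for the blocking rule's real ID.
SecRule ARGS:wc-api "@streq WC_Gateway_Realex_Redirect" \
    "id:1000001,phase:1,pass,nolog,ctl:ruleRemoveById=990002"
```

Only the host can apply something like this, which is why the advice is to raise it with the server administrators.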
Hope this helps
Owen

Related

w3m: fake JavaScript and user-agent

I use w3m to look up words in the Spanish dictionary (dle.rae.es), so I'm using a bash script with this line:
w3m "https://dle.rae.es/$1"
The filename of the script is defes. For example, to look up the meaning of "casa" I type defes casa and view the result in my terminal.
However, I'm getting this error:
Please enable cookies.
Please wait...
We are checking your browser... dle.rae.es
Please stand by, while we are checking your browser...
Redirecting...
Please turn JavaScript on and reload the page.
Please enable Cookies and reload the page.
Why do I have to complete a CAPTCHA?
Completing the CAPTCHA proves you are a human and gives you temporary access to the web property.
What can I do to prevent this in the future?
If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not
infected with malware.
If you are at an office or shared network, you can ask the network administrator to run a scan across the network looking
for misconfigured or infected devices.
Cloudflare Ray ID: 69ffcac51e4a668f • Your IP: XX.YYY.ZZZ.NNN • Performance & security by Cloudflare
I tried doing something like this:
w3m -header 'User-Agent: blah' ...
I've tested with a lot of user agents.
I'm also using the -cookie flag to try to get rid of the cookies message...
What could I do?

bug in google drive SDK JS api (TypeError: Cannot read property 'sl' of undefined)

A few weeks ago we started noticing strange errors from the Google client API or the Google Drive JS API (not sure which; the URL reference is below). They have increased in frequency over the last few days:
TypeError: Cannot read property 'sl' of undefined
This seems to affect Windows Chrome mostly; a typical example of the user agent from our error logs is
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31
From what I can see, the only line with .sl is this:
if(!this.b.headers.Authorization){var f=(0,_.Hx)(_.p,_.p);f&&f[_.Ak.pl.sl]&&(c=f[_.Ak.pl.sl].split(/\w+/))}
this comes from
https://apis.google.com/_/scs/apps-static/_/js/k=oz.gapi.en.uSTvEdNXb7o.O/m=client/rt=j/sv=1/d=1/ed=1/am=UQ/rs=AItRSTOm1KS5pZVEepZkn9qQJeuQZC_Qjw/cb=gapi.loaded_0
I know this is intentionally cryptic, so it's beyond me to suggest how to fix it, but I would appreciate it if someone looked into this, as the frequency seems to be increasing. Perhaps a guard around _.Ak.pl to check that it's not null before reading .sl?
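The guard being suggested could be sketched like this in plain JavaScript (the function name, its arguments and its return value are hypothetical illustrations of the pattern, not Google's actual code):

```javascript
// Hypothetical defensive version of the minified line above: each property
// access is guarded, so a missing _.Ak.pl no longer throws
// "Cannot read property 'sl' of undefined".
function readAuthHeaderParts(f, Ak) {
  var key = Ak && Ak.pl && Ak.pl.sl;          // stop at the first undefined link
  if (key && f && typeof f[key] === "string") {
    return f[key].split(/\w+/);               // same split as the minified snippet
  }
  return null;                                // "nothing to read" instead of a crash
}
```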
I managed to resolve the problem that was reported. The issue is due to the authorization settings; some settings seem not to work for the app. The app now works with the following settings:
gapi.auth.authorize({client_id: clientId, scope: scopes, immediate: false}, handleAuthResult);
Previously the app was configured to run offline.
Note: in the code, clientId and scopes are variables, and handleAuthResult is an associated callback function.

Janrain engage: token_url not called from Firefox

I have an application using Janrain Engage for login.
Everything has worked for a few months, except on ONE machine in Firefox...
I have no clue why, but when I try to log in from this machine (on my site or even on Janrain's admin site), I get the sign-in page, then I choose a provider, enter my information, validate and then... nothing happens!
Normal process trace is:
GET
https://XXXXXXX.rpxnow.com/signin/get_login_info?widget_type=auth&provider=google&time=1358864872301 [HTTP/1.1 200 OK 1144ms]
POST http://XXXXXXX.rpxnow.com/redirect?loc=yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy [HTTP/1.1 200 OK 2533ms]
POST https:// my_token_url_on_mydomain [HTTP/1.1 302 Found 3667ms]
On the faulty machine, I just have the first GET, and then nothing else...
The token_url callback is never called, so I even do not have any trace on my server.
The machine where the problem occurs is my personal machine at home. The same login attempts work like a charm in Chrome or IE. I didn't find any specific settings in my Firefox configuration.
I'm afraid some potential customers might see the same behaviour as me and go away... Is anyone experiencing a similar problem?
Froggy, you might have third-party cookies disabled in Firefox. That's the only thing I can think of that would cause this issue in a single browser. See here for info on changing that setting: http://support.mozilla.org/en-US/kb/disable-third-party-cookies/

ASP.NET MVC "Potentially dangerous Request.Path" with valid URL

On my production ASP.NET MVC 3 site, I've been noticing the occasional "A potentially dangerous Request.Path value was detected from the client (%)." unhandled exception in the Windows application log.
While these can be perfectly valid under regular site usage (i.e. random web bots), a number of the requests appear to be from valid, local ISP users.
In the exception's request details, the Request URL is different than the Request path:
Request URL: http://www.somesite.com/Images/Image With Space.jpg
Request path: /Images/Imagehttp://www.somesite.com/Images/Image With Space.jpgWithhttp://www.somesite.com/Images/Image With Space.jpgSpace.jpg
Notice that in the "request path", any place there is a "space" in the path is replaced with an exact copy of the request url!
Within the site, the actual link looks like this:
<img src="/Images/Image%20With%20Space.jpg" />
Any idea what might be causing this? I tried to look at the documentation for Request.Path and Request.Url, but I can't figure out why they would be different. Hitting the Request URL directly brings up the resource correctly.
Update: I managed to get a trace of one of the malfunctioning requests by using IIS 7.0's Failed Request Tracing feature:
Referer: Google search
User-Agent: Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3
RequestURL: http://www.somesite.com:80/Images/Image%20With%20Space.jpg
Typing the URL manually into Safari on my iOS 5.1.1 device brings up the image correctly. Searching for the image in Google Images brings up the image correctly. Still no successful reproduction.
Partway down the trace I see:
MODULE_SET_RESPONSE_ERROR_STATUS Warning. ModuleName="RequestFilteringModule", Notification="BEGIN_REQUEST", HttpStatus="404", HttpReason="Not Found", HttpSubStatus="11",
According to IIS' documentation, 404.11 from the Request Filtering module is a "double encoding" error in the URL. Experimenting a bit, if I purposefully create a double encoded url such as http://www.somesite.com/Images/Image%2520With%2520Space.jpg I get the exact error in the event log, complete with malformed Request Path.
The malformed Request Path in the event log error appears to be a bug in ASP.NET 4.0.
It doesn't, however, explain why I'm getting the error in the first place. I checked a large number of failed request logs - the only common factor is that they're all using AppleWebKit. Could it be a bug in Safari?
The httpRuntime section of the Web.config can be modified to adjust URL validation. ASP.NET MVC projects usually run in validation mode 2.0, and the default invalid characters (separated by commas) are listed below.
<httpRuntime requestValidationMode="2.0" requestPathInvalidCharacters="<,>,*,%,:,&,\" />
As you can see, the % sign is considered invalid. A space can be encoded to %20, causing the validation error. You can simply add the requestPathInvalidCharacters attribute to the httpRuntime section in your Web.Config file and copy the values listed above, except for the "%," part.
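Concretely, the adjusted element could look like this (a sketch only; note that in a real Web.config file the <, > and & characters inside the attribute value must be XML-escaped):

```xml
<!-- Hypothetical Web.config fragment: the default invalid-character list
     minus "%", with the special characters XML-escaped as .config requires. -->
<system.web>
  <httpRuntime requestValidationMode="2.0"
               requestPathInvalidCharacters="&lt;,&gt;,*,:,&amp;,\" />
</system.web>
```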
Scott Hanselman has a blog post about this issue:
http://www.hanselman.com/blog/ExperimentsInWackinessAllowingPercentsAnglebracketsAndOtherNaughtyThingsInTheASPNETIISRequestURL.aspx
I can't help thinking that, given the restricted user-agent, this might represent incorrect handling of the URL by that browser on iOS 5.1.1. I don't personally own such a device so I can't test this, but it would be interesting to investigate how it behaves with a URL that actually has spaces in it instead.
I have a feeling that it's seeing the %20 in the URL from the page source and double-encoding it, thinking it's being helpful. The problem there being that IIS will decode it back (before ASP.NET kicks in) and throw its rattle out of its pram, because now it sees a literal %20 instead of a space.
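That double-encoding theory is easy to demonstrate in plain JavaScript, since encodeURI re-encodes the % of an existing %20, which is exactly what an over-helpful client would produce:

```javascript
// Demonstration of the double-encoding theory: encoding an already-encoded
// URL turns each "%20" into "%2520", because the "%" itself gets re-encoded.
const path = "/Images/Image With Space.jpg";
const once = encodeURI(path);   // "/Images/Image%20With%20Space.jpg"
const twice = encodeURI(once);  // "/Images/Image%2520With%2520Space.jpg"
// IIS then decodes one level, is left with a literal "%20", and the "%"
// trips request validation, i.e. the 404.11 double-encoding rejection.
```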
I personally don't recommend modifying your server's security settings; however, it would be the easiest solution, so I dare say that's what you will do.
Rather, I think if you can confirm this 'bug' (I'm already on the road, finding a safe hiding place from Apple's lawyers), find a format that works for this device, or take all the spaces out of your resource URLs. A hyphen (-) is the best alternative.

google image search shell api

I'm looking for something like an API for Google image search that I can use from a bash shell.
I want to get a list of links and resolution info for some query string.
Ideally I would curl or wget some page and then parse the results.
But I cannot find any parseable page variant.
I'm trying $ curl "http://images.google.com/images?q=apple" and getting nothing.
Any ideas?
There are APIs for Google's searches (http://code.google.com/apis/imagesearch), although I don't know how you would meet the referrer/branding licensing requirements.
It seems that Google Images does not like curl (403 error code).
To avoid the 403 error, you need to fake the user agent, like this:
wget -qO- "http://images.google.com/images?q=apple" -U "Firefox on Ubuntu Gutsy: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20080418 Ubuntu/7.10 (gutsy) Firefox/2.0.0.14"
Still, I guess this is not enough, since you get a load of JavaScript code that needs to be executed somehow.
My 2 cents.
