I'm going crazy searching for this error, but I haven't found a single solution for it.
This is the situation:
WordPress on an Azure Windows App Service.
Everything works fine, but in eventlog.xml I see, every day and seemingly on a schedule, a log entry like this:
A potentially dangerous Request.Path value was detected from the client (:).
at System.Web.HttpRequest.ValidateInputIfRequiredByConfig()
at System.Web.HttpApplication.PipelineStepManager.ValidateHelper(HttpContext context)
The requested page looks like this:
https://www.mywebsite.com/my-category/my-article/https:/www.mywebsite.com/my-page/
I'm sure those links are not in the Google SERP and there are no links to those pages.
I found this:
Azure websites throw 500 error when there is a colon in the URL
It's the same error, but I can't use the same solution.
It seems to be related to the IIS Request Filtering feature, but I'm not a Windows wizard and I don't know how to disable it on Azure.
Can anyone solve this issue?
IIS version: 10
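For reference, the change suggested there is roughly the following web.config tweak (I haven't tried it, and I'd rather not relax path validation for the whole site):

<configuration>
  <system.web>
    <!-- the default requestPathInvalidCharacters list includes ":" ;
         removing it stops ASP.NET from rejecting paths that contain a colon -->
    <httpRuntime requestPathInvalidCharacters="&lt;,&gt;,*,%,&amp;,\,?"
                 relaxedUrlToFileSystemMapping="true" />
  </system.web>
</configuration>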
I am currently testing the changes for version 4.0 of the protocol for PSD2 using Direct Integration.
I am running under Visual Studio using a localhost website address.
When calling the SagePay payment endpoint with ThreeDSNotificationURL set as a localhost address (http://localhost:15536/Payments/ThreeDResponse) I receive the following error '3228 : The ThreeDSNotificationURL field format is invalid.'
If I change this field to a fully qualified domain (http://www.google.com) I no longer receive the error, but can't complete my testing.
Using localhost for the termurl in version 3.0 of the protocol works as expected.
I was attempting to work locally like yourself and was receiving the same issue. After speaking with support, they confirmed that they will not accept "localhost". Also, the documentation suggests that HTTPS is a requirement, so this might also be a blocking factor.
I think someone suggested using ngrok as a means of tunneling external requests into your localhost, which is a good method to continue development locally whilst also being visible externally to services like SagePay.
Once I got past the above issue, I got several more errors for other missing required fields, as listed here: https://www.sagepay.co.uk/support/38/psd2-under-direct-integration (note that if BrowserJavascriptEnabled is true, all conditional fields are then required).
Did you URL encode the ThreeDSNotificationURL in your post?
I send it like this and it is OK:
sb.Append(HttpUtility.UrlEncode("https://www.clientdomian.com/ac/ThreeDSNotificationURL.aspx"));
I run the site on my local IIS for development.
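In context, it ends up in the form-encoded POST body roughly like this (just a sketch; the field name comes from the protocol 4.00 docs, everything around it is whatever else your integration already sends):

// sketch only: assumes using System.Text and using System.Web (for HttpUtility)
var sb = new StringBuilder();
// ...the other Direct Integration fields are appended here first...
sb.Append("&ThreeDSNotificationURL=");
// URL-encode the value so the scheme, slashes and any query string survive the form-encoded body
sb.Append(HttpUtility.UrlEncode("https://www.clientdomain.com/ac/ThreeDSNotificationURL.aspx"));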
I recently had this issue and I wanted to document it here, because searching for this issue gives very little in terms of results.
I was getting this error code when I switched my Opayo/SagePay extension (MageNest SagePay for Magento) to 3DS2.
As it turns out, the full URL wasn't being sent. It was trying to send sagepay/direct/postBack?form_key=HZuYxgiEq9w2CNFB and NOT https://www.example.com/sagepay/direct/postBack?form_key=HZuYxgiEq9w2CNFB. It's partly my fault, because there was a domain field in the config which was empty (it's not like the domain can't be retrieved automatically, huh), and partly the vendor's fault, because it was very badly documented.
So while this is a different problem for a different platform, I hope this helps someone.
I am running WordPress on an Azure Web App, connecting to a MySQL server on a different Windows server. When loading the mentioned page in Chrome, it shows two popups, 403 & Forbidden. Checking the console throws this error: ecbcc.js:2 POST /wp-admin/admin-ajax.php 403 (Forbidden)
This works fine in Firefox & IE, but not in Chrome. Any ideas why?
This is because of your cache. The minified version of the JS is causing the issue in the Chrome browser. Check or purge the cache, and check the permissions applied to the cached files as well.
I faced the same issue, but it took me a long time to fix it, because it was not caused by the common things like cache, .htaccess, file permissions, etc. I applied all the possible solutions described here. When nothing worked for me, I talked with my hosting provider, and the issue was on their side: the server had actually black-listed my IP.
Below is the reply from the support of my hosting provider:
After checking it, it looks like the issue is caused by triggered ModSecurity rules.
ModSecurity is an Apache module that works as a web application firewall. It blocks known exploits and provides protection from a range of attacks against web applications. However, sometimes mod_security may incorrectly determine that a certain request is malicious while it is actually legitimate. In such a situation, we can whitelist the triggered mod_security rule on the server, so that you can bypass the block.
In order to properly investigate, we need you to share your IP address with us. You can copy it from here: https://ip.web-hosting.com/
Looking forward to your response.
This error can appear for more than one reason. Apart from the accepted answer: if you are using a shared hosting solution as a server, it would be best to contact the support of the service. Also, if you use Plesk or cPanel, you can check the server logs to see if there is a false-positive mod_security rule catching the request. The entry could look something like this:
ModSecurity: Warning. Match of "test file" against "REQUEST_FILENAME" required. [file "/etc/httpd/conf/modsecurity.d/rules/custom/006_i360_4_custom.conf"] [line "264"] [id "77140992"]
You can add that ID to your firewall exclusion list (if this is provided by your hosting service) and the server will not block the request anymore.
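If you manage the server yourself (a VPS or dedicated box rather than shared hosting), the exclusion for a single rule looks roughly like this in the Apache-level ModSecurity configuration, using the id from the log line above:

# goes in the vhost or a ModSecurity include, not in .htaccess
SecRuleRemoveById 77140992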
IMPORTANT: If you are not sure what you are doing, ask your hosting provider for support. Experimenting on live servers/sites is not the best option and I would strongly recommend avoiding it.
I am developing a RESTful web api service. It's web api v.1, not v.2. Also I am developing on Visual Studio 2010 SP1. I have installed the MVC 4 for VS2010 SP1.
Please understand and keep in mind that Upgrading to newer versions of VS or Web Api 2 is not an option.
I have the following problem, which appeared after a Windows Update occurred.
When I try to use my RESTful API this way...
http://url.com/documents/ (gets all the documents), I get the following error...
The resource cannot be found.
Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
Here is the stack trace.
[HttpException]: File does not exist. at System.Web.StaticFileHandler.GetFileInfo(String virtualPathWithPathInfo, String physicalPath, HttpResponse response) at System.Web.StaticFileHandler.ProcessRequestInternal(HttpContext context, String overrideVirtualPath) at System.Web.DefaultHttpHandler.BeginProcessRequest(HttpContext context, AsyncCallback callback, Object state) at system.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
If I specify the action name directly, it works. For example...
http://url.com/documents/get (gets all the documents) or
http://url.com/documents/7 (gets document id=7).
It only fails when I call it by its default name. I have already read about similar situations here and tried to follow their solutions, but they are not working for me.
Now, I know this is not a "web api routing" issue, because I actually get the .NET default exception page HTML markup (I am using Postman to test my web service). When I force a "routing issue", I get a JSON error description instead, which means the controller actually got created in the pipeline and returned a response.
Also, I have a custom class (SecurityHandler) that inherits from DelegatingHandler. It runs almost first in the pipeline on each call to the API, even before the actual controller. Well, it is never called when I get the error, which confirms to me that the "webserver" (either the VS Development Server or IIS 7) is the one throwing the exception.
I have exhausted every single solution that I have found here: changing my web.config to multiple handler configurations, re-installing MVC 4 for VS2010, creating an entirely new project... all these efforts have shown no results whatsoever.
Like I said, this was working perfectly fine until my PC restarted after a Windows Update, BUT... why does it fail on the server as well? I did deploy my API to the server after the error started to occur.
Thanks.
The issue is not in Web API (and has nothing to do with its version or Visual Studio 2010); it's the static file handler trying to serve the request and failing.
Alternatives:
0. Do you have a documents folder in your site? Get rid of it.
1. Remove the static file handler for directory browsing and re-add it.
2. Use RAMMFAR (less recommended; see the sketch after this list).
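For option 2, RAMMFAR is the runAllManagedModulesForAllRequests switch; a minimal sketch of the web.config change (it works, but every request, static files included, then runs through the managed pipeline, which is why it's less recommended):

<configuration>
  <system.webServer>
    <!-- route every request through the managed pipeline instead of the static file handler -->
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>
</configuration>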
The folks in the QA department use Visual Studio Team Test (2008, IIRC) to run load tests against our web application.
The latest set of tests have failed on several pages. The error reported is
Request failed: The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF
Searching for this on Google yields quite a few results. It would appear that this error message is generated by the .NET Framework WebRequest class (i.e. it is not a Visual Studio specific message). The most useful result is this one, which details my exact problem and how to suppress the error.
But of course, I want to get to the bottom of why this error occurs in the first place. Here are some more facts:
This error never used to occur when the tests were run against an older version of the web app. The web app host OS and web server (Windows 2003 and IIS 6) are identical in both cases.
Not all the pages generate this error - only some.
The only significant change to these pages (that I can think of) is that they now use some AJAX, whereas before they did not (IIRC).
In order to narrow down the problem, I created the simplest page that I could to replicate the problem. Luckily, that was not too hard. I then inspected the bytes in the header using Fiddler but I could not find an occurrence of a CR (0x0D) that was not followed by a LF (0x0A).
The raw HTTP response (saved from Fiddler by saving the response bytes, so its encoding should not have been altered during the save) is here as text, if you don't believe me!
So now I am left thinking that the supposed error might be a false alarm. Does anyone else have experience of this, or can help shed light?
This is definitely not a false alarm - I've been getting this error in my app a lot while trying to communicate with Facebook API.
I've just stumbled upon this response from Steven Cheng - http://www.velocityreviews.com/forums/t302174-why-do-i-get-the-server-committed-a-protocol-violation.html - and let me quote him:
From your description, you're using the HttpWebRequest component to send some http request to some external web resource in your ASP.NET web application. However, you're always getting the "The server committed a protocol violation. Section=ResponseStatusLine" error unless you set the following section in the web.config file:
<system.net>
  <settings>
    <httpWebRequest useUnsafeHeaderParsing="true" />
  </settings>
</system.net>
And you're wondering the cause of this behavior, correct?
As for this issue, I've performed some research on this and found that the problem is actually caused by the strict http header parsing/validating of the HttpWebRequest component. According to the HTTP specification (HTTP 1.1), HTTP header keys should specifically not include any spaces in their names. However, some web servers do not fully respect the standards they're meant to.
Applications running on the .NET Framework and making heavy use of http requests usually use the HttpWebRequest class, which encapsulates everything a web-oriented developer could dream of. With all the recent issues related to security, the HttpWebRequest class provides a self-protection mechanism, preventing it from accepting HTTP answers which do not fully conform to the specifications.
The common case is having a space in the "content-length" header key. The server actually returns a "content length" key which, assuming no spaces are allowed, is considered an attack vector (HTTP response splitting attack), thus triggering an "HTTP protocol violation" exception.
I'll try whether this helps right now and post the results later.
Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The default content account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that the "DisableLoopbackCheck" should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable this?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD key called DisableLoopbackCheck and give it the hex value 1.
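For reference, the .reg equivalent of what I'm doing in regedit:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"DisableLoopbackCheck"=dword:00000001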
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. You are apparently not supposed to be able to access a site from within the server using the same URL that you use to reach it from the outside, at least in pre-SP1 MOSS. But I had been doing this for about two years somehow. MS Support tells me they don't quite understand how it was ever working. So it looks like I ran into an issue that should have been manifesting all along; I'm not sure what caused it to appear suddenly, maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, and then point the crawler at that.