I tried to claim "Insight for my website" and I get the error "No admin data found at root webpage http://akcja-nikon.pl/. Insights requires admin data at this root webpage for the specified URL akcja-nikon.pl".
The admin tag data is there; I triple-checked it on both index.php and pickup.php (index redirects to pickup, in case you ask). I've done this almost a hundred times on all my other domains and never had a problem. I started having issues last week with this and one other domain.
The debugger scrapes weird content for the URL and breaks after the first line of the HTML code:
https://developers.facebook.com/tools/debug/og/object?q=akcja-nikon.pl%2Fpickup.php
and flags the error in red: "Can't Download: Could not retrieve data from URL."
Any ideas? Maybe some weird Facebook cache?
The admin tag data is there; I triple-checked it on both index.php and pickup.php (index redirects to pickup, in case you ask).
No, it does not.
Your index.php redirects to agegate.php via JavaScript – which the Facebook scraper doesn’t care about.
And calling agegate.php directly, without any cookies set (which the scraper does not send either), delivers just an error message,
<br />
<b>Warning</b>: mysql_fetch_array(): supplied argument is not a valid MySQL
result resource in <b>/home/marketingpic/include-pl.php</b> on line <b>38</b>
together with an HTTP status code 302 Moved Temporarily and a faulty Location header that just says Location: pickup.php. (Faulty, because by definition a fully qualified URL is required as the value of a Location header.)
And if I call your pickup.php, again without any cookies, it tries to send me back to index.php again (and again with a faulty Location header).
No idea what exactly you are trying to accomplish here – but the way you are doing it right now seems to be nonsense.
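For completeness, a minimal sketch of a scraper-friendly redirect in PHP, assuming agegate.php should send visitors on to pickup.php (the file names are taken from the question; the rest is illustrative, not your actual code):

<?php
// Illustrative only: issue a real HTTP redirect with a fully qualified
// Location header, so clients that do not run JavaScript (such as the
// Facebook scraper) still reach the page that carries the admin tags.
$target = 'http://akcja-nikon.pl/pickup.php'; // absolute URL, as required for Location

header('HTTP/1.1 302 Found');
header('Location: ' . $target);
exit; // stop here so no warnings or other output get appended to the response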
So we just started a website (poweronfilms.com). We only have two accounts so far: Josh (the superuser) and me. I am marked as an administrator. For some reason, every time I try to log into the frontend, it just reloads the page. If I put in false credentials, it tells me I have the wrong username/password, but if they're correct, it doesn't do anything. I can still log into the backend just fine, and it tells me I'm logged in to the site, but I can't access it. Our superuser can still log on.
The most probable reason it reports an error for wrong credentials but does not let you log in with valid ones is that you have set the wrong cookie domain in configuration.php:
public $cookie_domain = '';
Recheck that setting to make sure it points to the correct domain, or try leaving it blank.
There are myriad reasons why you might be redirected back to the login page without any error. It might be:
Unwritable tmp and/or logs folder
Non-empty cookie domain (see the configuration.php sketch after this list)
Forced caching in the .htaccess file (see here)
Invalid session handler
The above list is not exhaustive.
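A hedged sketch of the configuration.php entries most often involved in the first two points (the property names are standard Joomla settings; the paths are placeholders for your own installation):

<?php
class JConfig {
    // Leave the cookie domain empty unless you deliberately share the
    // session cookie across subdomains; a wrong value here bounces you
    // straight back to the login page on the frontend.
    public $cookie_domain = '';

    // Both folders must exist and be writable by the web server user,
    // otherwise sessions are silently dropped.
    public $log_path = '/path/to/site/logs';
    public $tmp_path = '/path/to/site/tmp';

    // ... the rest of the generated configuration stays as it is
}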
The URL "http://localhost/magento/" is not accessible.
Unable to read response, or response is empty
This happens when installing a new Magento on localhost.
As Ankit Parmar said, you can check
Skip Base URL Validation Before
...but you might encounter problems later on. This message probably appears because your domain is not public or not reachable. If you skip Base URL Validation, you will be able to finish the installation, but you might run into the strange problem of being able to log in yet not access the admin section (you get redirected to the login page without any error message, even though the URL shows you are logged in).
Rather than installing Magento under localhost, you should add a fake domain name to your hosts file and set up a vhost accordingly. You may then re-install Magento by accessing the fake domain name in your browser. You can also set up the domain name in the core_config_data table if that hasn't been done (see the sketch below).
More on this here: https://magento.stackexchange.com/questions/39752/how-do-i-fix-my-base-urls-so-i-can-access-my-magento-site
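A minimal sketch of that setup, assuming a fake domain called magento.local and an Apache vhost (the domain name and filesystem paths are illustrative; adjust them to your installation):

# hosts file (/etc/hosts, or C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1   magento.local

# Apache virtual host pointing the fake domain at the Magento docroot
<VirtualHost *:80>
    ServerName magento.local
    DocumentRoot /path/to/magento
</VirtualHost>

-- base URLs in core_config_data, if they still point at localhost
UPDATE core_config_data
   SET value = 'http://magento.local/'
 WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');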
On my production ASP.NET MVC 3 site, I've been noticing the occasional "A potentially dangerous Request.Path value was detected from the client (%)." unhandled exception in the Windows application log.
While these can be perfectly valid under regular site usage (i.e. random web bots), a number of the requests appear to be from valid, local ISP users.
In the exception's request details, the Request URL is different than the Request path:
Request URL: http://www.somesite.com/Images/Image With Space.jpg
Request path: /Images/Imagehttp://www.somesite.com/Images/Image With Space.jpgWithhttp://www.somesite.com/Images/Image With Space.jpgSpace.jpg
Notice that in the "request path", any place there is a "space" in the path is replaced with an exact copy of the request url!
Within the site, the actual link looks like this:
<img src="/Images/Image%20With%20Space.jpg" />
Any idea what might be causing this? I tried to look at the documentation for Request.Path and Request.Url, but I can't figure out why they would be different. Hitting the Request URL directly brings up the resource correctly.
Update: I managed to get a trace of one of the malfunctioning requests by using IIS 7.0's Failed Request Tracing feature:
Referer: Google search
User-Agent: Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3
RequestURL: http://www.somesite.com:80/Images/Image%20With%20Space.jpg
Typing the URL manually into my iOS 5.1.1 device brings up the image correctly. Searching for the image in Google Images brings up the image correctly. Still no successful reproduction.
Partway down the trace I see:
MODULE_SET_RESPONSE_ERROR_STATUS Warning. ModuleName="RequestFilteringModule", Notification="BEGIN_REQUEST", HttpStatus="404", HttpReason="Not Found", HttpSubStatus="11",
According to IIS' documentation, 404.11 from the Request Filtering module is a "double encoding" error in the URL. Experimenting a bit, if I purposefully create a double encoded url such as http://www.somesite.com/Images/Image%2520With%2520Space.jpg I get the exact error in the event log, complete with malformed Request Path.
The malformed Request Path in the event log error appears to be a bug in ASP.NET 4.0.
It doesn't, however, explain why I'm getting the error in the first place. I checked a large number of failed request logs - the only common factor is that they're all using AppleWebKit. Could it be a bug in Safari?
The httpRuntime section of the Web.Config can be modified to adjust the URL validation. ASP.NET MVC projects usually run in validation mode 2.0, and the default invalid characters (separated by commas) are listed below.
<httpRuntime requestValidationMode="2.0" requestPathInvalidCharacters="&lt;,&gt;,*,%,:,&amp;,\" />
As you can see, the % sign is considered invalid, and a space encoded to %20 can cause the validation error. You can just add the requestPathInvalidCharacters attribute to the httpRuntime section in your Web.Config file and copy the values listed above, except for the "%," part.
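A sketch of the adjusted section, keeping the default list above minus the "%," entry (merge this into your existing httpRuntime element rather than copying it verbatim):

<!-- same defaults as above, with "%" removed from the invalid-character list -->
<httpRuntime requestValidationMode="2.0" requestPathInvalidCharacters="&lt;,&gt;,*,:,&amp;,\" />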
Scott Hanselman has a blog post about this issue:
http://www.hanselman.com/blog/ExperimentsInWackinessAllowingPercentsAnglebracketsAndOtherNaughtyThingsInTheASPNETIISRequestURL.aspx
I can't help thinking that - given the restricted user-agent - this might represent incorrect handling of the URL by that browser on iOS 5.1.1. I don't personally own such a device, so I can't test this - but it would be interesting to investigate how it behaves with a URL that actually has spaces in it instead.
I have a feeling that it's seeing the %20 in the URL from the page source and double-encoding it, thinking it's being helpful. The problem there being that IIS will decode it back (before ASP.NET kicks in) and throw its rattle out of its pram because now it sees a literal %20 instead of a space.
I personally don't recommend modifying your server's security settings; however, it would be the easiest solution, so I dare say that's what you will do.
Rather, I think you should confirm this 'bug' if you can (I'm already on the road, finding a safe hiding place from Apple's lawyers) and find a format that works for this device, or take all the spaces out of your resource URLs. A hyphen (-) is the best alternative.
I've been trying to follow these two links on how WebMatrix does URL Routing
http://www.mikesdotnetting.com/Article/165/WebMatrix-URLs-UrlData-and-Routing-for-SEO
http://www.asp.net/web-pages/tutorials/working-with-pages/18-customizing-site-wide-behavior
From my understanding, for something like http://localhost:44893/a/xyz
WebMatrix will first look for a file named /a/xyz.cshtml; if that isn't found, it will check for /a.cshtml, and if that isn't found either, it will check for /a/default.cshtml.
I created an empty site in WebMatrix 2 Beta (3/5 Refresh), created a folder named a, and added a default.cshtml file inside.
If I go to http://localhost:44893/a, I'll get the default page but if I go to http://localhost:44893/a/xyz, I'll get
HTTP Error 404.0 - Not Found
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable
Module: IIS Web Core
Notification: MapRequestHandler
Handler: StaticFile
Error Code: 0x80070002
Requested URL: http://localhost:44893/a/xyz
Physical Path: C:\Code\Test\a\xyz
Logon Method: Anonymous
Logon User: Anonymous
Is there anything that I'm missing to set this up?
You missed this part of the article:
If no matches are found during the search for files, Web Pages will
attempt to locate a default document instead. The two default
documents which work are default.cshtml and index.cshtml in that
order. However, this search is performed once, and assumes that the
URL is entirely a file path, and contains no UrlData.
The built-in routing system will always assume that the URL represents a file path. The only time a default document comes into play is when the system has already determined that a/xyz.cshtml doesn't exist, so it tries to establish whether xyz is a folder containing a default document. If a/xyz/default.cshtml (or index.cshtml) doesn't exist, no further attempts are made to locate a default document while trying to match this particular URL to a file path.
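If the goal is to have /a/xyz served with xyz available as UrlData, a minimal sketch (matching the lookup order described in the question; the file and folder names are only illustrative) would be an a.cshtml at the site root instead of a/default.cshtml:

@{
    // Hypothetical a.cshtml: /a/xyz falls through to this file because
    // a/xyz.cshtml does not exist, and "xyz" is then exposed as UrlData[0].
    var item = UrlData.Count > 0 ? UrlData[0] : "(none)";
}
<!DOCTYPE html>
<html>
    <body>
        <p>You asked for: @item</p>
    </body>
</html>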
Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The default content account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that the "DisableLoopbackCheck" should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable this?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1.
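For reference, the same change can be made from an elevated command prompt with reg.exe (a sketch of the usual command; double-check the key path before running it):

rem creates/sets the DisableLoopbackCheck DWORD value under the Lsa key
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f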
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. You are supposedly not able to access a site from within the server using the same URL that you use to reach it from the outside, at least in pre-SP1 MOSS, but I had been doing exactly that for about two years. MS Support tells me they don't quite understand how it ever worked, so it looks like I ran into an issue that should have been manifesting all along. I'm not sure what caused it to appear suddenly; maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, then point the crawler at that.