Google Places REQUEST_DENIED from Server Request - Nothing works

After hours of searching and trying every possible solution found on the web, I have run out of things to try for my problem, so I really need help:
I want to implement a simple autocomplete text box with geocode results, so I make AJAX requests as the user types in the text box.
I have enabled Google Maps and Google Places in the Google APIs Console.
I have created a key for server apps with my server's IP.
I have a PHP file (called through AJAX) running on the server which sends the request to Google Places using the file_get_contents() function (SSL is enabled); I have also tried cURL.
The request I am sending is
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=MY_SEARCH_STRING&language=us&types=geocode&sensor=false&key=MY_KEY
where MY_SEARCH_STRING is a simple string like "London" and MY_KEY is the key I created.
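For reference, the server-side PHP looks roughly like this (a minimal sketch only; the $_GET['input'] parameter name and the MY_KEY placeholder are illustrative, not the exact file I am running):

<?php
// Minimal sketch: forward the autocomplete query to Google Places and return the JSON to the AJAX caller.
$input = urlencode($_GET['input']);   // the search string typed by the user, e.g. "London"
$url   = 'https://maps.googleapis.com/maps/api/place/autocomplete/json'
       . '?input=' . $input
       . '&language=us&types=geocode&sensor=false'
       . '&key=MY_KEY';
$response = file_get_contents($url);  // needs allow_url_fopen and the OpenSSL extension for https
header('Content-Type: application/json');
echo $response;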
The response I get is
{ "predictions" : [], "status" : "REQUEST_DENIED" }
I have also tried this with a browser key. I also tried creating a new project, enabling the services all over again, and creating a new key. I have switched services on and off and regenerated keys many times in every combination. Nothing worked.
The strange thing is that the same code was working for months on a landing page I had created on the server, but I had not checked it for a long time, so I do not know when it stopped working.
I would really appreciate any help! Thank you.

SOLVED. In the Google API Console I had declared allowed server IPs. I assumed that by declaring an IP I was only ensuring that requests would be accepted from that IP alone. In fact, for this to work you must also declare the allowed per-user limits for each IP; otherwise it does not allow any requests at all. I removed all the allowed IPs, waited 3-4 minutes, and the request was allowed.

Not sure if this helps, but here is a short example:
https://google-developers.appspot.com/maps/documentation/javascript/examples/places-autocomplete?hl=el

I've had a similar issue, fixed it, and discovered a few things that may be useful in troubleshooting this:
using http instead of https will result in REQUEST_DENIED
omitting &userIp=x.x.x.x will result in REQUEST_DENIED (see the sketch after this list)
a server key needs to be generated and used when the request comes from a PHP script, even if the results are ONLY consumed via a browser by users; otherwise it will result in REQUEST_DENIED
allow a few minutes before testing after the list of allowed IPs has been changed
Hope it helps some of you guys.
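As a rough illustration of those points, here is a cURL-based sketch that uses https, a server key, and the userIp parameter; the parameter name comes from this answer, and the placeholders (MY_SERVER_KEY, $_GET['input']) are assumptions for illustration:

<?php
// Minimal sketch: call the autocomplete endpoint over https, with a server key and the end user's IP.
$userIp = $_SERVER['REMOTE_ADDR'];               // IP of the browser that triggered the AJAX call
$url = 'https://maps.googleapis.com/maps/api/place/autocomplete/json'
     . '?input=' . urlencode($_GET['input'])
     . '&types=geocode&sensor=false'
     . '&key=MY_SERVER_KEY'
     . '&userIp=' . urlencode($userIp);
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body instead of printing it
$json = curl_exec($ch);
curl_close($ch);
header('Content-Type: application/json');
echo $json;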

Related

Instagram user page parsing (with proxy, without API)

I need to parse an Instagram user page without the API and with a proxy, and I use code like the one below:
require 'faraday'

def client(options = {})
  Faraday.new('https://www.instagram.com', ssl: { verify: false }, request: { timeout: 10 }) do |conn|
    conn.request :url_encoded
    conn.proxy options[:proxy]   # proxy passed in via the options hash
    conn.adapter :net_http
  end
end

response = client(proxy: URI('//111.111.111.111:8080')).get('some_username/')
response.status      # 302
response['location'] # "https://www.instagram.com/accounts/login/"
But until just a few days ago the code above worked as expected, i.e. it returned a 200 status and a body with the user page. Moreover, Faraday.get('https://www.instagram.com/some_username/') without a proxy works fine, i.e. it returns a 200 status and a body with the user page. I've also tried the same with other clients, and the result is the same: success without a proxy and a redirect with it.
Does the client need some additional specific configuration to work with a proxy, maybe?
UPDATE
I'm not sure, but it looks like a problem with the proxies, i.e. Instagram somehow detects purchased/free proxies and redirects requests from them (I used a purchased pack of proxies), because when I tried my own proxy it worked.
Instagram has made changes lately. They most likely have some special AI, or use some service, that reviews your IP address: which ISP you use, whether the address belongs to an organization like DigitalOcean or OVH or is residential, how many requests you are making and to which endpoints, how you are making them, how many accounts you use on it, how quickly you change them, and so on.
Right now, if you hit Instagram's scraping limits you will be redirected to LoginAndSignupPage (you can find it in the source code). Be aware that logging in at this point won't work - Instagram will just return a 429 error code, meaning too many requests. Also, after every such block your IP address most likely becomes even less trusted, so if you start scraping again after a block it will get blocked even faster.
I guess the easiest way is to just use a residential IP with a high enough delay between requests - like 3-5 seconds - and it is even better if you can somehow use real accounts and not overuse them, as well as make other kinds of requests in the meantime, like getting some posts, opening a single post, or something similar.
You can ignore pretty much any free IP proxy list available on Google; 99% of the IPs on them are banned, and it is almost the same with IPs from DigitalOcean, OVH, etc. - many of them are blocked as well.
Instagram is indeed very hard to bypass, and you need to rotate your proxies often to avoid blocking. Try https://rapidapi.com/neotank/api/instagram130 to mitigate this - it rotates proxies for you.

Azure and CORS Access-Control-Allow-Origin with ajax and php

First, I'm not on the web side of our world, so be nice to the backend guy.
A quick background: for a personal need I've developed a Google Chrome extension. Extensions are basically a webpage loaded in a Chrome window and... yeah, that's it. Everything is on the client side (scripts, styles, images, etc.). Only the data comes from a server through AJAX calls. A cron job calls a PHP script every hour to generate two files. One, data.json, contains the "latest" data in JSON format. The other, hash.json, contains the hash of the data. The client Chrome application uses local storage. If the remote hash differs from the local one, it simply retrieves the data file from the remote server.
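The generator script is essentially like this minimal sketch (build_latest_data() is a hypothetical helper and md5 is just one possible hash; the real script differs in its details):

<?php
// Minimal sketch: regenerate data.json and hash.json so the extension can compare hashes cheaply.
$data = build_latest_data();                   // hypothetical helper returning the latest data as an array
$json = json_encode($data);
file_put_contents('data.json', $json);         // full payload, fetched only when the hash changes
file_put_contents('hash.json', json_encode(array('hash' => md5($json))));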
As I have a BizSpark account with Azure, my first idea was: an Azure Web Site with PHP for the script, a simple homepage and the generated files, plus the Azure Scheduler for the jobs.
I developed everything locally and everything runs fine... but once on the Azure platform I get this error:
XMLHttpRequest cannot load http://tso-mc-ws.azurewebsites.net/Core/hash.json. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:23415' is therefore not allowed access.
But what I really can't understand is that I'm able (and you'll be too) to get the file with my browser... So I just don't get it... Based on some posts I found on SO and other sites I've also tried to manipulate the config and add extra headers; nothing seems to be working...
Any idea?
But what I really can't understand is that I'm able (and you'll be
too) to get the file with my browser... So I just don't get it
So when you type http://tso-mc-ws.azurewebsites.net/Core/hash.json into your browser's address bar, it is not a cross-domain request. However, when you make an AJAX request from an application running in a different domain (http://localhost:23415 in your case), that is a cross-domain request, and because CORS is not enabled on your website you get the error.
As far as enabling CORS is concerned, please take a look at this thread: HTTP OPTIONS request on Azure Websites fails due to CORS. I've never worked with PHP/Azure Websites so I may be wrong with this link, but hopefully it points you in the right direction.
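If the JSON is returned by a PHP endpoint, a minimal (and deliberately permissive) sketch of sending the CORS header could look like this; serving the static .json files directly would instead need the equivalent header added in the web server configuration, which the linked thread discusses:

<?php
// Minimal sketch: emit the CORS header before returning the JSON payload.
// '*' allows any origin; in production you would normally restrict this to the extension's origin.
header('Access-Control-Allow-Origin: *');
header('Content-Type: application/json');
readfile('hash.json');   // or data.json, depending on the endpoint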
OK, this will perhaps sound like a bit of a troll answer, but that's not my point (I'm a .NET consultant, so... nothing against MS).
I picked a Linux Azure virtual machine, installed Apache and PHP, configured Apache, set some permissions, defined the header for CORS, and configured a cron job in +/- 30 minutes... As my goal was just to get it running, the problem is solved: it's running.

Google Maps Places service is giving REQUEST_DENIED

I am using the Google Places API for place suggestions.
https://maps.googleapis.com/maps/api/place/textsearch/json?query=ari&sensor=false&key=your_api_key
I have a valid API key, and this URL works fine when I execute it from the browser.
The API returns "OK" as the status along with place suggestions, but when I execute the same URL via cURL or file_get_contents() it returns "REQUEST_DENIED" as the status and hence no place suggestions.
Why is it behaving like this?
Is there any setting I am missing?
Any suggestion would be a great help.
Thanks
Did you ever get your answer to this? As far as I am aware this is due to "cross-site scripting" security limits. You can't go from the Places API directly to Google even though you can in a browser's address bar. You have to make the call back to your server and have the server send the call to Google - then return those results back to your page/website. A minimal sketch of that server-side relay follows.
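Here is a rough sketch of such a relay in PHP (the endpoint and query come from the text search URL in the question; the YOUR_API_KEY placeholder and the $_GET['query'] name are illustrative assumptions):

<?php
// Minimal sketch: the page calls this script via AJAX; the script calls Google and relays the JSON back.
$url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
     . '?query=' . urlencode($_GET['query'])
     . '&sensor=false&key=YOUR_API_KEY';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);
header('Content-Type: application/json');
echo $result;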

Bad Request - Request Too Long HTTP Error 400. The size of the request headers is too long

Some of my users are getting the following error sometimes when they request some of the pages of my site:
Bad Request - Request Too Long HTTP Error 400. The size of the request headers is too long
It seems to happen only in Firefox.
Deleting the user's cookies does help.
What I don't understand is the following: I thought that cookies are appended to every request. Why is it that only one or two of my pages show this error and most never do?
It is also not dependent on the server page. If the user requests
http://example.com/user/Myname
he might get the error.
If he just changes the capitalization of the URL it works again (like http://example.com/user/myname). (I am running IIS which does not care too much about capitalization).
For the browser the two URLs are different, for the server they aren't.
Any idea what is happening?
It seems that there were too many cookies after all. I made sure that there were not so many and it is working now.
Some of our users also ran into this same exception in IE 8 for some of our intranet sites hosted in IIS. The issue turned out to be related to using Kerberos authentication where a user belongs to many Active Directory groups.
We found solutions from the following Microsoft Support Articles:
"HTTP 400 - Bad Request (Request Header too long)" error in Internet Information Services (IIS)
Problems with Kerberos authentication when a user belongs to many groups
The fix for us was to set the following registry keys with increased values and/or create them if they didn't exist:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters\MaxFieldLength - DWORD (32-bit), value data 32000 (decimal)
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters\MaxRequestBytes - DWORD (32-bit), value data 8777216 (decimal)
The same settings in .reg format:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\HTTP\Parameters]
"MaxFieldLength"=dword:00007d00
"MaxRequestBytes"=dword:0085ee00
Solution 1
delete all domain cookies from your browser
In Firefox 53
Alt -> Tools -> Page Info
Security
View Cookies
Remove All
In Chrome
check this superuser solution
Solution 2
Install the Web Developer extension (Firefox, Chrome, Opera)
go to the Cookies tab -> Delete Domain Cookies
Solution 3
use incognito mode and see if it works for you
More details:
I had the same problem in Chrome with a list from SharePoint.
After diagnosing with the Network tab of the Chrome developer tools, I checked the headers and found large cookies whose names start with WSS_exp; removing all of them from Chrome's cookie manager resolved my problem.
The problem is due to a cookie that has become corrupted. The easy solution is to delete all your cookies, but for the best way to solve that specific issue I have created a customized guide for Firefox, Chrome, and Internet Explorer. See here: http://timourrashed.com/how-to-fix-the-400-bad-request-error-message-from-a-website/
Another cause of this error is the user being in too many Active Directory groups. More modern SSRS versions do not have this problem.
It appears that the list of AD groups gets passed along in an HTTP header, and older versions of SQL Server Reporting Services have a header size limitation. So if a user is in an excessive number of groups, the easiest fix is to remove unneeded groups.
If removing groups is not an option, you should be able to edit the web.config file and increase the limit. You can see how to do that here...
https://www.mssqltips.com/sqlservertip/4688/resolving-the-maximum-request-length-exceeded-exception-in-sql-server-reporting-services/
This answer only applies if you use the browser's Local Storage to store users' data.
Local Storage has a limit of 5 MB per domain, it is never cleared on its own, and there is no expiry date after which data is removed. When Local Storage reaches the 5 MB limit, the application starts storing data in cookies instead. Later, when the size of the cookies reaches 1 MB, the browser shows the 400 error (the size of the request headers is too long).
In this case, it is better to clear unnecessary data from Local Storage after using it.
I used ViewData instead of TempData, and the issue was solved.

Google checkout callback can't seem to reach https server

I am trying to implement Google Checkout (GCO) on a new server; the process seemed to work fine on the old server.
The error from the GCO Integration Console is the timeout error you might expect if there is load on the server and/or the response takes longer than 3 seconds.
To perform a test (not integrating with my database), I have set up some code to send an email to me instead. If I hit the HTTPS URL manually, I get the email and I can see output on the screen. If I leave it at that, Google still returns the timeout error and I don't get an email, so I have doubts as to whether Google is even able to hit the HTTPS URL.
I did temporarily attempt to use the insecure URL for testing and indeed I received the email; however, that isn't the route we've developed for, so the problem is something to do with the secure URL specifically.
I have looked into the certificate, which is UTN-USERFirst-Hardware and is listed as accepted on http://checkout.google.com/support/sell/bin/answer.py?answer=57856 . I have also tried temporarily disabling the firewall, with no joy. Does anyone have any suggestions?
Good to hear you figured out the problem.
I'm adding the links below to give a little more context for future readers about how Google Checkout uses HTTP Basic Authentication (a minimal sketch of the authentication check follows the links):
http://code.google.com/apis/checkout/developer/Google_Checkout_XML_API.html#urls_for_posting
http://code.google.com/apis/checkout/developer/Google_Checkout_XML_API.html#https_auth_scheme
http://code.google.com/apis/checkout/developer/Google_Checkout_HTML_API_Notification_API.html#Receiving_and_Processing_Notifications
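If those links describe the scheme as I remember it, the callback handler's authentication check in PHP would look roughly like this minimal sketch; MERCHANT_ID and MERCHANT_KEY are placeholders for the values from your Google Checkout account, and processing the notification itself is up to your integration:

<?php
// Minimal sketch: verify the HTTP Basic Authentication credentials on the notification callback.
$user = isset($_SERVER['PHP_AUTH_USER']) ? $_SERVER['PHP_AUTH_USER'] : '';
$pass = isset($_SERVER['PHP_AUTH_PW'])   ? $_SERVER['PHP_AUTH_PW']   : '';
if ($user !== 'MERCHANT_ID' || $pass !== 'MERCHANT_KEY') {
    header('HTTP/1.0 401 Unauthorized');   // reject callers that do not present the expected credentials
    exit;
}
// ... read and process the notification XML from php://input here ...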
