Google Geolocation returns the wrong lat and lng inside an AWS Lambda function

I am trying to get a user's geolocation from their IP address, so I tried calling the Google Geolocation API inside my Lambda function:
const getLatLng = () => axios
  .post(`https://www.googleapis.com/geolocation/v1/geolocate?key=${key}`)
  .then(({ data }) => data);
Now the strange part: when I run my Node project locally and call the endpoint, it works well; it detects my IP and returns the correct lat and lng.
But when I deploy the same code behind API Gateway in AWS, the request succeeds yet the lat and lng it returns are wrong.
Does anyone know what the reason might be?
Should I instead call the Google API from my client, pass the lat and lng to my server, and then use them for the other calculations I need?

Google Geolocation works based on the device / IP you are requesting from.
If you request that information from an AWS instance, it will give you its best guess at where that AWS server is physically located. In this situation, Google has absolutely no idea who the client is or that you want their location information, and would have no way to get that.
You should use geolocation from the client-side and pass that data to your server, or use a different service that can perform geolocation based on the IP your server sees the client's request as coming from.

Related

Is there any way to identify where an API request comes from

I'm working on a Flutter app that uses APIs to get data from the server. The app is public and anyone can use it without logging in, and everything works fine.
My question: is there any way to identify where an API request comes from? Anyone can use this API to get data, which could lead to flooding the server.
If it is possible to find out where a request is coming from, then I can process ONLY the requests that come from my Flutter application.
Is it possible?
Use HTTPS as the protocol and add an API key and client secret to your app.
Then protect your API with, e.g., HTTP Basic auth or OAuth.
https://laravel.com/docs/7.x/authentication#stateless-http-basic-authentication
https://laravel.com/docs/7.x/passport
When the first request comes in to the server, issue a token, for example (pseudocode):
// stringContainingData can be a JSON string holding details about the client and the connection
token = MyHashingFunctionUsingAPassword(stringContainingData, MyStrongPassword);
After sending back the token, every subsequent API call should include it; if it is missing, reject the request. If the token exists, do this:
stringContainingData = MyDeHashingFunction(token, MyStrongPassword);
// verify the data
mappedToken = stringToMap(stringContainingData);
if (mappedToken.containsKey('keyThatShouldBePresent')) // acknowledge request
else // reject request
To limit further flooding, set a maximum number of requests per second from a single IP.

How to access the Books API from a web application hosted on Microsoft Azure?

I have built a simple enough web application that uses the Google Books API to retrieve volume information for an ISBN the user provides. The application uses the official C# library, and requests are authorized by means of an API key.
When running the app on my local machine I access the service with a German IP address and everything is fine. When accessing the Books API from Microsoft Azure, however, I get the following error:
Google.Apis.Requests.RequestError
Cannot determine user location for geographically restricted operation. [403]
Errors [
Message[Cannot determine user location for geographically restricted operation.]
Location[ - ]
Reason[unknownLocation]
Domain[global]
]
Does anyone know how to access the Google Books API from a web application hosted in Microsoft Azure?
The country=us URL parameter solved this for me. It's mentioned in this GitHub project, but of course not documented in the volumes list route.
This blog says you can also use the X-Forwarded-For header; I didn't try that.
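For concreteness, here is a small sketch of what appending that parameter to a Books API volumes request looks like; the API key and ISBN are placeholders.

```javascript
// Sketch: build a Books API volumes URL that carries the country override.
function volumesUrl(isbn, apiKey) {
  const params = new URLSearchParams({
    q: `isbn:${isbn}`,
    country: 'us',   // the workaround: tell Google which location to assume
    key: apiKey,     // placeholder API key
  });
  return `https://www.googleapis.com/books/v1/volumes?${params}`;
}
```

The same query string works whether you call the REST route directly or pass extra parameters through a client library.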
OK, I have no idea if this will work, but you can supply an IP address in your request. If it doesn't work, let me know and I will delete this.
var bookService = new BooksService(new BaseClientService.Initializer()
{
    ApiKey = "xxxx",
    ApplicationName = "BooksService Authentication Sample",
});
var request = bookService.Bookshelves.List("user");
request.UserIp = ""; // <-- supply an IP address here
var result = request.Execute();
The thing is, Google normally takes the IP address the request is coming from. I wonder why it can't derive a geolocation from the IP address Azure sends the request from.

YouTube Data API refusing my server-side requests

I'm developing an application that uses the YouTube Data API from the server (with Node.js), so my server-side credential key is already set up, but when I try to get data it always refuses my requests with this message:
Access Not Configured. The API is not enabled for your project, or there is a per-IP or per-Referer restriction configured on your API key and the request does not match these restrictions. Please use the Google Developers Console to update your configuration.
I have my app hosted on Heroku with the QuotaGuard Static add-on, which gives me two static IPs that I have whitelisted in the Credentials section of the Google Developers Console. I also have the app hosted on modulus.io and whitelisted the IP range 54.0.0.0/8, which is what they gave me for their AWS region... Neither deployment works; the app only works on my local machine, with my home external IP whitelisted.
The funny thing is that yesterday, roughly 15 minutes after I whitelisted the 54.0.0.0/8 range, the API started working on my Heroku host, but today it stopped again (perhaps because it moved to another IP inside their AWS region).
Is there any way to check which IP is making the requests to the YouTube Data API? In the Developers Console I can see the requests reaching the API and being rejected as "errors", so at least I know they are receiving the requests...
Any ideas?
Thanks
EDIT:
Partially solved. See my answer below.
There's a simple service I found that returns your public IP:
http://api.ipify.org?format=json
You could add a route to your application that you can hit from your browser; the route handler makes a request to this service and returns the result. You can then periodically check your app's actual public IP and adjust the whitelist accordingly.
// Example using Express + the request library
app.get('/ip', function(req, res) {
  request({ uri: 'http://api.ipify.org?format=json' }, function(err, response, body) {
    res.send(body);
  });
});
The problem with the app hosted on Modulus.io is solved now. The app was running on Joyent servers by default, but after I switched it to the Amazon AWS region, with the IP range 54.0.0.0/8 whitelisted, Google accepts all my requests to the API.
The app on Heroku is still not working, but since it works on one service I'm going to stop investigating the other.

How can I use IAM in AWS to provide temporary security credentials

IAM has a companion service called the Security Token Service (STS). It allows you to create temporary access through the SDK/API to AWS resources without needing to create dedicated credentials. These STS tokens have a user-defined lifetime and are destroyed after that. People use this service for accessing content from mobile devices such as Android/iOS apps.
But I don't know how to use this service.
Any help or support is appreciated.
Thanks
STS and IAM go hand in hand, and they are really simple to use. Since you have not given a use case, allow me to explain a few things before we get into the code.
Note: I code in PHP and the SDK version is 3.
The idea of STS is to create some tokens which allows the bearer to do certain actions without you (the owner or the grantee) compromising your own credentials. Which type of STS you are going to use depends on what you want to do. Possible actions are listed here.
E.g.1: Typically, you use AssumeRole for cross-account access or
federation. Imagine that you own multiple accounts and need to access
resources in each account. You could create long-term credentials in
each account to access those resources. However, managing all those
credentials and remembering which one can access which account can be
time consuming. Instead, you can create one set of long-term
credentials in one account and then use temporary security credentials
to access all the other accounts by assuming roles in those accounts.
E.g.2: Typically, you use GetSessionToken if you want to use MFA to protect
programmatic calls to specific AWS APIs like Amazon EC2 StopInstances.
Let us assume you have an IAM user and you want to create many temporary credentials for that user, each credential with a time frame of 15 minutes. Then you will write the following code:
$stsClient = \Aws\Laravel\AwsFacade::createClient('sts', array(
    'region' => 'us-east-1'
));
$awsTempCreds = $stsClient->getSessionToken(array(
    'DurationSeconds' => 900
));
Points to note:
Credentials created by IAM users are valid for the duration you specify: from 900 seconds (15 minutes) up to a maximum of 129,600 seconds (36 hours), with a default of 43,200 seconds (12 hours). Credentials created using account credentials range from 900 seconds (15 minutes) up to a maximum of 3,600 seconds (1 hour), with a default of 1 hour.
In the above example I am getting $stsClient using AWS Facade which is part of Laravel framework. It is up to you how you get hold of $stsClient by passing credentials. Read this installation guide on how to instantiate your $stsClient.
Since STS is a global resource i.e. it does not require you to be in a specific region, you MUST ALWAYS set the region to us-east-1. If your region is set to anything else, you will get errors like should be scoped to a valid region, not 'us-west-1'.
There is no limit on how many temporary credentials you can create.
These credentials have the SAME permissions as the account/IAM user they are derived from.
The code above returns a set of temporary credentials for an AWS account or IAM user: an access key ID, a secret access key, and a security token, plus a few other pieces of information such as the expiry time.
You can now give these temporary credentials to someone else. Say I give them to a friend who uses the JavaScript SDK; he can now write code like this:
<script>
  var accessKey = '<?php echo $credentials["AccessKeyId"] ?>';
  var accessToken = '<?php echo $credentials["SecretAccessKey"] ?>';
  var accessSessionToken = '<?php echo $credentials["SessionToken"] ?>';
  AWS.config.update({
    credentials: {
      accessKeyId: accessKey,
      secretAccessKey: accessToken,
      sessionToken: accessSessionToken
    },
    region: 'us-east-1'
  });
  var bucket = new AWS.S3();
  var file = fileChooser.files[0];
  var params = {Bucket: 'mybucket', Key: file.name, ContentType: file.type, Body: file};
  bucket.upload(params, function (err, data) {
    results.innerHTML = err ? 'ERROR!' : 'UPLOADED.';
  }).on('httpUploadProgress', function(evt) {
    console.log('Progress:', evt.loaded, '/', evt.total);
  });
</script>
What does this script do?
It creates a new client using the temporary credentials.
It then uploads a very large file (more than 100 MB) to mybucket using multipart upload.
Similarly, you can perform operations on any AWS resource as long as your temporary credentials have permission to do so. Hope this helps.
STS is a little tough to understand (you need to put real time into reading about it). I will try (yes, try!) to explain it as simply as possible. The service is useful if you need to do things like this:
You have a bucket on S3 and, say, 100 users whom you would like to let upload files to it. Obviously you should not distribute your AWS key/secret to all of them. Here comes STS: you can use it to allow these users to write to a "portion" of the bucket (say, a folder named by their Google ID) for a "limited time" (say, 1 hour). You achieve this by doing the required setup (IAM, an S3 policy, and STS AssumeRole) and sending them a URL they use to upload. In this case you can also use Web Identity Federation to authenticate users via Google/Facebook/Amazon.com, so this workflow is achievable with no backend code. The Web Identity Federation Playground gives you a sense of how this works.
You have an AWS account and want somebody else (another AWS user) to help you manage it. Here again, you give the other user limited-time access to a selected portion of your AWS resources without sharing the key/secret.
Assume you have a DynamoDB setup with a row of data per app user. You need to ensure a given user can write only to his own row and nobody else's. You can use STS to set up things like this.
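As a sketch of how the cross-account/delegation scenarios above translate into code, this builds the parameters you would hand to `sts.assumeRole()` in the AWS SDK for JavaScript. The role ARN and session-naming convention are illustrative assumptions.

```javascript
// Sketch: parameters for an STS AssumeRole call (AWS SDK for JavaScript).
function assumeRoleParams(roleArn, user) {
  return {
    RoleArn: roleArn,                   // the role in the account you want to reach
    RoleSessionName: `upload-${user}`,  // appears in CloudTrail logs
    DurationSeconds: 3600,              // temporary credentials valid for 1 hour
  };
}

// usage (with the aws-sdk package installed):
// const { Credentials } = await new AWS.STS({ region: 'us-east-1' })
//   .assumeRole(assumeRoleParams('arn:aws:iam::123456789012:role/UploaderRole', 'alice'))
//   .promise();
```

The `Credentials` returned (access key ID, secret access key, session token) are what you would hand to the uploader, just like the getSessionToken example earlier.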
A complete example is here:
Web Identity Federation with Mobile Applications : Articles & Tutorials : Amazon Web Services : http://aws.amazon.com/articles/4617974389850313
More reading:
Creating Temporary Security Credentials for Mobile Apps Using Identity Providers - AWS Security Token Service : http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingWIF.html

Google places API - 'REQUEST_DENIED' when using server-side proxy on EC2

I'm stuck trying to get an API call to Google Places working. I'm using a server-side PHP proxy for the request - not because I want to, but because it's part of JQuery-POI-Mapper.
The server I'm using is an Amazon EC2 server without a static IP address. I'm out of static IP addresses at the moment, but I'll take the time to request more if people think that's the problem. I identified the current public IP address of my EC2 server by running this command on it:
curl http://169.254.169.254/latest/meta-data/public-ipv4
Next, I went to https://code.google.com/apis/console and created a new API key. I selected a server-side API key and used my EC2 server's public IP address.
Under the services tab, I enabled every related API I could think of, including:
Google Maps API v2
Google Maps API v3
Google Maps Geolocation API
Places API
Static Maps API
Here's a screen capture of my Services:
The proxy code running on the EC2 server is part of a commercial package, so I shouldn't post the entire code, but it's very short, and the important part is:
$json = file_get_contents($url);
The $url variable in my case was:
https://maps.googleapis.com/maps/api/place/search/json?location=-37.7133771,145.14891620000003&radius=2000&types=bakery&sensor=false&key=AIzaSyCCUV...
The response I get is:
{
"html_attributions" : [],
"results" : [],
"status" : "REQUEST_DENIED"
}
I checked to see if I had gone over my quota already, but everything looks OK. What's interesting is that Google is showing that I had made requests to the Places API today, so Google definitely knows that the requests are coming from me.
Here's a screen capture of my API Traffic Report, which is all from testing:
Any advice would be much appreciated.
Thanks,
Bret
You need a static IP address to successfully use IP locking with a server API key.