How to Use Google BigQuery with the Google Places Library?

I'm looking to use the BigQuery tool to create a table of businesses in the United States, and I want to draw the information from Google Places in order to get certain results from the XML feed to insert into my own database. I'm wondering if that is possible, and if so, how I would go about doing it.

This might not be possible under the "Google Maps/Google Earth APIs Terms of Service".
Check it at https://developers.google.com/maps/terms

Related

Accessing star ratings from reviews, for multiple businesses

Is it possible to fetch the Google star rating for any business using the Google Places API?
I have a comparison website and want to display the Google star ratings for each business on my site.
Many thanks
Yes. The responses from the Place Search and Place Details APIs include a rating field.
However, two important warnings:
These APIs are both billed, and are quite expensive ($17 and $32 per 1000 requests, respectively). Making a Place Details request for each business displayed in a comparison will probably be economically infeasible.
The Places API policies place a number of requirements on your use of Google's data. In particular, you cannot cache most data returned by the API (including ratings), and you cannot use the data alongside a non-Google map.
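Where cost and the policies permit, fetching a rating is a single Place Details call with the fields parameter restricted to what you need. A minimal sketch, assuming Node 18+ for the global fetch; the API key and example place ID are placeholders:

```typescript
// Minimal sketch: fetch the rating for one business via Place Details.
// YOUR_API_KEY and the example place_id are placeholders.
const DETAILS_URL = "https://maps.googleapis.com/maps/api/place/details/json";

async function fetchRating(placeId: string, apiKey: string): Promise<number | undefined> {
  // Restricting `fields` means you are only billed for the data you request.
  const url = `${DETAILS_URL}?place_id=${placeId}&fields=rating,user_ratings_total&key=${apiKey}`;
  const response = await fetch(url);
  const body = await response.json();
  if (body.status !== "OK") {
    throw new Error(`Place Details request failed: ${body.status}`);
  }
  return body.result?.rating; // e.g. 4.3, or undefined if the place has no rating
}

// Usage (placeholder place ID):
// fetchRating("ChIJN1t_tDeuEmsRUsoyG83frY4", "YOUR_API_KEY").then(console.log);
```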

Why don't I see a timestamp in Google Analytics Query Explorer

As per the Google Analytics GA360 documentation, why can't we see a visitor timestamp dimension, either in the front end or in the Query Explorer?
Query Explorer: https://ga-dev-tools.appspot.com/query-explorer/
My intention is to find the page-tracking timestamp at second-level precision, not just as DateHourMinute.
Do we need to implement a custom dimension holding a timestamp like YYYY-MM-DD HH:MM:SS.mmm?
The Google Analytics Query Explorer makes its calls via the Google Analytics API. The Google Analytics API has a fixed list of valid dimensions and metrics, and timestamp is not one of them (see the list of valid time dimensions).
My advice would be: yes, create your own custom dimension, using either a stamp like that or Unix time; it's up to you how you format it and how you want to use it. You will then be able to access it through the custom dimensions.
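A minimal sketch of that approach with gtag.js, assuming Universal Analytics and a hit-scoped custom dimension already created in the Admin UI as dimension1; the property ID and parameter name are placeholders:

```typescript
// Minimal sketch: record a hit-level timestamp in a custom dimension via gtag.js.
// Assumes a hit-scoped custom dimension created in the GA Admin UI as dimension1;
// 'UA-XXXXX-Y' and 'hit_timestamp' are placeholders.
declare function gtag(...args: unknown[]): void;

// Map the custom dimension slot to a readable parameter name.
gtag("config", "UA-XXXXX-Y", {
  custom_map: { dimension1: "hit_timestamp" },
});

// Attach a millisecond-precision ISO timestamp to the hits you care about.
gtag("event", "page_view_timestamped", {
  hit_timestamp: new Date().toISOString(), // e.g. 2020-01-15T09:30:12.345Z
});
```

You should then be able to query it as ga:dimension1 in the Query Explorer.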

Google Script Performance and Speed Comparison with Google Sheet Formula

I have one thousand Google Form Responses spreadsheets. These are students' answer sheets. I built a spreadsheet and pull data (timestamps and scores) for each student by using Google Sheets formulas (INDEX MATCH and IMPORTDATA). Each student has a different page. But it takes too much time and sometimes causes some source sheets to become unresponsive (I think because of heavy formula usage). My questions:
Is it possible to do the same thing (pulling data that matches a student's name from one thousand spreadsheets) by using Google Script?
If possible, which one performs better: Google Sheets formulas or Google Script?
Based on your answers I will decide whether to begin learning Google Script.
Thanks in advance.
Is it possible to do the same thing (pulling data that matches a student's name from one thousand spreadsheets) by using Google Script?
Yes, it's possible.
NOTE: Bear in mind that Google Sheets has a 5 million cell limit, so if your data exceeds this limit, you should consider using another data repository.
If possible, which one performs better: Google Sheets formulas or Google Script?
Since most Google Sheets formulas are recalculated every time a change is made in the spreadsheet that holds them, it's very likely that Google Apps Script will perform better when using Google Sheets/Google Apps Script as a database management system, because you have more control over when the database transactions are made.
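As an illustration, here is a minimal Apps Script sketch (written as TypeScript for clasp) that pulls every matching row in one pass. The source spreadsheet IDs, the sheet names, and the name column are assumptions to adapt to your files:

```typescript
// Minimal sketch: pull every row matching a student's name from many source
// spreadsheets. SOURCE_IDS, "Form Responses 1", "Summary", and the name
// column index are all assumptions -- adjust to your files.
const SOURCE_IDS: string[] = ["<spreadsheet-id-1>", "<spreadsheet-id-2>"]; // ...extend to all files

function pullStudentRows(studentName: string): void {
  const results: unknown[][] = [];
  for (const id of SOURCE_IDS) {
    const sheet = SpreadsheetApp.openById(id).getSheetByName("Form Responses 1");
    if (!sheet) continue;
    // One bulk read per file is far cheaper than per-cell reads or live formulas.
    for (const row of sheet.getDataRange().getValues()) {
      if (row[1] === studentName) results.push(row); // column B = student name (assumed)
    }
  }
  const dest = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Summary");
  if (dest && results.length > 0) {
    // One bulk write into the destination sheet.
    dest.getRange(dest.getLastRow() + 1, 1, results.length, results[0].length)
        .setValues(results);
  }
}
```

Reading each source file once with getValues() and writing once with setValues() avoids the constant recalculation that IMPORTDATA-style formulas trigger.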
Related
Measurement of execution time of built-in functions for Spreadsheet
Why do we use SpreadsheetApp.flush();?
Both do the same thing, and both will be just as intensive on your computer. My advice would be to upgrade your PC!

Log analytics using Elasticsearch & Kibana - Few queries

I have just started playing around with ELK to develop our log analytics solution.
I had a few questions regarding the best practices so that I don't make any bad choice to begin with.
This tool will analyze various types of logs to find and correlate issues. It will run on multiple 'devices', and each device will be uniquely identifiable by a serial number.
Question 1) Is it possible to create a dashboard where the serial number is taken as a user input?
Details: I would like to have one dashboard created to analyze various fields, and I should be able to specify the serial number of the device as an input. From what I see, I could use a filter, but then this would require the visualization to be 'edited'. So it appears to me that, right now, if I need to analyze multiple devices, I need to create a dashboard for each device. That is a problem: if I need to modify the dashboard, I will have to make the change to all of them. The problem can be minimized by importing additional dashboards as a JSON file, but it is still inconvenient.
Is there a better way that I am not aware of?
Question 2) On the main dashboard, I want to show a heatmap of various 'services' and their status as a time series. For example, say I am monitoring CPU, memory, network, and our service; then I want to see something like the image below:
Now, the heatmap visualization doesn't provide a way to uniquely specify the condition. I generated the image above by populating dummy data where the values were one of 0, 1, 2, or 3. This means I need to create such data periodically, which the visualization can then use. Is there any built-in mechanism (scheduled jobs, for example) provided by ELK to do such processing? One option could be to run an external program which queries Elasticsearch, fetches all the relevant information, analyzes it, and puts it back into Elasticsearch. Is that the only way?
If there are any other suggestions, please feel free to share. Thanks.
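For what it's worth, a minimal sketch of the external-program option from Question 2, assuming a recent Elasticsearch reachable on localhost, Node 18+ for the global fetch, and invented index names, field names, and thresholds:

```typescript
// Minimal sketch: query ES for recent metrics, reduce them to a status code
// (0-3), and index the result back for the heatmap to consume. The index
// names, field names, and thresholds below are all assumptions.
const ES = "http://localhost:9200";

async function rollupStatuses(): Promise<void> {
  // Average CPU per device over the last 5 minutes (assumed schema).
  const query = {
    size: 0,
    query: { range: { "@timestamp": { gte: "now-5m" } } },
    aggs: {
      per_device: {
        terms: { field: "serial_number" },
        aggs: { avg_cpu: { avg: { field: "cpu_percent" } } },
      },
    },
  };
  const searchRes = await fetch(`${ES}/metrics-*/_search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(query),
  });
  const body = await searchRes.json();

  for (const bucket of body.aggregations.per_device.buckets) {
    const cpu = bucket.avg_cpu.value ?? 0;
    const status = cpu > 90 ? 3 : cpu > 75 ? 2 : cpu > 50 ? 1 : 0; // assumed thresholds
    // Index one status document per device per run into a rollup index.
    await fetch(`${ES}/service-status/_doc`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        "@timestamp": new Date().toISOString(),
        serial_number: bucket.key,
        service: "cpu",
        status,
      }),
    });
  }
}

// Run on a schedule, e.g. from cron or a setInterval in a long-lived process.
```

Depending on your license, Watcher (part of X-Pack) can schedule similar query-then-index work inside the cluster itself.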

Filter by zip code, or other location based data retrieval strategies

My little site should be pulling a list of items from a table using the active user's location as a filter. Think Craigslist, where you search for "dvd" but the results are not drawn from the whole DB; they are filtered by a location you select. My question has two levels:
Should I go à la Craigslist and ask users to use a city-level location? My problem with this is that you need to generate what seems to me a hard-coded, hand-made list of locations.
Should I go à la zip code? The idea is to just ask the user to type his zip code, and then pull all items that are in the same zip code or within a certain distance of it.
I prefer the zip code way, as it seems the more elegant solution, but how on earth does one go about creating a DB of all zip codes and implementing the function that, given zip code 12345, gets all zip codes within a 1 mile distance?
This should be a fairly common task, as many sites have a need similar to mine, so I am hoping not to re-invent the wheel here.
Getting a Zip Code database is no problem. You can try this free one:
http://zips.sourceforge.net/
Although I don't know how current it is. Or you can use one of many commercial providers; we have an annual subscription to ZipCodeDownload.com, and for maybe $100 we get monthly updates with the latest Zip Code data, complete with the lat/long of the centroid of each zip code.
As for querying for all zips within a certain radius, you are going to need a spatial library of some sort. If you just have a table of zips with lats/longs, you will need a database-oriented mechanism. SQL Server 2008 has the capability built in, and there are open source libraries and commercial libraries that will add such capabilities to SQL Server 2005. The open source database PostgreSQL has a project, PostGIS that adds this capability to that database. It is here: http://postgis.refractions.net/
Other database platforms probably have similar projects, but those are the ones I am aware of. With one of these DB based libraries you should be able to directly query for any zip codes (or any rows of any kind that have lat/long columns) within a given radius.
If you want to go a different route, you can use spatial tools with a mapping library. There are open source options here as well, such as SharpMap and many others (Google can help out), that can use the free TIGER maps for the United States as the data source. However, this route is somewhat more complicated and possibly less performant if all you need is a radius search.
Finally, you may want to look into a web service. This, as you say, is a common need, and I imagine there are any number of web services that you can subscribe to that can provide all zip codes in a given radius from a provided zip code. A quick Google search turned up this:
http://www.zip-codes.com/free-zip-code-tools.asp#radius
But there are many resources to be had by searching on this subject.
how on earth does one [...] implement the function that, given zip code 12345, gets all zip codes within a 1 mile distance?
Here is a sample on how to do that:
http://www.codeproject.com/KB/cs/zipcodeutil.aspx
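The core of such a sample is a great-circle distance check over the lat/long columns. A minimal sketch in TypeScript, assuming the zip table is loaded into memory as { zip, lat, lng } rows:

```typescript
// Minimal sketch: find all zip codes within a radius of an origin zip.
// The ZipRow shape is an assumption about how the zip table is loaded.
interface ZipRow { zip: string; lat: number; lng: number; }

const EARTH_RADIUS_MILES = 3958.8;

// Great-circle (haversine) distance between two lat/long points, in miles.
function haversineMiles(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_MILES * Math.asin(Math.sqrt(a));
}

// A linear scan is fine for the ~40k US zip codes; a spatial index
// only starts to matter at much larger scales.
function zipsWithinRadius(zips: ZipRow[], originZip: string, radiusMiles: number): string[] {
  const origin = zips.find((z) => z.zip === originZip);
  if (!origin) return [];
  return zips
    .filter((z) => haversineMiles(origin.lat, origin.lng, z.lat, z.lng) <= radiusMiles)
    .map((z) => z.zip);
}
```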
Just to be technical... PostGIS isn't a project of the Postgres community; it's a stand-alone project that is built on top of Postgres. If you want help or support with PostGIS, you'll want to go to its community instead of Postgres's.
You can use PostGIS. Additionally, I've used deCarta's mapping libraries. They have technology which allows you to geokey any arbitrary data type. Then you can query these spatially.
disclaimer: I work for deCarta
Wouldn't it be more efficient to just figure out which cities are within a 1 mile radius and store that information in a table? Then you don't have to do calculations in the database all the time.
