I've stumbled upon something when using Google's Places API to retrieve info about a place.
When I retrieve the info for one specific place using the Place Details API request, I get a different total rating than when opening the same place (with the same id) using Google Maps.
Why is this?
The only qualification for the API data, clearly stated in the documentation, is that the API returns at most five reviews rather than all of them.
I cannot find anything anywhere saying that the API data is not expected to match what is visible in the UI; in fact, in every other case I've checked, it has been the same.
Example:
This place has an overall rating of 5.0 based on "19 reviews", of which 17 contain text and 2 contain just star ratings.
Using the same ID, here's what the API call returns:
{ html_attributions: [],
result:
{ address_components:
[ [Object], [Object], [Object], [Object], [Object], [Object] ],
adr_address:
'Chandlery Building, Hamble Point Marina, <span class="street-address">School Lane, Hamble</span>, <span class="extended-address">Hamble-le-Rice</span>, <span class="locality">Southampton</span> <span class="postal-code">SO31 4NB</span>, <span class="country-name">UK</span>',
business_status: 'OPERATIONAL',
formatted_address:
'Chandlery Building, Hamble Point Marina, School Lane, Hamble, Hamble-le-Rice, Southampton SO31 4NB, UK',
formatted_phone_number: '023 8045 7008',
geometry: { location: [Object], viewport: [Object] },
icon:
'https://maps.gstatic.com/mapfiles/place_api/icons/v1/png_71/generic_business-71.png',
international_phone_number: '+44 23 8045 7008',
name: 'Inspiration Marine Group Ltd',
opening_hours: { open_now: false, periods: [Array], weekday_text: [Array] },
photos: [ [Object], [Object], [Object], [Object] ],
place_id: 'ChIJQ0_yxRpwdEgRziRrjoX0cUM',
plus_code:
{ compound_code: 'VM2P+CV Southampton, UK',
global_code: '9C2WVM2P+CV' },
rating: 4.9,
reference: 'ChIJQ0_yxRpwdEgRziRrjoX0cUM',
reviews: [ [Object], [Object], [Object], [Object], [Object] ],
types: [ 'point_of_interest', 'store', 'establishment' ],
url: 'https://maps.google.com/?cid=4859934327366689998',
user_ratings_total: 19,
utc_offset: 0,
vicinity:
'Chandlery Building, Hamble Point Marina, School Lane, Hamble, Southampton',
website: 'http://www.inspirationmarine.co.uk/' },
status: 'OK' }
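The request itself is a standard Place Details call, something along these lines (sketched here with plain Node https; YOUR_API_KEY is of course a placeholder):

// Sketch of the Place Details request for the same place_id
const https = require('https');

const placeId = 'ChIJQ0_yxRpwdEgRziRrjoX0cUM';
const url = 'https://maps.googleapis.com/maps/api/place/details/json'
  + '?place_id=' + placeId
  + '&key=YOUR_API_KEY';

https.get(url, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const details = JSON.parse(body);
    console.log(details.result.rating, details.result.user_ratings_total); // 4.9 19
  });
});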
I received an answer from Google explaining the difference:
The rating received through the Places API is an arithmetic average.
When the place is shown in Google Maps, the rating is calculated differently; the key phrase in the quote below is "a variety of other signals":
Your score is calculated from user ratings and a variety of other signals to ensure that the overall score best reflects the quality of the establishment.
https://support.google.com/business/answer/4801187
I remember reading somewhere that Google had actually changed this and moved to simple arithmetic averages, but apparently that change was not complete.
Recently I started working with Karate and YAML for the first time. I was able to validate simple response structures where all the response data was on the same level, but now I have to validate a more complicated structure and I have spent a lot of time on it without success.
When I perform a GET request, I receive the following response:
[
{
"factories": [
{
"id": 1,
"scopes": [
{
"id": null,
"name": "name 1",
"expireD": 10,
"isExpired": true
}
],
"companyName": "TEST",
},
{
"id": 2,
"scopes": [
{
"id": null,
"name": "name 2",
"expireD": 7,
"isExpired": true
}
],
"companyName": "TEST2",
}
],
"scopeId": null
}
]
The structure validation is not done directly in the Karate code; it is in a YAML file like this:
operationId: getTest
statusCode: 200
params:
body: null
matchResponse: true
responseMatches:
scopeId: '##number'
factories:
companyName: '#string'
id: '#number'
scopes:
expireD: '#number'
name: '#string'
id: '#null'
isExpired: '#boolean'
I have reviewed the structure around 100 times and I always get the same error when execution reaches this line:
* match response contains responseMatches
The error is the following:
$[1].factories | data types don't match (LIST:MAP)
I have tried using match each, ignoring the structures one by one to see which one is failing, and also reducing the validations to #array, but nothing works.
Any help will be more than welcome. Thank you.
I really recommend NOT using YAML, especially in a testing / validation scenario. But ultimately it is your call.
Here is a tip to save you some time: you can print out the value of the YAML and see where you went wrong. I don't know YAML well (and avoid it as far as possible), but after a few failed attempts I made a guess and managed to insert a - in the right place (apparently there are other ways as well) to turn part of the YAML into a JSON array, which is what you want.
* text foo =
"""
operationId: getTest
statusCode: 200
params:
body: null
matchResponse: true
responseMatches:
scopeId: '##number'
factories:
-
companyName: '#string'
id: '#number'
scopes:
expireD: '#number'
name: '#string'
id: '#null'
isExpired: '#boolean'
"""
* yaml foo = foo
* print foo
Try the above and see how it differs from your example.
Finally, a solution was found. The Karate documentation suggests defining a reusable data structure that can be referenced like a type. I had tried this before, but I had placed the definition before the responseMatches section. The yml file now looks like this:
operationId: getTest
statusCode: 200
params:
body: null
matchResponse: true
responseMatches:
scopeId: '##number'
factories: '#[_ <= 5] factoryStructure'
factoryStructure:
companyName: '#string'
id: '#number'
scopes: '#[] scopeStructure'
scopeStructure:
expireD: '#number'
name: '#string'
id: '#null'
isExpired: '#boolean'
I've scoured all four corners of the interweb trying to find documentation on how to do this, but my journey has been unsuccessful so far. Part way through the search I was able to find out how to mention a user (not a bot), and even that was a pain to find. You have to post a field named msteams at the top level of the "any" object parameter; it is an object consisting of an entities array, and that array is an array of mention objects. The following use of adaptiveCard works when mentioning a user, with the proper values replacing username and userID:
CardFactory.adaptiveCard({
$schema: 'http://adaptivecards.io/schemas/adaptive-card.json',
type: 'AdaptiveCard',
msteams: {
entities: [
{
type: 'mention',
text: '<at>(username)</at>',
mentioned: {
id: <userID>,
name: <username>,
role: 'user'
}
}
]
},
body: [
{
type: 'TextBlock',
text: '<at>(userName)</at>',
}
]
});
The documentation of CardFactory.adaptiveCard just lists the parameter as an any object and gives a small example that does not show an exhaustive list of this parameter's fields. It also links to the Adaptive Cards documentation, but that is what it's abstracting and the fields are not 1:1 (case in point: this msteams object, which is never referenced in the Adaptive Cards documentation as far as I can tell). I want to mention the bot itself that is posting this Adaptive Card. I've attempted to replace the mentioned object with the following:
{
"id": "a3216960-131c-11eb-xxxx-xxxxxxxxx",
"name": "Bot",
"role": "bot"
}
This is equivalent to the object I'm using to mention the "from" user in the Adaptive Card, except this object represents the recipient (the bot). The "from" user, which is successfully mentioned, is formatted like the following:
{
"id": "c3370a7c-95f2-4a60-xxxx-xxxxxxxxx",
"name": "User",
"role": "user"
}
Any help/guidance, tips, references would be greatly appreciated!
Currently, #mentioning a bot in an Adaptive Card is not supported. You can only #mention a user in an Adaptive Card.
I'm new to Firebase and I'm building my first app on it, so I thought I'd ask if my current plans for the app's data structure make sense.
I've read the Firebase blog posts and several answers on SO, which have helped me understand the concept of "optimise for the way the data will be read". However, my data will be read in a few different ways and it feels like I may be overcomplicating things.
Background
The app is like a directory for businesses in multiple towns (schemes) to promote their upcoming events and offers. I think of the data hierarchy like this:
Scheme: A town (the app has multiple schemes)
Category: A group of businesses around a theme (e.g. shoe shops)
Business: An administrative organisation (handles billing etc). Each business can have multiple locations (shops in different towns).
Location: A shop in a town.
Event: Each location can promote events. An event can be promoted at multiple locations but not necessarily all of a business's locations.
Offer: Similar to an event but a different type of object.
Viewing the data
The app user can view the offer & event data in 5 ways:
specific to a business (e.g. Joe's shoes' offers)
for a scheme (e.g. all offers in a Smalltown)
for the whole app (e.g. all offers anywhere)
in a category in a scheme (e.g. all shoe offers in Smalltown)
in a category in the whole app (e.g. all shoe offers anywhere)
In addition, I need to make sure that an administrator from each business can see/edit all of their business's data via a CMS I'm also building.
My approach
This is the data structure I'm thinking of using:
root {
schemes{
scheme1{
name: "smalltown",
logo: "base64 data",
bgcolor: "#FF0000"
},
scheme2{...}
},
businesses{
business1{
name: "Joe's Shoes",
logo: "base64 data",
locations: {
location1: true,
location3: true,
location15: true
},
address_hq: {
street: "45 Acacia Avenue",
town: "Bigtown",
postcode: "BT1 1JS"
},
contact_hq: {
name: "Joe Simpson",
position: "Owner",
email: "joe#joesshoes.com",
tel: "07123 456789"
},
subscription: {
plan: "Standard",
date_start: "10/10/2015",
date_renewal: "10/10/2016"
},
owner: "james1"
},
business2{...}
},
locations{
location1{
name: "Joe's Shoes",
logo: "base64 data",
scheme: "scheme1",
events: {
event1: true,
event27: true
},
offers: {
offer1: true,
offer6: true
},
business: "business1",
owner: "james1"
},
location2{...}
},
events{
event1{
schemes: {
scheme1: true,
scheme4: true
},
locations: {
location1: true,
location21: true
},
categories: {
shoes: true,
footwear: true,
fashion: true
},
business: "business1",
date: "5/5/2016",
title: "The History of Shoes",
description: "A fascinating talk about the way shoes have...",
image: "base64 data",
venue: {
street: "Great Hotel",
town: "Bigtown",
postcode: "BT1 1JS"
},
price: "£10"
},
event2{...}
},
offers{
offer1{
schemes: {
scheme1: true,
scheme4: true
},
locations: {
location1: true,
location21: true
},
categories: {
shoes: true,
footwear: true,
fashion: true
},
business: "business1",
date_start: "5/5/2016",
date_end: "5/5/2016",
title: "All children's shoes Half Price",
description: "Get 50% off all children's shoes - just in time for the summer",
image: "base64 data",
},
offer2{...}
}
}
My question is whether I need to denormalise the data further (repeat more data in more places) or whether there is a better way to think about this altogether.
It feels like I'm creating potential complications by having to keep data in sync without being able to simply read from a single place (e.g. I'll need to use queries and indexes (?) to combine location and event data for scheme-wide event listings).
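For illustration, this is the kind of query I expect to need (a rough sketch only, assuming the Firebase JavaScript SDK; the URL is a placeholder and the key names come from the structure above):

// Listing all offers for one business is a straightforward indexed query,
// but the scheme-wide and category-wide views seem to need similar queries
// against the nested flags, plus ".indexOn" entries in the security rules.
var ref = new Firebase('https://my-app.firebaseio.com');

ref.child('offers')
  .orderByChild('business')
  .equalTo('business1')
  .on('value', function (snapshot) {
    snapshot.forEach(function (offerSnap) {
      console.log(offerSnap.key(), offerSnap.val().title);
    });
  });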
Any advice on making this data structure more efficient would be great.
I would like to create a click stream application using HBase. In SQL this would be a pretty simple task, but in HBase I don't have the first clue. Can someone advise me on a schema design and row keys to use in HBase?
I have provided a rough data model below, along with several questions that I would like to answer from the data.
Questions I would like to ask of the data:
What events led to a conversion?
What was the last page viewed / how many pages were viewed?
On which pages do customers drop off?
What products does a male customer between 20 and 30 like to buy?
Is a customer who has bought product x also likely to buy product y?
What is the conversion amount from the first page?
{
PageViews: [
{
date: "19700101 00:00",
domain: "http://foobar.com",
path: "pageOne.html",
timeOnPage: "10",
pageViewNumber: 1,
events: [
{ name: "slideClicked", value: 0, time: "00:00"},
{ name: "conversion", value: 100, time: "00:05"}
],
pageData: {
category: "home",
pageTitle: "Home Page"
}
},
{
date: "19700101 00:01",
domain: "http://foobar.com",
path: "pageTwo.html",
timeOnPage: "20",
pageViewNumber: 2,
events: [
{ name: "addToCart", value: 50.00, time: "00:02"}
],
pageData: {
category: "product",
pageTitle: "Mans Shirt",
itemValue: 50.00
}
},
{
date: "19700101 00:03",
domain: "http://foobar.com",
path: "pageThree.html",
timeOnPage: "30",
pageViewNumber: 3,
events: [],
pageData: {
category: "basket",
pageTitle: "Checkout"
}
}
],
Customer: {
IPAddress: "127.0.0.1",
Browser: "Chrome",
FirstName: "John",
LastName: "Doe",
Email: "john.doe#email.com",
isMobile: 1,
returning: 1,
age: 25,
sex: "Male"
}
}
Well, your data is mainly a one-to-many relationship: one customer and an array of page view entities. And since all your queries are customer-centric, it makes sense to store each customer as a row in HBase and have the customer id (maybe the email in your case) as part of the row key.
If you decide to store one row per customer, each page view's details would be stored nested inside that row. The video link at the end regarding HBase design will help you understand that. So for your example above, you get one row with three nested entities.
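To make that concrete, here is a rough sketch of how such a row might be laid out (an illustration of the row key and column qualifiers only, not code for any particular client library; the family and qualifier names are made up):

// One row per customer; page views flattened into qualifiers keyed by page view number
var customerRow = {
  rowKey: 'john.doe@email.com',           // customer id as the row key
  columns: {
    'cust:firstName': 'John',
    'cust:age': '25',
    'pv:1:path': 'pageOne.html',          // page view 1
    'pv:1:timeOnPage': '10',
    'pv:1:event:conversion': '100',
    'pv:2:path': 'pageTwo.html',          // page view 2
    'pv:2:event:addToCart': '50.00',
    'pv:3:path': 'pageThree.html'         // page view 3
  }
};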
Another approach would be a denormalized form, so that HBase can perform good lookups. Here each row would be a page view, and the customer data is appended to every row. So for your example above, you end up with three rows and the customer data is duplicated. Again, the video covers that too (including compression).
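Again purely as an illustration (same made-up names), the denormalized version might look like this, with the customer attributes repeated on every row:

// One row per page view; customer data duplicated so each lookup is self-contained
var pageViewRows = [
  {
    rowKey: 'john.doe@email.com_0001',    // customer id + page view number
    columns: {
      'cust:firstName': 'John', 'cust:age': '25', 'cust:sex': 'Male',
      'pv:path': 'pageOne.html', 'pv:timeOnPage': '10'
    }
  },
  {
    rowKey: 'john.doe@email.com_0002',
    columns: {
      'cust:firstName': 'John', 'cust:age': '25', 'cust:sex': 'Male',
      'pv:path': 'pageTwo.html', 'pv:timeOnPage': '20'
    }
  }
];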
You have more nested levels inside each page view, like events and pageData, so it will only get worse with respect to denormalization. Since everything in HBase is a key-value pair, it is difficult to query and match these nested levels. Hope this helps you kick off.
Good video link here
Hello, I am using the Windows Azure SDK for node but can't figure out how to do a batch update using this library:
https://github.com/WindowsAzure/azure-sdk-for-node/blob/master/lib/services/table/batchserviceclient.js
I couldn't find any examples of how to do it. Has anybody used the isInBatch property in tableService.js to do batch updates, deletes and inserts?
Any help or advice would be appreciated.
Cheers
In the Windows Azure SDK for node GitHub repo, take a look at the Blog example under /examples/blog, specifically blog.js. There you'll see sample code, starting around line 91, where a series of blog posts is written to the same partition in an entity group transaction:
provider.tableClient.beginBatch();
var now = new Date().toString();
provider.tableClient.insertEntity(tableName, { PartitionKey: partition, RowKey: uuid(), title: 'Post one', body: 'Body one', created_at: now });
provider.tableClient.insertEntity(tableName, { PartitionKey: partition, RowKey: uuid(), title: 'Post two', body: 'Body two', created_at: now });
provider.tableClient.insertEntity(tableName, { PartitionKey: partition, RowKey: uuid(), title: 'Post three', body: 'Body three', created_at: now });
provider.tableClient.insertEntity(tableName, { PartitionKey: partition, RowKey: uuid(), title: 'Post four', body: 'Body four', created_at: now });
provider.tableClient.commitBatch(function () {
  console.log('Done');
});
Note the point about the partition. This is the only way you can write multiple entities within a single transaction: they must all be in the same partition.
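The same pattern should also cover updates and deletes, along these lines (a sketch only, I haven't verified this exact snippet; it assumes entityOne and entityTwo were retrieved earlier and carry the same PartitionKey plus their RowKeys):

// Sketch: batching an update and a delete against the same partition
provider.tableClient.beginBatch();

entityOne.title = 'Post one (edited)';
provider.tableClient.updateEntity(tableName, entityOne);
provider.tableClient.deleteEntity(tableName, entityTwo);

provider.tableClient.commitBatch(function (error) {
  console.log(error ? 'Batch failed' : 'Batch committed');
});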
EDIT - As @Igorek rightly points out, a single entity group transaction is limited to 100 entities. Also, the entire payload of the transaction may not exceed 4MB. See this MSDN article for all the details around entity group transactions.