How to know if a Google Places Add request did not pass moderation?

I query the Google Places API once a week to see whether my requests to add new places have passed the moderation queue (i.e. whether their scope has changed to google).
The problem is that I don't know how to tell whether a request has been rejected by moderation.
The only solution seems to be to send a get-current-place (or another "search") request and look for the place, since a rejected place should no longer appear in the results, but I'm not really convinced by that solution.
Thanks

Unfortunately there isn't a good or stable way to do this at the moment. Looking at search results gives an approximation, but there are other reasons why a valid place might not appear in the results, so it won't give a strong confirmation.
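If you do go the search-based route, the check might look roughly like the sketch below. This is only an illustration: the Nearby Search endpoint and the scope field are taken from the question above, while API_KEY, the radius, and the name-matching logic are placeholder assumptions.

// Weak presence check: search near the submitted location and see whether
// the place now appears with scope "GOOGLE" (i.e. it passed moderation).
// Absence from the results is NOT proof of rejection, as noted above.
const API_KEY = "YOUR_API_KEY"; // placeholder

async function placeWentLive(name, lat, lng) {
  const url =
    "https://maps.googleapis.com/maps/api/place/nearbysearch/json" +
    `?key=${API_KEY}&location=${lat},${lng}&radius=100` +
    `&keyword=${encodeURIComponent(name)}`;
  const { results = [] } = await (await fetch(url)).json();
  return results.some((r) => r.name === name && r.scope === "GOOGLE");
}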

Related

Recommended way of getting student assignments in Google Classroom API

Trying to get all assignments for a given student but cannot find a reliable (fast) way to do it.
It seems like the only way would be:
Get the student courses via courses.list
Loop through the courses list and call courses.courseWork.list for each
Say that on average a student has 10 courses; then 10 requests have to be made. That takes a while and feels like overkill...
I would like to know if I am missing something, is there a better way?
I guess you are the user who posted the last comment in this Feature Request. Unfortunately, the method you described is the only way.
For anyone facing the same issue: in the Feature Request, you can click the star next to the issue number to receive updates and to give the request more priority.
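For reference, that course-by-course loop might look like this with the googleapis Node client. This is a minimal sketch: auth is assumed to be an already-authorized OAuth2 client, and listAllCourseWork is just an illustrative name.

const { google } = require("googleapis");

// List the signed-in student's courses, then fetch course work per course.
// One request per course is unavoidable, but running them in parallel
// with Promise.all at least hides some of the latency.
async function listAllCourseWork(auth) {
  const classroom = google.classroom({ version: "v1", auth });
  const { data } = await classroom.courses.list({ studentId: "me" });
  const courses = data.courses || [];

  const perCourse = await Promise.all(
    courses.map((c) =>
      classroom.courses.courseWork
        .list({ courseId: c.id })
        .then((res) => res.data.courseWork || [])
    )
  );
  return perCourse.flat();
}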

Mixpanel alias on multiple devices

I'm confused by the way that Mixpanel alias() is supposed to work, despite the fact that Mixpanel have multiple pages attempting to explain it.
According to this page, I should call alias() only once per user, because it will create a one-time mapping from their user ID to the device's generated ID. But shouldn't that mapping be the other way around? Let's say Bob starts my app on his phone and logs in, at which point I call alias() to map all his actions so far to his account. He then goes through the same process on his tablet - I would expect that I can then call alias() on that machine to do the same thing. But the page I mentioned specifically says not to do that, because it will map his user ID to that device's ID now.
I can call identify() on multiple devices, but that does not link his previous events to his user ID.
I feel like I'm misunderstanding how this whole thing works, but I've now spent a few hours pondering this so I'm hoping it's confused someone else in the past too...
I always understood alias() as mapping the identifiers both ways. I've had a similar case to yours, and I'm almost sure that it does not matter how many times you alias or in which direction you alias the identifiers.
This is not authoritative, though; it's based on past usage and a possibly flawed understanding.
As they explain in their help documentation:
https://mixpanel.com/help/questions/articles/how-should-i-handle-my-user-identity-with-the-mixpanel-javascript-library
Ideal implementation
The ideal integration that will allow you to track users from anonymous browsing all the way through signup and subsequent logins:
When a new user signs up, call (once)
mixpanel.alias("YOUR_USER_ID")
When a user logs in, call
mixpanel.identify("YOUR_USER_ID")
Applying this to your question: you need to call identify() when the user logs in on the phone, and again when he logs in on the tablet.
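Put together, the documented flow looks like this (a sketch with the Mixpanel JavaScript library; onSignup and onLogin are placeholder hooks in your own app, not Mixpanel APIs):

// alias() runs exactly once per user, on whichever device they sign up on;
// it links the anonymous distinct_id on that device to your user ID.
function onSignup(userId) {
  mixpanel.alias(userId);
}

// identify() runs on every login, on every device (phone, tablet, ...),
// so subsequent events are attributed to the same user ID.
function onLogin(userId) {
  mixpanel.identify(userId);
}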

Additional validation logic before sending a request to the external system

I'm publishing comments on Instagram using their API. The documentation describes the rules that a message being sent has to pass. So far my approach has always been to add a validation layer just before the message is sent to the service, checking that it satisfies all the requirements. I preferred to get back to the user more quickly with a proper error, without sending any requests to the social network.
This requires maintaining additional logic in my application, and in the case of Instagram, where the rules are not so simple (e.g. not just limiting the length of the message), I started wondering whether that's the optimal approach.
For example, one of the requirements on comments is that they cannot contain more than 4 hashtags, which forces me to keep logic around just to count the hashtags in a string.
Would you think that the effort put into keeping that validation is worth it? I always thought so, but am not so sure any more.
Would you think that the effort put into keeping that validation is worth it? I always thought so, but am not so sure any more.
Absolutely yes, unless you don't care about the user at all.
The user is your primary concern, and their comfort comes before everything else, so double validation is a must for good software.
That said, I think you shouldn't struggle to implement absolutely ALL of Instagram's checks, but at least the ones users fail most often.
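As an illustration of how small such a check can stay, here is a sketch of the hashtag rule in JavaScript. The regex and the limit of 4 follow the rule quoted in the question, not Instagram's actual implementation.

const MAX_HASHTAGS = 4; // per the documented comment rule

function validateComment(text) {
  // Count #word tokens; Unicode-aware so non-Latin hashtags count too.
  const hashtags = text.match(/#[\p{L}\p{N}_]+/gu) || [];
  if (hashtags.length > MAX_HASHTAGS) {
    return {
      ok: false,
      error: `Too many hashtags (${hashtags.length}, max ${MAX_HASHTAGS})`,
    };
  }
  return { ok: true };
}

console.log(validateComment("Nice! #sunset #beach #travel #photo #vibes"));
// -> { ok: false, error: 'Too many hashtags (5, max 4)' }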

Is it possible to query all of the "praise" messages from Yammer?

Yammer has a feature that allows you to "praise" someone. However, looking at the data returned for a "praise" message, there is no top-level flag or attribute that marks it as praise; inside the message there is only an attachment with "type=praise" and a "praised-user-id".
What is the best way to pull this information out of Yammer?
Unfortunately there is no way to pull all praise messages from Yammer at this time. It's worth joining the Yammer Developer Network and tracking updates to the API, but this is not something that I'd expect to see in the near future.
Thinking slightly on a tangent: try using the 'search' API to look for the word 'praised', as this always appears in the body of a praise message. That should return only results containing 'praised'.
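A sketch of that workaround in JavaScript. The search endpoint path, the response shape, and the attachment fields are assumptions based on the question and Yammer's v1 REST conventions, so verify them against the current docs before relying on this.

const TOKEN = "YOUR_OAUTH_TOKEN"; // placeholder

// Search for "praised", then keep only messages that actually carry a
// praise attachment, since the search can match ordinary messages too.
async function fetchPraises() {
  const resp = await fetch(
    "https://www.yammer.com/api/v1/search.json?search=praised",
    { headers: { Authorization: `Bearer ${TOKEN}` } }
  );
  const data = await resp.json();
  // Assumed response shape: search results grouped by type.
  const messages = (data.messages && data.messages.messages) || [];
  return messages.filter((m) =>
    (m.attachments || []).some((a) => a.type === "praise")
  );
}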

How do I get around the Twitter API caching problem?

I'm building a Twitter app that needs to check user data fairly frequently, but I'm running into trouble with a cache that is, oddly, on Twitter's side, not mine.
Try the following user:
users/show in XML: http://twitter.com/users/show.xml?screen_name=technolocus
users/show in JSON: http://twitter.com/users/show.json?screen_name=technolocus
normal page: http://twitter.com/technolocus
All these methods of accessing data should return the same values, right? Check the statuses_count for each of them.
XML: 12548
JSON: 12513
normal: 12498
The normal method (i.e. just visiting the profile non-programmatically) serves up the most up-to-date value of 12498. If I post or delete tweets on this account, the profile page updates instantly, but the XML and JSON methods still return cached data.
At this point, the values from the XML and JSON methods are 12 and 18 hours old, respectively.
I first tried to access these methods from my website (hosted on Dreamhost). I thought it was Dreamhost caching the responses. Then I tried to access the API directly from my browser, and after that I did a cURL from the command line on my machine. It wasn't Dreamhost. I then thought it was probably my ISP (I think they use NetApp or something like that). So I asked a friend in another corner of India to try it, and he gets the exact same cached responses as I do.
So it isn't Dreamhost's cache; it isn't my ISP's or my country's cache. There's only one conclusion: Twitter is caching responses.
How in the heavens do I get around this?!?
Forgot to mention this: The script on the server is in PHP and is using cURL to retrieve the XML and JSON data from Twitter, while the local tests have been just using the browser. Both have the exact same result!
First, I think you should report this as a bug to Twitter. I see the same discrepancy you do, and no matter what, that seems like a bug. Even if they're caching, I'd expect a cache on their side to store an abstract form that is then rendered into HTML, JSON, and XML. I wonder if what's actually going on is that these requests perform similar but different queries.
Are you sure that the values are "old"? For example, did you actually delete about 50 updates recently (since you say the HTML one is newest but shows a lower count than the other two)? If you create another update do you see the HTML number increment while the other numbers stay the same, or do they all increment simultaneously?
If what you are saying is accurate, and it probably is, then generally you can't get around it. Twitter would want to cache its responses, since they are costly to reproduce every single time.
When you use Twitter's APIs, you end up being bound by its conventions, even if that includes caching.
Your best bet is to tweet to #twitterapi and get them to give you a response as to why the two representations are divergent.
Add ?blah=xxxx to all URLs.
I don't develop anything against Twitter, and I occasionally manually "follow" three tweets by going to them in my browser. They always lag behind by half a day. I add ?asdsadsadsad to the URL (something different every time) and it always updates. I don't know what Twitter is doing here; I came here while searching for the problem. But I guess this trick of appending a random value to the URL via GET will probably work for your API requests too.
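That trick is easy to automate: give every request a unique throwaway query parameter so any cache along the way treats it as a new URL. A sketch in JavaScript (the parameter name "_" is arbitrary; servers generally ignore parameters they don't recognize):

// Append a unique value so cached copies of the URL are never reused.
function bustCache(url) {
  const u = new URL(url);
  u.searchParams.set("_", Date.now().toString());
  return u.toString();
}

(async () => {
  const resp = await fetch(
    bustCache("http://twitter.com/users/show.json?screen_name=technolocus")
  );
  console.log((await resp.json()).statuses_count);
})();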
