Events of the Firebase REST API

I am developing against a REST API on an embedded system, and I want to use an event stream to get updated data from Firebase, the way Android's onChildChanged callback does.
At the same time, I learnt from the docs that the REST API only supports regular requests like POST, so the server only ever responds to what the client asks.
If I poll for updated data, it wastes memory and there is a delay before I get the latest data.
So how can I implement such a callback with the REST API, like Android does?

In addition to the regular REST verbs (POST, GET, PUT, DELETE and PATCH), Firebase's REST API also supports streaming events using the Event Source / Server-Sent Events protocol. This part of the API is specifically made for use-cases such as the one you are describing, allowing multiple updates to be sent from the server to a connected client through a single HTTP connection.
These are some examples of events that the server might send:
// Set your entire cache to {"a": 1, "b": 2}
event: put
data: {"path": "/", "data": {"a": 1, "b": 2}}
// Put the new data in your cache under the key 'c', so that the complete cache now looks like:
// {"a": 1, "b": 2, "c": {"foo": true, "bar": false}}
event: put
data: {"path": "/c", "data": {"foo": true, "bar": false}}
// For each key in the data, update (or add) the corresponding key in your cache at path /c,
// for a final cache of: {"a": 1, "b": 2, "c": {"foo": 3, "bar": false, "baz": 4}}
event: patch
data: {"path": "/c", "data": {"foo": 3, "baz": 4}}
This data was taken from the blog post announcing the REST streaming feature, which also contains a good description of it. There are also example REST streaming clients implemented in Python and Ruby.
I hope these links are enough to get you started. If you are having trouble getting it to work, open a question where you show the code that fires the HTTP request and handles the response.
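If your environment has a JavaScript runtime, a minimal sketch of a streaming client using the standard EventSource API could look like the following. The database URL is a placeholder and only root-path events are handled; this is an illustration of the protocol above, not an official client.

// Minimal sketch of a REST streaming client, assuming EventSource is available.
// https://example-db.firebaseio.com is a placeholder; stream from the .json
// endpoint of the path you want to watch. EventSource sends the required
// "Accept: text/event-stream" header automatically.
const source = new EventSource('https://example-db.firebaseio.com/.json');
let cache = {};

source.addEventListener('put', (e) => {
  const { path, data } = JSON.parse(e.data);
  // A put replaces everything under `path`; this sketch only handles the root.
  if (path === '/') cache = data;
});

source.addEventListener('patch', (e) => {
  const { path, data } = JSON.parse(e.data);
  // A patch merges each key of `data` into the cache at `path`.
  if (path === '/') Object.assign(cache, data);
});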

Related

Google Play Developer API cancelSurveyResult missing from SubscriptionPurchase.Get response

We are implementing server to server notifications for Android in-app purchases.
For each of the user-generated events (purchase, cancellation, etc.), we receive and parse real-time developer notifications, according to Google Play's RTDN reference.
After parsing each notification, we submit the subscription to the play store for verification, according to Google Play Developer API purchases.subscriptions.
The JSON responses we expect back should correspond to the action that the user has taken. For example, after the SUBSCRIPTION_PURCHASED notification was received, the verification response looks like this:
{
  "startTimeMillis": "1615196368232",
  "expiryTimeMillis": "1615196783756",
  "autoRenewing": true,
  "priceCurrencyCode": "EUR",
  "priceAmountMicros": "3990000",
  "countryCode": "DE",
  "developerPayload": "",
  "paymentState": 1,
  "orderId": "GPA.3391-4718-6063-47314",
  "purchaseType": 0,
  "acknowledgementState": 1,
  "kind": "androidpublisher#subscriptionPurchase"
}
Now, the headaches start when the user cancels their subscription (via the Play Store app), thus generating a SUBSCRIPTION_CANCELED notification. The verification response looks like this:
{
  "startTimeMillis": "1615196368232",
  "expiryTimeMillis": "1615196664327",
  "autoRenewing": false,
  "priceCurrencyCode": "EUR",
  "priceAmountMicros": "3990000",
  "countryCode": "DE",
  "developerPayload": "",
  "cancelReason": 0,
  "userCancellationTimeMillis": "1615196648401",
  "orderId": "GPA.3391-4718-6063-47314",
  "purchaseType": 0,
  "acknowledgementState": 1,
  "kind": "androidpublisher#subscriptionPurchase"
}
There are some new fields present, defining the cancellation:
"cancelReason": integer,
"userCancellationTimeMillis": string,
This is great and works as expected. However, there are still some cancellation fields missing. Namely, the SubscriptionCancelSurveyResult, defined as:
"cancelSurveyResult": {
"cancelSurveyReason": integer,
"userInputCancelReason": string
}
The documentation mentions for both userInputCancelReason and userCancellationTimeMillis:
Only present if cancelReason is 0.
cancelReason is already 0, as can be seen in the response above. However, only userCancellationTimeMillis is present. Why is there no cancelSurveyResult and no userInputCancelReason?
Your app can look at the cancelReason in the subscription resource returned from the Google Play Developer API to learn why the subscription was cancelled (e.g. the customer cancelled or had billing issues). If the subscription was cancelled by the user, your app can look at the cancelSurveyResult field to learn why the user cancelled the subscription.
Source: https://developer.android.com/google/play/billing/subscriptions#cancel
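As a hedged sketch of how an app might read these fields from a parsed verification response (field names are taken from the responses above; the guard reflects the observation that cancelSurveyResult is not guaranteed to be present even when cancelReason is 0):

// Sketch: interpreting the cancellation fields on a parsed
// SubscriptionPurchase response.
function readCancellation(subscription) {
  // cancelReason 0 means the user cancelled the subscription.
  if (subscription.cancelReason !== 0) {
    return { cancelledByUser: false };
  }
  // Guard for cancelSurveyResult: as observed above, it can be absent even
  // when cancelReason is 0 (presumably when no survey answer was recorded).
  const survey = subscription.cancelSurveyResult;
  return {
    cancelledByUser: true,
    cancelledAtMillis: Number(subscription.userCancellationTimeMillis),
    surveyReason: survey ? survey.cancelSurveyReason : null,
    surveyInput: survey ? survey.userInputCancelReason : null
  };
}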

How to download an image/media using telegram API

I want to start by saying that this question is not about the Telegram Bot API. I am trying to fetch images from a channel using the Telegram core API. The image is in the media property of the message object:
"_": "message",
"pFlags": {
"post": true
},
"flags": 17920,
"post": true,
"id": 11210,
"to_id": {
"_": "peerChannel",
"channel_id": 1171605754
},
"date": 1550556770,
"message": "",
"media": {
"_": "messageMediaPhoto",
"pFlags": {},
"flags": 1,
"photo": {
"_": "photo",
"pFlags": {},
"flags": 0,
"id": "6294134956242348146",
"access_hash": "11226369941418527484",
"date": 1550556770,
I am using the upload.getFile API to fetch the file. For example:
upload.getFile({
  location: {
    _: 'inputFileLocation',
    id: '6294134956242348146',
    access_hash: '11226369941418527484'
  },
  limit: 1000,
  offset: 0
})
But the problem is that it throws the error RpcError: CODE#400 LIMIT_INVALID. From looking at https://core.telegram.org/api/files, it looks like the limit value is invalid. I tried giving limit as
1024000 (1Kb)
20480000 (20Kb)
204800000 (200kb)
But it always returns the same error.
For anyone who is also frustrated with the docs: using the API, reading the schema, and trying out different things will ultimately get you there. If possible, someone could take up the task of documenting this wonderful open-source software.
Coming to the answer: the location object shouldn't contain id or access_hash like in other APIs; rather, it has its own parameters, as defined in the Telegram schema.
A message has a media property, which has a sizes array. It contains three or more size options (thumbnail, preview, web size and so on). Choose the one you need and use its volume_id, local_id and secret properties. The working code will look something like this:
upload.getFile({
  location: {
    _: 'inputFileLocation', // this parameter will change for other file types
    volume_id: volumeId,
    local_id: localId,
    secret: secret
  },
  limit: 1024 * 1024,
  offset: 0
}, {
  isFileTransfer: true,
  createClient: true
})
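For reference, here is a hedged sketch of pulling those three properties out of the message shown in the question. It assumes message is that object, and that each entry in photo.sizes carries a location with volume_id, local_id and secret, per the schema in use at the time:

const photo = message.media.photo;
// sizes is ordered roughly from smallest to largest; take the last entry
// for the highest available quality.
const size = photo.sizes[photo.sizes.length - 1];

const location = {
  _: 'inputFileLocation',
  volume_id: size.location.volume_id,
  local_id: size.location.local_id,
  secret: size.location.secret
};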
The following points should be noted:
limit should be in bytes (not bits).
offset will be 0 to start. For a big file, use offset and limit together to download the file in parts and join them.
Additional parameters such as isFileTransfer and createClient also exist. I haven't fully understood why they are needed; if I have time, I'll update this later.
Alternatively, try using a library built on top of the original Telegram library. I'm using Airgram, a JS/TS library that is a well-maintained repo.

Better way to query server-provided fields in GraphQL

I'm thinking about moving our API from a standard HTTP endpoint to GraphQL.
On the client we have data tables in which the user can choose what columns they want to have visible.
Users can save their column layout for the next time they open the application.
The problem is that the first time such a table is shown, the client requires two things:
Which columns the user has saved
The data of those columns, for each record
Currently we have a single endpoint returning both of these in JSON.
URL: /contacts/table
{
  "columns": ["a", "b", "c"],
  "rows": [
    {
      "a": 0,
      "b": 1,
      "c": 2
    }
  ]
}
Is there a nice way to get the above working in a GraphQL API?
Some things I've tried:
Have an API that returns key-value pairs:
{
  "columns": ["a", "b", "c"],
  "rows": [
    [
      {
        "key": "a",
        "value": 0
      },
      {
        "key": "b",
        "value": 1
      }
    ]
  ]
}
Problems with this approach:
It has a lot of overhead; the client has to fold each row of pairs back into an object (see the sketch below).
It feels like reinventing the wheel, seeing as JSON is already key/value based.
It doesn't work well with caching frameworks such as Apollo when the same type needs to be accessed in a non-table way.
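To make that overhead concrete, a hedged sketch of the client-side step this approach forces, with hypothetical data matching the shape above:

// Reconstruct plain row objects from the key/value representation.
function foldRows(rows) {
  return rows.map((pairs) =>
    pairs.reduce((row, { key, value }) => {
      row[key] = value; // one entry per column
      return row;
    }, {})
  );
}

// foldRows([[{ key: 'a', value: 0 }, { key: 'b', value: 1 }]])
// -> [{ a: 0, b: 1 }]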
Custom syntax
query {
  tables {
    contacts {
      columns #provideFragment("ContactColumns")
    }
  }
  contacts {
    ...ContactColumns
  }
}
Problems with this approach:
It's not standard (as far as I know), so it might not work with other tooling.

Steam API all games

I've been reading forums and trying Steam APIs; I'm searching for an API which provides all Steam games.
I found the API providing all Steam apps, and the Steam Store API which provides information for apps (I'm looking for type: 'game'), but for this I need to call the Store API once for each Steam app... and the Store API is limited to 200 calls every 5 minutes! Is this the only solution?
EDIT:
All Apps API : http://api.steampowered.com/ISteamApps/GetAppList/v0002/?key=STEAMKEY&format=json
App details API : http://store.steampowered.com/api/appdetails?appids={APP_ID}
There is no "Steam API all games and all their details in one go".
You use GetAppList to get all the Steam apps. Then you have to query each app with appdetails, which will take a long time.
GetAppList : http://api.steampowered.com/ISteamApps/GetAppList/v0002/?format=json
{
  "applist": {
    "apps": [
      { "appid": 10, "name": "Counter-Strike" },
      { "appid": 20, "name": "Team Fortress Classic" },
      { "appid": 30, "name": "Day of Defeat" },
      { "appid": 40, "name": "Deathmatch Classic" }
    ]
  }
}
appdetails : http://store.steampowered.com/api/appdetails?appids=10
{
  "10": {
    "success": true,
    "data": {
      "type": "game",
      "name": "Counter-Strike",
      "steam_appid": 10,
      "required_age": 0,
      "is_free": false,
      "detailed_description": "...",
      "about_the_game": "...",
      "short_description": "...",
      "developers": ["Valve"],
      "publishers": ["Valve"],
      "EVEN_MORE_DATA": {}
    }
  }
}
There is a general API rate limit for each unique IP address of 200 requests in five minutes, which is one request every 1.5 seconds.
Another solution would be to use a third-party service such as SteamApis which offers more options but they are inevitably bound to what Steam offers in their API.
A common method here is to cache the results.
So for example, if you're using something like PHP to query the results, you would use json_encode and json_decode with an array/object to hold the last results.
You can get fancy depending on what you want, but basically you'll need to cache the results and then periodically update the oldest entries.
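That advice translates directly to other languages. Here is a hedged sketch in Node.js (18+, for the global fetch), assuming a plain JSON file as the cache; the file name is arbitrary:

const fs = require('fs');

const CACHE_FILE = 'appdetails-cache.json'; // arbitrary cache location
const cache = fs.existsSync(CACHE_FILE)
  ? JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'))
  : {};

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchAllGames() {
  const list = await fetch(
    'http://api.steampowered.com/ISteamApps/GetAppList/v0002/?format=json'
  ).then((r) => r.json());

  for (const { appid } of list.applist.apps) {
    if (cache[appid]) continue; // already fetched on a previous run
    const details = await fetch(
      `http://store.steampowered.com/api/appdetails?appids=${appid}`
    ).then((r) => r.json());
    cache[appid] = details[appid];
    fs.writeFileSync(CACHE_FILE, JSON.stringify(cache)); // persist as we go
    await sleep(1500); // stay under ~200 requests per five minutes
  }

  // Keep only entries that resolved successfully and are of type "game".
  return Object.values(cache).filter(
    (e) => e && e.success && e.data.type === 'game'
  );
}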

ember-data - best strategy for loading complex data structures

I am using ember-data 2.0. I want to load all data from the server at startup. Then the app is really fast after the initial loading. The application is mobile, and may have varying bandwidth. I want to load the data as fast as possible. I want to use JSON API to comply with the newest ways of using ember-data.
Each object is small, just a handful of attributes. But there may be 200 such objects to load.
Is it best to do many small ajax calls or one big one? I imagine it is faster to use one big call that contains all my objects, but is that possible with the JSON API and the out-of-the-box adapters?
Of course, I could just use findAll() but that will only load all objects of one type. I have several object types in my model that are connected with hasMany and belongsTo relationships.
If I use the RestAdapter out of the box, that will result in many ajax calls, right?
What is the best strategy to use and how to implement that with ember-data and the adapters?
You can load different types into the store from a single payload out of the box with Ember Data. By default, the JSON serializer assumes that each root node in the payload corresponds to a type.
Let's say you have models:
/models/post.js
import DS from 'ember-data';

export default DS.Model.extend({
  title: DS.attr('string'),
  body: DS.attr('string'),
  comments: DS.hasMany('comment')
});
/models/comment.js
import DS from 'ember-data';

export default DS.Model.extend({
  comment: DS.attr('string'),
  post: DS.belongsTo('post')
});
And you have an endpoint /posts which responds with a payload like this:
{
  "posts": [
    {
      "id": 1,
      "title": "Post 1",
      "body": "Post 1 body",
      "comment_ids": [1, 2]
    },
    {
      "id": 2,
      "title": "Post 2",
      "body": "Post 2 body",
      "comment_ids": [3]
    }
  ],
  "comments": [
    {
      "id": 1,
      "comment": "Comment 1",
      "post_id": 1
    },
    {
      "id": 2,
      "comment": "Comment 2",
      "post_id": 1
    },
    {
      "id": 3,
      "comment": "Comment 3",
      "post_id": 2
    }
  ]
}
So when you do store.findAll('post'), Ember Data will request the /posts endpoint, receive the payload, and push both posts and comments to the store. All relations will be resolved properly.
But there are some notes:
If you define a model relation with { async: true }, Ember Data will always hit the server to fetch the related records.
Everywhere in the app you will need to use store.peekRecord() and store.peekAll() instead of store.findRecord() and store.findAll() in order to get records from the store rather than from the server.
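For illustration, a small sketch of that second note, assuming the payload above has already been loaded via findAll('post'); peekRecord and peekAll have been available since Ember Data 1.13:

// Reads come from the local store; no network requests are made.
const posts = this.store.peekAll('post');
const comments = this.store.peekAll('comment');
const firstPost = this.store.peekRecord('post', 1);

// With { async: false } on the relations, relationship access also
// resolves locally, e.g. comments: DS.hasMany('comment', { async: false })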
