I want to start by saying that this question is not about the Telegram Bot API. I am trying to fetch images from a channel using the Telegram Core API. The image is in the media property of the message object:
"_": "message",
"pFlags": {
"post": true
},
"flags": 17920,
"post": true,
"id": 11210,
"to_id": {
"_": "peerChannel",
"channel_id": 1171605754
},
"date": 1550556770,
"message": "",
"media": {
"_": "messageMediaPhoto",
"pFlags": {},
"flags": 1,
"photo": {
"_": "photo",
"pFlags": {},
"flags": 0,
"id": "6294134956242348146",
"access_hash": "11226369941418527484",
"date": 1550556770,
I am using the upload.getFile API to fetch the file. For example:
upload.getFile({
  location: {
    _: 'inputFileLocation',
    id: '6294134956242348146',
    access_hash: '11226369941418527484'
  },
  limit: 1000,
  offset: 0
})
But the problem is that it throws the error RpcError: CODE#400 LIMIT_INVALID. From looking at https://core.telegram.org/api/files, it looks like the limit value is invalid. I tried giving limit as
1024000 (1Kb)
20480000 (20Kb)
204800000 (200kb)
But it always returns the same error.
For anyone who is also frustrated with the docs: reading the schema and trying out different things will ultimately work for you. If possible, someone could take up the task of documenting this wonderful open source software.
Coming to the answer: the location object shouldn't contain id or access_hash like in other APIs; rather, it has its own parameters as defined in the Telegram schema.
A message has a media property, which has a sizes object containing three or more size options (thumbnail, preview, websize and so on). Choose the one you need and use its volume_id, local_id and secret properties. The working code will look something like this:
upload.getFile({
  location: {
    _: 'inputFileLocation', // this parameter will change for other file types
    volume_id: volumeId,
    local_id: localId,
    secret: secret
  },
  limit: 1024 * 1024,
  offset: 0
}, {
  isFileTransfer: true,
  createClient: true
})
The following points should be noted:
The limit should be in bytes (not bits).
The offset will be 0 for a small file. For a big file, use offset and limit together to download the file in parts and join them (see the sketch below).
Additional parameters such as isFileTransfer and createClient also exist. I haven't fully understood why they are needed. If I have time, I'll update this later.
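For the big-file case, the loop could look roughly like this. This is a minimal sketch, not documented API usage: totalSize, the shape of the result (result.bytes) and the surrounding client setup are assumptions you will need to adapt to your client.

// Hypothetical sketch: download a file in 1 MB parts and join them.
// Assumes an upload.getFile wrapper as above and a known totalSize in bytes.
async function downloadInParts(location, totalSize) {
  const CHUNK = 1024 * 1024 // limit is in bytes
  const parts = []
  for (let offset = 0; offset < totalSize; offset += CHUNK) {
    const result = await upload.getFile({
      location: location,
      limit: CHUNK,
      offset: offset
    })
    parts.push(result.bytes) // assumed field holding the raw chunk
  }
  return Buffer.concat(parts) // join the parts into the full file
}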
Try using a library that's built on top of the original Telegram library. I'm using Airgram, a JS/TS library, which is a well-maintained repo.
While developing a pipeline that uses Elasticsearch as a source, I ran into an issue related to paging. I am using the Elasticsearch SQL API. Basically, I started by making the request in Postman, and it works well. The request body looks like the following:
{
  "query": "SELECT Id,name,ownership,modifiedDate FROM \"core\" ORDER BY Id",
  "fetch_size": 20,
  "cursor": ""
}
After the first run, the response body contains a cursor string, which is a pointer to the next page. If I send the request in Postman again and provide the cursor value from the previous request, it returns the data for the second page, and so on. I am trying to achieve the same result in Azure Data Factory. For this I am using a copy activity, which stores the response to an Azure blob. The setup for the source is the following:
(screenshot: copy activity source configuration)
This is the expression for the body:
{
  "query": "SELECT Id,name,ownership,modifiedDate FROM \"#{variables('TableName')}\" ORDER BY Id",
  "fetch_size": #{variables('Rows')},
  "cursor": ""
}
I have no idea how to correctly set up the pagination rule. The pipeline works properly, but only for the first request. I've tried setting Headers.cursor with the expression $.cursor, but this setup leads to an infinite loop, and the pipeline fails with the Elasticsearch restriction.
I've also tried to read the document at https://learn.microsoft.com/en-us/azure/data-factory/connector-rest#pagination-support, but it seems pretty limited in terms of usage examples and difficult to understand.
Could somebody help me understand how to build a pipeline that makes use of paging?
The response with the cursor looks like:
{
  "columns": [
    {
      "name": "companyId",
      "type": "integer"
    },
    {
      "name": "name",
      "type": "text"
    },
    {
      "name": "ownership",
      "type": "keyword"
    },
    {
      "name": "modifiedDate",
      "type": "datetime"
    }
  ],
  "rows": [
    [
      2,
      "mic Inc.",
      "manufacture",
      "2021-03-31T12:57:51.000Z"
    ]
  ],
  "cursor": "g/WuAwFaAXNoRG5GMVpYSjVWR2hsYmtabGRHTm9BZ0FBQUFBRUp6VGxGbUpIZWxWaVMzcGhVWEJITUhkbmJsRlhlUzFtWjNjQUFBQUFCQ2MwNWhaaVIzcFZZa3Q2WVZGd1J6QjNaMjVSVjNrdFptZDP/////DwQBZgljb21wYW55SWQBCWNvbXBhbnlJZAEHaW50ZWdlcgAAAAFmBG5hbWUBBG5hbWUBBHRleHQAAAABZglvd25lcnNoaXABCW93bmVyc2hpcAEHa2V5d29yZAEAAAFmDG1vZGlmaWVkRGF0ZQEMbW9kaWZpZWREYXRlAQhkYXRldGltZQEAAAEP"
}
I finally found the solution; hopefully it will be useful for the community.
Basically, what needs to be done is to split the solution into a few steps.
Step 1: Make the first request as in the question description and stage the file to a blob.
Step 2: Read the blob file, get the cursor value, and set it to a variable.
Step 3: Keep requesting data with a changed body:
{"cursor" : "#{variables('cursor')}" }
The pipeline looks like this:
(screenshot: pipeline)
The configuration of pagination looks like the following:
(screenshot: pagination configuration) It is a workaround, as the server ignores this header, but we need something that allows sending the request in a loop.
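For reference, the underlying cursor protocol is straightforward when driven from code. Here is a minimal Node.js sketch of the same loop (the endpoint URL and index are assumptions; add authentication as your cluster requires):

// Hypothetical sketch of Elasticsearch SQL cursor paging.
async function fetchAllPages() {
  const url = 'https://localhost:9200/_sql?format=json' // assumed endpoint
  let body = {
    query: 'SELECT Id,name,ownership,modifiedDate FROM "core" ORDER BY Id',
    fetch_size: 20
  }
  const rows = []
  while (true) {
    const response = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body)
    })
    const page = await response.json()
    rows.push(...(page.rows || []))
    if (!page.cursor) break        // no cursor means this was the last page
    body = { cursor: page.cursor } // next request sends only the cursor
  }
  return rows
}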
I am using Composer to publish a bot that fetches data from an Azure Storage table.
In short, the bot built in Composer needs to iterate through the deserialized XML (as a JSON object) returned by the Azure Storage REST API.
In my Composer-generated code, the bot does a "Set property" step immediately after the REST API (storage table query) returns successfully. Given the deserialized object returned by the storage REST API, how should the "Set property" statement be constructed so the bot can print out the individual data fields?
Another way to phrase the question: how can I use Composer to construct a bot that iterates through a returned deserialized object (coded in XML/JSON format)?
Where can I find a document that can shed some light on this matter?
Is there any place I can find a good example? Can it be done via Composer?
Thanks in advance.
Yes, it can be done. If the API returns XML, make sure you configure your API call to ask for the content type application/xml.
Then you can use the xPath built-in function. Note that it will return an array if more than one value matches the expression, in which case you can use the foreach function to iterate over it. I needed to run the nightly build of Composer (with bot-builder 4.12.0) to get it to work for me. See here for some more info:
https://github.com/microsoft/botbuilder-js/pull/3093
Here's an example that worked for me:
"actions": [
{
"$kind": "Microsoft.SendActivity",
"$designer": {
"id": "rGv7XC"
},
"activity": "${SendActivity_rGv7XC()}"
},
{
"$kind": "Microsoft.HttpRequest",
"$designer": {
"id": "TDA1wO"
},
"method": "GET",
"url": "http://www.geoplugin.net/xml.gp?ip=157.54.54.128",
"resultProperty": "dialog.api_response",
"contentType": "application/xml"
},
{
"$kind": "Microsoft.SetProperty",
"$designer": {
"id": "ipNhfY"
},
"property": "dialog.timezone",
"value": "=xPath(dialog.api_response.content,'/geoPlugin/geoplugin_timezone/text()')"
},
{
"$kind": "Microsoft.SendActivity",
"$designer": {
"id": "DxohEx"
},
"activity": "${SendActivity_DxohEx()}"
}
]
You can (if you need or wish to) use the json and jPath built-in functions to convert the XML to JSON and then query it. Something like:
${json(user.testXml)} and then
${jPath(user.testJson, "automobiles")}
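As a rough sketch in the same action format as above (the property names and the "automobiles" path are just the placeholders from the expressions above, not values from a real bot):

{
  "$kind": "Microsoft.SetProperty",
  "property": "user.testJson",
  "value": "=json(user.testXml)"
},
{
  "$kind": "Microsoft.SetProperty",
  "property": "user.automobiles",
  "value": "=jPath(user.testJson, 'automobiles')"
}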
I am trying out the long running recognize method of the Speech-to-Text API (https://cloud.google.com/speech-to-text/docs/reference/rest/v1p1beta1/speech/longrunningrecognize) and specified all the needed parameters, such as:
{
  "audio": {
    "uri": "gs://xyz/blabla.mp3"
  },
  "config": {
    "languageCode": "en-US",
    "encoding": "AMR_WB",
    "sampleRateHertz": 16000
  }
}
This returned a name I can use with the get operation (https://cloud.google.com/speech-to-text/docs/reference/rest/v1/operations/get).
The documentation says the "operation" JSON object returned by get would include parameters that I do not see in the response.
For example, there is no "done" node. Instead, this is all I get:
{
  "name": "xxxxx",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1.LongRunningRecognizeMetadata",
    "progressPercent": 100,
    "startTime": "2018-06-08T14:40:54.663240Z",
    "lastUpdateTime": "2018-06-08T15:05:01.161911Z"
  }
}
Any idea why that is? Shouldn't it at least return a status and maybe an error (https://cloud.google.com/speech-to-text/docs/reference/rest/v1p1beta1/operations#Operation)?
UPDATE: Now I am getting results, but apparently with server issues. Is it only a temporary glitch?
{
  "name": "xxxxx",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.speech.v1.LongRunningRecognizeMetadata",
    "progressPercent": 100,
    "startTime": "2018-06-08T14:40:54.663240Z",
    "lastUpdateTime": "2018-06-08T15:05:01.161911Z"
  },
  "done": true,
  "error": {
    "code": 13,
    "message": "Server unavailable, please try again later."
  }
}
At first sight, your request is mixing an unsupported MP3 format with a supported audio encoding (AMR_WB).
Let's suppose that this mixture is OK. If you receive an empty response (a transcript is not returned and no errors have occurred), it's probably because the encoding in your file is wrong. Check the validation steps in the preceding link to determine if your sound file has problems; for example, the Cloud Speech-to-Text service currently supports only one audio channel.
To narrow down your issue, you can convert your sound file following the best practices. It should be enough to transcode your file to the lossless FLAC or LINEAR16 encodings with a sampling rate of 16,000 Hz or higher; for the full recommendations, please read the prior link.
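For example, a typical ffmpeg invocation for such a transcode would be something like the following (the file names are placeholders):

ffmpeg -i blabla.mp3 -ac 1 -ar 16000 blabla.flac

Here -ac 1 forces a single audio channel and -ar 16000 sets the sampling rate to 16,000 Hz; the FLAC encoding is inferred from the output extension.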
The error in your last update seems to be temporary; do you still face the issue?
If your issue persists with the new file, it could be a good idea to report the situation in their public issue tracker.
Regards!
I've been reading forums and trying Steam APIs; I'm searching for an API that provides all Steam games.
I found the API providing all Steam apps, and the Steam store API, which provides information for apps (I'm looking for the type 'game'), but with this approach I need to call the store API once for each Steam app... and the store API is limited to 200 calls every 5 minutes! Is this the only solution?
EDIT:
All apps API: http://api.steampowered.com/ISteamApps/GetAppList/v0002/?key=STEAMKEY&format=json
App details API: http://store.steampowered.com/api/appdetails?appids={APP_ID}
There is no "Steam API all games and all their details in one go".
You use GetAppList to get all the Steam apps. Then you have to query each app with appdetails, which will take a long time.
GetAppList: http://api.steampowered.com/ISteamApps/GetAppList/v0002/?format=json
{
  "applist": {
    "apps": [
      {"appid": 10, "name": "Counter-Strike"},
      {"appid": 20, "name": "Team Fortress Classic"},
      {"appid": 30, "name": "Day of Defeat"},
      {"appid": 40, "name": "Deathmatch Classic"}
    ]
  }
}
appdetails: http://store.steampowered.com/api/appdetails?appids=10
{
  "10": {
    "success": true,
    "data": {
      "type": "game",
      "name": "Counter-Strike",
      "steam_appid": 10,
      "required_age": 0,
      "is_free": false,
      "detailed_description": "...",
      "about_the_game": "...",
      "short_description": "...",
      "developers": ["Valve"],
      "publishers": ["Valve"],
      "EVEN_MORE_DATA": {}
    }
  }
}
There is a general API rate limit for each unique IP address of 200 requests in five minutes, which works out to one request every 1.5 seconds.
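Putting the two endpoints together with that limit in mind, a minimal Node.js sketch of the crawl could look like this (error handling and resumption are omitted; treat it as an illustration, not a hardened client):

// Sketch: list all apps, then fetch details one by one,
// sleeping 1.5 s between store calls to stay under the rate limit.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function fetchAllGames() {
  const list = await fetch('http://api.steampowered.com/ISteamApps/GetAppList/v0002/?format=json')
    .then((r) => r.json())
  const games = []
  for (const app of list.applist.apps) {
    const details = await fetch(`http://store.steampowered.com/api/appdetails?appids=${app.appid}`)
      .then((r) => r.json())
    const entry = details[app.appid]
    if (entry && entry.success && entry.data.type === 'game') games.push(entry.data)
    await sleep(1500) // ~200 requests per five minutes
  }
  return games
}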
Another solution would be to use a third-party service such as SteamApis, which offers more options, but such services are inevitably bound to what Steam offers in their API.
A common method here is to cache the results.
So, for example, if you're using something like PHP to query the results, you would do something like json_encode and json_decode with an array/object to hold the last results.
You can get fancy depending on what you want, but basically you'll need to cache the results and then periodically update the oldest entries. A sketch of this idea follows.
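Here is the same idea sketched in Node.js for consistency with the rest of this post (the cache file name and the one-day refresh window are arbitrary assumptions):

// Sketch: file-backed cache that serves fresh entries and refetches stale ones.
const fs = require('fs')

const CACHE_FILE = 'appdetails-cache.json' // assumed location
const MAX_AGE_MS = 24 * 60 * 60 * 1000     // refresh entries older than a day

function loadCache() {
  try { return JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8')) } catch { return {} }
}

async function getAppDetails(appid) {
  const cache = loadCache()
  const hit = cache[appid]
  if (hit && Date.now() - hit.fetchedAt < MAX_AGE_MS) return hit.data // cache hit
  const details = await fetch(`http://store.steampowered.com/api/appdetails?appids=${appid}`)
    .then((r) => r.json())
  cache[appid] = { fetchedAt: Date.now(), data: details[appid] }
  fs.writeFileSync(CACHE_FILE, JSON.stringify(cache)) // persist for the next run
  return cache[appid].data
}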
I know Gravatar allows this, and I'd like to look into Facebook, Google Plus, Twitter, Instagram, and anything else.
Do you guys know of any services like this?
EDIT: to clarify, by "without authentication" I mean without user authentication; I expect to use some sort of API.
One possible approach, using the Facebook Graph API, would be to perform a search against the name for user objects and also retrieve their pictures. Using API version 2.2, it would be a call like:
GET graph.facebook.com
/search?q=miles davis&type=user&fields=id,name,picture
This would retrieve the id, name and picture object for a search on "miles davis"; the first results, as an example:
{
  "data": [
    {
      "id": "805353876185303",
      "name": "Miles Davis",
      "picture": {
        "data": {
          "is_silhouette": true,
          "url": "https://fbcdn-profile-a.akamaihd.net/hprofile-ak-xfp1/v/t1.0-1/c15.0.50.50/p50x50/10354686_10150004552801856_220367501106153455_n.jpg?oh=XXXXXXXXXXXXXXXX&oe=XXXXXX&__gda__=XXXXXXX_XXXXXXXXXXXXX"
        }
      }
    },
    {
      "id": "1572370603017419",
      "name": "Miles Davis",
      "picture": {
        "data": {
          "is_silhouette": false,
          "url": "https://fbcdn-profile-a.akamaihd.net/hprofile-ak-xap1/v/t1.0-1/p50x50/1501761_1382020035385811_1384168153_n.jpg?oh=XXXXXXXXX&oe=XXXXXXXXXXX&__gda__=XXXXXX_XXXXXX"
        }
      }
    },
    ...
The field is_silhouette is false if the user has a profile picture.
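So, to keep only the users who actually uploaded a photo, you could filter the parsed response like this (a small sketch; result stands for the JSON object above):

// Keep only users whose picture is a real photo, not the default silhouette.
const usersWithPhotos = result.data.filter(
  (user) => user.picture && !user.picture.data.is_silhouette
)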