I'm trying to queue a song on Spotify Desktop (Windows 8.1) using the Spotify Remote Control Bridge. I want the song to be appended after the currently playing track.
Due to the restrictions Spotify applies to this API, there's no public documentation and I can't get in contact with their developers. This is one of the posts I've been following to understand how this API works: https://medium.com/#b3ngr33ni3r/hijacking-spotify-web-control-5014b0a1a360
I've successfully played a song with https://XXXX.spotilocal.com/remote/play.json?oauth=XXXX&csrf=XXXX&uri=XXXX, but it starts playing instantly and replaces the queue entirely.
When I call https://XXXX.spotilocal.com/remote/queue.json?oauth=XXXX&csrf=XXXX&uri=XXXX it always returns "Method not implemented". Do I need a special OAuth token, or a CSRF token?
Just giving an update: you can now add tracks to the queue via a beta endpoint of the Web API.
https://developer.spotify.com/documentation/web-api/reference/player/add-to-queue/
I've tested it, and it seems to work as expected.
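If it helps, here is a minimal C# sketch of calling that endpoint with HttpClient. The token is a placeholder and, as far as I know, needs the user-modify-playback-state scope; the track URI is just an example.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class AddToQueueExample
{
    static async Task Main()
    {
        var accessToken = "YOUR_OAUTH_TOKEN"; // placeholder; needs user-modify-playback-state
        var trackUri = Uri.EscapeDataString("spotify:track:4iV5W9uYEdYUVa79Axb7Rh"); // example track

        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // POST /v1/me/player/queue adds the track to the user's playback queue.
            var response = await http.PostAsync(
                "https://api.spotify.com/v1/me/player/queue?uri=" + trackUri, null);

            Console.WriteLine((int)response.StatusCode); // 204 on success
        }
    }
}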
queue.json endpoint
This endpoint appears in their JS library, but it has never worked and is, as you said, not implemented, no matter which arguments you supply.
play.json endpoint
So, this endpoint is more interesting. In the past you could add a song to the queue by passing ?action=queue, but sadly that no longer works with the latest versions, for whatever reason. The only thing you can currently supply is the play context via ?context. A context basically tells Spotify what to play next (like setting a new queue). So if you want to play a track from an album and have the rest of the album continue once that track ends, you can supply ?context=spotify:album:albumid. There is some more information about this in this issue of my library.
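For illustration, such a request has the same shape as the one you're already sending; every value here is a placeholder:

https://XXXX.spotilocal.com/remote/play.json?oauth=XXXX&csrf=XXXX&uri=spotify:track:TRACKID&context=spotify:album:ALBUMID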
To summarize: you currently can't add individual songs to the Spotify queue, but you can supply your own context, which will then be used as the upcoming queue.
It would be nice to know why Spotify isn't releasing any documentation for the local API, though.
We are working on implementing Flex with callback and voicemail functions, as documented in: https://www.twilio.com/docs/flex/solutions-library/queued-callback-and-voicemail
It's quite close to what we want, but we would like to be able to access voicemail messages even after an agent has handled the task and the task has been completed. I can't seem to find where those recordings are available.
In that plugin, the voicemail audio URL is attached to the Task's attributes. According to the TaskRouter documentation on the lifecycle of a Task, once a Task has reached a terminal state (canceled or completed), it is deleted 5 minutes later.
So, you cannot retrieve the voicemail URL from the task.
Call recordings are available through the Twilio REST API though. If you wanted to build a page in Flex that listed the call recordings that have been left as part of this workflow, you could, for example, fetch the recordings from the API (using a Twilio Function) and render them into the Flex page.
That will give you the recordings without any metadata, though. If you need to keep the metadata as well, you will likely need to dig into the queued callback and voicemail plugin to find somewhere to persist that data in your own permanent store.
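As a rough sketch of the REST call itself (shown here with the Twilio C# helper library; a Twilio Function in Node would follow the same pattern, and the credentials are placeholders), listing recent recordings and building a playable media URL for each might look like this:

using System;
using Twilio;
using Twilio.Rest.Api.V2010.Account;

class ListVoicemails
{
    static void Main()
    {
        // Placeholder credentials; use your real Account SID and auth token.
        TwilioClient.Init("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token");

        // Fetch the most recent recordings; filter further by CallSid or date as needed.
        foreach (var recording in RecordingResource.Read(limit: 20))
        {
            // The resource URI ends in .json; swapping the extension gives the playable media URL.
            var mediaUrl = "https://api.twilio.com" + recording.Uri.Replace(".json", ".mp3");
            Console.WriteLine($"{recording.DateCreated}  {recording.CallSid}  {mediaUrl}");
        }
    }
}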
I've been following this tutorial and have reached the point where I am able to receive push notifications (only working with android for now). My code is almost identical to the tutorial's. I'm now looking to expand the functionality. In the tutorial, when the app receives a RemoteMessage object, it parses out the "action" value from the data. It then passes that string to the NotificationActionService which triggers an action.
public override void OnMessageReceived(RemoteMessage message)
{
    // The tutorial only forwards the "action" value from the notification's data payload.
    if (message.Data.TryGetValue("action", out var messageAction))
        NotificationActionService.TriggerAction(messageAction);
}
The downside to this is that the only information it passes to the rest of the program is the name of the action. I want to add additional information. I would usually just add another parameter to the TriggerAction method, but the implementation of INotificationActionService is pretty involved. I'm wondering if it's like that for a reason, or if I can just process my message in OnMessageReceived. What makes me hesitant to change this is that this action string is also pulled from the Intent on startup, and I'm not sure whether changing it will break that. I'm not entirely sure how Android intents work, but both the RemoteMessage and the Intent would need to carry this extra data in their dictionaries.
So, what is the best way to modify this tutorial to allow extra context to be passed in the push notification?
This is a good question, and realistically there isn't just one answer. Basically, every Android application is a collection of Activities and Services. You can think of them like independent threads that the OS is aware of and can help manage. Intents are a standardized way to communicate between these components using a small set of types that are safe to serialize, so the OS can make stronger guarantees about how and when they'll be delivered. There's a lot of documentation and a whole world of different ways to architect your application with these. Each approach has pros and cons, with some options being way too sophisticated for some applications and others way too simple.
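To make that concrete, here is a minimal Xamarin.Android sketch (not taken from the sample; the activity name and the extra keys are made up) of how extra context is attached to an Intent and read back out on the other side:

using Android.App;
using Android.Content;
using Android.OS;

[Activity(Label = "DetailActivity")]
public class DetailActivity : Activity
{
    // Launch this activity with extra context attached to the Intent.
    public static void Start(Context context, string action, string orderId)
    {
        var intent = new Intent(context, typeof(DetailActivity));
        intent.PutExtra("action", action);    // the value the sample already forwards
        intent.PutExtra("orderId", orderId);  // hypothetical extra payload
        context.StartActivity(intent);
    }

    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        // The same values come back out of the delivered Intent.
        var action = Intent.GetStringExtra("action");
        var orderId = Intent.GetStringExtra("orderId");
    }
}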
The Xamarin sample you're referencing keeps two separate threads: one for receiving remote notifications and one for rendering notifications. In principle, a developer may do this so notifications can be rendered in response to a message from a remote service OR in response to events local to the phone. For instance, my banking app alerts me that I'm being logged out after 15 minutes of inactivity, and also when new tax documents are available. The first scenario is best served locally, where a notification is rendered because a timer reached 15 minutes without being reset. The second is better served by a remote notification so the app doesn't need to poll for new documents.
Bottom line - the sample app may be using an approach that introduces more overhead than your scenario calls for. For others it will be too simple. Choose what is right for your application.
I'm having some trouble understanding how to implement presence channels in a real-time Laravel application.
From what I've read in the documentation and a lot of other online resources, I only need to broadcast on a presence channel and have clients listen on that particular channel. By the way, I'm using Laravel 5.6, and on the front end I use Laravel Echo.
So, my problem is the channel name I need to broadcast to. If it's something generic like "chat", ALL the users in my application will broadcast to this channel, and users who have no idea who the broadcasting user is (not a friend) will still receive the notification and have to process this new information. Of course I can choose not to update the UI, or to do nothing if the user is not in their friends list, but that just seems like a lot of useless processing of notifications on the client side. It doesn't seem like a good idea in my opinion.
The second option would be to broadcast presence to a unique channel name like "chat-[unique]", where "[unique]" would be something like the logged-in user's id/hash, but this means that every client that logs in to the application has to listen for ALL friends' notifications, so they have to connect to chat-5426, chat-9482, chat-4847 and so on, for every friend. Again, this does not seem efficient. But that's not all. The friends list is paginated, so a user who has just logged in only sees their first 20 friends (unless they scroll down), and I have no limit on how many friends a user can have. I could implement a limit, but it would still be in the thousands, so I don't think I can get all the users from the DB in one query. I had the idea of using this last method and listening to each user's channel on the front end just as they are loaded, paginated: when scrolling and navigating around, if a new user becomes visible in the viewport, add them to my friends object (no UI change) and start listening on their presence channel. I can see this method failing pretty easily, though.
However I think about this, it always seems like online presence is very resource-consuming and almost not worth it for a small startup. I have no idea what a good way to implement it would be, as I've never done it before. I would greatly appreciate any help with this, because all the online resources I've found on the subject implement the first method I asked about, with all users connecting to a generic channel. That always works in tutorials because they only have 2-3 users in the DB, and none of them mention a user having friends. I can't see it working in the real world, but I may be wrong.
Thanks in advance
I am trying to navigate a known IVR that ends with the last input forwarding to a real person. When that person picks up, I want Twilio to make a callback to my app so it can play an mp3. Using sendDigits (https://www.twilio.com/docs/api/rest/making-calls#post) with call creation works for navigating the menu, but the callback to the url parameter happens when the call is answered by the IVR instead of by the end user. So in that case, the mp3 is already playing by the time the person the call is forwarded to answers.
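For context, the call creation looks roughly like this (a simplified sketch with the C# helper library; credentials, numbers, digits, and URLs are placeholders):

using System;
using Twilio;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

class OutboundIvrCall
{
    static void Main()
    {
        TwilioClient.Init("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token");

        var call = CallResource.Create(
            to: new PhoneNumber("+15551234567"),
            from: new PhoneNumber("+15557654321"),
            // Each "w" adds a half-second pause before the next digit while the IVR menu plays.
            sendDigits: "ww1ww2ww3",
            // Twilio requests TwiML from this URL as soon as the IVR answers,
            // not when the forwarded human picks up - which is the problem.
            url: new Uri("https://example.com/play-mp3"));

        Console.WriteLine(call.Sid);
    }
}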
The other way I was thinking of trying involved not using sendDigits with call creation, but instead using a different url callback to return TwiML and using Play (https://www.twilio.com/docs/api/twiml/play) to play the DTMF tones needed. In this scenario, though, reading through the docs, I don't see a way to set a callback URL that would be called when the call is forwarded and the person picks up.
Any suggestions?
Twilio developer evangelist here.
If you want a webhook when your DTMF tones are finished and the call is put through to a human, you could try your second option using <Play> to send DTMF tones and then use a <Redirect> to cause a new webhook to occur. Like this:
<Response>
    <Play digits="1234"></Play>
    <Redirect>http://example.com/play_mp3</Redirect>
</Response>
If you find that you are still playing the mp3 before a person has actually answered, you can use <Pause> to wait before sending the <Redirect>.
Let me know if that helps at all.
I am developing an application for Windows Phone 7 in which, on a button click, I need to first send some text messages and then make a call. But since both processes are user-dependent, I don't know how to make it so that my app does not initiate the call until the user has finished sending the messages. Unless I do this, I get a thread abort exception.
Thanks, nil
With the current SDK there is no way to know if the SMS was actually sent. It could also have been changed by the user before being sent!
Lots of people have asked for this functionality (or similar but for other tasks). Let's hope it comes in a future update.
I believe you can't do it in parallel, because WP7 doesn't really multitask.
Do you really need to do it in parallel?
Search for the Deactivated and Activated events. They are in App.xaml.cs.
After you make a call and come back to the program, the Activated event will fire, so you can add code there to send the SMS.
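A rough sketch of that approach (phone numbers, message text, and the setting key are placeholders):

using System.IO.IsolatedStorage;
using System.Windows;
using Microsoft.Phone.Controls;
using Microsoft.Phone.Shell;
using Microsoft.Phone.Tasks;

public partial class MainPage : PhoneApplicationPage
{
    // Button click: remember that an SMS still has to go out, then start the call.
    private void CallThenSms_Click(object sender, RoutedEventArgs e)
    {
        IsolatedStorageSettings.ApplicationSettings["smsPending"] = true;
        IsolatedStorageSettings.ApplicationSettings.Save();

        new PhoneCallTask { PhoneNumber = "5551234567", DisplayName = "Support" }.Show();
    }
}

public partial class App : Application
{
    // App.xaml.cs already contains this handler; add the body there.
    private void Application_Activated(object sender, ActivatedEventArgs e)
    {
        if (IsolatedStorageSettings.ApplicationSettings.Contains("smsPending"))
        {
            IsolatedStorageSettings.ApplicationSettings.Remove("smsPending");
            IsolatedStorageSettings.ApplicationSettings.Save();

            // Depending on timing you may need to defer this until the first page has loaded.
            new SmsComposeTask { To = "5551234567", Body = "Hello!" }.Show();
        }
    }
}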
I did it the reverse way: first make the call, and then, when the user comes back after tombstoning, send an email... but the flag state needs to be saved in isolated storage.