I am using Streamlink to help some of my technically challenged older friends watch streams from selected sites that webcast live two or three times a week and are ingested into YouTube. In between webcasts, it would be nice to show the user when the next one will begin via the app's Status page.
The platform is a Raspberry Pi 3 B+. I have modified the YouTube plugin to allow or prohibit non-live streams: if '--youtube-live-required' is on the command line, only live streams will play. This prevents the live webcast from restarting after it has ended, and also prevents videos that YouTube randomly selects from playing. I have also applied a soon-to-be-released patch that fixes a breaking change YouTube made recently. I mention these so you know that I have at least a minimal understanding of the Streamlink code and am not looking for a totally free ride. But for some reason, I cannot get my head around how to add a feature to get the 'scheduledStartTime' value from the youtube.py plugin. I am hoping someone with a deep understanding of the Streamlink code can toss me a clue or two.
Once the 'scheduledStartTime' value is obtained (it is in epoch notation), a custom module will send that value to the onboard Python server via Socket.IO, which can then massage the data and push it to the Status page of connected clients.
Within an infinite loop, Popen starts Streamlink. Popen's output is piped and observed in order to learn what is happening, and that info is then sent to the server, again using Socket.IO. It is within this loop that the 'scheduledStartTime' data would be gleaned (I think).
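Simplified, that observer loop looks something like this (the real command is the streamlink invocation; the marker string shown is a placeholder for whatever line of output is being watched for):

```python
import subprocess

def watch_stream(cmd):
    """Run a command and watch its stdout line by line for lines of interest.
    In the real app, matches are forwarded to the server via Socket.IO;
    here they are simply collected and returned."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    events = []
    for line in proc.stdout:
        line = line.strip()
        if line.startswith("SCHEDULE START TIME:"):
            events.append(line.split(":", 1)[1])
    proc.wait()
    return events
```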
How do I solve the problem?
I have a solution to this problem that I am not very proud of, but it solves the problem and I can finally close this project. It also turns out that this solution did not have to touch Streamlink's youtube.py plugin, but since the plugin fetches the contents of the URL of interest anyway, I decided to hack it and keep all of this business in one place.
In a nutshell, a simple regex gets the value of scheduledStartTime if it is present in the fetched URL contents. The hack: that value is printed out as the string 'SCHEDULE START TIME:epoch time value', which surfaces through Streamlink via the Popen pipe, where a custom module polls for such information. Socket.IO then sends the info to the on-board server, which sends a massaged version of it to the app's Status page (Ionic framework, TypeScript, etc.). Works. Simple. Ugly. Done.
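For reference, a minimal version of that regex hack might look like this (the exact quoting around scheduledStartTime in the fetched page is an assumption and may change on YouTube's side):

```python
import re

# "scheduledStartTime" appears in the fetched page as a quoted string of
# epoch seconds; the exact quoting here is an assumption about the format.
_SCHEDULED_RE = re.compile(r'"scheduledStartTime"\s*:\s*"(\d+)"')

def extract_scheduled_start(page_text):
    """Return the scheduled start time as an int epoch value, or None."""
    match = _SCHEDULED_RE.search(page_text)
    return int(match.group(1)) if match else None
```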
I have an app built with libVLC on Ubuntu Linux, using the standard installed libvlc-dev libraries, plugins, headers, etc.
My app generally works perfectly; it is mostly there to receive UDP streams and convert them to something else.
However, I am having a really weird issue in one specific mode, and so far I have invested around 30 development hours of trial and error trying to solve it. I am hoping some VLC genius here can unlock the puzzle.
It revolves around http:// based URL sources, typically HLS, but the same issue happens with any HTTP-based source.
IMPORTANT: If I launch my app in a terminal, everything works perfectly (including HTTP streams). However, if I 'sublaunch' the same app with the same launch parameters from my parent process using fork() and then execv(), it fails to play any HTTP-based streams (although things like UDP do still work perfectly).
I have checked the obvious things, like ensuring that VLC_PLUGIN_PATH is set correctly, and I have exhaustively compared all other environment variables in the two launch states without finding anything obviously related.
After enabling full logging, I can see a glaring difference during the URL-opening process; it seems something is amiss when evaluating plugin suitability.
In Terminal Launch:
looking for access_demux module matching "http": 20 candidates
no access_demux modules matched
creating access: http://...snip....myfeed.m3u8
looking for access module matching "http": 28 candidates
resolving ..snip....myfeed
outgoing request:
and the stream plays fine
However, when forked and execv'd, I see the following:
looking for access_demux module matching "http": 40 candidates
no access_demux modules matched
creating access: http://...snip....myfeed.m3u8
looking for access module matching "http": 56 candidates
and it sticks right there and never even makes an outgoing HTTP call.
Of course, the odd thing, which I hope may be a clue, is that the forked environment finds twice as many candidates when matching. However, it fails to complete the http access stage and goes no further.
This is driving me crazy, and I have given up 5 times so far, only to come back for another try. However, I have exhausted what I can discover via logging, and I am really hopeful a VLC developer here might be able to point me in the right direction.
Many thanks for any ideas, tips, gut instincts or whatever.
Thanks !
SOLVED: In the parent app, we happen to be calling signal(SIGCHLD, SIG_IGN); at some point before the fork(), so this disposition is presumably inherited by the child. In this condition, when forked and execv'd, libVLC cannot work with HTTP sources; there must be some behaviour in VLC's HTTP handling that relies on SIGCHLD. We could solve the problem either by removing signal(SIGCHLD, SIG_IGN); from the parent, or by adding signal(SIGCHLD, SIG_DFL); to the child libVLC app. Once we do this, libVLC behaves as expected.
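For anyone who hits this from a non-C parent process, the same fix can be sketched in Python; the key step is resetting SIGCHLD to its default disposition between fork() and exec() (this is an illustration of the idea, not the original C code):

```python
import os
import signal

def spawn_with_default_sigchld(argv):
    """fork()/execv() a child, making sure SIGCHLD is back at its default
    disposition before exec, so code in the child (such as libVLC's HTTP
    handling) that depends on SIGCHLD behaves normally."""
    pid = os.fork()
    if pid == 0:
        # Child: undo any inherited SIG_IGN before handing off to the new image.
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)
        try:
            os.execv(argv[0], argv)
        finally:
            os._exit(127)  # only reached if execv itself fails
    return pid
```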
I have an app built in Xcode with Objective-C. I have some code, and I need that code to keep running even if the user presses the home button.
Is it possible to do it?
Refer to the Background Execution chapter of the App Programming Guide for iOS.
There are three different scenarios for background network requests:
The user initiates a simple request and expects the server to respond reasonably quickly, but you want to make sure that if the user leaves the app before the request completes, it really has a chance to finish gracefully in the background.
See the Executing Finite-Length Tasks section of the aforementioned guide for a discussion of how to request a few extra minutes after the user leaves the app; that may be sufficient to finish the network request.
You are requesting large volumes of data (or uploading a lot of data), where it may well take more than a few minutes to finish, especially on a slow connection.
In this case, as Phillip Mills pointed out, you can use a background NSURLSession (as discussed in the Background Transfer Considerations section of the URL Loading System Programming Guide: Using NSURLSession).
You want to periodically make very quick calls to your web service to check to see if there is any new data, even if the user isn't using your app at the time.
In this case, you should look into "Background Fetch". See the Fetching Small Amounts of Content Opportunistically section of the App Programming Guide for iOS. You can't control precisely when it checks, but it is a way to initiate short network requests even when the app isn't currently running.
Note, if this opportunistic background fetch determines that there is a large volume of data to be downloaded, you can combine this pattern with the previous pattern (the background NSURLSession I discussed in point #2).
For more information on this, see the WWDC 2013 video, What's New with Multitasking.
I've spent hours trying to figure out the answer to this and just continue to come up empty-handed. I've set up an XMPP server through OpenFire that is fully functional. My goal in creating the server was an alert system for when an event completes on my server. For example, when one of my renders is finished rendering (it takes hours, sometimes days), it has the option of running a command when it's finished. This command would then run a .bat file telling a theoretical program to send a message, via the broadcast plugin in OpenFire, to all parties involved in the render. So it needs to be able to receive parameters such as %N for the name of the render and %L for its label.
I've located two programs that do exactly what I'm looking to do, but one does not work (and from the sound of the comments may never have worked), and the second is seemingly Linux-only. The render server is Windows, as is the OpenFire server, so naturally it would not work. Here are the links, though, so you can get an idea.
http://thwack.solarwinds.com/media/40/orion-npm-content/general/136769/xmpp-command-line-client/
http://manpages.ubuntu.com/manpages/jaunty/man1/sendxmpp.1.html
Basically the command I want to push is identical to that of the first link.
xmppalert.exe -m "%N is complete." %L#broadcast.myserver
This would broadcast to everyone in the label's group that the named render is complete.
If anyone has any idea how to get either of the above programs working, knows of another way, or simply has a better idea of how to accomplish what I'm trying to do, please let me know. This has been eating at me for two days now.
Thanks.
You can take a look at PoshXMPP, which allows you to use XMPP from PowerShell.
http://poshxmpp.codeplex.com/
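If PowerShell isn't a fit, the xmppalert-style command line is easy to reproduce in a small Python script. This sketch only handles the argument parsing; the actual send would be done with an XMPP library, and the program and argument names are hypothetical:

```python
import argparse

def parse_alert_args(argv):
    """Parse an xmppalert-style command line:
        -m "message" target#broadcast.host
    The shape mirrors the hypothetical xmppalert.exe from the question."""
    parser = argparse.ArgumentParser(prog="xmppalert")
    parser.add_argument("-m", "--message", required=True,
                        help="message body, e.g. '%%N is complete.'")
    parser.add_argument("recipient",
                        help="broadcast address, e.g. label#broadcast.myserver")
    args = parser.parse_args(argv)
    # A real tool would now connect and send via an XMPP library here.
    return args.message, args.recipient
```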
Alex
I am trying to create a Django webapp that utilizes the Twitter Streaming API via the tweepy.Stream() function. I am having a difficult time conceptualizing the proper implementation.
The simplest functionality I would like to have is to count the number of tweets containing a hashtag in real time. I would open a stream filtering by keywords, and every time a new tweet comes over the connection I increment a counter. That counter is then displayed on a webpage and updated with AJAX or otherwise.
The problem is that the tweepy.Stream() function must be continuously running and connected to Twitter (that's the point). How can I have this stream running in the background of a Django app while incrementing counters that can be displayed in (near) real time?
Thanks in advance!
There are various ways to do this, but using a messaging library (Celery) will probably be the easiest.
1) Keep a Python process running tweepy. Once an interesting message is found, create a new Celery task.
2) Inside this Celery task, persist the data to the database (the counter, the tweets, whatever). This task can perfectly well run Django code (e.g. the ORM).
3) Have a regular Django app display the results your task has persisted.
As a precaution, it's probably a good idea to run the tweepy process under supervision (supervisord might suit your needs), so that if anything goes wrong with it, it can be restarted automatically.
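Stripped of Celery, the shape of steps 1-3 is just a producer pushing matching messages onto a queue and a worker persisting counts. A stdlib Python stand-in (queue.Queue in place of the broker, a dict in place of the database) looks like this:

```python
import queue
import threading

def count_hashtags(statuses, hashtag):
    """Producer/worker sketch: the producer loop stands in for the tweepy
    stream, the worker for the Celery task that persists counts, the Queue
    for the message broker, and the dict for the database."""
    q = queue.Queue()
    counts = {hashtag: 0}

    def worker():
        while True:
            status = q.get()
            if status is None:  # sentinel: the stream has closed
                break
            counts[hashtag] += 1

    t = threading.Thread(target=worker)
    t.start()
    for status in statuses:
        if hashtag in status:  # stands in for the stream's keyword filter
            q.put(status)
    q.put(None)
    t.join()
    return counts[hashtag]
```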
I'm working on a consumer web app that needs to do a long running background process that is tied to each customer request. By long running, I mean anywhere between 1 and 3 minutes.
Here is an example flow. The object/widget doesn't really matter.
Customer comes to the site and specifies object/widget they are looking for.
We search/clean/filter for widgets matching some initial criteria. <-- long running process
Customer further configures more detail about the widget they are looking for.
When the long running process is complete the customer is able to complete the last few steps before conversion.
Steps 3 and 4 aren't really important. I just mention them because we can buy some time while we are doing the long running process.
The environment we are working in is a LAMP stack, currently using PHP. It doesn't seem like good design to have the long-running process take up an Apache thread in mod_php (or a FastCGI process). The Apache layer of our app should be focused on serving up content, not on data processing, IMO.
A few questions:
Is our thinking right that we should separate this "long running" part out of the Apache/web-app layer?
Is there a standard/typical way to break this out under Linux/Apache/MySQL/PHP (we're open to using a different language for the processing if appropriate)?
Any suggestions on how to go about breaking it out? E.g., do we create a daemon that churns through a FIFO queue?
Edit: Just to clarify, only about 1/4 of the long running process is database centric. We're working on optimizing that part. There is some work that we could potentially do, but we are limited in the amount we can do right now.
Thanks!
Consider providing the search results via AJAX from a web service instead of your application. Presumably you could offload this to another server and let your web application deal with the content as you desire.
Just curious: 1-3 minutes seems like a long time for a lookup query. Have you looked at indexes on the columns you are querying to improve the speed? Or do you need to run some algorithmic process? Perhaps you could perform some of it offline and pre-populate some common searches with hints.
As Jonnii suggested, you can start a child process to carry out background processing. However, this needs to be done with some care:
Make sure that any parameters passed through are escaped correctly
Ensure that more than one copy of the process does not run at once
If several copies of the process can run at once, there's nothing stopping a (not even malicious, just impatient) user from hitting reload on the page that kicks it off, eventually starting so many copies that the machine runs out of RAM and grinds to a halt.
So you can use a subprocess, but do it carefully, in a controlled manner, and test it properly.
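One way to enforce the "only one copy at a time" rule is an exclusive, non-blocking lock on a well-known file. The question's stack is PHP, but the idea is language-agnostic; here is a Python sketch (the lock-file path is arbitrary):

```python
import fcntl

def try_acquire_single_instance_lock(path):
    """Open (creating if needed) a lock file and try to take an exclusive,
    non-blocking flock on it. Returns the open file object on success (keep
    it open for the process lifetime) or None if another copy holds it."""
    lock_file = open(path, "a")
    try:
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        lock_file.close()
        return None
    return lock_file
```

The lock is released automatically when the process exits, so a crashed worker never leaves a stale lock behind, which is the main advantage over a "pid file exists" check.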
Another option is to have a daemon permanently running, waiting for requests, which processes them and then records the results somewhere (perhaps in a database).
This is the poor man's solution:
exec ("/usr/bin/php long_running_process.php > /dev/null &");
Alternatively you could:
Insert a row into your database with details of the background request, which a daemon can then read and process.
Write a message to a message queue, which a daemon then reads and processes.
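The database-row variant can be sketched with sqlite3 (the question's stack is MySQL/PHP, but the pattern is the same; table and column names here are made up): the web request inserts a pending row and returns immediately, and the daemon's loop claims one row at a time.

```python
import sqlite3

def init_jobs_table(conn):
    # One row per background request; 'pending' rows are waiting for the daemon.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs ("
        "id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'pending')"
    )

def enqueue(conn, payload):
    """What the web request does: record the work and return immediately."""
    conn.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))
    conn.commit()

def claim_next(conn):
    """One iteration of the daemon loop: take the oldest pending job, if any."""
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE status = 'pending' "
        "ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (row[0],))
    conn.commit()
    return row[1]
```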
Here's some discussion on the Java version of this problem.
See java: what are the best techniques for communicating with a batch server
Two important things you might do:
Switch to Java and use JMS.
Read up on JMS but use another queue manager. Unix named pipes, for instance, might be an acceptable implementation.
Java servlets can do background processing. You could do something similar in any web technology with threading support; I don't know about PHP, though.
Not a complete answer, but I would think about using AJAX and passing the 2nd step to something that's faster than PHP (C, C++, C#), then having a PHP function pick the results off of some stack, most likely just a database.