Can I read a blob (or binary) file with vlcj?

I know that I can play music from a file path with vlcj using this code:
AudioPlayer.getMediaPlayer().playMedia(path)
but I want to know if I can play a blob (or binary) file with vlcj.

This cannot be done with vlcj, as the required functionality simply does not exist in LibVLC.
What you want is usually achieved by using the VLC imem plugin, but this plugin is not exposed by LibVLC.
There have been attempts to patch VLC to expose imem (at least the "access" aspect of it), but so far none has been accepted.
Even if it were possible, there would be limitations. If you want to play from an InputStream, you can't seek (at least not usefully); in fact, you can't properly seek a Java InputStream in general anyway. That leaves something like RandomAccessFile if you want to seek, but using RandomAccessFile implies you have access to a local file anyway, so you could just play the file normally. But that's all moot.

Related

How to (properly) use the ExtendRtspStream command in Google's Nest Device Access API?

Using Google's Nest Device Access API, I can generate an RTSP camera stream using the GenerateRtspStream command and subsequently stop the stream using the StopRtspStream command. That makes sense; however, these streams are only alive for 5 minutes by default, so the API also features another command: ExtendRtspStream.
On the face of it, this sounds like it should "extend" the stream you had originally created. However, these RTSP stream URLs include an auth query parameter, and extending a stream simply issues a new token to use for it, which means that the URL for the stream changes every time it gets extended. So in reality the stream isn't getting extended at all: the URL you use to access the stream still gets invalidated, and you have to restart it with a new URL to continue watching. So what's the point? You may as well just call the GenerateRtspStream command and switch over to the new stream once the first expires. Is there some way to seamlessly change the RTSP URL mid-stream that I'm not aware of, using FFMPEG perhaps? Or to have a proxy server that broadcasts a static RTSP URL and seamlessly switches the actual URL each time it gets extended?
Rant starts here: I'm really hoping that this behaviour is actually a bug or oversight in the design of the API, and that ExtendRtspStream is supposed to keep the same url alive for as long as needed, because it's awfully pointless to have an RTSP stream that only stays alive for a max of 5 minutes. Heck, it'd be more useful to have an API that just returns the latest single-image snapshot from the camera every 10 seconds or so - but alas, there's no API for that either.

Very Strange Behaviour when Forking into libVLC

I have an app which is built with libVLC, accessing the standard installed libvlc-dev libraries, plug ins, headers etc on Ubuntu Linux.
My app generally works perfectly, and it's mostly there to receive UDP streams and convert them to something else.
However, I am having a really weird issue in a specific mode, and so far I have invested around 30 development hours in trial and error trying to solve it - and I am hoping some VLC genius here can unlock the puzzle.
It revolves around http:// based URL sources. Typically for HLS, but the same issue happens with any http based source.
IMPORTANT: If I launch my app in terminal, everything works perfectly (including http streams). However, if I 'sublaunch' the same app with the same launch parameters from my parent process using fork() and then execv(), it fails to play any http based streams (although things like UDP do still work perfectly).
I have checked the obvious things like ensuring the VLC_PLUGIN_PATH is set correctly, and I have exhaustively compared all other environment variables in the 2 launch states, without finding anything obviously related.
After enabling full logging, I can see there is a glaring difference during the URL opening process, and it seems something is amiss when evaluating plug-in suitability.
In Terminal Launch:
looking for access_demux module matching "http": 20 candidates
no access_demux modules matched
creating access: http://...snip....myfeed.m3u8
looking for access module matching "http": 28 candidates
resolving ..snip....myfeed
outgoing request:
and the stream plays fine
However, when forked, and execv, I see the following:
looking for access_demux module matching "http": 40 candidates
no access_demux modules matched
creating access: http://...snip....myfeed.m3u8
looking for access module matching "http": 56 candidates
and it sticks right there and does not ever even make an http call out.
The odd thing, which I hope may be a clue, is that the forked environment finds twice as many candidates when matching. However, it fails to complete the http access stage and goes no further.
This is driving me crazy, and I have given up 5 times so far, only to come back for another try. However, I have exhausted what I can discover via logging, and I am really hopeful a VLC developer here might be able to point me in the right direction.
Many thanks for any ideas, tips, gut instincts or whatever.
Thanks !
SOLVED: In the parent app, we happen to be calling signal(SIGCHLD, SIG_IGN); at some point before the fork(), so this disposition is presumably inherited by the child. In this condition, when forked and execv'd, libVLC cannot work with http sources; there must be some behaviour in VLC's http handling that relies on SIGCHLD. We could solve the problem either by removing signal(SIGCHLD, SIG_IGN); from the parent, or by adding signal(SIGCHLD, SIG_DFL); to the child libVLC app. Once we do this, libVLC behaves as expected.
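The fix described above can be sketched in Python (the parent app in the question is C, but the fork/exec mechanics are the same; the key point is restoring the default SIGCHLD disposition in the child before exec):

```python
import os
import signal

def spawn(child_argv):
    """Fork and exec a child, restoring default SIGCHLD handling first.

    If the parent has set SIGCHLD to SIG_IGN, that disposition is
    inherited across fork() and execv(); resetting it to SIG_DFL in
    the child avoids breaking code (like libVLC's http access, per the
    report above) that expects the default behaviour.
    """
    pid = os.fork()
    if pid == 0:  # child
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)
        try:
            os.execv(child_argv[0], child_argv)
        finally:
            os._exit(127)  # only reached if execv fails
    return pid
```

The equivalent one-liner in the C child is `signal(SIGCHLD, SIG_DFL);` placed between `fork()` and `execv()`.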

Streamlink with Youtube plugin: How to get scheduledStartTime value?

I am using Streamlink to help some of my technically challenged older friends watch streams from selected sites that webcast LIVE two or three times a week and are ingested into Youtube. In between webcasts, it would be nice to show the user when the next one will begin via the app's Status page.
The platform is Raspberry Pi 3 B+. I have modified the Youtube plugin to allow/prohibit non-live streams. If '--youtube-live-required' is in the command line, then only LIVE streams will play. This prevents the LIVE webcast from re-starting after it has ended, and also prevents videos that Youtube randomly selects, from playing. I have also applied a 'soon to be released' patch that fixes a breaking-change that Youtube made recently. I mention these so you know that I have at least a minimal understanding of the Streamlink code, and am not looking for a totally free ride. But for some reason, I cannot get my head around how to add a feature to get the 'scheduledStartTime' value from the Youtube.py plugin. I am hoping someone with a deep understanding of the Streamlink code can toss me a clue or two.
Once the 'scheduledStartTime' value is obtained (it is in epoch notation), a custom module will send that value to the onboard Python server, via socketio, which can then massage the data and push it to the Status page of connected clients.
Within an infinite loop, Popen starts Streamlink. The output of Popen is PIPEd and observed in order to learn what is happening, and that info is then sent to the server, again using socketio. It is within this loop that the 'scheduledStartTime' data would be gleaned (I think).
How do I solve the problem?
I have a solution to this problem that I am not very proud of, but it solves the problem and I can close this project, finally. Also, it turns out that this solution did not have to utilize the streamlink youtube.py plugin, but since it fetches the contents of the URL of interest anyways, I decided to hack the plugin and keep all of this business in one place.
In a nutshell, a simple regex gets the value of scheduledStartTime IF it is present in the fetched URL contents. The Hack: That value is printed out as a string 'SCHEDULE START TIME:epoch time value', which surfaces through streamlink via Popen PIPE which is polled for such information, in a custom module. Socket.io then sends the info to the on-board server, that sends a massaged version of the info to the app's Status Page (Ionic framework, typescript, etc). Works. Simple. Ugly. Done.
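The hack above can be sketched as follows. The regex and the marker string are assumptions reconstructed from the description (scheduledStartTime appears in YouTube's page data as a quoted epoch-seconds string):

```python
import re

# Marker line printed by the patched youtube.py plugin, as described above.
MARKER = "SCHEDULE START TIME:"

def extract_scheduled_start_time(page_text):
    """Pull scheduledStartTime (epoch seconds) out of fetched page content.

    Returns the epoch value as an int, or None if it is not present
    (e.g. the stream is already live or nothing is scheduled).
    """
    m = re.search(r'"scheduledStartTime"\s*:\s*"(\d+)"', page_text)
    return int(m.group(1)) if m else None

def scan_pipe_line(line):
    """Check one line of Streamlink's PIPEd output for the marker.

    This is what the custom module polling the Popen PIPE would call on
    each line; it returns the epoch value or None.
    """
    if line.startswith(MARKER):
        return int(line[len(MARKER):].strip())
    return None
```

The plugin side would `print(MARKER + str(value))` when the regex matches, and the polling loop would feed each decoded line through `scan_pipe_line` before forwarding hits to the server via socketio.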

Is it possible to capture the rendering audio session from another process?

I am taking my first dives in to the WASAPI system of windows and I do not know if what I want is even possible with the windows API.
I am attempting to write a program that will record the sound from various programs and break each into a separate recorded track/audio file. From the research I have done, I know the unit I need to record is the various audio sessions being rendered to an endpoint, and the normal way of recording is by taking the render endpoint and performing a loopback. However, from what I have read so far in the MSDN, the only interaction with sessions I can do is through IAudioSessionControl, and that does not provide me with a way to get a copy of the stream for the session.
Am I missing something that would allow me to do this with WASAPI (or some other Windows API) and get the individual sessions (or individual streams) before they are mixed together to form the endpoint, or is this an impossible goal?
The mixing takes place inside the API (WASAPI) and you don't have access to the buffers of other audio clients, especially since they don't exist in the context of the current process in the first place. Perhaps the best approach (not a good one, but there are no better alternatives) would be to hook the API calls and intercept data on its way to WASAPI, if the task in question permits dirty tricks like this.

Best practice when using a Rails app to overwrite a file that the app relies on

I have a Rails app that reads from a .yml file each time that it performs a search. (This is a full text search app.) The .yml file tells the app which URL it should be making search requests to, because different versions of the search index reside on different servers, and I occasionally switch between indexes.
I have an admin section of the app that allows me to rewrite the aforementioned .yml file so that I can add new search urls or remove unneeded ones. While I could manually edit the file on the server, I would prefer to be able to also edit it in my site admin section so that when I don't have access to the server, I can still make any necessary changes.
What is the best practice for making edits to a file that is actually used by my app? (I guess this could also apply to, say, an app that had the ability to rewrite one of its own helper files, post-deployment.)
Is it a problem that I could be in the process of rewriting this file while another user connecting to my site wants to perform a search? Could I make their search fail if I'm in the middle of a write operation? Should I initially write my new .yml file to a temp file and only later replace the original .yml file? I know that a write operation is pretty fast, but I just wanted to see what others thought.
UPDATE: Thanks for the replies everyone! Although I see that I'd be better off using some sort of caching rather than reading the file on each request, it helped to find out what the best way to actually do the file rewrite is, given that I'm specifically looking to re-read it each time in this specific case.
If you must use a file for this then the safe process looks like this:
Write the new content to a temporary file of some sort.
Use File.rename to atomically replace the old file with the new one.
If you don't use separate files, you can easily end up with a half-written broken file when the inevitable problems occur. The File.rename class method is just a wrapper for the rename(2) system call and that's guaranteed to be atomic (i.e. it either fully succeeds or fully fails, it won't leave you in an inconsistent in-between state).
If you want to replace /some/path/f.yml then you'd do something like this:
begin
  # Write your new stuff to /some/path/f.yml.tmp here
  File.rename('/some/path/f.yml.tmp', '/some/path/f.yml')
rescue SystemCallError => e
  # Log an error, complain loudly, fall over and cry, ...
end
As others have said, a file really isn't the best way to deal with this and if you have multiple servers, using a file will fail when the servers become out of sync. You'd be better off using a database that several servers can access, then you could:
Cache the value in each web server process.
Blindly refresh it every 10 minutes (or whatever works).
Refresh the cached value if connecting to the remote server fails (with extra error checking to avoid refresh/connect/fail loops).
Firstly, let me say that reading that file on every request is a performance killer. Don't do it! If you really really need to keep that data in a .yml file, then you need to cache it and reload only after it changes (based on the file's timestamp.)
But don't check the timestamp on every request either; that's almost as bad. Check it on a request only if it's been n minutes since the last check, probably in a before_filter somewhere. And if you're running in threaded mode (most people aren't), be careful to use a Mutex or something.
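The caching scheme described above (reload on mtime change, but throttle the mtime check itself) can be sketched like this. Python stands in for the Rails code, and all names are illustrative:

```python
import os
import time

class CachedConfig:
    """Cache a config file's contents in-process.

    Reloads only when the file's mtime changes, and checks the mtime at
    most once every `check_interval` seconds, so most requests touch
    neither the filesystem metadata nor the file itself.
    """

    def __init__(self, path, check_interval=60):
        self.path = path
        self.check_interval = check_interval
        self._mtime = None
        self._last_check = 0.0
        self._data = None

    def get(self):
        now = time.monotonic()
        if self._data is None or now - self._last_check >= self.check_interval:
            self._last_check = now
            mtime = os.path.getmtime(self.path)
            if mtime != self._mtime:
                self._mtime = mtime
                with open(self.path) as f:
                    self._data = f.read()  # a real app would parse YAML here
        return self._data
```

In a threaded server, `get` would additionally need to hold a lock around the reload, as the answer notes.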
If you really want to do this via overwriting files, use the filesystem's locking features to block other threads from accessing your configuration file while it's being written. Maybe check out something like this.
I'd strongly recommend not using files for configuration that needs to be changed without re-deploying the app though. First, you're now requiring that a file be read every time someone does a search. Second, for security reasons it's generally a bad idea to allow your web application write access to its own code. I would store these search index URLs in the database or a memcached key.
edit: As @bioneuralnet points out, it's important to decide whether you need real-time configuration updates or just eventual syncing.
