I would like to know the file formats that IP-based surveillance cameras produce. I would then need to build, or use available codec source code, to convert them to a format Flash Player 10 can support, i.e. formats other than the usual .FLV, for my website, which runs on JBoss with Flex 3. I would like to support as many file formats as possible.
I do not want to introduce a streaming server (FMS, or the open-source Red5) for various reasons.
Does anyone have any ideas about this? Any help would be amazing, because I have not done anything like this before.
Thanks in advance,
Ranjith
There's no reason to expect these to be standard or consistent; they can do anything they want as long as the camera and the base station agree.
You would have to acquire the target surveillance system and read its documentation, or sniff its network traffic and see what it does. If the first brand seems too difficult, move on to another brand.
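If you go the sniffing route, a capture along these lines (interface name and camera IP are placeholders) gives you a file you can open in Wireshark:

tcpdump -i eth0 -w camera.pcap host 192.168.1.100

If the camera happens to speak a standard protocol such as RTSP or MJPEG over HTTP, it will be recognizable in the capture.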
I'd like to use the same webcam and microphone across multiple streams, for example Google Meet and Microsoft Teams at the same time.
When one of the applications uses the webcam/mic, it locks the device so it can't be used anywhere else. Is there a way to replicate the devices or release the lock?
I tried via ffmpeg, specifically by outputting as dshow, but got:
Requested output format 'dshow' is not a suitable output format
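The command was shaped roughly like this (the device name is just a placeholder):

ffmpeg -f dshow -i video="Integrated Webcam" -f dshow out.avi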
I also tried this solution: https://superuser.com/a/1531380/934167 but didn't manage to replicate the device.
If possible, I'd like to stick to ffmpeg, but if it requires third-party (preferably open-source) applications, I can try them out.
Edit: I should also mention that I'd prefer this for Windows/Mac; the solutions I found elsewhere were only relevant for Linux.
I work for a performing arts institution and have been asked to look into incorporating wearable technology into accessibility for our patrons. I am interested in finding out more about the use of SmartEyeglass for supertitles (a.k.a. subtitles) in live or pre-recorded performances. Is it possible to program several pairs of glasses to show their users the same supertitles at the same time? How does this programming process work? Can several pairs of SmartEyeglass connect to the same host device?
Any information is very much appreciated. I look forward to hearing from you!
Your question is overly broad and liable to be closed as such, but I'll bite:
The documentation for the SDK is available here: https://developer.sony.com/develop/wearables/smarteyeglass-sdk/api-overview/ - it describes itself as being based on Android's. The content of the wearable display is defined in a "card" (an Android UI concept: https://developer.android.com/training/material/lists-cards.html ) and the software runs locally on the glasses.
Things like subtitles for prerecorded and pre-scripted live performances could be stored using file formats like .srt ( http://www.matroska.org/technical/specs/subtitles/srt.html ) which are easy to work with and already have a large ecosystem around them, such as freely available tools to create them and software libraries to read them.
Building such a system seems simple then: each performance has an .srt file stored on a webserver somewhere. The user selects the performance somehow, and you'd write software which reads the .srt file and displays text on the Card based on the current timecode, through to the end of the script.
...this approach has the advantage of keeping server-side requirements to a minimum (just a static webserver will do).
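As a rough sketch of that playback loop (in Java, since the SDK describes itself as Android-based; the Cue class and the display callback here are hypothetical stand-ins for the parsed .srt entries and for whatever SDK call actually updates the card):

import java.util.List;
import java.util.function.Consumer;

// One parsed .srt cue: show `text` from startMs to endMs, both measured
// from the start of the performance.
class Cue {
    final long startMs, endMs;
    final String text;
    Cue(long startMs, long endMs, String text) {
        this.startMs = startMs;
        this.endMs = endMs;
        this.text = text;
    }
}

class SupertitleRunner {
    // `display` stands in for the SDK call that updates the card's text.
    static void run(List<Cue> cues, Consumer<String> display) throws InterruptedException {
        long t0 = System.currentTimeMillis();
        for (Cue cue : cues) {
            sleepUntil(t0 + cue.startMs);
            display.accept(cue.text); // show the line
            sleepUntil(t0 + cue.endMs);
            display.accept("");       // blank the display between cues
        }
    }

    private static void sleepUntil(long wallClockMs) throws InterruptedException {
        long delay = wallClockMs - System.currentTimeMillis();
        if (delay > 0) Thread.sleep(delay);
    }
}

Parsing the .srt file into those cues is the easy part, given the existing libraries mentioned above.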
If you have more complex requirements, such as live transcription or support for interruptions and unscripted events, then you'd have to write a custom server which sends "live" subtitles to the glasses, presumably over TCP. This would drain the device's battery, as the Wi-Fi radio would be active for much longer. An alternative might be Bluetooth, but I don't know how you'd build a system that can handle 100+ simultaneous long-range Bluetooth connections.
A compromise is to use .srt files, but have the glasses poll the server every 30 seconds or so to check for any unscripted events. How you handle this is up to you.
(As an aside, this looks like a fun project - please contact me if you're looking to hire someone to build it :D)
Each phone can host only one SmartEyeglass, so you would need a separate host phone for each SmartEyeglass.
I am creating an application that will pre-record a user's voice for each letter on the keyboard, so that when the app is running and the user calls out '5', the system types 5 into whichever application is capable of accepting the input at that time. I am a .NET person venturing into Xcode.
I have done some research and I am pretty sure about using AV Foundation for recording the audio. The question is how to use speech recognition in OS X to identify a particular key on the keyboard... I will highly appreciate any feedback, even general advice on the approach I should take to tackle this project!
Thanks in advance :)!
Let me be clear first: I have never done this before, but I have a general idea of how it is done. You need to bind an audio file to a certain number/key. Whenever the user speaks into the mic, you record their voice and upload it to a server, which compares that recording to the pre-recorded audio files the user made.
Here is an SO question that talks about audio fingerprinting:
How can I Compare 2 Audio Files Programmatically?
You can compare the audio files in PHP/Python and have it return a value. For example: if the audio file a.mp3 (on the server) matches the newRecorded.mp3 the user just recorded, return a.mp3, then just strip the .mp3 and keep the key.
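The server-side matching step then boils down to a loop like this (sketched in Java; similarity() is a hypothetical stand-in for whatever fingerprinting comparison you end up using):

import java.io.File;

public class KeyMatcher {

    // Hypothetical: returns a 0..1 score for how alike two recordings sound.
    // In practice this would call into an audio-fingerprinting library.
    static double similarity(File a, File b) {
        throw new UnsupportedOperationException("plug in a real comparison");
    }

    // Compare the fresh recording against every pre-recorded sample and
    // return the key whose sample matches best, e.g. "5" for "5.mp3".
    static String bestKey(File newRecording, File sampleDir) {
        String best = null;
        double bestScore = 0.0;
        File[] samples = sampleDir.listFiles();
        if (samples == null) {
            return null; // not a directory, or I/O error
        }
        for (File sample : samples) {
            double score = similarity(newRecording, sample);
            if (score > bestScore) {
                bestScore = score;
                best = sample.getName().replace(".mp3", "");
            }
        }
        return best;
    }
}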
As for recording sentences and commands, you might be able to do the same. I will continue to do more research on this and help you out as much as I can.
Hopefully this gives you a better idea and an easier way of doing things.
Also, there is this:
https://developer.apple.com/library/mac/documentation/cocoa/reference/ApplicationKit/Classes/NSSpeechRecognizer_Class/Reference/Reference.html
and
https://developer.apple.com/library/mac/documentation/cocoa/conceptual/speech/Articles/RecognizeSpeech.html#//apple_ref/doc/uid/20002081-BCIHEBFH
These could be really helpful and would use the built-in speech recognition.
I need to access a web camera using Java. This is what I want to do:
Access the web cam
Let the user see the webcam working, i.e. their face is visible on screen
(I have heard there are some libraries which don't show the webcam's video output)
When the user clicks the save button, take a snapshot and save it
I have tried a number of ways to do this, over a long period of time:
JMF - Now it is dead
FMJ - Now it is dead too
VLCJ - too much, because I am not creating a music/video player, and it expects VLC to be installed
Xuggler - too much and hard work
JMyron - didn't work
JavaFX - I thought it could do it, but seems like it can't
I would be satisfied even if the library does ONLY the above, because that's enough for me, but I expect it to be simple too. It would be great if it doesn't use DLLs, because then it wouldn't be platform-independent. I would really appreciate it if it can DETECT the camera without my manually passing the camera name and other info, as you have to do in VLCJ (there might be thousands of camera brands, so I can't create a list with a thousand elements). And I am creating a desktop application, not a web app.
If you know of a library like this, please be kind enough to let me know. Other libraries (which might not suit all of my requirements, but suit the basic ones) are also welcome. Please help.
I think the project you are looking for is: https://github.com/sarxos/webcam-capture (I'm the author)
There is an example working exactly as you've described: after it's run, a window appears where, after you press the "Start" button, you can see the live image from the webcam device and save it to a file by clicking "Snapshot" (source code is available; note that the FPS counter in the corner can be disabled).
The project is portable (WinXP, Win7, Win8, Linux, Mac, Raspberry Pi) and does not require any additional software to be installed on the PC.
The API is really nice and easy to learn. Here is an example of how to capture a single image and save it to a PNG file:
import java.io.File;
import javax.imageio.ImageIO;
import com.github.sarxos.webcam.Webcam;

Webcam webcam = Webcam.getDefault(); // auto-detects the camera, no device name needed
webcam.open();
ImageIO.write(webcam.getImage(), "PNG", new File("test.png"));
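For the live view, a minimal Swing setup using the project's WebcamPanel looks like this (worth double-checking against the repository's current examples):

import javax.swing.JFrame;
import com.github.sarxos.webcam.Webcam;
import com.github.sarxos.webcam.WebcamPanel;

public class PreviewExample {
    public static void main(String[] args) {
        Webcam webcam = Webcam.getDefault();         // auto-detected default camera
        WebcamPanel panel = new WebcamPanel(webcam); // live image; opens the webcam
        panel.setFPSDisplayed(false);                // hide the FPS counter

        JFrame frame = new JFrame("Live preview");
        frame.add(panel);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}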
I am looking for open-source or paid tools which will help me create a movie from still images and music. I also want to put some text on the images.
My main concern is the video quality; I need a high-quality video as output.
Can anyone please give me some suggestions?
I also want to know whether we can achieve this with the help of ffmpeg.
I am not interested in GUI tools; I am mainly looking for an API or service which takes images, text, and audio as input and produces a video as output.
The OS does not matter; I can go with either Windows or Linux.
Thanks in advance.
Since you don't tell us which OS you're using, you make it difficult. This is possible with ffmpeg; the output quality depends on your input quality and on the output format/codec you choose. Google has lots of ffmpeg tutorials.
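For example, a single still image plus a music track, with a caption burned in, could be rendered along these lines (filenames and the caption are placeholders; drawtext requires an ffmpeg build with libfreetype, and -crf 18 keeps the H.264 quality high):

ffmpeg -loop 1 -i image.png -i music.mp3 \
  -vf "drawtext=text='My Title':fontcolor=white:fontsize=48:x=(w-text_w)/2:y=h-80" \
  -c:v libx264 -crf 18 -tune stillimage -pix_fmt yuv420p \
  -c:a aac -b:a 192k -shortest out.mp4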
OpenShot is a great GUI-based video editor for Linux and is also available for Windows and Mac. Your task would be trivial to achieve using OpenShot.