I have 3 webcams and I would like to store all of their frames on my HDD, in Delphi. I have done this, but the problem is that it's quite slow. I thought about storing the data in one big file (like an ISO); I tried that with BlockWrite, but it is about two times slower than saving the frames under different names in a folder as bitmaps.
Edit: I attached a new screenshot where you can see its performance. In this test it was capturing a single HD webcam at 15 frames/sec and saving the frames as JPEGs (using the Delphi XE2 native JPEG library) into a folder on the HDD. I could see that the software writes only about 2 MB of I/O output per second to my HDD from this one high-resolution camera, yet in one minute it loses 70-80 frames.
Any suggestions, solutions? Thanks
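One likely reason the single-big-file attempt was slower is that each frame went out as its own small, synchronous BlockWrite, so the capture loop waits on the disk once per frame. The usual fix is to batch frames through a background writer thread with a large I/O buffer. A minimal sketch of that pattern (in C++ rather than Delphi; the Frame type and the camera hookup are hypothetical):

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Hypothetical raw frame; in the real program this comes from the webcam SDK.
    struct Frame { std::vector<unsigned char> bytes; };

    std::queue<Frame> pending;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;  // set to true and notify cv to shut the writer down

    // Background writer: drains the queue and appends every frame to one big
    // file, so the disk sees large sequential writes instead of many small files.
    // Start it with: std::thread writer(writerThread, "frames.bin");
    void writerThread(const char* path) {
        std::FILE* f = std::fopen(path, "wb");
        std::vector<char> ioBuf(4 << 20);  // 4 MB stdio buffer
        std::setvbuf(f, ioBuf.data(), _IOFBF, ioBuf.size());
        std::unique_lock<std::mutex> lock(m);
        while (!done || !pending.empty()) {
            cv.wait(lock, [] { return done || !pending.empty(); });
            while (!pending.empty()) {
                Frame fr = std::move(pending.front());
                pending.pop();
                lock.unlock();  // write without holding the lock
                std::fwrite(fr.bytes.data(), 1, fr.bytes.size(), f);
                lock.lock();
            }
        }
        std::fclose(f);
    }

    // Called from the capture callback: queuing is cheap, so capture never
    // blocks on the disk.
    void onFrame(Frame fr) {
        { std::lock_guard<std::mutex> g(m); pending.push(std::move(fr)); }
        cv.notify_one();
    }

With variable-sized frames you would also prepend each frame's byte length, so the big file can be split apart again later.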
If you want to write video, you can use the TAVIRecorder component of GLScene.
I recorded four HD (1280x720, 25 fps) streams from IP cams with it and the x264 codec, with good results and less than 40% CPU usage on an i7 4770.
Once writing is complete, you can play the file with any video player and grab the frames you need.
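As far as I know, TAVIRecorder wraps the classic Windows AVI writing services; for a sense of what it does underneath, here is a rough C++ sketch against the VfW AVIFile API, writing uncompressed 24-bit RGB frames (error handling omitted; the frame source is a placeholder):

    #include <windows.h>
    #include <vfw.h>  // link with vfw32.lib
    #include <vector>

    // Writes raw 24-bit RGB frames into an AVI container at 25 fps.
    void writeAvi(const char* path, int w, int h, int frameCount) {
        AVIFileInit();
        PAVIFILE file = nullptr;
        AVIFileOpenA(&file, path, OF_WRITE | OF_CREATE, nullptr);

        AVISTREAMINFOA si = {};
        si.fccType = streamtypeVIDEO;
        si.dwScale = 1;
        si.dwRate = 25;  // frames per second
        si.dwSuggestedBufferSize = w * h * 3;
        PAVISTREAM stream = nullptr;
        AVIFileCreateStreamA(file, &stream, &si);

        BITMAPINFOHEADER bih = {};
        bih.biSize = sizeof(bih);
        bih.biWidth = w;
        bih.biHeight = h;  // positive height: bottom-up DIB rows
        bih.biPlanes = 1;
        bih.biBitCount = 24;
        bih.biCompression = BI_RGB;
        bih.biSizeImage = w * h * 3;
        AVIStreamSetFormat(stream, 0, &bih, sizeof(bih));

        std::vector<unsigned char> frame(w * h * 3);
        for (int i = 0; i < frameCount; ++i) {
            // fill `frame` from the camera here (placeholder)
            AVIStreamWrite(stream, i, 1, frame.data(), (LONG)frame.size(),
                           AVIIF_KEYFRAME, nullptr, nullptr);
        }
        AVIStreamRelease(stream);
        AVIFileRelease(file);
        AVIFileExit();
    }

For compressed output, as with the x264 route above, the stream would be wrapped with AVIMakeCompressedStream or fed to a modern encoder instead.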
I have an industrial camera that takes 120 frames per second. It connects to my PC through a USB 3.0 cable, and the frames transfer stably into my PC at 120 fps if I just display them. However, when I try to save the frames to my SSD (M.2 PCIe interface) in JPEG format, I can only save 30 frames per second. Each JPEG file is about 80 KB, so saving all 120 frames per second would be about 10 MB/s. I have tested my SSD and it can write at least 100 MB/s. So where is it going wrong?
Btw, the API provided by the camera is a Windows C++ API, and I am using Visual Studio for the capture-and-write program. One way I can think of is to hold the images in memory for a certain time, then stop capturing and dump the images out to disk; however, my application needs to keep the camera capturing at all times. So I am wondering whether there is a way to save those images to the SSD continuously. My PC has an i7 and 32 GB of memory.
The bottleneck is not I/O but the raw-to-JPEG conversion, which is done by the SDK on the PC. Try saving raw frames in real time and converting them to JPEG offline.
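To illustrate that suggestion: append the raw frames to one sequential file at capture time, then do the JPEG work after the session. A sketch, assuming 8-bit grayscale frames of fixed, known size (the geometry and the camera call are placeholders) and using the single-header stb_image_write library for the offline step:

    #include <cstdio>
    #include <vector>
    #define STB_IMAGE_WRITE_IMPLEMENTATION
    #include "stb_image_write.h"  // https://github.com/nothings/stb

    const int W = 1280, H = 1024;  // placeholder frame geometry

    // Capture phase: append raw frames to one file. Sequential writes let the
    // SSD run at its streaming rate, with no JPEG cost in the capture loop.
    void captureToRaw(const char* path, int frames) {
        std::FILE* f = std::fopen(path, "wb");
        std::vector<unsigned char> frame(W * H);
        for (int i = 0; i < frames; ++i) {
            // cameraRead(frame.data());  // placeholder for the vendor SDK call
            std::fwrite(frame.data(), 1, frame.size(), f);
        }
        std::fclose(f);
    }

    // Offline phase: read the raw file back and do the expensive JPEG encoding
    // once it no longer competes with the camera.
    void convertToJpegs(const char* path) {
        std::FILE* f = std::fopen(path, "rb");
        std::vector<unsigned char> frame(W * H);
        char name[64];
        for (int i = 0; std::fread(frame.data(), 1, frame.size(), f) == frame.size(); ++i) {
            std::snprintf(name, sizeof(name), "frame%06d.jpg", i);
            stbi_write_jpg(name, W, H, 1, frame.data(), 90);  // quality 90
        }
        std::fclose(f);
    }

Whether the raw stream fits your SSD's write budget depends on the sensor's raw frame size, but a single sequential file avoids both the per-file overhead and the encode cost in the hot path.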
I am making a webpage which will contain around 20-25 small-resolution (~56x56) and short-length (~3 sec) movies which will be set to autoplay and loop, so they will be looping on the page at all times. They are mostly dispersed throughout the page, so cannot easily be merged into bigger movies.
I'm trying to decide on the right format to use, balancing filesize, quality, and processor overhead.
mp4 seems the best option in terms of quality and filesize; however, embedding many small mp4s made the page feel slow and my computer run hot. If they were merged into one mp4, it would only be around 300x240, so it seems there is a lot of per-video CPU overhead when they are divided.
gif is lower quality and a bigger filesize, but the CPU performance felt smoother. I can't prove it, though, because I didn't measure it -- are GIFs known to perform better than mp4?
I have not tried other formats (webm, avi, ogg, etc.), as I am unsure how well they are supported by most browsers, and I want the webpage to be viewable from multiple browsers/countries.
How can I determine the best format to use for these videos? Is there a tool which can measure the CPU performance of my webpage so I can quantify the performance issues?
Playing many videos on a single page is a problem for most OSes, as video decoding and playback is CPU intensive.
Some systems also have hardware elements (HW acceleration) in the video playback 'pipeline' (the series of functions the OS, browser, and player perform to unpack, decode, prepare, and display the video), and these elements may not support parallel playback, or may have only limited capacity for it.
There is a fairly common workaround to this scenario if you know in advance which videos you want to play on the page, and you don't have too many different video combinations for different users: combine the videos on the server side into a single video. The user still sees what looks like multiple videos, but you are doing all the heavy lifting on the server side.
The drawback is that you can't start or stop individual videos or quickly change the mix of videos.
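As a concrete example, ffmpeg's hstack/vstack filters can tile several clips into one video on the server; here the command is driven from a minimal C++ helper (the filenames are placeholders):

    #include <cstdlib>

    // Tiles four small clips into a single 2x2 video so the browser decodes
    // one stream instead of four.
    int main() {
        return std::system(
            "ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -i d.mp4 "
            "-filter_complex \"[0:v][1:v]hstack[top];[2:v][3:v]hstack[bot];"
            "[top][bot]vstack\" -an grid.mp4");
    }

On the page you then position a single video element and crop each region (for example with overflow-hidden containers), so it still reads as separate clips.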
If you plan to support mobile browsers, you should also be aware that most mobile devices do not support autoplay (to help conserve users' bandwidth), and smaller devices such as phones often do not support inline video (the video will always play full screen). [Update Feb 2017: mobile devices are beginning to support autoplay as mobile data rates increase, and most will now support inline video, with iOS adding this in iOS 10.]
I am working on a project where we are doing a live performance with about 6 musicians placed away from each other in a big space. The audience will be wearing headphones, and as they move around we want them to hear different kinds of effects in different areas of the place. For calculating the positions of users we are using Bluetooth beacons. We're expecting around 100 users, and we can't have a latency of more than 2 seconds.
Is such a setup possible?
The current way we're thinking of implementing this is that we'll divide the place into about 30 different sections.
For the server, we'll take the input from all the musicians, mix a different stream for every section, and stream them over a local WLAN using the RTP protocol.
We'll have Android and iOS apps that will locate the users using Bluetooth beacons and switch the live streams accordingly.
Presonus Studio One (music mixer) - can have multiple channels that are output to devices: 30 channels.
Virtual Audio Cable - used to create virtual devices that receive the output from those channels: 30 devices.
FFmpeg - used to create an RTP stream for each of the devices: 30 streams.
Is this a good idea? Are there other ways of doing this?
Any help will be appreciated.
Audio Capture and Mixing
First, you need to capture those six channels of audio into something you can use. I don't think your idea of virtual audio cables is sustainable. In my experience, once you get more than a few, they don't work so great. You need to be able to go from your mixer directly to what's doing the encoding for the stream, which means you need something like JACK audio.
There are two ways to do this. One is to use a digital mixer to create those 30 mixes for you and send you the resulting streams. The other is to simply capture the 6 channels of audio and then do the mixing in software. Normally I think the flexibility of mixing externally is what you want, and typically I'd recommend the Behringer X32 series for you. I haven't tried it with JACK audio myself, but I've heard it can work, and the price point is good. You can get just the rackmount package, which has all the functionality without the control surface (cheaper, and sufficient for what you need). However, the X32 only has 16 buses, so you would need two of them to get the number of mixes you need. (You could get creative with the matrix mixes, but that only gets you 6 more, for a total of 22.)
I think what you'll need to do is capture that audio and mix in software. You'll probably want to use Liquidsoap for this. It can programmatically mix audio streams pulled in via JACK, and create internet radio style streams on the output end.
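Conceptually, the per-section mixing is nothing more than a gain matrix applied to the six input channels; Liquidsoap expresses this at the script level, but the core operation looks like this (a C++ sketch with made-up block and buffer types):

    #include <array>
    #include <vector>

    constexpr int kMusicians = 6;
    constexpr int kSections = 30;

    // gains[s][m] = how loudly musician m appears in section s's mix;
    // musicians near a section would get higher gains there.
    using GainMatrix = std::array<std::array<float, kMusicians>, kSections>;

    // One block of audio: 6 input channels in, 30 mono section mixes out.
    void mixBlock(const std::array<std::vector<float>, kMusicians>& in,
                  std::array<std::vector<float>, kSections>& out,
                  const GainMatrix& gains) {
        const size_t n = in[0].size();  // samples per block
        for (int s = 0; s < kSections; ++s) {
            out[s].assign(n, 0.0f);
            for (int m = 0; m < kMusicians; ++m)
                for (size_t i = 0; i < n; ++i)
                    out[s][i] += gains[s][m] * in[m][i];
        }
    }

Driving gains[s][m] from each musician's distance to section s is what produces the "different effects in different areas" behaviour.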
Streaming
You're going to need a server. There are plenty of RTP/RTSP servers available, but I'd recommend Icecast. It's going to be easier to set up, and clients are more compatible. (Rather than making an app, for example, you could easily play back these streams in HTML5 audio tags on a web page.) Liquidsoap can send streams directly to Icecast.
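For a sense of what is underneath: a source client connects to Icecast and pushes encoded audio to a mount point, one mount per section. A bare-bones C++ sketch using libshout (host, password, and mount are placeholders; Liquidsoap does all of this for you):

    #include <shout/shout.h>

    // Connects to Icecast as a source and pushes one block of MP3 data to a
    // mount point; you would run one of these per section stream.
    bool streamToIcecast(const unsigned char* mp3, size_t len) {
        shout_init();
        shout_t* s = shout_new();
        shout_set_host(s, "localhost");
        shout_set_port(s, 8000);
        shout_set_password(s, "hackme");
        shout_set_mount(s, "/section1");
        shout_set_protocol(s, SHOUT_PROTOCOL_HTTP);
        shout_set_format(s, SHOUT_FORMAT_MP3);
        if (shout_open(s) != SHOUTERR_SUCCESS) return false;
        shout_send(s, mp3, len);  // a real client loops over encoded blocks
        shout_sync(s);            // sleep until the server wants more data
        shout_close(s);
        shout_free(s);
        shout_shutdown();
        return true;
    }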
Latency
Keeping latency under 2 seconds is going to be a problem. The delay accumulates in buffers all along the chain (the encoder, the server's burst-on-connect, the client's prebuffer), so you'll want to lower the buffers everywhere you can, particularly on your Icecast server. This is on the fringe of what is reasonably possible, so you'll want to test to ensure the latency meets your requirements.
Network
100 clients on the same spectrum is also problematic. (The aggregate bandwidth itself is modest: at 128 kbps per stream, 100 clients need only about 13 Mbps; the hard part is the airtime contention of that many associated devices.) What you need depends on the specifics of your space, but you're right on the line of what you can get away with using regular consumer access points. Given your latency and bandwidth requirements, I'd recommend getting some commercial access points with built-in sector antennas and multiple radios. There are many manufacturers of such gear.
Best of luck with this unique project! Please post some photos of your setup once you've done it.
I'm programming a video-capture app and need to have 2 input sources (USB cams) to record from at the same time.
When I record only the raw footage from both simultaneously, without compression, it works quite well (low CPU load, no video lag), but when compression is turned on the CPU load is very high and the footage lags.
How can I solve this? Or how can I tune the settings so that it can be accomplished?
Note: the raw streams are too big and thus cannot be used; otherwise I would not bother with compression at all and would just leave them as they are.
The AVFoundation framework, in its current configuration, is set up to provide hardware acceleration for only one source at a time. For multiple accelerated sources, one needs to go deeper, to the VideoToolbox framework, and even deeper.
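For reference, going to VideoToolbox directly means creating your own compression sessions, one per camera, through its C API (callable from C++ or Objective-C). A sketch; note that requesting the hardware encoder does not guarantee two accelerated sessions on every machine:

    #include <VideoToolbox/VideoToolbox.h>

    // Called by VideoToolbox with each compressed H.264 frame.
    static void onEncoded(void*, void*, OSStatus status, VTEncodeInfoFlags,
                          CMSampleBufferRef sample) {
        if (status == noErr && sample) {
            // hand the sample to your file writer / muxer here
        }
    }

    // Creates one hardware H.264 encoder session; make one per camera.
    VTCompressionSessionRef makeSession(int32_t width, int32_t height) {
        // Ask for the hardware encoder explicitly.
        const void* k[] = { kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder };
        const void* v[] = { kCFBooleanTrue };
        CFDictionaryRef spec = CFDictionaryCreate(nullptr, k, v, 1,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        VTCompressionSessionRef session = nullptr;
        VTCompressionSessionCreate(nullptr, width, height, kCMVideoCodecType_H264,
                                   spec, nullptr, nullptr, onEncoded, nullptr,
                                   &session);
        CFRelease(spec);
        if (session)
            VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime,
                                 kCFBooleanTrue);
        return session;
    }

    // Then, for each captured frame (e.g. from AVCaptureVideoDataOutput):
    //   VTCompressionSessionEncodeFrame(session, pixelBuffer, pts, duration,
    //                                   nullptr, nullptr, nullptr);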
I have a machine with two 3 GHz dual-core Xeons and four 10k rpm SCSI-320 disks in RAID 0.
The capture card is an Osprey 560, a 64-bit PCI card.
The operating system is currently Windows Server 2003.
The video stream that I can open with VLC using DirectShow is of rather nice quality.
However, trying to save this video stream without loss of quality has proven quite difficult. Using the H.264 codec I can achieve satisfying quality, but all 4 cores jump to 100% load after a few seconds and it starts dropping frames; the machine is not powerful enough for realtime encoding. I've not been able to achieve satisfying MPEG-1 or MPEG-4 quality, no matter which bitrate I set.
Thing is, the disks in this machine are pretty fast even by today's standards, and they are bored. I don't care about disk usage; I want quality.
I have searched in vain for a way to pump that beautiful video stream that I see in VLC onto the disk for later encoding. I reckon the disks would be fast enough; or perhaps something could apply a light compression, enough that the disks can keep up, but not so much as to lose visible quality.
I have tried FFmpeg, as it seems capable of streaming a yuv4 stream down to the disk, but of course FFmpeg is unable to open the DirectShow device (same error as this guy: Ffmpeg streaming from capturing device Osprey 450e fails).
Please recommend capable (and preferably free) software which can do this.
I finally figured it out; it is deceptively simple:
Just uncheck the "transcode" option in the window where you can select an encoding preset.
2 gigabytes per minute is a low price to pay for finally getting those precious memories off of old videotapes in the quality they deserve.