I have a camera that is ONVIF compatible.
If I want to zoom in/out, I presently have to send this URL to the camera:
http://192.168.2.88/cgi-bin/ptz_cgi?action=FocusAdd&steps=50&user=admin&pwd=admin
This is proprietary to my camera so I would like to do the same with ONVIF.
My question:
Is using ONVIF as simple as sending:
ONVIF://192.168.2.88:2010/some command ?
If so, what is the command :)
I am using Delphi XE2
Thank you.
No, it is not as easy as a CGI-style protocol. The main differences are:
ONVIF is based on SOAP, while many proprietary protocols are based on REST or just parameters encoded in the URL
the ONVIF device model is more complicated, because it supports a wider set of use cases.
Thus, after you either generate the code from the WSDL files or get a library that implements the necessary functions, you have to:
get the device services
verify that it has a PTZ service
verify that it has a Media service, either 1 or 2 (the latter is for profile T devices)
get the list of media profiles
select the media profile that has a PTZNode and that is actually the one you are looking for
select an adequate coordinate space from the PTZ service capabilities
send the Move command with the correct parameters
This may seem overly complex, but you need to remember that the ONVIF protocol has to support devices with more than one input, such as multichannel encoders. These encoders may have a few fixed cameras attached, while other connected cameras may have a PTZ controlled by the encoder. In practice, the list I just gave you lets you understand what the device you are controlling looks like.
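For illustration only, here is roughly how that sequence could look with the python-onvif-zeep package (the question uses Delphi XE2, so treat this purely as a sketch of the ONVIF call flow; the host, port, credentials and zoom speed are assumptions taken from the question's URL):

    # Sketch only: assumes the python-onvif-zeep package and a camera at the
    # address/credentials from the question. Error handling is omitted.
    import time
    from onvif import ONVIFCamera

    cam = ONVIFCamera('192.168.2.88', 80, 'admin', 'admin')

    # Get the device services (this is where you find out whether PTZ/Media exist).
    media = cam.create_media_service()
    ptz = cam.create_ptz_service()

    # Pick a media profile that actually has a PTZ configuration attached.
    profiles = media.GetProfiles()
    profile = next(p for p in profiles if getattr(p, 'PTZConfiguration', None))

    # Inspect the PTZ options to see which coordinate spaces are supported.
    options = ptz.GetConfigurationOptions(
        {'ConfigurationToken': profile.PTZConfiguration.token})

    # Zoom in for one second using a continuous move, then stop.
    req = ptz.create_type('ContinuousMove')
    req.ProfileToken = profile.token
    req.Velocity = {'Zoom': {'x': 0.5}}   # a negative x would zoom out
    ptz.ContinuousMove(req)
    time.sleep(1.0)
    ptz.Stop({'ProfileToken': profile.token, 'PanTilt': False, 'Zoom': True})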
As a beginner at working with these kinds of real-time streaming services, I've spent hours trying to work out how this is possible, but can't seem to work out precisely how I'd go about it.
I'm prototyping a personal basic web app that does the following:
In a web browser, the web application has a button that says 'Stream Microphone' - when pressed, it streams the audio from the user's microphone (the user obviously has to consent to give permission to send their microphone audio) through to the server, which I was presuming would be running node.js (no specific reason at this point, just thought this is how I'd go about doing it).
The server receives the audio close enough to real-time somehow (not sure how I'd do this).
I can then run ffmpeg on the command line, take the audio as it comes in in real time, and add it as the sound to a video file that I want to play (let's just say I'm going to play testmovie.mp4).
I've looked at various solutions - such as maybe using WebRTC, RTP/RTSP, Piping audio into ffmpeg, Gstreamer, Kurento, Flashphoner and/or Wowza - but somehow they look overly complicated and usually seem to focus on video along with audio. I just need to work with audio.
As you've found, there are numerous different options for receiving the audio from a WebRTC-enabled browser. The options, from easiest to most difficult, are probably:
Use a WebRTC-enabled server such as Janus, Kurento, Jitsi (not sure about Wowza), etc. These servers tend to have plugin systems, and one of them may already have the audio mixing capability you need.
If you're comfortable with Node, you could use the werift library to receive the WebRTC audio stream and then forward it to FFmpeg (a rough sketch of the FFmpeg side is shown after this list).
If you want to take full control over the WebRTC pipeline and potentially do the audio mixing as well you could use gstreamer. From what you've described it should be capable of doing the complete task without having to involve a separate FFmpeg process.
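To make the "forward it to FFmpeg" part of the second option concrete, here is a rough sketch of the FFmpeg side only; the sample rate, channel count and file names are assumptions, and the WebRTC part is left to whichever library you pick:

    # Sketch: pipe decoded PCM audio from your WebRTC library into FFmpeg, which
    # muxes it with the video track of testmovie.mp4. Parameters are assumptions.
    import subprocess

    ffmpeg = subprocess.Popen(
        [
            'ffmpeg',
            '-re', '-i', 'testmovie.mp4',                                # video to keep
            '-f', 's16le', '-ar', '48000', '-ac', '1', '-i', 'pipe:0',  # raw PCM on stdin
            '-map', '0:v', '-map', '1:a',                                # video from file, audio from pipe
            '-c:v', 'copy', '-c:a', 'aac', '-shortest',
            'output.mp4',
        ],
        stdin=subprocess.PIPE,
    )

    def on_audio(pcm_bytes: bytes) -> None:
        # Wire this up as the "decoded audio" callback of your WebRTC library.
        ffmpeg.stdin.write(pcm_bytes)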
The way we did this is by creating a Wowza module in Java that would take the audio from the incoming stream, take the video from wherever you want it, and mix them together.
There's no reason to introduce a third party like ffmpeg into the mix.
There's even a sample from Wowza for this: https://github.com/WowzaMediaSystems/wse-plugin-avmix
I am working with Google Tango to extract data from the tablet and use it at the same time on another device. I am trying to record the data and use it with another laptop via live streaming.
I've looked at various topics about it and found the Paraview topic. However, that app records the data, saves it as a ZIP file and sends it via Bluetooth (which is fine for me). I do not want to save the data as a ZIP file and send it to another device; I want to record and use the data via live streaming (Bluetooth or Wi-Fi).
Is that possible? How can I do it?
Paraview shared the source code, so I think I can change it to make it work for me. However, I am not really used to programming.
Thank you very much for your help. I really appreciate it.
You might want to use sockets: set up a socket server on your computer and connect to it from your tablet. Sockets are a way to transfer information over, for example, the TCP protocol.
However, there might be a more efficient way to do what you want to achieve with USB debugging. I have not yet been able to get USB debugging working on the tablet and computer I worked with.
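To make the socket suggestion concrete, here is a minimal sketch of the server side on the computer; the port number and the assumption that the tablet sends text lines are mine:

    # Sketch: a TCP server on the computer that accepts a connection from the
    # tablet and prints whatever it sends. Port and data format are assumptions.
    import socket

    HOST, PORT = '0.0.0.0', 5000   # listen on all interfaces of the computer

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen(1)
        conn, addr = server.accept()          # the Tango tablet connects here
        print('Tablet connected from', addr)
        with conn:
            while True:
                data = conn.recv(4096)        # raw bytes sent by the tablet app
                if not data:
                    break
                print(data.decode(errors='replace'))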
You could use TangoAnywhere, which I developed so you can broadcast Google Tango position and orientation data to any device.
https://play.google.com/store/apps/details?id=de.grauonline.tangoanywhere
You connect using a TCP client on your PC/Mac (a simple TCP client written in Python, for example) to port 8080 of your Tango device and will get the position data in real time.
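A minimal sketch of such a client; the tablet's IP address and the newline-delimited text format are assumptions, so check the app's documentation for the actual format:

    # Sketch: connect to port 8080 of the Tango device and print incoming data.
    import socket

    TANGO_IP = '192.168.1.50'   # replace with your tablet's IP address (assumption)

    with socket.create_connection((TANGO_IP, 8080)) as sock:
        buffer = b''
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buffer += chunk
            while b'\n' in buffer:
                line, buffer = buffer.split(b'\n', 1)
                print(line.decode(errors='replace'))   # one position/orientation sample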
I recommend the Tango ROS Streamer app, which lets you choose which data (position, point cloud, RGB image) to stream.
You will need ROS to retrieve the data. On Linux, ROS is easy to install; otherwise, use a Docker image of ROS.
Caution: the theoretical Wi-Fi bandwidth is not enough to stream all the RGB frames at full resolution without dropping some.
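Once ROS is installed, reading the streamed data is a small subscriber node; a sketch follows, where the topic name and message type are assumptions, so verify them with rostopic list / rostopic info on your setup:

    # Sketch of a ROS subscriber for the streamed Tango pose; topic name and
    # message type are assumptions - check `rostopic list` on your machine.
    import rospy
    from geometry_msgs.msg import TransformStamped

    def on_transform(msg):
        t = msg.transform.translation
        rospy.loginfo('device position: x=%.3f y=%.3f z=%.3f', t.x, t.y, t.z)

    rospy.init_node('tango_listener')
    rospy.Subscriber('/tango/transform/start_of_service_T_device',
                     TransformStamped, on_transform)
    rospy.spin()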
The problem background - there are two different Windows applications that are trying to access the webcam on the computer at the same time. Currently, only one application is able to access it. I want to allow both applications to access the webcam simultaneously. A common example of my problem is Skype and Yahoo Messenger trying to access the webcam on the computer at the same time.
I found a few software packages (manycam.com, http://www.splitcamera.com/) that allow this on Windows. But I am not sure how they implemented it. I want to write the code myself since it needs to be integrated with other APIs.
I would appreciate it if anyone can shed light on how to write a device wrapper to achieve this.
The kernel camera driver registers several OS-defined callbacks. One of these callbacks is used for the output stream. Dedicated Windows applications have an interface to this stream - you'll need to do some reading on this subject; it's not something that can be covered within the scope of SO. You need a component that is layered in between the client applications and the camera driver. This component should intercept your camera driver's output and duplicate it for the registered clients. This can be achieved either in kernel mode (a filter driver) or in user mode (preferable). http://msdn.microsoft.com/en-us/library/windows/hardware/ff557573%28v=vs.85%29.aspx is a good place to start.
Note: this functionality might already be supported by your camera software (though I think the chances are very slim), and in that case you should dig into the corresponding documentation.
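As a very rough illustration of the user-mode idea, one common shortcut is to have a single "splitter" process own the physical camera and republish its frames through a virtual camera driver that tolerates multiple readers. The sketch below assumes OpenCV, the pyvirtualcam package and an already-installed virtual camera (such as the OBS one); it is not the filter-driver approach described above, just the user-mode duplication concept:

    # Sketch: grab the physical webcam once and republish frames via a virtual
    # camera, so multiple applications open the virtual device instead of
    # fighting over the real one. Assumes OpenCV, pyvirtualcam and a virtual cam.
    import cv2
    import pyvirtualcam

    capture = cv2.VideoCapture(0)                      # the one real webcam handle
    width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))

    with pyvirtualcam.Camera(width=width, height=height, fps=30) as vcam:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            vcam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB
            vcam.sleep_until_next_frame()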
I would like to achieve the following:
Set up a proxy server to handle video requests by clients (for now, say all video requests from any Android video client) from a remote video server like YouTube, Vimeo, etc. I don't have access to the video files being requested, hence the need for a proxy server. I have settled for Squid. This proxy should process the video signal/stream being passed from the remote server before relaying it back to the requesting client.
To achieve the above, I would either
1. Need to figure out the precise location (URL) of the video resource being requested, download it really fast, and modify it as I want before HTTP streaming it back to the client as the transcoding continues (simultaneously, with some latency)
2. Access the raw byte stream, pipe it into a transcoder (I'm thinking ffmpeg) and proceed with the streaming to client (also with some expected latency).
Option #2 seems tricky to do but lends more flexibility to the kind of transcoding I would like to perform. I would have to actually handle raw data/packets, but I don't know if ffmpeg takes such input.
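A rough sketch of what option #2 might look like, assuming ffmpeg is fed through stdin (pipe:0) and read through stdout (pipe:1); the codec and container choices below are placeholders, not recommendations:

    # Sketch: feed the bytes fetched from the origin server into FFmpeg's stdin
    # and relay its stdout to the requesting client.
    import subprocess
    import threading

    ffmpeg = subprocess.Popen(
        ['ffmpeg', '-i', 'pipe:0',
         '-c:v', 'libx264', '-preset', 'veryfast', '-c:a', 'aac',
         '-f', 'mpegts', 'pipe:1'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    )

    def feed(upstream_chunks):
        # upstream_chunks: an iterable of byte chunks read from the origin response.
        for chunk in upstream_chunks:
            ffmpeg.stdin.write(chunk)
        ffmpeg.stdin.close()

    def relay(upstream_chunks, send_to_client):
        threading.Thread(target=feed, args=(upstream_chunks,), daemon=True).start()
        while True:
            out = ffmpeg.stdout.read(64 * 1024)
            if not out:
                break
            send_to_client(out)   # stream the transcoded bytes back to the client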
In short, I'm looking for a solution to implement real-time transcoding of videos that I do not have direct access to from my proxy. Any suggestions on the tools or approaches I could use? I have also read about Gstreamer (but could not tell if it's applicable to my situation), and MPlayer/MEncoder.
And finally, a rather specific question: Are there any tools out there that, given a YouTube video URL, can download the byte stream for further processing? That is, something similar to the Chrome YouTube downloader but one that can be integrated with a server-side script?
Thanks for any pointers/suggestions!
You should ask single coding questions. What you asked is more like a general "how would I write my application". A few comments though:
Squid is an HTTP proxy; video is usually streamed over e.g. RTSP.
Yes, there are tools that grab the RTSP URL from a YouTube URL; be sure to understand the terms of use for the video service before going that way, though.
GStreamer has a gst-rtsp-server module that contains an RTSP server, which can also be used as a proxy for a given RTSP stream.
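For the last point, a minimal sketch of re-serving an RTSP stream with gst-rtsp-server's Python bindings; the upstream URL, mount point and pipeline elements are assumptions, and a transcoding step could be spliced into the launch string:

    # Sketch: proxy/re-serve an upstream RTSP stream with gst-rtsp-server.
    import gi
    gi.require_version('Gst', '1.0')
    gi.require_version('GstRtspServer', '1.0')
    from gi.repository import Gst, GstRtspServer, GLib

    Gst.init(None)

    server = GstRtspServer.RTSPServer()
    factory = GstRtspServer.RTSPMediaFactory()
    # Pull the upstream stream (optionally re-encode here) and pay it out again.
    factory.set_launch(
        '( rtspsrc location=rtsp://upstream.example/stream ! rtph264depay ! h264parse '
        '! rtph264pay name=pay0 pt=96 )'
    )
    factory.set_shared(True)   # share one upstream connection between clients

    server.get_mount_points().add_factory('/proxy', factory)
    server.attach(None)        # default: rtsp://<this-host>:8554/proxy

    GLib.MainLoop().run()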
I want to do stress/performance testing of my content management site, especially of the hosted streamed-video part. I am using IIS to host the videos. More specifically, I am using the new Windows Server 2008 x64 and IIS 7.0.
The confusion is,
I plan to write code to start a lot of threads, and in each thread send a web request to the video URL and read the response stream from the server. But I am not sure whether this behaves the same as a real user using a player to render the video (in my code, I just read the stream, without really playing it or writing it anywhere). I want to test as close to the real scenario as possible;
I also plan to use a real media player to render the video (Windows Media Player or whatever other player). My concern is that, since a player will utilize some hardware or other resources (video-card memory?) to decode/render the video (not sure, needs guru help to check and confirm), if I start multiple players on my test machine there may be potential hardware or resource contention between them. If there is contention, it is also not an actual end-user scenario, i.e. few users will start 100 players on their machine. :-)
Does anyone have any advice to me?
BTW: I'd prefer a .NET-based solution, but it's not a must.
thanks in advance,
George
You should use mplayer. It has a lot of command-line options. I don't know whether all these options are available under Windows, but under Linux something like this is possible:
mplayer some_url -dumpvideo -dumpfile some_file
It will behave the same as a "normal" player, I think, and your test machine won't need to handle hundreds of decompression threads, so it fits your needs 1 and 2.
If you know the bit rate of your video stream, you can pace your download requests to simulate video player clients. The bit rate can be calculated from the information carried in the stream, but that's a little more complicated. There is software for stress testing video servers too, such as this IP Video Monitor.
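A rough sketch of that pacing idea, where the URL and bit rate are placeholders: each thread throttles its reads so it only consumes data as fast as a real player would.

    # Sketch: download a stream no faster than its bit rate, so each thread
    # behaves like a viewer draining the stream in real time.
    import time
    import urllib.request

    def simulate_player(url: str, bitrate_bps: int, chunk_size: int = 64 * 1024) -> None:
        bytes_per_second = bitrate_bps / 8
        start = time.monotonic()
        received = 0
        with urllib.request.urlopen(url) as resp:
            while True:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                received += len(chunk)
                # Sleep until real time catches up with "playback" time.
                ahead = received / bytes_per_second - (time.monotonic() - start)
                if ahead > 0:
                    time.sleep(ahead)

    # Example: 100 concurrent "viewers" of a 2 Mbit/s stream (with `import threading`):
    # for _ in range(100):
    #     threading.Thread(target=simulate_player,
    #                      args=('http://server/video.mp4', 2_000_000)).start()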