How to set the maximum number of connections on an Axis camera? - ip-camera

I use an AXIS P5532-E camera and need more simultaneous connections to it. However, I get the error "503 service unavailable, the maximum number of clients are already connected".
Is it possible to increase the maximum number of connections to the camera?

Most likely it is not. According to the Axis FAQ, the number of connections may be capped due to hardware limitations.

Related

Code to autoland DJI Matrice 100

I am working on a program for autonomous control of a DJI Matrice drone. I would like a precise landing with a tolerance of approximately 5 cm in the X and Y axes. To achieve this, I control the drone down to 50 cm above the ground and then immediately call the landing() function. But after landing() is called, I see that the drone also moves in the X and Y directions, so landing within the defined tolerance fails. Is there a way to write my own autolanding routine with different constraints?
Thanks.
You can use a combination of a few tools:
Use flightCtrl APIs such as positionAndYawControl, documented here, to command the drone along the z-axis, slowly reducing the desired z position
Monitor the flight status using broadcast->getStatus(), as documented here, and keep sending lower z setpoints until the drone's flight status changes to M100FlightStatus::LANDING
Send the disArm() command to turn the motors off and ensure that the status changes to M100FlightStatus::FINISHING_LANDING.
You might have to experiment a little with the z setpoints and the duration for which you keep sending lower setpoints in order to get this to work.
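The steps above amount to a simple control loop. The real SDK calls (positionAndYawControl, getStatus, disArm) need actual hardware, so this minimal Python sketch replaces them with a hypothetical SimulatedDrone stand-in; only the loop structure is meant to carry over:

```python
class SimulatedDrone:
    """Hypothetical stand-in for the DJI Onboard SDK objects.
    In real code, position_and_yaw_control, get_status and disarm
    would map to flightCtrl/broadcast calls on the vehicle."""
    LANDING = "M100FlightStatus::LANDING"
    FINISHING_LANDING = "M100FlightStatus::FINISHING_LANDING"

    def __init__(self, z=0.5):
        self.z = z                # meters above ground
        self.status = "IN_AIR"
        self.armed = True

    def position_and_yaw_control(self, z_setpoint):
        # The real call commands a position; the simulator just moves there.
        self.z = max(0.0, z_setpoint)
        if self.z <= 0.05:        # close enough for landing detection
            self.status = self.LANDING

    def get_status(self):
        return self.status

    def disarm(self):
        self.armed = False
        self.status = self.FINISHING_LANDING


def autoland(drone, step=0.05):
    """Keep lowering the z setpoint until the drone reports LANDING,
    then disarm -- the sequence described in the answer above."""
    z = drone.z
    while drone.get_status() != drone.LANDING:
        z -= step
        drone.position_and_yaw_control(z)
    drone.disarm()
    return drone.get_status()


drone = SimulatedDrone(z=0.5)
final = autoland(drone)
```

The step size and the rate at which setpoints are sent are exactly the parameters the answer says you will need to tune on the real vehicle.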

DirectShow - changing white balance property

I am capturing data from a web camera using the DirectShow API. To change the white balance value I call the IAMVideoProcAmp::Set method.
I have noticed that for some cameras the white balance value changes immediately (after 1-2 frames the new value is already applied). But for other cameras it is applied incrementally over 50-60 frames, which is too slow for me.
Maybe someone has faced the same problem. Can I configure how fast the new value is applied, or does it depend on the camera's driver?
IAMVideoProcAmp::Set is all you have. There is no generic way to change the white balance or to affect how quickly changes take effect. If you are interested in specific camera models, check with their tech support whether an SDK with model-specific ways to set up the device is available.
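Since the API exposes no "apply speed" knob, about all an application can do is measure how many frames the driver takes to settle, by polling the current value once per captured frame after calling Set (via IAMVideoProcAmp::Get). Here is a minimal Python sketch of that measurement; the IncrementalWhiteBalance class is a made-up model of a driver that ramps the value gradually, not a real DirectShow binding:

```python
class IncrementalWhiteBalance:
    """Hypothetical model of a driver that applies a new white-balance
    value incrementally, a fraction of the remaining gap per frame."""
    def __init__(self, value=4000, rate=0.1):
        self.value = value        # currently applied value
        self.target = value
        self.rate = rate          # fraction of remaining gap per frame

    def set(self, target):        # analogue of IAMVideoProcAmp::Set
        self.target = target

    def next_frame(self):         # driver updates once per captured frame
        self.value += (self.target - self.value) * self.rate


def frames_to_settle(wb, target, tolerance=10, max_frames=200):
    """Count frames until the applied value is within `tolerance`
    of the target -- the analogue of polling Get every frame."""
    wb.set(target)
    for frame in range(1, max_frames + 1):
        wb.next_frame()
        if abs(wb.value - target) <= tolerance:
            return frame
    return max_frames


fast = frames_to_settle(IncrementalWhiteBalance(rate=0.9), 6500)
slow = frames_to_settle(IncrementalWhiteBalance(rate=0.05), 6500)
```

The two rates mimic the two behaviors in the question: a driver that applies most of the change at once settles within a few frames, while one that ramps slowly takes many tens of frames, and nothing on the application side can speed it up.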

low depth resolution on google-project-tango

I see that the resolution of the depth camera is 320x180; however, each depth frame produces only 10K to 15K points. Am I missing a setting?
I looked at the transformation matrices while keeping the device fixed, using an area_learn update method with no ADF loaded. I see non-zero offsets in the translation values, where I expected zero offsets.
Is there a published motion-estimation performance document for Tango that specifies the latency and performance of the IMU + ADF? I am looking for detailed test information.
Thanks
You are right about the resolution of the depth camera, and your results align with mine. Depending on where the depth camera is pointing, I'll get between 5K and 12K points. Scanning the floor surface generates more points since it is flat and uniform.
I think you are experiencing drift. This is expected when not using Area Learning (no ADF loaded). There is a known issue of drift occurring because of Android 4.4 (source: https://plus.google.com/+ChuckKnowledge/posts/dVk4ZgVikgT).
Loading an ADF should help with this, but I wouldn't expect it to be perfect.
I don't know about this. Sorry!

Does a size of pixel differ in different devices?

Just a curiosity, since my earlier question was put on hold and I couldn't communicate any further on it. According to this link, I'm wondering whether the physical size of a pixel changes across different devices, such as computer screens, iPads, and mobile phones. If it does, can a pixel be considered a relative unit with respect to the device? My other curiosity comes from a YouTube video, which says that the size of a pixel changes logically when we change the screen resolution; but even after changing resolutions, I could not see the size of an image change. So: does the size of a pixel stay the same on every device, or does it change only logically according to the resolution of the screen and of the image?
Consider screens with a native maximum (physical) resolution of 1600x900. A 21" monitor with that resolution has a different physical pixel size than a 42" monitor with the same native resolution. Logical resolution is a separate matter, but the logical pixel grouping will likewise differ when the underlying physical display sizes differ.
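The arithmetic behind this is straightforward: the physical pixel pitch is the display's physical diagonal divided by its resolution diagonal. A quick Python sketch using the figures from the answer:

```python
import math

def pixel_pitch_mm(diagonal_inches, width_px, height_px):
    """Physical size of one pixel (pitch, in mm) for a display with
    the given diagonal and native resolution."""
    diagonal_px = math.hypot(width_px, height_px)   # sqrt(w^2 + h^2)
    return diagonal_inches * 25.4 / diagonal_px     # 25.4 mm per inch

small = pixel_pitch_mm(21, 1600, 900)   # 21" monitor at 1600x900
large = pixel_pitch_mm(42, 1600, 900)   # 42" monitor, same resolution
```

With the same resolution on double the diagonal, each pixel is physically twice as large (roughly 0.29 mm vs 0.58 mm here), which is exactly why a pixel is not a fixed physical unit.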

Using the Windows Phone combined motion API to track device position

I'd like to track the position of the device relative to an initial position with high accuracy (ideally) for motions at a small scale (say < 1 meter). The best bet seems to be using motionReading.SensorReading.DeviceAcceleration. I tried this but ran into a few problems. Apart from the noisy readings (which I was expecting and can tolerate), I see behavior that is conceptually wrong: if I start from rest, move the phone around, and bring it back to rest, updating the velocity vector along all dimensions as I go, I would expect the magnitude of the velocity to end up very small (ideally 0). But I don't see that. I have extensively reviewed the available help, including the official MSDN pages, but I don't see any examples where the position/velocity of the device is updated using the acceleration vector. Is the acceleration vector that the API returns (at least in theory) supposed to be the rate of change of velocity, or something else? (FYI: my device does not have a gyroscope, so the API will be the low-accuracy version.)
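The residual velocity described above is the classic dead-reckoning drift problem: even if the reading really is the rate of change of velocity, any constant bias in the samples integrates linearly into velocity (and quadratically into position). A self-contained Python simulation of naive integration, with no Windows Phone API involved; the bias and noise figures are made-up illustrative values:

```python
import random

def integrate_velocity(accels, dt):
    """Naively integrate acceleration samples into a velocity,
    the way the question attempts with DeviceAcceleration readings."""
    v = 0.0
    for a in accels:
        v += a * dt
    return v

random.seed(42)
dt = 1.0 / 50.0     # 50 Hz sensor
n = 500             # 10 seconds; the device is actually at rest throughout
bias = 0.05         # m/s^2 constant sensor bias (assumed, uncalibrated)
noise = [random.gauss(0.0, 0.2) for _ in range(n)]

# True acceleration is zero the whole time; measured = bias + noise.
v_est = integrate_velocity([bias + e for e in noise], dt)

# The bias term alone contributes bias * t = 0.05 * 10 = 0.5 m/s
# of spurious velocity, even though the zero-mean noise largely cancels.
```

This is why accelerometer-only tracking at sub-meter scale is so hard in practice: without an external correction (zero-velocity updates when the device is known to be at rest, or fusion with other sensors), the estimate never returns to zero.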
