Control commands of the Onboard-SDK are published at different frequencies

I am using the Onboard-SDK for DJI M100 on ROS.
I developed code to control the position of the M100 to a certain target position.
However, it doesn't reach the specified target.
For that reason I checked the published control signals in ROS, and in some experiments the frequency of the control signal is not constant at all: sometimes it is 50 Hz, other times 5 Hz, 10 Hz, etc.
I would like to know what is the actual reason behind this.

Assuming your 3.3 V FTDI adapter works and your hardware is in perfect order, I would guess that someone changed the SDK settings in DJI Assistant 2 for you; they do not change on their own. I had a similar problem before, but in my case the cause was that I had burned the API port by using a 5 V FTDI adapter.
Besides, your control commands should be sent to the drone in a fixed-time loop, using a ROS loop rate and the sleep routine, not from inside each callback. The reason is that controlling the drone's position with a PID (or any other control method) is time-dependent.
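A minimal roscpp sketch of such a fixed-rate control loop is shown below; the topic name and the setpoint values are assumptions based on the dji_sdk ROS wrapper and would come from your own controller:

#include <ros/ros.h>
#include <sensor_msgs/Joy.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "m100_position_control");
  ros::NodeHandle nh;

  // The dji_sdk wrapper takes flight control setpoints as sensor_msgs::Joy;
  // the exact topic depends on your setup and is assumed here.
  ros::Publisher ctrl_pub = nh.advertise<sensor_msgs::Joy>(
      "/dji_sdk/flight_control_setpoint_ENUposition_yaw", 10);

  ros::Rate rate(50);  // fixed 50 Hz control loop
  while (ros::ok())
  {
    ros::spinOnce();   // process feedback callbacks (position, attitude, ...)

    sensor_msgs::Joy cmd;
    // Fill in the output of your position controller (PID etc.) here;
    // the values below are placeholders for x, y, z and yaw.
    cmd.axes = {0.0f, 0.0f, 1.0f, 0.0f};
    ctrl_pub.publish(cmd);

    rate.sleep();      // keeps the publishing frequency constant
  }
  return 0;
}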

How do I change the default playout device on Mac Native WebRTC?

I'm using WebRTC (Native Mac & Native Windows -- not JS) and am trying to change the default playout and recording devices and am having a lot of trouble. This is starting to drive me nuts since it should be very simple.
Question: What's the recommended way to change audio playout and recording devices while a call is ongoing on Mac & Windows, natively?
Here's what I've tried:
Method 1
Mac
I noticed the audio device module listens to Core Audio API notifications and adjusts playout and recording devices properly. This works, but I'm not sure if this is the recommended way to change devices.
Windows
I was not able to find a system-wide way of setting the default audio playout/recording device. The only way I could tell MIGHT work is by getting a reference to the audio device module and calling SetPlayoutDevice / SetRecordingDevice on it manually...which leads to Method 2 below:
Method 2
Mac
If possible, I'd rather use SetPlayoutDevice (link) / SetRecordingDevice (link) to change the audio input/output (so Mac & Windows work the same way).
The unit tests for real audio I/O devices show that we should be able to call StopPlayout and StartPlayout around a call to SetPlayoutDevice -- but this makes my app freeze. I've tried it without the calls to StopPlayout and StartPlayout, but then it doesn't seem to do anything. That makes sense, since it looks like only internal state is updated and the actual device is never switched.
Q: How can I change the default audio playout device and recording device on Mac?
Windows
I haven't had a chance to try this out on Windows yet, but Mac not working makes me think there's something I'm missing here.
Answering this myself.
VoEHardwareImpl (https://chromium.googlesource.com/external/webrtc/stable/webrtc/+/refs/heads/master/voice_engine/voe_hardware_impl.cc) seems to have some relevant code.
For playout:
StopPlayout
...set device index...
StereoPlayoutIsAvailable
SetStereoPlayout
InitPlayout
StartPlayout
For recording:
StopRecording
...set device index...
StereoRecordingIsAvailable
SetStereoRecording
InitRecording
StartRecording
Depending on the commit you're on, you might need to ensure you won't get into a deadlock situation. Some of these methods acquire locks so make sure the lock isn't acquired yet if you're calling a method that requires locks. Better yet -- do this a level above or wrap the adm if possible.
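For reference, a rough sketch of the playout sequence above against webrtc::AudioDeviceModule might look like the following (the SwitchPlayoutDevice helper name, the adm pointer, the device index and the error handling are placeholders, and the include path assumes a recent WebRTC checkout):

#include "modules/audio_device/include/audio_device.h"

// Switch the playout device while a call is running; returns true on success.
bool SwitchPlayoutDevice(rtc::scoped_refptr<webrtc::AudioDeviceModule> adm,
                         uint16_t device_index) {
  if (adm->StopPlayout() != 0) return false;

  if (adm->SetPlayoutDevice(device_index) != 0) return false;

  bool stereo_available = false;
  adm->StereoPlayoutIsAvailable(&stereo_available);
  adm->SetStereoPlayout(stereo_available);

  if (adm->InitPlayout() != 0) return false;
  return adm->StartPlayout() == 0;
}

The recording sequence has the same shape, using the Recording variants of each call.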

OSDK4.x: Can you use DJI Assistant 2 for Matrice with payload actions?

Is it possible with OSDK4.x to command payload and flight actions and use the DJI Assistant 2 for Matrice concurrently?
Previously, we have been using the M210V1 with OSDK3.9. Using the DJI Assistant 2 for Matrice to simulate the drone flight is key to our ability to develop our system.
However, the M210V2 and OSDK4.x require the USB port of the drone to be connected to the Linux device running the OSDK; otherwise any payload (GimbalManager, CameraManager) action, such as GimbalManager::resetSync, throws an error.
This is not ideal for development since we cannot use the simulator (on MacOS) and connect the USB to the Linux device (there is only one USB port on the drone). Has anyone solved this problem?
Yes and no.
On the M210 there is only one USB2 port, and it can either connect to the PC-side Assistant 2 or connect to the onboard PC for the OSDK stream. You can think of it as a bug in the design phase.
Yes, you can run part of the OSDK and run the simulator without the payload/camera-related functions. If I recall correctly, I could still "rostopic echo" the gimbal orientation topic from the drone; only the image topics are disabled. You can simulate GPS-based flight in the simulator and try to set the gimbal direction. I remember this was achievable.
There is no way to run the simulator and get payload functions such as images from the OSDK at the same time, so having both the image stream and the drone running in the simulator is not achievable.
If you really want both at the same time, I suggest you move to the M300, which has dual USB-C interfaces for both the camera stream and simulation.

How can I get a shutter signal from the Matrice 600 (PRO) A3?

I have an M600 PRO (A3) and I need to connect an Arduino to it to receive the drone's shutter signal.
In fact, I would like to use Drone Deploy to take a photo at each waypoint, and at the moment of each photo I would like my Arduino to receive this signal so it can perform a specific task. I have no camera attached to the drone.
Could someone help me with this task? I have already been able to connect the A3 to the Arduino, but I still cannot make sense of the data bus.
I have already connected an Arduino to the DJI Matrice 600 Pro, but I could not find the hex code related to the shutter signal.
I would like to receive the shutter signal from the Matrice 600 on my Arduino, in order to program a special task tied to this signal.
Since you say you want to use Drone Deploy, you won't be able to use the Mobile SDK and will need to build something with the Onboard SDK. It's unclear what you want since you say you have no camera but want to trigger something every time a picture is taken.
If you just want to detect when Drone Deploy tries to take a picture, you may get an error depending on whether the SDK thinks a camera is detected or not. I haven't tried the Onboard SDK much, and especially not without a camera attached, so I don't know if an error will result or if you can set your trigger in the take-picture callback.
Either way, the best place to start is probably the camera sample app in the Onboard SDK.
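If the companion computer running that sample is set up to raise a digital line (or send a serial byte) whenever it detects a shot - which is an assumption about your wiring, not something the SDK does for you - the Arduino side could be as simple as the sketch below; the pin number and the task are placeholders:

// Hypothetical Arduino-side sketch: assumes the companion computer pulls
// TRIGGER_PIN high for a moment whenever a photo event is detected.
const int TRIGGER_PIN = 2;          // interrupt-capable pin on most Arduinos
volatile bool shutterSeen = false;

void onShutterPulse() {
  shutterSeen = true;               // keep the ISR short; do the work in loop()
}

void setup() {
  Serial.begin(115200);
  pinMode(TRIGGER_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(TRIGGER_PIN), onShutterPulse, RISING);
}

void loop() {
  if (shutterSeen) {
    shutterSeen = false;
    Serial.println("Shutter detected - running task");
    // ...perform the task tied to the photo event here...
  }
}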

Does NodeMCU support low power mode?

I'm trying to make a wireless system that detects if someone is sitting on the toilet or not.
The idea is using a NodeMCU connected to a Wi-Fi network, updating a database with info about the state of the toilet, "busy" or "not busy".
I would like to know if NodeMCU supports low power mode via interrupts so that I could maintain the system with a battery.
Thank you in advance, :)
In low power mode the NodeMCU cannot be woken up by an internal event.
You have to trigger it externally, via a button, a sensor interrupt, etc.
So I assume you'll need a sensor such as a PIR for motion detection. If you have a battery-friendly sensor, you can wake the board by pulsing the reset line (RST, the pin that GPIO16 drives for timed wake-ups) while it is in deep sleep.
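A minimal sketch of that pattern with the Arduino core for the ESP8266 is below; the Wi-Fi credentials and the database update are placeholders:

#include <ESP8266WiFi.h>

const char* ssid     = "your-ssid";      // placeholder
const char* password = "your-password";  // placeholder

void setup() {
  Serial.begin(115200);

  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(250);                          // wait for the connection
  }

  // ...report "busy" / "not busy" to the database here (HTTP, MQTT, ...)...

  // 0 = sleep until RST is pulled low, e.g. by the PIR sensor circuit.
  // For timed wake-ups instead, wire GPIO16 (D0) to RST and pass a duration in microseconds.
  ESP.deepSleep(0);
}

void loop() {
  // never reached: the board resets on wake-up and runs setup() again
}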

Does using a barcode scanner as a keyboard wedge imply you can't confirm receipt of the scan?

I have an extremely simple application running off a series of deprecated scanners that picks up a barcode scan off a serial port and sends back to the scanner an ok that it received the scan. Based on that, the scanner flashes green and the user knows they can continue.
I like this model over my understanding of a keyboard wedge because if something happens to the application picking up the scan (the application hangs, the form with the focus gets changed, the PC hangs, the PC can't keep up picking up the scans), the person holding the scan gun will know there is a problem because they won't receive the green flash and they won't be able to continue scanning.
I'm looking at adding some scanners, and it seems many people are using barcode scanners that effectively act as keyboard wedges. Some of these scanners have ranges that exceed 100 feet, implying people are using them far away from the PC (as my users are). So I'm wondering if I'm missing something about the keyboard wedge model. Is there some mechanism I'm missing that ensures a scan decoded by a scanner acting as a keyboard wedge actually reaches the application running on the PC? A full-blown hand-held computer running something like Windows Mobile seems like massive overkill just to ensure my user isn't scanning data that never makes it into the application, and so does a mid-range scanner with a keypad and screen - but is the latter the entry point for any sort of programmability of the scanner?
You are correct - there isn't a feedback loop to the scanner when running as a wedge. We use wedge scanners a lot, and in a modern environment (i.e., Windows, multiple apps, etc.), focus, "dropped scans", etc., are all real problems.
We're in the middle of switching over to a different way. If you have your choice of hardware, many new USB barcode scanners have the ability to operate in a serial emulation mode that allows the same kind of interaction you describe (where you can prevent a second scan until the host has ACK'd the first, or you can beep/flash something on the scanner as an ACK). Also, there's a USB HID POS (point of sale) mode that some higher-end USB scanners support that gives you an even greater degree of flexibility, with the added bonus of "driver free" installation (it looks like a generic HID device to the system, like a joystick or keyboard, but with 2-way comm ability). The downside of POS mode is that it's a little harder than serial programming, but there are abstraction layers available for different platforms.
RF mobile computers with built in scanners, like the Symbol MC9090-G, are by far the most flexible and what we use the most. As for wedges, depending on the distance from the PC and factory environment - we have used visual feedback via the PC screen and audio via the PC speakers. The users listen for the audio feedback after each scan and when they don't hear it they look back to the PC screen for visual feedback as to the problem. Not perfect but it has worked well.
