Using Zenmuse Gimbal Connector for 12V Supply of Laser Scanner - dji-sdk

We've developed a lightweight, survey-grade laser scanner (350 grams), and for practical use it would be ideal to attach it to the gimbal connector, since that position ensures a good field of view, similar to the requirements of a camera. The idea is therefore to replace the standard gimbal camera (Z3, X3, X5, etc.) with the laser scanner (similar to other 3rd-party devices) and use the attachment both for mechanical mounting and for power supply at the same time. Further integration via the SDK may come in the future, but right now the scanner works independently via GSM (needed anyway for GNSS-RTK) and a cloud connection.
My trial setup is as follows (I'm aware that there are several differences between gimbals):
A3 controller
LB2
Camera adapter (in fact the gimbal controller itself?) with 3-pin and 7-pin connectors to the LB2 and A3, and power supplied directly from the battery. The camera connector is 20-pin.
Gimbal Z3
Mechanical integration is obvious, but I can't get the 12 V supply to appear. My findings so far:
The controller contains a 12 V / 5 A regulator, which should be more than sufficient for the laser scanner (max. 700 mA versus up to 2 A for the gimbal)
Pins 12, 14 and 16, which are supposed to carry 12 V, are in fact open/floating
There is a supply voltage of about 3.2 V on the connector (for logic?)
The gimbal initializes even without a connection to the LB2 or A3, so providing power is something negotiated between the gimbal and the adapter
There is a difference in the wiring of the 20-pin connector: pins 1 and 20 on the gimbal side are grounded, while on the adapter side they sit at a 3.2 V level
It seems that a switch has to be activated to provide 12 V on the pins, and the final question is: HOW?
Is some magic circuitry needed, e.g. tying pin 1 or 20 of the adapter to ground?
Or is the 12 V output activated via commands on the CAN/USB bus between the gimbal and the adapter?
Any hints are highly appreciated.
Thanks,
DJIMUC

The canonical way to integrate with the DJI gimbal connector is through their Payload SDK, which nominally only supports the Matrice 200 series. Since the interface routes through their SKYPORT adapter, which handles the integration with the aircraft, they don't provide any more details on the gimbal connection itself. The hardware introduction page in their Payload SDK documentation may be helpful to you.

Related

Is there real-time DC offset compensation in Ettus N310?

I am working with an Ettus N310 that is controlled by some 3rd-party software. I don't have much insight into how they set up and control the device; I just tell it what center frequency to tune to and when to grab IQ. If I receive a signal, let's say a tone, at or very near the center frequency, I end up with a large DC offset that jumps around every few hundred microseconds. If I offset the signal well away from the center frequency, the DC offset is negligible. From what I see in Ettus' documentation, DC offset compensation is something that's set once when the device starts receiving, but it looks to me like it is being done periodically here while the USRP is acquiring data. If I receive a signal near the center frequency, the DC offset compensator gets confused and creates a worse bias. Is this a feature of the N310 that I am not aware of, or is this probably something that the 3rd-party controller is doing?
Yes, there's a DC offset compensation in the N310. The N310 uses an Analog Devices RFIC (the AD9371), which has these calibrations built-in. Both the AD9371 and the AD9361 (used in the USRP E3xx and B2xx series) don't like narrow-band signals close to DC due to their calibration algorithms (those chips are optimized for telecoms signals).
Like you said, the RX DC offset compensation happens at initialization. At runtime, the quadrature error correction (QEC) kicks in. The manual holds a table of these calibrations: https://uhd.readthedocs.io/en/latest/page_usrp_n3xx.html#n3xx_mg_calibrations. You can try turning off the QEC tracking and see if it improves your system's performance.

Esp32 Low Frequency PWM

Good morning. I need to generate a 0.4 Hz PWM signal, but with LEDC I can only reach 1 Hz on the ESP32. Could you tell me if there is a way to do it?
I assume you are currently using the functions from some ESP32 library. As in the Arduino world, there is another way: you can set the right bits yourself to manually create a PWM signal. Here is the Technical Reference Manual:
https://www.espressif.com/sites/default/files/documentation/esp32_technical_reference_manual_en.pdf
You will find all the relevant information in chapter 14 (LED_PWM).
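Before diving into registers, a quick calculation suggests 0.4 Hz should be reachable by the LEDC hardware itself. This is a sketch under stated assumptions: the 80 MHz APB clock, a 20-bit maximum timer resolution, and a 10-bit integer clock divider, as described in the LED_PWM chapter; verify these against your chip revision.

```python
# Sanity check: can the ESP32 LEDC timer reach 0.4 Hz at register level?
# Assumed hardware limits (from TRM ch. 14, LED_PWM -- verify for your chip):
APB_CLK_HZ = 80_000_000   # APB clock feeding the LEDC timer
MAX_RES_BITS = 20         # maximum timer counter width
MAX_INT_DIV = 1023        # integer part of the 10.8 fixed-point divider

def ledc_divider(freq_hz, res_bits=MAX_RES_BITS, clk_hz=APB_CLK_HZ):
    """Clock divider needed so the counter overflows at freq_hz."""
    return clk_hz / (freq_hz * (1 << res_bits))

div = ledc_divider(0.4)
print(f"divider for 0.4 Hz at {MAX_RES_BITS} bits: {div:.2f}")  # ~190.73
print("within divider range:", 1 <= div <= MAX_INT_DIV)          # True
```

Since the required divider (~191) fits comfortably in the integer divider range, the 1 Hz floor you are hitting is likely a library limitation rather than a hardware one.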

PWM transistor heating - Raspberry Pi

I have a Raspberry Pi and an auxiliary PCB with transistors for driving some LED strips.
The strips' datasheet says 12 V, 13.3 W/m. I'll use 3 strips in parallel, 1.8 m each, so 13.3 × 1.8 × 3 = 71.82 W, which at 12 V is almost 6 A.
I'm using an 8 A transistor, the E13007-2.
The project has 5 channels of different LEDs: RGB and 2 types of white.
R, G, B, W1 and W2 are connected directly to Pi pins.
The LED strips are connected to 12 V, and to GND through CN3 and CN4 (via the transistors).
Transistor schematic.
I know that's a lot of current passing through the transistors, but is there a way to reduce the heating? I think they're reaching 70-100 °C. I already had a problem with one Raspberry Pi, and I think it's getting dangerous for the application. I have some large traces on the PCB, so that's not the problem.
Some thoughts:
1 - A resistor driving the base of the transistor. Maybe it won't reduce heating, but I think it's advisable for short-circuit protection. How can I calculate it?
2 - The PWM frequency is 100 Hz. Does it make any difference if I reduce this frequency?
The BJT you're using has a current gain (hFE) of roughly 20. This means the collector current is roughly 20 times the base current, or the base current needs to be 1/20 of the collector current, i.e. 6 A / 20 = 300 mA.
The Raspberry Pi certainly can't supply 300 mA from its IO pins, so you're operating the transistor in the linear region, which causes it to dissipate a lot of heat.
To keep it simple, change your transistors to MOSFETs with a low enough threshold voltage (e.g. around 2.0 V, so they conduct fully at the 3.3 V IO level).
An N-channel MOSFET will run much cooler, provided you apply enough gate voltage to fully enhance it. Since this is not a high-volume item, why not simply use a MOSFET gate driver chip? Then you can use a device with low RDS(on). Another option is the Siemens BTS660 (BTS50085B, TO-220): it is a high-side driver that you drive with an open-collector or open-drain output. It will switch 5 A at room temperature with no heat sink, is rated for much more current, and is available in a TO-220-style package. It is obsolete but still available, as is its replacement. Remember that MOSFETs are voltage controlled while BJTs are current controlled.
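The numbers behind the answers above can be sketched out quickly. The hFE of 20 comes from the answer itself; the saturation voltage and RDS(on) values below are illustrative assumptions, not measured figures for these specific parts.

```python
# Why the BJT runs hot and a MOSFET would not (illustrative numbers).
I_load = 6.0            # A, total LED strip current from the question
hFE = 20                # assumed current gain of the E13007-2 at high Ic
I_base = I_load / hFE   # base current needed to saturate the BJT
print(f"required base drive: {I_base * 1000:.0f} mA")  # 300 mA, far beyond a Pi pin

# Dissipation comparison when fully on (assumed device parameters):
Vce_lin = 1.0           # V, assumed drop with the BJT starved of base current
P_bjt = I_load * Vce_lin
Rds_on = 0.02           # ohm, assumed for a low-RDS(on) logic-level MOSFET
P_fet = I_load**2 * Rds_on
print(f"BJT: ~{P_bjt:.1f} W   MOSFET: ~{P_fet:.2f} W")  # ~6.0 W vs ~0.72 W
```

Even under these rough assumptions the MOSFET dissipates an order of magnitude less, which is the whole argument for switching parts.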

PWM control of a 12V DC motor with L9110S driver and Raspberry Pi

My project uses a simple L9110S driver module and it controls 2 motors of the type JGA25-370 (DC 12V, 100RPM). So far I managed to easily control it by setting the pins of the L9110S H-bridge to either high or low.
However, I would like to add PWM control via the Raspberry Pi for this type of motor. Would anyone have suggestions for the best PWM frequency, so that I can safely change the motor speed with the duty cycle without having the motors jerk and twitch? I have been experimenting, and with values higher than 250 Hz the motors behave badly. What would be the "safest" setting?
The motors are currently powered by 8x 1.2V NiMH batteries, so they get around 10 V.

How to interface both a PCF8563 RTC and a 24LC512 EEPROM with 1K pull-up resistors on SDA and SCL

I have been working on code where a 24LC512 and a PCF8563 are interfaced together. The PCF8563 breakout board has two 1K pull-up resistors on the SDA and SCL lines, so I am planning to use the same resistors for the EEPROM.
I had code for the EEPROM which worked perfectly before with 4.7K pull-up resistors, so to make the code work with 1K pull-ups I made the following changes (coding was done for a PIC16F877A with the XC8 compiler):
SSPSTAT = 0x80;
SSPADD = (_XTAL_FREQ / (4 * c)) - 1;  // where c is 400000
But sadly the code is not working as expected. Could someone please help me out by saying what changes I should make to the earlier code so that it can work with 1K pull-up resistors?
Thanks in advance :)
The datasheet says
R = tr/Cb
where
tr is the rise time (maximum specified at 1 µs)
Cb is the capacitive load on each bus line, with a specified max of 400 pF.
1×10^-6 / 400×10^-12 = 2500, so 2.7K would be the best choice if you're close to the maximum capacitance.
1K ohm sounds a bit low though; I'd try unsoldering the resistors and using 2.7K to 4.7K ohm instead. Only one set is needed if the bus lines are kept short.
Use an oscilloscope to check the signal shape. If the traces aren't nice and square, you need to adjust the resistors or shorten the bus wires. A rise time longer than 1 µs may cause problems too.
It would also make sense to use a much lower bus speed, since then capacitance won't be a big deal. For a calendar and a small EEPROM, 100 kHz or even lower is plenty fast in most circumstances.
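The R = tr / Cb rule of thumb above, plus a sink-current check, can be run as a quick sketch. The 1 µs and 400 pF limits are the fast-mode figures quoted in the answer; the 5 V bus supply is an assumption (typical for a PIC16F877A system).

```python
# Upper pull-up limit from the rise-time budget (worst-case bus):
t_rise = 1e-6      # s, max allowed rise time in fast mode (400 kHz)
c_bus = 400e-12    # F, max allowed bus capacitance
r_max = t_rise / c_bus
print(f"max pull-up for worst-case bus: {r_max:.0f} ohm")  # 2500 -> pick 2.7K

# Lower limit: the I2C spec only guarantees devices can sink about 3 mA
# while holding the line at a valid low level. Check a 1K pull-up:
v_dd = 5.0                  # assumed bus supply
i_sink = v_dd / 1000.0      # current through a 1K pull-up when line is low
print(f"sink current with 1K pull-up: {i_sink * 1000:.1f} mA")  # 5 mA, too much
```

So the 1K resistors are questionable from both ends: they buy rise-time margin you probably don't need, while forcing more sink current than the spec guarantees the chips can handle.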
