How to connect the wires of a 42BYGH403 stepper motor to the DRV8825 driver pins - arduino-uno

I am new to stepper motors and their drivers. I bought a 42BYGH403 stepper motor with four leads: yellow, red, green and blue.
I want to connect these wires to the corresponding pins on a DRV8825 driver,
but I don't know which color (lead) should be connected to which pin: A1, A2, B1 or B2.

Well, one option is to try different combinations and see what happens.
On the motor driver's datasheet there should be a diagram, similar to the pinout in the question, with the wires from A1, A2, etc. going to the motor, and it should show the wire colors, or at least reference phase A and phase B. (From what I recall, red and blue go together on a similar motor, but check that.) As long as the wires are paired into coils the same way as in the datasheet, the motor should turn.
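Once the coils are paired correctly, a quick way to verify the wiring is a bare-bones STEP/DIR test sketch like the one below (the pin numbers, step timing and 200 steps per revolution are assumptions; adjust them to your wiring and motor):

// Minimal DRV8825 test: spin one revolution forward, then one in reverse.
// Assumed wiring: DIR on D2, STEP on D3, driver enabled, mode pins left low (full-step).
const int DIR_PIN  = 2;
const int STEP_PIN = 3;

void setup() {
  pinMode(DIR_PIN, OUTPUT);
  pinMode(STEP_PIN, OUTPUT);
}

void stepMany(int steps) {
  for (int i = 0; i < steps; i++) {
    digitalWrite(STEP_PIN, HIGH);   // DRV8825 steps on the rising edge
    delayMicroseconds(800);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(800);
  }
}

void loop() {
  digitalWrite(DIR_PIN, HIGH);      // one direction
  stepMany(200);                    // 200 full steps = one turn for a 1.8 degree motor
  delay(500);
  digitalWrite(DIR_PIN, LOW);       // reverse
  stepMany(200);
  delay(500);
}

If the motor only vibrates or twitches instead of turning, one coil is most likely split across the A and B outputs, so swap one pair of wires and try again.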

Related

Line follower bot location

I am working on a line-follower bot that travels on a map consisting of nodes. My confusion is how to let the bot know which node it is standing at; in other words, what approach should be taken to feed the map to the bot so that it knows every node of the map and also knows which node it is at, at the present time.
I have searched the internet a lot, but nothing I found seems to fit.
Line followers usually do not have any map. Instead they usually have a pair of front sensors pointing downwards (usually IR photodiodes and LEDs) which detect the line crossing from the left or right side, and the robot just turns toward the line.
It is usually done by controlling the speed of the left and right motors with the brightness of the light detected by the right and left sensors (often without any MCU or CPU; the analog version uses just two comparators and a power amplifier to drive the motors, which results in much smoother movement instead of a zig-zag pattern).
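A minimal digital version of that proportional scheme might look like the sketch below (the pin assignments, a bright reflective line on a darker floor, and a simple PWM motor driver are all assumptions):

// Proportional line following: each motor's speed follows the opposite sensor's brightness,
// as described above. Assumes a bright (reflective) line on a darker floor with both
// sensors normally over the line; swap the mapping for a dark line on a light floor.
const int LEFT_SENSOR  = A0;  // IR photodiode, left
const int RIGHT_SENSOR = A1;  // IR photodiode, right
const int LEFT_MOTOR   = 5;   // PWM pin driving the left motor
const int RIGHT_MOTOR  = 6;   // PWM pin driving the right motor

void setup() {
  pinMode(LEFT_MOTOR, OUTPUT);
  pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
  int left  = analogRead(LEFT_SENSOR);    // 0..1023, higher = more reflected light
  int right = analogRead(RIGHT_SENSOR);
  analogWrite(LEFT_MOTOR,  map(right, 0, 1023, 0, 255));  // right sensor drives left motor
  analogWrite(RIGHT_MOTOR, map(left,  0, 1023, 0, 255));  // left sensor drives right motor
}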
Better bots also have built-in algorithms to search for the line if it has gaps (that usually requires a CPU or MCU).
If you insist on having a map, then you need an interface to copy it in (ISP, for example). However, to detect where it is, the robot has to actually follow the line while remembering its trajectory, and compare that against the map until the detected trajectory corresponds to only one location and orientation in the map. You will just end up with a more complex and less reliable robot that has more or less the same, or worse, properties than a simple line follower.
Another option is to use a positioning system: either there is a positioning system built into the maze or map (it can be markers, transponders or whatever), or you place your robot at a predetermined position and orientation and hit a reset button, or you use accelerometers and gyros to integrate the position over time. However, as mentioned, I see no benefit in any of this for a line follower. This kind of thing is better suited to unknown-maze-solving robots (they usually use sonar, or also an IR photodiode+LED, but oriented forward and to the sides instead of downwards).

Keep ESP32 non-RTC GPIO pin state HIGH in deep sleep

I'm using the Arduino IDE, and I'm trying to keep a pin's state held (HIGH, in my case).
I have GPIO16, GPIO17 and GPIO18 (which I believe are all non-RTC GPIO pins). They're connected to three P-channel MOSFETs, which power an RGB LED, so my three pins need to be held HIGH while sleeping.
When my ESP32 goes into deep sleep, the RGB LED slowly fades up to full white brightness 😅
After searching around, the method posted elsewhere doesn't work as expected. I've tried the code below, but no luck.
Arduino IDE:
gpio_hold_en((gpio_num_t) 16);   // latch the current output state of each pin
gpio_hold_en((gpio_num_t) 17);
gpio_hold_en((gpio_num_t) 18);
gpio_deep_sleep_hold_en();       // keep the holds active through deep sleep
delay(10000);
esp_deep_sleep_start();
Strangely though, now the blue LED (on GPIO18) doesn't come on; just the other two (16 and 17) snap to full brightness as soon as I call deep sleep.
Does it matter that they're GPIO vs RTC GPIO?
Is it possible to hold them HIGH during deep sleep? I've also tried INPUT_PULLUP, but no luck either.
They're SMD-soldered on custom PCBs, so I'm trying to solve this in software before I consider swapping to N-channel MOSFETs or just making new boards entirely.

[dji-sdk][onboard-sdk] Losing GPS while in Onboard control flight mode

How does a DJI UAV (A3 or M600) behave if the GPS signal is completely lost during flight while the setpoint is given as a horizontal command in the ground_ENU frame?
According to this appendix:
Only when the GPS signal is good (health_flag >= 3), horizontal position control (HORI_POS) related control modes can be used.
Only when the GPS signal is good (health_flag >= 3), or when the Guidance system is working properly with the Autopilot, horizontal velocity control (HORI_VEL) related control modes can be used.
Will the DJI switch to Attitude flight mode?
Will you still have control authority through the Onboard SDK? And if yes, does that mean you could control it only via the HORI_ATTI_TILT_ANG mode?
Thanks!
I never tested the full case of hooking up the DJI A3 to the OSDK and letting it crash.
What I tested was a ground setup with the A3, with the ESCs and motors turned off. I ran the mission and plotted the GPS, but I mounted many other sensors to get the correct position/velocity command.
When next to a building, the GPS solution drifted into the building, and the DJI GPS mission control followed it (the GPS position from the DJI SDK is the green line in my plot). I had to use auxiliary vision- and lidar-based navigation to get to the correct position.
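To make the rule quoted from the appendix concrete, the onboard control loop essentially needs a guard like the sketch below; the setpoint functions are hypothetical placeholders, not actual Onboard SDK calls, and the fallback behaviour is only what the appendix implies:

// Hypothetical fallback: HORI_POS / HORI_VEL setpoints require GPS health_flag >= 3,
// so when GPS degrades, only attitude-style commands (e.g. HORI_ATTI_TILT_ANG) make sense.
#include <cstdint>

// Placeholder stand-ins for the real Onboard SDK control calls.
void sendVelocitySetpoint(float vx, float vy, float vz, float yawRate) { /* ... */ }
void sendAttitudeSetpoint(float roll, float pitch, float vz, float yawRate) { /* ... */ }

constexpr std::uint8_t kMinGpsHealth = 3;   // threshold quoted in the appendix

void sendSetpoint(std::uint8_t gpsHealthFlag,
                  float vx, float vy, float vz, float yawRate) {
    if (gpsHealthFlag >= kMinGpsHealth) {
        // GPS healthy: ground-frame horizontal velocity/position commands are allowed.
        sendVelocitySetpoint(vx, vy, vz, yawRate);
    } else {
        // GPS lost or degraded: fall back to commanding zero tilt, keeping climb rate and yaw.
        sendAttitudeSetpoint(0.0f, 0.0f, vz, yawRate);
    }
}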

How to map a house layout, room by room, to be used for simple room-to-room navigation by a robot?

I am planning a robot: basically an Arduino coupled with a webcam and an RC car, to navigate from one point in the house to another using a map of the house layout, made possible by a webcam tour of the place.
It should receive a command about where to go based on input from my smartphone or PC. Each room will have an ID code which the robot should use to determine the travel path.
Also, it should be able to come to the room where I am, by locating me using Bluetooth or Wi-Fi.
Sensors: Proximity sensors and light sensors
I live in the house, so that is not an issue.
Any ideas on where I can start?
I participated in a similar project; it will be more difficult than you think right now.
We used Bluetooth beacons. Fix their positions, and you can then measure their signal strength with the robot. Since you know the positions of the beacons (they are fixed), you can calculate where the robot actually is. But they are very inaccurate, and it takes a couple of seconds to scan all the beacons.
If you want to navigate through your house, I think the easiest way is to plant the beacons, go around the house with the robot and measure the signals (the more, the better). This way you can create a discrete layout of your house. In my opinion, the easiest way to store the map is to represent the layout as a graph. The nodes are the discrete points you measured, and there is an edge between two nodes if the robot can travel between them in "one step". This way you can also represent temporary obstacles, for example by deleting an edge. And the robot can easily determine which way to go: just use Dijkstra's algorithm, as in the sketch below.
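A minimal sketch of that graph-plus-Dijkstra idea (the room names and edge costs are made up for illustration):

// Rooms (or measured points) as graph nodes, traversable links as weighted edges;
// Dijkstra's algorithm picks the cheapest route between two rooms.
#include <iostream>
#include <limits>
#include <map>
#include <queue>
#include <string>
#include <vector>

using Graph = std::map<std::string, std::vector<std::pair<std::string, double>>>;

std::vector<std::string> shortestPath(const Graph& g, const std::string& start,
                                      const std::string& goal) {
    std::map<std::string, double> dist;
    std::map<std::string, std::string> prev;
    for (const auto& node : g) dist[node.first] = std::numeric_limits<double>::infinity();
    dist[start] = 0.0;

    using Item = std::pair<double, std::string>;   // (distance so far, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    pq.push({0.0, start});

    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;                 // stale queue entry
        for (const auto& [v, w] : g.at(u)) {
            if (d + w < dist[v]) {                 // found a cheaper way to v
                dist[v] = d + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
        }
    }

    std::vector<std::string> path;                 // walk the predecessors back to the start
    for (std::string at = goal; !at.empty(); at = prev.count(at) ? prev[at] : "")
        path.insert(path.begin(), at);
    return path;
}

int main() {
    // Hypothetical layout: edge weights are rough travel costs between rooms.
    Graph house = {
        {"hall",    {{"kitchen", 3}, {"living", 2}}},
        {"kitchen", {{"hall", 3}, {"living", 4}}},
        {"living",  {{"hall", 2}, {"kitchen", 4}, {"bedroom", 5}}},
        {"bedroom", {{"living", 5}}},
    };
    for (const auto& room : shortestPath(house, "hall", "bedroom"))
        std::cout << room << '\n';                 // prints: hall, living, bedroom
}

Deleting an edge (a blocked doorway, a closed door) and re-running the search gives the detour automatically.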

Tracking multi-touch movements inside the frame with transmitters and receivers

The problem is tracking multi-touch input (at least two simultaneous finger touches) on the frame device described below.
White circles are LEDs and black circles are receivers. When the user moves fingers inside this frame, we can analyze which receivers received light from the LEDs and which did not. Based on that, we need to somehow track the movements of the fingers.
The first problem is that we have separate x and y coordinates. What is an effective way to combine them?
The second problem concerns analyzing the coordinates when two fingers are close to each other. How can we distinguish between them?
I found that k-means clustering can be useful here. What other algorithms should I look at more carefully to handle this task?
As you point out in your diagram, with two fingers different finger positions can give the same sensor readings, so you may have some irreducible uncertainty, unless you find some clever way to use previous history or something.
Do you actually need to know the position of each finger? Is this the right abstraction for this situation? Perhaps you could get a reasonable user interface if you limited yourself to one finger for precise pointing, and recognised e.g. gesture commands by some means that did not use an intermediate representation of finger positions. Can you find gestures that can be easily distinguished from each other given the raw sensor readings?
I suppose the stereotypical computer science approach to this would be to collect the sensor readings from different gestures, throw them at some sort of machine learning box, and hope for the best. You might also try drawing graphs of how the sensor readings change over time for the different gestures and looking at them to see if anything obvious stands out. If you do want to try out machine learning algorithms, http://www.cs.waikato.ac.nz/ml/weka/ might be a good start.
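For the k-means idea mentioned in the question, a minimal sketch could look like this; the candidate points are assumed to come from pairing each blocked x beam with each blocked y beam, and everything here (coordinates, k = 2, naive seeding) is illustrative:

// Tiny k-means (k = 2) over candidate touch points built from blocked x/y beams.
#include <cstddef>
#include <iostream>
#include <vector>

struct Point { double x, y; };

double dist2(const Point& a, const Point& b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Cluster the candidate points into k centroids (the estimated finger positions).
std::vector<Point> kMeans(const std::vector<Point>& pts, int k, int iters = 20) {
    std::vector<Point> centroids(pts.begin(), pts.begin() + k);   // naive seeding
    std::vector<int> label(pts.size(), 0);
    for (int it = 0; it < iters; ++it) {
        for (std::size_t i = 0; i < pts.size(); ++i) {            // assignment step
            int best = 0;
            for (int c = 1; c < k; ++c)
                if (dist2(pts[i], centroids[c]) < dist2(pts[i], centroids[best])) best = c;
            label[i] = best;
        }
        for (int c = 0; c < k; ++c) {                             // update step
            Point sum{0.0, 0.0}; int n = 0;
            for (std::size_t i = 0; i < pts.size(); ++i)
                if (label[i] == c) { sum.x += pts[i].x; sum.y += pts[i].y; ++n; }
            if (n > 0) centroids[c] = {sum.x / n, sum.y / n};
        }
    }
    return centroids;
}

int main() {
    // Candidate points from one frame: two groups of beam intersections plus jitter.
    std::vector<Point> candidates = {{2.0, 3.0}, {2.2, 3.1}, {7.0, 9.0}, {6.9, 8.8}, {7.1, 9.2}};
    for (const auto& c : kMeans(candidates, 2))
        std::cout << "finger near (" << c.x << ", " << c.y << ")\n";
}

Note that this does nothing about the ambiguity mentioned above: when two fingers share an x or y beam, the candidate set itself is ambiguous, so tracking over time (for example, seeding each frame's centroids with the previous frame's result) would still be needed.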
