arduino - servo stuttering - used to work fine - arduino-uno

I am using an Arduino Uno to control two ANNIMOS 20KG digital servos (DS3218MG).
Each servo has performed roughly 100,000 movements.
For some unknown reason, after sitting inactive for several weeks, one of the servos started moving in slow motion and stuttering while traveling from A to B or B to A.
At first I thought it was a power supply issue so I replaced the power supply. Nope. Then I soldered all of my connections. Nope.
Then I thought the Arduino was confused and I replaced that. Nope.
Then I replaced the servo with a new one. That fixed it.
The servo that failed has to apply more force than the one that did not. I don't know how much more force, but some, and I wouldn't think it is more than the servo can handle. It doesn't seem to strain at all when it is working properly.
My question is: do these servos have a limited number of operations they can perform? Why would sitting around doing nothing cause this problem?
As a result of this experience, I ordered several more of these servos so that I have spares on hand. But it would be good to know what is causing this so I can either fix the issue or plan on the number of spares I need.

Related

Golang GPIO Edge Detection with rpio library

I have recently started experimenting at the intersection of software and hardware by playing with the GPIO pins on a Raspberry Pi, with software written in Go. Most of it is working out well so far, but there's one thing I can't really wrap my head around yet, and that's how to efficiently do edge detection on an input pin and trigger code accordingly.
I am using the following library (rpio) as an abstraction over the raw GPIO layer; it contains functions for starting to watch for edge events and for checking whether an edge event occurred. Other libraries seem to employ similar patterns. I can't imagine it's actually intended to be completely poll-based, though. Granted, I'm new to this, but I'm not new to software development, and polling for a change seems both inefficient and slow. I could write a simple loop polling for the change maybe a hundred times per second, but it feels wrong, especially when watching more than one pin.
Therefore I am fairly confident I'm missing or misreading something, and I wonder if you can point me in a better direction.
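For what it's worth, the completely poll-based pattern described above really does boil down to remembering the previous pin level and reacting when it changes. Below is a minimal sketch of that pattern, written in C++ purely for illustration; it does not use the rpio API, and the readPin() stub, the pin number, and the 10 ms poll interval are all hypothetical placeholders.

    #include <chrono>
    #include <iostream>
    #include <thread>

    // Hypothetical stand-in for whatever call reads the GPIO level (0 or 1);
    // replace it with the real GPIO read for your platform.
    int readPin(int /*pin*/) { return 0; }

    int main() {
        const int pin = 17;               // hypothetical pin number
        int previous = readPin(pin);
        while (true) {
            int current = readPin(pin);
            if (current != previous) {
                if (current == 1) {
                    std::cout << "rising edge\n";   // run the rising-edge handler here
                } else {
                    std::cout << "falling edge\n";  // run the falling-edge handler here
                }
                previous = current;
            }
            // Poll roughly 100 times per second, as the question describes.
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }

The usual non-polling alternative on Linux is to let the kernel do the edge detection (for example via the GPIO character device / libgpiod events, or the older sysfs edge interface) and block until an event arrives, if the library you use exposes that.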

How to add a time tracking feature to an iBeacon project?

How do you add a time-tracking feature to an iBeacon project? Additionally, are there any projects with available code that have already implemented this?
I want to implement time tracking for a small project. I want to place a beacon scanner on my door and check at what time I leave my house and when I arrive back at my house.
This is a little more complex than it sounds. The two simplest approaches do not work:
1. Every time your phone detects the beacon, assume you are going out or coming back.
2. Every time your phone stops detecting the beacon, assume you are gone.
Approach #1 only works in a very large building where you are always outside radio range unless you are at the door. That is almost never true, so it fails. It also fails if you simply go to the door without exiting or entering.
Approach #2 is more reliable, but only works in a very small building where the beacon is always within radio range. This is unlikely in anything larger than a single room. You can improve this by deploying many beacons to ensure coverage.
No solution will ever be perfect. But you can combine the two approaches to come up with a reasonable guess of when the phone entered and exited the building, based on the time of last detection and how long someone could plausibly be “inside” the building without the beacon being detected. The best algorithm depends on the specifics of the building and the use case.
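As a rough illustration of that combined heuristic, here is a small sketch in C++; the class, the absence window, and where the calls come from are all hypothetical, and on iOS you would drive it from your beacon ranging/monitoring callbacks.

    #include <chrono>
    #include <optional>

    using Clock = std::chrono::steady_clock;

    // Hypothetical tracker implementing "last detection + absence window".
    class PresenceTracker {
    public:
        // absenceWindow: how long someone could plausibly be inside the building
        // without the beacon being detected (tune this to the building).
        explicit PresenceTracker(std::chrono::minutes absenceWindow)
            : absenceWindow_(absenceWindow) {}

        // Call this every time the phone detects the beacon.
        void onBeaconDetected() { lastDetection_ = Clock::now(); }

        // Call this periodically; returns true once the beacon has been silent
        // longer than the absence window, i.e. the person has probably left.
        bool probablyLeft() const {
            if (!lastDetection_) return false;   // beacon never seen yet
            return Clock::now() - *lastDetection_ > absenceWindow_;
        }

    private:
        std::chrono::minutes absenceWindow_;
        std::optional<Clock::time_point> lastDetection_;
    };

    int main() {
        PresenceTracker tracker(std::chrono::minutes(10));  // 10 minutes is a guess
        tracker.onBeaconDetected();   // would be called from the detection callback
        bool left = tracker.probablyLeft();
        (void)left;
    }

Under this heuristic, the estimated exit time is the last detection timestamp before a gap longer than the window, and the estimated arrival time is the first detection after such a gap.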

Arduino Uno: Running multiple servos

I have an Arduino Uno, and I am pretty new to the Arduino stuff. I am new to circuits also. I am thinking about working on a simple spider robot and making it more complex as I learn about the Arduino. Anyway, yesterday I tried seeing if I could run 10 servo motors (small ones) with the Arduino. I linked all the positive wires in parallel and connected them to the 5V pin on the board. I observed that not all the servos moved the way the code says. I looked it up and found out I can't do that or else I might fry the board. I did not fry it, thank goodness. I then found out that I have to have a separate power supply, connect the servos' red wires (positive) to the power supply's positive wire, and connect the ground wires to ground on the Arduino. I found this picture showing this.
https://www.google.com/search?q=arduino+uno+connecting+multiple+servos&biw=1920&bih=974&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjU5r6ovv7RAhVp5oMKHV_cDqAQ_AUIBigB#imgrc=DmQfCRVK9SXwAM:
I also saw that in another forum someone said something about using a capacitor. So I still have some questions. My first question is: do I have to have a capacitor, or can I just do what the picture in the link I gave shows? It shows the power supply being a 6 V NiMH 2800 mAh battery. I looked online but could not find that exact same battery with a charger; however, I found this on eBay.
http://www.ebay.com/itm/121963271143
So I thought that if I connect 5 of the batteries in series, the capacity stays at 2800 mAh and the voltage adds up to 6 V, the exact specs of the battery in the picture. So my second question is: will this work? I also have this battery holder from the RadioShack near my house.
https://www.radioshack.com/products/radioshack-8-aa-battery-holder#
There is a problem with this one, though: it holds 8 batteries, not 5. So my third question is: will it still work with only 5 batteries in it? The fourth question is: how do I connect the wires to this holder, since it has no wires coming off of it? My fifth question is: are these battery packs connected in series, since they add the volts? Thank you for reading this and taking the time to clear up my confusion. I will restate the questions below.
1. Do I have to have a capacitor, or can I just do what the picture in the first link I gave shows?
2. Will connecting 5 of the batteries in series work, so that the capacity stays at 2800 mAh and the voltage adds to 6 V, the exact specs of the battery in the picture?
3. Will it still work with only 5 batteries in the 8-battery holder?
4. How do I connect the wires to this battery holder, since it has no wires coming off of it?
5. Are these battery packs connected in series, since they add the volts?
Again, thank you for reading this long forum question and thank you for your time.
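On the code side, driving several servos from an Uno is straightforward with the bundled Servo library; the questions above are really about power wiring. A minimal sketch along the lines of the linked picture might look like the following, where the pin numbers and angles are just examples and the servos are powered from the external 6 V pack with all grounds tied together.

    #include <Servo.h>

    // Example pin assignments; the standard Servo library can drive up to 12
    // servos on an Uno.
    const int servoPins[] = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
    const int numServos = sizeof(servoPins) / sizeof(servoPins[0]);

    Servo servos[numServos];

    void setup() {
      for (int i = 0; i < numServos; i++) {
        servos[i].attach(servoPins[i]);   // signal wire goes to this Arduino pin
      }
    }

    void loop() {
      // Sweep every servo together. The servo power comes from the external
      // 6 V pack (NOT the Arduino 5 V pin), with all grounds connected together.
      for (int angle = 0; angle <= 180; angle += 10) {
        for (int i = 0; i < numServos; i++) {
          servos[i].write(angle);
        }
        delay(200);
      }
    }

As for the capacitor: the usual suggestion is a large electrolytic capacitor (hundreds to a few thousand microfarads) across the servo supply to smooth the current spikes when the servos start moving. It is not strictly required, but it helps keep the supply stable.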

How do I debug Verilog code where simulation functions as intended, but implementation doesn't?

I'm a bit stumped.
I have a fairly large Verilog module that I've tested in simulation (iSim), and it functions as I want. Now I've hooked it up in real life to another device using SPI, and some stuff works and some stuff doesn't.
For example,
I can send a value using command A and verify that the right value was received using command B. That works, no problem.
But if I send a value using command C, I cannot verify that it was received using command D. In simulation it works fine, so I feel I can't really gain anything more from simulating.
I have looked at the signals on a logic analyzer, and the controller device (not my design) sends the right messages. When I issue command B, I can see the correct return values from my device (so I know SPI works). I don't know whether C or D works correctly; D just returns 0s, so maybe C didn't work in the first place. There is no way to step through Verilog, and this module is packaged as IP for Vivado.
Here are two screenshots. The first is the simulation (I send 5, then 2, then I expect it to return 4 on the next send, which it does, followed by zeros).
Here is what I get in reality (the first two bytes don't matter; the 5 is left over from the previously sent value):
Here is a command (B) that works and returns a correct value (it responds to the 0x01 being sent):
Does anyone have any advice for debugging this? I have literally no idea how to proceed.
I can't really reproduce this behaviour in simulation.
Since you are synthesizing to an FPGA, you have a few more options for debugging your synthesized, on-chip design. As you are using Vivado, you can use ChipScope to look at any signal in your system, allowing you to view a waveform of that signal over time just as you would in simulation (though more restricted). By including the ChipScope IPs in your synthesis, you can send waveform data back to the Vivado software, which will display a waveform of your selected signals to help you see what's going on inside the FPGA as the system runs. (Note: if you were using Altera's stuff, you could use their equivalent, called SignalTap; it's pretty much the same thing.)
There are numerous tutorials online on how to incorporate and run ChipScope; here's one from the Xilinx website:
http://www.xilinx.com/support/documentation/sw_manuals/xilinx2012_4/ug936-vivado-tutorial-programming-debugging.pdf
Many others use ISE, but the steps are very similar, as both typically involve using the coregen tool (though I think you can also add ChipScope via the synthesis flow, so there are multiple options for incorporating it into your design).
Once on the FPGA, you have access to what is effectively an internal logic analyzer. Note that it does take up some LEs on the FPGA and can take up a fair amount of block RAM, depending on how many samples of your signals you want to capture.
Tim's answer provides a good description of how to deal with on-chip debugging if you are designing purely for ASIC; see his answer if you want more information about standard, non-FPGA debugging solutions.
In cases like this you might want to think about adding additional logic that is used just for debugging. "Design for debug" is the common term for this kind of logic.
So you have one chip interface (SPI), which you don't know if it works correctly. Since it seems not to be working, you can't trust debugging over this interface, because if you get an odd result you can't determine what it means.
Since you're working on an FPGA, are there any other interfaces other than SPI which you can get working correctly? Maybe 7-segment display, LEDs, JTAG, VGA, etc?
Try to think of other creative ways to get data out of your chip that don't require the SPI interface.
If you have 4 LEDs, A through D, can you light up each LED for 1 second each time a command of that type is received?
Can you have a 7-seg display the current state of your SPI receiver's state machine, or have it indicate certain error codes if some unknown command is received?
Can you draw over VGA to a monitor a binary sequence of the incoming SPI bitstream?
Once you can start narrowing down, with data, what is actually happening inside your hardware, you can narrow the problem space and go inspect for possible problems.
There are multiple reasons why code that runs OK in RTL simulation behaves differently in the FPGA, and it is important to consider all the possibilities. ChipScope, suggested above, is definitely a step in the right direction and could give you a hint about where to look further. These reasons include:
The FPGA implementation flow was not executed properly. Did you have the right timing constraints, were they met during implementation (especially the P&R phase), and are the pin placements, I/O properties, and clock properties right? Usually you can find hints by inspecting the FPGA implementation reports. This is a tedious part, but sometimes needed. An incorrect implementation flow can also result in FPGA implementations that work or don't depending on the run or on small unrelated changes (I have seen this problem many times!).
RTL/netlist discrepancies, e.g. due to incorrect use of `ifdef within the design or during the synthesis phase, selecting an incorrect file for synthesis, or the same Verilog module being defined in multiple places. Often the hint can be found by inspecting the removed-flop list or the synthesis warnings.
Discrepancies between the RTL simulation and the board environment. These can be external, like clock/data alignment on the interface, but also internal: improper CDC, or not handling clock- or reset-tree delays properly. Note that X-propagation and CDC are not handled accurately in RTL simulation unless you code in a certain way; problems with those can often only be seen in a netlist simulation environment.
Lastly, FPGA board problems, like a faulty clock source, a bad power supply, or heat, can also be at fault. They are worth checking, but I'd leave them as a last resort. Some folks have a dedicated board/FPGA test design, proven to work on a known-good board, that would catch some of these problems.
As a final note, the biggest return comes from investing in the simulation environment. Some folks think that since an FPGA can be debugged with ChipScope and reprogrammed quickly, there is no need for a good simulation environment. It probably depends on the size of the project, but in my experience, for most modern FPGA projects a good simulation environment saves a lot of the time otherwise spent in the lab looking through ChipScope and logic analyzer captures.

Theoretically knowing how powerful a microcontroller you need to run your program?

With the vast array of microcontrollers out there, and even different levels of Arduinos each providing more power than the last, is there a mathematical way, or some way of knowing just by analysis, how much processing power you need to run your program as designed, so you can choose the right micro?
Without just trial and error, i.e. without just trying it and, if it is too slow, buying the next chip up.
I've had to do performance projections for computer systems that did not exist yet. Things like cycle time ratios can only give a very rough guide. Generally, I had to resort to simulation, the nearest I could get to measuring on actual hardware.
That said, you may be able to find numbers for benchmarks similar to your code that will at least give you a starting point.
I would not do it by working up one chip at a time - your code may have a problem that makes it too slow for any feasible chip. I would try to find a chip that is fast enough, and work down if it is much faster than needed.
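If the program, or at least its inner loop, already runs on some Arduino-class board, one practical complement to published benchmarks is to time the critical code yourself and compare it against your deadline. A minimal sketch, assuming the hot path is wrapped in a hypothetical doWork() function and using the Arduino micros() timer:

    // Hypothetical stand-in for the time-critical part of your program.
    void doWork() {
      // ... the code whose speed you care about ...
    }

    void setup() {
      Serial.begin(115200);

      const unsigned long runs = 1000;
      unsigned long start = micros();
      for (unsigned long i = 0; i < runs; i++) {
        doWork();
      }
      unsigned long elapsed = micros() - start;

      Serial.print("average microseconds per call: ");
      Serial.println(elapsed / runs);
      // Compare this against your timing budget; if it is close, plan for headroom.
    }

    void loop() {}

Scaling the measured time by the clock-speed ratio of a candidate chip gives only a rough projection, as the answer notes, because memory, peripherals, and instruction sets differ.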
