Can a programmer damage micro-controllers? - avr

Can a microcontroller programmer damage microcontrollers in a way that leaves them unable to be programmed any more? (I have a USB programmer.)
This question came to mind when I found that my newly bought microcontrollers became unprogrammable after I had programmed them a few times. Apart from no longer accepting a new program, they still work correctly, running whatever they were last programmed with.
Thanks for reading.

If your AVR has power issues during programming, its fuse bits can become corrupted. You should make sure your batteries are fully charged (if applicable) and be careful not to disconnect power during programming.
If the fuse bits that select the AVR's clock source get corrupted so that the AVR expects an external clock or crystal, but there is no such clock or crystal in your circuit, then the AVR will have no clock signal and you will be unable to program it.
Luckily, there is actually a way to revive such AVRs: you can get another microcontroller to generate a PWM signal and apply it to the XTAL2 or XTAL1 pin of your AVR as a low-speed clock signal (e.g. 100 kHz). Then use your programmer (configured to use a low enough ISP frequency like 2 kHz) to connect to the AVR and fix its fuse bits so it uses the correct clock source.
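If you don't have a programmer with such a feature, almost any spare microcontroller can act as the clock source. Here is a minimal avr-gcc sketch of the idea (my own assumption, not from any programmer's guide: it presumes a working ATmega328P helper running at 16 MHz, and uses Timer1 in CTC mode to toggle OC1A, i.e. PB1 / Arduino Uno pin 9, at 100 kHz; wire that pin and ground to the dead AVR's XTAL1 and GND):

    /* Rescue-clock sketch: output a ~100 kHz square wave on OC1A (PB1).
       Assumes an ATmega328P helper clocked at 16 MHz. */
    #include <avr/io.h>

    int main(void)
    {
        DDRB  |= (1 << PB1);                  /* OC1A pin as output */
        TCCR1A = (1 << COM1A0);               /* toggle OC1A on compare match */
        TCCR1B = (1 << WGM12) | (1 << CS10);  /* CTC mode, prescaler = 1 */
        OCR1A  = 79;                          /* 16 MHz / (2 * (79 + 1)) = 100 kHz */
        for (;;) { }                          /* free-run forever */
    }

With that clock applied, keep in mind that the ISP clock has to stay well below a quarter of the target's clock rate, which is why a very low ISP frequency like 2 kHz is suggested above.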
The Pololu USB AVR Programmer v2.1 has a feature to generate such clock signals. A procedure for reviving AVRs is documented in the "Using the clock output to revive AVRs" section of that programmer's user's guide. At least one person has successfully revived an AVR using this principle. If you try it, please let me know whether it works for you!
In general, there are lots of other ways for microcontrollers to be damaged or destroyed depending on what you are doing, so you might consider posting the details of your setup to a more AVR-focused forum that allows free-form discussion instead of just a question/answer format.

Related

How a Windows program can transmit input to and get output from an FPGA

I am new to programming and FPGAs. I would like to run a program on my Windows 10 PC, send input to the FPGA, and receive the output back in the same program once processing is done. Is this possible, and how can it be achieved? I need some direction to start finding a way.
Thank you.
I recommend buying a Digilent Arty A7 board. It is low cost and very nice to work with.
To communicate with a PC running Windows you can use the USB-to-UART bridge on that board. However, I think the best and easiest way is to use an IP core that supports Ethernet and TCP/IP. Using TCP/IP is very simple on the PC side with Python, Matlab, Telnet, or any programming tool.
The best IP cores for Xilinx FPGAs that I have found so far are the ones from fpga-cores.com. With those you only have to implement an AXI4-Stream interface to communicate with the client. I don't think it gets easier than that.
That core also includes remote programming of the FPGA over Ethernet and a logic analyzer, all for free.
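To show how little the PC side involves once the FPGA end speaks TCP/IP, here is a minimal client sketch in C using POSIX sockets (the address 192.168.1.10 and port 5000 are placeholders for whatever your Ethernet core is configured to listen on; on Windows the same calls are available through Winsock after WSAStartup, and the Python equivalent is only a handful of lines):

    /* Minimal TCP client: send input bytes to the FPGA, read the result. */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5000);                  /* placeholder port */
        inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }

        unsigned char input[4] = {0x01, 0x02, 0x03, 0x04};
        unsigned char output[4096];

        send(fd, input, sizeof input, 0);               /* data to the FPGA */
        ssize_t n = recv(fd, output, sizeof output, 0); /* processed result */
        printf("received %zd bytes\n", n);

        close(fd);
        return 0;
    }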
Good question. A lot of people ask about data processing on an FPGA but never think about how to get the data to and from it (until it is too late).
The best way is to find an FPGA that also has an SoC, that is: a processor, a DDR interface, and one or more high-speed interfaces such as Ethernet, USB, or PCIe. Make sure they come with complete working example code, often on some RTOS.
Which FPGA to choose depends greatly on what you want it to do. You also need enough programmable gates to implement the function you want.
Nowadays all vendors have free HDL compilers up to a certain FPGA size.
Every FPGA manufacturer also has one or more prototyping boards, but the price of those varies a lot.
If you have some FPGA code capable of very high data throughput, your interface is likely to become the bottleneck.
A PCIe board offers the highest data throughput, but for that you need to have matching drivers on both the FPGA board and the PC. In that case check that it has example drivers for the PC side too.
Yes, I fell into that trap a few years back.

Using a USB cable for random number generation

I have a thought, but am unsure how to execute it. I want to take a somewhat long USB cable and plug both ends into the same machine. Then I would like to send a signal from one end and time how long it takes to reach the other end. I think the signal should arrive at slightly different times, and that variation would give me random numbers.
Can someone suggest a language in which I could do this the quickest? I have zero experience in sending signals over USB and don't know where or how to start. Any help will be greatly appreciated.
I simply want to do this as a fun at-home project, so I don't need anything official; I just would like to see if this idea can work.
EDIT: What if I store the USB cable in liquid nitrogen or a substance just as cold in order to slow down the signal as much as possible (I have access to liquid nitrogen)?
Sorry I can't comment (not enough rep), but the delay should always be the same through the wire. This might limit the true randomness of your numbers. Plus, the actual delay time in the wire might be shorter than even a CPU cycle.
If your operating system is Windows, you may run into this type of issue:
Why are .NET timers limited to 15 ms resolution?
Apparently the minimum timer resolution on Windows is around 15 ms.
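You can see this for yourself by spinning until the coarse OS tick advances and printing the step size. A quick Windows-only sketch in C (GetTickCount is the classic coarse timer and often advances in ~15-16 ms steps; the first reading may be a partial tick, and QueryPerformanceCounter is much finer if you need real resolution):

    /* Measure the granularity of GetTickCount(): typically ~15-16 ms. */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        for (int i = 0; i < 5; i++) {
            DWORD t0 = GetTickCount();
            DWORD t1;
            while ((t1 = GetTickCount()) == t0)
                ;                                /* busy-wait for next tick */
            printf("tick step: %lu ms\n", (unsigned long)(t1 - t0));
        }
        return 0;
    }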
EDIT: In response to your liquid nitrogen edit, according to these graphs, you may have more luck with heat! Interestingly enough...
Temperature vs Conductivity http://www.emeraldinsight.com/content_images/fig/1740240120008.png
I want to take a somewhat long usb cable and plug both ends into the same machine.
Won't work. A USB connection is always host -> device, and a PC can only be a host. Also, the communication uses predictable 1 ms intervals - bad for randomness.
Some newer microcontrollers have both RNG and USB on chip, that way you can make a real USB RNG.
What if I store the usb cable in liquid nitrogen or a substance just as cold in order to slow down the signal
The signal would travel a tiny bit faster, as the resistance of the cable is lower.

Daisy chain programming with PIC Microcontrollers

Is it possible to program multiple PIC microcontrollers using only one PICkit 2 programmer? The microcontrollers are connected in a daisy chain, with the PGC, PGD, and MCLR pins of the PIC to be programmed connected to the GPIO of the programming PIC.
I may be wrong, but I do not think this will work well, as MPLAB X will want to read back the written data to verify the programming operation succeeded.
Alternatively, have you considered using PICkit 3s in their "independent of a computer" mode? A PICkit 3 can be configured to burn a specific program into a target PIC without a computer attached. I wonder if an "army" of these might address your issue.
I don't believe so. Just for fun, after finding this question I took two 12F508s that were known to be good.
To prove that they were good, I used IPE to load a previously tested program onto both devices. The devices worked as expected. I then used IPE's "fill memory" tool to program both devices to all empty (every address holding 0x00), except the oscillator calibration memory location (I've had trouble with that area in the past, so I always disable reading and writing to it).
I then connected both chips up to the programmer in parallel and tried to program them with the same program. This is where everything went horribly awry.
For some reason, the programmer got confused and wrote a value of 0xFF to all addresses, including the out of range addresses. I verified that this was what actually happened by disconnecting the chips from the circuit and reading them independently.
Luckily for me, I had run into this problem repeatedly before, and so had built a programmer out of an Arduino and some extra circuits, which lets me ignore the spurious "oscillator calibration data invalid" error and reprogram that location with the correct instruction. It takes a long time to read and write memory, but it saves otherwise bricked chips.
In short: no, this does not work, and it may actually "brick" your chips.

What are some practical applications of an FPGA?

I'm super excited about my program powering a little seven-segment display, but when I show it off to people not in the field, they always say "well what can you do with it?" I'm never able to give them a concise answer. Can anyone help me out?
First: They don't need to have volatile memory.
Indeed, the big players (Xilinx, Altera) usually keep their configuration on-chip in SRAM, so you need an additional EEPROM/Flash/WhatEver(TM) to store it externally.
But there are others; Actel is one big player that comes to mind, with non-volatile configuration storage on their FPGAs (by the way, this has other advantages too: SRAM is usually not very radiation tolerant, so you have to take special measures when you go into orbit).
There are two big things that justify FPGAs:
Price - They are not cheap. But sometimes you can't do something in software, and you need hardware for it. And when you are below a certain required volume (e.g. because it's just a small series, or a prototype), an FPGA is MUCH cheaper than an ASIC. Also, during ASIC development - before the final state is reached - this allows much faster turnaround.
Reconfiguration - You can reconfigure an FPGA, which a processor or an ASIC can't do. There are several applications for this. For example, when you need the ability to fix something in the design but can't physically get to the device: the Mars orbiters/rovers used Xilinx FPGAs, and if someone finds a mistake (or wants to switch to a different coding for transmitting data, or whatever), you can't replace the hardware, as it is simply not reachable - but with an FPGA you can just reconfigure it and apply your changes. Another scenario is a single chip that performs different accelerations depending on the situation: imagine a smartphone where, while telephoning, the FPGA is configured for audio en-/decoding; while surfing, it works as a compression engine; and while playing videos, it is configured as an H.264 decoder/accelerator. You can also match your hardware to your problem instance. Cisco, for example, uses many FPGAs in their hardware: you need hardware that performs switching/routing/packet inspection at the required speed, and you can generate matching engines directly in hardware from the actual configuration.
Another thing which might come up soon (I know some car manufacturers have thought about it) is devices which include a lot of different electronics and a big supply chain. It's more or less a combination of price and reconfiguration: ten different ASICs may each be cheaper per part than ten identical FPGAs performing the same tasks, but it's cheaper to have FPGAs from just one supplier, holding just one type of chip for service and supply, than ten suppliers with ten different chips to hold and manage.
True story.
They allow you to fix design flaws in the custom data-acquisition boards for a multi-million dollar particle physics experiment that become obvious only after you have everything installed and are doing integration work and detector characterization.
You can evolve circuits. This is a bit old-school evolutionary algorithms, but starting from a set of random individuals you select the circuits that score higher on a fitness function than the rest and breed them to create a new population, ad infinitum. Read up on evolvable hardware; I think this book covers FPGAs: http://www.amazon.co.uk/Introduction-Evolvable-Hardware-Self-Adaptive-Computational/dp/0471719773/ref=sr_1_1?ie=UTF8&qid=1316308403&sr=8-1
Say, for example, you want a DSP circuit: you have an input signal and a desired output signal. Starting with a random population, you select perhaps only the fittest (a bad idea) or perhaps a mixture of fit individuals and odd ones to create the next generation. After a number of generations you can open the lid and, lo and behold, discover that evolution has taken place and you have a circuit that may even outperform your initial expectations!
Also read A Field Guide to Genetic Programming; it's free on the web somewhere.
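To make the select-and-breed loop concrete, here is a toy sketch in C. It is not real evolvable hardware - the "circuit" is just a 16-bit genome scored against a made-up target bit pattern - but the selection/crossover/mutation skeleton is the same one you would wrap around an FPGA-based fitness test:

    /* Toy evolutionary loop: evolve 16-bit genomes toward a made-up target.
       Fitness = number of matching bits. Illustrates selection, crossover,
       and mutation only; real evolvable hardware scores candidates on chip. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define POP    32
    #define TARGET 0xBEEFu                 /* stand-in for desired behaviour */

    static int fitness(unsigned g)
    {
        int score = 0;
        for (unsigned same = ~(g ^ TARGET) & 0xFFFFu; same; same >>= 1)
            score += same & 1u;            /* count matching bits */
        return score;
    }

    static unsigned tournament(const unsigned *pop)
    {
        unsigned a = pop[rand() % POP], b = pop[rand() % POP];
        return fitness(a) > fitness(b) ? a : b;  /* fitter of two picks */
    }

    int main(void)
    {
        unsigned pop[POP], next[POP];
        srand((unsigned)time(NULL));
        for (int i = 0; i < POP; i++)
            pop[i] = rand() & 0xFFFFu;             /* random population */

        for (int gen = 0; gen < 200; gen++) {
            for (int i = 0; i < POP; i++) {
                unsigned mom = tournament(pop), dad = tournament(pop);
                unsigned mask = (1u << (rand() % 16)) - 1u;
                unsigned child = (mom & mask) | (dad & ~mask);  /* crossover */
                if (rand() % 10 == 0)
                    child ^= 1u << (rand() % 16);               /* mutation */
                next[i] = child;
            }
            for (int i = 0; i < POP; i++)
                pop[i] = next[i];
        }

        int best = 0;
        for (int i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(pop[best]))
                best = i;
        printf("best genome 0x%04X, fitness %d/16\n",
               pop[best], fitness(pop[best]));
        return 0;
    }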
There are limitations to software. In software, you run at the CPU's clock rate, executing roughly one instruction per clock cycle. In software, everything is high level; you do not control the details that happen at the low level. You'll always be limited by the operating system or development board you are programming. This is true for popular development boards such as the Arduino and Raspberry Pi.
In FPGA hardware, you can precisely program and control what happens on every clock cycle, so your computation runs at the speed of the underlying electrical signals.
So an FPGA gives you hardware running at signal speed, which is much better than a CPU giving you software at roughly one instruction per clock cycle.
So why use an FPGA when we can design our own boards at the printed-circuit-board, transistor level?
Because FPGAs are programmable hardware! An FPGA is built so that you can program its connections instead of wiring up a board for one specific application. This also explains why FPGAs are expensive: they are a sort of 'general', programmable hardware.
As for why you should pick FPGAs despite their cost, programmable hardware allows:
A longer product cycle (you can update the programmable hardware in products already in your customers' hands simply by letting them program your updated HDL code into their FPGA)
Recovery from hardware bugs: you simply let customers download the corrected design onto their FPGA. (Note: you cannot do this with fixed hardware designs, where you would have to spend millions recalling your products, building new ones, and shipping them back to customers.)
For examples of the cool things an FPGA can do, refer to Cornell's well-known ECE 5760 course.
http://people.ece.cornell.edu/land/courses/ece5760/FinalProjects/
Hope this helps!
Soon Chee Loong,
University of Toronto
FPGAs are also used to test/research circuit designs before mass production starts. This is happening in several sectors: image processing, signal processing, etc.
Edit - after a few years we can now see more practical applications, including finance and machine learning:
aerospace
emulation
automotive
broadcast
high performance computers
medical
machine learning
finance (including cryptocoins)
I like this article: http://www.hpcwire.com/hpcwire/2011-07-13/jp_morgan_buys_into_fpga_supercomputing.html
My feeling is that FPGAs can sit directly in your streaming data at the point where it enters the systems under your control. You can then crunch that data without going through the steps a GPGPU would require (bringing the data in off the network, passing it across the PCI Express bus, and crunching it a Gb at a time).
There are good reasons for both, but I think the notion of whether you mind buffering the data is a good bellwether.
Here's another cool FPGA application:
https://ehsm.eu/m-labs.hk/m1.html
Automotive image processing is one interesting domain:
Providing lane-keeping support to the driver (disclosure: I wrote this page!):
http://www.conekt.co.uk/capabilities/50-fpga-for-ldw
Providing an aerial view of a car from 4 fisheye-lens cameras (with video):
http://www.logicbricks.com/Solutions/Surround-View-DA-System/Xylon-Test-Vehicle.aspx

How to convert 24 MHz and 12 MHz clocks to an 8 MHz clock using VHDL?

I am writing VHDL code to derive an 8 MHz clock from 24 MHz and 12 MHz clocks. Can anyone please help me with this? Thanks in advance.
Is this for an FPGA, or something else? Are you really dividing a clock, or just a signal? For a divide-by-three counter, try this link:
http://www.asic-world.com/examples/vhdl/divide_by_3.html
And for a 2/3:
http://www.edaboard.com/thread42620.html
As Martin has already said, use a clock management block, per Xilinx's recommendations, to divide your clock down to a lower rate.
While you might be tempted to implement a clock divider using logic and a counter, you will not obtain good synthesis results.
Here are some tips:
Be sure to closely read and follow recommendations for the clock management hardware for your device. There can be quite a few "gotchas" related to power-up, reset, loss of clock lock, etc.
Make sure that you are operating the clock management device within its specifications. See your device's datasheet for more information (in this case for the S3-A).
Use FPGA Editor to verify correct placement and configuration of your clock management units (i.e. did it end up in the right spot on the chip).
Adhere to recommended practices for feedback clocks, and clock buffering.
Use a DCM or PLL (depending on the FPGA family) - there are examples in the documentation. If you tell us which family, I might be able to point you more directly.
EDIT:
As you say it's a Spartan-3A DSP, you need to either:
Use the Core Generator Clocking Wizard to create a VHDL or Verilog file with the components you need, and hope you never need to understand what's going on, or
Read the libraries guide and the DCM section of the user guide for that chip, instantiate a DCM yourself, and apply the correct generics/parameters to it.
Don't forget to apply a reset pulse to the DCM after configuration has finished, and make sure that pulse lasts long enough. The minimum pulse length is different for each family; I don't recall off the top of my head what it is for that chip, so check the datasheet.
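For reference, the DCM feature you would use here is the frequency synthesizer output (CLKFX), which scales the input clock by two integer generics. The multiply/divide values below are simply the obvious choices that give 8 MHz; check the Spartan-3A DSP documentation for the legal ranges of CLKFX_MULTIPLY and CLKFX_DIVIDE:

    f_CLKFX = f_CLKIN * CLKFX_MULTIPLY / CLKFX_DIVIDE
    24 MHz  * 2 / 6   = 8 MHz
    12 MHz  * 2 / 3   = 8 MHz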
