What I'm trying to achieve is to switch on and off specific I/Os via gcode call.
In some motion controllers I have worked with before, that could be done with an M-command (e.g. M10Pn, where n indicates the pin address).
For example:
Move to position X10 Y10
Turn on cooling
Move to position X0 Y0
Turn off cooling
Assuming that the cooling is linked to a dedicated ethercat module, how do I address it?
I browsed the Beckhoff Infosys, and it looks like M commands are the starting point, but I'm not sure how to program them.
Does anyone know where to find an example?
I'm working on a guitar effects "pedal" using the Nexys A7 board.
For this purpose, I've purchased the I2S2 PMOD and successfully got it up and running using the example code provided by Digilent.
Currently, the design is a "pass-through", meaning that audio comes into the FPGA and immediately out.
I'm wondering what would be the correct way to store the data, make some DSP on this data to create the effects, and then transmit the modified data back to the I2S2 PMOD.
Maybe it's unnecessary to store the data?
Maybe I can pass it through an RTL block that's responsible for applying the effect and then simply transmit the modified data out?
Collated from comments and extended.
For a live performance pedal you don't want to store much data; usually tens of ms or less. Start with something simple: store 50 or 100 ms of data in a ring (read old data, store new data, increment the address modulo the memory size). Output = incoming sample * k + old data * (1 - k), for some mix fraction k between 0 and 1. Very crude reverb or echo.
Yes, ring = ring buffer FIFO. And you'll see my description is a very crude implementation of a ring buffer FIFO.
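Here is a minimal C sketch of that ring-buffer echo (the FPGA version would use a BRAM and fixed-point math rather than a float mix factor); the sample rate, delay length, and names are just illustrative:

#include <stdint.h>

#define SAMPLE_RATE 48000
#define DELAY_MS    100
#define BUF_LEN     (SAMPLE_RATE * DELAY_MS / 1000)   /* about 100 ms of samples */

static int16_t ring[BUF_LEN];   /* the delay line */
static unsigned wr;             /* single read/write address */

/* Crude echo/reverb: mix the incoming sample with the sample stored BUF_LEN ago. */
int16_t echo(int16_t in, float k)       /* k = mix fraction, 0..1 */
{
    int16_t old = ring[wr];             /* read old data                */
    ring[wr] = in;                      /* store new data               */
    wr = (wr + 1) % BUF_LEN;            /* increment modulo buffer size */
    return (int16_t)(in * k + old * (1.0f - k));
}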
Now extend it to separate read and write pointers. Now read and write at different, harmonically related rates ... you have a pitch changer. With glitches when the pointers cross.
Think of ways to hide the glitches, and soon you'll be able to make the crappy noises Autotune adds to almost all modern music from that bloody Cher song onwards. (This takes serious DSP: something called an interpolating filter is probably the simplest way. Live with the glitches for now.)
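A crude C sketch of the two-pointer version: the write pointer steps one sample at a time, the read pointer steps at a different rate, and the output glitches whenever the pointers cross (no interpolation here; the buffer size and names are illustrative):

#include <stdint.h>

#define BUF_LEN 4800                    /* about 100 ms at 48 kHz */

static int16_t ring[BUF_LEN];
static unsigned wr;                     /* write pointer, 1 step per sample */
static float    rd;                     /* fractional read pointer */

int16_t pitch_shift(int16_t in, float ratio)   /* e.g. ratio 2.0 for an octave up, 1.5 for a fifth */
{
    ring[wr] = in;
    wr = (wr + 1) % BUF_LEN;

    int16_t out = ring[(unsigned)rd];   /* nearest-sample read: glitchy       */
    rd += ratio;                        /* read advances at a different rate  */
    if (rd >= (float)BUF_LEN)
        rd -= (float)BUF_LEN;
    return out;
}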
btw if I'm interested in a distortion effect, can it be accomplished by simply multiplying the incoming data by a constant?
Multiplying by a constant is ... gain.
Multiplying a signal by itself is squaring it ... aka second harmonic distortion or 2HD (which produces components on the octave of each tone in the input).
Multiplying a signal by the 2HD is cubing it ... aka 3HD, producing components a perfect fifth above the octave.
Multiplying the 2HD by the 2HD is the fourth power ... aka 4HD, producing components 2 octaves higher, or a perfect fourth above that fifth.
Multiply the 4HD by the signal to produce 5HD ... and so on to probably the 7th. Also note that these components will decrease dramatically in level; you probably want to add gain beyond 2HD, multiply by 4 (= shift left 2 bits) as a starting point, and increase or decrease as desired.
Now multiply each of these by a variable gain and mix them (mixing is simple addition) to add as many distortion components as you want, as loud as you want ... don't forget to add in the original signal!
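A C sketch of that harmonic build-up and mix, assuming samples are normalized floats in [-1, 1] (on the FPGA you would use fixed-point multipliers and shifts for the gains); the gain parameters are just illustrative:

/* Build 2nd..4th harmonics by repeated multiplication, apply some make-up
 * gain to the higher ones, and mix everything (mixing is just addition). */
float distort(float x, float g1, float g2, float g3, float g4)
{
    float h2 = x * x;         /* 2HD: one octave above each input tone  */
    float h3 = h2 * x;        /* 3HD: a perfect fifth above that octave */
    float h4 = h2 * h2;       /* 4HD: two octaves up                    */

    h3 *= 4.0f;               /* make-up gain; a shift-left-by-2 in fixed point */
    h4 *= 4.0f;

    return g1 * x + g2 * h2 + g3 * h3 + g4 * h4;   /* g1 keeps the original (dry) signal in the mix */
}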
There are other approaches to adding distortion. Try simply saturating all signals above 0.25 to 0.25, and all signals below -0.25 to -0.25, aka clipping. Sounds nasty but mix a bit of this into the above, for a buzz.
Learn how to make white noise (pseudo-random number, usually from a LFSR).
Multiply this by the input signal, and mix or match with the above, for some fuzz.
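A minimal C sketch of an LFSR noise source and the noise-times-signal fuzz, using a common 16-bit Galois LFSR; the seed, feedback mask, and scaling are only illustrative:

#include <stdint.h>

static uint16_t lfsr = 0xACE1u;         /* any non-zero seed */

/* 16-bit Galois LFSR: a cheap pseudo-random "white noise" source that maps
 * directly onto a shift register with a few XOR taps in RTL. */
float white_noise(void)
{
    unsigned lsb = lfsr & 1u;
    lfsr >>= 1;
    if (lsb)
        lfsr ^= 0xB400u;                /* feedback taps */
    return lfsr / 32768.0f - 1.0f;      /* scale to roughly [-1, 1] */
}

/* Crude fuzz: multiply the input by noise, then blend with the dry signal. */
float fuzz(float x, float amount)       /* amount in [0, 1] */
{
    return x * (1.0f - amount) + x * white_noise() * amount;
}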
Learn digital filtering (low pass, high pass, band pass for EQ) and how to control filters with noise or the input signal, and the world of sound is open to you.
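As a starting point for the filtering, here is a one-pole low-pass filter (an exponential moving average), about the simplest EQ building block; alpha is an assumed smoothing parameter in (0, 1]:

/* One-pole low-pass filter; smaller alpha means a lower cutoff frequency. */
float lowpass(float in, float alpha)
{
    static float state;
    state += alpha * (in - state);
    return state;
}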
Can someone help me understand why we don't need forwarding between lines 1 and 3 (there is no green arrow as there is between 1 and 2)?
I think we need it, because sub uses the value of t0 which add determines, and both are reading and writing that value at the same time. (To be precise, the write for add happens later, when the clock rises.)
You are correct: the third instruction (sub) has already read an incorrect (i.e. stale) value in its decode stage, and thus requires mitigation such as forwarding.
In fact, that sub instruction has read two incorrect (stale) values, one for the first operand, t0, and one for the second operand, t3, as that register is updated by the immediately prior instruction.
The first actual register update (of t0 by add) is available in cycle 5 (1-based counting), yet the decode of the sub happens in cycle 4. A forward is required: here it could be from the W stage of the add to the ALU stage of the sub -or- it could be done from the M stage of the add to the D stage of the sub.
Only one cycle later (the 4th instruction, not shown) could the decode obtain the proper up-to-date value from the earlier instruction's W stage: when the W stage overlaps with a subsequent instruction's D stage, no forward is necessary, since the W stage finishes early in the cycle and the D stage is able to pick up that result.
There is also a straightforward ALU-ALU dependency, a (read-after-write) hazard, on t3 between instruction 2 (the writer) and instruction 3 (the reader) that the diagram does not call out, so that is good evidence that the diagram is incomplete with respect to showing all the hazards.
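To make the cycle counting concrete, here is the five-stage occupancy the answer describes (F = fetch, D = decode, X = ALU, M = memory, W = writeback; cycles counted 1-based). The exact operands are assumed for illustration, since the original diagram is not reproduced here:

cycle:                     1   2   3   4   5   6   7
1: add  t0, ...            F   D   X   M   W
2: (ALU op writing t3)         F   D   X   M   W
3: sub  ..., t0, t3                F   D   X   M   W

The sub decodes in cycle 4, before add writes t0 back in cycle 5, so it needs either a W-to-X forward (cycle 5 to cycle 5) or an M-to-D forward (cycle 4 to cycle 4); the t3 value from instruction 2 needs the usual X-to-X forward into the sub's ALU stage in cycle 5.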
Sometimes educators only show the most clear example of the read-after-write hazard. There are many other hazards that are often overlooked.
Another involves load hazards. Normally, a load hazard is seen as requiring both a forward and a stall; this is the case if the next instruction uses the load result at the ALU. However, if a load instruction is succeeded by a store instruction (storing the loaded data), a forward from M (of the load) to M (of the store) can mitigate this hazard without a stall (much the same way that an X-to-X forward can mitigate an ALU dependency hazard).
So we might note that a store instruction has two register sources, but the register for the value being stored isn't actually needed until the M stage, whereas the register for the base address computation is needed in the X (ALU) stage. (That makes store somewhat different from, say, add which also has two register sources, in that there both are needed for the X stage.)
I have 2 K-type thermocouples running on individual MAX31855 boards on a Raspberry Pi 3.
The MAX31855 boards share a CLK pin but have separate CS and DO pins, similar to the setup given here:
Multiple thermocouples on raspberry pi
Everything works great until I place both thermocouples on a metal surface, which causes both thermocouple readings to be "NaN". I guess it's a grounding issue? Is there a way to solve this?
Thanks in advance
You can glue them with epoxy to the devices you are monitoring. Thermocouples should not be connected together electrically, and it sounds as though you may have grounded them. Also, I don't know how the MAX31855 works precisely, but in order to be able to read negative temperatures, the thermocouples are probably biased UP slightly (maybe 10 mV).
If you ground them, they will read a voltage that translates as -10 mV, which would be a temperature lower than absolute zero. Hence NaN, i.e. not a number.
I do know how the TH7 works, and the Python code for driving that (it's a Raspberry Pi thing) is here: https://github.com/robin48gx/TH7
I know not to use them, but there are techniques to swap two variables without using a third, such as
x ^= y;
y ^= x;
x ^= y;
and
x = x + y
y = x - y
x = x - y
In class the prof mentioned that these were popular 20 years ago when memory was very limited and are still used in high-performance applications today. Is this true? My understanding as to why it's pointless to use such techniques is that:
Using the third variable can never be the bottleneck.
The optimizer does this anyway.
So is there ever a good time to not swap with a third variable? Is it ever faster?
Compared to each other, is the method that uses XOR vs the method that uses +/- faster? Most architectures have a unit for addition/subtraction and XOR so wouldn't that mean they are all the same speed? Or just because a CPU has a unit for the operation doesn't mean they're all the same speed?
These techniques are still important to know for the programmers who write the firmware of your average washing machine or so. Lots of that kind of hardware still runs on Z80 CPUs or similar, often with no more than 4K of memory or so. Outside of that scene, knowing these kinds of algorithmic "trickery" has, as you say, as good as no real practical use.
(I do want to remark though that nonetheless, the programmers who remember and know this kind of stuff often turn out to be better programmers even for "regular" applications than their "peers" who won't bother. Precisely because the latter often take that attitude of "memory is big enough anyway" too far.)
There's no point to it at all. It is an attempt to demonstrate cleverness. Considering that it doesn't work in many cases (floating point, pointers, structs), is unreadable, and uses three dependent operations which will be much slower than just exchanging the values, it's absolutely pointless and demonstrates a failure to actually be clever.
You are right: if it were faster, then optimising compilers would detect the pattern when two numbers are exchanged and replace it. It's easy enough to do. But compilers do actually notice when you exchange two variables and may produce no code at all, simply using the variables the other way round after that. For example, if you exchange x and y, then write a += x; b += y; the compiler may just change this to a += y; b += x;. The XOR or add/subtract pattern, on the other hand, will not be recognised because it is so rare, and won't get improved.
Yes, there is, especially in assembly code.
Processors have only a limited number of registers. When the registers are pretty full, this trick can avoid spilling a register to another memory location (possibly in an unfetched cache line).
I've actually used the 3-way XOR to swap a register with a memory location in the critical path of high-performance hand-coded lock routines for x86 where the register pressure was high, and there was no (lock-safe!) place to put the temp. (On x86, it is useful to know that the XCHG instruction to memory has a high cost associated with it, because it includes its own lock, whose effect I did not want. Given that x86 has a LOCK prefix opcode, this was really unnecessary, but historical mistakes are just that.)
Moral: every solution, no matter how ugly it looks standing in isolation, likely has some uses. It's good to know them; you can always not use them if inappropriate. And where they are useful, they can be very effective.
Such a construct can be useful on many members of the PIC series of microcontrollers which require that almost all operations go through a single accumulator ("working register") [note that while this can sometimes be a hindrance, the fact that it's only necessary for each instruction to encode one register address and a destination bit, rather than two register addresses, makes it possible for the PIC to have a much larger working set than other microcontrollers].
If the working register holds a value and it's necessary to swap its contents with those of RAM, the alternative to:
xorwf other,w ; w=(w ^ other)
xorwf other,f ; other=(w ^ other)
xorwf other,w ; w=(w ^ other)
would be
movwf temp1 ; temp1 = w
movf other,w ; w = other
movwf temp2 ; temp2 = w
movf temp1,w ; w = temp1 [old w]
movwf other ; other = w
movf temp2,w ; w = temp2 [old other]
Three instructions and no extra storage, versus six instructions and two extra registers.
Incidentally, another trick which can be helpful in cases where one wishes to make another register hold the maximum of its present value or W, and the value of W will not be needed afterward is
subwf other,w ; w = other-w
btfss STATUS,C ; Skip next instruction if carry set (other >= W)
subwf other,f ; other = other-w [i.e. other-(other-oldW), i.e. old W]
I'm not sure how many other processors have a subtract instruction but no non-destructive compare, but on such processors that trick can be a good one to know.
These tricks are not very likely to be useful if you want to exchange two whole words in memory or two whole registers. Still, you could take advantage of them if you have no free registers (or only one free register for a memory-to-memory swap) and there is no "exchange" instruction available (like when swapping two SSE registers in x86), or the "exchange" instruction is too expensive (like the register-memory xchg in x86), and it is not possible to avoid the exchange or lower the register pressure.
But if your variables are two bitfields in a single word, a modification of the 3-XOR approach may be a good idea:
y = (x ^ (x >> d)) & mask
x = x ^ y ^ (y << d)
This snippet is from Knuth's "The Art of Computer Programming", Vol. 4A, Sec. 7.1.3. Here y is just a temporary variable. Both bitfields to exchange are in x; mask is used to select the lower bitfield, and d is the distance between the bitfields.
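As a quick sanity check of the snippet above, here is a small C example that swaps two adjacent 4-bit fields; the mask and d values (and the function name) are chosen purely for illustration:

#include <assert.h>
#include <stdint.h>

/* Swap the low nibble (bits 0-3) with the next nibble (bits 4-7) of x using
 * the delta-swap above: mask selects the lower field, d is the field distance. */
uint32_t swap_nibbles(uint32_t x)
{
    const uint32_t mask = 0x0Fu;
    const unsigned d = 4;
    uint32_t y = (x ^ (x >> d)) & mask;
    return x ^ y ^ (y << d);
}

int main(void)
{
    assert(swap_nibbles(0xA3u) == 0x3Au);    /* the two fields are exchanged           */
    assert(swap_nibbles(0x1B7u) == 0x17Bu);  /* bits outside the fields are untouched  */
    return 0;
}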
You could also use tricks like this in hardness proofs (to preserve planarity). See for example the crossover gadget on this slide (page 7). This is from recent lectures in "Algorithmic Lower Bounds" by Prof. Erik Demaine.
Of course it is still useful to know. What is the alternative?
c = a
a = b
b = c
three operations with three resources rather than three operations with two resources?
Sure, the instruction set may have an exchange, but that only comes into play if 1) you are writing assembly or 2) the optimizer figures this out as a swap and then encodes that instruction. You could also do inline assembly, but that is not portable and a pain to maintain; and if you called an asm function, the compiler has to set up for the call, burning a bunch more resources and instructions. Although it can be done, you are not as likely to actually exploit the instruction set's feature unless the language has a swap operation.
The average programmer doesn't NEED to know this now any more than back in the day, and folks will bash this kind of premature optimization. Unless you know the trick and use it often, undocumented code like this is not obvious, so it is bad programming: unreadable and unmaintainable.
It is still a valuable programming education and exercise, for example to have one invent a test to prove that it actually swaps for all combinations of bit patterns. And just like doing an xor reg,reg on x86 to zero a register, it has a small but real performance boost for highly optimized code.
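For example, a brute-force C test over all 8-bit patterns might look like this (a sketch; the types and loop bounds are just one way to do it):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* 256 * 256 pairs is cheap, so test every combination exhaustively. */
    for (unsigned a = 0; a < 256; a++) {
        for (unsigned b = 0; b < 256; b++) {
            uint8_t x = (uint8_t)a, y = (uint8_t)b;
            x ^= y;
            y ^= x;
            x ^= y;
            assert(x == (uint8_t)b && y == (uint8_t)a);
        }
    }
    return 0;
}

One caveat this test cannot show: if the two lvalues turn out to be the same object (swapping a variable with itself through pointers, say), the XOR sequence zeroes it instead of leaving it unchanged.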
I am doing a VLSI project and I am implementing a barrel shifter using a tool called DSCH. The schematic is realized using transmission gates.
What the circuit does is ROTATE the 8-bit word (it is an 8-bit shifter) by a number of positions chosen by a decoder, in one clock cycle.
But I want to know the use of a rotator, and why it is still called a shifter even though it's rotating.
Also, please help me with some applications of a rotator which could be added to the present circuit to show its use.
Rotation is just shifting with the bit exiting from one end fed back into the input at the other end, possibly by way of the carry flag bit. At the level of a simple implementation, it would make sense to have one circuit for both operations, with some additional control lines to select the source at the input side between the output of the other side, 0, or 1. Sign extension during right shift of 2's complement numbers would be another selectable option often built in.
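In C terms (just to pin down the behaviour; the hardware does the same thing with wiring rather than arithmetic), a rotate is a shift whose discarded bits are fed back in at the other end. The helper name is only illustrative:

#include <stdint.h>

/* Rotate-left of an 8-bit value: the bit that falls off the top is fed back
 * in at the bottom. */
uint8_t rotl8(uint8_t x, unsigned n)
{
    n &= 7;                                       /* rotation amount modulo the width        */
    return (uint8_t)((x << n) | (x >> (8 - n)));  /* n == 0 is fine after promotion to int   */
}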
The Stack Exchange sites aren't really suited to "list" questions, including ones about applications, but a couple come to mind:
If you want a vector to test every bit of another value in turn, and to do so repeatedly, you could just keep rotating an initial one-bit-active value over and over, never having to re-initialize it.
You could swap a two-part (typically double-byte) value to imitate the opposite endianness of encoding by rotating it half way. Put another way, it can be a single-operation swap of the values of two pairable but also independently accessible registers (think AL and AH together making up AX in real-mode x86). But this will not work to endian-swap a four-part value, such as a 32-bit value on a byte-addressable machine. (Both of these applications are sketched in code below.)
Various coding, checksum, and hashing schemes may wish to transform a value in a way that mixes all of its bits without discarding any; a rotate does exactly that, which is why rotate operations turn up in many hash and cipher inner loops.
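A small C sketch of the first two applications (the walking test bit and the half-width byte swap); the function names are just illustrative:

#include <stdint.h>

/* Half-width rotation of a 16-bit value swaps its two bytes, like exchanging
 * AL and AH inside AX with one operation. */
uint16_t byteswap16(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8));
}

/* Walking one-hot test mask: keep rotating the same value; no re-initialization. */
void test_each_bit(uint8_t value)
{
    uint8_t mask = 0x01;
    for (int i = 0; i < 8; i++) {
        if (value & mask) {
            /* ... bit i of value is set ... */
        }
        mask = (uint8_t)((mask << 1) | (mask >> 7));   /* rotate left by one */
    }
}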