PIC18 Signal Measurement Timer SMT1 (Counter Mode) Not Incrementing [closed]

I'm trying to use SMT1 on a PIC18F45K42 to count cycles of a square wave on pin RB0. I can't get the counter to increment, and I'm not sure what I'm doing wrong. If I understand correctly, SMT1TMR should be incrementing, but it isn't. (I also checked SMT1TMRL, etc., directly, and they aren't changing.)
1) I am trying to do a normal counter, not a gated one, so I'm not using the Window signal at all (I don't want to have to use it; I just want to zero the counter from time to time and then check how many square-wave cycles have arrived).
2) I have the registers set as follows, according to the paused debugger in MPLAB X. I am putting a scope probe directly on the pin and can see that the square wave is arriving. I can also pause the debugger occasionally to read PORTB and see PORTB.0 toggling between high and low, so I believe the signal is being received.
3) I'm playing with square waves from 20 Hz to around 400 Hz created from a function generator.
I have attached an image of the registers. Here are the values for reference:
SMT1SIGPPS 0x08 (should be RB0)
SMT1CON0 0x80
SMT1CON1 0xC8
SMT1STAT 0x05
SMT1SIG 0x00
TRISB 0xE3
PMD6 0x17 (SMT1MD is 0, which should be "not disabled")
Any suggestions much appreciated. This seems like it should be so simple and straightforward.
Thank you.

I figured it out. The key is in data sheet section 25.1.2, Period Match Interrupt. The period register has to be set longer than the counter will run; it defaults to 0, so the counter couldn't increment. I fixed it by manually loading the three period registers with the maximum value. I added the following to my init code, and it seems to be working as expected now.
SMT1PRU = 0xFF; //set max period for SMT1 so counter doesn't roll over
SMT1PRH = 0xFF;
SMT1PRL = 0xFF;
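
For anyone hitting the same wall, here is a minimal init sketch in C (XC8) pulling together the register values from this post plus the period fix. The pin-setup lines and the write ordering are my assumptions, not something verified on hardware:

// Minimal SMT1 counter-mode init sketch for the PIC18F45K42 (XC8).
// Register values come from the question; pin config is an assumption.
#include <xc.h>

void smt1_counter_init(void)
{
    TRISBbits.TRISB0 = 1;     // RB0 as input
    ANSELBbits.ANSELB0 = 0;   // digital input buffer enabled (assumption)

    SMT1SIGPPS = 0x08;        // SMT1 signal input = RB0 (value from the post)
    SMT1SIG = 0x00;           // signal source = pin selected via SMT1SIGPPS

    SMT1PRU = 0xFF;           // the fix: max 24-bit period so the counter
    SMT1PRH = 0xFF;           // is not stopped by an immediate period match
    SMT1PRL = 0xFF;           // (SMT1PR defaults to 0)

    SMT1CON0 = 0x80;          // enable the module
    SMT1CON1 = 0xC8;          // counter mode, repeated acquisition, SMT1GO = 1
}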

Related

Stack implementation the Trollface way [closed]

In my software engineering course, I encountered the following characteristic of a stack, condensed by me: what you push is what you pop. The fully axiomatic version I found here.
Being a natural-born troll, I immediately invented the Troll Stack: if it has more than one element already on it, pushing results in a random permutation of those elements. Promptly I got into an argument with the lecturers over whether this nonsense implementation actually violates the axioms. I said no, since the top element stays where it is. They said yes, since somehow you can recursively apply the push-pop axiom to get "deeper". Which I don't see. Who is right?
The violated axiom is pop(push(s,x)) = s. Take a stack s with n > 1 distinct entries. If you implement push such that push(s,x) is s'x with s' being a random permutation of s, then since pop is a function, you have a problem: how do you reverse random_permutation() such that pop(push(s,x)) = s? The preimage of s' might have been any of the n! > 1 permutations of s, and no matter which one you map to, there are n! - 1 > 0 other original permutations s'' for which pop(push(s'',x)) != s''.
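
If you would rather run the violation than prove it, here is a small C sketch of my own (the Stack layout and names are illustrative): it builds the Troll Stack, performs pop(push(s,x)), and compares the result with the original s.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CAP 16

typedef struct { int data[CAP]; int n; } Stack;

// "Troll" push: randomly permute the existing elements, then put x on top.
void troll_push(Stack *s, int x) {
    for (int i = s->n - 1; i > 0; i--) {              // Fisher-Yates shuffle
        int j = rand() % (i + 1);
        int t = s->data[i]; s->data[i] = s->data[j]; s->data[j] = t;
    }
    s->data[s->n++] = x;
}

int troll_pop(Stack *s) { return s->data[--s->n]; }   // remove the top element

int main(void) {
    srand((unsigned)time(NULL));
    Stack s = { {1, 2, 3, 4}, 4 };                    // n > 1 distinct entries
    Stack before = s;
    troll_push(&s, 99);
    (void)troll_pop(&s);                              // pop(push(s, x))
    int holds = (s.n == before.n) &&
                memcmp(s.data, before.data, sizeof(int) * (size_t)s.n) == 0;
    printf("pop(push(s,x)) = s holds: %s\n", holds ? "yes" : "no");
    return 0;
}

With four distinct entries there are 4! = 24 possible permutations, so the check prints "yes" only by luck (about 1 run in 24); this is the preimage argument above in executable form.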
In cases like this, which might be easy to see for everyone but you (hence your use of the word "troll"), it always helps to simply run the "program" on a piece of paper.
Write down what happens when you push and pop a few times, and you will see.
You should also be able to see how those axioms correspond very closely to the actual behaviour of your stack; they are not just there for fun, but they deeply (in multiple meanings of the word) specify the data structure with its methods. You could even view them as a "formal system" describing the ins and outs of stacks.
Note that it is still good for you to be sceptical; this leads to a) better insight and b) detection of errors your superiors make. In this case they are right, but there are cases where it can save you a lot of time (e.g. while searching for the solution to the "MU" riddle in "Gödel, Escher, Bach", which would be an excellent read for you, I think).

What determines the result of overflowed operations? [closed]

Example:
int max = a > b ? a : b;
int min = a + b - max;
What determines whether this will work? The processor? The hardware? The language? Help me understand this at as deep a level as possible.
The processor IS the hardware (at least for the purposes of this question).
The language is purely a way for you to express things so that they can be converted into what the processor itself expects. The role of the language here is to define what "int" means, what the arithmetic operators are/do, and what their exceptional behavior is. Low-level languages (like C/C++) leave several things "implementation defined", like the overflow behavior of integers. Other languages (like Python) may define "int" as an abstract (not a hardware) concept and thereby change some of the rules (like detecting overflows and applying custom behavior).
If the language leaves something implementation defined and the implementation offloads that decision to the hardware, then the hardware is what defines the behavior of your code.
The high-level programming language provides a way for humans to describe what they want to happen. A compiler reduces that down into a language the processor understands, (ultimately) machine code. The instruction set for a particular processor is designed to be useful for doing tasks: general-purpose processors for general-purpose tasks, including the ones you have described. Unlike pencil-and-paper math, where if we need another column we just add another power of ten (99 + 1 = 100, for example: two digits wide going in, three digits coming out), processors have a fixed width for their registers. That doesn't mean you can't get creative, but the language and the resources (memory, disk space, etc.) have limits. And the processor, either directly in its logic or via the compiler emitting the right sequence of instructions, can and will detect an overflow if you ask it to, in general. Some processors make this harder than others, and some are not general-purpose enough, but I don't think we need to worry about those; the one you are reading this web page on can definitely handle it.
Computers (hardware) represent numbers in two's complement. Check this for the details of two's complement, and why computers use it.
In two's complement, signed numbers (not floating-point ones for now, for the sake of simplicity) have a sign bit as the most significant bit. For example:
01111111
Represents 127 in two's complement. And
10000000
represents -128. In both examples, the first bit is the sign bit: if it's 0, the number is positive; otherwise it's negative.
8-bit signed numbers can represent numbers between -128 and 127, so if you add 127 and 3, you won't get 130, you will get -126 because of overflow. Let's see why:
  01111111   (127)
+ 00000011   (3)
----------
  10000010   (-126 in two's complement: a negative number)
How does the hardware know an overflow occurred? In addition, for example, if you add two positive numbers and the result comes out negative, that means overflow; likewise, if you add two negative numbers and the result comes out positive, that also means overflow.
I hope this is a useful example of how these things happen at the hardware level.
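
To make this runnable, here is the same 127 + 3 example as a small C sketch using the sign rule from the previous paragraph. Note that overflowing a plain int is undefined behaviour in C, which is why this uses a fixed 8-bit type and lets the usual promotion to int do the addition; the narrowing cast wraps on two's-complement targets:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t a = 127, b = 3;
    int8_t sum = (int8_t)(a + b);   // a + b is computed as int (130), then
                                    // narrowed: wraps to -126 on two's-
                                    // complement targets
    int overflow = (a > 0 && b > 0 && sum < 0) ||
                   (a < 0 && b < 0 && sum > 0);   // the sign rule from above
    printf("%d + %d = %d (overflow: %s)\n", a, b, sum, overflow ? "yes" : "no");
    return 0;
}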

Designing a System Timer (Programmable Logic Timer) [closed]

System timer
Computers contain a timer with programmable channels; programmable channels here means timers of different durations. How would you design such a circuit with four programmable channels, each disabled initially? An enable input, two channel-select inputs, and 4 lines for duration input can set any channel to a given duration from 1-15; zero means the channel is disabled. Four output lines correspond to the channels and are set high as soon as the corresponding timer expires.
Inputs:
Clock Pulse: CP
Input Available: IA
Channel Select: CS0, CS1
Duration: D0…D3
Outputs:
Timer Expired: TA, TB, TC, TD
I want to use discrete logic ICs like flip-flops, logic gates, decoders, multiplexers, encoders, etc. Data input is to be done using push-buttons, and output should be displayed on LEDs. The clock should be common.
A single-shot timer consists of:
n-bit binary counter
driven by the input clock source CP and reset by the start input. It increments its value with each clock pulse. The reset input should be hooked up to the timer start signal.
n-bit LATCH register
to store the timer interval value (your per-channel duration D0..D3)
n-bit comparator
to compare the counted value with the interval value. The XOR of equal bits is zero, so if you OR all the XORed bits together, the result is 0 when the LATCH register value and the counter value are the same.
output flip-flop
to remember the expiration of the timer (for non-pulse-mode operation); the output is your TA, TB, TC, TD. The start impulse should also reset the RS flip-flop; in my circuit I do this with WR, but I suspect you will have a separate Start signal instead...
Something like this (schematic omitted):
You need to take into account the negations and auxiliary inputs of the ICs used to make it work properly (some have a negated WR, some do not... the same goes for all pins, so always check the datasheet). So do not forget to set the chip-select and output-enable signals to their working conditions.
multi-channel timer
Well, you just add a LATCH and comparator for each channel, each connected to the same counter. The tricky part is the channel selection and starting: you need to add a 1-of-4 decoder to select the correct LATCH while setting D0..D3. To draw a circuit for that part I would need to know more about the purpose of this... Also, if you set the intervals only manually, then you can use DIP switches instead of the LATCHes and the selection circuitry, making it all much simpler.
All of the above can be made just from NAND or NOR gates instead of a concrete IC implementation. For that you need to use Karnaugh maps and Boolean algebra.
It has been a while since I did anything with raw gates, as nowadays it is much easier, cheaper, and faster to use an MCU/FPGA for all of this, so beware: I could have missed something trivial (like a negation gate somewhere)... but even then, this should get the idea behind timers across.
BTW C++ representation of this is:
int cnt = 0, D = ?;            // D = the duration loaded from D0..D3 (1-15)
bool TA = false;
for (;;)                       // one iteration per clock pulse CP
{
    if (cnt == D) TA = true;   // comparator match sets the output flip-flop
    cnt = (cnt + 1) & 15;      // 4-bit counter wraps at 16
}
[Edit1] the 4-channel version
This is based on the text above. There is also another option with fewer components that uses a 4-nibble RAM module instead of the LATCH registers and decoder, consisting of a single timer continuously looping through all channels on a 4x-multiplied CP clock (made, for example, by XORing delayed CP signals).
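
As a rough C counterpart to the snippet above, the 4-channel version might look like the following sketch. The arrays D, T, and cnt are my stand-ins for the latches, output flip-flops, and counter; the real hardware shares one counter across the channels' comparators as described, while separate software counters are used here only to keep the sketch short:

#include <stdio.h>
#include <stdbool.h>

int  D[4]   = {5, 0, 12, 3};  // per-channel durations; 0 = channel disabled
bool T[4]   = {false};        // the outputs TA, TB, TC, TD
int  cnt[4] = {0};

void clock_pulse(void)        // call once per CP edge
{
    for (int ch = 0; ch < 4; ch++) {
        if (D[ch] == 0) continue;            // disabled channel stays low
        if (cnt[ch] == D[ch]) T[ch] = true;  // comparator match -> output high
        cnt[ch] = (cnt[ch] + 1) & 15;        // 4-bit counter per channel
    }
}

int main(void)
{
    for (int i = 0; i < 20; i++) clock_pulse();   // run 20 CP edges
    printf("T = %d %d %d %d\n", T[0], T[1], T[2], T[3]);
    return 0;
}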

understanding communication delays in parallelism [closed]

I am reading through "Computer Architecture: A Quantitative Approach, 5th ed." and am looking at an example from Chapter 5 on page 350. Attached is a scan of the example in question. I do not quite follow the logic of how they do things in this example.
My questions are, as follows:
Where is the 0.3ns cycle time coming from?
200/0.3 is roughly 666 cycles, I follow this. However, when plugged back into the CPI equation, it makes no sense: 0.2% (0.002) x 666 is equal to 1.332 and not 1.2. What is going on here?
When they say that "the multiprocessor with all local references is 1.7/0.5 = 3.4 times faster", where are they getting that from? Meaning: I see nowhere in the given information stating that local communication is twice as fast...
Any help would be appreciated.
Where is the 0.3ns cycle time coming from?
That comes from the clock rate of 3.3 GHz: 1 / 3.3 GHz ≈ 0.3 ns.
200/0.3 is roughly 666 cycles, I follow this. However, when plugged back into the CPI equation, it makes no sense: 0.2% (0.002) x 666 is equal to 1.332 and not 1.2. What is going on here?
I think you're right. That looks like a misprint. That should be
CPI = 0.5 + 1.33 = 1.83
When they say that "the multiprocessor with all local references is 1.7/0.5 = 3.4 times faster", where are they getting that from? Meaning: I see nowhere in the given information stating that local communication is twice as fast...
They don't say anywhere that local communication is twice as fast. They're dividing the effective CPI that they calculated for the multiprocessor with 0.2% remote references by the base CPI of 0.5. This tells you how many times faster the multiprocessor with all local references is. (Of course it should be about 1.83/0.5 = 3.66 times faster.)
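
For what it's worth, here is the arithmetic spelled out as a small C program (numbers taken straight from the example; the book's 0.3 ns is a rounding of 1/3.3 GHz, which is why its 666 cycles and the quoted 1.2 don't line up exactly):

#include <stdio.h>

int main(void) {
    double cycle_ns = 1.0 / 3.3;             // 3.3 GHz clock -> ~0.303 ns
    double remote   = 200.0 / cycle_ns;      // remote access in cycles, ~660
    double cpi      = 0.5 + 0.002 * remote;  // base CPI + remote stalls, ~1.82
    printf("cycle = %.3f ns, remote = %.0f cycles, CPI = %.2f, speedup = %.2fx\n",
           cycle_ns, remote, cpi, cpi / 0.5); // all-local vs 0.2%-remote
    return 0;
}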

Error correcting algorithm for very strange data channel [closed]

Please recommend an error-correcting algorithm for a very strange data channel.
The channel consists of two parts: Corrupter and Eraser.
Corrupter receives a word consisting of 10000 symbols in a 3-symbol alphabet, say, {'a','b','c'}.
Corrupter changes each symbol with probability 10%.
Example:
Corrupter input: abcaccbacbbaaacbcacbcababacb...
Corrupter output: abcaacbacbbaabcbcacbcababccb...
Eraser receives the corrupter's output and erases each symbol with probability 94%.
Eraser produces a word of the same length in the 4-symbol alphabet {'a','b','c','*'}.
Example:
Eraser input: abcaacbacbbaabcbcacbcababccb...
Eraser output: *******a*****************c**...
So, at the eraser's output, approximately 6% × 10000 = 600 symbols would not be erased; approximately 90% × 600 = 540 of them would preserve their original values, and approximately 60 would be corrupted.
What encoding-decoding algorithm with error correction is best suited for this channel?
What amount of useful data could be transmitted providing > 99.99% probability of successful decoding?
Is it possible to transmit 40 bytes of data through this channel? (256^40 ~ 3^200)
Here's something you can at least analyze:
Break your 40 bytes up into 13 25-bit chunks (with some wastage so this bit can obviously be improved)
2^25 < 3^16 so you can encode the 25 bits into 16 a/b/c "trits" - again wastage means scope for improvement.
With 10,000 trits available you can give each of your 13 encoded chunks 769 output trits. Pick (probably at random) 769 different linear (mod 3) functions on 16 trits - each function is specified by 16 trits, and you take a vector dot product between those trits and the 16 input trits. This gives you your 769 output trits.
Decode by considering all possible (2^25) chunks and picking the one which matches the most of the surviving trits. You have some hope of getting the right answer as long as there are at least 16 surviving trits, which I think Excel is telling me via BINOMDIST() happens often enough that there is a pretty good chance it will happen for all of the 13 25-bit chunks.
I have no idea what error rate you get from the garbling, but random linear codes have a pretty good reputation, even if this one has a short block size because of my brain-dead decoding technique. At worst you could try simulating the encoding, transmission, and decoding of 25-bit chunks and work it out from there. You can get a slightly more accurate lower bound on the error rate than the above if you pretend that the garbling stage erases as well, and so recalculate with a slightly higher probability of erasure.
I think this might actually work in practice if you can afford the 2^25 guesses per 25-bit block to decode. OTOH if this is a question in a class my guess is you need to demonstrate your knowledge of some less ad-hoc techniques already discussed in your class.
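
If you do try simulating, here is a scaled-down C sketch of the scheme (all names and parameters are mine; K and N are shrunk from the post's 16 and 769 so the brute-force decoder runs instantly):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define K 8     /* message trits per chunk (the post uses 16)  */
#define N 400   /* output trits per chunk  (the post uses 769) */

static int coeff[N][K];   /* random linear (mod 3) functions */

static void encode(const int *msg, int *out) {
    for (int i = 0; i < N; i++) {
        int s = 0;
        for (int j = 0; j < K; j++) s += coeff[i][j] * msg[j];
        out[i] = s % 3;   /* dot product mod 3 */
    }
}

/* Brute-force decode: score every possible message against the surviving
   trits (erasures marked -1) and keep the best match. */
static void decode(const int *recv, int *best_msg) {
    int best = -1;
    for (long m = 0; m < 6561; m++) {   /* 3^K candidates for K = 8 */
        int msg[K], out[N], score = 0;
        long t = m;
        for (int j = 0; j < K; j++) { msg[j] = (int)(t % 3); t /= 3; }
        encode(msg, out);
        for (int i = 0; i < N; i++)
            if (recv[i] >= 0 && recv[i] == out[i]) score++;
        if (score > best) {
            best = score;
            for (int j = 0; j < K; j++) best_msg[j] = msg[j];
        }
    }
}

int main(void) {
    srand((unsigned)time(NULL));
    for (int i = 0; i < N; i++)
        for (int j = 0; j < K; j++) coeff[i][j] = rand() % 3;

    int msg[K], sent[N], recv[N], got[K];
    for (int j = 0; j < K; j++) msg[j] = rand() % 3;
    encode(msg, sent);

    /* The channel: corrupt each trit with probability 10%, then erase
       each trit with probability 94%, as in the question. */
    for (int i = 0; i < N; i++) {
        int s = sent[i];
        if (rand() % 10 == 0) s = (s + 1 + rand() % 2) % 3;
        recv[i] = (rand() % 100 < 94) ? -1 : s;
    }

    decode(recv, got);
    int ok = 1;
    for (int j = 0; j < K; j++) if (got[j] != msg[j]) ok = 0;
    printf("decoded %s\n", ok ? "correctly" : "incorrectly");
    return 0;
}

Repeated runs give a rough per-chunk failure rate; since the whole 40-byte message succeeds only if all 13 chunks decode, that per-chunk rate has to be well below (1 - 0.9999)/13 to hit the >99.99% target.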
