Is it possible to make conditions in ASCII? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I am working on software that needs ASCII commands to define alert thresholds.
Each threshold corresponds to a quantity in litres, depending on the number of gas tanks connected together.
Examples:
- If there are 1 or 2 tanks, the threshold is 30 L and the ASCII code is S6350130 ("S635" to set up, 01 for the first tank, 30 for 30 litres).
- If there are 3 tanks, the threshold is 45 L and the ASCII code is S6350145 ("S635" to set up, 01 for the first tank, 45 for 45 litres).
- If there are 4 or 5 tanks, the threshold is 60 L and the ASCII code is S6350160 ("S635" to set up, 01 for the first tank, 60 for 60 litres).
- If there are 6 or 7 tanks, the threshold is 80 L and the ASCII code is S6350180 ("S635" to set up, 01 for the first tank, 80 for 80 litres).
My problem is that I have to repeat the command for each tank, and I want to know whether it is possible to write conditions in ASCII so that only one command is needed.
I hope it is clear.
Thank you in advance.

No.
ASCII is a way to encode text in bytes.
It is not a programming language. It has no means of expressing logic.
Just about every programming language can be expressed in ASCII and can manipulate ASCII encoded data, so you can pick any programming language you like and write your logic in that.
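For example, the mapping described in the question is a few lines in any language. Here is a minimal Python sketch (the tank/threshold mapping is taken from the question's examples; the function names are mine):

```python
def threshold_litres(num_tanks):
    # Thresholds from the question: 1-2 tanks -> 30 L, 3 -> 45 L,
    # 4-5 -> 60 L, 6-7 -> 80 L.
    if num_tanks <= 2:
        return 30
    if num_tanks == 3:
        return 45
    if num_tanks <= 5:
        return 60
    return 80

def build_commands(num_tanks):
    # One "S635" command per tank: "S635" + 2-digit tank index + 2-digit litres.
    litres = threshold_litres(num_tanks)
    return ["S635%02d%02d" % (tank, litres) for tank in range(1, num_tanks + 1)]

print(build_commands(3))  # ['S6350145', 'S6350245', 'S6350345']
```

The program still emits one command per tank, since the device only understands plain commands, but the logic lives in one place.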

What are online down-scaling algorithms? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
I'm building a circuit that will read PAL/NTSC (576i, 480i) frames from an analog input. The microcontroller has 32 kB of memory. My goal is to scale the input down to 32x16 resolution and forward this image to an LED matrix.
A PAL frame can take ~400 kB of memory, so I thought about down-scaling online: read 18 pixels, decimate to 1; read 45 lines, decimate to 1. Peak memory usage: 45 x 32 = 1.44 kB (45 decimated lines awaiting decimation).
Question: What are other online image down-scaling algorithms, besides the naive one above? Googling is extremely hard because the searches mostly turn up online services (PDF resize, etc.).
Note that the mentioned formats are interlaced, so you first read the 0th, 2nd, 4th, ... lines (first semi-frame), then the 1st, 3rd, ... lines (second semi-frame).
If you are using simple averaging of the pixel values in each resulting cell (I suspect it is OK for such a small output matrix), then create an output array (16x32 = 512 entries) and sum the appropriate values into every cell. You also need a buffer for a single input line (768 or 640 entries).
x_coeff = input_width // out_width
y_coeff = input_height // out_height

# accumulate the current interlaced input line into the output grid
out_y = inputrow // y_coeff
for inputcol in range(input_width):
    out_x = inputcol // x_coeff
    out_array[out_y][out_x] += input_line[inputcol]

# advance to the next interlaced line: even lines (0, 2, 4, ...) first,
# then odd lines (1, 3, 5, ...)
inputrow += 2
if inputrow == input_height:
    inputrow = 1      # first semi-frame done, start the second
elif inputrow > input_height:
    inputrow = 0      # frame complete, start over

# once the frame is complete, divide every out_array entry by (x_coeff * y_coeff)

Lossless compression of an ordered series of 29 digits (each 0 to 5 Likert scale)

I have a survey with 29 questions, each with a 5-point Likert scale (0=None of the time; 4=Most of the time). I'd like to compress the total set of responses to a small number of alpha or alphanumeric characters, adding a check digit to the end.
So, the set of responses 00101244231023110242231421211 would get turned into something like A2CR7HW4. This output would be part of a printout that a non-techie user would enter on a website as a shortcut to entering the entire string. I'd want to avoid ambiguous characters, such as 0,O,D,I,l,5,S, leaving me with 21 or 22 characters to use (uppercase only). Alternatively, I could just stick with capital alpha only and use all 26 characters.
I'm thinking to convert each pair of digits to a letter (5^2=25, so the whole alphabet is adequate). That would reduce the sequence to 15 characters, which is still longish to type without errors.
Any other suggestions on how to minimize the length of the output?
EDIT: BTW, for context, the survey asks 29 questions about mental health symptoms, generating a predictive risk for 4 psychiatric conditions. Need a code representing all responses.
If the five answers are all equally likely, then the best you can do is ceiling(29 * log(5) / log(n)) symbols, where n is the number of symbols in your alphabet. (The base of the logarithm doesn't matter, so long as they're both the same.)
So for your 22 symbols, the best you can do is 16. For 26 symbols, the best is 15, as you described for 25. If you use 49 characters (e.g. some subset of the upper and lower case characters and the digits), you can get down to 12. The best you'll be able to do with printable ASCII characters would be 11, using 70 of the 94 characters.
The only way to make it smaller would be if the responses are not all equally likely and are heavily skewed. Though if that's the case, then there's probably something wrong with the survey.
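The bound above is easy to evaluate directly; a quick Python check of the numbers quoted (the function name is mine):

```python
import math

def min_symbols(questions, answers, alphabet_size):
    # ceil(questions * log(answers) / log(alphabet_size)):
    # total information content divided by information per output symbol.
    # The base of the logarithms cancels out.
    return math.ceil(questions * math.log(answers) / math.log(alphabet_size))

for n in (22, 26, 49, 70):
    print(n, min_symbols(29, 5, n))  # 22 -> 16, 26 -> 15, 49 -> 12, 70 -> 11
```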
First, choose a set of permissible characters, i.e.
characters = "ABC..."
Then, prefix the input-digits with a 1 and interpret it as a quinary number:
100101244231023110242231421211
Now, convert this quinary number to a number in base-"strlen(characters)", i.e. base26 if 26 characters are to be used:
02 23 18 12 10 24 04 19 00 15 14 20 00 03 17
Then, use these numbers as index in "characters", and you have your encoding:
CVSMKWETAPOUADR
For decoding, just reverse the steps.
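With arbitrary-precision integers this is only a few lines; a rough Python sketch of the steps above (function names are mine):

```python
def encode(digits, characters):
    # Prefix with "1" so leading zeros survive the round trip,
    # then read the whole string as one base-5 (quinary) number.
    n = int("1" + digits, 5)
    base = len(characters)
    out = ""
    while n:
        n, r = divmod(n, base)
        out = characters[r] + out
    return out

def decode(code, characters):
    base = len(characters)
    n = 0
    for ch in code:
        n = n * base + characters.index(ch)
    # Convert back to base 5 and drop the leading sentinel digit.
    digits = ""
    while n:
        n, r = divmod(n, 5)
        digits = str(r) + digits
    return digits[1:]

alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
responses = "00101244231023110242231421211"
code = encode(responses, alphabet)
assert len(code) == 15 and decode(code, alphabet) == responses
```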
Are you doing this in a specific language?
If you want to be really thrifty about it you might want to consider encoding the data at bit level.
Since there are only 5 possible answers per question you could do this with only 3 bits:
000
001
010
011
100
Your end result would be a string of bits, at 3 bits per answer, for a total of 87 bits, or just under 11 bytes.
EDIT - misread the question slightly, there are 5 possible answers not 4, my mistake.
The only problem now is that for 4 of your 5 answers you're wasting a bit. You won't gain much from going to this much trouble, I'd say, but it's worth considering.
EDIT:
I've been playing about with it and it's difficult to work out a mechanism that allows you to use both 2 and 3 bit values.
Since your output would be an 87-bit binary value, you'd need to be able to distinguish between 2-bit and 3-bit values when converting back to the original answers.
If you're working with a larger number of values there are some methods you could use, like having a reserved bit for each values that can be used to sort of type a value and give it some meaning. But working with so few bits as it is, it's hard to shave anything off.
Your output at 87 bits could be padded out to 128 bits, which would give you four 32-bit values if you wanted to simplify it. This 128-bit value would be like a unique fingerprint representing a specific set of answers. There are many ways you can represent 128 bits.
But in the end, working at the bit level is about as good as it gets when it comes to actual compression and encoding of data... if you can express 5 unique values in less than 3 bits I'd be suitably impressed.
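For reference, the fixed 3-bits-per-answer packing sketched above is only a few lines in Python (function names are mine):

```python
def pack(answers):
    # 3 bits per answer (values 0..4), most significant answer first.
    n = 0
    for a in answers:
        n = (n << 3) | a
    # 29 answers * 3 bits = 87 bits -> 11 bytes
    return n.to_bytes((3 * len(answers) + 7) // 8, "big")

def unpack(data, count):
    n = int.from_bytes(data, "big")
    return [(n >> (3 * (count - 1 - i))) & 0b111 for i in range(count)]

responses = [int(c) for c in "00101244231023110242231421211"]
packed = pack(responses)
print(len(packed))  # 11
assert unpack(packed, len(responses)) == responses
```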

What is the minimum number of bytes to encode a number? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I need to encode a number in the range from 0 to 7^20 (integer). I want to use the minimum number of bytes to encode such a number. So what is the minimum number of bytes to do that? Please help me.
Thank you so much
You need to encode values from 0 to 7^20, which means 7^20 + 1 values in total. Without any information about the input, the best you can do is ⌈log2(7^20 + 1)⌉ = 57 bits per number.
You can use 8 bytes per number, which is also easy to decode, but that wastes 7 bits per number.
Another way is to store exactly 57 bits per number and pack the numbers tightly together. You save 7 bytes per 8 numbers stored (8 numbers take up 57 bytes instead of 64). However, it is slightly trickier to recover the original numbers.
My lack of knowledge does not allow me to talk about any method that can do better.
Rough estimate: 3 bits are enough to encode 0..7, so 3 ⋅ 20 = 60 bits are enough to encode 0..7^20.
More accurate: ⌈log2 7^20⌉ = 57 bits are enough.
If the numbers to encode are uniformly distributed over this range, you cannot do better. Otherwise, you can do better on average, by giving shorter codes to the more common numbers.
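Both layouts are easy to try in a language with big integers; here is a Python sketch of the tightly packed 57-bit variant (names are mine):

```python
BITS = 57  # ceil(log2(7**20 + 1))
MASK = (1 << BITS) - 1

def pack(nums):
    # Concatenate the numbers into one big integer, 57 bits each,
    # then serialize to the minimum number of whole bytes.
    n = 0
    for x in nums:
        assert 0 <= x <= 7**20
        n = (n << BITS) | x
    return n.to_bytes((BITS * len(nums) + 7) // 8, "big")

def unpack(data, count):
    n = int.from_bytes(data, "big")
    return [(n >> (BITS * (count - 1 - i))) & MASK for i in range(count)]

nums = [0, 7**20, 123456789]
assert unpack(pack(nums), len(nums)) == nums
print(len(pack([7**20] * 8)))  # 57 bytes for 8 numbers, instead of 64
```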

Error correcting algorithm for very strange data channel [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Please recommend an error-correcting algorithm for use with a very strange data channel.
The channel consists of two parts: Corrupter and Eraser.
Corrupter receives a word consisting of 10000 symbols in 3-symbol alphabet, say, {'a','b','c'}.
Corrupter changes each symbol with probability 10%.
Example:
Corrupter input: abcaccbacbbaaacbcacbcababacb...
Corrupter output: abcaacbacbbaabcbcacbcababccb...
Eraser receives corrupter output and erases each symbol with probability 94%.
Eraser produces word of the same length in 4-symbol alphabet {'a','b','c','*'}.
Example:
Eraser input: abcaacbacbbaabcbcacbcababccb...
Eraser output: *******a*****************c**...
So, at the eraser output, approximately 6% × 10000 = 600 symbols will not be erased; approximately 90% × 600 = 540 of them will preserve their original values, and approximately 60 will be corrupted.
What encoding-decoding algorithm with error correction is best suited for this channel?
What amount of useful data could be transmitted providing > 99.99% probability of successful decoding?
Is it possible to transmit 40 bytes of data through this channel? (256^40 ~ 3^200)
Here's something you can at least analyze:
Break your 40 bytes up into 13 25-bit chunks (with some wastage so this bit can obviously be improved)
2^25 < 3^16 so you can encode the 25 bits into 16 a/b/c "trits" - again wastage means scope for improvement.
With 10,000 trits available you can give each of your 13 encoded chunks 769 output trits. Pick (probably at random) 769 different linear (mod 3) functions on 16 trits - each function is specified by 16 trits, and you take the vector dot product between those trits and the 16 input trits. This gives you your 769 output trits.
Decode by considering all possible (2^25) chunks and picking the one that matches the most surviving trits. You have some hope of getting the right answer as long as there are at least 16 surviving trits, which (Excel's BINOMDIST() tells me) happens often enough that there is a pretty good chance it will happen for all 13 of the 25-bit chunks.
I have no idea what error rate you get from garbling but random linear codes have a pretty good reputation, even if this one has a short blocksize because of my brain-dead decoding technique. At worst you could try simulating the encoding transmission and decoding of 25-bit chunks and work it out from there. You can get a slightly more accurate lower bound on error rate than above if you pretend that the garbling stage erases as well and so recalculate with a slightly higher probability of erasure.
I think this might actually work in practice if you can afford the 2^25 guesses per 25-bit block to decode. OTOH if this is a question in a class my guess is you need to demonstrate your knowledge of some less ad-hoc techniques already discussed in your class.
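A scaled-down sketch of this scheme in Python (toy parameters - 10-bit chunks into 7 trits and 60 output trits instead of 25/16/769 - so the brute-force decode stays fast; all names are mine):

```python
import random

random.seed(1)
K_BITS, K_TRITS, N_OUT = 10, 7, 60  # toy sizes; the answer uses 25 / 16 / 769

def to_trits(x, n):
    # Base-3 digits of x, least significant first.
    return [(x // 3**i) % 3 for i in range(n)]

# One random linear (mod 3) function per output trit, each specified by
# a coefficient vector of K_TRITS trits.
funcs = [[random.randrange(3) for _ in range(K_TRITS)] for _ in range(N_OUT)]

def encode(chunk):
    t = to_trits(chunk, K_TRITS)
    return [sum(c * v for c, v in zip(f, t)) % 3 for f in funcs]

def decode(received):
    # Brute force: try every possible chunk, keep the one agreeing with
    # the most surviving (non-erased) trits.
    best, best_score = 0, -1
    for cand in range(2**K_BITS):
        enc = encode(cand)
        score = sum(r == s for r, s in zip(received, enc) if r is not None)
        if score > best_score:
            best, best_score = cand, score
    return best

msg = 777
sent = encode(msg)
erased = [s if random.random() < 0.25 else None for s in sent]  # ~75% erasure
print(decode(sent) == msg, decode(erased) == msg)
```

At full scale the decode loop runs 2^25 candidates per chunk, which is the expensive part the answer mentions.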

Determining the level of dissonance between two frequencies [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Using continued fractions, I'm generating integer ratios between frequencies to a certain precision, along with the error (difference from integer ratio to the real ratio). So I end up with things like:
101 Hz with 200 Hz = 1:2 + 0.005
61 Hz with 92 Hz = 2:3 - 0.0036
However, I've run into a snag in actually deciding which of these will be more dissonant than others. At first I thought low numbers = better, but something like 1:51 would likely not be very dissonant, since one frequency is far above the other. It might be a screaming high, ear-bleeding pitch, but I don't think it would have dissonance.
It seems to me that it must be related to the product of the two sides of the ratio compared to the constituents somehow. 1 * 51 = 51, which doesn't "go up much" from one side. 2 * 3 = 6, which I would think would indicate higher dissonance than 1:51. But I need to turn this feeling into an actual number, so I can compare 5:7 vs 3:8, or any other combinations.
And how could I work error into this? Certainly 1:2 + 0 would be less dissonant than 1:2 + 1. Would it be easier to apply an algorithm that works for the above integer ratios directly to the frequencies themselves? Or does having the integer ratio with an error allow for a simpler calculation?
edit: Thinking on it, an algorithm that could extend to any set of N frequencies in a chord would be awesome, but I get the feeling that would be much more difficult...
edit 2: Clarification:
Let's consider that I am dealing with pure sine waves, and either ignoring the specific thresholds of the human ear or abstracting them into variables. If there are severe complications, then they are ignored. My question is how it could be represented in an algorithm, in that case.
Have a look at Chapter 4 of http://homepages.abdn.ac.uk/mth192/pages/html/maths-music.html. From memory:
1) If two sine waves are just close enough for the human ear to be confused, but not so close that the human ear cannot tell they are different, there will be dissonance.
2) Pure sine waves are extremely rare - most tones have all sorts of harmonics. Dissonance is much more likely to arise from colliding harmonics than from colliding main tones. To sort of follow your example, two tones many octaves apart are unlikely to be dissonant because their harmonics may not meet, whereas with just a couple of octaves' difference and loads of harmonics, a flute could sound out of tune with a double bass. So whether there is dissonance depends not only on the frequencies of the main tones but on the harmonics present, and this has been demonstrated experimentally by constructing sounds with peculiar pseudo-harmonics.
The answer is in Chapter 4 of Music: a Mathematical Offering. In particular, see the following two figures:
consonance / dissonance plotted against the x critical bandwidth in 4.3. History of consonance and dissonance
dissonance vs. frequency in 4.5. Complex tones
Of course you still have to find a nice way to turn these data into a formula / program that gives you a measure of dissonance but I believe this gives you a good start. Good luck!
This will help:
http://www.acs.psu.edu/drussell/demos/superposition/superposition.html
You want to look at superposition.
Discrete or Fast Fourier Transform is the most generic means to get what you're asking for.