A CSP is k-consistent if, for any set of k - 1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable. A CSP is strongly k-consistent if it is k-consistent and is also (k - 1)-consistent, (k - 2)-consistent, ... all the way down to 1-consistent.
From the definition above, I do not understand how a CSP can just be k-consistent but not strongly k-consistent.
If the CSP is k-consistent, doesn't it necessarily have to be (k - 1)-consistent too? If not, could you provide an example?
Consider, for example, the problem of completing a partially-filled-in Latin square.
Any consistent grid with just one blank cell can always be completed. Since only one cell is blank, the row that cell is in must be missing exactly one digit (if it's missing more than one, then some other digit must appear twice in that row by the pigeonhole principle, making the partial grid inconsistent). The same applies to the blank cell's column, and in fact it must be the same digit missing (proof is left as an exercise to the reader; hint: count the occurrences of each digit). It follows that this missing digit can be consistently assigned to that blank cell. So the CSP of n×n Latin squares is n²-consistent.
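Here is a quick Python sketch of that completion argument (the list-of-lists grid, with None marking the blank, is just my own encoding):

def complete_single_blank(grid):
    # Fill the single blank cell (None) of an otherwise consistent
    # partial Latin square: the one digit missing from the blank's row
    # is the only consistent value for that cell.
    n = len(grid)
    r, c = next((i, j) for i in range(n) for j in range(n)
                if grid[i][j] is None)
    missing = (set(range(1, n + 1)) - set(grid[r])).pop()
    grid[r][c] = missing
    return grid

print(complete_single_blank([[1, 2, 3],
                             [2, 3, 1],
                             [3, 1, None]]))   # the blank becomes 2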
On the other hand, there are lots of consistent partial grids (i.e. grids whose filled-in digits haven't broken any of the rules so far) which cannot be filled in without breaking any rules. For example, the following 2×2 grid cannot be made into a Latin square by filling in the blanks, because each of the blanks has no consistent assignment:
1 .
. 2
So this is a consistent set of assignments to two variables with no consistent assignment to a third variable, meaning that the CSP of 2×2 Latin squares is not 3-consistent; we already showed that it is 4-consistent, but now we have shown it is not strongly 4-consistent.
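And here is a throwaway script to verify the counterexample exhaustively (again, the encoding is my own):

from itertools import product

def is_latin(g):
    n = len(g)
    rows_ok = all(sorted(row) == list(range(1, n + 1)) for row in g)
    cols_ok = all(sorted(col) == list(range(1, n + 1)) for col in zip(*g))
    return rows_ok and cols_ok

# The partial grid   1 .
#                    . 2   with both blanks tried exhaustively:
completions = [[[1, a], [b, 2]] for a, b in product([1, 2], repeat=2)]
print(any(is_latin(g) for g in completions))   # False: no completion exists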
I recently decided to write my own symmetric encryption program (which could be used in a custom password manager, for example).
I would like your opinion about it: did I make any big mistakes? If not, would it be easily breakable?
It is basically a Vigenère fork that tries to get closer to the principles of Vernam encryption while remaining easy to use (you can use any key to encrypt your text).
How does it work?
You enter a message (e.g. hello world) and a seed (e.g. seed).
The seed is transformed into a number by a hash function.
We add the number of letters in the message to this number, and hash it a second time.
A pseudo-random number generator is initialized with the result, and a list of random numbers the length of the text is generated (this is the key).
We shift each letter by the corresponding number in the list (the first letter of the message is shifted by the first number of our generated list); a Python sketch follows the example below.
Example:
Alphabet: [a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z]
List: [1,18,3,17,0]
Word: "hello"
h + 1 = j
e + 18 = w
l + 3 = o
l + 17 = c (as the alphabet is finished, we continue at the beginning)
o + 0 = o
Output: "jwoco"
The principles of Vernam encryption specify that:
The key used to shift the letters must be at least as long as the text -> It's okay
The key must only be used once -> It's okay if you change your seed or the size of the message (since we include the text length in the hash used to initialize the key)
The key must be completely random -> This will depend on the random number generation algorithm and the hash algorithm, but if they are good, we should get an output from which it is impossible, without the key, to find a text that is more likely than another to be the original message.
Is my explanation clear? Do you agree with me? Do you have any clarifications to add, improvements to propose, or random number generation and hash algorithms to recommend?
Have a nice day,
Thomas!
A relevant anecdote from Bruce Schneier:
See https://www.schneier.com/crypto-gram/archives/1998/1015.html#cipherdesign
A cryptographer friend tells the story of an amateur who kept bothering him with the cipher he invented. The cryptographer would break the cipher, the amateur would make a change to "fix" it, and the cryptographer would break it again. This exchange went on a few times until the cryptographer became fed up. When the amateur visited him to hear what the cryptographer thought, the cryptographer put three envelopes face down on the table. "In each of these envelopes is an attack against your cipher. Take one and read it. Don't come back until you've discovered the other two attacks." The amateur was never heard from again.
Use AES.
In my software engineering course, I encountered the following characteristic of a stack, condensed by me: What you push is what you pop. The fully axiomatic version I found here.
Being a natural-born troll, I immediately invented the Troll Stack: if it has more than one element already on it, pushing results in a random permutation of those elements. Promptly I got into an argument with the lecturers about whether this nonsense implementation actually violates the axioms. I said no, the top element stays where it is. They said yes, somehow you can recursively apply the push-pop axiom to get "deeper". Which I don't see. Who is right?
The violated axiom is pop(push(s, x)) = s. Take a stack s with n > 1 distinct entries. If you implement push such that push(s, x) is s'x, with s' being a random permutation of s, then since pop is a function you have a problem: how do you reverse random_permutation() such that pop(push(s, x)) = s? The preimage of s' might have been any of the n! > 1 permutations of s, and no matter which one you map to, there are n! - 1 > 0 other original stacks s'' for which pop(push(s'', x)) != s''.
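To make the violation concrete, here is a quick Python sketch of the troll stack (the function names are mine); running it a few times shows that pop(push(s, x)) usually fails to return s:

import random

def push(s, x):
    s = list(s)
    if len(s) > 1:
        random.shuffle(s)   # troll behaviour: permute the existing elements
    return s + [x]

def pop(s):
    return s[:-1]           # ordinary pop: drop the top element

s = [1, 2, 3, 4]
print(pop(push(s, 99)))     # often NOT [1, 2, 3, 4] -> the axiom is violated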
In cases like this, which might be very easy to see for everybody but not for you (hence your usage of the word "troll"), it always helps to simply run the "program" on a piece of paper.
Write down what happens when you push and pop a few times, and you will see.
You should also be able to see how those axioms correspond very closely to the actual behaviour of your stack; they are not just there for fun, but they deeply (in multiple meanings of the word) specify the data structure with its methods. You could even view them as a "formal system" describing the ins and outs of stacks.
Note that it is still good for you to be sceptical; this leads to (a) better insight and (b) detection of errors your superiors make. In this case they are right, but there are cases where it can save you a lot of time (e.g. while searching for the solution to the "MU" riddle in "Gödel, Escher, Bach", which would be an excellent read for you, I think).
As discussed here, we now know what the traits are for telling whether a language is functional or not. But where does immutability fit into this scenario?
Here's a quick mental exercise, in pseudo-code:
1) x = 5;
2) x = x + 1;
3) print x; // prints "6"
4) x = x
// THEREFORE 5 = 6
Right? We know x is 5 from line 1, and we know x is 6 from line 3, so if x = x, then 5 must equal 6.
The joke here is that we're mixing imperative, command-oriented thinking with mathematical, functional thinking. In imperative style, x is a variable, which means we assume its value potentially changes over time. But when we do math, we make a different assumption: we assume "x" is a specific value, meaning that once we know the value of "x", we can substitute that value anywhere "x" appears. That assumption is the whole basis for being able to solve equations. Obviously, if the value of "x" were to change out from under us, like it does in line 2 of the mental exercise above, then all bets are off. Line 2 is not valid math, because there is no value for which the statement x = x + 1 is mathematically true. (At least as far as I ever learned in high school math!)
Another way of looking at it is to say that imperative programming mixes values, functions, and state, which makes it hard to reason about. Because the value of "x" can be different depending on when you look at it, you can't easily know what effect it's going to have on how your code runs, just by looking at your code. You have to "play compiler" and mentally keep track of all the variables and how they're changing over time, and that quickly gets unmanageable. State is the number one source of incidental complexity in computer programming.
Functional programming simplifies things by separating state from function. In a mathematical function like f(x) = (x * x), the value of "x" does not change over time. It's a pure description of the relationship between "x" and "f(x)", and that relationship is always true whether you look at "x" first or "f(x)" first. There's no state involved. You're describing the relationship between values, irrespective of time, and without any state. And because you don't have to worry about state changing out from under you, you can more easily and safely reason about the relationship between your inputs and outputs.
Immutable variables simulate this kind of stateless, mathematical reasoning by removing the element of time and variable-ness from your code. You still need to mutate your state at some point, but you can put that off until later, and only update state after your pure functions have worked out the correct values to store. By separating state management from your pure functions, you make coding simpler, easier to reason about, and usually more reliable. Plus it's a lot easier to test pure functions because there's no need for mocks or extra setup or other state-simulation prerequisites.
What's really cool in all this is that the same holds true even when "x" is something more complex than just a simple number. You can write pure functions whose arguments are arrays, records, Customer objects, etc., and the same principles still apply. By keeping your functions pure and your values immutable, you're writing code that describes the relationships between the function argument(s) and the function output, without the incidental complexity of time and state. And that's huge.
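As a tiny illustration (a contrived Python sketch of my own), compare a stateful counter with its pure counterpart:

# Impure: the result depends on hidden state, so the same call
# can return different values at different times.
total = 0

def add_impure(x):
    global total
    total += x
    return total

# Pure: the result depends only on the arguments, so calls can be
# substituted, reordered, and tested without any setup.
def add_pure(total, x):
    return total + x

print(add_impure(5), add_impure(5))    # 5 10 -- same call, different results
print(add_pure(0, 5), add_pure(0, 5))  # 5 5  -- referentially transparent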
I am currently studying pseudocode, and despite my three-year history in programming, this one particular practice-exam question has me perplexed with its unconventional code (shown below):
Highlighted in pink are my two main problems with the code. I have experience across three languages, yet I have never encountered the control-flow operator <>, and cannot imagine exactly what it would be used for. In addition, the variable average appears in the code in the form "average:6:2", about which I am equally clueless.
To Summarise:
What is the function of the control flow method "<>"
As is stated in question (a) in the image above, what is the purpose of 'average:6:2'?
<> is a common notation for "not equal".
While number is not equal to 999.
number:field_width:precision is the Pascal formatter for a real number, with field_width being the space reserved for the field and precision the number of digits after the dot. So 3.141519:4:1 will print <space>3.1.
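In Python terms (my own translation of the Pascal behaviour), the sentinel loop and the formatter would look like this:

values = [12, 7, 23, 999, 42]    # 999 is the sentinel; 42 is never read
total = count = 0
for number in values:
    if number == 999:            # Pascal's  while number <> 999
        break
    total += number
    count += 1
average = total / count
print(f"{average:6.2f}")         # like  write(average:6:2)  ->  " 14.00"
print(f"{3.141519:4.1f}")        # like  3.141519:4:1        ->  " 3.1"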
What is the function of the control flow method "<>"
- It is "less than or greater than". If the input is equal to 999 then the loop ends. The number 999 is used as a Sentinel Value.
What is the purpose of 'average:6:2'?
- I'm thinking this is a field width of 6 with 2 decimal places.
I am reading about a "trick" (referenced from Aho, Hopcroft, and Ullman) for using a data vector without explicitly initializing it.
The trick is to use two extra vectors (From, To) and an integer Top.
Before accessing an element DATA[i] of the vector, a specific condition between From, To, and Top is checked; if it holds, the element i is considered initialized.
If the condition does not hold, the element is initialized and From, To, and Top are updated as follows:
Top = Top + 1
From[i] = Top
To[Top] = i
Data[i] = 0
The condition to know whether an element has been initialized is:
From[i] <= Top && To[From[i]] == i
If true then it has been initialized.
My question is: why are the extra vectors needed?
From my point of view, if I access an element and i <= Top, then the element is initialized. Then I increment i, i.e. i++.
In this case, i <= Top means that DATA[i] has been initialized.
Am I not seeing a boundary case? It seems to me this is enough.
Or am I wrong?
If this is the example I am thinking of, then you don't know the order in which the elements of DATA[] will be accessed - it is used as a sparse array, for example for the values in an almost-empty hash table. So the first three items to be accessed might be DATA[113], DATA[29], and DATA[123123], not DATA[0], DATA[1], and DATA[2]. You could in fact get away without From[], in which case To would store {113, 29, 123123} - but then you would have to search all of To every time you wanted to see if an element of DATA was valid. E.g. to check whether 123123 is valid: To[0] = 113, no luck; To[1] = 29, no luck; To[2] = 123123 - yes, 123123 is valid.
The time-saving idea is that none of To, From, Data need to be initialized beforehand, and all of them can be arrays so large that initialization takes appreciable time.
At the outset, any entry of any of the arrays can have any value. It could be the case, by chance, that for some i, To[From[i]] == i (that condition can be true by chance or because Data[i] has actually been set). However, Top counts the number of Data elements set so far, so the test From[i] <= Top, combined with To[From[i]] == i, distinguishes the cases completely.
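Here is a compact Python sketch of the whole trick (I use 0-based indexing, so the test becomes From[i] < Top rather than From[i] <= Top; the random prefills just simulate uninitialized memory, since Python lists can't really be left uninitialized):

import random

N = 1000
# Simulate uninitialized memory: all three arrays start full of garbage.
data = [random.randrange(N) for _ in range(N)]
frm = [random.randrange(N) for _ in range(N)]
to = [random.randrange(N) for _ in range(N)]
top = 0  # number of elements registered so far

def is_initialized(i):
    # True iff some earlier write registered i: frm[i] points into the
    # used prefix of to, and the entry there points back at i.
    return frm[i] < top and to[frm[i]] == i

def write(i, value):
    global top
    if not is_initialized(i):
        frm[i] = top      # register i in the next free slot of to
        to[top] = i
        top += 1
    data[i] = value

def read(i):
    return data[i] if is_initialized(i) else 0   # 0 as the default value

write(113, 7)
print(read(113), read(29))   # 7 0 -- DATA[29] was never written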