In my software engineering course, I encountered the following characteristic of a stack, condensed by me: what you push is what you pop. The fully axiomatic version is the one I linked here.
Being a natural-born troll, I immediately invented the Troll Stack: if the stack already holds more than one element, pushing results in a random permutation of those elements (with the new element on top). I promptly got into an argument with the lecturers over whether this nonsense implementation actually violates the axioms. I said no, since the top element stays where it is; they said yes, because you can somehow apply the push-pop axiom recursively to get "deeper", which I don't see. Who is right?
The violated axiom is pop(push(s, x)) = s. Take a stack s with n > 1 distinct entries. If you implement push such that push(s, x) is s'x, with s' being a random permutation of s, then you have a problem, because pop is a function: how do you reverse random_permutation() so that pop(push(s, x)) = s? The s' you see might have come from any of the n! > 1 permutations of s, and no matter which one pop maps it back to, there are n! - 1 > 0 other original stacks s'' for which pop(push(s'', x)) != s''.
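To see the violation concretely, here is a minimal sketch in Python (the class and names are illustrative, not from the question):

import random

# A minimal sketch of the Troll Stack: push shuffles whatever is already
# on the stack before placing the new element on top.
class TrollStack:
    def __init__(self, items=None):
        self._items = list(items or [])

    def push(self, x):
        if len(self._items) > 1:
            random.shuffle(self._items)   # permute the existing elements
        self._items.append(x)             # the new element still ends up on top

    def pop(self):
        return self._items.pop()          # removes x, but what remains may differ

# The axiom says pop(push(s, x)) = s; here it usually fails:
s = [1, 2, 3]
ts = TrollStack(s)
ts.push(99)
ts.pop()
print(ts._items == s)   # often False: the stack underneath x was permuted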
In cases like this, which might be very easy to see for everybody but not for you (hence your usage of the "troll" word), it always helps to simply run the "program" on a piece of paper.
Write down what happens when you push and pop a few times, and you will see.
You should also be able to see how those axioms correspond very closely to the actual behaviour of your stack; they are not just there for fun, but they deeply (in multiple meanings of the word) specify the data structure with its methods. You could even view them as a "formal system" describing the ins and outs of stacks.
Note that it is still good for you to be sceptical; this leads to (a) better insight and (b) detection of errors your superiors make. In this case they are right, but there are cases where it can save you a lot of time (e.g. while searching for the solution to the "MU" puzzle in "Gödel, Escher, Bach", which would be an excellent read for you, I think).
As discussed here, we now know what the traits are for telling whether a language is functional or not. But where does immutability fit into this picture?
Here's a quick mental exercise, in pseudo-code:
1) x = 5;
2) x = x + 1;
3) print x; // prints "6"
4) x = x
// THEREFORE 5 = 6
Right? We know x is 5 from line 1, and we know x is 6 from line 3, so if x = x, then 5 must equal 6.
The joke here is that we're mixing imperative, command-oriented thinking with mathematical, functional thinking. In imperative style, x is a variable, which means we assume its value potentially changes over time. But when we do math, we make a different assumption: we assume "x" is a specific value, meaning that once we know the value of "x", we can substitute that value anywhere "x" appears. That assumption is the whole basis for being able to solve equations. Obviously, if the value of "x" were to change out from under us, like it does in line 2 of the mental exercise above, then all bets are off. Line 2 is not valid math, because there is no value for which the statement x = x + 1 is mathematically true. (At least as far as I ever learned in high school math!)
Another way of looking at it is to say that imperative programming mixes values, functions, and state, which makes it hard to reason about. Because the value of "x" can be different depending on when you look at it, you can't easily know what effect it's going to have on how your code runs, just by looking at your code. You have to "play compiler" and mentally keep track of all the variables and how they're changing over time, and that quickly gets unmanageable. State is the number one source of incidental complexity in computer programming.
Functional programming simplifies things by separating state from function. In a mathematical function like f(x) = (x * x), the value of "x" does not change over time. It's a pure description of the relationship between "x" and "f(x)", and that relationship is always true whether you look at "x" first or "f(x)" first. There's no state involved. You're describing the relationship between values, irrespective of time, and without any state. And because you don't have to worry about state changing out from under you, you can more easily and safely reason about the relationship between your inputs and outputs.
Immutable variables simulate this kind of stateless, mathematical reasoning by removing the element of time and variable-ness from your code. You still need to mutate your state at some point, but you can put that off until later, and only update state after your pure functions have worked out the correct values to store. By separating state management from your pure functions, you make coding simpler, easier to reason about, and usually more reliable. Plus it's a lot easier to test pure functions because there's no need for mocks or extra setup or other state-simulation prerequisites.
What's really cool in all this is that the same holds true even when "x" is something more complex than just a simple number. You can write pure functions whose arguments are arrays, records, Customer objects, etc., and the same principles still apply. By keeping your functions pure and your values immutable, you're writing code that describes the relationships between the function argument(s) and the function output, without the incidental complexity of time and state. And that's huge.
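Here is a small sketch of that idea in Python; the names (Customer, with_discount) are made up purely for illustration:

from dataclasses import dataclass, replace

# A pure function over an immutable record: the input value never changes,
# so the relationship between argument and result holds regardless of time.
@dataclass(frozen=True)              # frozen=True makes instances immutable
class Customer:
    name: str
    balance: float

def with_discount(customer: Customer, percent: float) -> Customer:
    """Pure: returns a new value and never mutates its argument."""
    return replace(customer, balance=customer.balance * (1 - percent / 100))

alice = Customer("Alice", 100.0)
discounted = with_discount(alice, 10)
print(alice.balance, discounted.balance)   # 100.0 90.0 -- alice is unchanged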
So, I have been doing a little research and searching around Google about Algorithms. I was getting the hang of it until I got a little deeper.
I understand that an Algorithm is defined as: a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer (quoted from a dictionary). A Computer Program is defined as: a sequence of instructions, written to perform a specified task on a computer (quoted from Wikipedia).
An analogy I saw on a thread helped me a little bit:
Cake Algorithm:
--Get Ingredients
--Bake
--Serve
Cake Program:
--2fl of flour
--3 eggs
--Mix in pan
--etc.
As I saw it, the algorithm was the more general of the two.
So basically I began to think of a computer program as code that implements the Algorithm; in other words, the Algorithm is a blueprint. For example, this is a simple Algorithm:
Step 1: Start
Step 2: Declare variables num1, num2 and sum.
Step 3: Read values num1 and num2.
Step 4: Add num1 and num2 and assign the result to sum.
sum=num1+num2
Step 5: Display sum
Step 6: Stop
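For example, a program implementing that blueprint could look like this (Python, just for illustration):

# One possible program for the "sum" blueprint above (Python, illustrative):
num1 = float(input("Enter num1: "))   # Step 3: read values num1 and num2
num2 = float(input("Enter num2: "))
sum_result = num1 + num2              # Step 4: add and assign the result to sum
print("sum =", sum_result)            # Step 5: display sum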
I am also aware, after more Google searches, that Algorithms can be represented in pseudo-code:
if a>b
Display a is bigger than b #Simple Example, but you get the point
They can also be sketched directly in real code (fleshing out the algorithm as you go), like this:
def foo():
    pass  # blank code for the algorithm, to be filled in later
So now this is where my question comes in: a lot of StackOverflow threads I see ask a user to explain/correct their algorithm. However, when I look at the algorithm, instead of seeing something like the above examples I will see this:
// I know this isn't Python but that's not the point!
for (int i = 0; i < N; i++) {
    Console.Write("Hello World !");
}
No blank functions or empty code blocks; it's all filled in.
So now my questions:
Is the above code for example indeed an Algorithm? Or is it a program written that follows an Algorithm?
If it is considered an Algorithm, what is the difference between that being an Algorithm, and that code being a computer program?
If it is both, does that mean the two terms can be used interchangeably?
Any clarification to a beginner would be nice.
// I know this isn't Python but that's not the point!
for (int i = 0; i < N; i++) {
Console.Write("Hello World !");
}
Is the above code for example indeed an Algorithm? Or is it a program written that follows an Algorithm?
Nope. If you were to remove the Console.Write() part and just write "print hello world" instead, it could be an algorithm to print hello world N times.
If it is considered an Algorithm, what is the difference between that being an Algorithm, and that code being a computer program?
An algorithm is language independent; it just shows how to do something. A language defines a stricter set of rules on how to implement something, and a program implements an algorithm following the rules and syntax defined by that language.
If it is both, does that mean the two terms can be used interchangeably?
Nope. An algorithm is not language specific whereas a program is always used in conjunction with a programming language.
Sample statement: write a Java program to implement the BubbleSort algorithm.
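To make the distinction concrete, here is the question's "print Hello World N times" idea written once as a language-independent algorithm and once as a program (Python here, purely for illustration):

# Algorithm (language independent):
#   Step 1: Start
#   Step 2: Repeat N times: print "Hello World !"
#   Step 3: Stop
#
# One program that implements that algorithm (Python, for illustration):
N = 5
for i in range(N):
    print("Hello World !")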
First of all welcome to StackOverflow and the world of coding.
The answers to your questions:
1. Is the above code for example indeed an Algorithm? Or is it a program written that follows an Algorithm?
The above code is a "program". An algorithm is, as you said, the blueprint, the overall architecture itself.
The algorithm for the above code could be:
STEP 1: START
STEP 2: Print Hello World N times. (That's it, actually.)
STEP 3: END
You see, the algorithm is something that describes the code in simple, plain language. You can express the blueprint of a program in pseudo-code or a flowchart, but you can't feed that to the PC without giving it a form the PC will accept. Therefore you need to convert it into a program using a language like C, C++, Python, etc.
2. If it is considered an Algorithm, what is the difference between that being an Algorithm, and that code being a computer program?
Well, it is not an algorithm, as described above, and the answer to the remaining part is also explained above.
3. If it is both, does that mean the two terms can be used interchangeably?
No, it shouldn't be, for exactly the same reason stated above.
Hope You get it.
Cheers
Same difference as between a class and an instance object.
Recently, I have been trying to understand how the Binary Extended Euclidean Algorithm works at the processor level. This question is all about finding an inverse element in GF(2^m) with a polynomial basis.
Generally, I came across the Extended Euclidean Algorithm for evaluating an inverse element, but it involves too many addition and multiplication operations. The binary EEA requires just bit-shifting operations (equivalent to division by 2, i.e. a logical shift right). The algorithm is in this link, page 8.
In steps 3 and 5 of this algorithm, every iteration shifts the parameters u and b by 1 bit to the right, adding a zero to the MSB at the same time. The loop ends when u == 1 and returns b. My question is: how many primitive operations does a processor (say a 32-bit processor, for example) perform in step 3 or step 5 of every iteration?
I came across the barrel shifter and I am quite confused about how fast the shifting takes place. Should I really count these as primitive operations, or should I ignore them because the shifting may be faster?
It would really help me a lot if someone would show the primitive operations for the case where the size of u is 194 bits.
In case you are wondering about the denominator x in steps 3 and 5 of the algorithm: it's the polynomial representation, and x means nothing but 10 in binary; the parameter u is an N-bit binary number.
There is no generic answer to this question: you can use portable code that will be tedious to optimize or highly machine specific code that will be even more complicated to optimize without breaking.
If you want real performance, you have to use MMX/AVX registers of the maximum width you can get your hands on. Intel provides lightweight wrappers around the low-level instructions as macros and inline functions.
Always use unsigned types for your shifting operations to avoid unnecessary steps.
Usually there is a "right shift" assembly opcode which can right-shift a register by a given number of bits. Such an operation takes one cycle.
This assumes that your value is already loaded into the register, however.
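To get a feel for the word-level work, here is a rough Python simulation (purely illustrative, not the algorithm from the linked paper) of shifting a 194-bit u right by one bit on a 32-bit machine, with the value stored little-endian in 32-bit limbs; 194 bits need ceil(194/32) = 7 limbs, so each single-bit shift costs on the order of 7 shift-and-carry word operations plus loop overhead:

# Rough simulation of u >>= 1 for a 194-bit u on a 32-bit processor.
# The value is stored little-endian in 32-bit limbs (7 limbs for 194 bits).
# Each limb costs roughly one shift plus one OR to pull in the carry bit
# falling down from the limb above it.
def shift_right_one(limbs):
    carry = 0
    for i in reversed(range(len(limbs))):       # most significant limb first
        new_carry = limbs[i] & 1                # bit that drops into the limb below
        limbs[i] = (limbs[i] >> 1) | (carry << 31)
        carry = new_carry
    return limbs

u = [0xFFFFFFFF] * 6 + [0x3]                    # an example 194-bit value, 7 limbs
print([hex(w) for w in shift_right_one(u)])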
The best answer anyway: Implement this algorithm in a low level language (C, C++) and look at the assembly code produced by the compiler.
I am trying to understand why it is impossible to write a program H that can check whether another program P will halt or not on a specific input I (the Halting problem), but I am unable to get a clear idea of it.
My intuition says that if this program H tries to run a program P which is not going to halt, then H itself will go into a loop. But I don't think this is the idea behind the proof.
Can anybody explain the proof in simple layman terms?
The proof works by contradiction.
Assume there is a Halting Problem Machine M that solves the Halting problem, and yields 0 if some input program won't finish, and 1 if it will. M is guaranteed to finish.
Create a new machine H.
H runs M with H itself as input; if M answers 1, H goes into an endless loop, otherwise H stops.
Now, what will happen if you run M on input H?
If the answer is 1, it is wrong, since H will see the 1 and go into an infinite loop, so it never halts.
If the answer is 0, it is also wrong, since H will see the 0 and stop.
So, it is contradicting the assumption that M is correct - and thus there is no such M.
Intuitively, it is like saying there is no such thing as an oracle: if "you" tell me you're an oracle, I ask you which arm I am going to raise.
Then I wait for your answer and do the opposite, which contradicts the claim that the oracle is indeed an oracle.
Turing used proof by contradiction for this (aka reductio ad absurdum):
The idea is to assume there is actually such a machine H that, given any program p and an input i, will tell us whether p stops.
Given such H, we can modify it and create a new machine.
We'll add another part after H's output such that if H outputs yes, our machine will loop infinitely,
and if H outputs no, our new machine will halt.
We'll call our new machine H+.
Now, the paradox occurs when we feed H+ to itself (as both program p and input i).
That's because if H+ does halt, we get a yes answer (from the H part), but then it loops forever.
However, if H+ doesn't halt, we get a no answer (from the H part), but then it halts.
This is explained very nicely in this computerphile episode.
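A minimal sketch of the same construction in Python (H here is just a placeholder, since no real halting decider can exist; the names are illustrative):

# Illustrative only: pretend H(p) is the assumed halting decider that answers
# True ("p halts") or False ("p loops forever") for any program p.
def H(p):
    return True        # whatever answer it gives, H_plus below defeats it

def H_plus():
    """The modified machine H+: ask H about ourselves, then do the opposite."""
    if H(H_plus):
        while True:    # H said "halts", so we loop forever -> H was wrong
            pass
    else:
        return         # H said "loops forever", but we halt -> H was wrong

print(H(H_plus))       # whatever this prints, H_plus() does the opposite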
2^(n/2+10 log n)
or
2^n?
I was doing an exercise in MIT OCW 6.006. It has a problem which states that the latter grows faster than the former, but I can't agree with the proof. I say that the former grows faster than the latter. Could someone explain whether I am wrong, and why? Thanks!
You could frame that differently by pulling out the exponent, and just ask which is bigger: n/2 + 10 log n or n.
Here it's clear the second will be bigger whenever 10 log n is less than half of n.
That becomes true when n reaches about 30, so from then on the second is bigger (for log base 10).
Let's discuss log base 2 further: when might 10 log N be less than N/2?
Well, that's the same as asking when log N becomes less than N/20.
Loosely speaking, log_2 is the number of bits needed to describe a number in base 2. So:
log_2(32) gives us 5.
log_2(64) gives us 6.
log_2(128) gives us 7. <-- look here 128:7 is about 18:1
log_2(256) gives us 8.
log_2(512) gives us 9.
log_2(1024) gives us 10.
log_2(64000) gives us ~16.
Now we are looking for when the first value (32, 64, 128, etc.) is more than 20 times the second. As you can see, this happens just past the 128/7 pair (256/8 = 32, already well over 20), and after that they rapidly get much further apart.
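A quick numerical check (log base 2, matching the table above, and purely illustrative) compares the two exponents directly, since comparing 2^(n/2 + 10 log n) with 2^n is the same as comparing n/2 + 10 log n with n:

import math

# Compare the exponents n/2 + 10*log2(n) and n for a few values of n.
for n in [16, 64, 128, 144, 256, 1024]:
    lhs = n / 2 + 10 * math.log2(n)
    winner = "2^n" if n > lhs else "2^(n/2 + 10 log n)"
    print(f"n = {n:5d}: n/2 + 10*log2(n) = {lhs:7.1f}  ->  {winner} is larger")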