We are given a variable with some constraints on its range of values, and we have to find a set that denotes its overall range.
For example, the conditions are as follows:
x < 10
x > -6
x >= 0
I can do it on the real number line and mark the intersection, but how do I do it logically in a program?
Note: only >, >=, < and <= are allowed.
ANSWER=[0, 10)
You have to figure out the logic of your solution, then implement that logic in C++. You say you "can do it", which I assume means you find it "easy" to solve as a human being. What makes it so easy? Identify the method you're using, then write that method in C++.
There are two types of inequalities: > and <. Well, there are also <= and >=, but I suggest leaving those aside until you've written a program that handles < and > correctly.
Imagine you have:
x > 5
x > 7
x > 6
x < 11
x < 10
x < 12.
What is the solution in this case? Try to find the solution without drawing the number line. Then try to describe in words how you arrived at this solution.
Then try to write pseudo-code that describes the algorithm more formally.
Finally, you're ready to write C++ code that performs the same steps. I suggest not trying to write C++ until you have written pseudo-code. When writing C++ you'll encounter a few cumbersome details; for instance, how to parse each expression, such as x < 5, to find out what inequality it is and which number it's comparing x to. These "details" are not uninteresting but they will get in the way of your logic so it's best to keep them for last.
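For reference, here is a minimal sketch of what that final step might look like, assuming the expressions have already been parsed into operator/value pairs; the names and the hard-coded constraint list below are my own, not part of the question:

#include <algorithm>
#include <iostream>
#include <limits>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Each constraint is an operator plus a bound, e.g. "x > 5" becomes {">", 5}.
    // Parsing the text form is left out so the core logic stays visible.
    std::vector<std::pair<std::string, double>> constraints = {
        {">", 5}, {">", 7}, {">", 6}, {"<", 11}, {"<", 10}, {"<", 12}};

    // The intersection of all constraints is
    // (largest lower bound, smallest upper bound).
    double lower = -std::numeric_limits<double>::infinity();
    double upper = std::numeric_limits<double>::infinity();

    for (const auto& c : constraints) {
        if (c.first == ">")
            lower = std::max(lower, c.second);
        else if (c.first == "<")
            upper = std::min(upper, c.second);
    }

    if (lower < upper)
        std::cout << "(" << lower << ", " << upper << ")\n";  // prints (7, 10)
    else
        std::cout << "empty range\n";
}

Handling >= and <= then only requires remembering, for each bound, whether it is open or closed.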
In my software engineering course, I encountered the following characteristic of a stack, condensed by me: what you push is what you pop. The fully axiomatic version I found here.
Being a natural-born troll, I immediately invented the Troll Stack: if it has more than one element already on it, pushing results in a random permutation of those elements. Promptly I got into an argument with the lecturers about whether this nonsense implementation actually violates the axioms. I said no, the top element stays where it is. They said yes, somehow you can recursively apply the push-pop axiom to get "deeper", which I don't see. Who is right?
The violated axiom is pop(push(s,x)) = s. Take a stack s with n > 1 distinct entries. If you implement push such that push(s,x) is s'x with s' being a random permutation of s, then since pop is a function, you have a problem: how do you reverse random_permutation() such that pop(push(s,x)) = s? The preimage of s' might have been any of the n! > 1 permutations of s, and no matter which one you map to, there are n! - 1 > 0 other original permutations s'' for which pop(push(s'',x)) != s''.
In cases like this, which might be very easy for everybody else to see but not for you (hence your use of the word "troll"), it always helps to simply run the "program" on a piece of paper.
Write down what happens when you push and pop a few times, and you will see.
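Here is that paper exercise as a minimal C++ sketch; the vector-backed stack, the fixed seed and the element values are my own choices for illustration:

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// A "troll stack": push shuffles the existing elements before placing x on top.
struct TrollStack {
    std::vector<int> data;
    std::mt19937 rng{42};

    void push(int x) {
        if (data.size() > 1)
            std::shuffle(data.begin(), data.end(), rng);
        data.push_back(x);
    }
    void pop() { data.pop_back(); }  // pop can only remove the top element
};

int main() {
    TrollStack s;
    s.push(4);
    s.push(8);
    s.push(15);
    std::vector<int> before = s.data;   // the state the axiom must restore

    s.push(99);                         // the axiom demands pop(push(s, x)) == s ...
    s.pop();

    // ... but the shuffle may have reordered the elements below the top.
    std::cout << (s.data == before ? "axiom held (by luck)" : "axiom violated") << "\n";
}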
You should also be able to see how those axioms correspond very closely to the actual behaviour of your stack; they are not just there for fun, but they deeply (in multiple meanings of the word) specify the data structure with its methods. You could even view them as a "formal system" describing the ins and outs of stacks.
Note that it is still good for you to be sceptical; this leads to a) better insight and b) detection of errors your superiors make. In this case they are right, but there are cases where it can save you a lot of time (e.g. while searching for the solution to the "MU" riddle in "Gödel, Escher, Bach", which would be an excellent read for you, I think).
As discussed here, we now know what the traits are for telling whether a language is functional or not. But where does immutability fit into this picture?
Here's a quick mental exercise, in pseudo-code:
1) x = 5;
2) x = x + 1;
3) print x; // prints "6"
4) x = x
// THEREFORE 5 = 6
Right? We know x is 5 from line 1, and we know x is 6 from line 3, so if x = x, then 5 must equal 6.
The joke here is that we're mixing imperative, command-oriented thinking with mathematical, functional thinking. In imperative style, x is a variable, which means we assume its value potentially changes over time. But when we do math, we make a different assumption: we assume "x" is a specific value, meaning that once we know the value of "x", we can substitute that value anywhere "x" appears. That assumption is the whole basis for being able to solve equations. Obviously, if the value of "x" were to change out from under us, like it does in line 2 of the mental exercise above, then all bets are off. Line 2 is not valid math, because there is no value for which the statement x = x + 1 is mathematically true. (At least as far as I ever learned in high school math!)
Another way of looking at it is to say that imperative programming mixes values, functions, and state, which makes it hard to reason about. Because the value of "x" can be different depending on when you look at it, you can't easily know what effect it's going to have on how your code runs, just by looking at your code. You have to "play compiler" and mentally keep track of all the variables and how they're changing over time, and that quickly gets unmanageable. State is the number one source of incidental complexity in computer programming.
Functional programming simplifies things by separating state from function. In a mathematical function like f(x) = (x * x), the value of "x" does not change over time. It's a pure description of the relationship between "x" and "f(x)", and that relationship is always true whether you look at "x" first or "f(x)" first. There's no state involved. You're describing the relationship between values, irrespective of time, and without any state. And because you don't have to worry about state changing out from under you, you can more easily and safely reason about the relationship between your inputs and outputs.
Immutable variables simulate this kind of stateless, mathematical reasoning by removing the element of time and variable-ness from your code. You still need to mutate your state at some point, but you can put that off until later, and only update state after your pure functions have worked out the correct values to store. By separating state management from your pure functions, you make coding simpler, easier to reason about, and usually more reliable. Plus it's a lot easier to test pure functions because there's no need for mocks or extra setup or other state-simulation prerequisites.
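As a small illustration of that separation, here is a sketch in C++; the function names and values are mine, purely for the example:

#include <iostream>
#include <vector>

// A pure function: the result depends only on the argument,
// and calling it changes nothing outside of it.
int square(int x) { return x * x; }

// A pure transformation of a whole collection: the input is read-only
// and a new vector is returned instead of mutating anything in place.
std::vector<int> squares(const std::vector<int>& xs) {
    std::vector<int> out;
    out.reserve(xs.size());
    for (int x : xs) out.push_back(square(x));
    return out;
}

int main() {
    const std::vector<int> xs = {1, 2, 3};    // immutable input
    const std::vector<int> ys = squares(xs);  // the only "state change" is this one binding

    for (int y : ys) std::cout << y << " ";   // prints: 1 4 9
}

Testing squares() needs nothing but an input and an expected output, which is exactly the point about not needing mocks or state-simulation setup.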
What's really cool in all this is that the same holds true even when "x" is something more complex than just a simple number. You can write pure functions whose arguments are arrays, records, Customer objects, etc., and the same principles still apply. By keeping your functions pure and your values immutable, you're writing code that describes the relationships between the function argument(s) and the function output, without the incidental complexity of time and state. And that's huge.
I am trying to understand why it is impossible to write a program H that can check whether another program P will halt or not on a specific input I (the halting problem), but I am unable to get a clear idea of it.
My intuition says that if this program H tries to run a program P which is not going to halt, then H itself will go into a loop. But I don't think this is the idea behind the proof.
Can anybody explain the proof in simple layman's terms?
The proof works by contradiction.
Assume there is a Halting Problem Machine M that solves the Halting problem, and yields 0 if some input program won't finish, and 1 if it will. M is guaranteed to finish.
Create a new machine H.
H runs M with H itself as input, and if M answers 1, H goes into an endless loop; otherwise, H stops.
Now, what will happen if you run M on input H?
If the answer is 1, it is wrong, since H runs M and will get into an infinite loop.
If the answer is 0, it is also wrong, since H will stop.
So it contradicts the assumption that M is correct, and thus there is no such M.
Intuitively, it is like saying there is no such thing as an oracle: if you tell me you're an oracle, I ask you which arm I am going to raise.
Then I wait for your answer and do the opposite, which contradicts the claim that the oracle is indeed an oracle.
Turing used proof by contradiction for this (also known as reductio ad absurdum):
The idea is to assume there is actually such a machine H that, given any program p and an input i, will tell us whether p stops.
Given such H, we can modify it and create a new machine.
We'll add another part after H's output such that if H outputs yes, our machine will loop infinitely,
and if H outputs no, our new machine will halt.
We'll call our new machine H+.
Now, the paradox occurs when we feed H+ to itself (as both program p and input i).
That's because if H+ does halt, we get a yes answer (from the H part), but then it loops forever.
However, if H+ doesn't halt, we get a no answer (from the H part), but then it halts.
This is explained very nicely in this Computerphile episode.
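The construction can also be written down as code. The sketch below compiles and runs, but of course the decider halts() is only a hypothetical stub standing in for H; its name and signature are my own, and the whole point of the proof is that no real implementation of it can exist:

#include <iostream>
#include <string>

// Hypothetical decider H: returns true iff running `program` on `input` halts.
// No such function can exist; this stub is here only so the sketch compiles.
bool halts(const std::string& program, const std::string& input) {
    (void)program;
    (void)input;
    return true;  // placeholder answer
}

// The machine H+ built on top of H: loop forever if H says "halts", stop otherwise.
void h_plus(const std::string& program) {
    if (halts(program, program)) {
        while (true) { }  // H said "halts", so do the opposite and never halt
    }
    // H said "doesn't halt", so halt immediately
}

int main() {
    // Feeding H+ to itself is the paradoxical step:
    // if halts(H+, H+) returns true, h_plus loops forever, so H was wrong;
    // if it returns false, h_plus halts, so H was wrong again.
    std::cout << "No answer H could give about H+ applied to itself is consistent.\n";
}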
Make a RECURSIVE Ruby function "double_fact(n)" defined as follows:
n!! = 1 if n = −1 or n = 0 or n = 1;
n!! = n · (n − 2)!! otherwise.
Output the result of double_fact() for a value specified on the command line.
//Hint: Ruby has the usual "and", "or" and "not" operators. You may
need "or" to test multiple conditions here. Also, doublefact(8) = 384.
The problem statement is very misleading. You don't need any boolean operators at all, you can just translate the mathematical definition 1:1 into Ruby:
def doublefact(n)
  # Base case: -1, 0 and 1 all yield 1
  return 1 if (-1..1).include?(n)
  # Recursive case: n * (n - 2)!!
  n * doublefact(n - 2)
end
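As a quick check against the hint's value, expanding the definition by hand (my own arithmetic):

8!! = 8 · 6!! = 8 · 6 · 4!! = 8 · 6 · 4 · 2!! = 8 · 6 · 4 · 2 · 0!! = 8 · 6 · 4 · 2 · 1 = 384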
This is the example:
for (int i = 0; i < 10; i++)
{
    for (int ? = 0; ? < 10; ?++)
    {
    }
}
I usually use an "o" for the second loop, but is there any standard out there? Ideas?
Logically, you would use 'j', but the best is to use something meaningful, like 'row' and 'column' if you can. If you feel like joking, use 'c' or 'notepad'.
When I was in school we always used j, but I don't believe there is a standard. If this is your own project use whatever you want and be consistent. If this is a company project follow the standard set by your Development Standards Document (You do have one?)
Standard? No. i,j,k,l,m,n are popular (probably a throwback to the old FORTRAN rules).
However, a word of advice: don't use single-character variables, even for tiny iteration loops. The reason? When that little loop grows up and encompasses 20 or 30 lines of code and someone decides to refactor, finding all these one-character variable names is going to suck. Someday, later in life, you'll thank me :)
If I need to nest loops I try to find meaningful names for them, otherwise it gets very confusing very quickly. If there really is no meaningful name I'd often name them Index and InnerIndex so that it's immediately obvious which is which.
I'm using 'k'.
My teachers used 'j', because it is the first letter after 'i', but I found it unnecessarily harder to read. The next letter, 'k', is much easier to see. Use 'k' and speed up your reading.
for (int i = 0; i < 10; i++)
{
    for (int k = 0; k < 10; k++)
    {
    }
}
I use a Hungarian-style "k" prefix plus whatever physical meaning the counter is supposed to have.
for (int kRow = ...)
{
    for (int kCol = ...)
    {
    }
}
Sometimes I will use k, m, n, p, q, r, s, t, w, x, y, z if the index means nothing more than permutations (i.e. they all have exactly the same physical meaning).
I usually don't use i and j, for the same reason electrical engineers don't use i to denote the imaginary unit.