So, I was solving this problem of detecting a palindrome in a linked list. I came up with the following solution for it:
class Solution {
    public boolean isPalindrome(ListNode head) {
        ListNode temp = head;
        boolean[] arr = new boolean[10];
        int count = 0;
        if (head == null) return false;
        while (temp != null) {
            if (arr[temp.val] == false)
                arr[temp.val] = true;
            else
                arr[temp.val] = false;
            temp = temp.next;
        }
        for (int i = 0; i < 10; i++) {
            if (arr[i] == true)
                count++;
        }
        if (count < 2) return true;
        return false;
    }
}
Now, the logic behind this solution is correct as far as I can see, but it fails for cases like [1,0,0], [4,0,0,0,0], etc. How do I overcome this? (Please don't reply with a shorter method; I want to know the reason why this fails for certain cases.)
First of all, welcome to Stack Overflow!
Because of how simple this problem's solution can be, I feel obligated to tell you that a solution with an auxiliary stack is not only easy to implement but also easy to understand. But since you asked why your code fails for certain cases, I'll answer that question first. Your code, in particular, is counting the number of digits that occur an odd number of times.
Although this seems like what you are supposed to do to detect a palindrome, notice that a linked list that looks like 1 -> 1 -> 0 -> 0 is also considered a palindrome by your code, because every digit occurs an even number of times and the count therefore stays below 2.
Your solution answers a different question: whether it is possible to create a palindrome from a given set of digits. If the question were "given a linked list, tell me whether you can rearrange it to create a palindrome", it would work, but it does not work for "is this linked list a palindrome".
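For reference (this is not part of the original answer), here is a minimal sketch of the auxiliary-stack approach mentioned above, assuming the usual singly linked ListNode with an int val and a next pointer:

import java.util.ArrayDeque;
import java.util.Deque;

class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class PalindromeCheck {
    // Push every value onto a stack, then walk the list again:
    // the popped values replay the list in reverse order.
    static boolean isPalindrome(ListNode head) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (ListNode cur = head; cur != null; cur = cur.next) {
            stack.push(cur.val);
        }
        for (ListNode cur = head; cur != null; cur = cur.next) {
            if (cur.val != stack.pop()) {
                return false;   // forward and reverse orders disagree
            }
        }
        return true;            // every position matched its mirror
    }
}

Unlike the digit-counting version, this compares the actual order of the values against its reverse, which is what "palindrome" means here.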
I understand recursion and the advantages it brings to writing code efficiently. While I can code recursive functions, I cannot seem to wrap my head around how they work. I would like someone to explain recursion to me intuitively.
For example, this code:
int fact(int n)
{
    if (n < 0)
        return -1;
    else if (n == 0)
        return 1;
    else
        return n * fact(n - 1);
}
These are some of my questions:
Let's say n=5. On entering the function, the control goes to the last return statement, since none of the previous conditions are satisfied.
Now, roughly, the computer 'writes' something like this: 5*(fact(4))
Again, the fact() function is called and the same process gets repeated except now we have n=4.
So, how exactly does the compiler multiply 5*4 and so on down the chain, since it's not exactly 5*4 but 5*fact(4)? How does it 'remember' that it has to multiply two integers, and where does it store the temporary value, since we haven't provided any explicit data structure?
Again, let's say n=5. The same process goes on and eventually n gets decremented to 0. My question is: why/how doesn't the function simply return 1 as stated in the return statement? Similar to my previous question, how does the compiler 'remember' that it also has 120 stored for displaying?
I'd be really thankful if someone explains this to me completely, so that I can understand recursion better and more intuitively.
Yeah, for beginners recursion can be quite confusing. But you are already on the right track with the explanation in your first point.
The function will be called recursively until a break condition is satisfied. In this case, the break condition is satisfied when n equals 0. At that point, no more recursive calls are made. The result of each recursive call is returned to its caller. The callers always "wait" until they get a result; that is how the algorithm "knows" who receives each result. The flow of this procedure is handled by the so-called call stack.
Hence, in your informal notation (in this example n equals 3):
3*(fact(2)) = 3*(2*fact(1)) = 3*(2*(1*fact(0))).
Now, n equals 0. The inner fact(0) therefore returns 1:
3*(2*(1*(1))) = 3*(2*(1)) = 3*(2) = 6
You can see it a bit like this:
The function fact(int n) is like a class, and every time you call fact(int n) you create an instance of that class. By calling it from within itself, you create a chain of instances. Once you reach the break condition, those calls start returning one by one, and each caller uses the value it receives to compute a new value in its own return statement return n*fact(n-1), e.g. return 3*fact(2);
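If it helps to see those "waiting" callers explicitly, here is a small Java sketch (my own illustration, not from either answer) that prints every call and return for fact(3). The indentation mirrors how the call stack grows and shrinks, and the pending multiplication n * inner is exactly what each stack frame "remembers":

class FactTrace {
    // Each recursive call pauses at "fact(n - 1, ...)" until the inner
    // call returns; the pending multiplication lives in that call's
    // stack frame.
    static int fact(int n, String indent) {
        System.out.println(indent + "fact(" + n + ") called");
        if (n == 0) {
            System.out.println(indent + "fact(0) returns 1");
            return 1;
        }
        int inner = fact(n - 1, indent + "  ");   // caller waits here
        int result = n * inner;                   // pending multiplication
        System.out.println(indent + "fact(" + n + ") returns " + result);
        return result;
    }

    public static void main(String[] args) {
        fact(3, "");
    }
}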
I'm trying to write an algorithm to perform Huffman decoding. I am doing it in Scala - it's an assignment for a Coursera course and I don't want to violate the honor code, so the below is pseudocode rather than Scala.
The algorithm I have written takes in a tree tree and a list of bits bits, and is supposed to return the message. However, when I try it on the provided tree, I get a NoSuchElementException (head of empty list). I can't see why.
I know that my code could be tidied up a bit - I'm still very new to functional programming so I've written it in a way that makes sense to me, rather than, probably, in the most compact way.
def decode(tree, bits) [returns a list of chars]: {
def dc(aTree, someBits, charList) [returns a list of chars]: {
if aTree is a leaf:
if someBits is empty: return char(leaf) + charList
else: dc(aTree, someBits, char(leaf) + charList)
else aTree is a fork:
if someBits.head is 0: dc(leftFork, someBits.tail, charList)
else someBits.head is 1: dc(rightFork, someBits.tail, charList)
}
dc(tree, bits, [empty list])
}
Thanks in advance for your help. It's my first time on StackOverflow, so I probably have some learning to do as to how best to use the site.
If I understand it correctly, you want to go through forks (with directions from the bits) until you find a leaf. Then you add the leaf's value to your char list, and from that point you want to repeat the steps.
If I am right, then at that point you should pass the original tree to your helper method, not the current subtree (a leftFork or rightFork that is now a leaf).
So it would be something like:
if aTree is a leaf:
if someBits is empty: return char(leaf) + charList
else: dc(tree, someBits, char(leaf) + charList)
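To keep the honor code intact, here is a hedged sketch in plain Java rather than the course's Scala types (the Node class and method names are made up). It only illustrates the general idea of the fix discussed above, namely jumping back to the root every time a leaf is emitted:

import java.util.ArrayList;
import java.util.List;

class Huffman {
    // Hypothetical tree type: a node is either a leaf (symbol != null)
    // or a fork with left/right children.
    static class Node {
        Character symbol;   // non-null only for leaves
        Node left, right;
        Node(char symbol) { this.symbol = symbol; }
        Node(Node left, Node right) { this.left = left; this.right = right; }
        boolean isLeaf() { return symbol != null; }
    }

    // Walk down from the root following the bits; every time a leaf is
    // reached, emit its symbol and jump back to the root, i.e. continue
    // from the original tree rather than from the leaf.
    static List<Character> decode(Node root, int[] bits) {
        List<Character> out = new ArrayList<>();
        Node current = root;
        for (int bit : bits) {
            current = (bit == 0) ? current.left : current.right;
            if (current.isLeaf()) {
                out.add(current.symbol);
                current = root;   // restart at the root for the next symbol
            }
        }
        return out;
    }
}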
I have a homework assignment that asks this question:
"Write a function to check if an integer is divisable by 16 and returns to boolean (false or correct)"
I wrote this:
void setup()
{
{
int i=0
if(i%16==0)
Printls(i)
else
println(+i+" not divisable by 16"
}
}
But I need help because it's not complete. I just want Processing to ask me to enter a number with the keyboard to test. I thought it was keyPressed==true, but that doesn't work. Any solutions? Thanks.
Thanks very much for the explanation. I wrote new code since my last message, but I don't think it's correct either.
void setup()
{void keyReleased() {
int i=0
if key (i%16==0)
printls(i)
println(+i+" is not divisable by 16"
}
}
I would take the requirements:
Write a function to check if an integer is divisable by 16 and returns to boolean (false or correct)
and break it down into really easy to follow steps:
write a function (you can learn how to write a function following Daniel Shiffman's and Andrew Glassner's video tutorials)
write a function that returns a boolean
write a function that returns a boolean and takes an integer as an argument
write a function that returns a true if the integer argument is divisible by 16
You should learn a bit more about functions with each step, and by the end you should reach your goal. Your logic (using %) makes sense; it's just a matter of practicing more to get the hang of the syntax (which is faulty in your posted code).
Regarding checking a key, you can have a look at the keyPressed() function, but note that the exercise doesn't require this. It feels like keyPressed is a distraction at the moment, and you should focus on understanding the syntax for functions first.
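As a rough illustration of where those steps lead (written as plain Java rather than a Processing sketch, and the function name is my own choice, not part of the assignment):

class DivisibilityDemo {
    // Takes an integer argument and returns a boolean, as the assignment
    // asks: true when n is divisible by 16, false otherwise.
    static boolean isDivisibleBy16(int n) {
        return n % 16 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isDivisibleBy16(32));   // true
        System.out.println(isDivisibleBy16(10));   // false
    }
}

In a Processing sketch the same function body works unchanged; you would simply call it from setup() and print the result with println().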
The compilers I've been using in C or Java have dead code detection (a warning when a line will never be executed). My professor says that this problem can never be fully solved by compilers, though. I was wondering why that is. I am not too familiar with the actual coding of compilers, as this is a theory-based class. But I was wondering what they check (such as possible input strings vs. acceptable inputs, etc.), and why that is insufficient.
The dead code problem is related to the Halting problem.
Alan Turing proved that it is impossible to write a general algorithm that will be given a program and be able to decide whether that program halts for all inputs. You may be able to write such an algorithm for specific types of programs, but not for all programs.
How does this relate to dead code?
The Halting problem is reducible to the problem of finding dead code. That is, if you find an algorithm that can detect dead code in any program, then you can use that algorithm to test whether any program will halt. Since that has been proven to be impossible, it follows that writing an algorithm for dead code is impossible as well.
How do you transfer an algorithm for dead code into an algorithm for the Halting problem?
Simple: you add a line of code after the end of the program you want to check for halting. If your dead-code detector detects that this line is dead, then you know that the program does not halt. If it doesn't, then you know that the program halts (it reaches its last line, and then your added line of code).
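To make the reduction concrete, here is a hedged Java sketch. The DeadCodeDetector interface and the marker statement are hypothetical; they only exist to show how a perfect dead-code oracle would yield a halting decider:

class HaltingFromDeadCode {
    // Hypothetical oracle we are assuming exists: reports whether the
    // statement on the given (1-based) line of `source` can ever execute.
    interface DeadCodeDetector {
        boolean isLineDead(String source, int lineNumber);
    }

    // The construction described above: append a marker statement after
    // the last line of the program. The marker is reached exactly when
    // the program halts, so asking whether it is dead decides halting.
    static boolean halts(String program, DeadCodeDetector detector) {
        String instrumented = program + "\nmarker();";
        int markerLine = instrumented.split("\n", -1).length;  // line of the marker
        return !detector.isLineDead(instrumented, markerLine);
    }
}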
Compilers usually check for things that can be proven at compile time to be dead. For example, blocks that depend on conditions that can be determined to be false at compile time, or any statement after a return (within the same scope).
These are specific cases, and therefore it's possible to write an algorithm for them. It may be possible to write algorithms for more complicated cases (like an algorithm that checks whether a condition is syntactically a contradiction and therefore always evaluates to false), but still, that wouldn't cover all possible cases.
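A quick Java illustration of those compile-time-provable cases (my example, not from the answer):

class TriviallyDead {
    static final boolean FEATURE_ENABLED = false;   // compile-time constant

    static int example() {
        if (FEATURE_ENABLED) {
            // The condition is a constant false, so this block can never
            // run; the compiler can drop it and static analyzers flag it.
            System.out.println("never printed");
        }
        return 42;
        // Any statement placed here, after the return in the same scope,
        // is unreachable; in Java that is even a compile-time error
        // rather than just a warning.
    }
}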
Well, let's take the classical proof of the undecidability of the halting problem and change the halting-detector to a dead-code detector!
C# program
using System;
using YourVendor.Compiler;
class Program
{
static void Main(string[] args)
{
string quine_text = @"using System;
using YourVendor.Compiler;
class Program
{{
static void Main(string[] args)
{{
string quine_text = @{0}{1}{0};
quine_text = string.Format(quine_text, (char)34, quine_text);
if (YourVendor.Compiler.HasDeadCode(quine_text))
{{
System.Console.WriteLine({0}Dead code!{0});
}}
}}
}}";
quine_text = string.Format(quine_text, (char)34, quine_text);
if (YourVendor.Compiler.HasDeadCode(quine_text))
{
System.Console.WriteLine("Dead code!");
}
}
}
If YourVendor.Compiler.HasDeadCode(quine_text) returns false, then the line System.Console.WriteLine("Dead code!"); will never be executed, so this program actually does have dead code, and the detector was wrong.
But if it returns true, then the line System.Console.WriteLine("Dead code!"); will be executed, and since there is no more code in the program, there is no dead code at all, so again, the detector was wrong.
So there you have it, a dead-code detector that returns only "There is dead code" or "There is no dead code" must sometimes yield wrong answers.
If the halting problem is too obscure, think of it this way.
Take a mathematical statement that is believed to be true for all positive integers n, but hasn't been proven for every n. A good example is Goldbach's conjecture, that any even integer greater than two can be represented as the sum of two primes. Then (with an appropriate bigint library) run this program (pseudocode follows):
for (BigInt n = 4; ; n += 2) {
    if (!isGoldbachsConjectureTrueFor(n)) {
        print("Conjecture is false for at least one value of n\n");
        exit(0);
    }
}
Implementation of isGoldbachsConjectureTrueFor() is left as an exercise for the reader, but for this purpose it could be a simple iteration over all primes less than n.
Now, logically the above must either be the equivalent of:
for (; ;) {
}
(i.e. an infinite loop) or
print("Conjecture is false for at least one value of n\n");
as Goldbach's conjecture must either be true or not true. If a compiler could always eliminate dead code, there would definitely be dead code to eliminate here in either case. However, to do so your compiler would, at the very least, need to solve arbitrarily hard problems. We could provide problems that are provably hard for it to solve (e.g. NP-complete problems) to determine which bit of code to eliminate. For instance, if we take this program:
String target = "f3c5ac5a63d50099f3b5147cabbbd81e89211513a92e3dcd2565d8c7d302ba9c";
for (BigInt n = 0; n < 2**2048; n++) {
    String s = n.toString();
    if (sha256(s).equals(target)) {
        print("Found SHA value\n");
        exit(0);
    }
}
print("Not found SHA value\n");
we know that the program will either print "Found SHA value" or "Not found SHA value" (bonus points if you can tell me which one is true). However, for a compiler to be able to reasonably optimise that, it would take on the order of 2^2048 iterations. It would in fact be a great optimisation, as I predict the above program would (or might) run until the heat death of the universe rather than print anything without optimisation.
I don't know if C++ or Java have an Eval-type function, but many languages do allow you to call methods by name. Consider the following (contrived) VBA example:
Dim methodName As String
If foo Then
methodName = "Bar"
Else
methodName = "Qux"
End If
Application.Run(methodName)
The name of the method to be called is impossible to know until runtime. Therefore, by definition, the compiler cannot know with absolute certainty that a particular method is never called.
Actually, given the example of calling a method by name, the branching logic isn't even necessary. Simply saying
Application.Run("Bar")
is more than the compiler can determine. When the code is compiled, all the compiler knows is that a certain string value is being passed to that method. It doesn't check whether that method exists until runtime. If the method isn't called elsewhere through more conventional means, an attempt to find dead methods can return false positives. The same issue exists in any language that allows code to be called via reflection.
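Since the question mentions Java, here is roughly the same situation using reflection (the class and method names are made up for the illustration):

import java.lang.reflect.Method;

class ReflectiveDispatch {
    // Nothing in the source refers to this method through a normal call,
    // yet it can still run at runtime.
    public static void bar() {
        System.out.println("bar() was reached via reflection");
    }

    public static void main(String[] args) throws Exception {
        // The method name could just as well come from a config file or
        // user input, so the compiler cannot prove bar() is never called.
        String methodName = "bar";
        Method m = ReflectiveDispatch.class.getMethod(methodName);
        m.invoke(null);
    }
}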
Unconditional dead code can be detected and removed by advanced compilers.
But there is also conditionally dead code. That is code whose reachability cannot be known at compile time and can only be determined at runtime. For example, software may be configurable to include or exclude certain features depending on user preferences, making certain sections of code seemingly dead in particular scenarios. That is not real dead code.
There are specific tools that can do testing, resolve dependencies, remove conditional dead code and recombine the useful code at runtime for efficiency. This is called dynamic dead code elimination. But as you can see it is beyond the scope of compilers.
A simple example:
int readValueFromPort(const unsigned int portNum);
int x = readValueFromPort(0x100); // just an example, nothing meaningful
if (x < 2)
{
std::cout << "Hey! X < 2" << std::endl;
}
else
{
std::cout << "X is too big!" << std::endl;
}
Now assume that the port 0x100 is designed to return only 0 or 1. In that case the compiler cannot figure out that the else block will never be executed.
However in this basic example:
bool boolVal = /*anything boolean*/;
if (boolVal)
{
// Do A
}
else if (!boolVal)
{
// Do B
}
else
{
// Do C
}
Here the compiler can work out that the final else block is dead code.
So the compiler can warn about dead code only if it has enough data to prove that a block is dead, and it also has to know how to apply that data to the block in question.
EDIT
Sometimes the data is just not available at compile time:
// File a.cpp
bool boolMethod();
bool boolVal = boolMethod();
if (boolVal)
{
// Do A
}
else
{
// Do B
}
//............
// File b.cpp
bool boolMethod()
{
return true;
}
While compiling a.cpp the compiler cannot know that boolMethod always returns true.
The compiler will always lack some context information. E.g. you might know that a double value never exceeds 2, because that is a property of the mathematical function you use from a library. The compiler does not even see the code in the library, and it can never know all the properties of all mathematical functions or detect all the weird and complicated ways to implement them.
The compiler doesn't necessarily see the whole program. I could have a program that calls a shared library, which calls back into a function in my program which isn't called directly.
So a function which is dead with respect to the library it's compiled against could become live if that library is changed at runtime.
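A hedged Java sketch of that situation (the EventLibrary interface is just a stand-in for a separately compiled library): nothing in this compilation unit calls onEvent() directly, so whether it is dead depends entirely on which library implementation gets loaded at runtime.

// Stand-in for a separately compiled library that may or may not invoke
// the callbacks registered with it.
interface EventLibrary {
    void register(Runnable callback);
    void run();
}

class MyProgram {
    // Never called directly anywhere in this file; it only runs if the
    // library implementation chosen at runtime decides to fire callbacks.
    static void onEvent() {
        System.out.println("callback fired");
    }

    static void start(EventLibrary lib) {
        lib.register(MyProgram::onEvent);
        lib.run();
    }
}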
If a compiler could eliminate all dead code accurately, it would be called an interpreter.
Consider this simple scenario:
if (my_func()) {
am_i_dead();
}
my_func() can contain arbitrary code and in order for the compiler to determine whether it returns true or false, it will either have to run the code or do something that is functionally equivalent to running the code.
The idea of a compiler is that it only performs a partial analysis of the code, thus simplifying the job of a separate running environment. If you perform a full analysis, that isn't a compiler any more.
If you consider the compiler as a function c(), where c(source)=compiled code, and the running environment as r(), where r(compiled code)=program output, then to determine the output for any source code you have to compute the value of r(c(source code)). If calculating c() requires the knowledge of the value of r(c()) for any input, there is no need for a separate r() and c(): you can just derive a function i() from c() such that i(source)=program output.
Others have commented on the halting problem and so forth. These generally apply to portions of functions. However it can be hard/impossible to know whether even an entire type (class/etc) is used or not.
In .NET/Java/JavaScript and other runtime-driven environments there's nothing stopping types from being loaded via reflection. This is popular with dependency injection frameworks, and is even harder to reason about in the face of deserialisation or dynamic module loading.
The compiler cannot know whether such types would be loaded. Their names could come from external config files at runtime.
You might like to search around for tree shaking, which is a common term for tools that attempt to safely remove unused subgraphs of code.
Take a function
void DoSomeAction(int actnumber)
{
    switch (actnumber)
    {
        case 1: Action1(); break;
        case 2: Action2(); break;
        case 3: Action3(); break;
    }
}
Can you prove that actnumber will never be 2 so that Action2() is never called...?
I disagree about the halting problem. I wouldn't call such code dead even though in reality it will never be reached.
Instead, let's consider:
for (int N = 3; ; N++)
    for (int A = 2; A < int.MaxValue; A++)
        for (int B = 2; B < int.MaxValue; B++)
        {
            // Is A^N + B^N a perfect N-th power? That would contradict Fermat.
            double Sum = Math.Pow(A, N) + Math.Pow(B, N);
            double Root = Math.Pow(Sum, 1.0 / N);
            if (Root == Math.Truncate(Root))
                FermatWasWrong();
        }

private void FermatWasWrong()
{
    Press.Announce("Fermat was wrong!");
    Nobel.Claim();
}
(Ignore the precision and overflow issues.) Dead code?
Look at this example:
public boolean isEven(int i) {
    if (i % 2 == 0)
        return true;
    if (i % 2 == 1)
        return false;
    return false;
}
The compiler can't know that an int can only be even or odd. To flag the last return as dead, the compiler would have to understand the semantics of your code. How should that be implemented? The compiler can't prove that the lowest return will never be executed, so it can't detect the dead code.