Code error!? Why are there two different answers? - CodeBlocks

I have run a fairly simple program on my PC, but it gives the wrong answer. When I copy the code to an online IDE, the answer is correct. I am using Code::Blocks. Where is the problem?
Online IDE link: https://ideone.com/yKV5NV
This is the image of the result on my PC:
My Code:
#include <stdio.h>
#include <string.h>
#include <math.h>

int main() {
    int x = 5, k = 2, ans;
    ans = (pow(x, k + 1));
    printf("%d", ans);
    return 0;
}
PS: I think it may be because of a double rounding error. But why does it happen every time? If I am right, how do I fix it?

I think it is because of floating point representation: pow works on doubles, and a double can't always hold the "exact" value you expect, so the result can come back as something like 124.999... and get truncated when assigned to an int. It happens every time because the input data is the same every time, so why wouldn't it? Regarding how to fix it, there is nothing to fix in the floating point math itself; that is simply the way floating point numbers work.
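That said, if you do want an exact integer power, a common workaround is to round before converting to int, or to avoid pow entirely. A minimal sketch (written in C++ here; the same idea applies to the C code above, using lround from <math.h>):

#include <cmath>
#include <cstdio>

int main() {
    int x = 5, k = 2;

    // pow() works on doubles; some implementations return something like
    // 124.999999... for pow(5, 3), which truncates to 124 when stored in an int.
    // Rounding to the nearest integer first avoids the truncation:
    int ans = static_cast<int>(std::lround(std::pow(x, k + 1)));

    // Or avoid floating point altogether with an integer loop:
    int ans2 = 1;
    for (int i = 0; i < k + 1; ++i)
        ans2 *= x;

    std::printf("%d %d\n", ans, ans2);  // expected: 125 125
    return 0;
}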

Related

I'm doing a course but the things the lecturer says work don't

I have recently gotten into programming and decided to learn C++. I took advantage of the sale on Udemy and bought three courses there, one for beginners on C++, one for game-making and one for Blender.
I started doing the course for beginners. The lecturer said that he would use Code::Blocks but that any other IDE would be fine, so I downloaded Visual Studio 2017 because that's what the game-making course used. But when I do exactly as the lecturer says (and writes), the code won't compile correctly.
Here is an example:
What the lecturer wrote and got to work on his computer
#include <iostream>
using namespace std;
main()
{
cout << "Hello world! :-)";
}
What I figured out would work after some googling
#include <pch.h>
#include <iostream>
using std::cout;
int main()
{
cout << "Hello world! :-)";
}
And my question to those who are experienced is: What is the difference between Code::Blocks and Visual Studio 2017? What is different in that case? Will I even be able to use this course to learn?
Thanks in advance!
edit: edited in a missing # in the lecturer's code
#include <pch.h>:
See Gabriel's answer.
include <iostream> vs #include <iostream>:
The former is plain wrong. It has to be #include with the #.
using namespace std; vs using std::cout;:
While neither is particularly good practice, both should do the same thing here. If you write neither of them, you will have to write std::cout << ... instead of only cout << ... - that seems annoying but is something you should get used to if you want to eventually be a serious C++ programmer. See also Why is "using namespace std" considered bad practice?.
main() vs int main():
This is not something that Code::Blocks should allow because it is not correct C++. main should always return int.
Overall you seem to have hit an unfortunate number of differences between environments/compilers in this basic example already. However, neither your course nor VS2017 is wrong so far, so I recommend you keep using them. If something that your lecturer writes won't work in a different environment, it's probably a bad idea to write that kind of code in the first place. And they did make several mistakes in this simple example.
PS: I strongly recommend enabling warnings, because they may tell you when you do something wrong in a more subtle way. There are many mistakes (of the "shooting yourself in the foot" kind) that the compiler is not required to stop you from making, but if you ask to be stopped (by heeding warnings) it will help you.
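As an illustration (my own toy example, not from the course), here is the classic kind of slip that compiles cleanly but that -Wall (GCC/Clang) or /W4 (Visual Studio) flags immediately:

#include <iostream>

int main() {
    int score = 0;

    // Classic slip: '=' (assignment) instead of '==' (comparison).
    // This compiles, always takes the branch, and silently sets score to 100.
    // With warnings enabled the compiler points straight at it
    // (GCC/Clang via -Wparentheses in -Wall, MSVC via C4706 at /W4).
    if (score = 100) {
        std::cout << "Perfect score?\n";
    }
    return 0;
}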
Using Visual Studio should be okay as long as you disable precompiled headers and your tutorial uses standard-compliant code.
About precompiled headers:
Visual Studio enables precompiled headers by default in a C++ command-line project. This means that, by default, it forces you to include a precompiled header in the first line of your source code (pch.h here).
By disabling them, you can almost* make the first snippet work in VS. To do this, select your project, go to the "Project -> Properties" menu, then to the "Configuration Properties -> C/C++ -> Precompiled Headers" section, and change the "Precompiled Header" setting to the "Not Using Precompiled Headers" option (this applies to VS 2012; applying it to other versions of VS should be easy).
If you want to avoid this in the future, you can create an empty project when setting up your project in VS.
See also: http://msdn.microsoft.com/en-us/library/h9x39eaw%28v=vs.71%29.aspx and How to avoid precompiled headers
*: The first snippet still won't actually work, since that declaration of main is not correct C++, only old (pre-C99) C (see https://en.cppreference.com/w/cpp/language/main_function, What is the proper declaration of main?)
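For reference, here is a hedged sketch of what the default VS project template expects when precompiled headers are left on (pch.h is the VS 2017+ default name; older templates call it stdafx.h):

// --- pch.h ---------------------------------------------------------------
// The precompiled header: put large, rarely-changing includes here so they
// are compiled once and reused.
#pragma once
#include <iostream>
#include <string>

// --- main.cpp ------------------------------------------------------------
// With precompiled headers enabled, this include must be the very first
// non-comment line of every .cpp file, otherwise VS refuses to compile it.
#include "pch.h"

int main() {
    std::cout << "Hello world! :-)";
    return 0;
}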
To your actual question, VS will be fine for your course, although I'm still puzzled by the lecturer's original version of this code.
However, it's really useful to take the time to understand what all your changes did, and why they fixed your problem. Maybe you did this already - that's just not the impression I got from the phrase
What I figured out would work after some googling
When you get a compile error or warning, read it and try to understand it.
If you don't understand the error - and this is normal, certainly while you're learning - then hacking on the code until it works is perfectly fine. At least sometimes it's quicker, and the knowledge that you made progress is its own reward.
If hacking away at the code with the internet at your disposal doesn't get you a solution, you'll just have to study the error message more. Turning all compiler errors and warnings on, and trying multiple compilers, can both help - even if they all fail, the messages might be more useful. (I often find Clang has useful errors, and godbolt.org is super helpful.)
If hacking away at the code does get you a solution, you should still try to understand why. Now that you can see what you changed, look at the original error and try to understand why your changes fixed it. If you made multiple changes, were they all really necessary? Do you understand what they all did, and why?
If you do this, you can fix the next related problem faster, rather than going through the whole trial-and-error process again. You can even write better code that avoids the problem in the first place.
This is the part that actually constitutes learning, which is why I'm making a point of addressing it.
The important fix was changing the lines
include <iostream>
main()
to
#include <iostream>
int main()
because the former aren't legal C++. If your lecturer really wrote exactly that and you didn't somehow mis-copy, then I have no idea why their example worked.
The Visual Studio-specific stuff is the precompiled header, as described in Gabriel's answer.
But the remaining change is essentially cosmetic. Replacing:
using namespace std;
with
using std::cout;
doesn't affect anything in your code, and just using
std::cout << "Hello world! :-)";
(with no using at all) would work just as well.
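Putting the fixes together, a version that should build unchanged under Code::Blocks, g++, and a VS2017 project with precompiled headers disabled looks like this:

#include <iostream>

int main()
{
    // Fully qualified, so no 'using' directive or declaration is needed.
    std::cout << "Hello world! :-)";
    return 0;  // optional for main, but explicit
}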

Does C# 6 make it possible to use implicit type inference (without explicitly using the var keyword)?

This is an odd question, I know.
Earlier this week, I was browsing Hacker News and glossed over an article about the power of C# 6. I meant to 'pocket' the link at the time, but I didn't, and now I can't find the article again.
Long story short, there was an example of code which looked roughly like this...
using static Variables;

public class Blah
{
    public Blah()
    {
        s = "something";
        Console.WriteLine(s);
    }
}
... as if to say that, with some magical combination of C# 6 features, it was possible to remove the 'var' element of that equation and still have valid code.
I'm struggling to see how that's possible without some extra Roslyn magic and even then, I'd struggle.
If you know where the article is, please let me know. If not, is this even remotely possible? I'm fairly confident it would be an odd thing to do; I'm just intrigued.
Thanks in advance.

STL on custom OS - std::list works, but std::vector doesn't

I'm just playing around with a GRUB-bootable C++ kernel in Visual Studio 2010.
I've gotten to the point where I have new and delete written and things such as dynamically allocated arrays work. I can use STL lists, for example. I can even sort them, after I wrote a memcpy routine. The problem is when I use the std::vector type. Simply constructing the vector sends the kernel off into la la land.
Obviously I'm missing a function implementation of some kind, but I looked through STL searching for it and came up empty-handed. It fails at the push_back:
vector<int> v;
v.push_back(1);
and disappears into the ether.
Any guesses as to what I'm missing?
Edit: yes, it's a vector of int. Sorry for the confusion. Not only that, but it's not the constructor it fails on; it's the call to push_back.
Stab in the dark: do you have new[] and delete[] implemented? A list will create one item at a time with new while a vector will likely allocate larger blocks of memory with new[].
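For illustration, a minimal hedged sketch of the array forms, assuming the kernel already exposes some kmalloc/kfree-style allocator (those names are placeholders for whatever you actually wrote):

#include <cstddef>

// Placeholders for whatever low-level allocator the kernel already has.
extern "C" void* kmalloc(std::size_t size);
extern "C" void  kfree(void* ptr);

// Scalar forms (the ones you said already work).
void* operator new(std::size_t size) { return kmalloc(size); }
void  operator delete(void* ptr) noexcept { kfree(ptr); }

// Array forms: required by anything that uses new[]/delete[].
// The guess above is that vector's block allocations end up here,
// while list's one-node-at-a-time allocations only need the scalar forms.
void* operator new[](std::size_t size) { return kmalloc(size); }
void  operator delete[](void* ptr) noexcept { kfree(ptr); }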
As per our discussion above, creating a
std::vector<mySimpleStruct> v;
instead of a
std::vector<int> v;
appears to work correctly. This must mean the problem is with something being done in the specialization of some functions for std::vector in your standard template library. I'm assuming you're familiar with template specialization already, but in case you're not:
http://www.parashift.com/c++-faq-lite/templates.html#faq-35.7
Also, once you've figured out where the real problem is, could you come back and post the answer here? You have me curious about where the real problem is now, plus the answer may be helpful to others trying to build their own OS kernels.
Do you use a custom allocator or a default one?
You might try using a custom one just to see what allocations vector performs that might destroy your implementation of the memory manager (this is probably what actually fails).
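Something along these lines might do as a starting point (a hedged sketch against the C++11 minimal-allocator interface; an older library such as VS2010's may also want the pre-C++11 boilerplate like rebind, construct and destroy, and klog is a placeholder for whatever console/serial output the kernel has):

#include <cstddef>
#include <new>
#include <vector>

// Placeholder: wire this up to the kernel's own console/serial output.
static void klog(const char* what, std::size_t bytes) { (void)what; (void)bytes; }

template <typename T>
struct LoggingAllocator {
    using value_type = T;

    LoggingAllocator() = default;
    template <typename U>
    LoggingAllocator(const LoggingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        klog("vector allocate", n * sizeof(T));      // log requested block sizes
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n) {
        klog("vector deallocate", n * sizeof(T));
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return false; }

// Usage: the same failing test, but every allocation is reported first.
// std::vector<int, LoggingAllocator<int>> v;
// v.push_back(1);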
And yes, please post back once you solve it - it helps all other OSdevers out there.

VC++ compilation time and performance

I'm working on a multiplaform project (MacOS, Linux and Windows) and I've been having some performance issues when trying to compile a big source file in VS C++ 2010.
Here's a little background. There's one .cpp file inside the project that is 800KB big. The size of the file is caused by the fact that I'm compiling an array that contains image information. So, it's a huge unsigned char array that can't be split.
Now, I've been working on MacOS during the last couple of months, so I didn't notice this problem until some days ago. In both MacOS and Linux, gcc compiles the file in a second or so, but when I use VC++ it takes about an hour.
At first I thought it was caused by the computer itself, since it's not a fast one. But then I tried Cygwin and GCC 4 on the same machine and the compilation time was almost as fast as on MacOS. So I have to assume the problem is caused by something within VC++ 2010.
I haven't tweaked VC++ in any way. The project files are generated by CMake, so I believe there should be some room for optimization here. Any help will be appreciated.
Thanks.
Hernan
Any chance you can place that large array into a separate resource file and read it in that way? That's how I would go about fixing this problem if that array is indeed the problem. Failing that, I'd place the array in its own file so that it doesn't recompile often.
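A hedged sketch of the first option, loading the raw bytes from a plain binary file at startup instead of compiling them in (the file name and function here are made up for illustration):

#include <cstdio>
#include <vector>

// Load the image bytes at runtime instead of baking them into a .cpp file.
// "image.bin" would be produced once by dumping the existing array to disk.
std::vector<unsigned char> loadImageData(const char* path) {
    std::vector<unsigned char> data;
    if (std::FILE* f = std::fopen(path, "rb")) {
        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);
        std::fseek(f, 0, SEEK_SET);
        if (size > 0) {
            data.resize(static_cast<std::size_t>(size));
            std::size_t got = std::fread(data.data(), 1, data.size(), f);
            data.resize(got);  // in case the read came up short
        }
        std::fclose(f);
    }
    return data;
}

// Usage sketch:
// std::vector<unsigned char> image = loadImageData("image.bin");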
Looks like there is some O(n^k) part of VC++ with k>1 when parsing array initializers...
That would qualify as a logical bug you cannot do much about, but something that may work is
unsigned char bdata[][100] = {
{ 0x01, 0x02, ... , 0x63} ,
{ 0x64, 0x65, ... , 0xC7} ,
{ 0xC8, 0xC9, ... , 0x2B} ,
...
};
unsigned char *data = &(bdata[0][0]);
that is, breaking the data into 100-byte rows... MAYBE this will be parsed/compiled a lot faster by VC (just a suspicion I have, given the symptoms), and it shouldn't change your build process by much.
I don't use VC++2010 so I cannot check.
Just be aware that sizeof(data) in this case will be just the size of a pointer, while sizeof(bdata) will instead be the size of the image, rounded up to a multiple of the row size.
If this version compiles at the same speed, then unfortunately the compiler really is O(n^k) in the number of bytes and you're basically doomed if you want that data compiled as an array.
Another option could be using a huge string literal... the compiler may work better on that (maybe they coded a special code path for string literals, because "big" literals are not so uncommon), but your code generator will have to handle escaping of special chars.
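A hedged sketch of that string-literal idea (the byte values below are placeholders; a generator script would emit the escaped data, and note that MSVC historically caps the total length of a concatenated string literal at around 64 KB, so the data may still need to be split across several literals):

#include <cstddef>

// The image bytes emitted as one escaped string literal instead of a
// brace-enclosed initializer list. Values below are placeholders.
static const char image_bytes[] =
    "\x01\x02\x03\x04"
    "\x64\x65\x66\x67"
    "\xC8\xC9\xCA\xCB";

// sizeof(image_bytes) counts the implicit trailing '\0', hence the -1.
static const unsigned char* data =
    reinterpret_cast<const unsigned char*>(image_bytes);
static const std::size_t data_size = sizeof(image_bytes) - 1;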

Strange -I effect with make and bad -L

I am trying to build an automated build system. It took me a little while to change a working Wii generic makefile into a working Win32 (using MinGW32) makefile.
My makefile is here: http://pastie.org/319482
The weird effect is, if I remove the "a" preceding the paths in ABS_INCL (line 31), the build doesn't work and complains about a missing header, which is specified by the first path. Why is it doing this? I can't track the problem down.
The next issue is that when I dropped in code that requires libcurl, I can still compile but no longer link as expected. I added curl to my libs (line 47) and the path (line 53), and it looks like I am including it right and the lib is in the right order (I tried to touch as little as possible while converting Wii to Win32), but I can't see the problem. Does anyone know why this is happening?
Here is a simple source file to test with:
#include <stdio.h>

void main2();

int main(int argc, const char* argv[])
{
    int a = 0;
    printf("hey");
    main2();
    return 0;
}

#include <curl/curl.h>

void main2()
{
    CURL *curl = curl_easy_init();
    curl_easy_cleanup(curl);
}
You're not getting a lot of answers here - I'm going to go out on a limb and tell you it's because of your really badly written title. I've read it maybe 20 times as it scrolled down the home page and I still don't really get it. There is the obvious spelling mistake, and I want to go in and fix that but then there is the whole weirdness with the "-l" and "-L" and I can't tell where you are going with that.
So, most people will look at that and just blank out and move on. Assuming they get past that, you failed to add the useful information contained within your makefile to the question, so anyone who wants to help has to go off and read it on another site.
Finally, as one more hurdle, your makefile is too long to easily read and absorb. So assuming someone like me who is really kind of determined goes and reads it, it's too difficult to tell where the problem could lie within that. You need to edit it down to probably ten lines or less, and then assuming you haven't been able to figure out the problem, you could then post just those few lines that showed the problem in your question and then with a decent title and some good descriptive text, you'll probably get your answer.
I'm guessing the answer to your question isn't even that hard, but you've successfully managed to obfuscate it to the point that most people won't even bother.
