Many C++ books contain example code like this...
std::cout << "Test line" << std::endl;
...so I've always done that too. But I've seen a lot of code from working developers like this instead:
std::cout << "Test line\n";
Is there a technical reason to prefer one over the other, or is it just a matter of coding style?
The varying line-ending characters don't matter, assuming the file is open in text mode, which is what you get unless you ask for binary. The compiled program will write out the correct thing for the system it is compiled for.
The only difference is that std::endl flushes the output buffer and '\n' doesn't. If you don't want the buffer flushed frequently, use '\n'. If you do (for example, because the program is unstable and you want to be sure all the output it produced actually gets written), use std::endl.
The difference can be illustrated by the following:
std::cout << std::endl;
is equivalent to
std::cout << '\n' << std::flush;
So:
Use std::endl if you want to force an immediate flush of the output.
Use \n if you are worried about performance (which is probably not the case if you are using the << operator).
I use \n on most lines.
Then I use std::endl at the end of a paragraph (but that is just a habit and not usually necessary).
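For instance, a minimal sketch of that habit: '\n' for the ordinary lines, and an explicit flush only at the point where the output actually has to be visible.
#include <iostream>

int main() {
    // Ordinary lines: '\n' only; the characters sit in cout's buffer until
    // the stream decides to flush (buffer full, program exit, ...).
    for (int i = 0; i < 5; ++i)
        std::cout << "line " << i << '\n';

    // End of the "paragraph": force the buffered output out now.
    std::cout << "done" << std::endl;            // '\n' plus a flush
    // equivalently: std::cout << "done\n" << std::flush;
}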
Contrary to other claims, the \n character is mapped to the correct platform end-of-line sequence only when the stream is going to a file in text mode (std::cin and std::cout are special, but they are still files, or at least file-like).
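A small sketch of that distinction (the file names are just examples): the translation happens only for streams opened in text mode; a stream opened in binary mode writes a single LF byte as-is.
#include <fstream>

int main() {
    // Text mode (the default): '\n' becomes the platform's end-of-line
    // sequence on the way out (CR LF on Windows, LF on Linux).
    std::ofstream text_file("text.txt");
    text_file << "a line\n";

    // Binary mode: no translation, a single LF byte is written.
    std::ofstream binary_file("binary.txt", std::ios::binary);
    binary_file << "a line\n";
}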
There might be performance issues: std::endl forces a flush of the output stream.
There is also an extra function call implied if you use std::endl:
a) std::cout << "Hello\n";
b) std::cout << "Hello" << std::endl;
a) calls operator << once.
b) calls operator << twice.
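Spelled out as the calls the compiler actually makes (a sketch, just to illustrate the count: the string goes through the free operator<< for const char*, the manipulator through the member operator<< that takes a function pointer):
#include <iostream>

int main() {
    // b) two calls: one for the string, one for the std::endl manipulator.
    operator<<(std::cout, "Hello").operator<<(std::endl);

    // a) one call: the newline is part of the string literal.
    operator<<(std::cout, "Hello\n");
}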
I recalled reading about this in the standard, so here goes:
See the C11 standard, which defines how the standard streams behave; since C++ programs interface with the CRT, the C11 standard should govern the flushing policy here.
ISO/IEC 9899:201x
7.21.3 §7
At program startup, three text streams are predefined and need not be opened explicitly
— standard input (for reading conventional input), standard output (for writing
conventional output), and standard error (for writing diagnostic output). As initially
opened, the standard error stream is not fully buffered; the standard input and standard
output streams are fully buffered if and only if the stream can be determined not to refer
to an interactive device.
7.21.3 §3
When a stream is unbuffered, characters are intended to appear from the source or at the
destination as soon as possible. Otherwise characters may be accumulated and
transmitted to or from the host environment as a block. When a stream is fully buffered,
characters are intended to be transmitted to or from the host environment as a block when
a buffer is filled. When a stream is line buffered, characters are intended to be
transmitted to or from the host environment as a block when a new-line character is
encountered. Furthermore, characters are intended to be transmitted as a block to the host
environment when a buffer is filled, when input is requested on an unbuffered stream, or
when input is requested on a line buffered stream that requires the transmission of
characters from the host environment. Support for these characteristics is
implementation-defined, and may be affected via the setbuf and setvbuf functions.
This means that std::cout and std::cin are fully buffered if and only if they are referring to a non-interactive device. In other words, if stdout is attached to a terminal then there is no difference in behavior.
However, if std::cout.sync_with_stdio(false) is called, then '\n' will not cause a flush even on interactive devices. Otherwise '\n' is equivalent to std::endl unless the output is piped to a file: see the cppreference page on std::endl.
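A minimal sketch of that knob (what you actually see on a given terminal is up to the implementation, per the quoted wording):
#include <iostream>

int main() {
    // Decouple the C++ streams from C stdio. After this call cout does its
    // own buffering, and a bare '\n' no longer implies a flush, even when
    // stdout is an interactive terminal.
    std::ios_base::sync_with_stdio(false);

    std::cout << "may stay in the buffer for a while\n";
    std::cout << "forced out right now" << std::endl;
}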
They will both write the appropriate end-of-line character(s). In addition, endl will flush (commit) the buffer. You usually don't want to use endl when doing file I/O because the unnecessary flushes can hurt performance.
Not a big deal, but endl won't work in boost::lambda.
(cout<<_1<<endl)(3); //error
(cout<<_1<<"\n")(3); //OK , prints 3
If you use Qt and endl, you could accidentally end up using an incorrect endl which gives you very surprising results. See the following code snippet:
#include <iostream>
#include <QtCore/QtCore>
#include <QtGui/QtGui>

// notice that there is no "using namespace std;"
int main(int argc, char** argv)
{
    QApplication qapp(argc, argv);
    QMainWindow mw;
    mw.show();
    std::cout << "Finished Execution!" << endl;
    // This prints something similar to: "Finished Execution!67006AB4"
    return qapp.exec();
}
Note that I wrote endl instead of std::endl (which would have been correct), and apparently there is an endl function defined in qtextstream.h (which is part of QtCore).
Using "\n" instead of endl completely sidesteps any potential namespace issues.
This is also a good example why putting symbols into the global namespace (like Qt does by default) is a bad idea.
Something that I've never seen anyone say is that '\n' is affected by cout formatting:
#include <iostream>
#include <iomanip>

int main() {
    std::cout << "\\n:\n" << std::setw(2) << std::setfill('0') << '\n';
    std::cout << "std::endl:\n" << std::setw(2) << std::setfill('0') << std::endl;
}
Output:
\n:
0
std::endl:
Notice how, since '\n' is a single character and the fill width is set to 2, only one zero gets printed before the '\n'.
I can't find anything about it anywhere, but it reproduces with clang, gcc and msvc.
I was super confused when I first saw it.
From the reference: this is an output-only I/O manipulator.
std::endl Inserts a newline character into the output sequence os and flushes it as if by calling os.put(os.widen('\n')) followed by os.flush().
When to use: this manipulator may be used to produce a line of output immediately, e.g. when displaying output from a long-running process, logging activity of multiple threads or logging activity of a program that may crash unexpectedly.
Also
An explicit flush of std::cout is also necessary before a call to std::system, if the spawned process performs any screen I/O. In most other usual interactive I/O scenarios, std::endl is redundant when used with std::cout because any input from std::cin, output to std::cerr, or program termination forces a call to std::cout.flush(). Use of std::endl in place of '\n', encouraged by some sources, may significantly degrade output performance.
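To make the std::system point concrete, a small sketch (the command is only an example):
#include <cstdlib>
#include <iostream>

int main() {
    std::cout << "Starting the child process...\n";
    std::cout.flush();   // ensure this line is on screen before the child writes anything
    std::system("echo hello from the child");   // example command, platform dependent
    std::cout << "Child finished\n";             // flushed at program exit anyway
}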
Related
I have a piece of code in C++ that lists files in a folder and then gets attributes for each of them through the Windows API. I am puzzled by the performance of this code when the folder is an SMB mount on a remote server (mounted as a disk).
#include <string>
#include <iostream>
#include <windows.h>
#include <vector>
//#include <chrono>
//#include <thread>

int main(int argc, char *argv[]) {
    WIN32_FIND_DATA ffd;
    HANDLE hFile;
    std::string pathStr = argv[1];
    std::vector<std::string> paths;

    // Collect the full paths of everything in the folder.
    hFile = FindFirstFile((pathStr + "\\*").c_str(), &ffd);
    if (hFile == INVALID_HANDLE_VALUE) {
        std::cout << "FindFirstFile failed: " << GetLastError();
        return 1;
    } else {
        do {
            paths.push_back(pathStr + "\\" + ffd.cFileName);
        } while (FindNextFile(hFile, &ffd) != 0);
        DWORD error = GetLastError();
        if (error != ERROR_NO_MORE_FILES) {
            std::cout << "FindNextFile failed: " << error;
            FindClose(hFile);
            return error;
        }
        FindClose(hFile);
    }
    std::cout << paths.size() << " files listed" << std::endl;
    // std::this_thread::sleep_for(std::chrono::milliseconds(30000));

    // Query the attributes of each path individually.
    for (const std::string &p : paths) {
        DWORD a = GetFileAttributes(p.c_str());
        bool isDir = (a & FILE_ATTRIBUTE_DIRECTORY);
        bool isHidden = (a & FILE_ATTRIBUTE_HIDDEN);
        std::cout << p << ": " << (isDir ? "D" : "f") << (isHidden ? "H" : "_") << std::endl;
    }
}
Namely, if I have a folder with 250 files, the program finishes in about 1 second. When there are 500 files, it takes about a minute, and even the first files take hundreds of milliseconds each (so 1 second is enough for only ~10 files).
Experimenting with it, I found that there is some limit below which the processing speed is in the hundreds of files per second and above which it is ~10 files per second. I also noticed that this number varies with file name length. With names like file-001: between 510 and 520. With names like file-file-file-file-file-001: between 370 and 380.
I am interested in why this happens, in particular why the speed degrades from the very beginning when there are "too many" files/folders in the folder. Is there a way to investigate that? Optional: is there a way to overcome it while still using GetFileAttributes?
(The code is probably ugly as hell; I just stuck it together from samples found online. I compile it with MinGW, g++ -static files.cpp -o files.exe, and run it as files.exe "Z:\test_folder".
My original code is in Java, and I learned from reading the source of the Hotspot JVM that it uses the GetFileAttributes WinAPI method, so I created this snippet to see if it would behave the same as the Java code, and it does. I am also limited in the ways I can solve this performance problem: I noticed that the FindFirstFile/FindNextFile WinAPI calls perform consistently fast, but I did not find a way to use them from Java without JNI/JNA, which would be too much fuss for me.)
Update: if I put a 30-second sleep between the listing (files collected into a vector) and getting their attributes in the loop, the behavior becomes consistent for any number of files in the folder: "slow" in every case. I also read some scattered information here and there saying that the Windows SMB client applies caching, limited by time etc. I guess this is what I am seeing here: listing the folder fills this cache with file attributes, and a subsequent GetFileAttributes does not hit the remote system if run immediately after the listing. I guess the other behavior is also cache related: when listing "too many" files, only the tail of the list remains in the cache. Then we start GetFileAttributes from the first file again, and every request hits the server. It is still a mystery to me why listing is so fast while GetFileAttributes is slow...
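One way to check that hypothesis is to time each GetFileAttributes call individually instead of looking at the total; a small helper sketch (the function name and the 50 ms threshold are mine, not part of the original code):
#include <windows.h>
#include <chrono>
#include <iostream>
#include <string>

// Returns the attributes and reports how long the single call took.
DWORD timedGetAttributes(const std::string &path) {
    auto t0 = std::chrono::steady_clock::now();
    DWORD attrs = GetFileAttributes(path.c_str());
    auto t1 = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    if (ms > 50)   // arbitrary threshold: calls this slow almost certainly went to the server
        std::cout << path << " took " << ms << " ms\n";
    return attrs;
}
Calling this in the loop instead of GetFileAttributes directly should show whether the slowness is uniform or only hits the paths that fell out of the cache.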
Update 2: I tried to confirm that it has something to do with the cache, but no luck so far. If it were a matter of the "first" file attributes being evicted, then getting the attributes in reverse order should hit the cache for many files. That is not the case: it is either all fast or all slow.
I tried fiddling with the SMB client parameters according to this MS article, hoping that if I set the sizes really high I would not see the slow behavior any more. That was not the case either; the behavior seems to be completely independent of these parameters. What I set was:
// HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
DirectoryCacheEntriesMax = 4096
DirectoryCacheEntrySizeMax = 32768
FileInfoCacheEntriesMax = 8192
In addition, I noticed that when there are "too many" files, the listing returns the paths in random order (not alphabetically sorted). Not sure if it has anything to do with the problem. This behavior persists even after changing the listing to use FindFirstFile/FindNextFile.
In addition, I studied more carefully the timeout that is needed to "invalidate" the "cache" (i.e. for a folder with few files to start behaving slowly), and it is around 30 seconds in my case. Sometimes a lower value shows the same behavior (slow attribute retrieval for a folder with few files), but then re-running the program is instantaneous again.
I updated the code above; it originally used std::filesystem::directory_iterator from C++17.
I will use the following code to explain my question:
#include <Windows.h>
#include <iostream>

int main()
{
    bool toggle = false;
    while (true)
    {
        if (GetAsyncKeyState('C') & 0x8000)
        {
            toggle = !toggle;
            if (toggle) std::cout << "Pressed\n";
            else        std::cout << "Not pressed\n";
        }
    }
}
Testing, I see that
(GetAsyncKeyState('C') & 0x8000) // 0x8000 to see if the most significant bit is 1
has the same behavior as
(GetAsyncKeyState('C'))
However, to achieve the behavior I want, which is the way any text input out there works (it waits about a second and, if you are still pressing the key, starts repeating at a certain rate), I need to write
(GetAsyncKeyState('C') & 1)
The documentation says
The behavior of the least significant bit of the return value is retained strictly for compatibility with 16-bit Windows applications (which are non-preemptive) and should not be relied upon.
Can someone clarify this please?
MSDN tells you why on the same page you linked to!
Although the least significant bit of the return value indicates whether the key has been pressed since the last query, due to the pre-emptive multitasking nature of Windows, another application can call GetAsyncKeyState and receive the "recently pressed" bit instead of your application. The behavior of the least significant bit of the return value is retained strictly for compatibility with 16-bit Windows applications (which are non-preemptive) and should not be relied upon.
GetAsyncKeyState gives you "the interrupt-level state associated with the hardware" and is probably shared by all processes in the window station/session.
The low bit might be connected to the keyboard key repeat delay you can set in Control Panel, but it does not really matter because MSDN tells you to not look at that bit.
GetAsyncKeyState is usually not the correct way to process keyboard input. Console applications should read stdin, or use the console API. GUI applications should use the WM_CHAR/WM_KEYDOWN/WM_KEYUP window messages.
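For the console case, a minimal sketch of the console-API route (this just demonstrates that the "wait, then repeat" behaviour comes for free from the console's key events; it is not the asker's toggle logic):
#include <windows.h>
#include <iostream>

int main() {
    HANDLE input = GetStdHandle(STD_INPUT_HANDLE);
    INPUT_RECORD record;
    DWORD count;

    // Key auto-repeat (initial delay, then the typematic rate) is delivered
    // as repeated key-down events, so no polling or manual timing is needed.
    while (ReadConsoleInput(input, &record, 1, &count)) {
        if (record.EventType == KEY_EVENT &&
            record.Event.KeyEvent.bKeyDown &&
            record.Event.KeyEvent.wVirtualKeyCode == 'C') {
            std::cout << "Pressed\n";
        }
    }
}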
After three days of intensive googling and stackoverflowing I more or less got my program to work. I tried a lot of things and found a lot of answers somehow connected to my problem, but no working solution. Sorry if I missed the right page! I'm looking forward to comments and recommendations.
Task:
Send binary data (floats) from a Python program to a C++ program, get a few floats back
The data is going to be 20 ms of soundcard input; latency is somewhat critical
Platform: Windows (only due to drivers for the soundcard...)
Popen with pipes, but without communicate, because I want to keep the C++ program open
The whole thing worked just fine on Ubuntu with test data. On Windows I ran into the binary stream problem: in text mode, Windows checks the float stream for the EOF character and finds one at random. Then everything freezes, waiting for input data that is sitting just behind the "EOF" wall. Or so I picture it.
In the end these two things were necessary:
#include <io.h>
#include <fcntl.h>
and
if (_setmode(_fileno(stdin), _O_BINARY) == -1) {
    cout << "binary mode problem" << endl;
    return 1;
}
in C++ as described here: https://msdn.microsoft.com/en-us/library/aa298581%28v=vs.60%29.aspx.
cin.ignore() freezes when using binary mode! I guess that's because there is no EOF any more. I did not try/think this through too thoroughly, though.
cin.read(mem,sizeof(float)*length) does the job, since I know the length of the data stream
Compiled with MinGW
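Putting those pieces together, the C++ side ends up looking roughly like this (a sketch; the buffer length of 960 floats is just an example for 20 ms of mono audio at 48 kHz, not a value from the original code):
#include <io.h>
#include <fcntl.h>
#include <cstdio>
#include <iostream>
#include <vector>

int main() {
    // Switch stdin to binary mode so the CRT stops interpreting bytes in the
    // float stream as EOF or line endings.
    if (_setmode(_fileno(stdin), _O_BINARY) == -1) {
        std::cout << "binary mode problem" << std::endl;
        return 1;
    }

    const int length = 960;                       // example: 20 ms, 48 kHz, mono
    std::vector<float> samples(length);

    // Read exactly length floats; possible because the sender knows the size.
    std::cin.read(reinterpret_cast<char*>(samples.data()),
                  sizeof(float) * samples.size());

    // ... process the block and write a few floats back on stdout ...
    return 0;
}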
and in the Python code, the same thing! (I forgot this at first, which cost me a day):
if sys.platform.find("win") > -1:
    import msvcrt, os
    process = subprocess.Popen("cprogram.exe", stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=2**12)
    msvcrt.setmode(process.stdin.fileno(), os.O_BINARY)
and
process.stdin.write(data.tostring())
I have a rather interesting problem for which I'm unable to find a resolution. I'm using Setup API to list drives in the system. I have no trouble using the code listed below when setting the enumerator to "IDE". My angst comes when the enumerator value is set to "SCSI". Code which reproduces this problem is below:
#include <iostream>
#include <Windows.h>
#include <SetupAPI.h>
#include <cfgmgr32.h>
#include <devguid.h>

int main() {
    std::cout << "Looking for only SCSI disks" << std::endl;
    HDEVINFO hDevs(SetupDiGetClassDevs(&GUID_DEVCLASS_DISKDRIVE, "SCSI", NULL, DIGCF_PRESENT));
    if (INVALID_HANDLE_VALUE == hDevs) {
        DWORD error(GetLastError());
        std::cout << "Handle returned is invalid. Error code: " << error << std::endl;
        return 1;
    }
    SP_DEVINFO_DATA sp = {sizeof(SP_DEVINFO_DATA)};
    char buff[256];
    memset(buff, 0, 256);
    DWORD index(0);
    std::cout << "The handle is valid, listing drives now" << std::endl;
    while (SetupDiEnumDeviceInfo(hDevs, index++, &sp)) {
        CM_Get_Device_ID(sp.DevInst, buff, 256, 0);
        std::cout << buff << std::endl;
        memset(buff, 0, 256);
    }
    SetupDiDestroyDeviceInfoList(hDevs);
    return 0;
}
As you can see, there is nothing remarkable about this code. The problem is that, on certain laptops, this code fails at SetupDiGetClassDevs(). Checking GetLastError() reveals that it failed with ERROR_INVALID_DATA (0xd). What I don't understand is why. This exact same program, run on my development box both as my user (with administrator rights) and as an unprivileged user, works just fine whether or not SCSI drives are present.
I know that the GUID in use is correct. It's defined in devguid.h. "SCSI" is a valid PnP enumerator, as referenced on this MSDN page and also seen in the "Enumerator" property in Device Manager. The third argument may be NULL, and the fourth is a valid flag defined for this function. I know this because, except for these laptops, it works on every system I've ever tried it on (which, in my organization, is quite a few). I'm hoping that someone here may know what would cause SetupDiGetClassDevs() to fail with this error under these conditions, or could at least point me in the right direction. I'm not a Windows expert, and I could be missing something in system configuration or permissions (although the error does not imply that).
As I hope is clear, I've run this code on the one laptop I can test it on as both a user with Administrator privileges and as the Administrator user: both with the same result. The laptop is an HP EliteBook 8460p running Windows 7 64-bit Service Pack 1. Compiling this code in 32 or 64 bits makes no difference.
I'm going to post the answer I got from a fellow on the MSDN support forums to help someone who may be confounded by this same issue. Apparently, this is expected behavior for Windows 7. If the system has never seen hardware with the enumerator specified to SetupDiGetClassDevs(), then a failure occurs and this error code is expected.
For reference, the thread where I asked this question is linked here.
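Given that behaviour, the practical fix on the affected machines is to treat ERROR_INVALID_DATA as "no devices with this enumerator" rather than as a hard failure; a sketch of how the check above might be adjusted (the wording of the messages is mine):
#include <iostream>
#include <Windows.h>
#include <SetupAPI.h>
#include <devguid.h>

int main() {
    HDEVINFO hDevs = SetupDiGetClassDevs(&GUID_DEVCLASS_DISKDRIVE, "SCSI", NULL, DIGCF_PRESENT);
    if (INVALID_HANDLE_VALUE == hDevs) {
        DWORD error = GetLastError();
        if (ERROR_INVALID_DATA == error) {
            // Windows 7: the machine has never seen hardware with this
            // enumerator, so report an empty result instead of failing.
            std::cout << "No SCSI disks known to this system" << std::endl;
            return 0;
        }
        std::cout << "Handle returned is invalid. Error code: " << error << std::endl;
        return 1;
    }
    // ... enumerate with SetupDiEnumDeviceInfo as in the question ...
    SetupDiDestroyDeviceInfoList(hDevs);
    return 0;
}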
I am a complete noob at OpenMP and just started by exploring the simple test code below.
#include <iostream>
//#include <cstdio>   // for the printf variant

int main()
{
    #pragma omp parallel
    {
        #pragma omp for
        for (int i = 0; i < 10; ++i)
            std::cout << i << " " << std::endl;
            // printf("%d \n", i);
    }
}
I tried the C and C++ versions; the C version seems to work fine, whereas the C++ version gives me wrong output.
Many implementations of printf acquire a lock to ensure that each printf call is not interrupted by other threads.
In contrast, std::cout's overloaded << operator means that (even with a lock) one thread's printing of i and ' ' and '\n' can be interleaved with another thread's output, because std::cout<<i<<" "<<endl; is translated to three operator<<() function calls by the C++ compiler.
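One way around that (a sketch; compile with -fopenmp) is to serialise the whole insertion so the three calls from one thread come out together:
#include <iostream>

int main() {
    #pragma omp parallel for
    for (int i = 0; i < 10; ++i) {
        // The critical section keeps one thread's i, " " and endl together,
        // so lines from different threads cannot interleave mid-line.
        #pragma omp critical
        std::cout << i << " " << std::endl;
    }
}
Building the whole line in a std::ostringstream and inserting the resulting string in one call is another common approach, though the critical section is the simplest to reason about.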
This is outdated, but perhaps it can still help someone:
It's not really clear what you expect the output to be, but be aware of the following:
Your variable "i" is possibly shared amongst threads. You then have a race condition on the contents of "i": one thread needs to wait for another when it wants to access "i", and one thread can change "i" without another thread taking note of it, meaning it will output a wrong value.
endl flushes the stream after ending the line. If you use \n for the newline the effect is similar but without the flush. And std::cout is a shared object too, so multiple threads race for access to it. When the stream isn't flushed after every access you may see interference between the threads' output.
To make sure those are not related to your problems, you could declare "i" as private so every thread counts "i" itself, and you could play with flushing the stream on output to see whether it has anything to do with the problem you are experiencing.