WinAPI C++ client detect write on anonymous pipe before reading - winapi

I am writing a C++ (Windows) client console application which reads from an anonymous pipe on STDIN. I would like to be able to use my program as follows:
echo input text here | my_app.exe
and do something in the app with the text that is piped in
OR
my_app.exe
and then use some default text inside of the app instead of the input from the pipe.
I currently have code that successfully reads from the pipe on STDIN given the first situation:
#include <Windows.h>
#include <iostream>
#include <string>
#define BUFSIZE 4096
int main(int argc, const char *argv[]) {
    char char_buffer[BUFSIZE];
    DWORD bytes_read;
    HANDLE stdin_handle;
    BOOL continue_reading;
    unsigned int required_size;
    bool read_successful = true;

    stdin_handle = GetStdHandle(STD_INPUT_HANDLE);

    if (stdin_handle == INVALID_HANDLE_VALUE) {
        std::cout << "Error: invalid handle value!\n\n";
    } else {
        continue_reading = true;
        while (continue_reading) {
            continue_reading = ReadFile(stdin_handle, char_buffer, BUFSIZE,
                                        &bytes_read, NULL);
            if (continue_reading) {
                if (bytes_read != 0) {
                    // Output what we have read so far
                    for (unsigned int i = 0; i < bytes_read; i++) {
                        std::cout << char_buffer[i];
                    }
                } else {
                    continue_reading = false;
                }
            }
        }
    }
    return 0;
}
I know that my only option with anonymous pipes is to do a blocking read with ReadFile. If I understand correctly, the way I am invoking it, ReadFile will keep reading from the buffer on STDIN until it detects that the write end of the pipe has been closed (perhaps it reads some sort of "end of write" token??). I would like to know if there is some sort of "beginning of write" token that will be in the buffer if something is being piped in, which I can check on STDIN BEFORE I call ReadFile. If such a token existed, I could check for it and, when it is absent, skip calling ReadFile and use some default text instead.
If there is not a way to do this, I can always pass in a command line argument that denotes that I should not check the pipe and just use the default text (or the other way around), but I would much prefer to do it the way that I specified.

Look at PeekNamedPipe(). Despite its name, it works for both named and anonymous pipes.
int main(int argc, const char *argv[])
{
    char char_buffer[BUFSIZE];
    DWORD bytes_read;
    DWORD bytes_avail = BUFSIZE; // used as-is when stdin is not a pipe
    DWORD dw;
    HANDLE stdin_handle;
    bool is_pipe;

    stdin_handle = GetStdHandle(STD_INPUT_HANDLE);
    // GetConsoleMode() succeeds only on a real console handle, so a failure
    // here means stdin is redirected (pipe or file).
    is_pipe = !GetConsoleMode(stdin_handle, &dw);

    if (stdin_handle == INVALID_HANDLE_VALUE) {
        std::cout << "Error: invalid handle value!\n\n";
    } else {
        while (1) {
            if (is_pipe) {
                if (PeekNamedPipe(stdin_handle, NULL, 0, NULL, &bytes_avail, NULL)) {
                    if (bytes_avail == 0) {
                        Sleep(100);
                        continue;
                    }
                }
            }
            if (!ReadFile(stdin_handle, char_buffer, min(bytes_avail, BUFSIZE), &bytes_read, NULL)) {
                break;
            }
            if (bytes_read == 0) {
                break;
            }
            // Output what we have read so far
            for (unsigned int i = 0; i < bytes_read; i++) {
                std::cout << char_buffer[i];
            }
        }
    }
    return 0;
}

It looks like what you're really trying to do here is determine whether you've got console input (where you use the default value) vs. pipe input (where you use input from the pipe).
I suggest testing that directly instead of trying to check whether input is ready: the catch with sniffing for data in the pipe is that if the source app is slow to generate output, your app might make an incorrect assumption just because no input is available yet. (It's also possible that, due to typeahead, a user could have typed characters that are ready to be read from console STDIN before your app gets around to checking whether input is available.)
Also, keep in mind that it might be useful to allow your app to be used with file redirection, not just pipes - e.g.:
myapp.exe < some_input_file
The classic way to do this "interactive mode vs. redirected input" test on Unix is isatty(); luckily there's an equivalent in the Windows CRT - see _isatty(). Alternatively, use GetFileType() and check for FILE_TYPE_CHAR on GetStdHandle(STD_INPUT_HANDLE), or use GetConsoleMode as Remy does, which will only succeed on a real console handle.
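For example, something along these lines should work (a rough sketch of my own, not from the original code; the messages are just placeholders):
#include <Windows.h>
#include <io.h>       // _isatty, _fileno
#include <stdio.h>
#include <iostream>

int main() {
    // CRT check: _isatty() is non-zero when STDIN is a character device (the console).
    bool interactive = _isatty(_fileno(stdin)) != 0;

    // Equivalent WinAPI check:
    // interactive = GetFileType(GetStdHandle(STD_INPUT_HANDLE)) == FILE_TYPE_CHAR;

    if (interactive)
        std::cout << "No redirection detected - using default text\n";
    else
        std::cout << "Reading piped/redirected input\n";
    return 0;
}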

This can also be done without overlapped I/O by using a second thread that performs the synchronous ReadFile call. The main thread then waits an arbitrary amount of time for that read to complete and otherwise behaves as described above...
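A rough sketch of that idea (my own example; the 500 ms timeout and names like reader_thread are arbitrary):
#include <Windows.h>
#include <iostream>

#define BUFSIZE 4096

static char g_buffer[BUFSIZE];
static DWORD g_bytes_read = 0;

// Worker thread: perform one blocking ReadFile on STDIN.
static DWORD WINAPI reader_thread(LPVOID) {
    HANDLE stdin_handle = GetStdHandle(STD_INPUT_HANDLE);
    if (!ReadFile(stdin_handle, g_buffer, BUFSIZE, &g_bytes_read, NULL))
        g_bytes_read = 0;
    return 0;
}

int main() {
    HANDLE thread = CreateThread(NULL, 0, reader_thread, NULL, 0, NULL);
    // Give the writer on the other end of the pipe some time to produce data.
    if (WaitForSingleObject(thread, 500) == WAIT_OBJECT_0 && g_bytes_read > 0) {
        std::cout.write(g_buffer, g_bytes_read);
    } else {
        // NOTE: on timeout the reader thread may still be blocked in ReadFile.
        std::cout << "default text\n";
    }
    CloseHandle(thread);
    return 0;
}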
Hope this helps...

Related

Is there a way to forward output from Xcode to Processing?

I'm trying to forward the output stream from Xcode (v12.4) to Processing (https://processing.org/).
My goal is to draw a simple object in Processing based on data from my Xcode project.
I need to see the value of my variable in Processing.
#include <iostream>

int main(int argc, const char * argv[]) {
    // insert code here...
    for (int i = 0; i < 10; i++)
        std::cout << "How to send value of i to the Processing!\n";
    return 0;
}
Finally I found a way. Hope it helps someone; sharing it here.
Xcode app ->(127.0.0.1:UDP)-> Processing sketch
Source Links:
Sending string over UDP in C++
https://discourse.processing.org/t/receive-udp-packets/19832
Xcode app (C++):
#include <iostream>
#include <string>
#include <cstdint>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(int argc, char const *argv[])
{
    std::string hostname{"127.0.0.1"};
    uint16_t port = 6000;

    int sock = ::socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in destination;
    destination.sin_family = AF_INET;
    destination.sin_port = htons(port);
    destination.sin_addr.s_addr = inet_addr(hostname.c_str());

    std::string msg = "Hello world!";
    for (int i = 0; i < 5; i++) {
        long n_bytes = ::sendto(sock, msg.c_str(), msg.length(), 0,
                                reinterpret_cast<sockaddr*>(&destination), sizeof(destination));
        std::cout << n_bytes << " bytes sent" << std::endl;
    }

    ::close(sock);
    return 0;
}
Processing code:
import java.net.*;
import java.io.*;
import java.util.Arrays;

DatagramSocket socket;
DatagramPacket packet;
byte[] buf = new byte[12]; //Set your buffer size as desired

void setup() {
  try {
    socket = new DatagramSocket(6000); // Set your port here
  }
  catch (Exception e) {
    e.printStackTrace();
    println(e.getMessage());
  }
}

void draw() {
  try {
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    socket.receive(packet);

    InetAddress address = packet.getAddress();
    int port = packet.getPort();
    packet = new DatagramPacket(buf, buf.length, address, port);

    //Received as bytes:
    println(Arrays.toString(buf));

    //If you wish to receive as String:
    String received = new String(packet.getData(), 0, packet.getLength());
    println(received);
  }
  catch (IOException e) {
    e.printStackTrace();
    println(e.getMessage());
  }
}
The assumption is that you're using C++ in Xcode (not Objective-C or Swift).
Every Processing sketch inherits the args property (very similar to main's const char * argv[] in a C++ program). You can make use of that to initialise a Processing sketch with options from C++.
You could have something like:
#include <cstdlib>

int main(int argc, const char * argv[]) {
    system("/path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run 0,1,2,3,4,5,6,7,8,9");
    return 0;
}
(This is oversimplified: you'd have your for loop accumulate the ints into a string with a separator character, and maybe set up variables for the paths to processing-java and the Processing sketch - something like the sketch below.)
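A rough, untested sketch of that accumulation step (the paths are placeholders you'd fill in):
#include <cstdlib>
#include <sstream>
#include <string>

int main(int argc, const char * argv[]) {
    // Placeholder paths - adjust to your processing-java binary and sketch folder.
    const std::string processing_java = "/path/to/processing-java";
    const std::string sketch_path     = "/path/to/your/processing/sketch/folder";

    // Accumulate the values into a comma-separated argument string.
    std::ostringstream values;
    for (int i = 0; i < 10; i++) {
        if (i > 0) values << ',';
        values << i;
    }

    std::string cmd = processing_java +
                      " --sketch-path=" + sketch_path +
                      " --run " + values.str();
    return std::system(cmd.c_str());
}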
To clarify, processing-java is a command-line utility that ships with Processing. (You can find it inside the Processing.app folder (via Show Package Contents), alongside the processing executable, or install it via the Tools menu inside Processing.) It allows you to easily run a sketch from the command line. Alternatively, you can export an application; however, if you're prototyping, the processing-java option might be more practical.
In Processing you'd check if the sketch was launched with arguments, and if so, parse those arguments.
void setup() {
  if (args != null) {
    printArray(args);
  }
}
You can use split() to split 0,1,2,3,4,5,6,7,8,9 into individual numbers that can be parsed (via int() for example).
If you have more complex data, you can consider formatting your c++ output as JSON, then using parseJSONObject() / parseJSONArray().
(If you don't want to split individual values, you can just use spaces between command-line arguments: /path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run 0 1 2 3 4 5 6 7 8 9. If you want to send a JSON-formatted string from C++, be aware you may need to escape the " characters, e.g. system("/path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run {\"myCppData\":[0,1,2]}");.)
This would work if you need to launch the Processing sketch once and initialise it with values from your C++ program at startup. Outside the scope of your question: if you need to continuously send values from C++ to Processing, you can look at opening a local socket connection (TCP or UDP) to establish communication between the two programs. One easy-to-use protocol is OSC (over UDP). You can use oscpack in raw C++ and oscP5 in Processing. (Optionally, depending on your setup, you can consider openFrameworks, which already has oscpack integrated as ofxOsc and ships with send/receive examples: its ofApp is similar to Processing's PApplet (e.g. setup()/draw()/mousePressed(), etc.).)

Changing tab-completion for read builtin in bash

The current tab-completion while "read -e" is active in bash seems to match only filenames:
read -e
[[TabTab]]
abc.txt bcd.txt cde.txt
I want the completion to be a set of strings defined by me, while file/dir/hostname-completion etc. should be deactivated for the duration of "read -e".
Outside of a script
complete -W 'string1 string2 string3' -E
works well, but I can't get this kind of completion to work inside a script while using "read -e".
Although it seems like a reasonable request, I don't believe that is possible.
The existing implementation of the read builtin sets the readline completion environment to a fairly basic configuration before calling readline to handle -e input.
You can see the code in builtins/read.def, in the edit_line function: it sets rl_attempted_completion_function to NULL for the duration of the call to readline. readline has several completion overrides, so it's not 100% obvious that this resets the entire completion environment, but as far as I know this is the function which is used to implement programmable completion as per the complete command.
With some work, you could probably modify the definition of the read command to allow a specific completion function instead of or in addition to the readline standard filename completion function. That would require a non-trivial understanding of bash internals, but it would be a reasonable project if you wanted to gain familiarity with those internals.
As a simpler but less efficient alternative, you could write your own little utility which just accepts one line of keyboard input with readline and echoes it to stdout. Then invoke read redirecting its stdin to your utility:
read -r < <(my_reader string1 string2 string3)
(That assumes that my_reader uses its command-line arguments to construct the potential completion list for the readline library. You'd probably want the option to present a prompt as well.)
The readline documentation includes an example of an application which does simple custom completion; once you translate it from the K&R function prototype syntax, it might be pretty easy to adapt to your needs.
Edit: After looking at that example again, I thought it had a lot of unnecessary detail, so I wrote the following stripped-down example. I might upload it to GitHub, but for now it's here, even though it's nearly 100 lines:
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <readline/readline.h>

static void version(const char* progname) {
    fprintf(stderr, "%s 0.1\n", progname);
}

static void usage(const char* progname) {
    fprintf(stderr, "Usage: %s [-fhv] [-p PROMPT] [-n PROGNAME] [COMPLETION...]\n", progname);
    fprintf(stderr,
            "Reads one line using readline, and prints it to stdout.\n"
            "Returns success if a line was read.\n"
            "  -p PROMPT    Output PROMPT before requesting input.\n"
            "  -n PROGNAME  Set application name to PROGNAME for readline config file\n"
            "               (Default: %s).\n"
            "  -f           Use filename completion as well as specified completions.\n"
            "  -h           Print this help text and exit.\n"
            "  -v           Print version number and exit.\n"
            "  COMPLETION   word to add to the list of possible completions.\n",
            progname);
}

/* Readline really likes globals, so none of its hooks take a context parameter. */
static char** completions = NULL;

static char* generate_next_completion(const char* text, int state) {
    static int index = 0;
    if (state == 0) index = 0; /* reset index if we're starting */
    size_t textlen = strlen(text);
    while (completions[index++])
        if (strncmp(completions[index - 1], text, textlen) == 0)
            return strdup(completions[index - 1]);
    return NULL;
}

/* We use this if we will fall back to filename completion */
static char** generate_completions(const char* text, int start, int end) {
    return rl_completion_matches(text, generate_next_completion);
}

int main (int argc, char **argv) {
    const char* prompt = "";
    const char* progname = strrchr(argv[0], '/');
    progname = progname ? progname + 1 : argv[0];
    rl_readline_name = progname;
    bool use_file_completion = false;

    for (;;) {
        int opt = getopt(argc, argv, "+fp:n:hv");
        switch (opt) {
            case -1:  break;
            case 'f': use_file_completion = true; continue;
            case 'p': prompt = optarg; continue;
            case 'n': rl_readline_name = optarg; continue;
            case 'h': usage(progname); return 0;
            case 'v': version(progname); return 0;
            default:  usage(progname); return 2;
        }
        break;
    }

    /* The default is stdout, which would interfere with capturing output. */
    rl_outstream = stderr;

    completions = argv + optind;
    rl_completion_entry_function = rl_filename_completion_function;
    if (*completions) {
        if (use_file_completion)
            rl_attempted_completion_function = generate_completions;
        else
            rl_completion_entry_function = generate_next_completion;
    } else {
        /* No specified strings */
        if (!use_file_completion)
            rl_inhibit_completion = true;
    }

    char* line = readline(prompt);
    if (line) {
        puts(line);
        free(line);
        return 0;
    } else {
        fputc('\n', rl_outstream);
        return 1;
    }
}

Same .txt files, different sizes?

I have a program that reads from a .txt file.
I use the cmd prompt to execute the program with the name of the text file to read from.
ex: program.exe myfile.txt
The problem is that sometimes it works, sometimes it doesn't.
The original file is 130KB and doesn't work.
If I copy/paste the contents, the file is 65KB and works.
If I copy/paste the file and rename it, it's 130KB and doesn't work.
Any ideas?
After more testing, it seems this is the part that makes it not work:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

int main(int argc, char *argv[])
{
    char *infile1 = NULL;
    char *outfile = NULL;
    char tmp[1024] = { 0x0 };
    FILE *in;
    int i, opt = 0, optarg1 = 0;

    for (i = 1; i < argc; i++)  /* Skip argv[0] (program name). */
    {
        if (strcmp(argv[i], "-sec") == 0)  /* Process optional arguments. */
        {
            opt = 1;  /* This is used as a boolean value. */
            /*
             * The last argument is argv[argc-1]. Make sure there are
             * enough arguments.
             */
            if (i + 1 <= argc - 1)  /* There are enough arguments in argv. */
            {
                /*
                 * Increment 'i' twice so that you don't check these
                 * arguments the next time through the loop.
                 */
                i++;
                optarg1 = atoi(argv[i]);  /* Convert string to int. */
            }
        }
        else  /* not -sec */
        {
            if (infile1 == NULL) {
                infile1 = argv[i];
            }
            else {
                if (outfile == NULL) {
                    outfile = argv[i];
                }
            }
        }
    }

    in = fopen(infile1, "r");
    if (in == NULL)
    {
        fprintf(stderr, "Unable to open file %s: %s\n", infile1, strerror(errno));
        exit(1);
    }

    while (fgets(tmp, sizeof(tmp), in) != 0)
    {
        fprintf(stderr, "string is %s.", tmp);
        // Rest of code
    }
}
Whether it works or not, the code inside the while loop gets executed.
When it works tmp actually has a value.
When it doesn't work tmp has no value.
EDIT:
Thanks to sneftel, we know what the problem is.
For me to use fgetws() instead of fgets(), I need tmp to be a wchar_t* instead of a char*.
Type casting doesn't seem to work.
I tried changing the declaration of tmp to
wchar_t tmp[1024] = { 0x0 };
but I realized that tmp is a parameter in strtok() used elsewhere in my code.
Here is what I tried in that function:
// tmp is passed as the first parameter in parse()
void parse(wchar_t *record, char *delim, char arr[][MAXFLDSIZE], int *fldcnt)
{
    if (*record != NULL)
    {
        char *p = strtok((char*)record, delim);
        int fld = 0;
        while (p) {
            strcpy(arr[fld], p);
            fld++;
            p = strtok('\0', delim);
        }
        *fldcnt = fld;
    }
    else
    {
        fprintf(stderr, "string is null");
    }
}
But typecasting to char* in strtok doesn't work either.
Now I'm looking for a way to just convert the file from UTF-16 to UTF-8 so tmp can stay of type char*.
I found this, which looks like it could be useful, but its example takes UTF-16 input from the user; how can that input be taken from the file instead?
http://www.cplusplus.com/reference/locale/codecvt/out/
It sounds an awful lot like the original file is UTF-16 encoded. When you copy/paste it in your text editor, you then save the result out as a new text file in the editor's default encoding (ASCII or UTF-8). Since a single character takes 2 bytes in a UTF-16-encoded file but only 1 byte in a UTF-8-encoded file, the file size is roughly halved when you save it out.
UTF-16 is fine, but you'll need to use Unicode-aware functions (that is, not fgets) to work with it. If you don't want to deal with all that Unicode jazz right now, and you don't actually have any non-ASCII characters to deal with in the file, just do the manual conversion (either with your copy/paste or with a command-line utility) before running your program.
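If you do want to do the conversion in code rather than as a manual step, here is a rough, untested sketch using the standard codecvt facets (deprecated since C++17 but still available; the file name is a placeholder, and non-BMP characters aren't handled on Windows, where wchar_t is 16 bits):
#include <codecvt>
#include <fstream>
#include <iostream>
#include <iterator>
#include <locale>
#include <string>

int main()
{
    std::wifstream in("myfile.txt", std::ios::binary);
    // Decode UTF-16LE, skipping a byte-order mark if one is present.
    in.imbue(std::locale(in.getloc(),
        new std::codecvt_utf16<wchar_t, 0x10ffff,
            std::codecvt_mode(std::consume_header | std::little_endian)>));

    std::wstring wide((std::istreambuf_iterator<wchar_t>(in)),
                      std::istreambuf_iterator<wchar_t>());

    // Narrow the wide string down to UTF-8 so it fits in a plain std::string.
    std::wstring_convert<std::codecvt_utf8<wchar_t>> to_utf8;
    std::string utf8 = to_utf8.to_bytes(wide);

    std::cout << utf8;
    return 0;
}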

Piping to stdin of another process

I'm using a pipe to send an array of numbers to another process to sort them. So far, I'm able to get the result from another process using fdopen. However, I can't figure out how to send data through the pipe as stdin for the other process.
Here is my code:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main ()
{
    int fd[2], i, val;
    pid_t child;
    char file[10];
    FILE *f;

    pipe(fd);
    child = fork();
    if (child == 0)
    {
        close(fd[1]);
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        execl("sort", "sort", NULL);
    }
    else
    {
        close(fd[0]);
        printf ("BEFORE\n");
        for (i = 100; i < 110; i++)
        {
            write(fd[1], &i, sizeof (int));
            printf ("%d\n", i);
        }
        close(fd[1]);
        wait(NULL);
    }
}
By the way, how can the other process get input? scanf?
I think your pipe is set up correctly. The problems may start at execl(): for this call you should specify an absolute path, which is probably /bin/sort if you mean the Unix utility. There is also a variant of the call, execlp(), which searches the PATH automatically.
The next problem is that sort is text-based, more specifically line-based, and you're sending binary garbage to its STDIN.
In the parent process you should write formatted text into the pipe.
FILE *wpipe = fdopen(fd[1], "w");
for (i = 100; i < 110; i++) {
    fprintf(wpipe, "%d\n", i);
    ...
}
fclose(wpipe);
A reverse loop from 110 down to 100 may test your sorting a bit better.
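Putting the two fixes together, a rough sketch of the corrected program might look like this (my own assembly of the above suggestions, untested; execlp is used so the PATH is searched, and the loop runs in reverse to exercise the sort):
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2], i;
    pipe(fd);

    pid_t child = fork();
    if (child == 0)
    {
        /* Child: read the pipe on stdin and run sort. */
        close(fd[1]);
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        execlp("sort", "sort", (char *)NULL);
        perror("execlp");   /* only reached if exec fails */
        return 1;
    }

    /* Parent: write one number per line as text, then close the pipe
       so sort sees end-of-input. */
    close(fd[0]);
    FILE *wpipe = fdopen(fd[1], "w");
    for (i = 109; i >= 100; i--)
        fprintf(wpipe, "%d\n", i);
    fclose(wpipe);

    wait(NULL);
    return 0;
}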

Alternative to fgets()?

Description:
Obtain output from an executable
Note:
Will not compile, due to fgets() declaration
Question:
What is the best alternative to fgets, as fgets requires char *?
Is there a better alternative?
Illustration:
void Q_analysis (const char *data)
{
    string buffer;
    size_t found;

    found = buffer.find_first_of (*data);
    FILE *condorData = _popen ("condor_q", "r");

    while (fgets (buffer.c_str(), buffer.max_size(), condorData) != NULL)
    {
        if (found == string::npos)
        {
            Sleep(2000);
        } else {
            break;
        }
    }
    return;
}
You should be using the std::getline function for strings (see cppreference).
However, in your case you should be using a char[] to read into, e.g.:
string s;
char buffer[ 4096 ];
fgets(buffer, sizeof( buffer ), condorData);
s.assign( buffer, strlen( buffer ));
Or, applied to your code:
void Q_analysis( const char *data )
{
    char buffer[ 4096 ];

    FILE *condorData = _popen ("condor_q", "r");

    while( fgets( buffer, sizeof( buffer ), condorData ) != NULL )
    {
        if( strstr( buffer, data ) == NULL )
        {
            Sleep(2000);
        }
        else
        {
            break;
        }
    }
}
Instead of declaring your buffer as a string, declare it as something like:
char buffer[MY_MAX_SIZE];
Call fgets with that, and then build the string from the buffer if you need it in that form, instead of going the other way.
The reason what you're doing doesn't work is that c_str() gives you the buffer contents as a C-style string that is, by design, read-only, not a writable pointer into the guts of the string.
-- MarkusQ
You're right that you can't read directly into a std::string because its c_str and data methods both return const pointers. You could read into a std::vector<char> instead.
You could also use the getline function. But it requires an iostream object, not a C FILE pointer. You can get from one to the other, though, in a vendor-specific way. See "A Handy Guide To Handling Handles" for a diagram and some suggestions on how to get from one file type to another. Call fileno on your FILE* to get a numeric file descriptor, and then use fstream::attach to associate it with an fstream object. Then you can use getline.
Try the boost library - I believe it has a function to create an fstream from a FILE*.
Or you could use fileno() to get the underlying file descriptor from the FILE*, then use fstream::attach (a non-standard, vendor-specific extension) to attach a stream to that descriptor. From there you can use getline(), etc. Something like this:
FILE *condorData = _popen ("condor_q", "r");
std::ifstream stream;
stream.attach(_fileno(condorData));   // attach() is not provided by all standard libraries
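If you go the Boost route, a sketch along these lines should work (assuming Boost.Iostreams is available; I haven't tested this against condor_q):
#include <cstdio>
#include <string>
#include <iostream>
#include <boost/iostreams/device/file_descriptor.hpp>
#include <boost/iostreams/stream.hpp>

namespace io = boost::iostreams;

int main()
{
    FILE *condorData = _popen("condor_q", "r");
    // Wrap the C FILE*'s descriptor in an istream-compatible Boost device.
    io::stream<io::file_descriptor_source> stream(
        io::file_descriptor_source(_fileno(condorData), io::never_close_handle));

    std::string line;
    while (std::getline(stream, line))
        std::cout << line << '\n';

    _pclose(condorData);
    return 0;
}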
I haven't tested it all too well, but the below appears to do the job:
//! read a line of text from a FILE* to a std::string, returns false on 'no data'
bool stringfgets(FILE* fp, std::string& line)
{
    char buffer[1024];

    line.clear();
    do {
        if(!fgets(buffer, sizeof(buffer), fp))
            return !line.empty();

        line.append(buffer);
    } while(!strchr(buffer, '\n'));
    return true;
}
Be aware however that this will happily read a 100G line of text, so care must be taken that this is not a DoS-vector from untrusted source files or sockets.
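For completeness, a hypothetical usage sketch with the _popen() call from the question (the includes and the condor_q command are assumptions carried over from the question, not part of the original answer):
#include <cstdio>
#include <cstring>
#include <string>
#include <iostream>

bool stringfgets(FILE* fp, std::string& line);   // as defined above

int main()
{
    FILE* condorData = _popen("condor_q", "r");
    if (!condorData)
        return 1;

    std::string line;
    while (stringfgets(condorData, line))
        std::cout << line;   // each line still ends with '\n' (except possibly the last)

    _pclose(condorData);
    return 0;
}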
