I've written a libcurses-based ASCII UI that writes text to stdout when the program exits.
If I execute the program alone, like so...
> ./test
...the UI displays.
However, if I try to capture the program output to a Bash variable, like so...
> foo=$(./test)
...the UI does not display; however, the Bash variable captures the expected output.
Does anyone know why this happens? Is there a way to get the UI to show up while capturing its stdout to a Bash variable?
The Code
#include <iostream>
#include <curses.h>
#include <unistd.h>

int main(int argc, char* argv[])
{
    WINDOW* pWindow = initscr();
    keypad(pWindow, TRUE);
    curs_set(0);
    nodelay(pWindow, FALSE);

    mvwprintw(pWindow, 5, 5, "hello, world!");
    mvwprintw(pWindow, 6, 5, "hello, fold!");
    mvwprintw(pWindow, 7, 5, "hello, toad!");

    for (int i = 0; i < 5; ++i)
    {
        mvwprintw(pWindow, 5 + i, 1, "==>");
        refresh();
        usleep(500000);
        mvwprintw(pWindow, 5 + i, 1, "   ");
        refresh();
    }

    endwin();
    std::cout << "bar" << std::endl;
}
Redirecting the standard output (>, a=$(…)) does exactly that: it redirects the standard output stream. ncurses, on the other hand, talks directly to the terminal and displays characters that are never part of stdout.
In short: it doesn't capture the UI output because there is none; ncurses programs talk directly to the underlying terminal.
Is there a way to get the ui to show up when trying to capture its stdout to a Bash variable?
I don't recommend that, because you'd be mixing non-interactive usage (capturing standard output) with interactive usage, and that can't really go well in the end. But:
you can end your ncurses session and then just use printf like any other C programmer. Then you'd actually be producing standard output.
I'd much rather just add an option to my program that takes a file to which I write my output. Then the bash script could open that file after my program has run.
When you initialize curses using initscr, it uses the standard output for display. (You could use newterm to specify another output.) So when you redirect the output of the program, you will not see the user interface.
Adapting your example,
#!/bin/bash
g++ -o test foo.c $(ncursesw6-config --cflags --libs)
foo=$(./test)
set >foo.log
and looking at what bash puts in $foo, I see the expected control characters that were written to the user interface, e.g.,
foo=$'\E[?1049h\E[1;40r\E(B\E[m\E[4l\E[?7h\E[?1h\E=\E[?25l\E[H\E[2J\E[6d ==> hello, world!\n\E[6Ghello, fold!\n\E[6Ghello, toad!\E[6;5H\r \r\n ==>\r \r\n ==>\r \r\n ==>\r\E[J \r\n ==>\r\E[J \E[40;1H\E[?12l\E[?25h\E[?1049l\r\E[?1l\E>bar'
Related
Every time I use the terminal to print out a string or any other kind of character, it automatically prints a "%" at the end of each line. This happens every time I try to print something from C++ or PHP; I haven't tried other languages yet. I think it might be something with VS Code, but I have no idea how it came about or how to fix it.
#include <iostream>

using namespace std;

int test = 2;

int main()
{
    if (test < 9999) {
        test = 1;
    }
    cout << test;
}
Output:
musti@my-mbp clus % g++ main.cpp -o tests && ./tests
1%
Also, changing the cout from cout << test; to cout << test << endl; removes the % from the output.
Are you using zsh? A line without endl is considered a "partial line", so zsh shows a color-inverted % then goes to the next line.
When a partial line is preserved, by default you will see an inverse+bold character at the end of the partial line: a ‘%’ for a normal user or a ‘#’ for root. If set, the shell parameter PROMPT_EOL_MARK can be used to customize how the end of partial lines are shown.
More information is available in their docs.
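If the marker bothers you, zsh lets you customize or hide it via the PROMPT_EOL_MARK parameter mentioned in the quote above (a config sketch for ~/.zshrc; this only changes the display, it does not add the missing newline):

```shell
# Hide zsh's partial-line marker entirely (default: "%" for users, "#" for root).
PROMPT_EOL_MARK=''
```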
My Code
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>

int main()
{
    char *arg_list[3];
    arg_list[0] = "ls";
    arg_list[1] = "-l";
    arg_list[2] = 0;

    char *arg_list2[3];
    arg_list2[0] = "ps";
    arg_list2[1] = "-ef";
    arg_list2[2] = 0;

    for (int i = 0; i < 5; i++) { // loop will run n times (n=5)
        if (fork() == 0) {
            if (i == 0) {
                execvp("ls", arg_list);
            } else if (i == 1) {
                execvp("ps", arg_list2);
            } else if (i > 1) {
                printf("[son] pid %d from [parent] pid %d\n", getpid(), getppid());
                exit(0);
            }
        }
    }
    for (int i = 0; i < 5; i++) // loop will run n times (n=5)
        wait(NULL);
}
My attempt at modifying it
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>

int main()
{
    for (int i = 0; i < 5; i++) { // loop will run n times (n=5)
        if (fork() == 0) {
            printf("[son] pid %d from [parent] pid %d\n", getpid(), getppid());
            execlp(argv[i], argv[i], argv[i+1], (char*)NULL);
            exit(0);
        }
    }
    for (int i = 0; i < 5; i++) // loop will run n times (n=5)
        wait(NULL);
}
-- NEED GUIDANCE AND UNDERSTANDING
I am trying to make my own tiny little shell program. My first code works fine when I run it; it runs all the commands on the command line. But I cannot know and define in advance all the commands the user might enter. So I am trying to get a base code which could run any command, single or multiple, entered by the user. I tried using execlp, but it does not compile, saying argv is not defined, which is true, as I don't want to define it explicitly.
I am trying to make my own tiny little shell program. My first code works fine when I run it; it runs all the commands on the command line. But I cannot know and define in advance all the commands the user might enter.
For sure... A shell program's purpose is basically to:
Read user input
Execute user input
Return the result of the execution.
There's nothing in your code that reads user input...
So I am trying to get a base code which could run any command, single or multiple, entered by the user.
So read user input ;-)
I tried using execlp, but it does not compile, saying argv is not defined, which is true, as I don't want to define it explicitly.
For sure... but how would GCC guess that `argv[]` must be automatically filled with user input?
There's nothing automatic when coding in C. You have to manage this manually.
Also, note that argc, argv and envp are usually reserved for the main() function:
main(int argc, char **argv, char **envp)
So you should use something else to build your command array.
In pseudo code, what you must implement is:
quit = 0
while (quit == 0) {
    command_to_run = read_user_input();
    if (command_to_run == "exit") {
        quit = 1;
    } else {
        execute(command_to_run);
    }
}
Some advice:
Try to use more functions. For example, implement a fork_and_run(char **cmd) function to fork and then execute the command provided by the user. It will make your code more readable and easier to maintain.
Read the man pages carefully: everything you need to know (like, for example, the fact that the array passed to execvp() must be NULL-terminated) is written in them.
Your debugging messages should be printed to stderr, while the result of the command run goes to stdout, so use fprintf() to write to the correct stream.
I would use a #define debug(x) fprintf(stderr, x) or something similar for debugging output, so that you can easily disable it later ;-)
for (i = 0; i < 2; i++)
    if (fork() == 0)
        printf("Hi");
I am expecting 3 "Hi" and getting 4.
So I edited the printf to printf("Hi %d %d %d ", i, getpid(), getppid());
The first child created prints two "Hi"s with the same value of i (0), and its pid and its parent's pid are also the same. Why?
It's quite interesting, and it looks like the answer is output buffering. For example, we have:
#include <unistd.h>
#include <stdio.h>

int main() {
    for (int i = 0; i < 2; i++) {
        if (fork() == 0) {
            printf("Hi %d %d %d\n", i, getpid(), getppid());
        }
    }
}
If I run this code in a terminal there will be three lines, but if I redirect the output to less there will be four!
If we flush the buffer after printf(), the problem disappears:
// ...
printf("Hi %d %d %d\n",i,getpid(),getppid());
fflush(stdout);
// ...
That happens because stdout is buffered, so when the process forked, the buffer had not yet been flushed.
From man stdout:
The stream stderr is unbuffered. The stream stdout is
line-buffered when it points to a terminal. Partial lines will not
appear until fflush(3) or exit(3) is called, or a newline is printed.
This can produce unexpected results, especially with debugging
output.
I'm trying to simulate pipe behavior in Ubuntu's terminal, for example the command:
echo hello | wc
Please assume I got the tokens from stdin and handled everything correctly; these are the commands I "received" from the user, who typed them into the shell for me to handle.
I'm trying to create two processes connected by a pipe. In the first process, I point the file descriptor of the writing end of the pipe to stdout. The second process should read what the first one wrote, via the reading end of the pipe redirected to its stdin.
Here is the code I wrote:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>

int main()
{
    char *fcmd[] = {"echo", "hello", NULL};
    char *scmd[] = {"wc", NULL};
    pid_t pid;
    int pipe_des[2];
    int i;

    pipe(pipe_des);

    for (i = 0; i < 2; i++)
    {
        pid = fork();
        if (pid == 0)
        {
            switch (i)
            {
            case 0: // FIRST CHILD
            {
                dup2(pipe_des[1], STDOUT_FILENO);
                close(pipe_des[0]);
                execvp(fcmd[0], fcmd);
                exit(0);
            }
            case 1: // SECOND CHILD
            {
                dup2(pipe_des[0], STDIN_FILENO);
                close(pipe_des[1]);
                execvp(scmd[0], scmd);
                exit(0);
            }
            }
        }
        else if (pid < 0)
            exit(EXIT_FAILURE);
    }
    return EXIT_SUCCESS;
}
I get: amirla@ubuntu:~/Desktop/os/class/ex4$ 1 1 6
as it should, but why is the bash prompt printed first? The pipe seems to work, because I get what I should according to the length of the word I'm sending with the echo command (in main()). After that, the cursor just waits on the line below for another command, without showing me the bash prompt. (Maybe stdin is waiting?)
I've looked at many posts here and on other websites, and I still can't seem to find a solution to my problem. Any help would be appreciated. Thanks in advance.
Note: please ignore the lack of error checking; I've deleted it to keep the code short, so assume it exists.
Why do I get a prompt before the output?
Your main process doesn't wait for the children to finish. What you see is:
Main starts
Main creates children
Main exits
BASH prints prompt
Children start their work
To prevent this, you need to wait for the children. See How to wait until all child processes called by fork() complete?
In your case, close both pipe ends in the parent (otherwise wc never sees end-of-file) and then add
while (waitpid(-1, NULL, 0) > 0);
after the loop.
I'm using Open3's popen2 to interact with a simple C++ program's iostreams. My understanding is that std::cin and std::cout are independent, but the order of the read/write calls on my popen2 block's IO objects seems to make a difference. My C++ program is:
int main(int argc, char** argv) {
    std::string input;
    std::cout << "EXECUTE TASK" << std::endl;
    std::cin >> input;
    std::cout << "END" << std::endl;
}
My ruby script is:
require 'open3'

expected_string = "EXECUTE TASK"

Open3.popen2('~/Sandbox/a.out') { |stdin, stdout|
  stdin.write("\n")
  stdin.close
  results = stdout.readlines
  puts results
}
The above works fine, but if I move stdout.readlines before stdin.close, the Ruby script hangs. My intent is to conditionally write the \n to stdin if the C++ program writes expected_string to standard output first, but I'm forced to close the stdin stream before I can call readlines. Like I said, my understanding was that the two streams are independent, and the file descriptors returned by popen2 appear to be independent as well, so why does the order matter?
Any help is appreciated. Thanks.
Solution, with the full scope of what I was trying to accomplish (someone may find this helpful):
int main(int argc, char** argv) {
    std::string input;
    std::cout << "1" << std::endl;
    std::cout << "2" << std::endl;
    std::cout << "3" << std::endl;
    std::cout << "4" << std::endl;
    std::cout << "5" << std::endl;
    std::cout << "EXECUTE TASK" << std::endl;
    std::cout.flush();
    std::cin >> input;
    std::cout << "END" << std::endl;
}
require 'open3'

expected_string = "EXECUTE TASK"

Open3.popen2('~/Sandbox/a.out') { |stdin, stdout|
  found = false
  begin
    while (result = stdout.readline)
      puts result
      if result.include?(expected_string)
        found = true
        break
      end
    end
  rescue
    raise "Exception caught while reading lines"
  end
  stdin.write("\n")
  stdin.close
}
Looks like a deadlock:
The Ruby script is calling IO#readlines, which will not return until the entire stream has been read.
The C++ program does not actually terminate until it receives a newline on its standard input.
You may want to call IO#readline, which returns each line as it is received, or reorder the two programs so that there is no deadlock.
It is highly likely that you just need stdout.flush after anything that you expect your correspondent to act on.
Because the pipe set up by #popen is not a tty, it will not default to line buffering. It will do block buffering. You will need to force your stream to act in datagram fashion by calling #flush on "record" boundaries.
Also, see IO#sync= for a way to flush all I/O automatically.