I made a C program which takes two standard inputs automatically and one manually.
#include <stdio.h>
int main()
{
int i, j;
scanf("%d %d", &i, &j);
printf("Automatically entered %d %d\n", i, j);
int k;
scanf("%d", &k);
printf("Manually entered %d", k);
return 0;
}
I want to run this program from a bash script that supplies the first two inputs automatically and leaves the third to be entered manually. This is the script I am using:
#!/bin/bash
./test <<EOF
1
2
EOF
The problem is that EOF is passed as the third input instead of the program asking for manual input. I cannot change my C program, and I cannot supply the third input before the two automatic ones, so how can I do this using bash? I am new to bash scripting; please help.
I made a C program which takes two standard inputs automatically and one manually.
No, you didn't. You made a program that attempts to read three whitespace-delimited decimal integers from the standard input stream. The program cannot distinguish between different origins of those integers.
The problem is that EOF is passed as the third input instead of the program asking for manual input.
No, the problem is that you are redirecting the program's standard input to be a shell here document. The here document provides the whole standard input, much as if your program were reading a file with the here document's contents. When it reaches the end, it does not fall back to reading anything else.
I cannot change my C program, and I cannot supply the third input before the two automatic ones.
I take those two statements to be redundant: you cannot alter the program so that the input you characterize as "manual" is the first one it attempts to read. Not that that would help, anyway.
What you need to do is prepend the fixed input to the terminal input in the test program's standard input stream. There are many ways to do that, but the cat command (mnemonic for concatenate) seems like a natural choice. That would work together with process substitution to achieve your objective. For example,
#!/bin/bash
cat <(echo 1 2) - | ./test
The <(echo 1 2) part executes echo 1 2 and provides its standard output as if it were a file. The cat command concatenates that with its own standard input (represented by -), emitting the result to its standard output. The result is piped into the ./test program.
This provides a means to prepend fixed input under your control to arbitrary data read from the standard input. That is, the wrapper script doesn't need to know what input the program expects after the fixed initial part.
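If process substitution is unavailable (for example under a plain POSIX sh), a grouped command list is a minimal alternative sketch, assuming the same ./test binary as above:
#!/bin/sh
# Emit the fixed values first; then cat forwards the still-open
# terminal input so the third value can be typed interactively.
{ echo 1 2; cat; } | ./test
The grouping runs both commands with their output joined into one pipe, so ./test sees "1 2" followed by whatever is typed at the terminal.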
Your problem is not caused by EOF being passed as the third input, but by stdin for your command reaching end-of-file before the third call to scanf.
One way to solve this is to read the value inside the script and then pass all three of them.
Something like this:
#!/bin/bash
read value
printf '1 2 %s' "$value" | ./test
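A small variation of the same sketch prompts first, so the user knows input is expected (the prompt text is made up here; read -p is a bash extension):
#!/bin/bash
# Ask for the manual value up front, then feed all three values to ./test.
read -p 'Enter the third value: ' value
printf '1 2 %s\n' "$value" | ./test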
I have a Nextflow process that uses a bash script (check_bam.sh) to generate a text file. The only possible content of that text file is either a 0 or some other number. I would like to extract that value and save it to a Nextflow variable so I can use a conditional: if the content of the file is 0, the Nextflow script should skip some processes, and if it is any non-zero number, the execution should be carried out completely. I am not having problems with the use of Nextflow conditionals or with setting channels to empty; the problem is saving the value that is generated inside the script block into a Nextflow variable that is usable outside processes.
The process that generates the file (result_bam.txt) with the 0 or other number is as follows (I have simplified it to make it as clear as possible):
process CHECK_BAM {
input:
path bam from channel_bam
output:
path "result_bam.txt"
path "result_bam.txt" into channel_check_bam
script:
"""
bash bin/check_bam.sh $bam > result_bam.txt
"""
What I am checking is the number of mapped reads in the BAM file, and I would like to save that number into a Nextflow variable because if the number is zero, the execution should skip most of the following processes, but if the number is different than zero, it means that there are mapped reads in the file and the execution should continue as intended.
I have thought that maybe using cat result_bam.txt > $FOO or FOO=$(cat result_bam.txt) could be a solution, but I don't know how to properly save the value so that the variable is usable between processes.
Use an env channel to grab the value from FOO=$(cat result_bam.txt) and turn it into a channel.
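A minimal sketch of that idea, assuming the CHECK_BAM process from the question and Nextflow's env output qualifier (the channel name channel_foo is made up):
process CHECK_BAM {
    input:
    path bam from channel_bam
    output:
    env FOO into channel_foo  // the value of FOO becomes the channel item
    script:
    """
    bash bin/check_bam.sh $bam > result_bam.txt
    FOO=\$(cat result_bam.txt)
    """
}
Note the \$ escape: it keeps Nextflow from interpolating the shell's command substitution.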
A few things come to mind here; hopefully I understand your problem correctly. Is check_bam.sh only counting the lines of the BAM file?
The first option for me would be to check, if possible, whether the BAM file has content from within your pipeline. The countLines documentation might be useful here. You should be cautious, though, as a huge BAM file can lead to a memory exception (countLines "loads" the whole file).
A second, maybe better, option is to pass the file result_bam.txt into the channel channel_check_bam, and then run the following process only when the number in result_bam.txt is greater than 0. So, when you connect this channel to the other process, you should read the content like this:
input:
val bam_lines from channel_check_bam.map{ it.readLines() } // Gives a list of lines, so 1st line will be your number of mapped reads.
when:
bam_lines[0].toInteger() > 0
This way it should run only when the number in result_bam.txt is > 0.
I was testing this with DSL2, so the code might need some small changes, but it works.
Cris Tuñí - Edit: 08/24/2021
Thanks to the help of DawidGaceck, I was able to edit my processes to run only when the number in the file was different from zero. My code ended up looking like this:
process CHECK_BAM {
input:
path bam from channel_bam
output:
path "result_bam.txt"
path "result_bam.txt" into channel_check_bam_process1,
channel_check_bam_process2
script:
"""
bash bin/check_bam.sh $bam > result_bam.txt
"""
process PROCESS1 {
input:
val bam_lines from channel_check_bam_process1.map{ it.readLines() }
when:
bam_lines[0].toInteger() > 0
script:
"""
foo bar baz
"""
Hope this helps anyone with the same question or a similar issue!
Although sort file_name, sort < file_name, and cat file_name | sort all give the same results, I wonder if there is some difference between them and which is the most appropriate way to sort something contained in a file.
Another thing that intrigues me is the use of delimiters. I noticed that the sort filter only works if you separate the strings with a newline. Is there any way to do this without having to write each string on a separate line?
The sort(1) command reads lines of text, analyzes and sorts them, and writes out the result. The command is intended to read lines, and lines in Unix/Linux are terminated by a newline.
The command takes its first non-option argument as the name of the file to read; if none is given, it reads standard input. So:
sort file_name
is a command line with such an argument. The other two examples, "... | sort" and "sort < ...", do not name the file to read as an argument to sort(1), but supply it via its standard input. The effect, as far as sort(1) is concerned, is the same.
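Concretely, assuming a file named file_name, all three forms produce the same sorted output:
sort file_name        # sort opens the file itself
sort < file_name      # the shell connects the file to sort's stdin
cat file_name | sort  # cat reads the file; sort reads the pipe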
any way to do this without having to write each string on a separate line
Ultimately, no. But if you want, you can feed sort through another filter (a program) that reads the non-newline-separated file and produces lines to pass to sort. If such a program exists and is named "myparse", you can do:
myparse non-linefeed-separated-file | sort
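For example, if the items happen to be separated by commas rather than newlines (an assumption for illustration), the standard tr utility can play the role of myparse:
# Turn each comma into a newline so sort sees one item per line.
tr ',' '\n' < comma-separated-file | sort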
The solution using cat involves creating a second process unnecessarily. This could become a performance issue if you perform many such operations in a loop.
When doing input redirection from your file, the shell sets up the association of the file with standard input. If the file does not exist, the shell complains about the file being missing.
When passing the file name as an explicit argument, the sort process has to take care of opening the file and reports an error itself if there is an accessibility problem with it.
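You can see the difference in which program reports the error (a sketch; missing_file is a made-up name, and the exact wording varies by shell and sort implementation):
$ sort missing_file
sort: cannot read: missing_file: No such file or directory
$ sort < missing_file
bash: missing_file: No such file or directory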
I have to run Unix code on a remote system that does not have MATLAB. I call it like this:
% shared key between computers, so no password needs to be typed in
ssh_cmd = 'ssh login@ipaddress ';
for x = 1:nfiles
cmd = sprintf('calcPosition filename%d',x);
% as an example, basically it runs a c++ command
% on that computer and I pass in a different filename as an input
full_cmd = [ssh_cmd, cmd];
[Status,Message] = system(full_cmd);
% the stdout is a mix of strings and numbers
% <code that parses Message and handles errors>
end
For a small test this takes about 30 seconds to run. If I set it so that
full_cmd = [ssh_cmd, cmd1; cmd2; cmdN]; % obviously not valid syntax here
% do the same thing as above, but no loop
it takes about 5 seconds, so most of the time is spent connecting to the other system. But Message is the combined stdout of all n files.
I know I can pipe the outputs to separate files, but I'd rather pass them back directly to MATLAB without additional I/O. So is there a way to get the stdout (Message) of each separate command as a cell array (or a Unix equivalent)? Then I could just loop over the cells of Message, and the remainder of the function would not need to be changed.
I thought about echoing an identifier string around each command call as a way to help parse Message, e.g.:
full_cmd = [ssh_cmd, 'echo "begincommand "; ', cmd1, '; echo "begincommand "; ', cmd2]; % etc
But there must be something more elegant than that. Any tips?
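For what it's worth, here is a sketch of that marker idea expressed as a single remote shell command, so only one connection is made (login@ipaddress, the filenames, and calcPosition are placeholders from the question):
# One ssh call; each command's output is preceded by a marker line
# that the MATLAB side can later split Message on.
ssh login@ipaddress 'for f in filename1 filename2 filename3; do
  echo "begincommand $f"
  calcPosition "$f"
done'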
I am writing a program that takes in a CSV file via the < operator on the command line. After I read the file, I would also like to ask the user questions and have them input responses via the command line. However, whenever I ask for user input, my program skips right over it.
When I searched Stack Overflow I found what seems to be the Python version of this question here, but it doesn't really help me since the methods are obviously different.
I read my file using $stdin.read, and I have tried regular gets, STDIN.gets, and $stdin.gets. However, the program always skips over them.
Sample invocation: ruby ./bin/kata < items.csv
Current file:
require 'csv'
n = $stdin.read
arr = CSV.parse(n)
input = ''
while true
puts "What is your choice: "
input = $stdin.gets.to_i
if input.zero?
break
end
end
My expected result is to have "What is your choice:" displayed in the terminal, with the program waiting for user input. However, I am getting that phrase displayed over and over in an infinite loop. Any help would be appreciated!
You can't read both the file and user input from stdin. You must choose. But since you want both, how about this:
Instead of piping the file content to stdin, pass just the filename to your script. The script will then open and read the file. And stdin will be available for interaction with the user (through $stdin or STDIN).
Here is a minor modification of your script:
require 'csv'
arr = CSV.parse(ARGF) # the important part: ARGF reads the file named on the command line
input = ''
while true
puts "What is your choice: "
input = STDIN.gets.to_i
if input.zero?
break
end
end
And you can call it like this:
ruby ./bin/kata items.csv
You can read more about ARGF in the documentation: https://ruby-doc.org/core-2.6/ARGF.html
This has nothing to do with Ruby. It is a feature of the shell.
A file descriptor is connected to exactly one file at any one time. The file descriptor 0 (standard input) can be connected to a file or it can be connected to the terminal. It can't be connected to both.
So, therefore, what you want is simply not possible. And it is not just not possible in Ruby, it is fundamentally impossible by the very nature of how shell redirection works.
If you want to change this, there is nothing you can do in your program or in Ruby. You need to modify how your shell works.
Is it possible to call a shell command from a Fortran program?
My problem is that I analyze really big files. These files have a lot of lines, e.g. 84084002 or so.
I need to know how many lines a file has before I start the analysis, so I have usually run the shell command wc -l "filename" and then used the resulting number as the value of a variable in my program.
But I would like to call this command from within my program and store the number of lines in a variable.
Yes. The Fortran 2008 standard, already implemented by most of the commonly-encountered Fortran compilers including gfortran, provides a standard intrinsic subroutine execute_command_line which does, approximately, what the widely-implemented but non-standard subroutine system does. As @MarkSetchell has (almost) written, you could try:
INTEGER :: nn, count
CALL execute_command_line('wc -l < file.txt > wc.txt')
OPEN(newunit=nn, file='wc.txt')
READ(nn,*) count
What Fortran doesn't have is a standard way in which to get the number of lines in a file without recourse to the kind of operating-system-dependent workaround above. Other, that is, than opening the file, counting the number of lines, and then rewinding to the start of the file to commence reading.
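That portable fallback might look like the following minimal sketch (the file name and unit handling are assumptions for illustration):
! Count the lines by reading to end-of-file, then rewind for the real pass.
INTEGER :: u, count, ios
OPEN(newunit=u, file='file.txt', status='old')
count = 0
DO
   READ(u, *, iostat=ios)
   IF (ios /= 0) EXIT
   count = count + 1
END DO
REWIND(u)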
You should be able to do something like this:
command='wc -l < file.txt > wc.txt'
CALL system(command)
OPEN(unit=nn,file='wc.txt')
READ(nn,*) count
You can output the number of lines to a file (fort.1):
wc -l file | awk '{print $1}' > fort.1
In your Fortran program, you can then store the number of lines in a variable (e.g. count) by reading the file fort.1:
read (1,*) count
then you can loop count times and read your whole file:
DO i = 1, count
   READ (10, *)   ! 10 = unit number under which your data file was opened
END DO