how to tell if my C program was invoked via shebang? - shell

I've built a little command interpreter (in C++) which can be invoked either directly, or in a script via shebang (#!). It can take arguments on the command line (which appear as argc/argv in my code).
Trouble is, when invoked via shebang, the script itself gets passed to my program as argument 1. That's problematic; I don't want my command interpreter trying to process the script that it was invoked from. But I can't see any easy way to tell when this is the case.
EDIT: As an example, if I have a script called "test" which starts with #!/usr/local/bin/miniscript, and then invoke it as ./test --help -c -foo, I get five arguments in my C code: /usr/local/bin/miniscript, ./test, --help, -c, and -foo. If I invoke it directly, then I get four arguments: /usr/local/bin/miniscript, --help, -c, and -foo.
How can I tell when my program was invoked via a shebang, or otherwise know to skip the argument that represents the script it was invoked by?

My question was based on a wrong assumption. I believed that two things were happening when a script was invoked via shebang:
1. The path to the script was passed to my program as the first argument.
2. The contents of the script were piped to my program's stdin.
So I was essentially worried about processing the content twice. But only item 1 is true; item 2 does not happen (as pointed out by helpful commenters on my question). So if the C code accepts the name of a file to process as its first argument, and ignores an initial line starting with a shebang, then all is right with the world.
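As a rough illustration of that fix (sketched in shell rather than C, with made-up names), the interpreter simply treats its first argument as the file to run and skips a leading #! line:
#!/bin/sh
# toy stand-in for /usr/local/bin/miniscript: $1 is the script file to run,
# any remaining arguments are ordinary options
script="$1"
shift
# print every line except a leading "#!" line; real interpreter logic goes here
awk 'NR == 1 && /^#!/ { next } { print }' "$script"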

Related

When data is piped from one program via | is there a way to detect what that program was from the second program?

Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside the 'my_script' command, to detect which command produced the pipe's input (in the above example, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variables in the second command's process that record the full command line. The command data that my_script sees (via /proc etc.) is just ./my_script, and it doesn't include any information about being run as part of a pipe. Checking the process list from inside the second command doesn't seem to provide any data either, since the first process seems to exit before the second starts.
The best information I've been able to find suggests that in bash you can, in some cases, get the exit codes of the processes in the pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research suggests it's impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force third-party pipe-replacement tools to be used instead of built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present as the command is run.
(update adding in later information following on from comments below)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the value pipe:[PIPEID] that it symlinks to. That can then be used to search through the rest of the /proc/*/fd/ data of other running processes to find another process with the same PIPEID open, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the proc data, preventing it being read). So I can't rely on this method returning any information.
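For what it's worth, a rough sketch of that /proc probing in bash (Linux only, and subject to the same race: it only works while the writer is still running):
#!/bin/bash
# find the pid of whatever is writing into our stdin pipe
pipe_id=$(readlink /proc/$$/fd/0)            # e.g. "pipe:[123456]" when stdin is a pipe
case "$pipe_id" in
pipe:*)
    for fd in /proc/[0-9]*/fd/*; do
        [ "$(readlink "$fd" 2>/dev/null)" = "$pipe_id" ] || continue
        pid=${fd#/proc/}; pid=${pid%%/*}     # extract the owning pid from the path
        [ "$pid" = "$$" ] && continue        # skip our own process
        tr '\0' ' ' < "/proc/$pid/cmdline"; echo   # the other end's command line
    done
    ;;
esac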

Simple Linux shred script error

I have an ultra-simple script that wraps the Linux program shred, with the parameters that have always worked for me from the command line (bash): specifically, 'shred -uzn 35'.
The script, named D, has execute permissions set.
When I run the script, bash prints an error:
$ D some_file_to_delete
shred: missing file operand
I realize that the solution to the problem is probably as simple as the program itself. Please help?
Thanks in advance.
EDIT: The error "missing file operand" was due to the fact that the script was not set up to take arguments, such as via "$@". Also, as stated in the accepted answer, I agree that an alias makes total sense for such a scenario (much more sense than, say, a script somewhere in $PATH).
Since you are using a script, not an alias, you need to pass the arguments through:
shred -uzn 35 "$@"
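Put together, the whole of D is just that one line plus a shebang (a minimal sketch using the options from the question):
#!/bin/bash
# D: pass every file named on the command line straight through to shred
shred -uzn 35 "$@"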
In this case, however, I suggest you do make it an alias. In your .bashrc file, add this:
alias D='shred -uzn 35'

Result of shell script as build setting

Is it possible to run a shell script and use its result as a user defined macro in Xcode?
Basically I just want the result of a shell script to be put in a variable so it gets set in Info.plist (just like ${EXECUTABLE_NAME} etc.)
For example:
If I add $(/usr/bin/whoami) as a build setting condition (at the bottom of settings of the build configuration) it just sets an empty string.
See this question for a couple of different approaches. All of them require adding a "Run Script" build phase.
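As a rough sketch of what such a phase might contain (the BuiltBy key is made up for illustration; PlistBuddy ships with macOS, and TARGET_BUILD_DIR/INFOPLIST_PATH are standard Xcode build settings):
# "Run Script" build phase, run after the Info.plist has been copied
PLIST="${TARGET_BUILD_DIR}/${INFOPLIST_PATH}"
# Set fails if the key doesn't exist yet; use "Add :BuiltBy string ..." the first time
/usr/libexec/PlistBuddy -c "Set :BuiltBy $(/usr/bin/whoami)" "$PLIST"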
Assuming a bash like shell, and given an almost complete lack of context for your problem, try
EXECUTABLE_NAME=$( scriptToGetEXEC_NAME )
PRODUCT_NAME=$( scriptToGetPROD_NAME)
The $( ... cmd ... ) construct is called command substitution. What this means is that when the shell scans each line of code, it first looks to see if there are any $(...) constructs embedded (among other things). If there are, it spawns a new shell, executes the code inside, and if any text is returned, that text is embedded in the command line; THEN the shell scans the line again and eventually executes everything from left to right, assuming that the first word will turn out to be a built-in command or a command on the PATH.
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.

How to call bash commands from tcl script?

Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a tcl script which calls bash commands directly (i.e. without exec).
How can I make tclsh recognize bash commands while interpreting tcl script files, just like it does in an interactive session?
I guess there is some tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of bash commands. How can I load it manually in tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <@stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls, because that's not an existing Tcl command. By default, that proc checks whether the command was invoked from an interactive session at the top level (not indirectly inside a proc body), and if so it looks to see whether an executable of that name exists somewhere on the PATH. You can get something like this behaviour in scripts by writing your own proc unknown.
For a good start on this, examine the output of
info body unknown
One thing you should know is that ls is not a Bash command; it's a standalone utility. The clue to how tclsh runs such utilities is right there in its name: sh means "shell". So it's the rough equivalent of Bash, in that Bash is also a shell. Tcl != tclsh, so you have to use exec.

C Shell: How to execute a program with non-command line arguments?

My $SHELL is tcsh. I want to run a C shell script that will call a program many times, with some arguments changed each time. The program I need to call is written in Fortran, and I do not want to edit it. The program only takes its arguments once it is executed, not on the command line. Upon calling the program in the script, the program takes control (this is where I am stuck currently; I can never get out, because the script will not execute anything until after the program process stops). At this point I need to pass it some variables, and after several iterations I will need to Ctrl+C out of the program and continue with the script.
How can this be done?
To add to what @Toybuilder said, you can use a "here document". I.e. your script could have
./myfortranprogram << EOF
first line of input
second line of input
EOF
Everything between the "<<EOF" and the "EOF" will be fed to the program's standard input (does Fortran still use "read (5,*)" to read from standard input?)
And because I think @ephemient's comment deserves to be in the answer:
Some more tips: <<'EOF' prevents interpolation in the here-doc body; <<-EOF removes all leading tabs (so you can indent the here-doc to match its surroundings), and EOF can be replaced by any token. An empty token (<<"") indicates a here-doc that stops at the first empty line.
I'm not sure how portable those ones are, or if they're just tcsh extensions - I've only used the <<EOF type "here document" myself.
What you want to use is Expect.
Uhm, can you feed your Fortran code with a redirection? You can create a temporary file with your inputs, and then pipe it in with the stdin redirect (<).
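A minimal sketch of that idea (file and program names are placeholders):
# put the answers to the program's prompts into a temporary file
printf '%s\n' "first line of input" "second line of input" > answers.txt
./myfortranprogram < answers.txt
rm answers.txt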
This is a job for the unix program expect, which can nicely and easily interactively command programs and respond to their prompts.
I was sent here after being told my question was close to being a duplicate of this one.
FWIW, I had a similar problem with a csh C shell script.
This bit of code was allowing the custom_command to execute without getting ANY input arguments:
foreach f ($forecastTimes)
custom_command << EOF
arg1=x$f;2
arg2=ya
arg3=z,z$f
run
exit
EOF
end
It didn't work the first time I tried it. After backspacing out all of the whitespace in that section of the code, I removed the space between the "<<" and the "EOF", and I also backspaced the closing "EOF" all the way to the left margin. After that it worked:
foreach f ($forecastTimes)
custom_command <<EOF
arg1=x$f;2
arg2=ya
arg3=z,z$f
run
exit
EOF
end
Not a tcsh user, but if the program runs and then reads its commands via stdin, you can use shell redirection (<) to feed it the required commands. If you run it in the background with &, it will not block the script while it executes. Then you can sleep for a bit, use whatever tools you have (ps, grep, awk, etc.) to discover the program's PID, and use kill to send it SIGINT, which is what Ctrl-C does (or SIGTERM to terminate it).
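A rough sketch of that approach in a POSIX shell (program and file names are placeholders):
#!/bin/sh
# start the program in the background, feeding it commands from a file
./myfortranprogram < commands.txt &
prog_pid=$!               # PID of the backgrounded program
sleep 30                  # let it run for a while
kill -INT "$prog_pid"     # the signal Ctrl-C sends; use -TERM if INT is ignored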
