It behaves as if I had given the literal string "$(python" as the argument.
It should behave as if AAAA were the argument, which it does not.
That is why I am unable to use shell code.
I want the python output to be the argument of run.
$() is not working. What is the alternative, or does it work on Windows?
GDB's "run" command doesn't do shell expansion - it will pass the arguments $(python, -c, "print and so on.
However, you can start GDB with:
gdb --args ./yourprogram $(python -c "print 'AAAA'")
and then run:
run
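Alternatively, you can capture the python output in a shell variable first and hand it to gdb via --args (a minimal sketch; PAYLOAD is just a name chosen for this example):
PAYLOAD=$(python -c "print 'AAAA'")
gdb --args ./yourprogram "$PAYLOAD"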
I want to check if a command exists on my machine (RedHat) inside a perl script.
I'm trying to check whether the output of compgen -c contains the desired command, but running it from inside a script just gives me empty output. Other commands work fine.
example.pl:
my $x = `compgen -c`;
print $x;
# empty output
my $y = `ls -a`;
print $y;
# .
# ..
# example.pl
Are there possible solutions for this? Or is there a better way to check for commands on my machine?
First, Perl runs external commands using /bin/sh, which is nowadays a link to whatever shell serves as the system default. Much of the time that is bash, but not always; on RedHat it is.
This compgen is a bash builtin. One way to discover that is to run man compgen (in bash) -- the bash manual pops up. Another way is type, as Dave shows.
To use builtins we generally need to run an explicit shell for them, and their behavior varies depending on whether the shell is "interactive" or not.† I can't find a discussion of this in the bash documentation for this builtin, but experimentation reveals that you need
my @completions = qx(bash -c "compgen -c");
The quotes are needed in order to pass the complete command to the shell that will be started.
Note that this way you don't capture any STDERR from those commands; it comes out on the terminal, where it can easily be missed. Alternatively, you can redirect that stream in the command by adding 2>&1 (redirect to STDOUT) at the end of it.
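For example, the same qx call with the redirection added (a minimal sketch):
my @completions = qx(bash -c "compgen -c" 2>&1);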
This is one of the reasons to use one of the many good libraries for running and managing external commands instead of the builtin "backticks" (the qx I use above is an operator form of it); see the sketch after the footnote below.
† This can be facilitated with -i
my @output_lines = qx(bash -i -c "command with arguments");
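Going back to the library suggestion above, here is a sketch using Capture::Tiny, one such module (assuming it is installed from CPAN):
use Capture::Tiny qw(capture);
my ($stdout, $stderr, $exit) = capture {
    system('bash', '-c', 'compgen -c');
};
This captures both output streams separately instead of mixing them together.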
It's because compgen is a bash built-in command, not an external command. And when you run a command using backticks, it is executed by your system's default shell - which is probably going to be /bin/sh, not bash.
The solution is to explicitly run bash, using the -c command-line option to give it a command to run.
my $x = `bash -c 'compgen -c'`;
From a bash prompt, you can use type to see how a command is implemented.
$ type ssh
ssh is /usr/bin/ssh
$ type compgen
compgen is a shell builtin
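As an aside, if all you need is to check whether one particular command exists, the POSIX-standard command -v avoids the bash-only compgen entirely (a minimal sketch):
if command -v ssh >/dev/null 2>&1; then
    echo "ssh is available"
fi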
I am trying to run rsync from a batch file. The command is
SET CMD="rsync -P -rptz --delete -e 'ssh -i /root/.ssh/CERTIFICATE.pem' SOURCE_ADDRESS /mnt/c/Users/MYNAME/IdeaProjects/PROJECT/SUBFOLDER/SUBFOLDER/SUBFOLDER/SUBFOLDER/LASTFOLDER"
bash %CMD%
This works fine if I run the command after typing bash, but when I run the command from cmd with the bash prefix it says No such file or directory.
Additionally, when playing around and trying to debug, bash ends up hanging... i.e. if I open bash I get no prompt, just a blinking cursor.
Any help is appreciated.
To run a command with bash you need to use the -c option:
bash -c "%CMD%"
Without it, the first non-option parameter is treated as the name of a shell script to run, which rsync isn't, so it causes an error:
If arguments remain after option processing, and neither the -c nor the -s option has been supplied, the first argument is assumed to be the name of a file containing shell commands.
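You can see the difference with a trivial command (a minimal sketch):
bash "echo hi"       # error: bash: echo hi: No such file or directory
bash -c "echo hi"    # runs the command and prints "hi"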
Note that cmd on Windows is not DOS, even though they share a few similar commands; the rest are vastly different.
I'm writing a bash script that starts the tcsh interpreter as a login shell and has it execute my_command. The tcsh man page says that there are two ways to start a login shell. The first is to use /bin/tcsh -l with no other arguments. Not an option, because I need the shell to execute my_command. The second is to specify a dash (-) as the zeroth argument.
Now the bash exec command with the -l option does exactly this, and in fact the following works perfectly:
#!/bin/bash
exec -l /bin/tcsh -c my_command
Except... I can't use exec, because I need the script to come back and do some other things afterwards! So how can I specify - as the zeroth argument to /bin/tcsh without using exec?
You can enclose the exec command in a subshell within your script.
#!/bin/bash
(exec -l /bin/tcsh -c my_command)
# ... whatever else you need to do after the command is done
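If you want to confirm that this really produces a login shell, tcsh sets its loginsh shell variable in login shells, so a quick check looks like this (a sketch; $?loginsh tests whether the variable is set):
( exec -l /bin/tcsh -c 'echo $?loginsh' )    # prints 1 for a login shell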
You can write a wrapper (w.sh) script that contains:
#!/bin/bash
exec -l /bin/tcsh -c my_command
and execute w.sh in your main script.
Here is the offending part of my script:
read -d '' TEXT <<'EOF'
Some Multiline
text that
I would like
in
a
var
EOF
echo "$TEXT" > ~/some/file.txt
and the error:
read: 175: Illegal option -d
I use this read -d all over the place and it works fine. Not sure why it's not happy now. I'm running the script on Ubuntu 10.10.
Fixes? Workarounds?
If you run sh and then try that command, you get:
read: 1: Illegal option -d
If you do it while still in bash, it works fine.
I therefore deduce that your script is not running under bash.
Make sure that your script begins with the line:
#!/usr/bin/env bash
(or equivalent) so that the correct shell is running the script.
Alternatively, if you cannot do that (because the script is not a bash one), just be aware that -d is a bash feature and may not be available in other shells. In that case, you will need to find another way.
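For instance, if the script must stay POSIX, one way to get the same effect without read -d is to capture the here-document with command substitution (a sketch based on the snippet above):
TEXT=$(cat <<'EOF'
Some Multiline
text that
I would like
in
a
var
EOF
)
echo "$TEXT" > ~/some/file.txt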
The -d option to read is a bash extension, not part of the POSIX standard (which specifies only the -r option to read). When you run your script with sh on Ubuntu, it gets run with dash, which is a POSIX shell, not bash. If you want the script to run under bash then you should run it with bash, or give it a #!/bin/bash shebang. Otherwise, it should be expected to run under any POSIX sh.
In a C program I can write to argv[0] and the new name shows up in a ps listing.
How can I do this in bash?
You can do it when running a new program via exec -a <newname>.
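For example (a sketch; the subshell keeps your interactive shell alive, since exec replaces the current process):
( exec -a myname sleep 60 ) &
ps -o pid,args -p $!    # shows "myname 60" as the command line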
Just for the record, even though it does not exactly answer the original poster's question, this is something trivial to do with zsh:
ARGV0=emacs nethack
I've had a chance to go through the source for bash and it does not look like there is any support for writing to argv[0].
I'm assuming you've got a shell script that you wish to execute such that the script process itself has a new argv[0]. For example (I've only tested this in bash, so I'm using that, but it may work elsewhere).
#!/bin/bash
echo "process $$ here, first arg was $1"
ps -p $$
The output will be something like this:
$ ./script arg1
process 70637 here, first arg was arg1
PID TTY TIME CMD
70637 ttys003 0:00.00 /bin/bash ./script arg1
So ps shows the shell, /bin/bash in this case. Now try your interactive shell's exec -a, but in a subshell so you don't blow away the interactive shell:
$ (exec -a MyScript ./script arg1)
process 70936 here, first arg was arg1
PID TTY TIME CMD
70936 ttys008 0:00.00 /bin/bash /path/to/script arg1
Whoops, still showing /bin/bash. What happened? The exec -a probably did set argv[0], but then a new instance of bash started because the operating system read #!/bin/bash at the top of your script. OK, what if we perform the exec'ing inside the script somehow? First, we need some way of detecting whether this is the "first" execution of the script or the second, exec'ed instance; otherwise the second instance will exec again, and on and on in an infinite loop. Next, we need the executable to not be a file with a #!/bin/bash line at the top, to prevent the OS from changing our desired argv[0]. Here's my attempt:
$ cat ./script
#!/bin/bash
__second_instance="__second_instance_$$"
[[ -z ${!__second_instance} ]] && {
    declare -x "__second_instance_$$=true"
    exec -a MyScript "$SHELL" "$0" "$@"
}
echo "process $$ here, first arg was $1"
ps -p $$
Thanks to this answer, I first test for the environment variable __second_instance_$$, based on the PID (which does not change through exec), so that it won't collide with other scripts using this technique. If it's empty, I assume this is the first instance, and I export that environment variable, then exec. But, importantly, I do not exec this script; I exec the shell binary directly, with this script ($0) as an argument, passing along all the other arguments as well ($@). The environment variable is a bit of a hack.
Now the output is this:
$ ./script arg1
process 71143 here, first arg was arg1
PID TTY TIME CMD
71143 ttys008 0:00.01 MyScript ./script arg1
That's almost there. The argv[0] is MyScript like I want, but there's that extra arg ./script in there which is a consequence of executing the shell directly (rather than via the OS's #! processing). Unfortunately, I don't know how to get any better than this.
Update for Bash 5.0
Looks like Bash 5.0 adds support for writing to special variable BASH_ARGV0, so this should become far simpler to accomplish.
(see release announcement)
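A minimal sketch of the 5.0 approach (assumes bash >= 5.0; assigning BASH_ARGV0 also sets $0):
#!/bin/bash
BASH_ARGV0="MyScript"
echo "$0"    # prints: MyScript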
( exec -a foo bash -c 'echo $0' )
ps and others inspect two things, neither of which is argv[0]: /proc/PID/comm (for the "process name") and /proc/PID/cmdline (for the command line). Assigning to argv[0] will not change what ps shows in the CMD column, but it will change what the process usually sees as its own name (in output messages, for example).
To change the CMD column, write to /proc/PID/comm:
echo -n mynewname >/proc/$$/comm; ps
You cannot write to or modify /proc/PID/cmdline in any way.
Processes can set their own "title" by writing to the memory area where argv and envp are located (note that this is different from setting BASH_ARGV0). This has the side effect of changing /proc/PID/cmdline as well, which is what some daemons do in order to prettify (or hide?) their command lines. libbsd's setproctitle() does exactly that, but you cannot do that in Bash without the support of external tools.
I will just add that this must be possible at runtime, at least in some environments. Assigning $0 in Perl on Linux does change what shows up in ps. I do not know how that is implemented, however. If I can find out, I'll update this.
edit:
Based on how Perl does it, it is non-trivial. I doubt there is any bash built-in way to do it at runtime, but I don't know for sure. You can see how Perl sets the process name at runtime.
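For what it's worth, the Perl behavior is easy to see from the shell with a one-liner (a sketch; ps output format varies by platform):
perl -e '$0 = "mynewname"; system("ps -p $$ -o pid,args")'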
Copy the bash executable to a different name.
You can do this in the script itself...
cp /bin/bash ./new-name
PATH=$PATH:.
exec new-name "$0"
If you are trying to pretend you are not a shell script, you can rename the script itself to something cool, or even " " (a single space), so that
exec new-name " "
will execute your script via bash and appear in the ps list as just new-name.
OK, so calling a script " " is a very bad idea :)
Basically, to change the name shown for
bash script
you rename bash and rename the script.
If you are worried, as Mr McDoom apparently is, about copying a binary to a new name (which is entirely safe), you could also create a symlink:
ln -s /bin/bash ./MyFunkyName
./MyFunkyName
This way, the symlink is what appears in the ps list. (Again, use PATH=$PATH:. if you don't want the ./ prefix.)