How to evaluate a Julia expression from the terminal (not the REPL)?

Is there a way to run a (set of) Julia commands without entering the REPL?
E.g. julia.exe "using IJulia; notebook()" doesn't work.
My end goal is to be able to create a clickable batch file that lets me and others I share it with open Jupyter without worrying about the command line or REPL.

You can use the -e flag to the julia executable like this:
julia.exe -e "using IJulia; notebook()"
If you don't want the session to die after running, and you want it to give you a REPL afterwards, you can pass -i as:
julia.exe -e "using IJulia; notebook()" -i
This option and others are documented in the "Getting started" section of the documentation,
or by running the executable with the -h flag:
julia.exe -h
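Given the stated end goal, that one command can go straight into a clickable batch file. A minimal sketch, assuming julia.exe is on the PATH (otherwise spell out its full install path); the file name is just an example:
@echo off
rem start-notebook.bat (hypothetical name): double-clicking this
rem launches Jupyter via IJulia without opening the REPL
julia.exe -e "using IJulia; notebook()"
Double-clicking the .bat file then starts the notebook server exactly as the command line would.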

In addition to the -e option, julia also reads and evaluates stdin, so you can feed it commands using shell pipes/redirections:
$ echo '1+1' | julia
2
$ julia <<EOF
> 1+1
> EOF
2
$ julia <<< 1+1
2

Can't run "compgen -c" from perl script

I want to check if a command exists on my machine (RedHat) inside a perl script.
I'm trying to check whether the output of compgen -c contains the desired command, but running it from inside a script just gives me empty output. Other commands work fine.
example.pl:
my $x = `compgen -c`;
print $x;
# empty output
my $y = `ls -a`;
print $y;
# .
# ..
# example.pl
Are there possible solutions for this? Or is there a better way to check for commands on my machine?
First, Perl runs external commands using /bin/sh, which nowadays is a link to whatever shell serves as a default-of-sorts on your system. Much of the time that is bash, but not always; on RedHat it is.
This compgen is a bash builtin. One way to discover that is to run man compgen (in bash) -- the bash manual pops up. Another way is type, as Dave shows.
To use builtins we generally need to run an explicit shell for them, and their behavior varies depending on whether the shell is "interactive" or not.† I can't find this discussed for this builtin in the bash documentation, but experimentation reveals that you need
my @completions = qx(bash -c "compgen -c");
The quotes are needed in order to pass the complete command to the shell that will be started.
Note that this way you don't capture any STDERR from those commands; it will come out on the terminal and can easily be missed. Alternatively, you can redirect that stream in the command by adding 2>&1 (redirect to STDOUT) at the end of it.
This is one of the reasons to use one of a number of good libraries for running and managing external commands instead of the builtin "backticks" (the qx I use above is an operator form of it.)
† This can be facilitated with -i
my @output_lines = qx(bash -i -c "command with arguments");
It's because compgen is a bash built-in command, not an external command. And when you run a command using backticks, you get your system's default shell - which is probably going to be /bin/sh, not bash.
The solution is to explicitly run bash, using the -c command-line option to give it a command to run.
my $x = `bash -c 'compgen -c'`;
From a bash prompt, you can use type to see how a command is implemented.
$ type ssh
ssh is /usr/bin/ssh
$ type compgen
compgen is a shell builtin
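As for a better way to check whether a command exists: POSIX specifies command -v, which works in a plain non-interactive sh, so it sidesteps the builtin problem entirely. A sketch (gcc is just a placeholder for whatever command you are looking for):
# command -v prints how the name would be resolved and exits 0 if
# it is found, non-zero otherwise, so it works cleanly in a test
if command -v gcc >/dev/null 2>&1; then
    echo "gcc is available"
else
    echo "gcc is missing"
fi
From Perl you can run this via system() and check the exit status, instead of parsing compgen's output.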

Running for loop using some other user

I am trying to execute a command using some other user. Here is my code
sudo -i -u someuser bash -c 'for i in 1 2 3; do echo $i; done'
I am expecting the output 1 2 3, but executed as someuser. The above code prints blank lines. I tried adding some other commands:
sudo -i -u someuser bash -c 'for i in 1 2 3; do ls; done'
somefile1.txt somefile2.txt
somefile1.txt somefile2.txt
somefile1.txt somefile2.txt
If I try the loop with the current user, it gives the expected output:
for i in 1 2 3; do echo $i; done
1
2
3
It looks like bash is unable to resolve the variable $i inside the for loop. I tried the escape character \ but it isn't helping.
TL;DR: Don't use sudo -i with bash -c
The usual way to use sudo -i is without any arguments, in which case it simply starts an interactive login shell.
If you really must have a login shell for some reason (which isn't good practice for running scripts), it's much saner to simply add the extra arguments needed to make your shell a login shell to the bash command itself, and keep sudo out of the business of changing the arguments you pass it:
sudo -u someuser bash -lic 'for i in 1 2 3; do echo "$i"; done'
...or...
sudo -u someuser -i <<'EOF'
for i in 1 2 3; do echo "$i"; done
EOF
The Gory Details
When you use sudo -i with arguments, it rewrites the argument list it is given, concatenating the arguments into a single command that can be put into the argument after -c, so you get something like {"sh", "-c", "bash -c ..."}. In concatenating the arguments, sudo uses the logic from the parse_args handling for MODE_LOGIN_SHELL, adding an escape character before every character that is not alphanumeric, _, - or $; keeping $ out of this list was introduced in commitish 6484574f, tagged as a fix for bug #564 (which was itself introduced by the fix to bug #413 -- personally, I think we would all be better off if bug 413 had been left in place rather than making any attempt to fix it).
See also sh -c does not expand positional parameters if I run it from sudo --login over at Unix & Linux Stack Exchange.
Since this behavior was deliberately put in place in 2013, I doubt there's any fixing it at this point -- any change to sudo's escaping behavior has the potential to modify the security properties of existing scripts.
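To see where the blank lines come from, here is roughly what the rewritten invocation looks like; a hand-written sketch (the exact escaping depends on your sudo version), in which the outer shell expands the unescaped $i (unset there, hence empty) before the inner bash ever runs the loop:
# approximate equivalent of what sudo -i constructs: the escaped
# spaces and semicolons keep the loop as a single word for the inner
# bash -c, but the unescaped $i is expanded by the outer sh
sh -c 'bash -c for\ i\ in\ 1\ 2\ 3\;\ do\ echo\ $i\;\ done'
# prints three blank lines, matching the symptom above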

Bash exec in infinite while loop doesn't work

#!/bin/bash
while :
do
echo twerkin
exec free -h > /var/www/raspberry/load.txt
exec /opt/vc/bin/vcgencmd measure_temp > /var/www/raspberry/heat.txt
done
This is what I made; I am going to read the output on my website, but that's not the problem. The problem is that it gives me this error:
pi@raspberrypi ~ $ sh showinfo.sh
showinfo.sh: 7: showinfo.sh: Syntax error: "done" unexpected (expecting "do")
I'm running this on my raspberry pi with Raspbian (Debian Wheezy)
You don't seem to understand what the exec keyword does. It replaces your current script (basically, the while loop) with the free command.
To run a command, simply specify it.
#!/bin/bash
while : ; do
free -h
/opt/vc/bin/vcgencmd measure_temp
done
(I omitted the redirections because you were overwriting the old results on each iteration. What do you actually want?)
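If the website should always read the latest snapshot, truncating with > on each pass is arguably the behavior you want; just add a delay so the loop doesn't spin flat out. A sketch, keeping the paths from the question:
#!/bin/bash
# rewrite the snapshot files once per second; > truncates each time,
# so the website only ever sees the most recent values
while : ; do
    free -h > /var/www/raspberry/load.txt
    /opt/vc/bin/vcgencmd measure_temp > /var/www/raspberry/heat.txt
    sleep 1
done
If you want a growing log instead, append with >> as the next answer shows.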
Try changing a few things in your script:
#!/bin/bash
while : ;
do
echo twerkin
#### commands to execute do not need to be prefixed with exec
#### + append output with >>
free -h >> /var/www/raspberry/load.txt
#### same here
/opt/vc/bin/vcgencmd measure_temp >> /var/www/raspberry/heat.txt
#### you probably want to add a delay here; otherwise
#### the loop will run way too fast
# sleep 0.01
done

How to change argv0 in bash so command shows up with different name in ps?

In a C program I can write to argv[0] and the new name shows up in a ps listing.
How can I do this in bash?
You can do it when running a new program via exec -a <newname>.
Just for the record, even though it does not exactly answer the original poster's question, this is something trivial to do with zsh:
ARGV0=emacs nethack
I've had a chance to go through the source for bash and it does not look like there is any support for writing to argv[0].
I'm assuming you've got a shell script that you wish to execute such that the script process itself has a new argv[0]. For example (I've only tested this in bash, so I'm using that, but it may work elsewhere):
#!/bin/bash
echo "process $$ here, first arg was $1"
ps -p $$
The output will be something like this:
$ ./script arg1
process 70637 here, first arg was arg1
PID TTY TIME CMD
70637 ttys003 0:00.00 /bin/bash ./script arg1
So ps shows the shell, /bin/bash in this case. Now try your interactive shell's exec -a, but in a subshell so you don't blow away the interactive shell:
$ (exec -a MyScript ./script arg1)
process 70936 here, first arg was arg1
PID TTY TIME CMD
70936 ttys008 0:00.00 /bin/bash /path/to/script arg1
Whoops, still showing /bin/bash. What happened? The exec -a probably did set argv[0], but then a new instance of bash started because the operating system read #!/bin/bash at the top of your script. OK, what if we perform the exec'ing inside the script somehow? First, we need some way of detecting whether this is the "first" execution of the script or the second, exec'ed instance; otherwise the second instance will exec again, and on and on in an infinite loop. Next, we need the executable to not be a file with a #!/bin/bash line at the top, to prevent the OS from changing our desired argv[0]. Here's my attempt:
$ cat ./script
#!/bin/bash
__second_instance="__second_instance_$$"
[[ -z ${!__second_instance} ]] && {
declare -x "__second_instance_$$=true"
exec -a MyScript "$SHELL" "$0" "$@"
}
echo "process $$ here, first arg was $1"
ps -p $$
Thanks to this answer, I first test for the environment variable __second_instance_$$, based on the PID (which does not change through exec) so that it won't collide with other scripts using this technique. If it's empty, I assume this is the first instance, so I export that environment variable and then exec. But, importantly, I do not exec this script; I exec the shell binary directly, with this script ($0) as an argument, passing along all the other arguments as well ($@). The environment variable is a bit of a hack.
Now the output is this:
$ ./script arg1
process 71143 here, first arg was arg1
PID TTY TIME CMD
71143 ttys008 0:00.01 MyScript ./script arg1
That's almost there. The argv[0] is MyScript like I want, but there's that extra arg ./script, which is a consequence of executing the shell directly (rather than via the OS's #! processing). Unfortunately, I don't know how to do any better than this.
Update for Bash 5.0
Looks like Bash 5.0 adds support for writing to the special variable BASH_ARGV0, so this should become far simpler to accomplish.
(see release announcement)
( exec -a foo bash -c 'echo $0' )
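A minimal sketch of the new variable in action (requires bash 5.0 or newer):
#!/bin/bash
# assigning to BASH_ARGV0 also updates $0 (bash 5.0+)
BASH_ARGV0="MyScript"
echo "$0"    # prints: MyScript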
ps and others inspect two things, neither of which is argv0: /proc/PID/comm (for the "process name") and /proc/PID/cmdline (for the command line). Assigning to argv0 will not change what ps shows in the CMD column, but it will change what the process usually sees as its own name (in output messages, for example).
To change the CMD column, write to /proc/PID/comm:
echo -n mynewname >/proc/$$/comm; ps
You cannot write to or modify /proc/PID/cmdline in any way.
Processes can set their own "title" by writing to the memory area in which argv & envp are located (note that this is different from setting BASH_ARGV0). This has the side effect of changing /proc/PID/cmdline as well, which is what some daemons do in order to prettify (hide?) their command lines. libbsd's setproctitle() does exactly that, but you cannot do that in Bash without the support of external tools.
I will just add that this must be possible at runtime, at least in some environments: assigning to $0 in perl on linux does change what shows up in ps. I do not know how that is implemented, however. If I can find out, I'll update this.
edit:
Based on how perl does it, it is non-trivial. I doubt there is any built-in way to do it in bash at runtime, but I don't know for sure. You can see how perl sets the process name at runtime.
Copy the bash executable to a different name.
You can do this in the script itself...
cp /bin/bash ./new-name
PATH=$PATH:.
exec new-name "$0"
If you are trying to pretend you are not a shell script, you can rename the script itself to something cool, or even " " (a single space), so that
exec new-name " "
will execute your script under bash and it appears in the ps listing as just new-name.
OK, so calling a script " " is a very bad idea :)
Basically, to change the name displayed as
bash script
you rename bash and you rename the script.
If you are worried, as Mr McDoom apparently is, about copying a binary to a new name (which is entirely safe), you could also create a symlink:
ln -s /bin/bash ./MyFunkyName
./MyFunkyName
This way, the symlink is what appears in the ps listing. (Again, use PATH=$PATH:. if you don't want the ./)

Using bash shell inside Matlab

I'm trying to put a large set of bash commands into a matlab script and manage my variables (like file paths, parameters, etc.) from there. I also need this because the workflow requires manual intervention at certain steps, and I would like to use the step debugger for that.
The problem is, I don't understand how matlab interfaces with bash shell.
I can't do system('source .bash_profile') to define my bash variables. Similarly, I can't define them by hand and then read them back; e.g. system('export var=somepath') followed by system('echo $var') returns nothing.
What is the correct way of defining variables in bash inside matlab's command window? How can I construct a workflow of commands which will use the variables I defined as well as those in my .bash_profile?
If all you need to do is set environment variables, do this in MATLAB:
>> setenv('var','somepath')
>> system('echo $var')
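Keep in mind that each system() call spawns a fresh shell, so anything exported in one call is gone by the next. If you need several dependent shell commands, chain them inside a single call:
>> system('export var=somepath && echo $var')   % one shell, so this prints somepath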
Invoke Bash as a login shell to get your ~/.bash_profile sourced and use the -c option to execute a group of shell commands in one go.
# in Terminal.app
man bash | less -p 'the --login option'
man bash | less -p '-c string'
echo 'export profilevar=myProfileVar' >> ~/.bash_profile
# test in Terminal.app
/bin/bash --login -c '
echo "$0"
echo "$3"
echo "$#"
export var=somepath
echo "$var"
echo "$profilevar"
ps
export | nl
' zero 1 2 3 4 5
# in Matlab
cmd=sprintf('/bin/bash --login -c ''echo "$profilevar"; ps''');
[r,s]=system(cmd);
disp(s);
