How can I call GAP functions from a shell script? - shell

I want to get the result of a function of the GAP software. This is an interactive command-line tool used mainly by mathematicians who work on group-theory-related topics.
The documentation/FAQ states, under 8.1: Can I call GAP functions from another programme?, that this is in general not possible. However, by running GAP as a child process and communicating with it using pipes, pseudo-ttys, UNIX FIFOs or some similar device, it can be done.
An example session using a package called CrystCat (Crystallographic Groups Catalog) looks like this:
$ gap
gap> LoadPackage( "CrystCat" );
gap> DisplaySpaceGroupType( "P1" );
#I Space-group type (3,1,1,1,1); IT(1) = P1; orbit size 1; fp-free
gap> quit;
$ # exited 'gap' and back in my shell
As I am not familiar with these techniques, can someone show me a minimal example with the following functionality:
$ ./script.sh "P1"
#I Space-group type (3,1,1,1,1); IT(1) = P1; orbit size 1; fp-free
$
UPDATE: The accepted answer to this question doesn't work.

Answer by gap-support (using stdin read-in capability of gap)
#!/bin/sh
if [ "$#" != "1" ]; then
    echo "Usage: $0 <string>"
    exit 1
fi
# Feed the GAP commands in on stdin via a here-document;
# -r skips the user's init files, -b suppresses the banner, -q suppresses prompts.
gap -r -b -q << EOI
LoadPackage( "CrystCat" );
DisplaySpaceGroupType( "$1" );
EOI
It works exactly as asked:
$ ./script.sh P1
#I Space-group type (3,1,1,1,1); IT(1) = P1; orbit size 1; fp-free
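The same stdin read-in also works with a plain pipe instead of a here-document; a minimal one-liner sketch, assuming the CrystCat package is installed:
$ echo 'LoadPackage( "CrystCat" ); DisplaySpaceGroupType( "P1" );' | gap -r -b -q
#I Space-group type (3,1,1,1,1); IT(1) = P1; orbit size 1; fp-free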

Related

Parallel subshells doing work and reporting status

I am trying to do work in all subfolders in parallel in bash, and report a status per folder once it is done.
Suppose I have a work function which can return a couple of statuses:
# param #1 is the folder
# can return 1 on fail, 2 on success, 3 if nothing happened
work(){
    cd "$1"
    # some update thing
    return 1  # or 2, or 3
}
Now I call this in my wrapper function:
do_work(){
    while read -r folder; do
        tput cup "${row}" 20
        echo -n "${folder}"
        (
            ret=$(work "${folder}")
            tput cup "${row}" 0
            [[ $ret -eq 1 ]] && echo " \e[0;31mupdate failed \uf00d\e[0m"
            [[ $ret -eq 2 ]] && echo " \e[0;32mupdated \uf00c\e[0m"
            [[ $ret -eq 3 ]] && echo " \e[0;32malready up to date \uf00c\e[0m"
        ) &>/dev/null
        pids+=("${!}")
        ((++row))
    done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
    echo "waiting for pids ${pids[*]}"
    wait "${pids[@]}"
}
What I want is for it to print each folder on its own line, update them all independently from each other in parallel, and, when each one is done, write its status into that line.
However, I am unsure which subshell is writing, which ones I need to capture and how, and so on.
My attempt above currently neither writes correctly nor runs in parallel.
If I get it to work in parallel, I get those [1] <PID> things and [1] + 3156389 done ... messages messing up my screen.
If I put the work itself in a subshell, I don't have anything to wait for.
If I then collect the pids, I don't get the return code needed to print out the text showing the status.
I did have a look at GNU Parallel, but I don't think I can get that behaviour. (I think I could hack it so that finished jobs are printed, but I want all 'running' jobs printed, with the finished ones amended.)
Assumptions/understandings:
a separate child process is spawned for each folder to be processed
the child process generates messages as work progresses
messages from child processes are to be displayed in the console in real time, with each child's latest message being displayed on a different line
The general idea is to set up a means of interprocess communication (IPC) ... named pipe, normal file, queuing/messaging system, sockets (plenty of ideas available via a web search on bash interprocess communication); the children write to this system while the parent reads from it and issues the appropriate tput commands.
One very simple example using a normal file:
> status.msgs # initialize our IPC file
child_func () {
    # Usage: child_func <unique_id> <other> ... <args>
    local i
    for ((i=1;i<=10;i++))
    do
        sleep $1
        # each message should include the child's <unique_id> ($1 in this case);
        # parent/monitoring process uses this <unique_id> to control tput output
        echo "$1:message - $1.$i" >> status.msgs
    done
}
clear
( child_func 3 & )
( child_func 5 & )
( child_func 2 & )
while IFS=: read -r child msg
do
    tput cup $child 10
    echo "$msg"
done < <(tail -f status.msgs)
NOTES:
the ( child_func 3 & ) construct is one way to eliminate the OS message re: 'background process completed' from showing up in stdout (there may be other ways, but I'm drawing a blank at the moment)
when using a file (normal, pipe) OP will want to look at a locking method (flock?) to ensure messages from multiple children don't stomp on each other; see the flock sketch after the sample output below
OP can get creative with the format of the messages printed to status.msgs in conjunction with the parsing logic in the parent's while loop
assuming variable-width messages, OP may want to look at appending a tput el to the end of each printed message in order to 'erase' any characters left over from a previous/longer message
exiting the loop could be as simple as keeping count of the number of child processes that send a message <id>:done, or keeping track of the number of children still running in the background, or ...
Running this at my command line generates 3 separate lines of output that are updated at various times (based on the sleep $1):
# no output to line #1
message - 2.10 # messages change from 2.1 to 2.2 to ... to 2.10
message - 3.10 # messages change from 3.1 to 3.2 to ... to 3.10
# no output to line #4
message - 5.10 # messages change from 5.1 to 5.2 to ... to 5.10
NOTE: comments not actually displayed in console
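As a sketch of the locking idea from the NOTES (assuming the util-linux flock(1) utility is available; append_msg is a hypothetical helper the children would call):
append_msg () {
    # attach status.msgs to fd 9 and take an exclusive lock before writing,
    # so concurrent children can't interleave partial lines
    {
        flock -x 9
        echo "$1" >&9
    } 9>>status.msgs
}
append_msg "3:message - 3.1"   # e.g. from inside child_func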
Based on @markp-fuso's answer:
printer() {
    while IFS=$'\t' read -r child msg
    do
        tput cup $child 10
        echo "$child $msg"
    done
}
clear
# note: work must be visible to GNU parallel, e.g. via export -f work
parallel --lb --tagstring "{%}\t{}" work ::: folder1 folder2 folder3 | printer
echo
You can't collect exit statuses like that. Try this instead: rework your work function to echo its status:
work(){
    cd "$1"
    # some update thing, with &> /dev/null so it produces no output
    echo "${1}_$status" # status=1, 2, or 3
}
And then set up data collection from all folders like so:
data=$(
    while read -r folder; do
        work "$folder" &
    done < <(find . -maxdepth 1 -mindepth 1 -type d -printf "%f\n" | sort)
    wait
)
echo "$data"

Avoid unexpected behavior using namerefs in bash

Why does this give no output (other than a newline) instead of "foo"? The code uses a nameref, which was introduced in bash 4.3, and is a "reference to another variable" which "allows variables to be manipulated indirectly."
And what should be done to guard against this, if writing code for a library?
#!/usr/bin/bash
setret() {
    local -n ret_ref=$1
    local ret="foo"
    ret_ref=$ret
}
setret ret
echo $ret
Running it through bash -x made my head spin, because it looks like it should be outputting the foo that I expected:
+ setret ret
+ local -n ret_ref=ret
+ local ret=foo
+ ret_ref=foo
+ echo
Interestingly, this prints bar, not foo.
#!/usr/bin/bash
setret() {
    local -n ret_ref=$1
    ret_ref="bar"
    local ret="foo"
    ret_ref=$ret
}
setret ret
echo $ret
With an equally confusing bash -x output:
+ setret ret
+ local -n ret_ref=ret
+ ret_ref=bar
+ local ret=foo
+ ret_ref=foo
+ echo bar
bar
I'm hoping this is valuable to others, because asking for the expected output in the #bash IRC channel got a response from one of its regulars of foo, which is what I expected.
Then, they set me straight: namerefs just don't work like I thought they did. local -n isn't setting ret_ref to refer to $1. Rather, it basically stores the string ret in ret_ref, marked to be resolved as a reference each time it's used.
So, although it looked to me like ret_ref would refer to the caller's ret variable, it only does so until the function defines its own local ret variable; from then on, it refers to that one instead.
The only guaranteed way to guard against this, if writing code for a library, is, within any function that uses namerefs, to prefix all non-nameref variables with the function name, along these lines:
#!/usr/bin/bash
setret() {
    local -n ___setret_ret_ref=$1
    local ___setret_ret="foo"
    ___setret_ret_ref=$___setret_ret
}
setret ret
echo $ret
Very ugly, but necessary to avoid collisions. (Sure, there are less ugly ways that are likely to work, but none as certain.)
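For comparison, one of those less-ugly-but-less-certain ways is to avoid namerefs entirely and return the value on stdout; a minimal sketch:
setret() {
    local ret="foo"
    echo "$ret"      # hand the value back on stdout instead of via a nameref
}
ret=$(setret)        # command substitution runs setret in a subshell
echo $ret            # prints "foo"; no collision possible
The trade-off is the subshell's cost, and that the function can no longer set several caller variables at once, which is usually why namerefs get reached for in the first place.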

Detect empty command

Consider this PS1
PS1='\n${_:+$? }$ '
Here is the result of a few commands
$ [ 2 = 2 ]
0 $ [ 2 = 3 ]
1 $
1 $
The first line shows no status, as expected, and the next two lines show the correct exit code. However, on line 3 only Enter was pressed, so I would like the status to go away, as on line 1. How can I do this?
Here's a funny, very simple possibility: it uses the \# escape sequence of PS1 together with parameter expansions (and the way Bash expands its prompt).
The escape sequence \# expands to the command number of the command to be executed. This is incremented each time a command has actually been executed. Try it:
$ PS1='\# $ '
2 $ echo hello
hello
3 $ # this is a comment
3 $
3 $ echo hello
hello
4 $
Now, each time a prompt is to be displayed, Bash first expands the escape sequences found in PS1, then (provided the shell option promptvars is set, which is the default), this string is expanded via parameter expansion, command substitution, arithmetic expansion, and quote removal.
The trick is then to have an array that will have the k-th field set (to the empty string) whenever the (k-1)-th command is executed. Then, using appropriate parameter expansions, we'll be able to detect when these fields are set and to display the return code of the previous command if the field isn't set. If you want to call this array __cmdnbary, just do:
PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
Look:
$ PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
0 $ [ 2 = 3 ]
1 $
$ # it seems that it works
$ echo "it works"
it works
0 $
To qualify for the shortest answer challenge:
PS1='\n${a[\#]-$? }${a[\#]=}$ '
That's 31 characters.
Don't use this, of course, as a is too trivial a name; also, \$ might be better than $.
It seems you don't like that the initial prompt is 0 $; you can very easily change this by initializing the array __cmdnbary appropriately. Put this somewhere in your configuration file:
__cmdnbary=( '' '' ) # Initialize the field 1!
PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
Got some time to play around with this over the weekend. Looking at my earlier (not-good) answer and the other answers, I think this is probably the smallest answer.
Place these lines at the end of your ~/.bash_profile:
PS1='$_ret$ '
trapDbg() {
    local c="$BASH_COMMAND"
    [[ "$c" != "pc" ]] && export _cmd="$c"
}
pc() {
    local r=$?
    trap "" DEBUG
    [[ -n "$_cmd" ]] && _ret="$r " || _ret=""
    export _ret
    export _cmd=
    trap 'trapDbg' DEBUG
}
export PROMPT_COMMAND=pc
trap 'trapDbg' DEBUG
Then open a new terminal and note this desired behavior on BASH prompt:
$ uname
Darwin
0 $
$
$
$ date
Sun Dec 14 05:59:03 EST 2014
0 $
$
$ [ 1 = 2 ]
1 $
$
$ ls 123
ls: cannot access 123: No such file or directory
2 $
$
Explanation:
This is based on trap 'handler' DEBUG and PROMPT_COMMAND hooks.
PS1 uses a variable _ret, i.e. PS1='$_ret$ '.
The DEBUG trap runs only when a command is executed, but PROMPT_COMMAND runs even when an empty Enter is pressed.
The trap handler sets a variable _cmd to the actually executed command, using the Bash internal variable BASH_COMMAND.
The PROMPT_COMMAND hook sets _ret to "$? " if _cmd is non-empty, and otherwise sets _ret to "". Finally, it resets _cmd to the empty state.
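A tiny standalone demo of that trap/BASH_COMMAND interplay (run it in an interactive shell; the trap fires before each command):
trap 'echo "about to run: $BASH_COMMAND"' DEBUG
date              # prints "about to run: date" first, then the date
trap - DEBUG      # remove the demo trap again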
The variable HISTCMD is updated every time a new command is executed. Unfortunately, its value is masked during the execution of PROMPT_COMMAND (I suppose for reasons related to not having the history messed up by things which happen in the prompt command). The workaround I came up with is kind of messy, but it seems to work in my limited testing.
# This only works if the prompt has a prefix
# which is displayed before the status code field.
# Fortunately, in this case, there is one.
# Maybe use a no-op prefix in the worst case (!)
PS1_base=$'\n'
# Functions for PROMPT_COMMAND
PS1_update_HISTCMD () {
    # If HISTCONTROL contains "ignoredups" or "ignoreboth", this breaks.
    # We should not change it programmatically
    # (think principle of least astonishment etc)
    # but we can always gripe.
    case :$HISTCONTROL: in
        *:ignoredups:* | *:ignoreboth:* )
            echo "PS1_update_HISTCMD(): HISTCONTROL contains 'ignoredups' or 'ignoreboth'" >&2
            echo "PS1_update_HISTCMD(): Warning: Please remove this setting." >&2 ;;
    esac
    # PS1_HISTCMD needs to contain the old value of PS1_HISTCMD2 (a copy of HISTCMD)
    PS1_HISTCMD=${PS1_HISTCMD2:-$PS1_HISTCMD}
    # PS1_HISTCMD2 needs to be unset for the next prompt to trigger properly
    unset PS1_HISTCMD2
}
PROMPT_COMMAND=PS1_update_HISTCMD
# Finally, the actual prompt:
PS1='${PS1_base#foo${PS1_HISTCMD2:=${HISTCMD%$PS1_HISTCMD}}}${_:+${PS1_HISTCMD2:+$? }}$ '
The logic in the prompt is roughly as follows:
${PS1_base#foo...}
This displays the prefix. The stuff in #... is useful only for its side effects. We want to do some variable manipulation without having the values of the variables display, so we hide them in a string substitution. (This will display odd and possibly spectacular things if the value of PS1_base ever happens to begin with foo followed by the current command history index.)
${PS1_HISTCMD2:=...}
This assigns a value to PS1_HISTCMD2 (if it is unset, which we have made sure it is). The substitution would nominally also expand to the new value, but we have hidden it in a ${var#subst} as explained above.
${HISTCMD%$PS1_HISTCMD}
We assign either the value of HISTCMD (when a new entry in the command history is being made, i.e. we are executing a new command) or an empty string (when the command is empty) to PS1_HISTCMD2. This works by trimming any match of $PS1_HISTCMD off the end of the value of HISTCMD (using the ${var%subst} suffix-removal syntax).
${_:+...}
This is from the question. It will expand to ... something if the value of $_ is set and nonempty (which it is when a command is being executed, but not e.g. if we are performing a variable assignment). The "something" should be the status code (and a space, for legibility) if PS1_HISTCMD2 is nonempty.
${PS1_HISTCMD2:+$? }
There.
'$ '
This is just the actual prompt suffix, as in the original question.
So the key parts are the variables PS1_HISTCMD which remembers the previous value of HISTCMD, and the variable PS1_HISTCMD2 which captures the value of HISTCMD so it can be accessed from within PROMPT_COMMAND, but needs to be unset in the PROMPT_COMMAND so that the ${PS1_HISTCMD2:=...} assignment will fire again the next time the prompt is displayed.
I fiddled for a bit with trying to hide the output from ${PS1_HISTCMD2:=...}, but then realized that there is in fact something we want to display anyhow, so I just piggybacked on that. You can't have a completely empty PS1_base, because the shell apparently notices and does not even attempt a substitution when there is no value; but perhaps you can come up with a dummy value (a no-op escape sequence, perhaps?) if there is nothing else you want to display. Or maybe this could be refactored to use a suffix instead; but that is probably trickier still.
In response to Anubhava's "smallest answer" challenge, here is the code without comments or error checking.
PS1_base=$'\n'
PS1_update_HISTCMD () { PS1_HISTCMD=${PS1_HISTCMD2:-$PS1_HISTCMD}; unset PS1_HISTCMD2; }
PROMPT_COMMAND=PS1_update_HISTCMD
PS1='${PS1_base#foo${PS1_HISTCMD2:=${HISTCMD%$PS1_HISTCMD}}}${_:+${PS1_HISTCMD2:+$? }}$ '
This is probably not the best way to do this, but it seems to be working:
function pc {
    foo=$_
    fc -l > /tmp/new
    if cmp -s /tmp/{new,old} || test -z "$foo"
    then
        PS1='\n$ '
    else
        PS1='\n$? $ '
    fi
    cp /tmp/{new,old}
}
PROMPT_COMMAND=pc
PROMPT_COMMAND=pc
Result
$ [ 2 = 2 ]
0 $ [ 2 = 3 ]
1 $
$
I needed to use the great script bash-preexec.sh.
Although I don't like external dependencies, it was the only thing that helped me avoid getting 1 in $? after just pressing Enter without running any command.
This goes in your ~/.bashrc:
__prompt_command() {
    local exit="$?"
    PS1='\u#\h: \w \$ '
    # red and clear are assumed to hold color escapes, e.g. red=$'\e[0;31m' clear=$'\e[0m'
    [ -n "$LASTCMD" -a "$exit" != "0" ] && PS1='['${red}$exit$clear"] $PS1"
}
PROMPT_COMMAND=__prompt_command
[ -f ~/.bash-preexec.sh ] && . ~/.bash-preexec.sh
preexec() { LASTCMD="$1"; }
UPDATE: later I was able to find a solution without dependency on .bash-preexec.sh.
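A hypothetical dependency-free sketch along the lines of the trap-based answer earlier in this thread (not necessarily the solution the update refers to):
# record the last real command from a DEBUG trap, skipping the prompt hook itself
preexec_lite() { [ "$BASH_COMMAND" != "__prompt_command" ] && LASTCMD="$BASH_COMMAND"; }
trap 'preexec_lite' DEBUG
__prompt_command() {
    local exit="$?"
    PS1='\u#\h: \w \$ '
    # only show the status if a real command ran and it failed
    [ -n "$LASTCMD" ] && [ "$exit" != "0" ] && PS1="[$exit] $PS1"
    LASTCMD=
}
PROMPT_COMMAND=__prompt_command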

Conditional use of functions?

I created a bash script that parses ASCII files into comma-delimited output. It has worked great. Now, a new file layout for these files is being gradually introduced.
My script now has two parsing functions (one per layout) that I want to call depending on a specific marker present in the ASCII file header. The script is structured thusly:
#!/bin/bash
function parseNewfile() {...parse stuff...return stuff...}
function parseOldfile() {...parse stuff...return stuff...}
#loop thru ASCII files array
i=0
while [ $i -lt $len ]; do
    #check if file contains marker for new layout
    grep CSVHeaderBox output_$i.ASC
    #calls parsing function based on exit code
    if [ $? -eq 0 ]
    then
        CXD=`parseNewfile`
    else
        CXD=`parseOldfile`
    fi
    echo ${array[$i]} | awk -v cxd=`echo $CXD` ....
    let i++
done >> ${outdir}/outfile.csv
...
The script does not err out. It always calls the original function parseOldfile and ignores the new one, even when I specifically feed my script several files with the new layout.
What I am trying to do seems very trivial. What am I missing here?
EDIT: Samples of old and new file layouts.
1) OLD File Layout
F779250B
=====BOX INFORMATION=====
Model = R15-100
Man Date = 07/17/2002
BIST Version = 3.77
SW Version = 0x122D
SW Name = v1b1645
HW Version = 1.1
Receiver ID = 00089787556
=====DISK INFORMATION=====
....
2) NEW File Layout
F779250B
=====BOX INFORMATION=====
Model = HR22-100
Man Date = 07/17/2008
BIST Version = 7.55
SW Version = 0x066D
SW Name = v18m1fgu
HW Version = 2.3
Receiver ID = 028910170936
CSVHeaderBox:Platform,ManufactureDate,BISTVersion,SWVersion,SWName,HWRevision,RID
CSVValuesBox:HR22-100,20080717,7.55,0x66D,v18m1fgu,2.3,028910170936
=====DISK INFORMATION=====
....
This may not solve your problem, but here's a potential performance boost: instead of
grep CSVHeaderBox output_$i.ASC
#calls parsing function based on exit code
if [ $? -eq 0 ]
use
if grep -q CSVHeaderBox output_$i.ASC
grep -q will exit successfully on the first match, so it doesn't have to scan the whole file. Plus, you don't have to bother with the $? variable.
Don't do this:
awk -v cxd=`echo $CXD`
Do this:
awk -v cxd="$CXD"
I'm not sure if this solves the OP's requirement.
What's the need for awk if your function knows how to parse the file?
#!/bin/bash
function f1() {
    echo "f1() says $#"
}
function f2() {
    echo "f2() says $#"
}
FUN="f1"
${FUN} "foo"
FUN="f2"
${FUN} "bar"
I am a bit embarrassed to write this, but I solved my "problem".
After gedit (I am on Ubuntu) erred out several dozen times about "trailing spaces", I copied and pasted my code into a new file and re-ran my script.
It worked.
I have no explanation why.
Thanks to everyone for taking the time.

Is it possible to run two loops at the same time?

So I have a project in my cyber security class to make a bash game. I'd like to make one of those medieval games where you build farms and mines to get resources. Well, I'd like to make something like that. To do that I have to have two while loops running, like this:
while [ blah ]; do
    blah
done
while [ blah ]; do
    blah
done
Is it possible to run two while loops at the same time, and if I am writing it wrong, how do I write it?
If you put a & after each done, like done &, you will create new processes in the background that run the while loops. You will have to be careful to realize what this means, though, since the bash script will continue executing commands after creating those new processes, even if they have not finished. You might use the wait command to prevent this from happening, but I'm not too used to using it, so I cannot vouch for it; a minimal sketch of the pattern follows.
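A minimal sketch of that pattern, with placeholder loop bodies (bounded so the script actually ends):
#!/bin/bash
i=0
while [ $i -lt 3 ]; do       # first loop, backgrounded by the & after done
    echo "farm tick $i"; sleep 1; i=$((i+1))
done &
j=0
while [ $j -lt 3 ]; do       # second loop, also backgrounded
    echo "mine tick $j"; sleep 1; j=$((j+1))
done &
wait                         # block here until both background loops have finished
echo "both loops done"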
Yes, but you will have to fork a new process for each while loop to execute in. Technically, they won't both run at the same time (unless you consider multiple cores, and even that isn't guaranteed).
Below is a link to how to fork multiple processes using bash.
Forking / Multi-Threaded Processes | Bash
Since you mention this is a school project, I'll stop here lest I help you "not learn".
First things first: wrap each loop in a function and then fork it.
This is done when you want to split off a process. For example, if I'm processing a CSV with 160,000+ lines, a single process/"thread" will take hours; if you wrap the loop in a function and simply fork it, you will have x processes running. Then add a wait/kill-defunct-process loop and you are done. Here is what you are looking at.
A while loop with a nested loop:
function jobA() {
    while read STR;
    do
        touch "${1}_temp"
        key=$(IFS="|"; set -- $STR; echo $1)
        for each in "${blah[@]}";
        do
            : # echo "$each"
        done
    done < "$1"
}
for i in "${blah[@]}";
do
    echo "$i"
    jobA "$i" &
    child_pid=$!
    parent_pid=$$
    PIDS+=($child_pid)
    echo "forked process $child_pid with parent $parent_pid"
done
for pid in "${PIDS[@]}";
do
    wait $pid
done
echo "all jobs done"
sleep 1
Now that this is wrapped, here is an example of a forked loop. This means you will have parallel processes running in the background, and wait will wait for all of them to complete before proceeding. This is important for some types of scripts.
Also, do not use nested for loops written C-style, for example:
for (( i = 1; i <= 5; i++ )) ### Outer for loop ###
This is VERY slow. Use this type instead:
for each in "${blah[@]}";
do
    # echo "$each"
    if [ "$key" = "$each" ]; then
        # echo "less than $keyValNeed..."
        echo $STR >> "${1}_temp"
    fi
done
You could also use nested for loops:
for (( i = 1; i <= 5; i++ ))   ### Outer for loop ###
do
    for (( j = 1; j <= 5; j++ ))   ### Inner for loop ###
    do
        echo -n "$i "
    done
    echo "" #### print the new line ###
done
EDIT: I thought you meant nested loops, but reading again, you said running both loops "at the same time". I will leave my answer here, though.
