Change directory and AppleScript terminal command

I have tried the following but cannot seem to get this to work:
do script "cd ~/desktop/test; for x in ls -1 | sed -e 's/^\(.\).*/\1/' | sort -u; do mv -i ${x}?* $x done"
I want to run this command from AppleScript, but when I do I get an error about the "" marks and am not sure how to correct it. I'm a complete newbie to AppleScript, willing to learn but a little lost.
Thanks

Note that in AppleScript, whenever the shell command you pass contains a backslash (\), you must write it as \\ inside the string. Note also that the command is do shell script, not do script.
As:
do shell script "cd ~/Desktop/; for x in `ls -1 | sed -e 's/^\\(.\\).*/\\1/' | sort -u`; do echo ${x}?* $x; done"
Also, it uses for x in `command` instead of for x in command: the backticks execute everything inside them as a command and substitute its output, just like $():
do shell script "cd ~/Desktop/; for x in $(ls -1 | sed -e 's/^\\(.\\).*/\\1/' | sort -u); do echo ${x}?* $x; done"

Related

How to find the number of instances of current script running in bash?

I have the code below to find the number of instances of the current script that are running with the same arg1. But it looks like the script creates a subshell to execute this command, and that subshell also shows up in the output. What would be a better approach to count the running instances of the script?
$ cat test.sh
#!/bin/bash
num_inst=`ps -ef | grep $0 | grep $1 | wc -l`
echo $num_inst
$ ps aux | grep test.sh | grep arg1 | grep -v grep | wc -l
0
$ ./test.sh arg1 arg2
3
$
I am looking for a solution that matches all running instances of ./test.sh arg1 arg2, not ones like ./test.sh arg10 arg20.
The reason this shows an extra copy is the pipeline inside the command substitution: while the pipeline runs, the subshell executing it is a forked copy of the script, so it appears in the ps output under the same command line. If you run ps -ef alone in a command substitution, and then separately process the output from that, you can avoid this problem:
#!/bin/bash
all_processes=$(ps -ef)
num_inst=$(echo "$all_processes" | grep "$0" | grep -c "$1")
echo "$num_inst"
I also did a bit of cleanup on the script: double-quoted all variable references to avoid word splitting, used $() instead of backticks, and replaced grep ... | wc -l with grep -c.
You might also replace the echo "$all_processes" | ... with ... <<<"$all_processes" and maybe the two greps with a single grep -c "$0 $1":
...
num_inst=$(grep -c "$0 $1" <<<"$all_processes")
...
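The question also asks to match ./test.sh arg1 exactly, so that arg10 is not counted. One way is to anchor the pattern; a sketch, assuming GNU grep so that -E accepts the ( |$) alternation:
num_inst=$(grep -Ec "$0 $1( |\$)" <<<"$all_processes")
The trailing ( |$) requires arg1 to be followed by a space or the end of the line, so ./test.sh arg10 arg20 no longer matches.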
Modify your script like this:
#!/bin/bash
ps -ef | grep "$0" | wc -l
No need to store the value in a variable; the result is printed to standard output anyway.
Now why do you get 3?
When you run a command within backticks (by the way, you should use the num_inst=$( COMMAND ) syntax rather than backticks), the shell creates a new subshell to run COMMAND and then assigns its stdout text to the variable. That subshell is a copy of the script, so if you remove the command substitution, as above, you will get your expected value of 2.
To convince yourself of that, remove the | wc -l: you will see that 3 processes match, not 2. The third one exists only for the execution of COMMAND.

Execute a string in a bash script containing multiple redirects

I am trying to write a bash script that simply acts as a shell emulator: it takes a command from the user, executes it, and appends both the command and its result to a file. I am unable to handle inputs that contain a | or a >.
The only option I could find was splitting the command on | into an array and running the pieces individually; however, this does not allow for > redirects.
Thanks in advance.
$cmd is a command taken as input from the user
I used the command
$cmd 2>&1 | tee -a $flname
but this does not work if there is a | or a > in $cmd
/bin/bash -c "$cmd 2>&1 | tee -a $flname" does not run/store the command either
Try this:
#!/bin/bash
read -r -p "Insert command to execute"$'\n' cmd
echo "Executing '$cmd'"
/bin/bash -c "$cmd"
# or eval "$cmd"
Example of execution:
$ ./script.sh
Insert command to execute
printf '1\n2\n3\n4\n' | grep '1\|3'
Executing 'printf '1\n2\n3\n4\n' | grep '1\|3''
1
3
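To also append the command and its output to a file, as the question asks, the same approach can be combined with tee. A sketch, reusing the question's $flname for the log file:
#!/bin/bash
flname=session.log                            # log file name (assumed)
read -r -p "Insert command to execute"$'\n' cmd
printf '$ %s\n' "$cmd" >> "$flname"           # record the command itself
/bin/bash -c "$cmd" 2>&1 | tee -a "$flname"   # run it and copy its output to the log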

Testing if a Daemon is alive or not with Shell

I have a log_sender.pl Perl script that runs a daemon when executed. I want to write a test for it in shell:
#!/bin/bash
function log_sender()
{
perl -I $HOME/script/log_sender.pl
}
(
[[ "${BASH_SOURCE[0]}" == "${0}" ]] || exit 0
function check_log_sender()
{
if [ "ps -aef | grep -v grep log_sender.pl" ]; then
echo "PASSED"
else
echo FAILED
fi
}
log_sender
check_log_sender
)
Unfortunately, when I run this my terminal just sits there:
-bash-4.1$ sh log_sender.sh
...
...
What am I doing wrong?
> if [ "ps -aef | grep -v grep log_sender.pl" ]; then
This is certainly not what you want. Try this:
if ps -aef | grep -q 'log_sender\.pl'; then
...
In a shell script, the if construct takes as its argument a command whose exit status it examines. In your code, the command is [ (also known as test) and you run it on the literal string "ps -aef | grep -v grep log_sender.pl" which is simply always true.
You probably intended to check whether ps -aef outputs a line which contains log_sender.pl but does not contain grep; that would be something like ps -aef | grep -v grep | grep 'log_sender\.pl' but you can avoid the extra grep -v by specifying a regular expression which does not match itself.
The -q option to grep suppresses any output; the exit code indicates whether or not the input matched the regular expression.
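You can see this for yourself: [ with any single non-empty string always succeeds, no matter what the string contains:
[ "ps -aef | grep -v grep log_sender.pl" ] && echo "always true"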
The perl invocation is also not correct; the -I option requires an argument, so it swallows $HOME/script/log_sender.pl as an include path and you are effectively running just perl, which then waits for you to type in a Perl script for it to execute. That is why your terminal appears to hang. Apparently the script is log_sender.pl, so you should simply drop the -I (or give it a separate argument, if you really do need to add some Perl library paths in order for the script to work).
Finally, if you write a Bash script, you should execute it with Bash.
chmod +x log_sender.sh
./log_sender.sh
or alternatively
bash ./log_sender.sh
The BASH_SOURCE construct you use is a Bashism, so your script will simply not work correctly under sh.
Lastly, the parentheses around the main logic are completely redundant. They cause the script to run these commands in a separate subshell for no apparent benefit.
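Putting these fixes together, the test script might look like this; a sketch, assuming the daemon script only needs to be run by path:
#!/bin/bash

log_sender()
{
    perl "$HOME/script/log_sender.pl"   # run the script itself; no -I
}

check_log_sender()
{
    # The regex does not match grep's own command line, so no
    # "grep -v grep" is needed.
    if ps -aef | grep -q 'log_sender\.pl'; then
        echo "PASSED"
    else
        echo "FAILED"
    fi
}

log_sender
check_log_sender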

Shell Scripting: Using xargs to execute parallel instances of a shell function

I'm trying to use xargs in a shell script to run parallel instances of a function I've defined in the same script. The function times the fetching of a page, and so it's important that the pages are actually fetched concurrently in parallel processes, and not in background processes (if my understanding of this is wrong and there's negligible difference between the two, just let me know).
The function is:
function time_a_url ()
{
oneurltime=$($time_command -p wget -p $1 -O /dev/null 2>&1 1>/dev/null | grep real | cut -d" " -f2)
echo "Fetching $1 took $oneurltime seconds."
}
How does one do this with an xargs pipe, in a form that can take the number of parallel instances of time_a_url to run as an argument? And yes, I know about GNU parallel, I just don't have the privilege to install software where I'm writing this.
Here's a demo of how you might be able to get your function to work:
$ f() { echo "[$*]"; }
$ export -f f
$ echo -e "b 1\nc 2\nd 3 4" | xargs -P 0 -n 1 -I{} bash -c f\ \{\}
[b 1]
[d 3 4]
[c 2]
The keys to making this work are to export the function, so that the bash which xargs spawns will see it, and to escape the space between the function name and the escaped braces. You should be able to adapt this to work in your situation. You'll need to adjust the arguments for -P and -n (or remove them) to suit your needs.
You can probably get rid of the grep and cut. If you're using the Bash builtin time, you can specify an output format using the TIMEFORMAT variable. If you're using GNU /usr/bin/time, you can use the --format argument. Either of these will allow you to drop the -p also.
You can replace this part of your wget command: 2>&1 1>/dev/null with -q. In any case, you have those reversed. The correct order would be >/dev/null 2>&1.
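Adapted to time_a_url, that might look like the following sketch, where urls.txt (one URL per line) and the default of 4 parallel fetches are assumptions; the parallelism is taken from the script's first argument:
#!/bin/bash
time_a_url()
{
    local TIMEFORMAT=%R    # bash's builtin time: print only the real time
    local t
    t=$( { time wget -q -p "$1" -O /dev/null; } 2>&1 )
    echo "Fetching $1 took $t seconds."
}
export -f time_a_url

xargs -P "${1:-4}" -I{} bash -c 'time_a_url "$@"' _ {} < urls.txt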
On Mac OS X, xargs rejects -P 0:
xargs: max. processes must be >0
so pass a positive number instead:
f() { echo "[$*]"; }
export -f f
echo -e "b 1\nc 2\nd 3 4" | sed 's/ /\\ /g' | xargs -P 10 -n 1 -I{} bash -c f\ \{\}
echo -e "b 1\nc 2\nd 3 4" | xargs -P 10 -I '{}' bash -c 'f "$#"' arg0 '{}'
If you install GNU Parallel on another system, you will see the functionality is in a single file (called parallel).
You should be able to simply copy that file to your own ~/bin.

Using xargs to assign stdin to a variable

All that I really want to do is make sure everything in a pipeline succeeded and assign the pipeline's final output to a variable. Consider the following dumbed-down scenario:
x=`exit 1|cat`
When I run declare -a, I see this:
declare -a PIPESTATUS='([0]="0")'
I need some way to notice the exit 1, so I converted it to this:
exit 1|cat|xargs -I {} x={}
And declare -a gave me:
declare -a PIPESTATUS='([0]="1" [1]="0" [2]="0")'
That is what I wanted, so I tried to see what would happen if the exit 1 didn't happen:
echo 1|cat|xargs -I {} x={}
But it fails with:
xargs: x={}: No such file or directory
Is there any way to have xargs assign {} to x? What about other methods of having PIPESTATUS work and assigning the stdin to a variable?
Note: these examples are dumbed down. I'm not really doing an exit 1, echo 1 or a cat, but used these commands to simplify so we can focus on my particular issue.
When you use backticks (or the preferred $()) you're running those commands in a subshell. The PIPESTATUS you're getting is for the assignment rather than the piped commands in the subshell.
When you use xargs, it knows nothing about the shell so it can't make variable assignments.
Try set -o pipefail; then you can get the status from $?.
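For example:
set -o pipefail
x=$(exit 1 | cat)
echo $?    # prints 1 rather than 0, since the assignment's status is the pipeline's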
xargs is run in a child process, as are all the commands you call, so they can't affect the environment of your shell.
You might be able to do something with named pipes (mkfifo), or possibly bash's read builtin?
EDIT:
Maybe just redirect the output to a file, then you can use PIPESTATUS:
command1 | command2 | command3 >/tmp/tmpfile
## Examine PIPESTATUS
X=$(cat /tmp/tmpfile)
How about ...
read x <<<"$(echo 1)"
read x < <(echo 1)
echo "$x"
Why not just populate a new array?
IFS=$'\n' read -r -d '' -a result < <(echo a | cat | cat; echo "PIPESTATUS='${PIPESTATUS[*]}'" )
IFS=$'\n' read -r -d '' -a result < <(echo a | exit 1 | cat; echo "PIPESTATUS='${PIPESTATUS[*]}'" )
echo "${#result[#]}"
echo "${result[#]}"
echo "${result[0]}"
echo "${result[1]}"
There are already a few helpful solutions here, but it turns out I had an example that matches the question as framed above; close enough, anyway.
Consider this:
XX=$(ls -l *.cpp | wc -l | xargs -I{} echo {})
echo $XX
3
Meaning that I had 3 .cpp files in my working directory. Now $XX is 3 and I can make use of that result in my script. It is contrived, because I don't actually need the xargs in this example, but it works.
In the example from the question ...
x=`exit 1|cat`
I don't think that will give you what was specified. exit will quit the sub-shell before the cat gets a mention. Also on that note,
I might start with something like:
x=$?
x now has the status from the last command.
You can also assign each line of input to an array, e.g. all the Python files in a directory:
declare -a pyFiles=($(ls -l *.py | awk '{print $9}'))
where $9 is the ninth field of ls -l, corresponding to the filename.
