I have several bash scripts which produce output when run with an option:
./script_1 list
./script_2 list
./script_3 list
./script_9 list
These numbers differ between servers, but on every server the names start with "script".
Now I want to run all of these scripts together with the same option, 'list'. I need something like ./script_* list, or a command using ls or awk or anything else.
Running the processes in the background does not solve my problem, as I have appended this into another script.
for i in ./script_*; do
    "$i" list
done
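If you also want to cope with the case where nothing matches the glob, or skip files that are not executable, a slightly more defensive sketch (assuming bash) could be:
# Skip the loop entirely if no script_* files exist, and only run executable matches
shopt -s nullglob
for i in ./script_*; do
    [ -x "$i" ] && "$i" list
done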
If this is not the use case you're seeking, you'll have to be more specific in your question.
I have a command which I need to run where one of the args is a list of comma-separated IDs. The list of IDs is over 50k. I have stored the list of IDs in a file and I'm running the command in the following way:
sudo ./mycommand --ids `cat /tmp/ids.txt`
However, I get the error zsh: argument list too long: sudo.
I believe this is because the kernel has a maximum size for the argument list it can take. One option for me is to manually split the file into smaller pieces (since the IDs are comma-separated I can't just break it evenly) and then run the command once for each file.
Is there a better approach?
ids.txt file looks like this:
24342,24324234,122,54545,565656,234235
Converting comments into a semi-coherent answer.
The file ids.txt contains a single line of comma-separated values, and the total size of the file can be too big to be the argument list to a program.
Under many circumstances, using xargs is the right answer, but it relies on being able to split the input up into manageable chunks of work, and it must be OK to run the program several times to get the job done.
In this case, xargs doesn't help because of the size and format of the file.
It isn't stated absolutely clearly that all the values in the file must be processed in a single invocation of the command. It also isn't absolutely clear whether the list of numbers must all be in a single argument or whether multiple arguments would work instead. If multiple invocations are not an issue, it is feasible to reformat the file so that it can be split by xargs into manageable chunks. If need be, it can be done to create a single comma-separated argument.
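For illustration only, and assuming mycommand will accept a shorter comma-separated list on each run, the reformatting could look something like this:
# Sketch: one ID per line, group the IDs with xargs (here 1000 per batch),
# re-join each batch with commas, and invoke the command once per batch.
tr ',' '\n' < /tmp/ids.txt |
  xargs -n 1000 |
  tr ' ' ',' |
  while read -r chunk; do
    sudo ./mycommand --ids "$chunk" < /dev/null   # /dev/null stops mycommand reading the rest of the pipe
  done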
However, it appears that these options are not acceptable. In that case, something has to change.
If you must supply a single argument that is too big for your system, you're hosed until you change something — either the system parameters or your program.
Changing the program is usually easier than reconfiguring the o/s, especially if you take into account reconfiguring upgrades to the o/s.
One option worth reviewing is changing the program to accept a file name instead of the list of numbers on the command line:
sudo ./mycommand --ids-list=/tmp/ids.txt
and the program opens and reads the ID numbers from the file. Note that this preserves the existing --ids …comma,separated,list,of,IDs notation. The use of the = is optional; a space also works.
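As a purely hypothetical illustration of the reading side (shown in shell, since we don't know what language mycommand is written in):
# Hypothetical: load the single comma-separated line into an array
IFS=',' read -r -a ids < /tmp/ids.txt
echo "loaded ${#ids[@]} IDs, first is ${ids[0]}"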
Indeed, many programs work on the basis that arguments provided to it are file names to be processed (the Unix filter programs — think grep, sed, sort, cat, …), so simply using:
sudo ./mycommand /tmp/ids.txt
might be sufficient, and you could have multiple files in a single invocation by supplying multiple names:
sudo ./mycommand /tmp/ids1.txt /tmp/ids2.txt /tmp/ids3.txt …
Each file could be processed in turn. Whether the set of files constitutes a single batch operation or each file is its own batch operation depends on what mycommand is really doing.
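If you do go down the multiple-files route, generating the files from the original list is straightforward; a sketch (the chunk size and output prefix are arbitrary, and it assumes the file-reading variant of mycommand is happy with one ID per line):
# One ID per line, then 10000 IDs per output file (/tmp/ids_aa, /tmp/ids_ab, ...)
tr ',' '\n' < /tmp/ids.txt | split -l 10000 - /tmp/ids_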
I have a list of (bash) commands I want to run:
<Command 1>
<Command 2>
...
<Command n>
Each command takes a long time to run, and sometimes after seeing the output of (e.g.) <Command 1>, I'd like to update a parameter of <Command 5>, or add a new <Command k> at an arbitrary position in the list. But I want to be able to walk away from my machine at any time, and have it keep working through my last update to the list.
This is similar to the question here: Edit shell script while it's running. Some of those answers could be made to serve, but that question had the additional constraint of wanting to edit the script file itself, and I suspect there is a simpler answer because I don't have that exact constraint.
My current solution is to end my script with a call to a second script. I can edit the second file while the first one runs; this lets me append new commands to the end of my list, but I can't make any changes to the list of commands in the first file. And once execution has started in the second file, I can't make any more changes. I often stop my script to insert updates, and this sometimes means stopping a long command that is almost complete, just so that I can update later items on the list before I leave my machine for a while. I could of course chain together many files in this way, but that seems a mess for what (hopefully) has a simple solution.
This is more of a conceptual answer than one where I provide the full code. My idea would be to run Redis (Redis description here) - it is pretty simple to install - and use it as a data-structure server. In your case, the data structure would be a list of jobs.
So, you basically add each job to a Redis list which you can do using LPUSH at the command-line:
echo "lpush jobs job1" | redis-cli
You can then start one or more workers, in parallel if you wish, and they sit in a loop doing a blocking pop (BRPOP, which waits until a job is available) off the list and processing each job:
#!/bin/bash
# Worker: block until a job is available, then run it
while :; do
    # BRPOP returns the list name and the value; keep only the value
    job=$(echo "brpop jobs 0" | redis-cli | tail -n 1)
    $job
done
And then you are at liberty to modify the list while the worker(s) is/are running using deletions and insertions.
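For instance (LREM and LINSERT are real Redis commands; the job values here are just placeholders):
# Remove every pending copy of job3 from the queue
echo "lrem jobs 0 job3" | redis-cli
# Insert job4b into the queue, positioned relative to job5 (which must still be in the list)
echo "linsert jobs before job5 job4b" | redis-cli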
Example here.
I would say to put each command that you want to run in its own file, and in the main file call all of the command files.
ex: main.sh
#!/bin/bash
# Here you define the absolute path of your script
scriptPath="/home/script/"
# Name of your script
scriptCommand1="command_1.sh"
scriptCommand2="command_2.sh"
...
scriptCommandN="command_N.sh"
# Here you execute your script
"$scriptPath/$scriptCommand1"
"$scriptPath/$scriptCommand2"
...
"$scriptPath/$scriptCommandN"
I suppose that while one is running you can then modify the others, since they are external files.
I'm trying to create a shell script that calls two commands in two separate directories and then shows their feedback. To call a command, I'm guessing it would be something like this: ./directory/ ./script.sh
Thanks in advance for your replies.
If you want to sequentially invoke the commands:
/path/to/command1; /path/to/command2
If you want to call the second command only if the first one succeeded:
/path/to/command1 && /path/to/command2
If you want to run them in parallel:
/path/to/command1 &
/path/to/command2
The output of the commands will go to standard output (most likely the terminal). If you run the two commands in parallel and they produce some output, you might want to redirect it to different files.
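A sketch of that last point (the log file names are just examples):
# Run both in parallel, each with its own log file, then wait for both to finish
/path/to/command1 > command1.log 2>&1 &
/path/to/command2 > command2.log 2>&1 &
wait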
I wrote a script that's retrieving the currently run command using $BASH_COMMAND. The script is basically doing some logic to figure out current command and file being opened for each tmux session. Everything works great, except when user runs a piped command (i.e. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this alternative as well. First, it does not auto-expand aliases the way $BASH_COMMAND does, which in some cases could cause the script to get confused (for example, if I tell it to ignore ls but use ll instead (mapped to ls -l), the script will not ignore the command and will process it anyway), and including extra conditionals for each alias doesn't seem like a clean solution. The second problem is that I'm using HISTIGNORE to filter out some common commands which I still want the script to be aware of; using history will just make the script ignore the last command unless it's tracked by history.
I also tried using ${#PIPESTATUS[@]} to see if the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it also seems to always contain only one element.
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think that you will need to change your implementation a bit and use the history command to get it to work. Also, use the alias command to check all of the configured aliases, and the which command to check whether the command is actually found in any PATH directory. Good luck.
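A rough, hypothetical sketch of combining those suggestions (it assumes an interactive bash session where history and aliases are actually populated):
# Take the last history entry and expand a leading alias by hand
last=$(history 1 | sed 's/^ *[0-9]* *//')   # strip the leading history number
first=${last%% *}                           # first word of the command
if alias "$first" >/dev/null 2>&1; then
    # alias prints e.g. alias ll='ls -l'; keep only the definition between the quotes
    def=$(alias "$first" | sed "s/^alias $first='//; s/'\$//")
    last="$def${last#"$first"}"
fi
echo "$last"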
How can one loop a command/program in a Unix shell without writing the loop into a script or other application?
For example, I wrote a script that outputs a light sensor value, but I'm still testing it right now, so I want to run it in a loop by running the executable repeatedly.
Maybe I'd also like to just run "ls" or "df" in a loop. I know I can do this easily in a few lines of bash code, but being able to type a command in the terminal for any given set of commands would be just as useful to me.
You can write the exact same loop you would write in a shell script by writing it on one line, putting semicolons instead of newlines, as in
for NAME [in LIST ]; do COMMANDS; done
At that point you could write a shell script called, for example, repeat that, given a command, runs it N times, by simply replacing COMMANDS with $1.
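One possible shape of such a repeat script (the name and argument order are only an example):
#!/bin/bash
# Usage: ./repeat N command [args...]
n=$1
shift
for ((i = 0; i < n; i++)); do
    "$@"
done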
I recommend the use of watch; it does exactly what you want, and it clears the terminal before each execution of the command, so it's easy to monitor changes.
You probably have it already, just try watch ls or watch ./my_script.sh. You can even control how much time to wait between each execution, in seconds, with the -n option, and you can use -d to highlight the difference in the output of consecutive runs.
Try:
Run ls every second:
watch -n 1 ls
Run my_script.sh every 3 seconds, and highlight differences:
watch -n 3 -d ./my_script.sh
watch program man page:
http://linux.die.net/man/1/watch
This doesn't exactly answer your question, but I felt it was relevant. One of the great things with shell looping is that some commands return lists of items. Of course that is obvious, but something you can do using the for loop is execute a command on each item in that list.
for file in `find . -name '*.wma'`; do cp "$file" ./new/location/; done
You can get creative and do some very powerful stuff.
Aside from accepting arguments, anything you can do in a script can be done on the command line. Earlier I typed this directly into bash to watch a directory fill up as I transferred files:
while sleep 5s
do
    ls photos
done