Is it possible to install all sub-packages within a specific package in a single command?
Something along the lines of "emerge sys-apps/*" (obviously this doesn't work; it's just an example to help get my point across) to install all packages within the sys-apps category.
No, you can't.
You can have a set with the desired packages, or simply substitute the result of another command as parameters to emerge -- e.g. something like:
$ emerge $(find /usr/portage/sys-apps -maxdepth 1 -type d | tail -n +2 | sed 's,/usr/portage/,,')
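If you want this to be repeatable, a custom package set is the cleaner option. A minimal sketch, assuming a Portage version with /etc/portage/sets support; the set name my-sys-apps and the listed atoms are placeholder examples:
# list the desired atoms in a set file, one per line
mkdir -p /etc/portage/sets
printf '%s\n' sys-apps/lsof sys-apps/pv > /etc/portage/sets/my-sys-apps
# then install the whole set at once
emerge --ask @my-sys-apps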
I have a directory which has 70000 xml files in it. Each file has a tag which looks something like this, for the sake of simplicity:
<ns2:apple>, <ns2:orange>, <ns2:grapes>, <ns2:melon>. Each file has only one fruit tag, i.e. there cannot be both apple and orange in the same file.
I would like to rename every file (add "1_" to the beginning of each filename) which has one of <ns2:apple>, <ns2:orange>, <ns2:melon> inside it.
I can find such files with egrep:
egrep -r '<ns2:apple>|<ns2:orange>|<ns2:melon>'
So how would it look as a bash script, which I can then use as a cron job?
P.S. Sorry, I don't have a bash script draft; I have very little experience with it and time is of the essence right now.
This can be done with a script like this:
#!/bin/sh
find /path/to/directory/with/xml -type f | while IFS= read -r f; do
    # prefix the basename, not the whole path, so the file stays in its directory
    grep -q -E '<ns2:apple>|<ns2:orange>|<ns2:melon>' "$f" && mv "$f" "$(dirname "$f")/1_$(basename "$f")"
done
But it will rescan the directory each time it runs and prepend 1_ to every file containing one of your tags. That means a lot of excess I/O, and already-renamed files will gain another 1_ prefix on each run, resulting in names like 1_1_1_1_file.xml.
You should probably think more about the design, e.g. move processed files into two directories based on whether the file has the tags or not:
#!/bin/sh
# create output dirs
mkdir -p /path/to/directory/with/xml/with_tags/ /path/to/directory/with/xml/without_tags/
find /path/to/directory/with/xml -maxdepth 1 -mindepth 1 -type f | while IFS= read -r f; do
    if grep -q -E '<ns2:apple>|<ns2:orange>|<ns2:melon>' "$f"; then
        mv "$f" /path/to/directory/with/xml/with_tags/
    else
        mv "$f" /path/to/directory/with/xml/without_tags/
    fi
done
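To run it from cron, an entry like the following would work (a sketch; the schedule and the script path /usr/local/bin/sort_xml.sh are placeholders for wherever you save the script above):
*/10 * * * * /usr/local/bin/sort_xml.sh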
Run this command as a dry run first, then remove --dry-run to actually rename the files:
grep -Pl '(<ns2:apple>|<ns2:orange>|<ns2:melon>)' *.xml | xargs rename --dry-run 's/^/1_/'
The command-line utility rename comes in many flavors, and most of them should work for this task. I used rename version 1.601 by Aristotle Pagaltzis. To install rename, simply download its Perl script and place it in your $PATH, or install it with conda, like so:
conda install rename
Here, grep uses the following options:
-P : Use Perl regexes.
-l : Suppress normal output; instead print the name of each input file from which output would normally have been printed.
SEE ALSO:
grep manual
I need to batch rename 40000 files in a folder, appending a zero-padded counter to each name, like this: something.jpg to something_00001.jpg. I'd like to use the rename command, but anything that works will do. Any help? Thanks!
These are powerful commands that will make lots of changes very rapidly - please test on a copy of a small subset of your data.
Method 1 - Using "rename" (Perl tool)
This should work with rename, which you can install on macOS with:
brew install rename
The command would be:
rename --dry-run -N "00001" 's/\.jpg$/_$N.jpg/' *.jpg
Remove the --dry-run to actually execute the command rather than just tell you what it would do.
Method 2 - Using GNU Parallel
Alternatively, this should work with GNU Parallel, which you can install on macOS with:
brew install parallel
The command would be:
find . -name \*.jpg -print0 | parallel -0 --dry-run mv {} {.}_{#}.jpg
where:
{} means "the current file",
{.} means "the current file minus extension", and
{#} means "the (sequential) job number"
Remove the --dry-run to actually execute the command rather than just tell you what it would do.
You mentioned you wanted an offset, so the following works with an offset of 3 (remove the echo once the output looks right):
find . -name \*.jpg -print0 | parallel -0 'printf -v new "%s_%05d.jpg" "{.}" $(({#}+3)); echo mv "{}" "$new"'
Method 3 - No additional software required
This should work using just standard, built-in tools:
#!/bin/bash
for f in *.jpg; do
    # Figure out new name
    printf -v new "%s_%05d.jpg" "${f%.jpg}" $((cnt+=1))
    echo Would rename \"$f\" as \"$new\"
    #mv "$f" "$new"
done
Remove the # in the penultimate line to actually do the rename.
Method 4 - No additional software required
This should work using just standard, built-in tools. Note that macOS comes with Perl installed, and this should be faster as it doesn't start a new mv process for each of 40,000 files like the previous method. Instead, Perl is started just once, it reads the null-terminated filenames passed to it by find and then executes a library call for each:
find . -name \*.jpg -print0 | perl -0 -e 'while(<>){ chomp; ($sthg=$_)=~s/\.jpg$//; $new=sprintf("%s_%05d.jpg",$sthg,++$cnt); print "Rename $_ as $new\n"; }'
If that looks correct, swap the print for Perl's built-in rename so it actually renames the files rather than just telling you what it would do:
find . -name \*.jpg -print0 | perl -0 -e 'while(<>){ chomp; ($sthg=$_)=~s/\.jpg$//; $new=sprintf("%s_%05d.jpg",$sthg,++$cnt); rename $_, $new; }'
Here's a solution using renamer (a Node.js tool, installable with npm install -g renamer).
$ tree
.
├── beach.jpg
└── sky.jpg
$ renamer --path-element name --index-format %05d --find /$/ --replace _{{index}} *
$ tree
.
├── beach_00001.jpg
└── sky_00002.jpg
I have git repositories in a directory. For example,
$ ls
repo1 repo2 repo3 repo4
I want to quickly see the last k commit logs of all the repositories.
(k is something like 3.)
For repo1, I can print the last 3 commit logs and go back to the directory like this:
$ cd repo1; git log -3 ; cd ../
But I do not want to repeat this for all the repositories. I'm looking for a smart way to do it easily. (Maybe use xargs?)
I'm using Bash.
Thank you.
Often it's pointless to use xargs, since find can execute stuff on its own:
find ~/src/ -maxdepth 2 -name .git -execdir git log -3 \;
Explanation:
find ~/src/
Look for stuff under ~/src/. You can pass multiple arguments if you want, possibly as a result of a shell glob.
-maxdepth 2
Don't recurse deeply. This saves a lot of time, since hitting the filesystem is relatively slow.
-maxdepth 2 will find ~/src/.git (if it exists) and ~/src/foo/.git, so it can be used whether you pass the repo directory itself or just the directory containing all the repos.
-maxdepth 1 would work (and be easier on IO) if you only want to pass the repo directories themselves.
-maxdepth 3 might have occasional use for certain source hierarchies, but for them you're probably better just passing additional directories to find in the first place.
-name .git
We're looking for the .git directory (or file! yes, git does that), because -execdir normally takes a {} argument which it passes to the command. The passed filename would just be the basename (so that e.g. mv works ... remember that find often works with files), with the working directory set to whatever contains that.
-execdir ... \;
Run a command in the repo itself. The command (here ...) can include anything, notably options that will not be interpreted by find itself, except that {} anywhere in a word is replaced by the filename, a lone ; terminates the command (escaped here for the shell), and + terminates the command but passes multiple files at once.
For this use case, we don't need to pass a filename, since the directory the program is run in provides all the needed information.
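Putting it together for the question's k = 3, with each repository labelled by its path (a sketch; the sh -c wrapper is just one way of running two commands per repository):
find ~/src/ -maxdepth 2 -name .git -execdir sh -c 'pwd; git log -3 --oneline' \;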
I have something similar, which can easily be changed to satisfy your requirement:
#!/bin/bash
CODE_BASE=(/parentdir/to/your/repositories
           /another/parent/dirs
           /another/parent/dirs/if/you/have
)
EXCLUDE_PATT="gitRepoYouWantToIgnore" # this is a regex
for base in "${CODE_BASE[@]}"; do
    echo "##########################"
    echo " scanning $base"
    echo "##########################"
    for line in $(find "$base" -name ".git" | grep -v "$EXCLUDE_PATT"); do
        line=$(sed 's#/\.git##' <<<"$line")
        repo=$(awk -F'/' '$0=$NF' <<<"$line")
        echo "##########################"
        echo "====> Showing log of Repository: $repo <===="
        echo "##########################"
        git -C "$line" log -3
    done
done
Save it as showlog.sh, for example, make it executable, and run it. You can add more log parameters to make the output fit your needs.
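For example, from the directory where you saved it:
chmod +x showlog.sh
./showlog.sh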
I'm doing a find and locating several executables that I want to run with -v. I tried something like this:
find somefilters | xargs -I % % -v
Unfortunately, xargs seems to require that the "utility" be a fixed binary rather than a binary provided on stdin. Does anyone have a recipe for doing this command-line magic?
Use the -exec primary:
find ... -exec '{}' -v \;
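For example, to run every owner-executable regular file in a directory with -v (a sketch standing in for whatever your "somefilters" expression is; ./bin is a placeholder path):
find ./bin -maxdepth 1 -type f -perm -u+x -exec '{}' -v \;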
Yet another way around this - use xargs to write a shell script for you:
find somefilters | xargs -n 1 -I % echo % -v | ${SHELL}
That won't work so well if any of the programs require interactivity, but if the -v option just prints a version number or the like (one common meaning, the other being a verbose flag), it should work fine.
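If you'd rather not generate a script on the fly, a plain while loop does the same job one program at a time (a sketch; substitute your real find expression for somefilters):
find somefilters | while IFS= read -r prog; do
    "$prog" -v
done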
I have a script that runs git commands over a number of repositories in parallel with GNU Parallel. I would like to pass the output of the git command through grep and color certain parts; for example, on git status I want the word "clean" to appear green. Is there any way to do this with GNU Parallel and grep?
This is my script so far:
#!/bin/bash
START_DIR=`pwd`
export GIT_ARGS=$*
function do_git() {
    PROJECT_DIR=`dirname $1`
    cd $PROJECT_DIR
    echo ""
    pwd
    git $GIT_ARGS
    echo ""
    cd $START_DIR
}
export -f do_git
find . -maxdepth 2 -type d -name ".git" | sort | parallel --max-procs 4 "do_git {}"
Try adding this to the end of your pipeline:
| grep -E --color 'clean|word1|word2|$'
Substitute, add, or remove words as needed. The $ matches every line, so all lines pass through (colored only where a word matches). The --color option is for GNU grep; other versions of grep may use a different option.
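Applied to the find/parallel line from the question, it could look like this (a sketch; GREP_COLORS is specific to GNU grep, and ms=01;32 is what makes the matches green rather than grep's default red):
find . -maxdepth 2 -type d -name ".git" | sort | parallel --max-procs 4 "do_git {}" | GREP_COLORS='ms=01;32' grep -E --color=always 'clean|$'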
Alternatively, there are several utilities that can do colorization.
General tips:
Avoid using all-caps variable names to prevent name collision with shell variables
Use $() instead of backticks - they're more readable and more versatile (e.g. nesting)
Using the function keyword is unnecessary
See BashFAQ/028 regarding trying to use the location of your script
I don't think GIT_ARGS needs to be exported
To force grep to show colours when using parallel, try grep --color=always
I'll see if I can come up with a good suggestion for showing the color.
Meanwhile, I think you could improve your script like this:
#!/bin/bash
do_git() {
    project_dir=${1%.git}
    cd "$project_dir" || return
    echo
    pwd
    git "${@:2}"
    echo
}
export -f do_git
find . -maxdepth 2 -type d -name '.git' | sort | parallel --max-procs 4 do_git '{}' "$@"
You don't have to change back with cd "$START_DIR", since each do_git call runs in its own subshell (one per parallel job) and won't affect the calling shell.
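You can verify the subshell behavior in isolation; parentheses create a subshell, just like parallel's per-job shells (here /home/user stands for wherever you start):
$ pwd
/home/user
$ (cd /tmp && pwd)
/tmp
$ pwd
/home/user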