Pipe shell output to svn del command? - shell

I have a rather complicated deploy setup for our Drupal site that is a combination of CVS and SVN. We use CVS to get the newest versions of modules, and we deploy with SVN. Unfortunately, when CVS updates remove files, Subversion complains because they weren't removed in SVN. I am trying to use some shell scripting and Perl to run an svn rm command on all the files that have already been deleted from the filesystem, but I haven't gotten far. What I have so far is this:
svn st | grep !
This outputs a list of all the files deleted from the filesystem like so:
! panels_views/panels_views.info
! panels_views/panels_views.admin.inc
! contexts/term.inc
! contexts/vocabulary.inc
! contexts/terms.inc
! contexts/node_edit_form.inc
! contexts/user.inc
! contexts/node_add_form.inc
! contexts/node.inc
etc.
However, I want to somehow run an svn del on each of these files. How can I get this output into my Perl script or, alternatively, run svn del on each line directly?
Edit: The exact command I used, with some help from the answers, was:
svn st | grep ^! | cut -c 9- | xargs svn del
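For reference, here is the same pipeline annotated (a sketch; the fixed column width that cut -c 9- strips can vary between Subversion versions, hence the experimenting mentioned in the answer below):
svn st |          # status of the working copy
  grep ^! |       # keep only "missing" entries: gone from disk, still tracked by svn
  cut -c 9- |     # drop the fixed-width status columns, leaving just the path
  xargs svn del   # schedule every listed path for deletion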

Try using xargs, like this:
svn st | grep ^! | cut -f2 | xargs svn rm
The xargs command takes lines on its standard input and turns them into command-line arguments for svn rm. By default, xargs passes multiple lines to each invocation of its command, which in the case of svn rm is fine.
You may also have to experiment with the cut command to get it just right. By default, cut uses a tab as a delimiter and Subversion may output spaces there. In that case, you may have to use cut -d' ' -f6 or something.
As always when building such a command pipeline, run portions at a time to make sure things look right. So run everything up to the cut command to ensure that you have the list of file names you expect, before running it again with "| xargs svn rm" on the end.
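The batching behavior is easy to see with a toy pipeline (an illustration only, nothing svn-specific):
printf 'a\nb\nc\n' | xargs echo
a b c
All three input lines became arguments to a single echo invocation; svn rm receives its file names the same way.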

svn st | egrep ^! | cut -b 9- | xargs svn del

Just as an alternative to the ones above, I'd use something like:
svn st | awk '$1=="!"{print $2}' | xargs svn del
I find awk's pattern-matching language very handy for tasks like this.
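For example, fed a couple of fake status lines (illustration only), the awk filter keeps just the missing file's path:
printf '! contexts/node.inc\nM README.txt\n' | awk '$1=="!"{print $2}'
contexts/node.inc
One caveat: printing only field $2 truncates file names that contain spaces.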

Related

How to use grep/git grep with pipe output?

Is it possible to use pipe output as input for grep or git grep? The data I'm trying to pass to grep/git grep is the following:
kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/"
argocd
default
kube-node-lease
kube-public
kube-system
nsx-system
pks-system
I've tried to extend the command, but this results in an error:
kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/" | xargs git grep -i
fatal: ambiguous argument 'default': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
Using just grep results in:
kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/" | xargs grep -i
grep: default: No such file or directory
grep: kube-node-lease: No such file or directory
grep: kube-public: No such file or directory
grep: kube-system: No such file or directory
grep: nsx-system: No such file or directory
grep: pks-system: No such file or directory
The issue I'm facing with grep in general in this particular case is that even if I only use grep within my directory, it takes ages until it's done, whereas git grep finishes within seconds. Unless I'm doing something terribly wrong that would explain the slow grep results, getting git grep to work would be preferred.
I've found this other Stack Overflow question that somewhat explains what the issue is, but I don't know how to "process" the output into git grep properly.
The problem is that (as your output shows) the result is multiple terms, which I'm guessing you want to be OR-ed together, not a search for the first term in the files identified by the remaining terms (which is what the current xargs command does).
Since OR in a regex is the | character, you can use xargs echo to fold the vertical list into a space-delimited horizontal list, then replace the spaces with | and be pretty close to what you want:
printf 'alpha\nbeta\ncharlie\n' | xargs echo | tr ' ' '|' | xargs git grep -i
Although, due to the folding operation, that command is an xargs of one line, so it would be conceptually easier to reason about using just normal $() interpolation:
git grep -i $(printf 'alpha\nbeta\ncharlie\n' | xargs echo | tr ' ' '|')
The less "whaaa" shell pipeline would be to use kubectl get -o go-template= to actually emit a pipe-delimited list and feed that right into xargs (or $()), bypassing the need to massage the output text first

How to get all revisions in subversion URL (trunk/branch) based on a string in svn comments?

Need some help with a shell command to get all revs in a Subversion trunk URL based on a string in the svn comments.
I figured out how to get it for one file, but not for a URL.
I tried svn log URL --stop-on-copy and svn log URL --xml to get the revs, but was unsuccessful.
Thanks !!
Another way, using sed. It's probably not perfect, but it also works with multiline comments. Replace SEARCH_STRING with your search string.
svn log -l100 | sed -n '/^r/{h;d};/SEARCH_STRING/{g;s/^r\([[:digit:]]*\).*/\1/p}'
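The same script spread over several lines with comments, if that helps readability (GNU sed; behavior unchanged):
svn log -l100 | sed -n '
  /^r/ { h; d }                      # revision header ("r123 | ..."): stash it in hold space
  /SEARCH_STRING/ {                  # a comment line matched the search string:
    g                                #   recall the stashed header
    s/^r\([[:digit:]]*\).*/\1/p      #   print just the revision number
  }'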
For Subversion 1.8 and later it's:
svn log URL --search STRING
Try the following.
x="refactoring"; svn log --limit 10 | egrep -i --color=none "($x|^r[0-9]+ \|.*lines$)" | egrep -B 1 -i --color=none $x | egrep --color=none "^r[0-9]+ \|.*lines$" | awk '{print $1}' | sed 's/^r//g'
Replace refactoring with your search string.
Change the svn log parameters to suit your needs.
Case-insensitive matching is used (egrep -i).
Edit, based on a comment:
x="ILIES-113493"; svn log | egrep -i --color=none "($x|^r[0-9]+ \|.*lines$)" | egrep -B 1 -i --color=none $x | egrep --color=none "^r[0-9]+ \|.*lines$" | awk '{print $1}' | sed 's/^r//g'
Notes:
x is the variable containing the search string; it is used in two places in the command.
To use x as a shell variable, you need to put the entire command on a single line (from x=".."; through sed '...'). A semicolon ; separates multiple commands on the same line.
I used --limit 10 in the example to limit the number of log entries; change that, and the other svn log parameters, to suit your needs. --limit 10 restricts the search to the 10 most recent log entries.
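A minimal illustration of that one-line pattern (the string is hypothetical):
x="refactoring"; echo "first use: $x"; echo "second use: $x"
Both echo commands see the same $x because the assignment and its uses run as one pasted line.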
Thanks all for the help !! This worked for me:
svn log $URL --stop-on-copy | grep -B 2 $STRING | grep "^r" | cut -d"r" -f2 | cut -d" " -f1
Use "--stop-on-copy" or "--limit" options depending on the requirement.

Manipulating a file - bash

I need some guidance manipulating a text file that is the result of a diff. I only want the results listed after the > delimiter (which are file names); I will then add a path to each file name for further work.
I am not dealing with large files.
I am hoping to do it all in place.
Essentially I want to take something like this
96a97,98
> SCR-33333.sql
> SCR-33333-WEB.sql
and create an action like
cp /add/this/path/SCR-33333.sql /to/somewhere/else
Can anyone please give me a quick example I can run with?
Well, you could try this, bearing in mind that it'll only work if the filenames do not contain spaces...
diff this that | awk '/^>/{print "/add/this/path/" $2}' | xargs -i cp {} /to/somewhere/else
grep ">" dummy.txt | cut -f 2 -d ' ' | xargs -I{} cp /add/this/path/{} somewhere
where 'dummy.txt' is your diff file.

To show only file name without the entire directory path

ls /home/user/new/*.txt prints all the txt files in that directory. However, it prints the output as follows:
[me#comp]$ ls /home/user/new/*.txt
/home/user/new/file1.txt /home/user/new/file2.txt /home/user/new/file3.txt
and so on.
I want to run the ls command not from the /home/user/new/ directory, so I have to give the full directory name, yet I want the output to be only:
[me#comp]$ ls /home/user/new/*.txt
file1.txt file2.txt file3.txt
I don't want the entire path; only the filename is needed. This issue has to be solved using the ls command, as its output is meant for another program.
ls whateveryouwant | xargs -n 1 basename
Does that work for you?
Otherwise you can (cd /the/directory && ls) (yes, parentheses intended)
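The parentheses matter: they run the cd in a subshell, so your current directory is unchanged afterwards. An illustrative session (paths made up):
[me#comp]$ pwd
/some/other/dir
[me#comp]$ (cd /home/user/new && ls)
file1.txt file2.txt file3.txt
[me#comp]$ pwd
/some/other/dir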
No need for xargs and all that; ls is more than enough.
ls -1 *.txt
lists one file per line (run from inside the directory).
There are several ways you can achieve this. One would be something like:
for filepath in /path/to/dir/*
do
    filename=$(basename "$filepath")    # quotes protect names containing spaces
    # ... whatever you want to do with the file here
done
Use the basename command (plain basename takes a single name, so with GNU coreutils use -a to accept several at once):
basename -a /home/user/new/*.txt
(cd dir && ls)
will only output filenames in dir. Use ls -1 if you want one per line.
(Changed ; to && as per Sactiw's comment).
You could add a sed script to your command line:
ls /home/user/new/*.txt | sed -r 's/^.+\///'
A fancy way to solve it is by using "rev" twice, with "cut" in between:
find ./ -name "*.txt" | rev | cut -d '/' -f1 | rev
The selected answer did not work for me, as I had spaces, quotes and other strange characters in my filenames. To quote the input for basename, you should use:
ls /path/to/my/directory | xargs -n1 -I{} basename "{}"
This should work with most file names, including ones containing spaces.
I prefer basename, which fge's answer already covers.
Another way is:
ls /home/user/new/*.txt|awk -F"/" '{print $NF}'
One more ugly way is:
ls /home/user/new/*.txt| perl -pe 's/\//\n/g'|tail -1
Just hoping to be helpful to someone, as old problems seem to come back every now and again, and I always find good tips here.
My problem was to list, in a text file, the names of all the "*.txt" files in a certain directory, without path and without extension, from a Datastage 7.5 sequence.
The solution we used is:
ls /home/user/new/*.txt | xargs -n 1 basename | cut -d '.' -f1 > name_list.txt
There are lots of ways to do this; you can simply try the following.
ls /home/user/new | grep '\.txt$'
Another method:
cd /home/user/new && ls *.txt
Here is another way:
ls -1 /home/user/new/*.txt|rev|cut -d'/' -f1|rev
You could also pipe to grep and pull everything after the last forward slash. It looks goofy, but I think a defensive grep should be fine unless (like some kind of maniac) you have forward slashes within your filenames.
ls folderpathwithcriteria | grep -P -o -e "[^/]*$"
When you want to list names in a path but they have different file extensions:
me#server:/var/backups$ ls -1 *.zip && ls -1 *.gz

Using linux "cut" with stdin

I'm trying to pipe data into "cut" to, say, cut away the first column of text. This works
$ cat test.txt | cut -d' ' -f2-
Reading from stdin also works:
$ cut -d' ' -f2- -
? doc/html/analysis.html
? doc/html/classxytree-members.html
<CTRL+D>
However, as soon as a pipe is involved, it doesn't accept my <CTRL+D> anymore, and I can't signal "end of file":
$ cut -d' ' -f2- - | xargs echo
Update: This is apparently a bug in an old version of bash (3.00.15). It does work in more recent versions (tried 4.0.33 and 3.2.25). It would be nice to have some workaround, though, since I can't easily upgrade.
Background: I've got a script/oneliner that gives me a condensed output of cvs status (I know, CVS...) in the form
? filename
e.g. for a file not committed yet. I'd like to be able to copy and paste parts of the output from that command and use it as input to another command that adds these files to CVS. Say:
$ cut -d' ' -f2- | xargs cvs add
<paste lines>
<CTRL-D> # <-- doesn't work
Ideas?
Have you tried:
$ cat | cut -d' ' -f2- | xargs cvs add
<paste lines>
<CTRL-D> # <-- doesn't work
Your examples work fine for me. What shell are you using? What utilities?
One thing that sometimes trips people up is that Ctrl-D only works if it's the first character in the line. If you copy and paste, you might sometimes accidentally have whitespace as the first character of the line, or no newline at the end of the pasted block, in which case Ctrl-D won't work. Just hit return and then try Ctrl-D again and see if that fixes your problem.
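The at-start-of-line behavior is easy to check with a trivial pipeline (illustrative transcript):
$ cat | wc -l
hello
world
<CTRL+D>
2
At the start of a line, Ctrl-D makes the terminal report end-of-file, so cat exits and wc -l prints its count; typed mid-line, the same keystroke only flushes the partial line to cat.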
