How can I recursively run through an entire Subversion repository and list the files containing a specific text?
You could use ack (site: http://betterthangrep.com/ , I like the domain name ;) ):
It ignores the .svn directories by default and, being a Perl program, runs on multiple platforms, including Windows.
Usage Example:
Find all headers #included in C programs:
ack --cc '#include\s+<(.*)>' --output '$1' -h
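For the question itself, a plain recursive search over a checked-out working copy is enough, since ack skips the .svn metadata on its own (the pattern and path below are placeholders):
# List only the names of files containing the text, ignoring .svn
ack -l 'specific text' /path/to/working/copy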
Testimonial example:
"Grepping of SVN repositories was driving me crazy until I found ack. It fixes all of my grep annoyances and adds features I didn't even know I wanted." --
If you have a Subversion client installed, you can list all the versioned files with this command:
svn info -R repository_root
then extract the file names from that output (the Path: field) and grep those files (like in the other answer) to find the ones that match.
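A rough sketch of that pipeline, assuming a Unix-like shell and using "specific text" as a stand-in for the string you are after:
# Pull the Path: field out of the svn info output, then print the names
# of the regular files that contain the text.
svn info -R repository_root \
  | sed -n 's/^Path: //p' \
  | while IFS= read -r path; do
        [ -f "$path" ] && grep -l "specific text" "$path"
    done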
I'm attempting to add about 40k files to Mercurial using 'hg add' on Windows 10. Due to downstream constraints I need to split this up into chunks of 10k files per add/commit, so I need to pass large lists of files to hg add. However, I'm hitting Windows' command-line argument length limits.
Is there a way to pass file lists stored in a file to hg add? AFAIK I can't just use pipes, as that is just reading the file and passing its contents as arguments; I think hg would need to read the text file directly.
You can solve this by using Mercurial's extensive pattern functionality and possibly file sets (see hg help patterns and hg help filesets).
One type of pattern you can specify is the listfile: pattern, which takes a filename as an argument; that file must contain a newline-separated list of patterns (which of course can be plain filenames).
So, you would first produce lists of the files in question, split up across several files. Let's assume they're called files1.txt, files2.txt, ...
Then you'd do the following:
hg add listfile:files1.txt
hg commit
hg add listfile:files2.txt
hg commit
...
If that by itself isn't sufficient, you can use file sets to further modify what files get added.
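To produce the list files themselves, here is a rough sketch assuming a Unix-like shell is available (e.g. Git Bash on Windows 10); split's default output names (filesaa, filesab, ...) are used instead of files1.txt:
# One untracked path per line, without the status prefix
hg status --unknown --no-status > new-files.txt
# Chop the list into chunks of 10000 names
split -l 10000 new-files.txt files
for chunk in files??; do
    hg add "listfile:$chunk"
    hg commit -m "add batch $chunk"
done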
(This is not a direct answer to how to get hg to read from a list of files, but an alternative idea.)
My thought is to use hg addremove instead of add, since it is already designed to work in batches.
From hg addremove --help:
hg addremove [OPTION]... [FILE]...
add all new files, delete all missing files
...
options ([+] can be repeated):
-I --include PATTERN [+] include names matching the given patterns
...
-n --dry-run do not perform actions, just print output
(some details omitted)
So a command like the following might work:
hg addremove --include path\to\files\prefix*.*
(The question doesn't have any specifics about filenames or paths to know more specifically how to structure the include details.)
This only marks the files for addition; it doesn't automatically commit. So you would be able to run this multiple times, varying the inclusion criteria, committing each time in multiple batches.
Also note that hg commit accepts a similar --include option. So alternatively you could addremove 100% of your files in one go, and then issue multiple commits, each covering only a portion of the files. For instance, commit one directory at a time, or use some other scheme that is easy to express and stays under the 10K file limit.
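A sketch of that approach, with dir1 and dir2 as placeholder directory names:
hg addremove                                     # mark everything in one go
hg commit --include "path:dir1" -m "add dir1"    # commit only the files under dir1
hg commit --include "path:dir2" -m "add dir2"    # then the next slice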
I'm working on a very large project. I need to search all the pom.xml files under the trunk folders (I have a hundred different trunk folders) to find out whether they contain a certain dependency.
I can't download the whole svn repository, so I need to do the search remotely. Is there an efficient way to do it?
I think I should locate the trunk folders, then find all the pom.xml files in them, and finally grep their content for my dependency, but I don't know how to do that on the remote server :/
svn ls -R in the root of the tree, grep for the needed files, construct the full URLs
svn cat URL/OF/FILE | grep PATTERN for each file
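Put together, a sketch of those two steps, where REPO and the dependency string are placeholders:
REPO=https://svn.example.com/repo
# Step 1: list every versioned path and keep the pom.xml files under a trunk folder
svn ls -R "$REPO" | grep 'trunk/.*pom\.xml$' |
# Step 2: stream each file and report the ones that mention the dependency
while IFS= read -r path; do
    svn cat "$REPO/$path" | grep -q 'my.dependency.groupId' && echo "$REPO/$path"
done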
Use a combination of the find and grep commands, similar to (not exact):
grep -R "192.168.1.5" /etc/
or in combination with the find command:
find . -name "*.xml" | xargs grep -n "bla"
see here
I want to export only the changes (files and folders - the tree) between the HEAD and PREVIOUS revisions using the svn export command. Is that possible? How can I do it under Windows (command line)?
svn diff -rPREV:HEAD --summarize
Will give you a list of the files modified. You should be able to iterate over the list with whatever scripting language you have available, create the necessary directories, and then use svn cat commands to get the contents of each file.
This would be quite easy in bash, maybe someone else can tell you where to go from here in Windows.
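A bash sketch of that loop, run from inside the working copy, with EXPORT_DIR as a placeholder for the output folder:
EXPORT_DIR=../exported-changes
svn diff -r PREV:HEAD --summarize |
# first column is the status letter, second the path; skip deletions
awk '$1 != "D" {print $2}' |
while IFS= read -r path; do
    [ -f "$path" ] || continue                       # skip directory entries
    mkdir -p "$EXPORT_DIR/$(dirname "$path")"
    svn cat -r HEAD "$path" > "$EXPORT_DIR/$path"
done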
I have a project, where I'm forced to use ftp as a means of deploying the files to the live server.
I'm developing on Linux, so I hacked together a bash script that makes a backup of the ftp server's contents,
deletes all the files on the ftp server, and uploads all the fresh files from the Mercurial repository
(it also takes care of user-uploaded files and folders, post-deploy changes, etc.).
It's working well, but the project is starting to get big enough to make the deployment process too long.
I'd like to modify the script to look up which files have changed, and only deploy the modified files. (the backup is fine atm as it is)
I'm using mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files,
and upload each modified file, and delete each removed file.
I can use hg log -vr rev1:rev2, and from the output, I can carve out the changed files with grep/sed/etc.
Two problems:
I have heard the horror stories about how parsing the output of ls leads to insanity, so my guess is that the same applies here:
if I try to parse the output of hg log, the file names will undergo word-splitting and all kinds of other transformations.
hg log doesn't tell me whether a file was modified, added or deleted. Differentiating between modified and deleted files is the minimum I need.
So, what would be the correct way to do this? I'm using yafc as an ftp client, in case it's needed, but willing to switch.
You could use a custom style that does the parsing for you.
hg log --rev rev1:rev2 --style mystyle
Then pipe it to sort -u to get a unique list of files. The file "mystyle" would look like this:
changeset = '{file_mods}{file_adds}\n'
file_mod = '{file_mod}\n'
file_add = '{file_add}\n'
The file_mods and file_adds templates expand to the files modified or added in each changeset. There are similar file_dels and file_del templates for deleted files.
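Putting it together, the full pipeline from the text above would be:
hg log --rev rev1:rev2 --style mystyle | sort -u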
Alternatively, you could use hg status -ma --rev rev1-1:rev2, which adds an M or an A before modified/added files. You need to pass a different revision range, one less than rev1, as it is the status since that "baseline". Removed files are similar - you need the --removed flag, and an R is added before each removed file.
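A minimal sketch of the hg status route, using --print0 so filenames with spaces survive the shell; the revision numbers and the echo placeholders are assumptions to be replaced with your real revisions and ftp commands:
REV1=41   # last deployed revision (placeholder)
REV2=tip  # revision being deployed (placeholder)
# modified and added files: upload each one
hg status -ma -n -0 --rev "$REV1:$REV2" |
while IFS= read -r -d '' f; do
    echo "upload $f"      # replace with the yafc/ftp upload command
done
# files removed between the two revisions: delete them on the server
hg status --removed -n -0 --rev "$REV1:$REV2" |
while IFS= read -r -d '' f; do
    echo "delete $f"      # replace with the ftp delete command
done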
I have a series of files named filename.part0.tar, filename.part1.tar, … filename.part8.tar.
I guess tar can create multiple volumes when archiving, but I can't seem to find a way to unarchive them on Windows. I've tried to untar them using 7zip (GUI & commandline), WinRAR, tar114 (which doesn't run on 64-bit Windows), WinZip, and ZenTar (a little utility I found).
All programs run through the part0 file, extracting 3 rar files, then quit reporting an error. None of the other part files are recognized as .tar, .rar, .zip, or .gz.
I've tried concatenating them using the DOS copy command, but that doesn't work, possibly because part0 through part6 and part8 are each 100 MB, while part7 is 53 MB and therefore likely the last part. I've tried several different logical orders for the files in the concatenation, but no joy.
Other than installing Linux, finding a live distro, or tracking down the guy who left these files for me, how can I untar these files?
Install 7-zip. Right click on the first tar. In the context menu, go to "7zip -> Extract Here".
Works like a charm, no command-line kung-fu needed:)
EDIT:
I only now noticed that you mention already having tried 7zip. It might have balked if you tried to open the tar via "Open with" -> 7zip; its command line for opening files is a little unorthodox, so you have to associate the file type through 7zip itself instead of through the file association system built into Windows. The right click -> "7-Zip" -> "Extract Here" route should work, though - I tested the solution myself (albeit on a 32-bit Windows box; I don't have a 64-bit one available).
1) Download gzip (http://www.gzip.org/) for Windows and unpack it
2) gzip -c filename.part0.tar > foo.gz
gzip -c filename.part1.tar >> foo.gz
...
gzip -c filename.part8.tar >> foo.gz
3) Unpack foo.gz
Worked for me.
As above, I had the same issue and ran into this old thread. For me it was a severe case of RTFM when installing a Siebel VM. These instructions were straight from the manual:
cat \
OVM_EL5U3_X86_ORACLE11G_SIEBEL811ENU_SIA21111_PVM.tgz.1of3 \
OVM_EL5U3_X86_ORACLE11G_SIEBEL811ENU_SIA21111_PVM.tgz.2of3 \
OVM_EL5U3_X86_ORACLE11G_SIEBEL811ENU_SIA21111_PVM.tgz.3of3 \
| tar xzf -
Worked for me!
The tar -M switch should do it for you on Windows (I'm using tar.exe).
tar --help says:
-M, --multi-volume create/list/extract multi-volume archive
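For example, with a GNU tar build on Windows (file names taken from the question above), you start the extraction from the first part:
tar -x -M -f filename.part0.tar
When tar asks you to prepare the next volume, you can answer n filename.part1.tar (and so on for each part) to point it at the next file - at least that is how GNU tar's interactive multi-volume prompt behaves.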
I found this thread because I had the same problem with these files. Yes, the same exact files you have. Here's the correct order: 042358617 (i.e. start with part0, then part4, etc.)
Concatenate in that order and you'll get a tarball you can unarchive. (I'm not on Windows, so I can't advise on what app to use.) Note that of the 19 items contained therein, 3 are zip files that some unarchive utilities will report as being corrupted. Other apps will allow you to extract 99% of their contents. Again, I'm not on Windows, so you'll have to experiment for yourself.
Enjoy! ;)
This works well for me with multi-volume tar archives (numbered .tar.1, .tar.2 and so on) and even lets you --list or --get specific folders or files from them:
#!/bin/bash
TAR=/usr/bin/tar
ARCHIVE=bkup-01Jun
RPATH=home/user
RDEST=restore/
EXCLUDE=.*
mkdir -p $RDEST
# -F runs the quoted snippet whenever tar needs the next volume: it prints the
# name of the next part (bkup-01Jun.tar.2, .tar.3, ...) to tar's volume-name fd.
$TAR vf $ARCHIVE.tar.1 -F 'echo '$ARCHIVE'.tar.${TAR_VOLUME} >&${TAR_FD}' -C $RDEST --get $RPATH --exclude "$EXCLUDE"
Copy to a script file, then just change the parameters:
TAR=location of tar binary
ARCHIVE=Archive base name (without .tar.multivolumenumber)
RPATH=path to restore (leave empty for full restore)
RDEST=restore destination folder (relative or absolute path)
EXCLUDE=files to exclude (with pattern matching)
Interestingly, you really DON'T pass the -M option yourself: on its own it would only prompt you interactively (insert next volume, etc.), whereas the -F volume script implies -M and answers those prompts automatically.
Hello, perhaps this will help.
I had the same problems...
An automatic backup of my web site, made on CentOS at 4 am, created multiple files in multi-volume tar format (saveblabla.tar, saveblabla.tar1.tar, saveblabla.tar2.tar, etc.).
After downloading these files to my PC (Windows) I couldn't extract them with either the Windows cmd or 7zip (unknown error).
I first did a binary copy to reassemble the tar files (as mentioned above in this thread):
copy /b file1+file2+file3 destination
After that, 7zip worked!!! Thanks for your help.