I have two views, say A and B:
c:\a_view
A - View
-------Folder_1-->test.bat
-------Folder_2
-------Folder_3
d:\b_view
B - View
-------Folder_1
-------Folder_2
When I do findmerge (with merge option):
c:\>cleartool findmerge "d:\b_view" -ftag a_view -type d -merge
I get the following output:
d:\b_view
B - View
-------Folder_1-->test.bat(Merged file)
-------Folder_2
-------Folder_3 (Merged directory)
which is OK as per findmerge behaviour,
but I want to merge only the existing folders, i.e. only Folder_1 should be merged and the new element Folder_3 should be suppressed (not merged).
How can we do this?
cd D:\bView\Folder1
cleartool findmerge . -ftag a_view -type d -merge
This is fine, but suppose I have "n" folders: I would have to execute this command "n" times, which would take a lot of time.
My requirement is to run it only on the parent folder, i.e.
c:\>cleartool findmerge "d:\b_view" -ftag a_view -type d -merge
and it should exclude the new elements/folders.
Is there any parameter, regular expression, or something else that would exclude the new elements?
You could simply go to Folder1 (in the destination view, that is, the view where the merge takes place) and run the findmerge from there.
cd D:\bView\Folder1
cleartool findmerge . -ftag a_view -type d -merge
That would limit the merge to that specific folder (you will find a similar example in the cleartool findmerge man page, since you can merge directory versions).
Is there any parameter, regular expression, or something else that would exclude the new elements?
The only thing you would have is the NEVER_MERGE type presented in this IBM article.
But that is not practical to apply to all the folders (and their content).
You can also try a merge -ndata on the elements you don't want to merge (by making ClearCase think they are already merged).
Again, not practical.
It is best to list the high-level folders you want to merge and merge them individually.
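If the folders to merge are simply the subfolders that already exist in the destination view, one way to avoid typing the command "n" times is a small loop. Here is a minimal sketch, assuming a POSIX shell is available (e.g. a Unix view or Cygwin) and with /path/to/b_view as a placeholder for the root of the destination view:
# loop over the existing top-level folders in the destination view and
# run findmerge on each one, so new elements at the root are never visited
cd /path/to/b_view
for dir in */ ; do
    cleartool findmerge "$dir" -ftag a_view -type d -merge
done
Each findmerge call is then scoped to a folder that already exists in the destination view, which is the per-folder approach described above, just automated.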
I have a script which makes files in pairs (all the resulting files have either R1 or R2 in the file name) and then puts each pair in a separate directory with a random name (different every time, and I cannot predict it). For example, if I have 6 files, 3 would end in _R1.txt and 3 in _R2.txt, paired like this:
s1_R1.txt and s1_R2.txt
s2_R1.txt and s2_R2.txt
s3_R1.txt and s3_R2.txt
In this example I will have 3 directories (one per pair of files). I want to make 2 text files (d1.txt and d2.txt) containing the above file names: d1.txt will have all the files with R1 and d2.txt will contain all the files with R2. To do so, I made the following short bash code, but it does not return what I want. Do you know how to fix it?
For file in /./*R*.txt;
do;
touch "s1.txt"
touch "s2.txt"
echo "${fastq}" >> "s1.txt"
done
Weird question, not sure I get it, but for your d1 and d2 files:
#!/bin/bash
find . -type f -name "*R1*" -print >d1.txt
find . -type f -name "*R2*" -print >d2.txt
find is used since you have files under different sub-directories.
Using > ensures the text files are emptied out if they already exist.
Note that the code you put in your question is not valid bash.
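For completeness, here is a minimal corrected sketch of the loop approach from the question, assuming the paired files sit one level down inside the per-pair directories (the glob pattern is an assumption based on the layout described above):
#!/bin/bash
# empty the output files, then append each matching path
: > d1.txt
: > d2.txt
for fastq in ./*/*R1*.txt; do
    echo "$fastq" >> d1.txt
done
for fastq in ./*/*R2*.txt; do
    echo "$fastq" >> d2.txt
done
The find version above is still simpler, since it does not care how deeply the pair directories are nested.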
I have a rather interesting problem that I am trying to find the optimal solution for. I am creating a file autocompletion backend for Emacs. This means that I am using the Linux find command to get files and directories.
The backend is given a file with a partially completed path (e.g. /usr/folder/foo), and I want to grab all files and directories that could match the partial path up to two directories down (for example, it could provide foo_bar/, foo_bar/bar, foo_bar/baz, foo_bar/bat/ and foo_baz). So far I have only been able to break this down into 3 steps:
find all files in the current directory that may match the prefix
find foo* -type f -maxdepth 1
collect all possible directories we may want to look through
find foo* -type d -maxdepth 1
use each of those directories to make 2 more calls to find (I need to be able to differentiate between files and directories)
find foo_bar/ -type d -maxdepth 1
find foo_bar/ -type f -maxdepth 1
This solution involves a lot of calls to find (especially because the last step has to be called for every matching directory). This makes getting candidates slow, especially in large file systems. Ideally I would like to only make one call to get all the candidates. But I have not found a good way to do that. Does anyone know an optimal solution?
Looking through the find man page, I ended up using -printf.
find -L foo* -maxdepth 1 -printf '%p\t%y\n'
gives me everything I needed: only one command, it differentiates between files and directories, respects the search depth, etc.
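For illustration, with the example names from the question, the output might look like this (hypothetical listing; the two tab-separated columns are the path %p and the type %y, with f for files and d for directories):
$ find -L foo* -maxdepth 1 -printf '%p\t%y\n'
foo_bar	d
foo_bar/bar	f
foo_bar/baz	f
foo_bar/bat	d
foo_baz	f
The backend can then split each candidate on the tab and append a trailing / to the d entries.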
I've been reading up on find's -prune action. One common task I do is to process only the files of a directory, ignoring all directories.
Prune, from what I've learned, is great for ignoring directories if you know their names (or wildcards matching their names). But what if you don't know their names, or only have a pattern that matches files as well as directories?
I found that -maxdepth achieves what I'm trying to do. I'm just wondering what the equivalent -prune approach might be.
For example, say I want to process all the files of my home directory, but not recurse into any subdirectory. Let's say my directory structure and files look like this (directories ending in '/'):
~/tmpData.dat
~/.bashrc
~/.vimrc
~/.Xdefaults
~/tmp/
~/tmp/.bashrc
~/bkups/.bashrc
~/bkups/.vimrc
~/bkups/.Xdefaults
~/bkups/tmpData.dat
.. what would be the correct find/prune command?
OK, I found my own solution. I simply specify pattern(s) that match everything in my home directory ('~/*' for example). But in order to include all my dot files (.bashrc, etc.), I have to use two patterns; one for non-dotted filenames and one for the files starting with dots:
find ~/* ~/.* -type d -prune -o -type f -print
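For comparison, here is a minimal side-by-side sketch (assuming GNU find and the example layout above); both commands should print only the regular files directly under the home directory:
# -maxdepth form mentioned in the question (GNU find)
find ~ -maxdepth 1 -type f
# -prune form from the answer; the ~/.* pattern also expands to ~/. and ~/..,
# but -type d -prune keeps find from descending into them
find ~/* ~/.* -type d -prune -o -type f -print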
I need a shell script that writes the tree structure (including all data in the folders) from a specific folder to a text/dat file.
So far I got this:
find . -type f|sed 's_\.\/__' > PATH/Input.dat
I don't want a "/" as the first char of the path.
This script works fine, but it returns ALL folder structures. I need something that returns only structures from a specific folder, like "Sales".
I need something that returns only structures from a specific folder, like "Sales".
Specify the desired folder name. Say:
find Sales -type f | sed 's_\.\/__'
^^^^^
Saying find . ... would search in . (i.e. the current directory and subdirectories).
If you need to search more folders, say Purchase, specify those too:
find Sales Purchase -type f | sed 's_\.\/__'
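As a side note (assuming the folder layout above): when find is started from named folders rather than ., its output has no leading ./ to strip, so the sed step becomes a harmless no-op and could be dropped:
find Sales Purchase -type f > PATH/Input.dat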
I understand that the wildcard * (by itself) will expand in such a way that it means "all non-hidden files in the current folder" with hidden files being those prefixed by a period.
There are two use cases that I would think are useful, but I don't know how to properly do:
How can you glob for... "All files in the current folder, including hidden files, but not including . or .."?
How can you glob for... "All hidden files (and only hidden files) in the current folder, but not including . or .."?
To expand on pavium's answer and answer the first part of your question, all files except . and .. could be specified like this:
{.[!.]*,*}
Depending on your exact use case it might be better to set the dotglob shell option, so that bash includes dotfiles in expansions of * by default:
$ shopt -s dotglob
$ echo *
.tst
The Bash Cookbook suggests a solution to your 2nd requirement.
.[!.]*
as a way of specifying 'dot files' but avoiding . and ..
Of course, ls has the -A option, but that's not globbing.
Combining sth's and pavium's answers:
# dot files but avoiding . and ..
.[!.]*
# all files but avoiding . and ..
{.[!.]*,*}
To meet your first case:
echo {.,}[^.]*
or
echo {.,}[!.]*
Edit:
This one seems to get everything, but is shorter than ephemient's
echo {.*,}[^.]*
By "all files" and "all hidden files" do you mean files-only, or do you mean both files and directories? Globbing operates on names irrespective of it belonging to a file or a directory. The other folks give good answers for using globbing to find hidden vs non-hidden names, but you may want to turn to the find command as an easier alternative that can distinguish between the types.
To find "All files in the current folder, including hidden files, but not including . or ..":
find . -type f
To find "All files and directories in the current folder, including hidden files, but not including . or ..":
find . ! -name .
To find "All hidden files (and only hidden files) in the current folder, but not including . or ..":
find . -name '.*' -type f
To find "All hidden files and directories (and only hidden files and directories) in the current folder, but not including . or ..":
find . -name '.*' ! -name .
Note that by default find will recurse through subdirectories, too, so if you want to limit it to only the current directory you can use:
find . -maxdepth 1 -type f
So, even though this is old: without using shopt, this doesn't seem to have been answered fully. Expanding on what has been given as answers so far, these work for me:
1:
{*,.[!.]*,..?*}
2:
{.[!.]*,..?*}
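A quick way to check these patterns is to expand them with echo in a throwaway directory (the file names below are made up for the demonstration):
$ touch .a ..b c
$ echo .[!.]*          # misses ..b, because its second character is a dot
.a
$ echo {.[!.]*,..?*}   # case 2: all hidden names except . and ..
.a ..b
$ echo {*,.[!.]*,..?*} # case 1: everything except . and ..
c .a ..b
The extra ..?* pattern is what picks up names starting with two dots, which .[!.]* alone cannot match.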