Removing files from the lost+found directory in ClearCase UCM - clearcase-ucm

I want to delete all the files from the lost+found directory in one go from the command prompt.
How can I do that?

You can check the technote "About the lost+found directory", in particular the section "Removing Objects from lost+found":
Before taking any steps to clean out the VOB's lost+found, please make a backup of the VOB as a safeguard.
There are two possible ways to remove an object from the root of the lost+found:
The object can be moved to a new location in the VOB using the cleartool mv command
The object can be permanently deleted from the VOB.
In your case, you would use something like cleartool rmelem -force lost+found/afile:
% pwd
/vobs/myvob/lost+found
% cleartool ls
test.c.f9e4e356252a11d0a41508000993b102@@/main/1 Rule: /main/LATEST
% cleartool rmelem test.c.f9e4e356252a11d0a41508000993b102
As I mentioned in "How to remove a checkout without any view reference in clearcase?", you need to rmelem the files first, then the folder.
So write a script which does a cleartool find myVob/lost+found -type f first, then a find -type d.
You can combine a cleartool find with an -exec directive calling cleartool rmelem:
... -exec "cleartool rmelem -force \"%CLEARCASE_PN%\""
Be very careful with that rmelem command: once an element is removed from the VOB, it cannot be recovered.
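A minimal sketch of that two-pass cleanup, in the Unix syntax of the transcript above (the VOB path comes from that transcript; treat this as an outline, as option spelling can vary by ClearCase version, and back up the VOB first):
cd /vobs/myvob
# Pass 1: remove the file elements cataloged under lost+found.
cleartool find lost+found -type f -exec 'cleartool rmelem -force "$CLEARCASE_PN"'
# Pass 2: remove the directory elements. find visits parents before
# children, so deeply nested directories may need a second run; rmelem
# on lost+found itself should fail harmlessly, as it is a special element.
cleartool find lost+found -type d -exec 'cleartool rmelem -force "$CLEARCASE_PN"'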

Related

Shell Script for moving files to a specific Folder

Perhaps someone can point me in the right direction:
I want to move certain files whose names match a specific pattern (e.g. file%AA) to the mirrored place in my archive.
So, for example, I have the following:
livefolder/folder1/file%AA.csv
and want to move it to
archivefolder/folder1/file%AA.csv
When the archive folder does not exist, it should be created.
How can I achieve this?
At the moment I am stuck with this:
find /livefolder/folder1/ -type d \( -name '*%AA' \) -exec mv {} /archivefolder/folder1/ +
But then I would need to do this for every subfolder that exists. Is there a way to do this recursively for all subfolders?
I hope someone can point me in the right direction.
Thanks in advance!
As mentioned in the comments by Barmer and Renaud, you can use rsync. With the --relative option, it will create the missing subdirectories in the destination path. But you need to have the parent directory present (in your case, archivefolder); you can create it with mkdir -p (-p creates missing parents and does not complain if the directory already exists).
The rsync command will look like this:
rsync -a --relative livefolder/./folder1/file%AA.csv archivefolder
--relative sets the path in the destination relative to the source path you provided,
i.e. it is equivalent to
rsync [options] livefolder/folder1/file%AA.csv archivefolder/folder1/file%AA.csv
Note the . after livefolder inside the source path (rsync -a --relative livefolder/./folder1/file%AA.csv). It marks where the reproduced hierarchy starts: rsync copies the part after the . (here folder1/file%AA.csv) into the destination, so this creates folder1/file%AA.csv inside archivefolder.
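To handle every subfolder in one go, you can combine this with find. A minimal sketch, assuming the pattern *%AA.csv from the question (adjust to your real naming scheme) and that rsync's --remove-source-files is acceptable to turn the copy into a move:
mkdir -p /archivefolder
# The /./ in the starting path carries through into find's output, which
# is exactly where rsync --relative starts rebuilding the hierarchy.
find /livefolder/./ -type f -name '*%AA.csv' \
    -exec rsync -a --relative --remove-source-files {} /archivefolder \;
# Note: this moves the files but leaves the (now empty) source folders behind.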

Find and execute command on all files in directory that have a specific folder name

I want to use the find command to execute a special command on all of the files that sit inside directories whose names contain a specific keyword (the keyword is "Alpha"), and keep the output in the same directory as the input file I ran the command on.
The command requires you to provide the input file and then the name of the newly converted file, like this:
command file_to_run_command_on.txt new_file.txt
This is my code:
find $PWD -name *.txt* -exec command {} new_file. \;
Right now, it finds all the text files in this directory, even in the subdirectories, but outputs just one file in the directory I run the initial find command from. I'm also unsure how to add the additional filter for the keyword in the directory name. All advice appreciated!
-exec runs the command in the directory from which you start find.
-execdir runs the command in the matching file's directory.
To only find *.txt* files that have a specific keyword somewhere in their parent directories' names, you could use:
find "$PWD" -path "*keyword*/*.txt*" -execdir command {} new_file \;
This will run the command for foo/bar/some-keyword-dir/baz/etc/file.txts, but not for foo/bar/baz/file.txts (no keyword in parent directory names) or foo/bar/some-keyword-dir/baz/file.tar (not *.txt*).
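A quick way to see the -exec/-execdir difference, using a hypothetical layout (the directory and file names below are made up for illustration):
mkdir -p demo/Alpha-data && touch demo/Alpha-data/report.txt
# With -exec, {} expands to the path as seen from where find started,
# and the command runs in the current directory:
find demo -path "*Alpha*/*.txt*" -exec echo {} \;
# prints: demo/Alpha-data/report.txt
# With -execdir, find first changes into the matched file's directory,
# so new_file would be created right next to report.txt:
find demo -path "*Alpha*/*.txt*" -execdir echo {} \;
# prints: ./report.txt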

Moving files older than 1 hour from one directory to another - AIX

I am trying, on AIX, to move files older than one hour (new files arrive almost every minute) to another folder whose name specifies the particular hour.
The command I was trying to run is:
find /log/traces/ -type f -mmin +59 -exec mv '{}' /Directory \;
The above script gives me an error:
find: bad starting directory
I am a newbie to shell scripting.
Any help will be highly appreciated.
------------------Edited-----------------------
I have been able to move the files older than 1 hour, but if the specified folder does not exist, it creates a file with the name specified in the command and dumps all the files into it. The command I am running now is:
find /log/traces -type f -mmin +59 -exec mv '{}' /Directory/ABC-$(date +%Y%m%d_%H) \;
It creates a file named ABC-[Current hour]. I want to create a directory and move all the files into it.
If you are not running as the root user, you may be getting this problem because of read permissions on /log/traces/.
To see the permission level of this directory, run ls -ld /log/traces/; the left-most column will display something like drwxr-xr-x, which describes that directory's permission settings.
You need to ensure the user you are executing your command as has read access to /log/traces/ - that should fix your error.
Does the directory /Directory/ABC-<timestamp> exist before the script runs? I am guessing the directory is not there to receive the files. Make sure the directory exists before you start moving.
You can create a shell script for moving the files that takes the target directory as a parameter along with the file name. If the target directory does not exist, the script creates it; after that, the script executes the mv command to move the file into the target directory. A sketch follows below.
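A minimal sketch of such a script; the paths and the -mmin test come from the question, the rest is an assumption (note that -mmin is a GNU find extension and may be missing from the stock AIX find):
#!/bin/sh
# Build the hourly target name, e.g. /Directory/ABC-20240101_13.
target="/Directory/ABC-$(date +%Y%m%d_%H)"
# Create the directory if missing; -p keeps later runs from failing.
mkdir -p "$target"
# Move every file older than an hour into it.
find /log/traces -type f -mmin +59 -exec mv {} "$target"/ \;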
Don't bother moving the files, just create them in the right folder directly. How?
Let cron run a job each hour that creates a dir /Directory/ABC-$(date +%Y%m%d_%H).
And now make a symbolic link between /log/traces and the new directory.
When the link /log/traces already exists, you must replace the link (and not make a link in a subdir).
newdir="/Directory/ABC-$(date +%Y%m%d_%H)"
mkdir -p "${newdir}"
ln -snf "${newdir}" /log/traces
The first time, you will need to mv /log/traces /log/traces_old and make the first link during a brief moment when no new files are being created.
Please test with /log/testtraces first, checking that the crontab user has the correct rights.
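For completeness, the hourly crontab entry could look like this (the script path is hypothetical; its body would be the mkdir and ln lines above):
# Run at minute 0 of every hour: create the new hour's directory and
# repoint the /log/traces symlink.
0 * * * * /usr/local/bin/rotate_traces.sh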

How to clean up directory structure from botched rsync operation?

In a bash script, I am rsync'ing many directories. However, in my script I forgot to put a trailing slash at the end of the source directory. As a result, I have something like
rsync /first/path/dir /second/path/dir
in my script, which I know is wrong, and ends up creating
/second/path/dir/dir
in the destination directory, which is not what I want at all.
Is there a quick way I can use the find command to find all instances of "dir/dir" and perform an 'rm -rf' without losing the other original contents of /second/path/dir?
You can use the -path test of the Unix find command to locate (and remove) 'dir/dir':
find . -path '*/dir/dir' -prune -exec rm -rf {} \;
The -prune stops find from descending into a directory it has just deleted, which would otherwise produce spurious "No such file or directory" warnings.
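Since rm -rf is unforgiving, it may be worth previewing the matches before deleting; a cautious sketch using the paths from the question:
# Dry run: list every duplicated dir/dir first.
find /second/path -path '*/dir/dir' -prune -print
# Then delete once the list looks right.
find /second/path -path '*/dir/dir' -prune -exec rm -rf {} \;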

How to remove all files with a specific extension from version control?

I am using Mercurial under Linux. I would like to exclude all files matching the pattern *.pro.user* from the version control system.
I tried to list all the files with:
find . -name "*.pro.user*"
This also turned up some results which are in the .hg folder:
...
./.hg/store/data/_test_run_multiple/_test_run_multiple.pro.user.i
./.hg/store/data/_test_non_dominated_sorting/_test_sorting.pro.user.i
./Analyzer/AlgorithmAnalyzer.pro.user
./Analyzer/AlgorithmAnalyzer.pro.user.a6874dd
...
I then tried to pipe this result to the hg forget command like:
find . -name "*.pro.user*" | hg forget
but I get:
abort: no files specified
My guess is that the list needs to be processed in some way in order to be passed to hg forget.
I would like to ask:
How can I pass the result of my find query into the hg forget command?
Since the query result contains files in the "private" folder .hg, is that a good idea? I hope that Mercurial will ignore that request, but should I filter those results out somehow?
Try the following:
hg forget "set:**.pro.user*"
This tells Mercurial to forget any files that match the fileset **.pro.user*. Because the fileset is evaluated by Mercurial itself, it won't descend into the .hg directory. You can do even more with filesets by looking at: hg -v help filesets
The ** at the start means to work in subdirectories, rather than just the current directory.
First of all, you can use find * -name "*.pro.user*" to avoid looking in .hg.
Mercurial's forget command takes its arguments on the command line, not on standard input, so you need xargs (if any paths contain whitespace, prefer find ... -print0 | xargs -0 hg forget, where your find supports it):
find * -name "*.pro.user*" | xargs hg forget
Alternatively you can ask find to do the job:
find * -name "*.pro.user*" -exec hg forget {} \;
Finally, you should add *.pro.user* to your .hgignore file.
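A minimal .hgignore sketch in glob syntax, using the pattern from the answer:
# .hgignore at the repository root
syntax: glob
*.pro.user*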
