rm -r .git
rm -r .git --force
I get the following, and there seems to be a never-ending supply of these prompts after I enter 'yes' and move on to the next one.
override r--r--r-- redacted/staff for .git/objects/95/90087aa4b351e278e6e53ff6240045ab2db6d1?
Analysis and explanation:
The message override r--r--r-- ...? is printed by some versions of the rm command when you try to delete files whose write permission has been removed.
To reproduce:
▶ mkdir -p foo/{bar,baz} ; touch foo/bar/qux
▶ chmod -R -w foo
▶ find foo -ls
4305147410 0 dr-xr-xr-x 4 alexharvey wheel 128 24 Mar 18:19 foo
4305147412 0 dr-xr-xr-x 2 alexharvey wheel 64 24 Mar 18:19 foo/baz
4305147411 0 dr-xr-xr-x 3 alexharvey wheel 96 24 Mar 18:19 foo/bar
4305147413 0 -r--r--r-- 1 alexharvey wheel 0 24 Mar 18:19 foo/bar/qux
Now if you try to delete these files you'll be asked if you really want to override this file mode:
▶ rm -r foo
override r-xr-xr-x alexharvey/wheel for foo/baz?
Note also that if you are on Mac OS X or another BSD variant, as appears to be the case, then you have specified the --force argument incorrectly: placed at the end of the command line it is interpreted as the name of an additional file to delete, not as an option.
But even with that corrected, -f still cannot remove these files, because the directories themselves are read-only. Instead, you would see this:
▶ rm -rf foo
rm: foo/baz: Permission denied
rm: foo/bar/qux: Permission denied
rm: foo/bar: Permission denied
rm: foo: Directory not empty
The fix:
To fix this, first restore write permission within the folder:
▶ chmod -R +w foo
Then rm -r should work fine:
▶ rm -r foo
▶ ls foo
ls: foo: No such file or directory
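Applied to the original question, the same two steps should then let you remove the repository metadata (a sketch, run from the directory that contains .git):
▶ chmod -R +w .git
▶ rm -r .git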
See also:
this related question at Unix & Linux Stack Exchange.
source code for BSD rm here.
If you want to delete directories in a git repository and the permissions get in the way, you can also run rm through sudo:
$ sudo rm -r file-name
rm -rf .folder
does the trick without spending extra time answering prompts or changing permissions first
So I have folder aa
$ mkdir aa
and pathname expansion works like this with the ls command:
$ ls -la a*
total 0
drwxr-xr-x 1 a a 0 Mar 29 08:41 ./
drwxr-xr-x 1 a a 0 Dec 31 1979 ../
$ ls -la a?
total 0
drwxr-xr-x 1 a a 0 Mar 29 08:41 ./
drwxr-xr-x 1 a a 0 Dec 31 1979 ../
But "the same" for mkdir shows an error:
$ mkdir a*/bb
mkdir: cannot create directory 'a*/bb': No such file or directory
$ mkdir a?/bb
mkdir: cannot create directory 'a?/bb': No such file or directory
Where can I read about why this difference in behavior happens, and is there a simple trick to make mkdir "smarter" so it behaves like ls?
This does not work, since wildcard expansion is done by the shell before the argument is passed to mkdir. bash tries to expand a*/bb, finds no match (bb does not exist yet), and so passes the pattern through unexpanded; mkdir is then asked to create bb inside a directory literally named a*, which does not exist. You can also try e.g.
echo a*/bb
or as you did before
ls -la a*/bb
Both commands show you the same thing: the pattern reaches the command unexpanded, so echo prints it literally and ls reports that a*/bb does not exist.
Now I realize how stupid that question was. Probably I wanted something like this for expansion to work:
mkdir "$(ls -d a?)"/bb
Try:
mkdir -p a*/aa
mkdir -p a?/aa
I'm trying to create a simple RPM package on CentOS 6.5, but I cannot finish it because it keeps giving me errors. I have already followed these two threads: Bad exit status from /var/tmp/rpm-tmp.b1DgAt (%build) and Bad exit status from /var/tmp/rpm-tmp.ajKra4 (%prep), yet no luck.
I cannot figure out what I'm missing here; please help me fix this.
This is my spec file:
Name: test
Version: 1.0
Release: 1%{?dist}
Summary: A test package
Group: Testing
License: GPL
URL: http://www.yahoo.com
Source0: test-1.0.tar.gz
BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)
BuildRequires: /bin/rm, /bin/mkdir, /bin/cp
Requires: /bin/bash, /bin/date
%description
this is the test package build for rhche
%prep
%setup -q
%build
./configure
%install
rm -rf $RPM_BUILD_ROOT
make -p $RPM_BUILD_ROOT/usr/local/bin
cp myscriptdate $RPM_BUILD_ROOT/usr/local/bin
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root,-)
%attr(0755,root,root)/usr/local/bin/myscriptdate
%changelog
* Thu Dec 09 2010 Forrest <forrest@redhat.com> 1.0-1
- Initial RPM
- Added /usr/local/bin/myscript
Source directory is /test1
[ara@catshit test1]$ pwd
/test1
[ara@catshit test1]$ ls -ls
total 12
4 drwxrwxrwx. 2 ara ara 4096 Dec 7 00:02 test-1.0
4 -rw-rw-r--. 1 ara ara 210 Dec 7 00:09 test-1.0.tar.gz
4 -rwxrwxrwx. 1 ara ara 742 Dec 7 00:17 test.spec
[ara@catshit test1]$
test-1.0 is compressed as test-1.0.tar.gz.
Inside test-1.0 I have a script called myscriptdate, which contains the following simple code:
#!/bin/bash
date
When I try rpmbuild -ba test.spec, it gives me:
# Not a target:
.f:
# Implicit rule search has not been done.
# Modification time never checked.
# File has not been updated.
# commands to execute (built-in):
$(LINK.f) $^ $(LOADLIBES) $(LDLIBS) -o $@
# Not a target:
.f.o:
# Implicit rule search has not been done.
# Modification time never checked.
# File has not been updated.
# commands to execute (built-in):
$(COMPILE.f) $(OUTPUT_OPTION) $<
# files hash-table stats:
# Load=70/1024=7%, Rehash=0, Collisions=278/1660=17%
# VPATH Search Paths
# No `vpath' search paths.
# No general (`VPATH' variable) search path.
# # of strings in strcache: 0
# # of strcache buffers: 0
# strcache size: total = 0 / max = 0 / min = 4096 / avg = 0
# strcache free: total = 0 / max = 0 / min = 4096 / avg = 0
# Finished Make data base on Sun Dec 7 00:51:01 2014
error: Bad exit status from /var/tmp/rpm-tmp.ZFlmeu (%install)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.ZFlmeu (%install)
The content of /var/tmp/rpm-tmp.ZFlmeu is below:
#!/bin/sh
RPM_SOURCE_DIR="/home/ara/rpmbuild/SOURCES"
RPM_BUILD_DIR="/home/ara/rpmbuild/BUILD"
RPM_OPT_FLAGS="-O2 -g"
RPM_ARCH="x86_64"
RPM_OS="linux"
export RPM_SOURCE_DIR RPM_BUILD_DIR RPM_OPT_FLAGS RPM_ARCH RPM_OS
RPM_DOC_DIR="/usr/share/doc"
export RPM_DOC_DIR
RPM_PACKAGE_NAME="test"
RPM_PACKAGE_VERSION="1.0"
RPM_PACKAGE_RELEASE="1.el6"
export RPM_PACKAGE_NAME RPM_PACKAGE_VERSION RPM_PACKAGE_RELEASE
LANG=C
export LANG
unset CDPATH DISPLAY ||:
RPM_BUILD_ROOT="/home/ara/rpmbuild/BUILDROOT/test-1.0-1.el6.x86_64"
export RPM_BUILD_ROOT
PKG_CONFIG_PATH="/usr/lib64/pkgconfig:/usr/share/pkgconfig"
export PKG_CONFIG_PATH
set -x
umask 022
cd "/home/ara/rpmbuild/BUILD"
cd 'test-1.0'
rm -rf $RPM_BUILD_ROOT
make -p $RPM_BUILD_ROOT/usr/local/bin
cp myscriptdate $RPM_BUILD_ROOT/usr/local/bin
/usr/lib/rpm/brp-compress
/usr/lib/rpm/brp-strip
/usr/lib/rpm/brp-strip-static-archive
/usr/lib/rpm/brp-strip-comment-note
The make -p $RPM_BUILD_ROOT/usr/local/bin line is your problem.
While the -p is not what makes the build fail, you almost certainly don't want it on that line: it does nothing useful for you during the build, and dumping make's rule database is where all of the output above comes from.
The real problem is that you are telling make that you would like it to build the $RPM_BUILD_ROOT/usr/local/bin target which it is incredibly unlikely that make actually knows how to build (thus causing make to fail to build it and giving you an error). Removing the -p will help you see the actual error that make is spitting out as it will not also spit out the rule database stuff.
I think you meant mkdir -p there instead. (Which should be available as the %{__mkdir_p} macro.)
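Put together, the %install section from the spec above would then read something like this (a sketch; paths unchanged from the original spec):
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/bin
cp myscriptdate $RPM_BUILD_ROOT/usr/local/bin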
I am using [/usr/bin/]install in a Makefile to copy some binaries into my $HOME directory. My umask is set to 077.
The problem is that I am using install -D -m 700 to install the binaries, and the parent directories are created with permissions 755 and not 700:
$ umask
077
$ ls
$ touch hello
$ ls -l
total 0
-rw------- 1 emuso emuso 0 Apr 5 13:15 hello
$ install -D -m 700 hello $PWD/this/is/hello
$ ls -ld this
drwxr-xr-x 3 emuso emuso 4096 Apr 5 13:17 this
$ ls -lR this
this:
total 4
drwxr-xr-x 2 emuso emuso 4096 Apr 5 13:17 is
this/is:
total 0
-rwx------ 1 emuso emuso 0 Apr 5 13:17 hello
I want that the directories this and is get permissions 700 instead of 755.
Solutions that come to my mind are:
using install -d -m 700 to create the directory structure by hand.
using chmod to fix permissions manually.
The major drawback of the first solution is that I have a directory structure which I would have to traverse and create by hand.
So my question is: Is there an elegant way to control permissions for directories created by "install -D"?
What you want to achieve does not seem possible with a single invocation of install alone, so you might have to resort to a combination of mkdir and install. Depending on your exact situation, you might be able to take advantage of a canned recipe, using something like this:
define einstall
test -d "$(dir $#)" || mkdir -p "$(dir $#)"
install -m 700 $< $#
endef
some/new/test/hello: hello
$(einstall)
If you plan to play around with canned recipes with make v3.81 or older, please make sure to read this answer to Why GNU Make canned recipe doesn't work?
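Outside of make, the same result can be had by creating the directories explicitly before installing the file, which is essentially the first workaround from the question (a sketch using the names from the example above; each directory is listed explicitly so that -m applies to it):
$ install -d -m 700 this this/is
$ install -m 700 hello this/is/hello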
I saw the following interesting usage of tar in a co-worker's Bash scripts:
tar cf - * | (cd <dest> ; tar xf - )
Apparently it works much like rsync -av does, but faster. The question arises, how?
EDIT: Can anyone explain why this solution should be preferable over the following?
cp -rfp * dest
Is the former faster?
It writes the archive to standard output, then pipes it to a subprocess -- wrapped by the parentheses -- that changes to a different directory and reads/extracts from standard input. That's what the dash character after the f argument means. It's basically copying all the visible files and subdirectories of the current directory to another directory.
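For what it's worth, the same copy can be written without the subshell if your tar supports the -C (change directory) option, which both GNU tar and bsdtar do (a sketch; <dest> as above):
tar cf - * | tar xf - -C <dest>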
As for the difference between cp and tar when copying directory hierarchies, a simple experiment shows it:
alastair box:~/hack/cptest [1134]% mkdir src
alastair box:~/hack/cptest [1135]% cd src
alastair box:~/hack/cptest/src [1136]% touch foo
alastair box:~/hack/cptest/src [1137]% ln -s foo foo-s
alastair box:~/hack/cptest/src [1138]% ln foo foo-h
alastair box:~/hack/cptest/src [1139]% ls -l
total 0
-rw-r--r-- 2 alastair alastair 0 Nov 25 14:59 foo
-rw-r--r-- 2 alastair alastair 0 Nov 25 14:59 foo-h
lrwxrwxrwx 1 alastair alastair 3 Nov 25 14:59 foo-s -> foo
alastair box:~/hack/cptest/src [1142]% mkdir ../cpdest
alastair box:~/hack/cptest/src [1143]% cp -rfp * ../cpdest
alastair box:~/hack/cptest/src [1144]% mkdir ../tardest
alastair box:~/hack/cptest/src [1145]% tar cf - * | (cd ../tardest ; tar xf - )
alastair box:~/hack/cptest/src [1146]% cd ..
alastair box:~/hack/cptest [1147]% ls -l cpdest
total 0
-rw-r--r-- 1 alastair alastair 0 Nov 25 14:59 foo
-rw-r--r-- 1 alastair alastair 0 Nov 25 14:59 foo-h
lrwxrwxrwx 1 alastair alastair 3 Nov 25 15:00 foo-s -> foo
alastair box:~/hack/cptest [1148]% ls -l tardest
total 0
-rw-r--r-- 2 alastair alastair 0 Nov 25 14:59 foo
-rw-r--r-- 2 alastair alastair 0 Nov 25 14:59 foo-h
lrwxrwxrwx 1 alastair alastair 3 Nov 25 15:00 foo-s -> foo
The difference is in the hard-linked files. Notice how the hard-linked files are copied individually with cp and together with tar. To make the difference more obvious, have a look at the inodes for each:
alastair box:~/hack/cptest [1149]% ls -i cpdest
24690722 foo 24690723 foo-h 24690724 foo-s
alastair box:~/hack/cptest [1150]% ls -i tardest
24690801 foo 24690801 foo-h 24690802 foo-s
There are probably other reasons to prefer tar, but this is one big one, at least if you have extensively hard-linked files.
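As an aside, GNU cp can also be told to preserve hard links; this assumes GNU coreutils is available and was not part of the experiment above:
cp -r --preserve=links src destdir    # hard links inside src are recreated as hard links in the copy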
For a directory with 25,000 empty files:
$ time { tar -cf - * | (cd ../bar; tar -xf - ); }
real 0m4.209s
user 0m0.724s
sys 0m3.380s
$ time { cp * ../baz/; }
real 0m18.727s
user 0m0.644s
sys 0m7.127s
For a directory with 4 files of 1073741824 bytes (1GB) each
$ time { tar -cf - * | (cd ../bar; tar -xf - ); }
real 3m44.007s
user 0m3.390s
sys 0m25.644s
$ time { cp * ../baz/; }
real 3m11.197s
user 0m0.023s
sys 0m9.576s
My guess is this phenomenon is highly filesystem-dependent. If I'm right you will see a drastic difference between a filesystem that specializes in numerous small files, such as reiserfs 3.6, and a filesystem that is better at handling large files.
(I ran the above tests on HFS+.)
This is a nice usage of pipes. Basically, the first tar would normally write directly to a file, but instead it writes to stdout (the -), which is then piped into the other tar, which reads from stdin rather than from a file. In effect this is the same thing as tarring to a file and untarring it later, except without the file in between.
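Spelled out as the two-step version it replaces, that would be something like this (a sketch; /tmp/copy.tar is a made-up scratch file):
tar cf /tmp/copy.tar *
(cd <dest> && tar xf /tmp/copy.tar)
rm /tmp/copy.tar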
The PowerTools book has the copy as:
tar cf - * | (cd <dest> && tar xvBf - )
The '&&' is a conditional that checks the return code of the preceding command. That is, if the cd <dest> failed, the tar xf - would not be executed. I always throw in a -v (verbose) and a -B (reblock input).
I use tar all the time. It is especially useful for copying to a remote system, such as:
tar cvf - . | ssh someone@somemachine '(cd somewhere && tar xBf -)'
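The same trick works in the other direction too, pulling a remote tree down to the local machine (a sketch; host and paths are placeholders):
ssh someone@somemachine '(cd somewhere && tar cf - .)' | tar xvBf -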
tar cf - * | (cd <dest> ; tar xf - )
is going to tar all non-hidden files/directories of the current directory to stdout, then pipe that into a new subshell's stdin. That shell first changes the current working directory to <dest>, and then untars it into that directory.
Some old versions of cp didn't have -f / -p (and similar) options for preserving permissions, so this tar trick did the job.
I believe the tar will do a Windows style 'merge' operation with deeply nested directories, whereas the cp will overwrite sub-directories.
For example if you have the layout:
dir/subdir/file1
and you copy it to a destination that contains:
dir/subdir/file2
Then with copy you will be left with:
dir/subdir/file1
But with the tar command, your destination will contain:
dir/subdir/file1
dir/subdir/file2
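A quick way to see that merge behaviour for yourself (a sketch with made-up directory names):
$ mkdir -p srcdir/dir/subdir dstdir/dir/subdir
$ touch srcdir/dir/subdir/file1 dstdir/dir/subdir/file2
$ (cd srcdir && tar cf - *) | (cd dstdir && tar xf -)
$ ls dstdir/dir/subdir
file1  file2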
tar cf - *
This uses tar to send * to stdout
|
This does the obvious redirect of stdout to...
(cd <dest> ; tar xf - )
This changes PWD to the appropriate location and then extracts from stdin.
I do not know why this would be faster than rsync, as there is no compression involved.
The tar solution will preserve symbolic links, whereas cp will just make copies and destroy the links.
tar has been a standard Unix utility a lot longer than rsync. You're more likely to find it in a situation where a directory hierarchy needs to be copied to another location (even another computer). rsync is probably easier to use these days, but it is slower because it compares both the source and destination and syncs them; tar just copies in one direction.
If you have GNU cp (which all Linux-based systems will), cp --archive will work, even on hard-linked files, and tar is not needed.
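A minimal usage sketch of that (src and dest are placeholders; dest is assumed to exist already):
cp --archive src/. dest/    # copies the contents of src into dest, preserving links, modes and timestamps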
As it happens, a co-worker wrote a nearly identical command into one of our scripts. After I spent some time puzzling over it, I asked why he had used that rather than cp. His answer, as I recall it, was that cp is slow when making a copy from one file system to another.
Whether or not this is true would require more testing than I care to spend on the question, but it makes a certain amount of sense. The first tar process reads from the source device as quickly as possible only waiting for that device to read. Meanwhile, the second tar process reads from its input pipe and writes as quickly as possible. It might have to wait for input, but if writes on the destination device are slower than reads on the source device it will only wait on the destination device. A single cp command will have to wait on both the source and the destination devices.
On the other hand, modern operating systems do a pretty good job of pre-caching IO operations. It's entirely possible cp will spend most of its time waiting on writes and getting reads from memory rather than the device itself. It seems like one would need really solid data to choose two tar commands over the more straightforward cp command.