Accidentally ran the wrong command in mac terminal. What does mv * .. do? - bash

I accidentally ran the command
mv * ..
instead of
mv * .
on my mac. Now I cannot find the files. What did the command do?

.. is the parent directory, i.e. the directory one level higher in the hierarchy. For example, if you are in the directory /a/b/c/, then .. means /a/b/.
mv * . doesn't make any sense - you can't move files onto themselves.

The mv command stands for move. You moved your files to the directory above where they were located. You can always type
man mv
to get a description of a Unix command.
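To see where the files went, and how to bring them back, here is a sketch you can run in a throwaway directory (all the names here are invented for the demonstration):

```shell
# Reproduce the accident in a scratch area, then undo it.
mkdir -p /tmp/mvdemo/work
cd /tmp/mvdemo/work
touch a.txt b.txt
mv * ..                    # the accident: a.txt and b.txt land in /tmp/mvdemo
mv ../a.txt ../b.txt .     # name the files explicitly to move them back down
```

Naming the files explicitly is safer than mv ../* ., which would also drag down anything else that happened to live in the parent directory.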

Related

purpose of chdir with a single dot (cd .)

This might appear to be a noob question.
While working in bash, if we run cd ., it stays in the current folder.
I understand the functionality; however, I can't see the rationale behind it.
What would be some practical ways to use this?
The primary use case I've seen for cd . is to test whether your file handle on the current directory is still valid.
If you're on a directory from a network share -- NFS, or the like -- it can be possible for directories to be remotely deleted, but for the local client to still believe they're accessible and in use.
cd . is a way to trigger an error if your handle on the current working directory is no longer valid.
This is the only "practical" case that came to my mind: when your process has a current working directory referencing a directory that has been removed by another process, you get
$ cd .
cd: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
On its own, that command has no effect. But in a POSIX-compliant environment, if you add the -P option, then it does have one: it resolves symlinks. So for example on a Mac, if you cd to a path with a symlink:
cd /System/Library/Frameworks/AppKit.framework/Versions/Current
...and then run cd -P ., you will end up in:
/System/Library/Frameworks/AppKit.framework/Versions/C
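The same behaviour can be seen without the macOS-specific path; this sketch uses a throwaway symlink (all names invented):

```shell
# cd through a symlink, then use cd -P . to resolve to the physical path.
mkdir -p /tmp/pdemo/real
ln -sfn /tmp/pdemo/real /tmp/pdemo/link
cd /tmp/pdemo/link
pwd        # shows the logical path, ending in /link
cd -P .
pwd        # now shows the physical path, ending in /real
```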
. is a special file that represents the current directory.
There are plenty of things which make use of directories and it is sometimes useful to refer to the current directory.
Changing the directory to the current directory is not one of those.
A simple example where cd . fails:
mkdir my_error
cd my_error
rm -rf ../my_error
cd .
When the rm is buried in a complicated script, or can be performed by some other cleanup process, this can be a useful check.
I use a build script which removes and recreates a directory.
The old directory disappears and new appears with new inode.
If, in one of my shells, $PWD is that recreated directory and I notice
it has become unusable (and I know it was recreated), I just run
$ cd .
to make the (new) directory usable again and can continue my work there.
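Assuming a Linux-style system, where a process can keep sitting in a deleted directory, the recreate-and-refresh scenario can be sketched like this (paths invented for the demonstration):

```shell
mkdir -p /tmp/cddemo
cd /tmp/cddemo
rm -rf /tmp/cddemo && mkdir /tmp/cddemo    # simulate the build script: same path, new inode
touch probe 2>/dev/null || echo "stale"    # fails: we still sit in the deleted directory
cd .                                       # re-resolve $PWD, landing in the new directory
touch probe                                # now succeeds
```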

Move files up one directory in Linux / OSX

I'm currently going through this tutorial: https://learnpythonthehardway.org/book/appendix-a-cli/ex11.html Even after going over it, I couldn't figure out how to move a file up one directory.
I'm currently in testdirectory1.
I created a subdirectory subd3.
I successfully moved a file called file1copy.py into subd3 via mv file1copy.py subd3/
Now, how do I move that file back to testdirectory1?
mv subd3/file1copy.py .
. references the current directory. If $PWD is the full path of testdirectory1 (POSIX shells set that variable for you automatically), you could also do:
mv "$PWD/subd3/file1copy.py" "$PWD"
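The whole round trip can be reproduced in a scratch copy of the layout (everything under /tmp here is a stand-in for your real directories):

```shell
mkdir -p /tmp/testdirectory1/subd3    # scratch reproduction of the layout
cd /tmp/testdirectory1
touch file1copy.py
mv file1copy.py subd3/      # down into the subdirectory
mv subd3/file1copy.py .     # back up into the current directory
```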

What exactly does rm -r * do?

Can someone tell me what exactly rm -r * does?
Does it delete all files and subdirectories from the current directory,
or does it delete all the files on the machine?
It deletes everything in the current directory, so the effect depends on where you are at the time.
Do
pwd
to find out where you are - for example:
mark2#laptop-ubuntu:~$ pwd
/home/mark2
If I do it here, it'll delete everything under /home/mark2.
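You can convince yourself that the deletion is scoped to the current directory with a scratch setup (names invented):

```shell
mkdir -p /tmp/rmdemo/here /tmp/rmdemo/elsewhere
touch /tmp/rmdemo/here/junk.txt /tmp/rmdemo/elsewhere/safe.txt
cd /tmp/rmdemo/here
rm -r *                      # removes junk.txt, but touches nothing outside here/
ls /tmp/rmdemo/elsewhere     # safe.txt is untouched
```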

bash script renaming all files in current directory as well as all the directories that are above it in hierarchy?

I have written the following script to rename all the files within all the folders of the current directory. But I ran into a problem: it renames not only the files within each folder but also all the folders in the current directory and the parent directories in the hierarchy. I realised that this is because of the . and .. entries present in the current directory. But how do I get rid of them?
for i in *
do
cd $i
for j in *
do
mv $j $i$j
done
cd ..
done
The problem is, I have many folders, and they contain images named image0001 through image0100. I want to copy all of the images into one folder, but because the names repeat, they overwrite each other. That is why I want to rename the images.
I think what's going wrong is probably that cd $i is sometimes failing (perhaps because $i is not a directory? or perhaps because it contains spaces, so triggers word splitting?), so then you stay in the same directory, and then cd .. moves you up a directory.
To fix this (and other potential issues that could crop up), I recommend making your script a bit more cautious:
for dir in */ ; do
    pushd "$dir" || continue
    for file in image???? ; do
        mv "./$file" "${dir%/}$file"
    done
    popd
done
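An equivalent variant runs the inner loop in a subshell instead of pushd/popd, so the outer shell never changes directory at all, even if cd fails. Here it is with a scratch setup for a dry run (the /tmp paths are invented; the image???? pattern follows the question):

```shell
mkdir -p /tmp/renametest/a /tmp/renametest/b
touch /tmp/renametest/a/image0001 /tmp/renametest/b/image0001
cd /tmp/renametest
for dir in */ ; do
    ( cd "$dir" || exit
      for file in image???? ; do
          mv "./$file" "${dir%/}$file"
      done )
done
ls a b        # a now holds aimage0001, b holds bimage0001
```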

Advice on using crontab with bash

I want to make a cron job that checks if a folder exists, and if it does, deletes all the contents of that folder. For example, I know that the following cron job will delete the contents of my folder:
0 * * * * cd home/docs/reports/;rm -r *
However, I realized that if the folder is removed (or the wrong file path is given), then instead of the contents of that folder being deleted, cd fails and rm -r * runs in the wrong directory, deleting files it shouldn't. To prevent this from happening (again), I want to check for the existence of the folder first, and then delete its contents. I want to do something like the following, but I'm not sure how to use a bash script with cron.
if [ -d "home/docs/reports/" ]; then
cd home/docs/reports/;rm -r *
fi
I'm new to bash and cron (in case it is not obvious).
I think cron uses /bin/sh to execute commands. sh is typically a subset of bash, and you're not doing anything bash-specific.
Execute the rm command only if the cd command succeeds:
0 * * * * cd home/docs/reports/ && rm -r *
Yes, it works. (Note that testing whether the directory exists is less reliable; it's possible that the directory exists but you can't cd into it, or it might cease to exist between the test and the cd command.)
But actually you don't need to use a compound command like that:
0 * * * * rm -r home/docs/reports/*
Still, the && trick, and the corresponding || operator (which executes a second command only if the first one fails), can be very useful for more complicated operations.
(Did you mean /home/docs rather than home/docs? The latter will be interpreted relative to your home directory.)
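A quick way to convince yourself of the && and || behaviour, in a scratch directory (names invented):

```shell
mkdir -p /tmp/guard/reports
touch /tmp/guard/reports/old.log
cd /tmp/guard/reports && rm -r *                                  # cd succeeds, so rm runs here
cd /tmp/guard/missing 2>/dev/null && rm -r * || echo "skipped"    # cd fails, so rm never runs
```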
Though this worked ok when I tried it, use it at your own risk. Any time you combine rm -r with wildcards, there's a risk. If possible, test in a directory you're sure you don't care about. And you might consider using rm -rf if you want to be as sure as possible that everything is deleted. Finally, keep in mind that the * wildcard doesn't match files or directories whose names start with ..
#include <stddisclaimer.h>
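The last point, that * skips dot files, is easy to demonstrate (scratch names again):

```shell
mkdir -p /tmp/dotdemo && cd /tmp/dotdemo
touch visible.txt .hidden
rm -r *
ls -A        # .hidden survives; visible.txt is gone
```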
EDIT :
The comments have given me a better understanding of what you're trying to do. These are files that users are going to download shortly after they're created (right?), so you don't want to delete anything less than, say, 5 minutes old.
Assuming you have GNU findutils, you can do something like this:
0 * * * * find /home/docs/reports/* -cmin +5 -delete 2>/dev/null
Using the -delete option to find means you're deleting files and/or directories one at a time, not deleting entire subtrees; the main difference is that an old directory with a new file in it will not be deleted. Applying -delete to a non-empty directory will fail with an error message.
Read the GNU find documentation (info find) for more information on the -cmin and -delete options. Note that the "c" time is the time of the last status change of the file, not its creation time (Unix doesn't record file creation times). For your situation, it's likely to be the same.
(If you omit the /* on the path, it will delete the reports directory itself.)
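A sketch of how -cmin selects files, using a file that was just created (GNU find assumed; the names are invented):

```shell
mkdir -p /tmp/finddemo
touch /tmp/finddemo/fresh.log
find /tmp/finddemo -type f -cmin +5           # prints nothing: status changed seconds ago
find /tmp/finddemo -type f -cmin -5           # prints the path of fresh.log
find /tmp/finddemo -type f -cmin +5 -delete   # deletes nothing yet; in 6 minutes it would
```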
Wrap your entire command (including the if logic) into a script.sh.
Specify #!/bin/bash at the top of your script.
Make it executable:
chmod +x script.sh
Then specify the full path of the script in your cron job.
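Put together, the wrapper might look like this. The path /tmp/cleanup.sh and the optional directory argument are inventions for the demonstration; adjust them to your setup:

```shell
# Write the wrapper script (in real use, save it wherever you keep scripts).
cat > /tmp/cleanup.sh <<'EOF'
#!/bin/bash
# Delete the contents of the reports directory, if it exists.
dir="${1:-/home/docs/reports}"
if [ -d "$dir" ]; then
    cd "$dir" && rm -r -- *
fi
EOF
chmod +x /tmp/cleanup.sh
# Corresponding crontab entry (hourly), using the script's full path:
# 0 * * * * /tmp/cleanup.sh
```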
The easiest thing by far is to put SHELL=/bin/bash at the top of your crontab. Works for me.
