bash script to list files from a given user

I have a problem with this one.
It keeps returning "not a directory", but it certainly is one.
#!/usr/local/bin/bash
DIR=$1
if [ -d "$DIR" ]; then
ls -1Apl /home/$DIR | grep -v /\$
else
echo "not a directory"
fi
One more thing, I need a little hint. I have to list files from a given user in a given directory, where I get both the user and directory as parameters.
Just suggestions, please.

Are you in the /home directory when you run this? If not, you may want to change it to:
if [ -d "/home/$DIR" ]; then
to match the ls command. This is assuming you're running it with something like myscript pax to examine the /home/pax directory, which seems to be the case.
And if you want to only list those files in there owned by a specific user, you can use awk to only print those with column 3 set to the desired value ($usrnm), something like:
ls -1Apl /home/$DIR | grep -v /\$ | awk -v user="$usrnm" '$3 == user'

You're not testing for the existence of the same directory as you're trying to list - maybe you mean -d "/home/$DIR"? Or from your requirement, do you have two parameters?
user="$1"
dir="$2"
# and then examine "/home/$user/$dir"
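Putting the two-parameter suggestion together with the awk owner filter, a minimal sketch might look like this. `list_user_files` is a hypothetical helper name; the `/home` base is taken from the question, and a third argument overrides it purely for testing:

```shell
# Hypothetical helper: list regular files owned by a user inside a
# subdirectory of their home. /home (from the question) is the default
# base; a third argument overrides it.
list_user_files() {
    user="$1"
    dir="$2"
    base="${3:-/home}"
    target="$base/$user/$dir"
    if [ -d "$target" ]; then
        # -A includes dotfiles, -p marks directories with a trailing /,
        # grep strips those, awk keeps lines whose owner column matches
        ls -Apl "$target" | grep -v '/$' | awk -v u="$user" '$3 == u'
    else
        echo "not a directory: $target" >&2
        return 1
    fi
}
```

Called as `list_user_files pax docs`, it would examine `/home/pax/docs` and print only the files owned by `pax`.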

Related

Bash script to check if a new file has been created on a directory after run a command

Using a bash script, I'm trying to detect whether a file has been created in a directory while commands are running. Let me illustrate the problem:
#!/bin/bash
# give base directory to watch file changes
WATCH_DIR=./tmp
# get list of files on that directory
FILES_BEFORE= ls $WATCH_DIR
# actually a command is running here but lets assume I've created a new file there.
echo >$WATCH_DIR/filename
# and I'm getting new list of files.
FILES_AFTER= ls $WATCH_DIR
# detect changes and if any changes has been occurred exit the program.
After that I tried to compare FILES_BEFORE and FILES_AFTER, but couldn't get it to work. I tried:
comm -23 <($FILES_AFTER |sort) <($FILES_BEFORE|sort)
diff $FILES_AFTER $FILES_BEFORE > /dev/null 2>&1
cat $FILES_AFTER $FILES_BEFORE | sort | uniq -u
None of them told me whether there was a change or not. What I need is to detect any change and exit the program if one occurred. I'm not really good at bash scripting; I searched a lot on the internet but couldn't find what I need. Any help will be appreciated. Thanks.
Thanks to the informative comments, I realized I had missed some bash-scripting basics, but I finally made it work. I'll leave my solution here as an answer for those who struggle like me:
WATCH_DIR=./tmp
FILES_BEFORE=$(ls $WATCH_DIR)
echo >$WATCH_DIR/filename
FILES_AFTER=$(ls $WATCH_DIR)
if diff <(echo "$FILES_AFTER") <(echo "$FILES_BEFORE")
then
echo "No changes"
else
echo "Changes"
fi
It outputs "Changes" on the first run and "No changes" on subsequent runs, unless you delete the newly added files.
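Since the original goal was to exit the program when a change is detected, the same comparison can drive the exit status directly. A small sketch, reusing the ./tmp directory from the question:

```shell
#!/bin/bash
# Exit non-zero when the directory contents changed, as the question asked.
WATCH_DIR=./tmp
mkdir -p "$WATCH_DIR"

FILES_BEFORE=$(ls -A "$WATCH_DIR")
# ... the monitored command would run here ...
FILES_AFTER=$(ls -A "$WATCH_DIR")

if [ "$FILES_BEFORE" != "$FILES_AFTER" ]; then
    echo "Changes detected" >&2
    exit 1
fi
echo "No changes"
```

A caller (or cron job) can then simply check `$?` to decide whether to stop.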
I'm trying to interpret your script (which contains some errors) into an understanding of your requirements.
I think the simplest way is to redirect the ls command output to named files, then diff those files:
#!/bin/bash
# give base directory to watch file changes
WATCH_DIR=./tmp
# get list of files on that directory
ls $WATCH_DIR > /tmp/watch_dir.before
# actually a command is running here but lets assume I've created a new file there.
echo >$WATCH_DIR/filename
# and I'm getting new list of files.
ls $WATCH_DIR > /tmp/watch_dir.after
# detect changes and if any changes has been occurred exit the program.
diff -c /tmp/watch_dir.after /tmp/watch_dir.before
If any files are modified by the 'commands', i.e. they exist in the 'before' list but their contents change, the above will not show that as a difference.
In that case you might be better off creating a 'marker' file at the instant monitoring starts, then using the find command to list any files newer than the marker file. Something like this:
#!/bin/bash
# give base directory to watch file changes
WATCH_DIR=./tmp
# get list of files on that directory
ls $WATCH_DIR > /tmp/watch_dir.before
# actually a command is running here but lets assume I've created a new file there.
echo >$WATCH_DIR/filename
# and I'm getting new list of files.
find $WATCH_DIR -type f -newer /tmp/watch_dir.before -exec ls -l {} \;
What this won't do is show any files that were deleted, so perhaps a hybrid list could be used.
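A hybrid sketch along those lines might combine the diff of the name lists (which catches additions and deletions) with `find -newer` against the saved 'before' list (which catches modifications). Paths and the simulated command are the ones from the question; GNU find is assumed:

```shell
#!/bin/bash
WATCH_DIR=./tmp
mkdir -p "$WATCH_DIR"

# the 'before' list doubles as the marker file for find -newer
ls -A "$WATCH_DIR" > /tmp/watch_dir.before

# actually a command is running here, but let's assume it creates a file
echo > "$WATCH_DIR/filename"

ls -A "$WATCH_DIR" > /tmp/watch_dir.after

echo "Added or removed names:"
diff /tmp/watch_dir.before /tmp/watch_dir.after || true

echo "Files modified since the marker:"
find "$WATCH_DIR" -type f -newer /tmp/watch_dir.before
```

The `|| true` keeps the script going when diff reports differences (diff exits non-zero in that case), so both views are always printed.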
Here is how I got it to work. It's also set up so that you can watch multiple directories with the same script via cron.
For example, if you wanted one to run every minute:
* * * * * /usr/local/bin/watchdir.sh /makepdf
and one every hour:
0 * * * * /user/local/bin/watchdir.sh /incoming
#!/bin/bash
WATCHDIR="$1"
NEWFILESNAME=.newfiles$(basename "$WATCHDIR")
if [ ! -f "$WATCHDIR"/.oldfiles ]
then
ls -A "$WATCHDIR" > "$WATCHDIR"/.oldfiles
fi
ls -A "$WATCHDIR" > $NEWFILESNAME
DIRDIFF=$(diff "$WATCHDIR"/.oldfiles "$NEWFILESNAME" | cut -f 2 -d " ")
for file in $DIRDIFF
do
if [ -e "$WATCHDIR"/$file ];then
#do what you want to the file(s) here
echo $file
fi
done
rm $NEWFILESNAME

Is there a way to find all the files that got created a specific time?

I want to make a bash script in Unix where the user gives a number between 1 and 24. It then has to scan every file in the directory the user is in and find all the files that were created at that hour.
I know that Unix won't store the birth time for most files. But I found that each file has a crtime, which you can read with this line of code: debugfs -R 'stat /path/to/file' /dev/sda2
The problem is that I have to know every file's crtime so I can search them by the hour.
Thanks in advance, and sorry for the complicated explanation and bad English.
Use a loop to execute stat for each file, then use grep -q to check whether the crtime matches the time given by the user. Since paths in debugfs -R 'stat path' do not necessarily correspond to paths on your system and there might be quoting issues, we use inode numbers instead.
#! /usr/bin/env bash
hour="$1"
for f in ./*; do
debugfs -R "stat <$(stat -c %i "$f")>" /dev/sda2 2> /dev/null |
grep -Eq "^crtime: .* 0?$hour:" &&
echo "$f"
done
The script above assumes that your working directory is on an ext4 file system on /dev/sda2. Because of debugfs you have to run the script as the superuser (for instance via sudo). Depending on your system there might be an alternative to debugfs which can be run as a regular user.
Example usage:
Print all files and directories which were created between 8:00:00 am and 8:59:59 am.
$ ./scriptFromAbove 8
./some file
./another file
./directories are matched too
If you want to exclude directories you can add [ -f "$f" ] && in front of debugfs.
The OP has already identified the issue that the Unix system calls do not retrieve the file creation time, just the modification times (mtime or ctime). Assuming this is an acceptable substitution, an efficient way to find all the files in the current directory "created" during a specific hour is to leverage ls and an awk filter.
#!/bin/bash
# printf -v is a bashism, so use bash rather than sh
printf -v hh '%02d' "$1"
ls -l --time-style=long-iso | awk -v hh="$hh" '+$7 == hh'
While this solution does not access the actual creation time, it does not require superuser access (which debugfs usually does).
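As another hedged alternative: recent GNU coreutils can report the birth time directly via stat's %W format (seconds since the epoch, 0 when the file system does not record it), which avoids both debugfs and root. A sketch, assuming GNU stat and GNU date; it defaults to the current hour so it runs standalone:

```shell
#!/bin/bash
# List files in the current directory whose birth hour matches $1.
# Defaults to the current hour when no argument is given.
hour=${1:-$(date +%-H)}
for f in ./*; do
    [ -f "$f" ] || continue
    birth=$(stat -c %W "$f")
    # %W prints 0 when the file system does not record the birth time
    if [ "$birth" -gt 0 ] && [ "$(date -d "@$birth" +%-H)" -eq "$hour" ]; then
        echo "$f"
    fi
done
```

On file systems without birth-time support every %W is 0 and nothing is printed, so the debugfs approach above remains the fallback.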

Why is this bash script not changing path?

I wrote a basic script which changes the directory to a specific path and shows the list of folders, but my script shows the list of files of the current folder where my script lies, instead of the one I specify in the script.
Here is my script:
#!/bin/bash
v1="$(ls -l | awk '/^-/{ print $NF }' | rev | cut -d "_" -f2 | rev)"
v2=/home/PS212-28695/logs/
cd $v2 && echo $v1
Does any one knows what I am doing wrong?
Your current script doesn't do what you expect. The v1 variable is not a command to execute later; due to the $() syntax it is in fact the output of the ls pipeline at the moment of assignment, and that's why you see files from the current directory: it was your working directory at that moment. So you should rather do an ordinary
ls -t /home/PS212-28695/logs/
EDIT
it runs, but what if I need to store the ls -t output in a variable?
Then it is the same syntax you already had, but with the proper arguments:
v1=$(ls -t /home/PS212-28695/logs/)
echo ${v1}
If for any reason you want to cd, then you have to do that prior to setting v1, for the same reason explained above.
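If the directory change really is needed, a sketch that does the cd first, so the cd and the ls refer to the same place (`list_newest_first` is a hypothetical helper name; the path in the usage comment is the one from the question):

```shell
# Hypothetical helper: change into the directory first, then capture
# the newest-first listing from there.
list_newest_first() {
    dir="$1"
    cd "$dir" || return 1
    ls -t
}

# e.g. v1=$(list_newest_first /home/PS212-28695/logs/)
```

Running it inside `$( )` keeps the cd in a subshell, so the caller's working directory is untouched.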

loop symbolic link passwd in shell script

I need to create symbolic links to chosen files for every user, in a loop.
I can use this command:
awk -F: ' { p="/home/"$1; printf "%s\n%s\n%s\n",p"/public_html/example.php",p"/www/example.html",p"/tmp/example.txt" }' /etc/passwd | sort
But how can I make a loop over all users, with output like this:
example.php > /home/user1/public_html/example.php
example.html > /home/user2/www/example.php
example.php > /home/user2/tmp/example.txt
example.php > /home/user3/public_html/example.php
example.php > /home/user3/www/example.html
example.php > /home/user3/tmp/example.txt
[...snip...]
What I mean is: the script should test whether the paths above exist for each user, and create a symbolic link for each file found, using the command
ln -s
I tried to execute these commands:
#/bin/bash
mkdir folder
a=awk -F: ' { p="/home/"$1; printf "%s\n%s\n%s\n",p"/public_html/example.php",p"/www/example.html",p"/tmp/example.txt" }' /etc/passwd | sort
ln -s "$a" > folder
done
But it fails.
I await your answer. Thank you, stackoverflow.com
Try something like this:
IFS=:
while read user rest; do
for file in public_html/example.php www/example.php; do
if [ -e "/home/$user/$file" ]; then
mkdir -p "/home/$user/folder"
ln -s "/home/$user/$file" "/home/$user/folder/example.php"
break
fi
done
done </etc/passwd
This parses /etc/passwd in the shell. IFS=: sets the internal field separator, which is then used by read to split the lines it reads into fields. The first field is assigned to the shell variable user, and all remaining fields are assigned to rest (and ignored).
Note: This is probably OK for a script that is used once, but there are several issues you should be aware of.
You probably don't want to do the symlinking for all users, just for regular ones. The script ensures this by assuming the user's home directory is /home/$user, which typically excludes root and other special-purpose users. However, this is fragile; e.g. home directories can be located elsewhere on some systems.
No symlink is created if none of the files were found in the user's home directory.
The permissions of the symlink are not adjusted. The resulting symlink has the permissions that the script was running with.
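To restrict the loop to regular accounts more robustly, the UID field (third in /etc/passwd) can be checked instead of relying on the /home prefix. The >= 1000 cutoff is an assumption and varies between distributions; check UID_MIN in /etc/login.defs on your system:

```shell
#!/bin/bash
# Skip system accounts by UID, and use the home directory recorded in
# /etc/passwd rather than assuming /home/$user.
while IFS=: read -r user _ uid _ _ home _; do
    [ "$uid" -ge 1000 ] || continue    # assumed cutoff for regular users
    [ -d "$home" ] || continue         # skips nobody and similar entries
    echo "regular user: $user ($home)"
done < /etc/passwd
```

The symlink-creating loop from the answer above would slot in where the echo is.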

A simple mv command in a BASH script

The aim of my script:
look at all the files in a directory ($Home/Music/TEST) and its sub-directories (they are music files)
find out what music genre each file belongs to
if the genre is Heavy, then move the file to another directory ($Home/Music/Output)
This is what I have:
#!/bin/bash
cd Music/TEST
for files in *
do
if [ -f "$files" ];then
# use mminfo to get the track info
genre=`mminfo "$files"|grep genre|awk -F: '{print $2}'|sed 's/^ *//g'|sed 's/[^a-zA-Z0-9\ \-\_]//g'`
if [ "$genre" = "Heavy" ]; then
mv "$files" "~/Music/Output/$files"
fi
fi
done
Please tell me how to write the mv command. Everything I have tried has failed. I get errors like this:
mv: cannot move ‘3rd Eye Landslide.mp3’ to ‘/Music/Output/3rd Eye Landslide.mp3’: No such file or directory
Please don't think I wrote that mminfo line - that's just copied from good old Google search. It's way beyond me.
Your second argument to mv appears to be "~/Music/Output/$files"
If the ~ is meant to signify your home directory, you should use $HOME instead, like:
mv "$files" "$HOME/Music/Output/$files"
~ does not expand to $HOME when quoted.
By the look of it, the problem occurs when you move the file to its destination. Please check that Music/Output/ exists relative to your current directory. Alternatively, use an absolute path to make it safe. It's also a good idea not to use spaces in file names. Hope this helps. :)
Putting this command before the mv command should fix your problem:
mkdir -p ~/Music/Output
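Putting both answers together, the moving part of the script could be sketched like this (`move_track` is a hypothetical helper name; $HOME replaces the quoted ~, and mkdir -p guarantees the destination exists before the move):

```shell
# Hypothetical helper combining both fixes: expand $HOME explicitly and
# create the destination before moving.
move_track() {
    src="$1"
    dest="${2:-$HOME/Music/Output}"
    mkdir -p "$dest"
    mv -- "$src" "$dest/"
}
```

Inside the loop from the question, the mv line would become `move_track "$files"`.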
