Loop over users from passwd to create symbolic links in a shell script

I need to make a loop that creates symbolic links to certain files in each user's home directory.
I can use this command:
awk -F: ' { p="/home/"$1; printf "%s\n%s\n%s\n",p"/public_html/example.php",p"/www/example.html",p"/tmp/example.txt" }' /etc/passwd | sort
but how can I make it loop over all users?
I want the output of the loop to look like this:
example.php > /home/user1/public_html/example.php
example.html > /home/user2/www/example.php
example.php > /home/user2/tmp/example.txt
example.php > /home/user3/public_html/example.php
example.php > /home/user3/www/example.html
example.php > /home/user3/tmp/example.txt
[...snip...]
What I mean is that the script should repeat the work for each user: test the paths to the files selected above, and create a symbolic link for every file that exists in each path, using the command
ln -s
I tried to execute these commands:
#/bin/bash
mkdir folder
a=awk -F: ' { p="/home/"$1; printf "%s\n%s\n%s\n",p"/public_html/example.php",p"/www/example.html",p"/tmp/example.txt" }' /etc/passwd | sort
ln -s "$a" > folder
done
But it fails
I'm waiting for your answer. Thank you.

Try something like this:
IFS=:
while read user rest; do
    for file in public_html/example.php www/example.php; do
        if [ -e "/home/$user/$file" ]; then
            mkdir -p "/home/$user/folder"
            ln -s "/home/$user/$file" "/home/$user/folder/example.php"
            break
        fi
    done
done </etc/passwd
This parses /etc/passwd in the shell. IFS=: sets the internal field separator, which is then used by read to split each line it reads into fields. The first field is assigned to the shell variable user, and all remaining fields are assigned to rest (and ignored).
Note: This is probably OK for a script that is used once, but there are several issues you should be aware of.
You probably don't want to do the symlinking for all users, just for regular ones. The script ensures this by assuming the user's home directory is /home/$user, which typically excludes root and other special-purpose accounts. However, this is fragile; home directories can be located elsewhere on some systems (see the sketch after these notes).
No symlink is created if none of the files were found in the user's home directory.
The ownership of the symlink is not adjusted. The resulting symlink is owned by whichever account the script was running as, not by the user it is created for.
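To address the first two points, here is a rough sketch that reads each account's home directory straight from /etc/passwd instead of assuming /home/$user, and links every file it finds; the UID >= 1000 cutoff for skipping system accounts is an assumption you may need to adjust for your distribution:
while IFS=: read -r user _ uid _ _ home _; do
    [ "$uid" -ge 1000 ] || continue    # skip system accounts (assumed cutoff)
    for file in public_html/example.php www/example.html tmp/example.txt; do
        if [ -e "$home/$file" ]; then
            mkdir -p "$home/folder"
            ln -s "$home/$file" "$home/folder/$(basename "$file")"
        fi
    done
done < /etc/passwd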

Related

How to check if a file exists or not, and create or delete it accordingly, in shell

In shell, I want to check if a file exists or not, then create it if it doesn't exist or delete it if it does. For this I need a one-liner and am trying to do something like:
ls | awk '\filename\' <if exist delete else create>
I need the ls because my actual problem has some command that outputs a list of strings that needs to be piped to awk and then possibly touch/mkdir.
#!/bin/bash
# $1 is the input filename; -f checks whether $1 is an existing regular file
if [ -f "$1" ]
then
    rm "$1"    # the file exists, so delete it
else
    touch "$1" # the file does not exist, so create it
fi
Save the file as filecreator.sh.
Change its permissions to allow execution with chmod a+rx filecreator.sh.
Run the script with ./filecreator.sh yourfile.extension.
You can see the result in your directory.
Using oc projects and oc new-project instead of ls and touch as indicated in a comment.
oc projects |
while read -r proj; do
    if [ -d "$proj" ]; then
        rm -rf "$proj"
    else
        oc new-project "$proj"
    fi
done
I don't think there is a useful way to write this as a one-liner. If you like, you can replace the newlines with semicolons, except after then and else.
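For example:
if [ -e yourfile ]; then rm yourfile; else touch yourfile; fi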
You really should put your actual requirements in the question itself. ls is a superbly useless example because it cannot list a file which doesn't already exist, and you should not use ls in scripts at all.
rm yourfile 2>/dev/null || touch yourfile
If the file existed before, rm will succeed and erase the file, and the touch won't be executed. You end up with no file afterwards.
If the file did not exist before, rm will fail (but the error message is not visible, since it is directed to the bitbucket), and due to the non-zero exit code of rm, the touch will be executed. You end up with an empty file afterwards.
Caveat: If the file exists, but you don't have permissions to remove it, you won't notice this error, due to the redirection of stderr. Hence, for debugging and later diagnosis, it might be better to redirect stderr to some file instead.
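For instance (rm.err is just an arbitrary file name chosen for this sketch):
rm yourfile 2>>rm.err || touch yourfile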

Is there a way to find all the files that got created at a specific time?

I want to make a bash script on Unix where the user gives a number between 1 and 24. It then has to scan every file in the directory the user is in and find all the files that were created during that hour.
I know that Unix won't store the birth time for most files. However, I found that each file has a crtime, which you can read with this command: debugfs -R 'stat /path/to/file' /dev/sda2
The problem is that I have to know every crtime so I can search them by the hour.
Thanks in advance, and sorry for the complicated explanation and bad English.
Use a loop to execute stat for each file, then use grep -q to check whether the crtime matches the time given by the user. Since paths in debugfs -R 'stat path' do not necessarily correspond to paths on your system and there might be quoting issues, we use inode numbers instead.
#!/usr/bin/env bash
hour="$1"
for f in ./*; do
    debugfs -R "stat <$(stat -c %i "$f")>" /dev/sda2 2> /dev/null |
        grep -Eq "^crtime: .* 0?$hour:" &&
        echo "$f"
done
The above script assumes that your working directory is on an ext4 file system on /dev/sda2. Because of debugfs you have to run the script as the superuser (for instance by using sudo). Depending on your system there might be an alternative to debugfs which can be run as a regular user.
Example usage:
Print all files and directories which were created between 8:00:00 am and 8:59:59 am.
$ ./scriptFromAbove 8
./some file
./another file
./directories are matched too
If you want to exclude directories you can add [ -f "$f" ] && in front of debugfs.
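The body of the loop then becomes something like:
[ -f "$f" ] && debugfs -R "stat <$(stat -c %i "$f")>" /dev/sda2 2> /dev/null |
    grep -Eq "^crtime: .* 0?$hour:" &&
    echo "$f"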
The OP has already identified the issue that Unix system calls do not retrieve the file creation time, just the modification times (mtime or ctime). Assuming this is an acceptable substitution, an efficient way to find all the files in the current directory "created" in a specific hour is to leverage ls and an awk filter.
#!/bin/bash
printf -v hh '%02d' "$1"                 # zero-pad the hour, e.g. 8 -> 08
ls -l --time-style=long-iso | awk -v hh="$hh" '+$7 == hh'
While this solution does not access the actual creation time, it does not require superuser access (which debugfs usually requires).
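If your GNU coreutils, kernel, and file system do expose the birth time, GNU stat can print it without root. A minimal sketch, untested on your setup (%w prints "-" when the birth time is not recorded):
#!/bin/bash
printf -v hh '%02d' "$1"
for f in ./*; do
    birth=$(stat -c %w -- "$f")          # e.g. "2023-04-01 08:15:30.000000000 +0000"
    [ "$birth" = "-" ] && continue       # no birth time recorded for this file
    [ "${birth:11:2}" = "$hh" ] && echo "$f"
done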

Shell: create new folders

I have many file paths, and I need to copy all of the files to another location, /sample, placing them into different folders:
/ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59329/111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_1.fq.gz
/ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59329/111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_2.fq.gz
/ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59329/clean_111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_1.fq.gz.total.info
I want to copy those files into the AS34_59329 folder inside /sample
/ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59328/111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_1.fq.gz
/ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59328/111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_2.fq.gz
/ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59328/clean_111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_1.fq.gz.total.info
I want to copy those files into the AS34_59328 folder inside /sample
I wrote code to scp all the files into the /sample folder, but I don't know how to put each file into a different sub-directory. For example:
/ifshk5/BC_IP/PROJECT/T11073/T11073_RICekkR/Fq/AS34_59328/clean_111220_I631_FCC0E5EACXX_L4_RICwdsRSYHSD11-2-IPAAPEK-93_1.fq.gz.total.info
should be put into AS34_59328.
#!/bin/bash
while read myline
do
    for i in $myline
    do
        if [ -f $i ]; then
            # how to put different files into different sub-directories?
            scp -r $i xxx#191.168.174.43:/sample
        fi
    done
done < data.list
New, changed part:
#!/bin/bash
while read myline
do
    for i in $myline
    do
        if [ -f $i ]
        then
            relname=$(echo $i | sed 's%\(/[^/][^/]*\)\{5\}/%%')
            echo $relname
        fi
    done
done < /home/jesse/T11073_all_3254.fq.list
It appears you need to strip the leading 5 components of the pathname off the filename. Since you don't have spaces in your names (the way you're using for i in $myline precludes that possibility), you can use:
#!/bin/bash
while read myline
do
    for i in $myline
    do
        if [ -f $i ]
        then
            relname=$(echo $i | sed 's%\(/[^/][^/]*\)\{5\}/%%')
            scp -r $i xxx#191.168.174.43:/sample/$relname
        fi
    done
done < data.list
The regex just looks for a sequence of five groups, each a slash followed by one or more non-slashes, plus one more slash, and deletes them. Since slashes figure prominently in the search, I used % to mark the sections of the s/// operation instead.
For example, given the input:
/a/b/c/d/e/f/g
the output from the sed is:
f/g
Note that this code does not explicitly create directories on the remote machine; it just specifies where the file is to go. If you need to create them too, you will have to investigate ssh, probably, to run mkdir -p /sample/$(dirname $relname) on the remote machine (where the dirname operation can be run either locally or remotely).
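A rough sketch of that combination (untested; it writes @ where the question shows #, and assumes key-based ssh login to the same host):
relname=$(echo $i | sed 's%\(/[^/][^/]*\)\{5\}/%%')
ssh xxx@191.168.174.43 "mkdir -p /sample/$(dirname "$relname")"   # create the target directory remotely
scp $i xxx@191.168.174.43:/sample/$relname                        # then copy the file into it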
Note that scp has a recursive copy mode (-r) which would simplify things considerably if you knew you needed to copy all the files from the local directory to the remote.

Copy a list of files from a file

I have a file containing a list of files, one per line:
$ cat file_list
file1
file2
file3
I want to copy this list of files with FTP.
How can I do that? Do I have to write a script?
You can turn your list of files into a list of ftp commands easily enough:
(echo open hostname.host;
echo user username;
cat filelist | awk '{ print "put " $1; }';
echo bye) > script.ftp
Then you can just run:
ftp -s script.ftp
Or possibly (with other versions of ftp)
ftp -n < script.ftp
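With the file_list from the question, the generated script.ftp would contain something like this (hostname.host and username are placeholders):
open hostname.host
user username
put file1
put file2
put file3
bye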
Something along these lines; what somecommand should be depends on what you want to do, which I can't tell from your question, sorry.
#!/bin/bash
# Iterate through the lines in the file
for line in $(cat file.txt); do
    # your ftp command goes here
    somecommand $line
done
Edit: If you really want to pursue this route for multiple files (you shouldn't!), you can use the following command in place of somecommand $line:
ncftpput -m -u username -p password ftp.server.com /remote/folder $line
ncftpput probably also takes an arbitrary number of files to upload in one go, but I haven't checked. Notice that this approach will connect and disconnect for every single file!
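If your version of ncftpput does accept several files per invocation (check its man page), the whole list can go over a single connection; a rough sketch:
xargs ncftpput -m -u username -p password ftp.server.com /remote/folder < file_list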
Thanks for the very helpful example of how to feed a list of files to ftp. This worked beautifully for me.
After creating my ftp script on Linux (CentOS 5.5), I ran the script with:
ftp -n < ../script.ftp
My script (with names changed to protect the innocent) starts with:
open <ftpsite>
user <userid> <passwd>
cd <remote directory>
bin
prompt
get <file1>
get <file2>
And ends with:
get <filen-1>
get <filen>
bye

Bash script to list files from a given user

I have a problem with this one.
It keeps returning "not a directory", but it certainly is one.
#!/usr/local/bin/bash
DIR=$1
if [ -d "$DIR" ]; then
    ls -1Apl /home/$DIR | grep -v /\$
else
    echo "not a directory"
fi
One more thing, I need a little hint. I have to list files from a given user in a given directory, where I get both the user and directory as parameters.
Just suggestions, please.
Are you in the /home directory when you run this? If not, you may want to change it to:
if [ -d "/home/$DIR" ]; then
to match the ls command. This is assuming you're running it with something like myscript pax to examine the /home/pax directory, which seems to be the case.
And if you want to only list those files in there owned by a specific user, you can use awk to only print those with column 3 set to the desired value ($usrnm), something like:
ls -1Apl /home/$DIR | grep -v /\$ | awk -v user="$usrnm" '$3 == user'
You're not testing for the existence of the same directory you're trying to list; maybe you mean -d "/home/$DIR"? Or, going by your requirement, do you have two parameters?
user="$1"
dir="$2"
# and then examine "/home/$user/$dir"
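Putting the two suggestions together, a minimal sketch (the /home prefix and the ls owner-column layout are assumptions carried over from the question and answers):
#!/usr/local/bin/bash
# usage: ./script username subdirectory
user="$1"
dir="$2"
if [ -d "/home/$user/$dir" ]; then
    # list non-directories, keeping only lines whose owner column matches the user
    ls -1Apl "/home/$user/$dir" | grep -v '/$' | awk -v u="$user" '$3 == u'
else
    echo "not a directory"
fi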
