In a folder I have 600 files, numbered from 001 to 600 and named like file_001.ext. In a text file I have the number and title of each of these files. I now want to rename file_001.ext to the title listed for 001 in the text file.
But I don't have a clue how to do this properly on Linux Mint. Can someone help me or give me a tip?
The content of titles.txt looks like this, with a tab (which can easily be changed, of course) between the number and the title:
001 title of 1
002 this is 2
003 and here goes 3
004 number four
005 hi this is five
etc
The content of the folder looks like this, with no exceptions:
file_001.ext
file_002.ext
file_003.ext
file_004.ext
file_005.ext
etc
Just loop through your file with read, get the separated columns with cut (thank you, #Jack) and mv your files accordingly. In this very simple implementation I assume that the text file containing the new names is located at ./filenames and that the script is called from the directory containing your files.
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    NR=$(echo "$line" | cut -f 1)
    NAME=$(echo "$line" | cut -f 2)
    if [ -f "file_${NR}.ext" ]; then
        mv "file_${NR}.ext" "${NAME}.ext"   # keep the original extension
    fi
done < filenames
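Since titles.txt is tab-separated, the two cut calls can also be collapsed into a single read with IFS set to a tab, and swapping mv for echo gives a safe dry run first. A minimal sketch (it assumes the file_NNN.ext pattern from the folder listing and keeps the extension):

```shell
#!/bin/bash
# Dry run: print each rename instead of performing it.
while IFS=$'\t' read -r NR NAME; do
    echo mv "file_${NR}.ext" "${NAME}.ext"
done < filenames
```

Once the printed commands look right, drop the echo to perform the renames for real.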
Related
I have 16 fastq files under different directories, and I need to produce a readlength.tsv for each one separately. This is the script I currently use to produce readlength.tsv:
zcat ~/proje/project/name/fıle_fastq | paste - - - - | cut -f1,2 | while read readID sequ; do
    len=`echo $sequ | wc -m`
    echo -e "$readID\t$len"
done > ~/project/name/fıle1_readlength.tsv
One by one I can produce these read lengths, but it takes a long time. I want to produce them all at once, which is why I created a list of the fastq files, but I couldn't come up with a loop that produces readlength.tsv from all 16 fastq files at once.
I would appreciate it if you could help me.
Assuming a file list.txt contains the 16 file paths such as:
~/proje/project/name/file1_fastq
~/proje/project/name/file2_fastq
..
~/path/to/the/fastq_file16
Then would you please try:
#!/bin/bash
while IFS= read -r f; do                        # "f" is assigned each fastq filename in "list.txt"
    mapfile -t ary < <(zcat "$f")               # read the decompressed lines into the array "ary"
    for ((i = 0; i < ${#ary[@]}; i += 4)); do   # fastq records are four lines long
        echo -e "${ary[i]}\t${#ary[i+1]}"       # ${ary[i]} is the id and ${#ary[i+1]} is the length of the sequence
    done
done < list.txt > readlength.tsv
As the fastq file format stores the id on the 1st line and the sequence
on the 2nd line of each four-line record, bash's built-in mapfile is well suited to handling them.
As a side note, the letter ı in your code is a non-ASCII character; it should probably be a plain i.
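If the bash array feels awkward, the same per-record logic can be sketched with awk instead (assuming the same list.txt and that every record is exactly four lines):

```shell
while IFS= read -r f; do
  # lines 1, 5, 9, ... hold the id; lines 2, 6, 10, ... hold the sequence
  zcat "$f" | awk 'NR % 4 == 1 { id = $0 } NR % 4 == 2 { print id "\t" length($0) }'
done < list.txt > readlength.tsv
```

This streams each file rather than loading it whole into memory, which may matter for large fastq files.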
I am having some problems with using the first column ${1} as input to a script.
Currently the relevant portion of the script looks like this.
#!/bin/bash
INPUT="${1}"
for NAME in `cat ${INPUT}`
do
SIZE="`du -sm /FAServer/na3250-a/homes/${NAME} | sed 's|/FAServer/na3250-a/homes/||'`"
DATESTAMP=`ls -ld /FAServer/na3250-a/homes/${NAME} | awk '{print $6}'`
echo "${SIZE} ${DATESTAMP}"
done
However, I want to modify INPUT="${1}" so that it takes its input from a specific file rather than from the first positional argument. That way I can run the lines above inside another script, using a previously generated file as the input, and also send the output to a new file.
So something like:
INPUT="$location/DisabledActiveHome ${1}" ???
Here's my full script below.
#!/bin/bash
# This script will search through Disabled Users OU and compare that list of
# names against the current active Home directories. This is to find out
# how much space those Home directories take up and which need to be removed.
# MUST BE RUN AS SUDO!
# Setting variables for _adm and storage path.
echo "Please provide your _adm account name:"
read _adm
echo "Please state where you want the files to be generated: (absolute path)"
read location
# String of commands to lookup information using ldapsearch
ldapsearch -x -LLL -h "REDACTED" -D $_adm#"REDACTED" -W -b "OU=Accounts,OU=Disabled_Objects,DC="XX",DC="XX",DC="XX"" "cn=*" | grep 'sAMAccountName'| egrep -v '_adm$' | cut -d' ' -f2 > $location/DisabledHome
# Get a list of all the active Home directories
ls /FAServer/na3250-a/homes > $location/ActiveHome
# Compare the Disabled accounts against Active Home directories
grep -o -f $location/DisabledHome $location/ActiveHome > $location/DisabledActiveHome
# Now get the size and datestamp for the disabled folders
INPUT="${1}"
for NAME in `cat ${INPUT}`
do
SIZE="`du -sm /FAServer/na3250-a/homes/${NAME} | sed 's|/FAServer/na3250-a/homes/||'`"
DATESTAMP=`ls -ld /FAServer/na3250-a/homes/${NAME} | awk '{print $6}'`
echo "${SIZE} ${DATESTAMP}"
done
I'm new to all of this so any help is welcome. I will be happy to clarify any and all questions you might have.
EDIT: A little more explanation, because I'm terrible at these things.
The lines of code below came from a previous script and form a FOR loop:
INPUT="${1}"
for NAME in `cat ${INPUT}`
do
SIZE="`du -sm /FAServer/na3250-a/homes/${NAME} | sed 's|/FAServer/na3250-a/homes/||'`"
DATESTAMP=`ls -ld /FAServer/na3250-a/homes/${NAME} | awk '{print $6}'`
echo "${SIZE} ${DATESTAMP}"
done
It is executed by typing:
./Script ./file
The FILE that is being referenced has one column of user names and no other data:
User1
User2
User3
etc.
The script would take the file and look at the first user name, which is referenced by
INPUT=${1}
then run a du command on that user and find out the size of their HOME drive, which is reported by the SIZE variable. It does the same thing with DATESTAMP, for when the HOME drive was created. When it is done with the tasks for that user, it moves on to the next one in the column until it is done.
So following that logic, I want to automate the entire process. Instead of doing this in two steps, I would like to make this all a one step process.
The first process would be to generate the $location/DisabledActiveHome file, which would have all of the disabled users names. Then to run the last portion to get the Size and creation date of each HOME drive for all the users in the DisabledActiveHome file.
So to do that, I need to modify the
INPUT=${1}
line to reflect the previously generated file.
$location/DisabledActiveHome
I don't understand your question really, but I think you want this. Say your file is called file.txt and looks like this:
1 99
2 98
3 97
4 96
You can get the first column like this:
awk '{print $1}' file.txt
1
2
3
4
If you want to use that in your script, do this
while read NAME; do
echo $NAME
done < <(awk '{print $1}' file.txt)
1
2
3
4
Or you may prefer cut like this:
while read NAME; do
echo $NAME
done < <(cut -d" " -f1 file.txt)
1
2
3
4
Or this may suit even better
while read NAME OtherUnwantedJunk; do
echo $NAME
done < file.txt
1
2
3
4
This last, and probably best, solution above relies on IFS, which is bash's Input Field Separator, so if your file looked like this
1:99
2:98
3:97
4:96
you would do this
while IFS=":" read NAME OtherUnwantedJunk; do
echo $NAME
done < file.txt
1
2
3
4
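Applied to the script in the question, the final loop could then read straight from the generated file instead of ${1}. A sketch under the same assumptions as above ($location is set by the earlier read; the DisabledHomeSizes output name is just an arbitrary choice of mine):

```shell
# Read one user name per line from the generated file and report size and date.
while IFS= read -r NAME; do
    SIZE=$(du -sm "/FAServer/na3250-a/homes/$NAME" | sed 's|/FAServer/na3250-a/homes/||')
    DATESTAMP=$(ls -ld "/FAServer/na3250-a/homes/$NAME" | awk '{print $6}')
    echo "$SIZE $DATESTAMP"
done < "$location/DisabledActiveHome" > "$location/DisabledHomeSizes"
```

The while/read form also avoids the word-splitting pitfalls of `for NAME in $(cat ...)`.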
INPUT="$location/DisabledActiveHome" worked like a charm. I was confused about the syntax and the proper usage and output.
I'm fairly new to using bash.
I have several hundred documents, each named QP1172, QP1474, QP9926, etc. I need the name of the file to be in the first row of the document (so QP1172 for example would be in row 1 of the document QP1172.txt).
Does anyone know how I could do this? Thank you!
You could do something like
for f in QP????.txt; do echo "$f" | cat - "$f" > "$f.withname"; done
to create new files QP1172.txt.withname etc., and then replace the old ones with them after checking that everything looks ok.
(cat here concatenates the name (given on standard input) with the file contents of each file.)
ADDED: To make it easier to let the new versions get the right name afterwards it might be easier to let them have the same name, but in another directory.
mkdir withname
for f in QP????.txt; do echo "$f" | cat - "$f" > "withname/$f"; done
You could use sed to insert a line at the beginning (note that this -i and 1i usage is GNU sed; BSD/macOS sed is stricter about both):
for f in QP*; do
sed -i "1i$f" "$f"
done
1i$f means "insert a line containing the value of $f before line 1".
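If your sed doesn't support this GNU-style usage, the cat approach above can be made effectively in-place with a temporary file; a portable sketch, with .tmp as an arbitrary suffix:

```shell
for f in QP*; do
    # prepend the file's own name, then replace the original
    printf '%s\n' "$f" | cat - "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```

The glob is expanded once before the loop starts, so the temporary files cannot be picked up by it.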
I am trying to keep some media files organized - from naming convention to folder hierarchy. The files are kept on a server so having this run as a cron is the best bet. The main issue is that some of the images, depending on who takes them, are not named appropriately.
The folder structure that I currently have is:
Landscapes (2012)
Buildings (2013)
Contemporary Designs (2014)
etc.
Files are saved as:
Landscapes 010 (2012).jpg # correct
Landscapes 011 (2012).jpg # correct
Landscapes (2012) 012 (Regional_Airport).jpg # sometimes happens; the (Regional_Airport) needs to be deleted
I would like the file names to be trimmed so they keep the name, the date and the sequence number (regardless of their position). Fancier would be to keep them all in the same order, so that for example (Regional_Airport) is removed and 012 is placed in front of (2012).
Move the file into the folder that has the same name and year
I am fairly limited in my scripting abilities, but I found this while messing with Hazel (which is a limited option, because my computer does not stay on all day):
#!/bin/bash
RootFolder=/Volumes/image_store
# split only on newlines
OLDIFS="$IFS" # save it
IFS="
"
file="$1"
for folder in $(ls -1 "$RootFolder/") ; do
    ufolder=`echo "$folder" | tr "[:upper:]" "[:lower:]"`
    ufile=`echo "$file" | tr "[:upper:]" "[:lower:]"`
    if grep -q "$ufolder" <<< "$ufile" ; then
        mv "$file" "$RootFolder/$folder"
        /usr/local/bin/growlnotify -m "Moved $file to $folder in Image_Store" ImageMover
        exit 0
    fi
done
IFS="$OLDIFS"
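The trimming and reordering part of the question isn't handled by the script above. A sketch of it using bash's regex matching, assuming names of the form "Name (YYYY) NNN (extra).ext" as in the example (the pattern and variable names here are mine, not from Hazel):

```shell
#!/bin/bash
f='Landscapes (2012) 012 (Regional_Airport).jpg'
# capture groups: 1 = name, 2 = year, 3 = sequence number, 4 = extension
if [[ $f =~ ^(.+)\ \(([0-9]{4})\)\ ([0-9]+)\ \(.+\)(\.[^.]+)$ ]]; then
    new="${BASH_REMATCH[1]} ${BASH_REMATCH[3]} (${BASH_REMATCH[2]})${BASH_REMATCH[4]}"
    echo "$new"   # Landscapes 012 (2012).jpg
fi
```

An `mv -- "$f" "$new"` in place of the echo would then perform the actual rename, and the existing folder-matching loop could move the result afterwards.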
I have a list of .txt files, all in the same directory, named "w_i.txt" where i runs from 1 to n. Each of these files contains a single (non-integer) number. I want to read the value from each of these files into a column in Apple's Numbers, with w_1.txt's value in row 1 and w_n.txt's value in row n of that column. Should I use AppleScript for this, and if so, what code would be required?
I think I would tackle this as a shell script, rather than applescript.
The following script will iterate over your set of files in numerical order, and produce plain text output. You can redirect this to a text file. I don't have access to Apple's Numbers, but I'd be very surprised if you can't import data from a plain text file.
I have hard-coded the max file index as 5. You'll want to change that.
If there are any files missing, a 0 will be output on that line instead. You could change that to a blank line as required.
Also I don't know if your files end in newlines or not, so the cat/read/echo line is one way to just get the first token of the line and not worry about any following whitespace.
#!/bin/bash
for i in {1..5} ; do
    if [ -e "w_$i.txt" ] ; then
        cat "w_$i.txt" | { IFS="" read -r n ; echo "$n" ; }
    else
        echo 0
    fi
done
If all files end with newlines, you could just use cat:
cd ~/Documents/some\ folder; cat w_*.txt | pbcopy
This works even if the files don't end with newlines and sorts w_2.txt before w_11.txt (the names are only numeric after the underscore, so sort is told to split on it and compare the second field numerically):
sed -n p $(ls w_*.txt | sort -t_ -k2 -n)
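Note that a plain sort -n compares whole lines, and since these names don't start with a digit it can fall back to lexical order, putting w_11.txt before w_2.txt. Splitting on the underscore makes the intent explicit; a quick check of the ordering (the sample names are just for illustration):

```shell
# prints w_1.txt, w_2.txt, w_11.txt in that order
printf 'w_1.txt\nw_11.txt\nw_2.txt\n' | sort -t_ -k2 -n
```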