Need to update a csv file with the timestamps of files from another location - shell

I have a csv file score.csv at path /NAS/DQ with 2 columns, scorename and filename:
scorename,filename
ABC,cust.txt
XYZ,bank.txt
These files, cust.txt and bank.txt, are placed at /NAS/files_path. A unique instance of each file is placed at this path every day.
I want to append each file's timestamp from /NAS/files_path to the csv file at /NAS/DQ.
So the timestamp should be updated in the csv file at the /NAS/DQ location every time.
I am new to unix and currently looking for ways to do this.
Any help is appreciated!

Sed will be a good candidate for this:
sed -ri '2,$s/(^.*$)/\1 '"$(date)"'/' filename
Substitute the existing line with the existing line plus a space and the date. Note the double quotes around $(date): without them, the spaces in the date output split the sed expression into multiple arguments. The format of the date can be amended as required with a +FORMAT argument (e.g. date +%Y-%m-%d). We don't want to touch the header line, so run the amendment from line 2 to the last line ($).
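This appends the current date, which works when the script runs on the same day the files land. If you need each file's actual modification time instead, a minimal sketch (assuming GNU stat, the paths from the question, and a placeholder output name updated.csv) could look like:
#!/bin/bash
# Append each file's mtime under /NAS/files_path as a new column.
{
  IFS= read -r header
  echo "${header},timestamp"
  while IFS=, read -r scorename filename; do
    ts=$(stat -c '%y' "/NAS/files_path/$filename")
    echo "${scorename},${filename},${ts}"
  done
} < /NAS/DQ/score.csv > /NAS/DQ/updated.csv
Move updated.csv over score.csv afterwards if the update is meant to be in place.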

Related

Search the File Pattern from File Name

I have a file PATTERN_FILE.txt which stores lines like the ones below -
ABC|ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat|8|,|70|NAME
ABC|ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat|9|,|70|PLACE
XYZ|XYZ_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].dat|23|,|70|SSN
XYZ|XYZ_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].dat|33|,|70|DOB
MNO|MNO_SUMMIT.dat|40|,|70|ADDRESS
MNO|MNO_SUMMIT.dat|5|,|70|COUNTRY
So this PATTERN_FILE.txt stores some information about each actual file, but the file name is stored as a pattern (when the name contains a date) rather than as the actual name.
My requirement is a command to which I pass the actual file name, like "ABC_20200408.dat", and it returns all the related lines from this file. Can someone please help?
The command below works, but this way I have to pass each pattern one by one to check which one matches.
echo "ABC_20200408.dat"|grep ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat

Read csv and update csv

I have a csv file which has a list of hadoop file paths, so I have to read each hadoop path from each row and call hadoop fs -get. That part is working fine. But I would like to mark the csv's 2nd column to show which files were copied to the destination folder, something like a flag. How do I edit the second column in the while loop and save it in the same csv?
Input.csv
path,flag
file1path,
file2path,
So after copying each file I want to mark its flag as Y in the same file.
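A minimal sketch of one approach, assuming the file is Input.csv as shown and /dest stands in for the destination folder: write the updated rows to a temporary file, then move it over the original.
#!/bin/bash
tmp=$(mktemp)
echo "path,flag" > "$tmp"
tail -n +2 Input.csv | while IFS=, read -r path flag; do
  if hadoop fs -get "$path" /dest/; then
    flag=Y                          # mark the row once the copy succeeds
  fi
  echo "${path},${flag}" >> "$tmp"
done
mv "$tmp" Input.csv
Editing a file in place while a loop is still reading it is unsafe, which is why the sketch goes through a temporary file.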

Shell Script to Convert CSV to Text File

I need to create a shell script that reads a different folder based on today's date. The folder contains multiple files, among them one tab-delimited csv file with a name that is unique every day. I want to pull this csv file and resave it as a text file.
Example of file path:
data/model/output20190725 (folder contains multiple files; a new folder is created every day)
-logfile1
-logfile2
-part3983isis4838.csv (this csv file will have a new and randomly generated name every day; the csv file is also tab delimited)
I know how to go from a csv file to a text file, but I don't know how to add the logic for the folder name and the csv name changing every day.
I saw that I could possibly use grep, but I don't know how to navigate to today's date folder, pick out the csv, and pass it to the next command to make the conversion.
grep -l .csv * |
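A hedged sketch, assuming the folder suffix is the date as YYYYMMDD (as in output20190725) and that exactly one .csv exists per daily folder; the .txt output name is a placeholder:
#!/bin/bash
dir="data/model/output$(date +%Y%m%d)"   # today's folder
csv=("$dir"/*.csv)                       # glob; assumes exactly one match
cp "${csv[0]}" "${csv[0]%.csv}.txt"      # resave with a .txt extension
Since the file is already tab delimited, a copy under a new extension may be all the conversion needed; otherwise replace the cp with whatever csv-to-text step you already have.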

Shell script - replace a string in the all the files in a directory based on file name

I want to replace a constant string in multiple files based on the name of the file.
Example:
In a directory I have many files named like 'X-A01', 'X-B01', 'X-C01'.
In each file there is a string 'SS-S01'.
I want to replace string 'SS-S01' in the first file with 'X-A01', second file with 'X-B01' and third file with 'X-C01'.
Please help me with how to do this, as I have hundreds of files like this and do not want to manually edit them one by one.
Remember to back up your files(!) before running this command, since I have not actually tried it myself:
You could do something like:
for file in <DIR>/*; do sed -i "s/SS-S01/${file##*/}/" "$file"; done
This will loop over each file in <DIR> and, for each loop iteration, assign the file name to $file. For each file, sed will replace the first occurrence of SS-S01 on each line of that file (add the g flag, as in s/SS-S01/.../g, to replace every occurrence on a line).

Bash Script to read CSV file and search directory for files to copy

I'm working on creating a bash script to read a CSV file (comma delimited). The file contains parts of the names of files in another directory. I then need to take these partial names, use them to search the directory, and copy the correct files to a new folder.
I am able to read the csv file. However, the csv file only contains part of each file name, so I need to use wildcards to search the directory for the files. I have been unable to get the wildcards to work within the directory.
CSV File Format (in notepad):
12
13
14
15
Example file names in target directory:
IXI12_asfds.nii
IXI13_asdscds.nii
IXI14_aswe32fds.nii
IXI15_asf432ds.nii
The prefix to all of the files is the same: IXI. The csv file contains the unique numbers for each target file, which appear right after the prefix. The middle portion of each filename is unique to that file.
#!/bin/bash
# CSV file with comma delineated numbers.
# CSV file only contains part of the file name. Need to add IXI to the
# beginning, and search with a wildcard at the end.
input="CSV_file.csv"
while IFS=',' read -r file_name1
do
name=(IXI$file_name1)
cp $name*.nii /newfolder
done < "$input"
The error I keep getting says that no folder with the appropriate name can be identified.
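A hedged sketch of a likely fix: since the csv was written in Notepad, each line probably ends with a carriage return, so the glob becomes IXI12<CR>*.nii and matches nothing; the array-style name=( ... ) assignment is also unnecessary. Keeping the names CSV_file.csv and /newfolder from the question:
#!/bin/bash
input="CSV_file.csv"
while IFS=',' read -r file_name1; do
  file_name1=${file_name1%$'\r'}   # strip a trailing carriage return, if any
  name="IXI${file_name1}"
  cp "$name"*.nii /newfolder/      # the wildcard stays unquoted so it expands
done < "$input"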
