I'm trying to replace a file once the OMC is set.
This code sets my "OMC" to a specific country code.
Once this is done, I want to copy an XML file from a specific directory (it is just a random XML file I've edited with different parameters) and use it to replace the original file in the OMC directory once the following script has detected the OMC.
In other words, as an example, after this code runs I need to copy a file of mine, say "temp/test.xml", to the directory "ABC/test.xml".
The "ABC/" directory will be the one named after the country code selected below.
getprop ro.boot.bootloader >> /tmp/BLmodel          # record the bootloader/build model
ACTUAL_CSC=`cat /efs/imei/mps_code.dat`             # currently active CSC
ACTUAL_OMC=`cat /efs/imei/omcnw_code.dat`           # currently active OMC network code
SALES_CODE=`cat /system/omc/sales_code.dat`         # sales code shipped in /system/omc
sed -i -- "s/CSC=//g" /tmp/aroma/csc.prop           # strip the "CSC=" prefix from the AROMA selection
NEW_CSC=`cat /tmp/aroma/csc.prop`                   # country code chosen in the installer
buildprop=/system/build.prop
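A minimal sketch of the copy step could look like the following, assuming the OMC directories live under /system/omc/ and the edited file sits at /tmp/test.xml (both paths are assumptions, not confirmed above):
# Sketch only (assumed paths): copy the edited XML over the original
# once the OMC / country code has been read.
ACTUAL_OMC=`cat /efs/imei/omcnw_code.dat`
cp /tmp/test.xml "/system/omc/$ACTUAL_OMC/test.xml"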
I want to replace a constant string in multiple files based on the name of the file.
Example:
In a directory I have many files named like 'X-A01', 'X-B01', 'X-C01'.
In each file there is a string 'SS-S01'.
I want to replace string 'SS-S01' in the first file with 'X-A01', second file with 'X-B01' and third file with 'X-C01'.
Please help me figure out how to do this, as I have hundreds of files like this and do not want to edit them all manually, one by one.
Remember to back up your files(!) before running this command, since I have not actually tried it myself:
You could do something like:
for file in <DIR>/*; do sed -i "s/SS-S01/${file##*/}/" "$file"; done
This will loop over each file in <DIR> and, on each iteration, assign the file name to $file. For each file, sed will replace the first occurrence of SS-S01 on each line of that file with the file name (${file##*/} strips the directory part, leaving only the bare name).
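A slightly expanded variant of the same idea, in case you want sed to keep a backup of each original (the directory name ./files is just a placeholder, and this assumes GNU sed as in the command above):
# Keep a .bak copy of each file so the originals survive a bad substitution.
for file in ./files/*; do
    sed -i.bak "s/SS-S01/${file##*/}/" "$file"
done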
I'm working on creating a bash script to read a CSV file (comma-delimited). The file contains parts of names for files in another directory. I then need to take these names, use them to search the directory, and copy the correct files to a new folder.
I am able to read the CSV file. However, the CSV file only contains part of each file name, so I need to use wildcards to search the directory for the files. I have been unable to get the wildcards to work within the directory.
CSV File Format (in notepad):
12
13
14
15
Example file names in target directory:
IXI12_asfds.nii
IXI13_asdscds.nii
IXI14_aswe32fds.nii
IXI15_asf432ds.nii
The prefix to all of the files is the same: IXI. The CSV file contains the unique numbers for each target file, which appear right after the prefix. The middle portion of each filename is unique to that file.
#!/bin/bash
# CSV file with comma-delimited numbers.
# The CSV file only contains part of the file name. Need to add IXI to the
# beginning, and search with a wildcard at the end.
input="CSV_file.csv"
while IFS=',' read -r file_name1
do
name=(IXI$file_name1)
cp $name*.nii /newfolder
done < "$input"
The error I keep getting says that no file or folder with the appropriate name can be found.
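One possible fix, sketched under the assumption that the CSV has one number per field and may carry Windows line endings (it was edited in Notepad): strip any carriage return from the field and leave the wildcard outside the quotes so the shell can expand it.
#!/bin/bash
# Sketch of a possible fix (assumptions noted above; /newfolder is the path from the question).
input="CSV_file.csv"
while IFS=',' read -r file_name1
do
    file_name1=${file_name1//$'\r'/}       # drop a trailing CR from Notepad-edited CSVs
    cp IXI"$file_name1"*.nii /newfolder/   # quote the variable, leave the * unquoted so it globs
done < "$input"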
I want to validate my XMLs for well-formedness, but some of my files do not have a single root, which is fine per my business requirements (e.g. <ri>...</ri><ri>..</ri> is valid XML in my context). xmlwf can do the validation, but it flags a file if it does not have a single root. So I want to build a custom script which internally uses xmlwf. My custom script should do the following:
iterate through the list of files passed as input (e.g. sample.xml or s*.xml or *.xml)
for each file, prepare a temporary file as <A> + contents of the file + </A>
and call xmlwf on that temp file.
Can someone help with this?
You could add text to the beginning and end of the file using cat and bash, so that your file has a root added to it for validation purposes.
cat <(echo '<root>') sample.xml <(echo '</root>') | xmlwf
This way you don't need to write temporary files out.
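If you want to wrap that in the iterating script described in the question, a sketch could look like the one below. It relies on xmlwf printing nothing for a well-formed document, so empty output is treated as success; the report format is just an example.
#!/bin/bash
# Check every file given on the command line (sample.xml, s*.xml, *.xml, ...).
for f in "$@"
do
    # Wrap the file in a dummy root, as in the one-liner above, and capture the report.
    out=$(cat <(echo '<root>') "$f" <(echo '</root>') | xmlwf)
    if [ -z "$out" ]
    then
        echo "$f: well-formed"
    else
        echo "$f: $out"
    fi
done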
I have more than 30 files used to load the data.
The path in those files changes at every run, so the path becomes
INFILE "/home/dmf/Cycle7Data/ITEM_IMAGE.csv"
INFILE "/home/dmf/Cycle8Data/ITEM_IMAGE.csv"
The file names also differ from one control file to the next (e.g. SUPPLIER.csv).
Is there any way to pass the file path in a variable, or set an environment variable, so that the control file does not have to be edited every time?
You can pass the data file name on the command line; from the documentation:
DATA specifies the name of the data file containing the data to be loaded. If you do not specify a file extension or file type, then the default is .dat.
If you specify a data file on the command line and also specify data files in the control file with INFILE, then the data specified on the command line is processed first. The first data file specified in the control file is ignored. All other data files specified in the control file are processed.
So pass the relevant file name with each invocation, e.g.
sqlldr user/passwd control=myfile.ctl data=/home/dmf/Cycle7Data/ITEM_IMAGE.csv
If you have lots of files to load from a directory you could have a shell script that loops over the directory contents and passes each file name in turn to an SQL*Loader session.
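For example, a wrapper along these lines (user/passwd, the control file name and the directory are placeholders taken from the earlier example, not a tested setup):
#!/bin/bash
# Run one SQL*Loader session per data file in the cycle directory.
for datafile in /home/dmf/Cycle7Data/*.csv
do
    sqlldr user/passwd control=myfile.ctl data="$datafile"
done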
I want to compress the contents of a folder. The catch is that I need to modify the content of a file before compressing it. The modification should not alter the contents of the original folder, but it should be there in the compressed file.
So far I have been able to figure out how to alter the file contents using the sed command:
sed 's:/site_media/folder1/::g' index.html >index.html1
where /site_media/folder1/ is the string I want to replace with an empty string. Currently this command creates another file named index.html1, since I don't want to make the changes in place in the file index.html.
I tried piping this command into the zip command as follows:
sed 's:/site_media/folder1/::g' folder1/index.html > index1.html |zip zips/folder1.zip folder1/
but I am not getting any contents when I unzip the file folder1.zip. Also, the modified file in the compressed folder should be named index.html (and not index.html1).
You want to do two things in sequence; in that case, write command1 && command2. In your case:
sed '...' folder1/index.html > index1.html && zip zips/folder1.zip folder1/
If you pipe commands, you use the output of the first to feed the second, which is something you don't want in this case.
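If you also need the file inside the archive to be called index.html rather than index.html1, one approach (not part of the answer above; the staging directory is just an illustration, and it assumes GNU sed for -i) is to apply the sed edit to a staged copy of the folder and zip that copy:
# Work on a throwaway copy so folder1/index.html itself is never modified.
tmpdir=$(mktemp -d)
cp -r folder1 "$tmpdir/folder1"
sed -i 's:/site_media/folder1/::g' "$tmpdir/folder1/index.html"
# Zip from inside the staging directory so the archive still contains "folder1/".
(cd "$tmpdir" && zip -r "$OLDPWD/zips/folder1.zip" folder1)
rm -rf "$tmpdir"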