SQLLDR file path argument - oracle

I have more than 30 files of data to load.
The path in those files changes on every run, so the INFILE path becomes:
INFILE "/home/dmf/Cycle7Data/ITEM_IMAGE.csv"
INFILE "/home/dmf/Cycle8Data/ITEM_IMAGE.csv"
The file names also change in every control file (e.g. SUPPLIER.csv).
Is there any way to pass the file path in a variable, or to set an environment variable, so that the control file does not have to be edited every time?

You can pass the data file name on the command line; from the documentation:
DATA specifies the name of the data file containing the data to be loaded. If you do not specify a file extension or file type, then the default is .dat.
If you specify a data file on the command line and also specify data files in the control file with INFILE, then the data specified on the command line is processed first. The first data file specified in the control file is ignored. All other data files specified in the control file are processed.
So pass the relevant file name with each invocation, e.g.
sqlldr user/passwd control=myfile.ctl data=/home/dmf/Cycle7Data/ITEM_IMAGE.csv
If you have lots of files to load from a directory, you could use a shell script that loops over the directory contents and passes each file name in turn to an SQL*Loader session, as in the sketch below.
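A minimal sketch of such a loop, assuming each cycle's CSVs live in one directory that is passed as an argument (the credentials and myfile.ctl are the placeholders from the example above):
#!/bin/bash
# Load every CSV in the given directory with the same control file.
dir="$1"
for f in "$dir"/*.csv; do
    sqlldr user/passwd control=myfile.ctl data="$f" log="${f%.csv}.log"
done
Invoked as, e.g., ./load_cycle.sh /home/dmf/Cycle7Data.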

Related

Search the File Pattern from File Name

I have a file PATTERN_FILE.txt that stores lines like the ones below:
ABC|ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat|8|,|70|NAME
ABC|ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat|9|,|70|PLACE
XYZ|XYZ_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].dat|23|,|70|SSN
XYZ|XYZ_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].dat|33|,|70|DOB
MNO|MNO_SUMMIT.dat|40|,|70|ADDRESS
MNO|MNO_SUMMIT.dat|5|,|70|COUNTRY
So PATTERN_FILE.txt stores some information about each actual file, but the file name is stored as a pattern (when the name contains a date) rather than as the literal name.
My requirement is a command to which I can pass an actual file name like "ABC_20200408.dat" and have it return all the related lines from this file. Can someone please help?
The command below works, but I have to pass each pattern one by one to check which one matches.
echo "ABC_20200408.dat"|grep ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat

Shell Script to Convert CSV to Text File

I need to create a shell script that reads a different folder based on today's date. The folder contains multiple files, including one tab-delimited CSV file that has a unique name every day. I want to pull this CSV and resave it as a text file.
Example of file path:
data/model/output20190725 (folder contains multiple files, new folder is created everyday)
-logfile1
-logfile2
-part3983isis4838.csv (this CSV file has a new, randomly generated name every day, and it is tab-delimited)
I know how to go from a CSV file to a text file, but I don't know how to add the logic for the folder name and the CSV name changing every day.
I saw that I could possibly use grep, but I don't know how to navigate to today's date folder, pull the CSV, and pass it to the next command to make the conversion:
grep -l .csv * |
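A sketch of the missing logic, assuming the folder always follows the data/model/outputYYYYMMDD convention shown above and contains exactly one CSV:
#!/bin/bash
# Build today's folder name, then resave its lone CSV with a .txt extension.
dir="data/model/output$(date +%Y%m%d)"
for csv in "$dir"/*.csv; do
    cp "$csv" "${csv%.csv}.txt"   # content is already tab-delimited text
done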

Deleting and Replacing a file in shell command

I'm trying to replace a file once the OMC is set.
This code sets my "OMC" to the specific country code.
Once this is done, I want to copy an XML file from a specific directory (just a random XML file I've edited with different parameters) and use it to replace the original file in the OMC directory once the script below has detected the OMC.
In other words, to use an example: following this code, I need to copy a file of mine, say "temp/test.xml", over "ABC/test.xml".
The "ABC/" directory will be the one named after the country code selected below.
# Record the bootloader version, then read the device's current codes
getprop ro.boot.bootloader >> /tmp/BLmodel
ACTUAL_CSC=$(cat /efs/imei/mps_code.dat)
ACTUAL_OMC=$(cat /efs/imei/omcnw_code.dat)
SALES_CODE=$(cat /system/omc/sales_code.dat)
# Strip the "CSC=" prefix so csc.prop holds just the selected country code
sed -i -- "s/CSC=//g" /tmp/aroma/csc.prop
NEW_CSC=$(cat /tmp/aroma/csc.prop)
buildprop=/system/build.prop
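A minimal sketch of the copy step, assuming the OMC directories live under /system/omc/ and are named after the code now held in NEW_CSC (the source path temp/test.xml is the example from the question):
# Replace the original XML in the detected OMC directory with the edited copy
if [ -d "/system/omc/$NEW_CSC" ]; then
    cp -f temp/test.xml "/system/omc/$NEW_CSC/test.xml"
fi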

Bash Script to read CSV file and search directory for files to copy

I'm working on a bash script that reads a comma-delimited CSV file. The file contains parts of the names of files in another directory. I need to take these name fragments, use them to search the directory, and copy the correct files to a new folder.
I am able to read the CSV file. However, since the CSV only contains part of each file name, I need to use wildcards to search the directory for the files, and I have been unable to get the wildcards to work.
CSV File Format (in notepad):
12
13
14
15
Example file names in target directory:
IXI12_asfds.nii
IXI13_asdscds.nii
IXI14_aswe32fds.nii
IXI15_asf432ds.nii
The prefix of all the files is the same: IXI. The CSV file contains the unique numbers for each target file, which appear right after the prefix. The middle portion of each filename is unique to that file.
#!/bin/bash
# CSV file with comma-delimited numbers.
# The CSV only contains part of each file name, so add the IXI prefix
# and search with a wildcard at the end.
input="CSV_file.csv"
while IFS=',' read -r file_name1
do
    file_name1=${file_name1%$'\r'}   # drop any Windows carriage return
    name="IXI$file_name1"            # plain string, not an array
    cp "$name"*.nii /newfolder
done < "$input"
The error I keep getting says that no file with the appropriate name can be found.

Informatica Post command task

I am working with multiple source files through a single source instance. I created three flat files and one destination table to experiment with multiple sources. I am using the 'file list' concept; for that, I created a text file that contains all the flat file names.
Example:
Filename: File_list.txt
File content:
Price1.txt
Price2.txt
Price3.txt
In the above example, Price1.txt, Price2.txt and Price3.txt are flat file names. I specified File_list.txt as the source file while running the workflow in Informatica, so it iterates through all the flat files listed in File_list.txt and inserts all their values into the destination table.
Now, once the data is inserted into the destination, I need to delete those source files from the directory.
How can I achieve this?
You'll need to write a custom script that uses File_list.txt as input and performs the delete operations. You can then call it using the Post-Session Success Command session component, or as a separate Command Task in the workflow, linked using a $YourSessionName.Status = SUCCEEDED condition.
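A sketch of such a script, assuming File_list.txt and the flat files sit together in the session's source-file directory (the path below is a placeholder):
#!/bin/bash
# Delete each flat file named in File_list.txt after a successful load.
cd /path/to/SrcFiles || exit 1
while IFS= read -r f; do
    [ -n "$f" ] && rm -f -- "$f"
done < File_list.txt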
