Our problem concerns values already stored in one of the tables of our Oracle database, version 12.1.0.
In the CMS we see the values without empty rows:
But when we copy this text and paste it into Notepad, for example, we get an empty row (a line break, a blank row) after every line:
So at this point it looks like the problem is with the data imported into the database. The data type of that field is VARCHAR2(2000), we already have around 10,000 records in that table, and half of them show these empty rows after pasting. Is there any chance we can remove these empty lines in that column?
You can see in your dump that there is the sequence 13,13,10, which is Carriage Return, then Carriage Return + Line Feed.
https://www.petefreitag.com/item/863.cfm
If you replace 13,13,10 with 13,10 you should get the desired result:
replace(column_name, chr(13)||chr(13)||chr(10), chr(13)||chr(10))
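Applied as a one-off cleanup of the existing rows, that could look like this (a sketch; cms_content and body are placeholders for your actual table and column names):
UPDATE cms_content
   SET body = REPLACE(body, CHR(13)||CHR(13)||CHR(10), CHR(13)||CHR(10))
 WHERE INSTR(body, CHR(13)||CHR(13)||CHR(10)) > 0;  -- touch only the affected rows
COMMIT;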
I am working with data imported from a PDF file. There is an extra column in the Power Query import (Data.Column7) containing data that belongs in the adjacent columns on either side (Data.Column6 and Data.Column8). Columns 6 and 8 have null values in the cells where the data was pushed into Column 7. I would like to replace the null values in Columns 6 and 8 with the correct data from Column 7, leaving all other values in Columns 6 and 8 as they are.
After looking at the post here:
Power Query / Power BI - replacing null values with value from another column
and watching this video:
https://www.youtube.com/watch?v=ikzeQgdKA0Q
I tried the following formula:
= Table.ReplaceValue(#"Expanded Data",null, each _[Data.Column7] ,Replacer.ReplaceText,{"Data.Column6","Data.Column8"})
(Note, "Expanded Data" is the last step before this Replace Value step.)
I am not getting any kind of syntax error, but the Replace Value step isn't doing anything at all. My null values in Columns 6 and 8 have not been replaced with the correct data from Column 7.
Any insight into how to achieve replacement would be greatly appreciated. Thank you.
(I should mention, I am a new Power Query user, so please be detailed and assume I know nothing!)
I'm sure there must be some way to do this with the ReplaceValue function, but I think it might be easier to do the following:
1: Create a new column with definition NewData6 = if [Data.Column6] = null then [Data.Column7] else [Data.Column6]
2: Do the same thing for 8: NewData8 = if [Data.Column8] = null then [Data.Column7] else [Data.Column8]
3: Delete Data.Column6/7/8.
4: Rename the newly made columns if necessary.
You can do these steps either in the advanced editor, or just use the create custom column button in the add column tab.
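In M, step 1 might look like this (a sketch; #"Expanded Data" stands for whatever your previous step is called):
= Table.AddColumn(#"Expanded Data", "NewData6",
    each if [Data.Column6] = null then [Data.Column7] else [Data.Column6])
Step 2 is the same call applied to the result of step 1, with NewData8 and Data.Column8.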
If the columns are of the text data type, then they might contain empty strings instead of actual nulls.
Try replacing the null in your formula with an empty string ("").
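If that is the case, a variant like this might work (a sketch; note it uses Replacer.ReplaceValue, which matches the whole cell value, instead of the substring-based Replacer.ReplaceText):
= Table.ReplaceValue(#"Expanded Data", "", each _[Data.Column7], Replacer.ReplaceValue, {"Data.Column6","Data.Column8"})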
Below are my table structure, .CTL file, and .CSV file. While loading data I always get an error on the first row; the rest of the data gets loaded. If I leave a completely blank line before the first record, all the data gets inserted.
Can you please help me understand why I am getting an error on the first record?
TABLE_STRUCTURE
ING_DATA
(
ING_COMPONENT_ID NUMBER NOT NULL,
PARENT_ING_ID NUMBER NOT NULL,
CHILD_ING_ID NUMBER NOT NULL,
PERCENTAGE NUMBER(7,4) NOT NULL
);
CTL FILE
LOAD DATA
INFILE 'C:\Users\pramod.uthkam\Desktop\Apex\Database\SQL LOADER-PROD\ING_COMPONENT\ingc.csv'
BADFILE 'D:\SQl Loader\bad_orders.txt'
INTO TABLE ING_data
FIELDS
TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
ING_Component_ID ,
Parent_ING_ID ,
Child_ING_ID ,
Percentage
)
CSV FILE
1,3,4,95.0000
2,3,5,5.0000
3,6,7,5.0000
4,6,4,95.0000
5,18,19,19.0000
6,18,20,80.0000
7,18,21,1.0000
8,34,35,85.0000
LOG FILE
Record 1: Rejected - Error on table ING_COMPONENT, column ING_COMPONENT_ID.
ORA-01722: invalid number
Table ING_COMPONENT:
7 Rows successfully loaded.
1 Row not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 66048 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 7
Total logical records rejected: 1
Total logical records discarded: 0
BAD FILE
1,3,4,95.0000
I tried loading your file, creating it exactly as posted. For me it runs fine: all 8 rows got loaded, no issues. I was testing on Red Hat Linux.
Then I tried two things:
dos2unix myctl.ctl
Ran SQLLDR. All rows got inserted.
Then I tried:
unix2dos myctl.ctl
Ran SQLLDR again: all 8 records were rejected. So what I believe is that your first record's line ending is not in a format SQL*Loader can read. When you enter a blank line manually, your default environment (UNIX, in my case) creates the correct line ending, and the records then get read. I'm not sure, but I assume this based on my own tests above.
So let's say you are loading this file on Windows (I think this because of how your path looks). In your CSV file, add a blank line at the beginning, then remove it, and do the same thing after the first record (add a blank line after the first record, then remove it). Then try again. Maybe it works, if the issue is what I suspect.
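If you want to check this theory before editing, dumping the raw bytes of the first lines will show the line endings directly (a sketch, assuming a Unix-like shell with xxd available; the file names are from your control file):
head -2 ingc.csv | xxd      # a line ending of 0d 0a is CR/LF (Windows), 0a alone is LF (Unix)
dos2unix ingc.csv           # normalizes the data file, as I did with the control file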
I have a flat file with about half a million records in this format:
last_login=2014022
BPN=1234567890
first_last_names=portal admin
username=portal_admin
email=portal_admin#gmail.com

last_login=2010092
username=UCES1005
BPN=1001117643
email=deepak.prakash#pse
first_last_names=1026 BROAD ASSOCIATES

last_login=2014040
email=rgomes1#optonline.net
username=rgomes1
first_last_names=Robert Gomes
BPN=1001928140
I need to populate a table with these records. On each line, the word before the equals sign is the column name and the text after it is the value. Each record is separated by a blank line.
What is the best way to import this data into a database (Oracle or Access)?
I would use PL/SQL and read the file line by line.
Use:
UTL_FILE.FOPEN()
UTL_FILE.GET_LINE()
etc...
Process each line as needed.
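A minimal sketch of that approach (assuming an Oracle directory object named DATA_DIR pointing at the file's folder, a target table named portal_users, and a blank line between records; all of those names are placeholders):
DECLARE
  f        UTL_FILE.FILE_TYPE;
  line     VARCHAR2(4000);
  k        VARCHAR2(100);
  v        VARCHAR2(4000);
  v_login  VARCHAR2(100);
  v_bpn    VARCHAR2(100);
  v_names  VARCHAR2(4000);
  v_user   VARCHAR2(100);
  v_email  VARCHAR2(320);

  PROCEDURE flush_record IS
  BEGIN
    IF v_bpn IS NOT NULL OR v_user IS NOT NULL THEN
      INSERT INTO portal_users (last_login, bpn, first_last_names, username, email)
      VALUES (v_login, v_bpn, v_names, v_user, v_email);
    END IF;
    v_login := NULL; v_bpn := NULL; v_names := NULL; v_user := NULL; v_email := NULL;
  END;
BEGIN
  f := UTL_FILE.FOPEN('DATA_DIR', 'users.txt', 'R');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(f, line);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;   -- end of file
    END;
    IF TRIM(line) IS NULL THEN
      flush_record;                   -- blank line = end of one record
    ELSE
      k := SUBSTR(line, 1, INSTR(line, '=') - 1);
      v := SUBSTR(line, INSTR(line, '=') + 1);
      CASE k
        WHEN 'last_login'       THEN v_login := v;
        WHEN 'BPN'              THEN v_bpn   := v;
        WHEN 'first_last_names' THEN v_names := v;
        WHEN 'username'         THEN v_user  := v;
        WHEN 'email'            THEN v_email := v;
        ELSE NULL;                    -- ignore unknown keys
      END CASE;
    END IF;
  END LOOP;
  flush_record;                       -- don't lose the last record
  UTL_FILE.FCLOSE(f);
  COMMIT;
END;
/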
I am currently using DB2 and do not know much about the LOAD statement. I am using this query to load data:
LOAD FROM "IXAC.CSV" OF DEL METHOD P ('IX',1,2,3,4,) MESSAGES
"SYAC.MSG" INSERT INTO SYNC.AC_COUNT ( "TYPE", AC1, AC2, AC3,
AC4 ) ; COMMIT;
In "IXAC.CSV" there are 4 int values separated with comma. My problem is that how can i insert 'IX' with load statement as a constant with each row insert.
I tried this but not found any success. I am newer in database.
Help me ..
Thanks in advance ...
Change your table definition in the database so that the column gets a default value of 'IX' (it looks like you want "TYPE"?).
Then do the load as normal, leaving out the IX column.
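A sketch of that approach, assuming the constant belongs in the TYPE column (and, if I remember the LOAD behaviour correctly, the usedefaults file type modifier is needed so that the omitted column gets its default rather than NULL):
ALTER TABLE SYNC.AC_COUNT ALTER COLUMN "TYPE" SET DEFAULT 'IX';

LOAD FROM "IXAC.CSV" OF DEL
  MODIFIED BY usedefaults
  METHOD P (1,2,3,4)
  MESSAGES "SYAC.MSG"
  INSERT INTO SYNC.AC_COUNT (AC1, AC2, AC3, AC4);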
If you are able to edit the .csv file, a workaround is to use a text editor (such as UltraEdit) that supports wildcards or regular expressions in its find/replace feature, and replace each carriage return/line feed with a CR/LF followed by "IX," (quotes optional, depending on whether you want to specify a text delimiter on insert). Then your .csv file will have all your data.
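If no such editor is at hand, the same edit is a one-liner in sed (a sketch; this prepends the constant to every line, including the first, so the LOAD would then use METHOD P (1,2,3,4,5) with "TYPE" first in the column list):
sed -i 's/^/IX,/' IXAC.CSV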
I have a source file I want to load through SQL*Loader into my Oracle 10g database.
The problem is that one of the source fields can be larger than 4000 characters. Is it possible to tell Oracle to split a source field across several columns?
Let's say one column would take the first 4000 characters and the second the next 4000.
Thanks
I'd load it into a CLOB and then do the splitting (if necessary) using DBMS_LOB.SUBSTR on the database side. But is there a critical business reason to get it into multiple VARCHAR2 columns, or could it just stay in the CLOB?
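Once it is in a CLOB, the split is a pair of calls (a sketch; staging_table and big_text are placeholder names):
SELECT DBMS_LOB.SUBSTR(big_text, 4000, 1)    AS part1,  -- characters 1-4000
       DBMS_LOB.SUBSTR(big_text, 4000, 4001) AS part2   -- characters 4001-8000
FROM   staging_table;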