How to set LONGVARBINARY values in JDBC?

I read that in order to populate binary values in an INSERT query you need to create a PreparedStatement and then use the setBytes() API to set the byte array as the binary parameter.
My problem is that when I do this I get "data exception: string data, right truncation".
I read that this can happen if you insert a value larger than the declared column size, but here I am using a very small byte[] ("s".getBytes()).
I also tried setBinaryStream(), but with the same result.
I also tried setting a null value, and I still get the same error.

The length of the VARBINARY or LONGVARBINARY column must be large enough to accept the data you are inserting. Your CREATE TABLE statement can declare VARBINARY as the column type, allowing up to 16MB for each data item.
If you declare the column simply as BINARY, it is interpreted as BINARY(1), meaning only one byte is allowed.
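For illustration, here is a minimal, self-contained Java sketch of the pattern, assuming an in-memory HSQLDB database and a hypothetical table t (adjust the JDBC URL and names for your setup):

import java.sql.*;

public class VarbinaryInsert {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:test", "SA", "")) {
            try (Statement st = conn.createStatement()) {
                // Declare an explicit length; a bare BINARY would mean BINARY(1).
                st.execute("CREATE TABLE t (id INT, data VARBINARY(1000))");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO t (id, data) VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setBytes(2, "s".getBytes()); // fits easily within VARBINARY(1000)
                ps.executeUpdate();
            }
        }
    }
}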

Related

Informatica - Concatenate Max value from each column present in multiple rows for same Primary Key

I have tried the traditional approach of using an Aggregator (Group By: ID, Store Name) with MAX(each object) columns separately.
Then, in the next expression, Concat(Val1 || Val2 || Val3 || Val4).
However, I'm getting the output as '0100'.
But the REQUIRED OUTPUT is 1100.
Please let me know how this can be done in IICS.
IICS is similar to on-prem PowerCenter.
First use an Aggregator transformation:
in the Group By tab add ID and Store Name;
in the Aggregate tab add MAX(object1), MAX(object2), and so on. Please note to set the data type and length correctly.
Then use an Expression transformation:
link ID and Store Name first;
then concatenate the max_* columns using pipes -
out_max = max_col1 || max_col2 || ... Again, set the data type and length correctly.
This should generate the correct output. I think you are getting the wrong output because of the data length or data type of the object fields. Make sure you trim spaces from the object data before the Aggregator, as illustrated in the sketch below.
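To see how an untrimmed value can turn the expected 1100 into 0100, here is the equivalent logic as a plain-Java sketch (not IICS code; the rows and the stray space are hypothetical):

public class MaxConcatDemo {
    public static void main(String[] args) {
        // Two rows for the same (ID, Store Name); each object flag is "0" or "1".
        String[][] rows = {
            {" 1", "0", "0", "0"},  // note the leading space on object1
            {"0", "1", "1", "0"},
        };
        String[] max = {"", "", "", ""};
        for (String[] row : rows) {
            for (int i = 0; i < 4; i++) {
                // Without trim(), " 1" sorts before "0" and the MAX comes out as 0100.
                String v = row[i].trim();
                if (v.compareTo(max[i]) > 0) {
                    max[i] = v;
                }
            }
        }
        System.out.println(String.join("", max)); // prints 1100
    }
}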

bigtable.SampleRowKeys return a single key

I'm trying to write code that does a full table scan in Go using the bigtable.Table.SampleRowKeys RPC method. The table has around 7M rows (verified with cbt), yet the call returns a single key, whereas the documentation mentions:
// SampleRowKeys returns a sample of row keys in the table. The returned row keys will delimit contiguous sections of
// the table of approximately equal size, which can be used to break up the data for distributed tasks like MapReduce.
Am I missing something?
It turns out that the returned keys are midpoints; thus, if the call returns a single key, say [k1], the ranges are [("", k1), (k1, "")].
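Here is a minimal sketch of turning the sampled keys into scan ranges (written in Java for illustration, with plain strings standing in for row keys):

import java.util.*;

public class SampleRanges {
    // Sampled keys k1..kn delimit n+1 contiguous ranges; "" stands for the
    // open start/end of the table.
    static List<String[]> rangesFrom(List<String> sampleKeys) {
        List<String[]> ranges = new ArrayList<>();
        String start = "";
        for (String key : sampleKeys) {
            ranges.add(new String[] {start, key});
            start = key;
        }
        ranges.add(new String[] {start, ""});
        return ranges;
    }

    public static void main(String[] args) {
        // A single sampled key still yields two scannable ranges.
        for (String[] r : rangesFrom(List.of("k1"))) {
            System.out.println("[" + r[0] + ", " + r[1] + ")");
        }
        // Prints: [, k1) and [k1, )
    }
}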

How to select a substring from Oracle blob field

I need to get part of a BLOB field which has some JSON data. One part of the BLOB looks like this: CustomData:{HDFC;1;0;sent}. I need the separate values after CustomData, i.e. HDFC, 1, 0, and sent.
This is what I have tried, in two separate queries, which works.
The first gives me the index of CustomData within the payment_data BLOB field; for example, it returns 11000:
select dbms_lob.instr(payment_data, utl_raw.cast_to_raw('CustomData'))
from table_x;
In the second, I specify the third parameter (the offset) as what the first query returned plus the length of the text CustomData: in order to get {HDFC;1;0;sent}:
select UTL_RAW.CAST_TO_VARCHAR2(dbms_lob.substr(payment_data,1000,11011))
from table_x;
The problem is that I need a dynamic offset in the 2nd query rather than running the 1st query separately, and specifying a dynamic offset is not working with the dbms_lob.substr() function. Any suggestions on how I can combine these two queries into one?
Once I get {HDFC;1;0;sent}, I also need to get the delimited values separately, so combining all three steps into one would be even better if someone can help with it. I can use regexp_substr to get the delimited text once the first two are combined.
If you want to extract text data from a BLOB, you first need to convert it to a CLOB using dbms_lob.converttoclob.
If you have Oracle 12c or higher you may use the JSON SQL functions, for example JSON_TABLE.
If your Oracle version is between 10 and 11 you may use the regex functions, or instr + substr if your version is less than 10.
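For completeness, the two queries above can be combined by nesting dbms_lob.instr inside dbms_lob.substr, so the offset is computed dynamically in one statement. A minimal JDBC sketch (connection details are hypothetical):

import java.sql.*;

public class BlobSubstring {
    public static void main(String[] args) throws SQLException {
        // The offset is the position of 'CustomData:' plus its length,
        // which lands on the '{' that follows it.
        String sql =
            "select utl_raw.cast_to_varchar2(dbms_lob.substr(payment_data, 1000, " +
            "  dbms_lob.instr(payment_data, utl_raw.cast_to_raw('CustomData:')) " +
            "    + length('CustomData:'))) " +
            "from table_x";
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//host:1521/svc", "user", "pwd");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // e.g. {HDFC;1;0;sent}
            }
        }
    }
}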

Storing and retrieving value inconsistency in a table in Oracle

I am facing a weird problem. I have a table (observation_measurement) in an Oracle DB with many fields. One field is observation_name; it stores different measurements, with their values, loaded from a text file.
For example, observation_name stores four measurements a, b, c, d (the names of the measurements) and their corresponding values 1, 2, 3, 4.
Later the same text file is read again. This time the file has three measurements a, b, d (c is not there) with values 7, 8, 9, which are stored in the table. So if I ask for the latest values for all observation_names I should get a=7, b=8, c=null, d=9, but instead I get
a=7, b=8, c=3, d=9. I don't know why it is returning the old data for the c measurement.
Any ideas?
NULL has to be handled specially in Oracle, e.g. with IS NULL or IS NOT NULL.
Presumably your update logic involves some validation on the column that leaves NULL values untreated.
Since some validation fails because of NULL, the old value is retained in the table.
Can you please update your question with the query used to UPDATE the table?
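A minimal sketch of one way the old value can survive a reload (hypothetical table and column names, not the asker's actual code): if the UPDATE guards the new value with NVL, a missing measurement keeps its previous value instead of becoming NULL.

import java.sql.*;

public class StaleValueDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//host:1521/svc", "user", "pwd");
             PreparedStatement ps = conn.prepareStatement(
                 "UPDATE observation_measurement " +
                 "SET meas_value = NVL(?, meas_value) " + // NULL input keeps the old value
                 "WHERE observation_name = ?")) {
            ps.setNull(1, Types.NUMERIC); // 'c' is absent from the new file
            ps.setString(2, "c");
            ps.executeUpdate(); // c stays 3 instead of becoming NULL
        }
    }
}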

Integration Services string length needs to be truncated

I'm using Integration Services (SSIS). At the moment I'm getting the data from an Excel source, and the string Description comes with a length greater than 15 characters. The problem is that I can't find a way to truncate this data in order to save it to the database (the database column is varchar(15) and I can't change it).
I was trying to use a derived column to truncate the data, with no success.
Add a Derived Column transformation and use the SUBSTRING function to get only the first 15 characters of the string. Read about the SUBSTRING function in SSIS here: SUBSTRING SSIS Expression.
Your expression in the derived column would look something like SUBSTRING(Description, 1, 15) (note that the start index of the SSIS SUBSTRING expression is 1-based).
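The 1-based SSIS index is an easy thing to trip over coming from Java's 0-based String.substring; for comparison, the same truncation as a plain-Java sketch:

public class TruncateDemo {
    // Java's substring is 0-based; Math.min guards strings shorter than 15 chars.
    static String truncate(String description) {
        return description.substring(0, Math.min(15, description.length()));
    }

    public static void main(String[] args) {
        System.out.println(truncate("A description longer than 15")); // "A description l"
        System.out.println(truncate("short")); // unchanged
    }
}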
