MySQL column encryption with Hibernate in Spring MVC

I need to encrypt data saved to my DB. I am currently using Spring and Hibernate to save data.
I have looked at some materials and tried to implement the code, but it has resulted in various generic errors, and some of the material was not targeted at MySQL.
Here's the code that has got me furthest:
@Column(name = "disability_description")
@Length(max = 500)
@ColumnTransformer(
    read = "AES_DECRYPT(disability_description, 'mykey')",
    write = "AES_ENCRYPT(?, 'mykey')"
)
private String disabilityDescription;
This, however, doesn't work, as I get the following errors:
org.hibernate.exception.GenericJDBCException: could not execute statement
java.sql.SQLException: Incorrect string value: '\xF9\x82u\x01\x99\x1A...' for column 'disability_description' at row 1
Please point me in the right direction; I am lost. Also, mykey doesn't point to anything, I just entered a random word.

I suspect that your column is not of a binary type (e.g. VARBINARY or BLOB). From the MySQL docs:
AES_ENCRYPT() encrypts the string str using the key string key_str and
returns a binary string containing the encrypted output.
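A minimal fix is to make the column binary so the ciphertext can be stored. A sketch, using the column name from the question (the table name and the 1024-byte size are assumptions; size it comfortably above your 500-character plaintext maximum, since AES output is padded to a multiple of 16 bytes):

```sql
-- AES_ENCRYPT returns a binary string, so the column must be a binary type.
-- Table name 'student' and size 1024 are placeholders; adjust to your schema.
ALTER TABLE student
  MODIFY disability_description VARBINARY(1024);
```

With the column changed, the ColumnTransformer read/write expressions from the question should work unchanged.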

Related

Why does Kafka Connect treat timestamp columns differently?

I have a Kafka Connect configuration set up to pull data from DB2. I'm not using Avro, just the out-of-the-box JSON. Among the columns in the DB are several timestamp columns, and when they are streamed, they come out like this:
"Process_start_ts": 1578600031762,
"Process_end_ts": 1579268248183,
"created_ts": 1579268247984,
"updated_ts": {
"long": 1579268248182
}
}
The last column is rendered with this sub-element, though the other 3 are not. (This will present problems for the consumer.)
The only thing I can see is that in the DB, that column alone has a default value of null.
Is there some way I can force this column to render in the message as the prior 3?
Try to flatten your message using Kafka Connect Transformations.
The configuration snippet below shows how to use Flatten to concatenate field names with the period . delimiter character (you have to add these lines to the connector config):
"transforms": "flatten",
"transforms.flatten.type": "org.apache.kafka.connect.transforms.Flatten$Value",
"transforms.flatten.delimiter": "."
As a result, your JSON message should look like this:
{
"Process_start_ts": 1578600031762,
"Process_end_ts": 1579268248183,
"created_ts": 1579268247984,
"updated_ts.long": 1579268248182
}
See example Flatten SMT for JSON.
I'm not sure created is any different from the first two. The value is still a long; only the key is all lower case. It's not clear how it would know what the default should be. Are you sure you're not using AvroConverter? If not, it's not clear what fields would have defaults.
The updated time is nested like that based on the Avro/structured-JSON Kafka Connect specifications, which say the type name is included as part of the record to explicitly denote the type of a nullable field.

Querying in Hbase can't find key because its got a hexadecimal in it

Not much of an HBase guy, so bear with me. Just a data analyst trying to do his job.
Let's say, for the sake of simplicity, there's an HBase table called Student with the following info:
Key - Student ID
Value - SSN
So I'm trying to run the following command:
get 'Student','88812'
I'm trying to produce the following:
COLUMN CELL
H:00_ETAG timestamp=1525760141144, value=1234567891
However, nothing comes back. After scanning the table, I've discovered that the key has some sort of hexadecimal value in front of it. So the key is actually something like:
\x80\x00\x02F188812
I understand that in order to execute the get command I'd just need to use double quotes like this
get 'Student',"\x80\x00\x02F188812"
Now, where the real issue arises for me is that I have NO clue what the hexadecimal prefix for each of these keys should be. It seems the table I'm working with has a different hexadecimal prefix for each key. Is there a way I can execute the get command without the hexadecimal, or at least find out what the hexadecimal should be? How about doing a reverse search, where instead I try to find the key by searching by value?
And no, I can't scan the entire table, since there are millions of records.
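One option, as a sketch (hbase shell syntax; note that both of these are server-side filtered scans, so they still touch every row and are only practical as a one-off lookup, not a routine query):

```
# Reverse search: match on the value instead of the key
scan 'Student', {FILTER => "ValueFilter(=, 'binary:1234567891')"}

# Or match the readable part of the key without knowing the binary prefix
scan 'Student', {FILTER => "RowFilter(=, 'substring:88812')"}
```

If these prefixes turn out to follow a fixed encoding (e.g. the table was created by another tool that serializes part of the key as binary), finding that encoding and reconstructing the full key is the only way to get true point `get` performance.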

Oracle OCCI not able to retrieve the values after decimal point properly

I am retrieving data from an Oracle DB using OCCI in my application.
While retrieving it, I found that the digits after the decimal point were not properly retrieved.
For example, in the DB the original value was 12345.12, but when retrieving it from the result set, the value I got was 12345.1.
I need to retrieve the whole value (a double would help me a lot for my application's mapping purposes). Any suggestions will help me a lot.
The column in the Oracle DB is of the NUMBER(11,2) datatype.
I tried to retrieve it from the result set in the following ways, but still got the same truncated value:
resultSet->getDouble(1);
Number nr = resultSet->getNumber(1);
double d = nr.operator double();
I have tried resultSet->getString(1) and was able to get the whole value. Yes, I need to cast it to a double, but getting the data is more important than the casting, so I will go with that. If anyone has a better solution, post it and I am ready to take it.

VBA ACE issue with Oracle Decimal

We use VBA to retrieve data from an Oracle database using the Microsoft.ACE.OLEDB.12.0 provider.
We have used this method without issue for a long time, but we have encountered a problem with a specific query of data from a specific table.
When running it under VBA, we get "Run-Time error '1004': Application-defined or object-defined error." However, investigating further, we find the following:
The queries we run are dynamically generated, and we handle them by reading the results into a variant array, then outputting that array into Excel. When we step through our particular query, we find that one specific database field is "blank": the Locals window shows the value to be completely blank. It is not an empty string, it is not a Null, it is not zero. VarType() shows it to be a Decimal data type, and yet it is empty.
I can't seem to prevent this error from locking out:
On Error Resume Next
...still breaks.
If IsEmpty(theValue) Then
...doesn't catch it, because it is not empty.
If theValue Is Nothing Then
...doesn't catch it, because it is not an object.
etc.
We used the SQL in a .NET application and got a more informative error:
Decimal's scale value must be between 0 and 28, inclusive. Parameter name: Scale
So I see this as two issues:
1.) In VBA, how do I trap the variant value that is not empty or null, and;
2.) How do I resolve the Oracle Decimal problem?
For #2, I see that the Oracle decimal data type can support a precision of up to 31 places, and I assume the provider I am using can only support 28. I guess I need to CAST it to a smaller type in the query.
Any other thoughts?
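For #2, the CAST idea can be sketched directly in the generated SQL (the table and column names here are placeholders, and the target precision/scale is an assumption; pick one that keeps the scale at or below the 28 the provider appears to support):

```sql
-- Hypothetical table/column; CAST caps the scale so the OLE DB provider
-- can map the value into a Decimal without the scale-out-of-range error.
SELECT CAST(amount AS NUMBER(28, 8)) AS amount
FROM my_table;
```

If exact digits beyond that scale matter, TO_CHAR on the column and converting on the VBA side is an alternative to rounding in the cast.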

Insert statement failing for InfluxDB

I am using the line protocol to write simple data, as shown below, into InfluxDB:
interface1,KEY=bytes_allocated,fieldname=KV datavalue=761
The above statement is working fine. Now, if the data value contains any letters, it gives an error:
interface1,KEY=bytes_allocated,fieldname=KV datavalue=761A
The error I am getting is:
Failed to write: {"error":"unable to parse
'interface1,KEY=bytes_allocated,fieldname=KV datavalue=761A': invalid number"}
How can I write "761A" into the DB, or force InfluxDB to consider 761A a string value instead of a number?
If you want a field value to be treated as a string, it must be wrapped in double quotes (").
For example
interface1,KEY=bytes_allocated,fieldname=KV datavalue="761A"
