My SAS session details are:
ENCODING=LATIN1
LOCALE=EN_US
I am reading encrypted customer names and decrypting them with Java code that is called from the SAS code through javaobj.
data outlib.X1 ( ENCODING = 'asciiany' );
set inlib.INP1 (ENCODING = 'asciiany' ) end=eof;
length decryptedvalue $100;
declare javaobj jObj("<jarname>");
jObj.callstringmethod("<method>", FIRST_NAME , decryptedvalue);
run;
The jObj.callstringmethod call returns the decrypted first_name value in the variable decryptedvalue. I do a PROC EXPORT at the end of my SAS code and store all the decrypted names as CSV files.
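For reference, the Java method invoked by callstringmethod has roughly this shape (the class and method names below are placeholders, not the real ones):

// Hypothetical shape of the class loaded by the javaobj above; the real jar,
// class and method names are not shown here.
public class Decryptor {
    // callstringmethod("<method>", FIRST_NAME, decryptedvalue) passes the SAS
    // variable FIRST_NAME as the String argument and copies this method's
    // String return value into the SAS variable decryptedvalue.
    public String decrypt(String encryptedName) {
        // actual decryption logic goes here
        return encryptedName;
    }
}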
In the last run some of the names contain special characters, e.g. RÉAL.
This is causing the SAS execution to fail with the following error:
ERROR: Transcoding failure at line 30 column 2.
ERROR: DATA STEP Component Object failure. Aborted during the EXECUTION phase.
Is there some way to make the SAS session (LATIN1) accept these Unicode characters? Can I set the encoding of the decryptedvalue variable? I don't want to run my entire SAS session in Unicode using sas_u8; I only want to accept these characters.
Even reading these problematic values as blank would be OK.
I have already tried the following:
Setting inencoding=utf8 on the LIBNAME statement.
Setting ENCODING='asciiany' on the input dataset.
Any inputs would be useful.
Thanks in advance!
SAS Tech Support suggested this:
Adding export EASYSOFT_UNICODE=YES to the sasenv_local file.
We are now able to run the entire SAS code (including SQL queries) in the sas_u8 session.
Thanks to all for your support.
I have a table with a field
VALCONTENT BLOB SUB_TYPE TEXT SEGMENT SIZE 80
When I browse the table, right-click an entry, and select "Edit blob", the content is shown.
If I enter "normal" text ("Hello world"), I can click "Save" and things work.
If I use umlauts ("Hällö Wörld"), I get an error message:
IBPP::SQLException, Context: Statement: Execute (Update MyTable set
foo= ? where ..." Message isc_dsql_execute2 failed, -303, incompatible
column, malformed string
Am I doing something wrong or is FlameRobin not able to handle UTF8?
I am using Firebird 4.0 64bit, FlameRobin 0.9.3 Unicode x64 (all just downloaded).
Extracting the DDL with "iSQL -o" shows in the first line
/* CREATE DATABASE 'E:\foo.fdb' PAGE_SIZE 16384 DEFAULT CHARACTER SET
UTF8; */
I can reproduce the issue (with blob character set UTF8 and connection character set UTF8), which suggests this is a bug in FlameRobin. I recommend reporting it on https://github.com/mariuz/flamerobin/issues. I'm not sure what the problem is. Updating does seem to work fine when using connection character set WIN1252.
Consider using a different tool, such as DBeaver or IBExpert.
I have a requirement to insert Greek characters, such as 'ϕ', into Oracle. My existing DB structure didn't support it. On investigating I found various solutions and adopted the one of using NCLOB instead of CLOB. It works perfectly fine when I use the Unicode code point 03A6 for 'ϕ' and use the UNISTR function in a SQL editor to insert, like the statement below.
UPDATE config set CLOB = UNISTR('\03A6')
However, it fails when I try to insert the character through my application using Hibernate. On debugging, I find that the string before inserting is '\u03A6'. After the insert, I see it as ¿.
Can someone please help me resolve this? How do I make use of UNISTR?
Note: I don't use any native SQL or HQL; I use the entity object of the table.
Edit:
The Hibernate version used is 3.5.6. I cannot change the version as there are so many other plugins dependent on it. Thus, I cannot use @Nationalized or @Type(type="org.hibernate.type.NClobType") on my field in the Hibernate entity.
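For reference, on a newer Hibernate (4.3+) the mapping I cannot use would look roughly like this (the entity and field names below are just placeholders):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;
import org.hibernate.annotations.Nationalized;

@Entity
public class Config {
    @Id
    private Long id;

    @Lob
    @Nationalized              // maps the String to an NCLOB column
    private String valContent; // placeholder for my real field
}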
After racking my brain on different articles and trying so many options, I finally decided to tweak my code a bit to handle this in Java rather than through Hibernate or the Oracle DB.
Before inserting into the DB, I identify the non-ASCII characters in the string and encode each one as a numeric character reference, prefixing &#x and appending ;. This encoded string is then displayed with its actual Unicode character in the UI (JSP, HTML), which is UTF-8 compliant. Below is the code.
// Assumes Apache Commons Lang on the classpath for CharUtils.
import java.util.Formatter;
import org.apache.commons.lang.CharUtils;

// "text" holds the string that is about to be persisted.
Formatter formatter = new Formatter();
for (char c : text.toCharArray()) {
    if (CharUtils.isAscii(c)) {
        formatter.format("%c", c);              // ASCII characters are copied unchanged
    } else {
        formatter.format("&#x%04x;", (int) c);  // everything else becomes &#xNNNN;
    }
}
String result = formatter.toString();
Example:
String test = "ABC \uf06c DEF \uf06cGHI";
will be formatted to
result = "ABC &#xf06c; DEF &#xf06c;GHI";
When this string is rendered in the UI (and even in a Word doc) it displays the original characters:
ABC  DEF GHI
I tried it with various unicode characters and it works fine.
I have a SAS macro which, if certain conditions are met, creates a user-defined format that is used later in the macro. However, this user format is not always created, so when SAS validates the syntax as the macro runs, it errors because the user-defined format is not known when the condition is not met. The statement that uses the user-defined format is wrapped in an IF condition which has not been met, but the macro still errors.
Any advice on overcoming this problem would be greatly appreciated.
One good way to deal with this is to create a dummy format that doesn't actually do anything, before the conditional creation. That way, you have something to prevent the error.
%macro fizz_buzz(format=0);
  * Format that does nothing;
  proc format;
    value FIZZBUZZF
      other=[best.]
    ;
  quit;
  * Conditionally created same format;
  %if &format=1 %then %do;
    proc format;
      value FIZZBUZZF
        3,6,9,12='FIZZ'
        5,10='BUZZ'
        15='FIZZBUZZ'
        other=[2.]
      ;
    quit;
  %end;
  data _null_;
    do _i = 1 to 15;
      put _i fizzbuzzf.;
    end;
  run;
%mend fizz_buzz;
%fizz_buzz(format=0);
What do you mean by an IF condition? SAS will check the syntax of a DATA step before it starts executing, so you cannot prevent the reference to the format in the data step by using an IF statement or similar execution-time code. So this code will generate an error even though the condition in the IF statement can never be true.
data bad;
if 0=1 then format x $1XYZ.;
run;
If you use a macro %IF statement so that the reference to the format is never generated in the SAS code that the macro creates, then you should not have any error. So if you had a similar data step being generated by a macro, and used a %IF to prevent the macro from generating the invalid format name, then the code would run without error.
data good;
%if (0=1) %then %do;
format x $1XYZ.;
%end;
run;
Most likely you just want to use a macro variable to hold the format name and set it empty when the format is not created.
/* earlier in the macro: %let format_name=fizzbuzzf.; when the format is created, */
/* or %let format_name=;  when it is not, so the FORMAT statement stays harmless  */
data good;
  format x &format_name ;
run;
I'm trying to convert an IBM file to hex values, so I coded this:
//R45ORF80V JOB (EFAS,2SGJ000),'LLAMI',NOTIFY=R45ORF80,
// MSGLEVEL=(1,1),MSGCLASS=X,CLASS=A,
// REGION=0M,TIME=5
//*---------------------------------------------------
//SORTEST EXEC PGM=ICEMAN
//SORTIN DD DSN=LF58.DFE.V1408001,DISP=SHR
//SORTOUT DD DSN=LF58.DFE.V1408001.OUT,
// DISP=(NEW,CATLG,DELETE),
// LRECL=1026,DATACLAS=CDMULTI
//SYSOUT DD SYSOUT=X
//SYSPRINT DD SYSOUT=X
//SYSUDUMP DD SYSOUT=X
//SYSIN DD *
SORT FIELDS=COPY
OUTREC FIELDS=(1,513,HEX)
END
/*
But I get the following error:
ICE043A INVALID DATA SET ATTRIBUTES: SORTOUT RECFM - REASON CODE IS 08
What am I missing? And is the SYSIN correct?
You cut off the most important part of the message, the message code (I've edited it into the question).
When you get a message out of DFSORT which you do not already recognize, you have a few choices: locate the manual DFSORT Messages, Codes and Diagnosis Guide for your release; use the IBM LookAT webservice (http://www-03.ibm.com/systems/z/os/zos/bkserv/lookat/); an internet search; ask your colleagues.
One of these should get you to:
ICE043A INVALID DATA SET ATTRIBUTES: ddname attribute
- REASON CODE IS rsn
Explanation: Critical. An error associated with a record format, record length or
block size was detected, or a conflict between these attributes was detected...
A Reason Code of 8 is:
Input and output data sets have mixed fixed length and variable length
record formats, or mixed valid and invalid record formats. Examples:
The SORTIN data set has RECFM=FB and the SORTOUT data set has
RECFM=VB. The SORTIN01 data set has RECFM=VB and the SORTOUT data set
has RECFM=F or RECFM=U
Basically it is as piet.t suspected in the comments: either your input is variable and your output fixed (it looks like you have something in the DATACLAS; is that the correct one?), or the other way around.
With SORT you do not need to supply any DCB information on the output dataset. That is, no RECFM, LRECL or BLKSIZE. Look at your SYSOUT; that will tell you the RECFM of your input dataset. If that is wrong, you are using the wrong file, or it has been created incorrectly. If it is correct, then strip all the DCB information off the output dataset.
If you still have problems after talking to your storage people about the DATACLAS, then paste the sysout from a current run of your JOB.
For the other issues you have, if you need help with those, start a new question.
I think I started getting this error when I switched from MySQL to PostgreSQL.
I had written code to encrypt/decrypt model attributes containing sensitive data, and I had it working until the DB switch.
I have the following code:
@pbk = OpenSSL::PKey::RSA.new File.read("#{RAILS_ROOT}/cert/pb_sandwich.pem")
@pvk = OpenSSL::PKey::RSA.new File.read("#{RAILS_ROOT}/cert/tuna_salad.pem"), 'pass45*'
model.sendata = Base64.encode64 @pbk.public_encrypt(model.sendata)
I run that code on save. I've also tried with and without first using Base64.
Then when I try to read:
@pvk.private_decrypt Base64.decode64(model.sendata)
I get this error:
OpenSSL::PKey::RSAError: data greater than mod len
I never got that before when I used MySQL. I can't really remember what datatype the sendata column was in MySQL, but in my current PostgreSQL setup that column is of datatype bytea.
I'm assuming that is the problem, since it used to work fine with MySQL. What datatype should the column be if I wanted to skip the extra step of Base64 encoding/decoding? If that is the problem, that is.
Another thing of note is that I've tried generating the private key with modulus lengths of 2048, 4096, and 5120, and I always get the same error. Also, the sendata value isn't very long before encryption; it's under 40 chars.
I'm stumped right now, any ideas?
You are probably not storing the keys properly in the Database. There's probably some field that is being truncated.
The message you are getting probably means that the data is too long to be encrypted with such a small key. If that is the case, you should encrypt the data with AES and encrypt the AES key with RSA, then send both the encrypted data and the encrypted key.
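A minimal Java sketch of that hybrid approach (the question uses Ruby, but the flow is the same; every name here is illustrative):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class HybridEncryptSketch {
    public static void main(String[] args) throws Exception {
        // RSA pair stands in for the .pem keys; 2048-bit modulus.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair rsa = kpg.generateKeyPair();

        // Fresh AES key for this record's payload.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey aesKey = kg.generateKey();

        byte[] payload = "sensitive data of any length".getBytes(StandardCharsets.UTF_8);

        // 1. Encrypt the payload with AES; its size is not limited by the RSA modulus.
        Cipher aes = Cipher.getInstance("AES");   // real code should use an authenticated mode with an IV
        aes.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] encryptedData = aes.doFinal(payload);

        // 2. Encrypt only the 16-byte AES key with RSA; that always fits the modulus.
        Cipher rsaCipher = Cipher.getInstance("RSA");
        rsaCipher.init(Cipher.ENCRYPT_MODE, rsa.getPublic());
        byte[] encryptedKey = rsaCipher.doFinal(aesKey.getEncoded());

        // Store or send both encryptedData and encryptedKey; decrypt in the reverse order.
    }
}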