Cognos TM1 Error: Syntax Error on or before Value

I am trying to insert a numeric value (a measure) from a TM1 cube into a database.
The variable Value is of type Numeric in TM1, and the corresponding column in the database is decimal.
I check whether the variable contains a string value; if it does, I write it to a separate file, and if it is numeric I insert it into the database.
But there seems to be an error in my SQL query, which states:
Syntax Error on or before Value
I don't know why it gives me an error even though I checked that the value is numeric.
Here is a snippet of my code:
zType = DTYPE( 'Sales', Sales );
IF( zType @= 'N' );
SQL_INSERT_N2 = 'INSERT INTO DB VALUES ( ''' | dim1 | ''' , ''' | dim2 | ''', ''' | Value | ''' )'; # error in this line
ELSE;
zValue = NumberToString( Value );
zText = dim1 | ';' | dim2 | ';' | zValue;
ASCIIOUTPUT( zFile, zText );
ENDIF;

The problem was that I was passing the numeric value as a string by wrapping it in quotes with '''|Value|'''.
So it should be something like this:
SQL = 'INSERT INTO DB VALUES ( ''' | dim1 | ''' , ''' | dim2 | ''', ' | NumberToString( Value ) | ' )';
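To see the difference, compare the statements the two concatenations generate. With hypothetical values dim1 = 'FR', dim2 = '2021' and Value = 12.5, the corrected expression should produce:

```sql
-- statement generated by the corrected concatenation (hypothetical values)
INSERT INTO DB VALUES ( 'FR' , '2021', 12.5 )
```

whereas the original wrapped the measure in quotes, producing '12.5', which the database may reject for a decimal column.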

Related

Oracle Spatial: Update a numeric column based on Point-coordinates

(Already solved with a second solution, but I wonder why the first idea does not work.)
I have a table with FID, GEOM (point data), ORIENTATION, and so on. I want to update ORIENTATION based on the coordinates, like "set orientation = 99 where X = something and Y = something".
I have this:
UPDATE WW_POINT
SET
ORIENTATION = 99.9
WHERE
F_CLASS_ID_ATTR = 77
AND GEOM.SDO_POINT.X = 2695056.511
AND GEOM.SDO_POINT.Y = 1279718.364;
The result is:
Error starting at line : 1 in command -
UPDATE WW_POINT
SET
ORIENTATION = 99.9
WHERE
F_CLASS_ID_ATTR = 77 -- haltunsgverbindung
AND GEOM.SDO_POINT.X = 2695056.511
AND GEOM.SDO_POINT.Y = 1279718.364
Error at Command Line : 7 Column : 12
Error report -
SQL Error: ORA-00904: "GEOM"."SDO_POINT"."Y": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
A simple select returns X and Y as expected:
SELECT
X.GEOM.SDO_POINT.X
, X.GEOM.SDO_POINT.Y
FROM
WW_POINT X
So the question is: What is wrong here?
My second solution with SDO_EQUAL seems to work fine:
UPDATE WW_POINT
SET
ORIENTATION = 389.608
WHERE
F_CLASS_ID_ATTR = 77 -- haltunsgverbindung
AND SDO_EQUAL (
GEOM
, MDSYS.SDO_GEOMETRY (
2001
, 2056
, SDO_POINT_TYPE (
2695056.511
, 1279718.364
, NULL
)
, NULL
, NULL
)
) = 'TRUE';
Comparing both of your queries: the second (working) one has the form <table alias>.<column name>.<attr>.<attr>, but the first one is <column name>.<attr>.<attr>.
From the documentation:
t_alias
Specify a correlation name, which is an alias for the table, view, materialized view, or subquery for evaluating the query. This alias is required if the select list references any object type attributes or object type methods.
Given this sample table:
create table t (p)
as
select
SDO_GEOMETRY (
2001, 2056
, SDO_POINT_TYPE (1, 1, NULL)
, NULL, NULL
)
from dual
A query without table alias fails:
select t.p.sdo_point.x
from t
where t.p.sdo_point.x = 1
ORA-00904: "T"."P"."SDO_POINT"."X": invalid identifier
And the query with table alias works as expected allowing attribute access in the select list as well as in the where clause:
select t.p.sdo_point.x
from t t
where t.p.sdo_point.x = 1
P.SDO_POINT.X
1
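Applying the same rule to the failing UPDATE, adding a table alias and qualifying the attribute access with it should make it work (untested sketch; the alias name X is assumed):

```sql
UPDATE WW_POINT X
SET
X.ORIENTATION = 99.9
WHERE
X.F_CLASS_ID_ATTR = 77
AND X.GEOM.SDO_POINT.X = 2695056.511
AND X.GEOM.SDO_POINT.Y = 1279718.364;
```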

Oracle JSON_QUERY with path as query column value

I am trying to get part of a JSON column in each result row with this select:
SELECT TRIM(a.symbol),
TRIM(a.ex_name),
to_char(a.date_rw, 'dd-MON-yyyy'),
a.pwr,
a.last,
JSON_QUERY(b.mval, '$."-9"') as value
FROM adviser_log a
INNER JOIN profit_model_d b
ON a.date_rw = b.date_rw
WHERE a.date_rw = '08-OCT-2021'
The select result:
VERY NAS 08-OCT-2021 -9 8.9443 {"sl":-3.6,"tp":5,"avg":1.368,"max":5,"min":-3.6,"count":1}
As the JSON path I put the literal "-9", but I want to use a.pwr as the path. Is that possible?
I tried CONCAT('$.', a.pwr) without result.
Is there any way to create a dynamic JSON path for JSON_QUERY?
I want to match the part of the JSON whose key equals a.pwr in each row of the select.
Thanks
You can use a function to dynamically get the JSON value:
WITH FUNCTION get_value(
value IN CLOB,
path IN VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
RETURN JSON_OBJECT_T( value ).get_object( path ).to_string();
END;
SELECT TRIM(a.symbol) AS symbol,
TRIM(a.ex_name) AS ex_name,
to_char(a.date_rw, 'dd-MON-yyyy') AS date_rw,
a.pwr,
a.last,
get_value(b.mval, a.pwr) AS value
FROM adviser_log a
INNER JOIN profit_model_d b
ON a.date_rw = b.date_rw
WHERE a.date_rw = DATE '2021-10-08'
Which, for your sample data:
CREATE TABLE adviser_log (symbol, ex_name, date_rw, pwr, last) AS
SELECT 'VERY', 'NAS', DATE '2021-10-08', -9, 8.9443 FROM DUAL;
CREATE TABLE profit_model_d (date_rw DATE, mval CLOB CHECK (mval IS JSON));
INSERT INTO profit_model_d (
date_rw,
mval
) VALUES (
DATE '2021-10-08',
'{"-9":{"sl":-3.6,"tp":5,"avg":1.368,"max":5,"min":-3.6,"count":1}}'
);
Outputs:
SYMBOL | EX_NAME | DATE_RW | PWR | LAST | VALUE
VERY | NAS | 08-OCT-2021 | -9 | 8.9443 | {"sl":-3.6,"tp":5,"avg":1.368,"max":5,"min":-3.6,"count":1}
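One caveat: JSON_OBJECT_T.get_object raises an error when the key is absent. A possible null-safe variant (a sketch; get_value_safe is a hypothetical name, and it assumes the JSON_OBJECT_T.has method available in Oracle 12.2+):

```sql
WITH FUNCTION get_value_safe(
    value IN CLOB,
    path  IN VARCHAR2
) RETURN VARCHAR2
IS
    obj JSON_OBJECT_T := JSON_OBJECT_T( value );
BEGIN
    -- only descend into the key if it exists; otherwise return NULL
    IF obj.has( path ) THEN
        RETURN obj.get_object( path ).to_string();
    END IF;
    RETURN NULL;
END;
SELECT get_value_safe( b.mval, a.pwr ) AS value
FROM adviser_log a
INNER JOIN profit_model_d b
ON a.date_rw = b.date_rw
```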

This is someone else's code. I am trying to get it to work but don't know what's wrong.

Can someone please help me get rid of this error?
Msg 156, Level 15, State 1, Line 12
Incorrect syntax near the keyword 'INTO'.
Msg 102, Level 15, State 1, Line 24
Incorrect syntax near 'LocationID'.
use DB1
-- Declare variables
declare @AnalysisSID varchar(10)
declare @ExposureDB varchar(250)
declare @AnalysisName varchar(250)
-- !! begin user input !! -------------------------------------------------------------------
set @AnalysisSID = 9
set @ExposureDB = (select ExposureDataSourceName
from t9_LOSS_DimExposureDataSource) --<--!!! update result table number
-- !! end user input !! ----------------------------------------------------------------------
set @AnalysisName = (select AnalysisName from tAnalysisResult where ResultSID = @AnalysisSID)
Exec(
'
if not (exists (select * from sysobjects where name = ''LOSSES_FORFEED''))
begin
CREATE TABLE LOSSES_FORFEED (
Name varchar(250) NOT NULL,
LOCID varchar(100) NOT NULL,
ContractID varchar(100) NOT NULL,
GroundUp_Loss float,
Gross_Loss float)
end
INTO LOSSES_FORFEED
SELECT
'''+@AnalysisSID+''' as AnalysisID, '''+@AnalysisName+''' as Name,
l.LocationID as LOCID,
c.ContractID,
sum(GroundUpLoss) as GroundUp_Loss,
sum(GrossLoss) as Gross_Loss
FROM t'+@AnalysisSID+'_LOSS_ByLocationSummary loss
join '+@ExposureDB+'..tLocation l on loss.LocationSID = l.LocationSID
join '+@ExposureDB+'..tContract c on l.ContractSID = c.ContractSID
group by
c.ContractID,
l.LocationID
')
Have you tried adding an INSERT before INTO LOSSES_FORFEED:
INSERT INTO LOSSES_FORFEED
That should work, at least on MS SQL Server. But the number of columns doesn't match either: the table LOSSES_FORFEED has 5 columns, while the select statement provides 6.
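Putting both remarks together, the statement generated inside the EXEC could look like this (a sketch only; identifiers are taken from the question, @AnalysisSID is assumed to expand to 9, <ExposureDB> stands for the resolved database name, and the extra AnalysisID column is dropped here to match the 5-column table, though adding that column to the table would work too):

```sql
INSERT INTO LOSSES_FORFEED (Name, LOCID, ContractID, GroundUp_Loss, Gross_Loss)
SELECT
'My Analysis' as Name,  -- hypothetical @AnalysisName value
l.LocationID as LOCID,
c.ContractID,
sum(GroundUpLoss) as GroundUp_Loss,
sum(GrossLoss) as Gross_Loss
FROM t9_LOSS_ByLocationSummary loss
join <ExposureDB>..tLocation l on loss.LocationSID = l.LocationSID
join <ExposureDB>..tContract c on l.ContractSID = c.ContractSID
group by
c.ContractID,
l.LocationID
```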

H2 - CREATE TABLE creates wrong data type

Testing my DAL with an H2 in-memory database currently doesn't work because the data type BINARY gets converted to VARBINARY:
CREATE TABLE test (
pk_user_id INT AUTO_INCREMENT(1, 1) PRIMARY KEY,
uuid BINARY(16) UNIQUE NOT NULL
);
which results in a wrong data type when I check whether the columns exist with the expected data types:
2017-03-20 16:24:48 persistence.database.Table check Unexpected column
(UUID) or type (-3, VARBINARY)
tl;dr
which results in a wrong data type
No, not the wrong type, just another label for the same type.
The binary type has five synonyms: { BINARY | VARBINARY | LONGVARBINARY | RAW | BYTEA }
All five names mean the same type, and all map to byte[] in Java.
Synonyms for datatype names
Data types are not strictly defined in the SQL world. The SQL spec defines only a few types. Many database systems define many types by many names. To make it easier for a customer to port from one database system to theirs, the database vendors commonly implement synonyms for data types to match those of their competitors where the types are compatible.
H2, like many other database systems, has more than one name for a datatype. For a binary type where the entire value is loaded into memory, H2 defines five names for the same single data type:
{ BINARY | VARBINARY | LONGVARBINARY | RAW | BYTEA }
Similarly, H2 provides for a signed 32-bit integer datatype by any of five synonyms:
{ INT | INTEGER | MEDIUMINT | INT4 | SIGNED }
So you can specify any of these five names but you will get the same effect, the same underlying datatype provided by H2.
Indeed, I myself ran code to create the column using each of those five names for the binary type. In each case, the metadata for the column name reports the datatype as VARBINARY.
While it does not really matter which of the five is used internally to track the column’s datatype, I am a bit surprised as to the use of VARBINARY because the H2 datatype documentation page heading advertises this type as BINARY. So I would expect BINARY to be used by default in the metadata. You might want to log a bug/issue for this if you really care, as it seems either the doc heading should be changed to VARBINARY or H2’s internal labelling for the datatype should be changed to BINARY.
Below is some example Java JDBC code confirming the behavior you report in your Question.
I suggest you change your datatype-checking code to look for any of the five possible names for this datatype rather than check for only one specific name.
// Requires imports: java.sql.*, java.util.Locale
try {
    Class.forName ( "org.h2.Driver" );
} catch ( ClassNotFoundException e ) {
    e.printStackTrace ( );
}

try ( Connection conn = DriverManager.getConnection ( "jdbc:h2:mem:" ) ;
      Statement stmt = conn.createStatement ( ) ) {
    String tableName = "test_";
    String sql = "CREATE TABLE " + tableName + " (\n" +
            " pk_user_id_ INT AUTO_INCREMENT(1, 1) PRIMARY KEY,\n" +
            " uuid_ BINARY(16) UNIQUE NOT NULL\n" +
            ");";
    // String sql = "CREATE TABLE " + tableName +
    //         "(" +
    //         " id_ INT AUTO_INCREMENT(1, 1) PRIMARY KEY, " +
    //         " binary_id_ BINARY(16) UNIQUE NOT NULL, " +
    //         " uuid_id_ UUID, " +
    //         " age_ INTEGER " + ")";
    stmt.execute ( sql );

    // List tables.
    DatabaseMetaData md = conn.getMetaData ( );
    try ( ResultSet rs = md.getTables ( null, null, null, null ) ) {
        while ( rs.next ( ) ) {
            System.out.println ( rs.getString ( 3 ) );
        }
    }

    // List columns of our table.
    try ( ResultSet rs = md.getColumns ( null, null, tableName.toUpperCase ( Locale.US ), null ) ) {
        System.out.println ( "Columns of table: " + tableName );
        while ( rs.next ( ) ) {
            System.out.println ( rs.getString ( 4 ) + " | " + rs.getString ( 5 ) + " | " + rs.getString ( 6 ) ); // COLUMN_NAME, DATA_TYPE, TYPE_NAME.
        }
    }
} catch ( SQLException e ) {
    e.printStackTrace ( );
}
CATALOGS
COLLATIONS
…
USERS
VIEWS
TEST_
Columns of table: test_
PK_USER_ID_ | 4 | INTEGER
UUID_ | -3 | VARBINARY
Tips:
Adding a trailing underscore to all your SQL names avoids collisions with any of the over one thousand reserved words found in the SQL world. The SQL spec promises a trailing underscore will never be used by a SQL system. For example, your use of the column name uuid could conflict with H2’s UUID datatype.
Your code uuid BINARY(16) suggests you are trying to store a UUID (a 128-bit value where some bits have defined semantics). Note that H2 supports UUID natively as a data type as does Postgres and some other database systems. So change uuid_ BINARY(16) to uuid_ UUID.
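A sketch of the suggested DDL, combining both tips (native UUID type plus trailing underscores on names):

```sql
CREATE TABLE test_ (
    pk_user_id_ INT AUTO_INCREMENT(1, 1) PRIMARY KEY,
    uuid_ UUID UNIQUE NOT NULL
);
```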

Confused by T-SQL: different output when I run a query in query window and as part of scalar function

Note: the patient data displayed below is "dummy" data that I made up. It is not the actual information for an actual patient.
I have a function with the following conversion in it:
Declare @bdate date
set @bdate = CONVERT ( date , left(@dob,8) , 112 )
If I just run this in a query window, it converts the date fine
select CONVERT(date, left('19900101', 8), 112) -- returns a good date
But if I step through a scalar function with the same code in it in visual studio I get an error...
Declare @bdate date
set @bdate = CONVERT ( date , left(@pidPatientDob,8) , 112 )
throws...
Running [dbo].[getAgeAtTestDate] ( @obxTestDate = '20120101',
@pidPatientDob = '19900101' ).
Conversion failed when converting date and/or time from character
string. Invalid attempt to read when no data is present.
Why does it work in the query window but not in the function? It seems like the parameters are getting filled properly in the function.
Here is the full text of the function, which is returning null (I think because of the error)
ALTER FUNCTION [dbo].[getAgeAtTestDate]
(
-- Add the parameters for the function here
@obxTestDate as nvarchar(50), @pidPatientDob as nvarchar(50)
)
RETURNS int
AS
BEGIN
Declare @bdate date
set @bdate = CONVERT ( date , left(@pidPatientDob,8) , 112 )
Declare @testDate date
set @testDate = CONVERT ( date , left(@testDate,8) , 112 )
-- Return the result of the function
RETURN datediff(mm, @testDate, @bdate)
END
Your parameter is called @obxTestDate, not @testDate, so change:
set @testDate = CONVERT ( date , left(@testDate,8) , 112 )
into
set @testDate = CONVERT ( date , left(@obxTestDate,8) , 112 )
and things will work better.
As a side note, I think you reversed the DATEDIFF arguments too; the start date should come before the end date:
RETURN datediff(mm, @bdate, @testDate)
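Putting both fixes together, the function could read (an untested sketch, keeping the original declarations):

```sql
ALTER FUNCTION [dbo].[getAgeAtTestDate]
(
    @obxTestDate as nvarchar(50), @pidPatientDob as nvarchar(50)
)
RETURNS int
AS
BEGIN
    DECLARE @bdate date
    SET @bdate = CONVERT(date, LEFT(@pidPatientDob, 8), 112)
    DECLARE @testDate date
    SET @testDate = CONVERT(date, LEFT(@obxTestDate, 8), 112)
    -- start date (birth) before end date (test)
    RETURN DATEDIFF(mm, @bdate, @testDate)
END
```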
