How can I convert these three Oracle expressions into Spotfire calculated columns?
decode(nvl(sum(a.out_qty),0),0,0,round(60/(decode(sum(a.std_out),0,1,(sum(a.std_out)/nvl(sum(a.out_qty),1)))),2)) as st_upeh,
decode(nvl(sum(a.lot_time),0),0,0,round(nvl(sum(a.out_qty),0)*60/sum(a.lot_time),2)) as lottime_upeh,
round(60/decode(sum(A.std_out),0,1,sum(A.std_out)/nvl(sum(A.out_qty),1))*(sum(A.std_time)/nvl(sum(A.work_time),1)) ,2) AS real_upeh
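A minimal sketch of the pattern for the first column (st_upeh), assuming the Spotfire expression language's If(), SN() (substitute null), Sum() and Round() functions and columns named like the Oracle ones; decode(x, 0, a, b) maps to If(x = 0, a, b) and nvl(x, y) maps to SN(x, y):
If(SN(Sum([out_qty]), 0) = 0, 0,
   Round(60 / If(Sum([std_out]) = 0, 1, Sum([std_out]) / SN(Sum([out_qty]), 1)), 2))
The other two columns follow the same substitutions; note that Sum() in a calculated column aggregates over the whole table unless you add an OVER clause to reproduce the SQL GROUP BY.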
I have a dblink called times.
I'm creating an Oracle report.
This is my query:
select
    tbloola_master."account_name" acctname
from
    "tblOOHA_Master"@times tbloola_master;
Problem:
When I run the report, I get an empty report.
I tried converting the field using TO_CHAR and it works.
select
    TO_CHAR(tbloola_master."account_name") acctname
from
    "tblOOHA_Master"@times tbloola_master;
I don't understand. Please help. Thanks.
I must generate a table of calendar dates from dateIni to dateEnd in PowerCenter Designer.
dateIni is fixed, for example '2013-01-01'.
dateEnd is SYSDATE plus n months.
I'm trying to generate the rows from a Java transformation, which can generate several dynamic rows but needs an input row, and I do not have any input... Is there any better approach, perhaps using a Sequence Generator?
As an example, the resulting table content must be:
date
=======
'2013-01-01'
'2013-01-02'
'2013-01-03'
...
...
'2016-03-10'
You can pass a single input row from any source into the Java transformation and then generate rows with consecutive dates in a loop, as sketched below.
You can create a simple table with two columns, dateIni and dateEnd. It will contain a single row that both kickstarts the Java code and provides configuration for the mapping.
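A rough sketch of the "On Input Row" code, assuming string ports holding 'YYYY-MM-DD' values (the port names in_date_ini, in_date_end and out_date are hypothetical) and a Java 8+ runtime:
// Parse the two configuration ports, then emit one output row per calendar day.
java.time.LocalDate d = java.time.LocalDate.parse(in_date_ini);
java.time.LocalDate end = java.time.LocalDate.parse(in_date_end);
while (!d.isAfter(end)) {
    out_date = d.toString();  // ISO format, e.g. '2013-01-01'
    generateRow();            // emit the current output row
    d = d.plusDays(1);
}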
When working with an Oracle database you can also use the following query in your source qualifier:
SELECT level
FROM dual
CONNECT BY
level <= 1000 -- or any other number
This will generate 1000 rows.
With an Expression transformation you can turn these numbers into dates:
ADD_TO_DATE(TO_DATE('20130101', 'YYYYMMDD'), 'DAY', level - 1)
so that the row with level = 1 yields dateIni itself.
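You can also collapse both steps into the source qualifier query itself; a sketch in Oracle SQL, using n = 3 months purely as an illustrative value:
SELECT DATE '2013-01-01' + LEVEL - 1 AS cal_date
FROM dual
CONNECT BY DATE '2013-01-01' + LEVEL - 1 <= ADD_MONTHS(TRUNC(SYSDATE), 3);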
I'm a Business Intelligence intern and am trying to write a simple ETL batch job to bring one table into our warehouse using SAP Data Services Designer. The source table has a timestamp column, which halts the job's execution, saying:
You cannot select directly from timestamp column . Using
a timestamp column in SQL transform may cause this error. See
documentation or notify Customer Support.
From the technical manual, this limitation is confirmed in the timestamp section, which reads:
You cannot use timestamp columns in the SQL transform or in an Oracle
stored procedure. To use a timestamp column in the SQL transform,
convert the timestamp column in the select list of the SQL transform
to a character format using the to_char function and convert it back
to timestamp using the to_date function.
I've tried remedying the problem by changing the output schema's column to a datetime type, and converting the timestamp in the SQL transform with
TO_DATE(TO_CHAR(SQL.DATETIME_STAMP, 'YYYY-MON-DD HH24:MI:SS'), 'YYYY-MON-DD HH24:MI:SS')
I must be missing a key concept, as it still fails with error 54003 no matter what I try. Thoughts, anyone?
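One hedged reading of the quoted manual text: the to_char conversion belongs in the select list inside the SQL transform, so the transform itself never outputs a timestamp, and the to_date back-conversion happens downstream, for example in a Query transform, rather than both being nested in a single expression. A sketch, where SOURCE_TABLE is a stand-in for the real source:
-- Select list inside the SQL transform: output a string, not a timestamp
SELECT TO_CHAR(DATETIME_STAMP, 'YYYY-MON-DD HH24:MI:SS') AS DATETIME_STAMP_CHR
FROM SOURCE_TABLE
-- Expression of a datetime output column in a downstream Query transform
to_date(DATETIME_STAMP_CHR, 'YYYY-MON-DD HH24:MI:SS')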
I want to add a column to a PL/SQL query, but the syntax is different from the SQL I am used to and I am not sure how to do it. In SQL I could use:
SELECT CustomerName, "Big" AS Round FROM Customers;
Using the now-famous Northwind database, this would result in:
CustomerName / Round
Alfred / Big
But if I use this same syntax in PL/SQL, it results in an
"ORA-00904: "Big": invalid identifier" error.
How do I create this column and populate it with the data I require?
Use single quotes (') rather than double quotes (") and your query should work. In Oracle, double quotes delimit identifiers such as column names, while single quotes delimit string literals, which is why "Big" was parsed as a column name and raised ORA-00904.
SELECT CustomerName, 'Big' AS Round FROM Customers;
I have an SSIS package that is pulling data from an Oracle database using the Native OLE DB\Oracle Provider for OLE DB.
My package successfully, but slowly, pulls the data from one view in Oracle to a staging table in my SQL Server database. The problem I am having is that some of the fields in Oracle are 4000 characters in length, and in SQL Server I only need the first 255 characters. Would it be better to do a substring in my Oracle query and only take the size I need, or to take all 4000 characters? Is there a better way to handle this data import?
Here is a sample of the query I am using to extract the data from Oracle:
select a
     , b
     , substr(c, 1, 255) as c   -- Oracle uses SUBSTR, not SUBSTRING; the duplicate bare c column is dropped
     , substr(d, 1, 255) as d
     , e
     , CASE WHEN EXTRACT(YEAR from LAST_TAKEN_DT) < 1900
            THEN NULL
            WHEN EXTRACT(YEAR from LAST_TAKEN_DT) > 2025
            THEN NULL
            ELSE LAST_TAKEN_DT  -- without an ELSE, in-range dates would also come back NULL
       END AS LAST_TAKEN_DT
from oracle_View1
First off, if it were up to you, I would suggest using the Attunity Oracle adapters over the OLE DB connection. They are definitely a lot faster, and you could choose to do the substring in the Oracle query or within your SSIS package using a Derived Column transformation.
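If you keep the OLE DB source, a Derived Column expression along these lines trims the value on the SSIS side (a sketch; c is a column from the question's query, and the DT_WSTR length is an assumption matching the 255-character target):
(DT_WSTR, 255)SUBSTRING(c, 1, 255)
Pushing the SUBSTR into the Oracle query instead reduces the data sent over the wire, which generally helps when the pull is slow.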