Download header line into TXT with GUI_DOWNLOAD

I've got a little problem using the function 'GUI_DOWNLOAD'. I'm trying to add a header line at the top of a .txt file I create. My solution is:
CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename              = lv_file
    filetype              = 'ASC'
    write_field_separator = 'X'
  TABLES
    data_tab              = it_outh.   " internal table with just the header line

CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename              = lv_file
    filetype              = 'ASC'
    append                = 'X'
    write_field_separator = 'X'
  TABLES
    data_tab              = it_output. " internal table with the selected data
The code works, but the columns don't line up because the header texts and the data fields have different lengths.
Is the only way to fix this to use shorter descriptions in it_outh, or does anyone have a better idea?
Have a nice day.
Regards,
Dennis

Here is a quick-and-dirty workaround for producing aligned text output of an itab with a header line:
FIELD-SYMBOLS: <fs_table> TYPE STANDARD TABLE.
DATA: lref_struct TYPE REF TO cl_abap_structdescr,
      o_table     TYPE REF TO data.

lref_struct ?= cl_abap_structdescr=>describe_by_name( 'CRCO' ).
DATA(components) = lref_struct->get_components( ).
DATA(fields) = VALUE ddfields( FOR line IN lref_struct->get_ddic_field_list( ) ( line ) ).
" make every component char by replacing its type with that of the existing PRZ char field
MODIFY components FROM VALUE abap_componentdescr( type = components[ name = 'PRZ' ]-type )
  TRANSPORTING type WHERE name <> ''.
lref_struct = cl_abap_structdescr=>create( components ).
DATA(o_ref_table) = cl_abap_tabledescr=>create( p_line_type  = lref_struct
                                                p_table_kind = cl_abap_tabledescr=>tablekind_std ).
CHECK o_ref_table IS BOUND.
CREATE DATA o_table TYPE HANDLE o_ref_table.
ASSIGN o_table->* TO <fs_table>.
APPEND INITIAL LINE TO <fs_table>. " reserve the first line for the headers
SELECT CAST( mandt      AS CHAR( 12 ) ) AS mandt,
       CAST( objty      AS CHAR( 2 ) )  AS objty,
       CAST( objid      AS CHAR( 8 ) )  AS objid,
       CAST( laset      AS CHAR( 6 ) )  AS laset,
       CAST( endda      AS CHAR( 8 ) )  AS endda,
       CAST( lanum      AS CHAR( 4 ) )  AS lanum,
       CAST( begda      AS CHAR( 8 ) )  AS begda,
       CAST( aedat_kost AS CHAR( 8 ) )  AS aedat_kost,
       CAST( aenam_kost AS CHAR( 12 ) ) AS aenam_kost,
       CAST( kokrs      AS CHAR( 10 ) ) AS kokrs,
       CAST( kostl      AS CHAR( 6 ) )  AS kostl,
       CAST( lstar      AS CHAR( 12 ) ) AS lstar,
       CAST( lstar_ref  AS CHAR( 12 ) ) AS lstar_ref,
       CAST( forml      AS CHAR( 12 ) ) AS forml,
       CAST( prz        AS CHAR( 12 ) ) AS prz,
       CAST( actxy      AS CHAR( 12 ) ) AS actxy,
       CAST( actxk      AS CHAR( 12 ) ) AS actxk,
       CAST( leinh      AS CHAR( 12 ) ) AS leinh,
       CAST( bde        AS CHAR( 12 ) ) AS bde,
       CAST( sakl       AS CHAR( 1 ) )  AS sakl
  FROM crco
  APPENDING CORRESPONDING FIELDS OF TABLE @<fs_table>
  UP TO 10 ROWS.
" writing headers
ASSIGN <fs_table>[ 1 ] TO FIELD-SYMBOL(<empty>).
LOOP AT fields ASSIGNING FIELD-SYMBOL(<field>).
ASSIGN COMPONENT <field>-fieldname OF STRUCTURE <empty> TO FIELD-SYMBOL(<heading>).
CHECK sy-subrc = 0.
<heading> = <field>-scrtext_m.
ENDLOOP.
CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename              = 'C:\tab.txt'
    filetype              = 'ASC'
    write_field_separator = 'X'
  TABLES
    data_tab              = <fs_table>.
It is not that simple, but it definitely does the job.
The trick used in the snippet above: the target internal table is created dynamically with every field typed as character (unlike the real DB table), then a dummy line is added at the top of the table and all the headings are written into it.
This approach requires some preliminary work (such as explicitly casting all the DB fields), but I see no other way to produce formatted TXT output.
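If the long list of explicit CASTs feels heavy, a possible shortcut is to select into the real row type and let MOVE-CORRESPONDING fill the all-char table. This is an untested sketch; the conversions then follow ABAP's MOVE rules (dates come out as YYYYMMDD, for example) instead of the explicit casts:
SELECT * FROM crco INTO TABLE @DATA(lt_crco) UP TO 10 ROWS.
LOOP AT lt_crco ASSIGNING FIELD-SYMBOL(<ls_crco>).
  " implicit component-wise conversion into the dynamically created char structure
  APPEND INITIAL LINE TO <fs_table> ASSIGNING FIELD-SYMBOL(<ls_char>).
  MOVE-CORRESPONDING <ls_crco> TO <ls_char>.
ENDLOOP.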

Related

DAX Index and Match function?

I'm trying to use the LOOKUPVALUE function, but I have more than one ID per group and it returns the error "A table of multiple values was supplied where a single value was expected" (a sketch of the kind of call that triggers this follows the table below).
For example, I need to fill all the lines in "Measure_ID":
Group_ID    ID       Desc    Measure_ID
------------------------------------------
112233      0        close   12345
112233      12345    open    12345
112233      0        close   12345
223344      0        close   23456
223344      0        close   23456
223344      23456    open    23456
112233      12345    open    12345
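For reference, a calculated column along these lines reproduces the error, because each Group_ID matches several different ID values (a sketch using the 'fact' table name from the answer below; the asker's exact formula is not shown):
Measure_ID =
LOOKUPVALUE (
    'fact'[ID],           // column whose value to return
    'fact'[Group_ID],     // search column
    'fact'[Group_ID]      // search value taken from the current row
)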
LOOKUPVALUE is not usable in a DAX measure when more than one row matches. You can replace it with the following:
_return =
VAR _1 =
MAX ( 'fact'[Group_ID] )
VAR _2 =
CALCULATE (
MAX ( 'fact'[ID] ),
ALL ( 'fact' ),
FILTER ( VALUES ( 'fact'[Group_ID] ), 'fact'[Group_ID] = _1 )
)
RETURN
_2
A simpler way of calculating this is by using ALLEXCEPT() in a measure like so:
Measure =
CALCULATE (
MAX ( 'Table'[ID] ) ,
ALLEXCEPT ( 'Table' , 'Table'[Group_ID] )
)
This yields the same result as #smpa01's measure but is much more performant, around 30-40% quicker on this data set according to DAX Studio analysis.

CLOB datatype is causing a performance issue

UPDATED: The code is working as expected, but the performance is very slow. When I do a search without including the CLOB data, the query runs very fast; if I include the CLOB variable in my search, the query is very slow. I am using the CLOB to pass a large string of data ('aaaaaaa,bbbb,c,ddddd...') and store it in a global temporary table, which I thought would maximize query performance. How can I use my CLOB variable for better performance? Please look at the code below for more information. I am still struggling with the performance; can anyone provide any suggestions? Any help is appreciated.
GLOBAL TT GlobalTemp_EMP( //this already exists
emp_refno (30 byte);
)
Create or replace PROCEDURE Employee(
emp_refno IN CLOB
)
AS
Begin
OPEN p_resultset FOR
with inputs ( str ) as ( //red error line here
select to_clob(emp_refno )
from dual
),
prep ( s, n, token, st_pos, end_pos ) as (
select ',' || str || ',', -1, null, null, 1
from inputs
union all
select s, n+1, substr(s, st_pos, end_pos - st_pos),
end_pos + 1, instr(s, ',', 1, n+3)
from prep
where end_pos != 0
)
INSERT into GlobalTemp_EMP //red error line here
select token from prep;
select e.empname, e.empaddress, f.department
from employee e
join department f on e.emp_id = f.emp_id
and e.emp_refno in (SELECT emp_refno from GlobalTemp_EMP) //using GTT In subquery
Put this code between BEGIN and OPEN p_resultset FOR (it might still have some performance issues, though):
INSERT into GlobalTemp_EMP
with inputs ( str ) as (
select to_clob(emp_refno )
from dual
),
prep ( s, n, token, st_pos, end_pos ) as (
select ',' || str || ',', -1, null, null, 1
from inputs
union all
select s, n+1, substr(s, st_pos, end_pos - st_pos),
end_pos + 1, instr(s, ',', 1, n+3)
from prep
where end_pos != 0
)
select token from prep where token is not NULL;
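For completeness, a hypothetical invocation with a comma-separated CLOB (the IDs are made up; the procedure name and parameter come from the question):
DECLARE
  l_ids CLOB := TO_CLOB('E001,E002,E003');
BEGIN
  Employee(emp_refno => l_ids);
END;
/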
The below doesn't seem to be valid syntax:
GLOBAL TT GlobalTemp_EMP( //this already exists
emp_refno (30 byte);
)
I don't know the reason for using byte semantics, or whether you defined it as a clob or char or varchar2.
If it is currently a CLOB, then perhaps you could define the column as emp_refno varchar2(30 char) and add a unique index, changing the Employee procedure to only insert new IDs. An index would help more during the insertions (for checking duplicates) than when you read the data out.
If you want to insert a huge amount of data into GlobalTemp_EMP faster, I would recommend making it a regular table, pre-processing the data (in Perl or another language) to split the IDs outside Oracle, and then using SQL*Loader. Or perhaps an external table.
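For the external table route, a minimal sketch (the directory object, file name, and column size are assumptions):
-- assumes a directory object DATA_DIR pointing at the folder holding emp_refs.csv
CREATE TABLE emp_refs_ext (
  emp_refno VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp_refs.csv')
);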
I don't think using a global temporary table will improve your performance at all (at least without indexes). Are you sure these are CLOBs? At a glance, these seem to be varchars.
To compare CLOBs, you should be using dbms_lob.compare. I think = will do an implicit conversion to a varchar (and truncate), then do the comparison.
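A minimal sketch of such a comparison (the values are illustrative); DBMS_LOB.COMPARE returns 0 when the two LOBs are equal:
DECLARE
  l_a CLOB := TO_CLOB('aaaaaaa,bbbb,c,ddddd');
  l_b CLOB := TO_CLOB('aaaaaaa,bbbb,c,ddddd');
BEGIN
  IF DBMS_LOB.COMPARE(l_a, l_b) = 0 THEN
    DBMS_OUTPUT.PUT_LINE('CLOBs are equal');
  END IF;
END;
/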

Count how many sub-activities were created based on an activity

I have a dimension that stores workflows (cases, subcases). I would like to count how many subcases were created for each case.
Workflow Dimension

Workflow
------------------------------
Case Number     WorkflowType
------------------------------
10              Case
20              Case
30              Case
20-1            Subcase
20-2            Subcase
20-3            Subcase
10-1            Subcase
The desired output is, for every case, a count of how many subcases were created:
Workflow
------------------------------------------------
Case Number     WorkflowType    CountOfSubcases
------------------------------------------------
10              Case            1
20              Case            3
30              Case            0
------------------------------------------------
Total                           4
I have a DAX measure that works, but the total at the bottom does not show when looking at multiple rows; it only displays when a single case is selected.
Total Subcases =
VAR CC = FIRSTNONBLANK ( Workflow[Case Number], 1 )
RETURN
    COUNTX (
        FILTER (
            ALL ( Workflow ),
            SUBSTITUTE ( Workflow[Case Number], RIGHT ( Workflow[Case Number], 2 ), "" ) = CC
                && Workflow[WorkflowType] = "SubCase"
        ),
        Workflow[WorkflowID]
    )
If anybody could help me tweak my measure or suggest a new one, that would be great.
Note: I'm pointing my report to Analysis Services.
Thanks in advance.
You can fix your measure as follows:
Total Subcases = 0 +
COUNTX (
FILTER (
ALL( Workflow ),
SUBSTITUTE ( Workflow[Case Number], RIGHT ( Workflow[Case Number], 2 ), "" )
IN VALUES( Workflow[Case Number] )
&& Workflow[WorkflowType] = "SubCase"
),
Workflow[WorkflowID]
)
The VALUES function returns a list of all the values in the current filter context instead of just the one you were picking before.
Note: To make things easier to work with, I'd suggest splitting the Case Number column into two columns in the query editor stage. Then you don't have to work with all the string manipulation.
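If you would rather stay in DAX than reshape in the query editor, a calculated column along these lines could extract the parent case (a sketch; it assumes subcase numbers always follow the pattern <case>-<n>):
Parent Case =
VAR pos = FIND ( "-", Workflow[Case Number], 1, 0 )
RETURN
    IF ( pos = 0, Workflow[Case Number], LEFT ( Workflow[Case Number], pos - 1 ) )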
Edit: Note that x IN <Table[column]> is equivalent to the older CONTAINS syntax:
CONTAINS(Table, [column], x)
So if you can't use IN then try this formulation:
Total Subcases = 0 +
COUNTX (
FILTER (
ALL( Workflow ),
CONTAINS(
VALUES( Workflow[Case Number] ),
Workflow[Case Number],
SUBSTITUTE ( Workflow[Case Number],
RIGHT ( Workflow[Case Number], 2 ), "" )
)
&& Workflow[WorkflowType] = "SubCase"
),
Workflow[WorkflowID]
)

DAX: if a column is null or blank, how can I set the previous column as the result?

Currently my DAX formula for Column3 is =PATHITEM([Hierarchy Path], 3), but I want the formula to fall back to Column2 when that result is blank or null. How can I write a DAX formula for that?
You could use something like this:
=
IF (
ISBLANK ( PATHITEM ( [Hierarchy Path], 3 ) ),
[Column2],
PATHITEM ( [Hierarchy Path], 3 )
)
Column3 would have to be of data type Text.
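To illustrate the fallback with assumed values: PATHITEM returns BLANK() when the path has fewer items than the requested position, which is exactly what the ISBLANK test catches.
// PATHITEM ( "100|200|300", 3 ) returns "300"
// PATHITEM ( "100|200", 3 )     returns BLANK(), so the IF falls back to [Column2]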

Change Excel date number to Oracle date

I have a date stored as 41293 in Oracle; how can I show it in DD/MON/YYYY format?
If I copy-paste it into Excel and change it to date format, it shows 01/19/13.
Please help me.
The value you have is the number of days since the 30th of December 1899. Try:
select to_char(to_date('1899-12-30', 'YYYY-MM-DD') + 41293,
               'DD/MON/YYYY')
from dual
Quoting from an Oracle forum:
You need a tool to do that, since the format tells Oracle what format your date has in the spreadsheet. While you may not have opted to format the date in Excel, it will appear as a date in the previewer. Use the format shown there as a guide for what to enter into the data type panel.
So, if you have a date that looks like 19-jan-2006 in the previewer, then the format to enter in the data type panel, if you choose to insert that column, is going to be DD-MON-YYYY.
Option 1:
Try using the below functions
FUNCTION FROMEXCELDATETIME (ACELLVALUE IN VARCHAR2)
  RETURN TIMESTAMP
IS
  EXCEL_BASE_DATE_TIME CONSTANT TIMESTAMP := TO_TIMESTAMP('12/31/1899', 'mm/dd/yyyy');
  VAL                  CONSTANT NUMBER    := TO_NUMBER(NULLIF(TRIM(ACELLVALUE), '0'));
BEGIN
  -- subtract one day for serials >= 60 to correct Excel's phantom 1900-02-29
  RETURN EXCEL_BASE_DATE_TIME
         + NUMTODSINTERVAL(VAL - CASE WHEN VAL >= 60 THEN 1 ELSE 0 END, 'DAY');
END;

FUNCTION TOEXCELDATETIME (ATIMESTAMP IN TIMESTAMP)
  RETURN VARCHAR2
IS
  EXCEL_BASE_DATE_TIME CONSTANT TIMESTAMP := TO_TIMESTAMP('12/31/1899', 'mm/dd/yyyy');
  DIF  CONSTANT INTERVAL DAY(9) TO SECOND(9) := ATIMESTAMP - EXCEL_BASE_DATE_TIME;
  DAYS CONSTANT INTEGER := EXTRACT(DAY FROM DIF);
BEGIN
  RETURN CASE
           WHEN DIF IS NULL THEN ''
           ELSE
             -- add the phantom day back for dates on/after 1900-03-01 and
             -- append the fractional day computed from the time part
             TO_CHAR(DAYS
                     + CASE WHEN DAYS >= 60 THEN 1 ELSE 0 END
                     + ROUND((EXTRACT(HOUR FROM DIF)
                              + (EXTRACT(MINUTE FROM DIF)
                                 + EXTRACT(SECOND FROM DIF) / 60) / 60) / 24,
                             4))
         END;
END;
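Once the functions are compiled (standalone or in a package accessible from SQL), a quick sanity check with the serial from the question; the expected result is an assumption based on what Excel displays for 41293:
SELECT FROMEXCELDATETIME('41293') FROM dual;
-- expected: 19-JAN-13 (2013-01-19)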
Option 2:
The Excel function would be =TEXT(B2,"MM/DD/YY"), to convert the Excel date value stored in B2 into text. Then use that text value in Oracle.
If considering 1900-01-01 as the start date:
SELECT TO_CHAR ( TO_DATE ( '1900-01-01', 'YYYY-MM-DD' ) + 41293,
                 'DD/MON/YYYY' )
FROM DUAL
Microsoft's Documentation
Excel stores dates as sequential serial numbers so that they can be used in calculations. January 1, 1900 is serial number 1, and January 1, 2008 is serial number 39448 because it is 39,447 days after January 1, 1900.
Excel has a bug (or "feature") where it considers 1900 to be a leap year: day 60 is 1900-02-29, but that day never existed, and a correction needs to be applied for this erroneous day.
It does also state that:
Microsoft Excel correctly handles all other leap years, including century years that are not leap years (for example, 2100). Only the year 1900 is incorrectly handled.
Therefore only a single correction is required.
So:
Before 1900-03-01 you can use DATE '1899-12-31' + value.
On or after 1900-03-01 you can use DATE '1899-12-30' + value.
Which can be put into a CASE statement:
SELECT CASE
WHEN value >= 1 AND value < 60
THEN DATE '1899-12-31' + value
WHEN value >= 60 AND value < 61
THEN NULL
WHEN value >= 61
THEN DATE '1899-12-30' + value
END AS converted_date
FROM your_table
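As a quick check against the serial from the question (41293 falls in the on-or-after 1900-03-01 branch):
SELECT TO_CHAR(DATE '1899-12-30' + 41293, 'DD/MON/YYYY') AS converted_date
FROM dual;
-- 19/JAN/2013, matching the 01/19/13 that Excel displays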
