I have this expression in Qlik Sense syntax, and I want to convert it into Power BI DAX syntax. How do I do that?
if([Curency] = 'USD',
((Sum({< Fiscal_Year = {'2016','2017'},[Type_Billing] ={'N2','Z2'}>}[BSMA])
/Sum({< Fiscal_Year = {'2016','2017'},[Type_Billing] ={'N2','Z2'}>}[BSA_NETWR]))*vCurrency),
Sum({< Fiscal_Year = {'2016','2017'},[Type_Billing] ={'N2','Z2'}>}[BSMA])
/Sum({< Fiscal_Year = {'2016','2017'},[Type_Billing] ={'N2','Z2'}>}[BSA_NETWR]))
Note: I currently don't have a Power BI file.
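DAX has no set-analysis syntax, so the usual pattern is a measure that applies the same filters through CALCULATE. Below is a sketch, not a drop-in answer: the fact table is assumed to be named 'Fact', [vCurrency] is assumed to be a measure (or what-if parameter) holding the conversion rate, and the [Curency] column name is kept spelled as in the question:

```dax
Billing Rate :=
VAR Num =
    CALCULATE (
        SUM ( 'Fact'[BSMA] ),
        'Fact'[Fiscal_Year] IN { "2016", "2017" },
        'Fact'[Type_Billing] IN { "N2", "Z2" }
    )
VAR Den =
    CALCULATE (
        SUM ( 'Fact'[BSA_NETWR] ),
        'Fact'[Fiscal_Year] IN { "2016", "2017" },
        'Fact'[Type_Billing] IN { "N2", "Z2" }
    )
RETURN
    IF (
        SELECTEDVALUE ( 'Fact'[Curency] ) = "USD",
        DIVIDE ( Num, Den ) * [vCurrency],
        DIVIDE ( Num, Den )
    )
```

DIVIDE also guards against division by zero, which the original Qlik expression does not.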
I've been trying to tune hyperparameters for a survival SVM model. I used the AutoTuner class from the mlr3tuning package. I want to tune on the whole dataset (no train/test split). I found the "insample" resampling class; the mlr3 dictionary says it "Uses all observations as training and as test set."
My question is: can the "insample" resampling in mlr3tuning be used for hyperparameter tuning on the full dataset, and if so, why does passing the tuned hyperparameters to the survivalsvm function from the survivalsvm package give a different concordance index?
This is the code I used for hyperparameter tuning:
library(mlr3)
library(mlr3proba)           # as_task_surv(), surv.cindex measure
library(mlr3extralearners)   # surv.svm learner
library(mlr3tuning)

data(veteran, package = "survival")
set.seed(1)
task = as_task_surv(x = veteran, time = 'time', event = 'status')
learner = lrn("surv.svm", type = "hybrid", diff.meth = "makediff3",
gamma.mu = c(0.1, 0.1),kernel = 'rbf_kernel')
search_space = ps(gamma = p_dbl(2^-5, 2^5),mu = p_dbl(2^-5, 2^5))
search_space$trafo = function(x, param_set) {
  x$gamma.mu = c(x$gamma, x$mu)
  x$gamma = x$mu = NULL
  x
}
ssvm_at = AutoTuner$new(
learner = learner,
resampling = rsmp("insample"),
search_space = search_space,
measure = msr('surv.cindex'),
terminator = trm('evals', n_evals = 5),
tuner = tnr('grid_search'))
ssvm_at$train(task)
And this is the code I tried with the survivalsvm function from the survivalsvm package:
survsvm.reg <- survivalsvm(Surv(veteran$time , veteran$status ) ~ .,
data = veteran,
type = "hybrid", gamma.mu = c(32,32),diff.meth = "makediff3",
opt.meth = "quadprog", kernel = "rbf_kernel")
pred.survsvm.reg <- predict(survsvm.reg,veteran)
conindex(pred.survsvm.reg, veteran$time)
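Before comparing the two concordance indexes, it is worth checking which (post-trafo) hyperparameter values the tuner actually selected, since those are what should be passed to survivalsvm. A sketch of how to inspect this on the trained AutoTuner (field names as exposed by mlr3tuning; run after ssvm_at$train(task)):

```r
# Tuned values after the trafo, i.e. the gamma.mu vector actually used
ssvm_at$tuning_result$learner_param_vals

# C-index of the tuned model on the full task, for an apples-to-apples
# comparison with conindex() on the survivalsvm fit
ssvm_at$predict(task)$score(msr("surv.cindex"))
```

If these values differ from the c(32, 32) used in the survivalsvm call, the two models are simply not the same model.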
I have a Date/Time column in the format "13.10.2020 06:08:43" and I need to calculate its "age" relative to the current time. The output should look like this example: "4h", since it's 10:08 right now and the Date/Time value is from 4 hours ago. I need this in the M language so I can use it in Visual Studio 2019.
I used:
let
Source = Sql.Database("xxxx", "xxxxx", [Query="SELECT#(lf)#(tab)storeid as 'Shop'#(lf)#(tab), concat('SCO', posnr) as POS#(lf)#(tab), concat(datediff(hour,[LastTransactionDate]#(lf)#(tab), CURRENT_TIMESTAMP),' h') as 'Age'#(lf)#(tab), POSIPaddress as 'IP'#(lf)#(tab),[PosNr]#(lf) ,[AdapterPosOrGroupId]#(lf) ,[UpdatedOn]#(lf) ,[LastTransactionDate]#(lf) ,[LastStartDate]#(lf) ,[Open]#(lf) ,[Locked]#(lf) ,[CashDisabled]#(lf) ,[IsCashEnabled]#(lf) ,[CashDevicesTraficLight]#(lf) ,[POSIPAddress]#(lf) ,[ScoPosServiceVersion]#(lf) ,[WinScoVersion]#(lf) ,[StoreType]#(lf)FROM [LaneEventDatabase ].[dbo].[POS.LaneCurrentState]"]),
#"Added Conditional Column" = Table.AddColumn(Source, "Custom", each if [CashDevicesTraficLight] = "green" then 1 else if [CashDevicesTraficLight] = "yellow" then 2 else if [CashDevicesTraficLight] = "red" then 3 else 4),
#"Changed Type" = Table.TransformColumnTypes(#"Added Conditional Column",{{"Custom", type number}})
in
#"Changed Type"
which works just great, but Visual Studio doesn't like the way the source is handled and throws this error while deploying:
Failed to save modifications to the server. Error returned: 'An M partition uses a data function which results in access to a data source different from those defined in the model.'
Just create a new custom column using the code below:
= Text.From(
(Duration.Days(DateTime.LocalNow()-[date]) * 24) +
Duration.Hours(DateTime.LocalNow()-[date])
) & "h"
Here is a sample output:
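If the goal is to drop the SQL-side Age computation that Visual Studio rejects, the same logic can be wired into the question's own query as an M step instead. A sketch, assuming it is appended after the existing #"Changed Type" step and using the [LastTransactionDate] column from the question's SQL:

```m
#"Added Age" = Table.AddColumn(
    #"Changed Type",
    "Age",
    each Text.From(
        (Duration.Days(DateTime.LocalNow() - [LastTransactionDate]) * 24)
        + Duration.Hours(DateTime.LocalNow() - [LastTransactionDate])
    ) & "h",
    type text)
```

Note that DateTime.LocalNow() is evaluated at refresh time, so the "age" is as of the last refresh, not live.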
Hi, I am trying to run a file named query.sql using SQL*Plus on cmd, but I get 'no rows selected' inside the CSV, while the same query returns results when run in Oracle SQL Developer.
I ran the following command in cmd:
sqlplus <username>/<password>@sid @query.sql > output.csv
The query inside query.sql is
SELECT
SR.SOID,
EL.SOID,
EO.EPTNUMBER,
EO.SLELABEL,
EL.LOGTEXT,
utl_raw.cast_to_varchar2(dbms_lob.substr(SR.SODATA, 3000, 1)) AS NAC_DATA
FROM
SORECORD SR,
SOAPPKEY SK,
EPTORDER EO,
EPTLOG EL
WHERE
SK.APPKEYNAME = 'MSISDN'
AND SK.appkeyvalue = '<ctn here>'
AND SK.SOID = SR.SOID
AND SR.SOTYPE = 'NAC'
AND SR.Receipttimestamp LIKE '07-JAN-20'
AND SR.SOID = EO.SOID
AND EO.EPTNUMBER = EL.EPTNUMBER
AND EL.SOID LIKE TO_CHAR((
Select
SUBSTR(TO_CHAR(SR.SOID), 1, LENGTH(SR.SOID) - 1)
FROM
SORECORD SR, SOAPPKEY SK
WHERE
SK.APPKEYNAME = 'MSISDN'
AND SK.appkeyvalue = '<ctn here>'
AND SK.SOID = SR.SOID
AND SR.SOTYPE = 'NAC'
AND SR.Receipttimestamp LIKE '07-JAN-20')) || '%'
AND EL.SOID > TO_NUMBER((
Select
SUBSTR(TO_CHAR(SR.SOID), 1, LENGTH(SR.SOID) - 1)
FROM
SORECORD SR, SOAPPKEY SK
WHERE
SK.APPKEYNAME = 'MSISDN'
AND SK.appkeyvalue = '<ctn here>'
AND SK.SOID = SR.SOID
AND SR.SOTYPE = 'NAC'
AND SR.Receipttimestamp LIKE '07-JAN-20') || '0');
I tried other queries to generate CSVs and they worked fine. I have no clue why this one gives 'no rows selected' from the SQL*Plus command when the same query fetches results in Oracle SQL Developer.
Can anyone help me pinpoint the issue?
If the SR.Receipttimestamp column's datatype is DATE, why are you comparing it to a string? '07-JAN-20' is a string, not a date. You're relying on Oracle's ability to implicitly convert that string into a valid date value, but that doesn't always work. I presume that's what is breaking your query.
I'd suggest you rewrite it; a simple option, just to see whether it helps, is
where trunc(SR.Receipttimestamp) = date '2020-01-07'
i.e.
with trunc, "remove" the time component (it sets the time portion of that column's value to 00:00)
compare it to a date literal, which is always in the date 'yyyy-mm-dd' format
@Littlefoot, I made the change you mentioned and it worked, but now I'm facing a few issues in the CSV file.
I used the spool script below:
set pagesize 0
set feed off
set term off
spool '<my path here>'
SELECT
SR.SOID,
EL.SOID,
EO.EPTNUMBER,
EO.SLELABEL,
EL.LOGTEXT,
utl_raw.cast_to_varchar2(dbms_lob.substr(SR.SODATA, 3000, 1)) AS NAC_DATA
FROM
SORECORD SR,
SOAPPKEY SK,
EPTORDER EO,
EPTLOG EL
WHERE
SK.APPKEYNAME = 'MSISDN'
AND SK.appkeyvalue = '07996703863'
AND SK.SOID = SR.SOID
AND SR.SOTYPE = 'NAC'
AND trunc(SR.Receipttimestamp) = date '2020-01-07'
AND SR.SOID = EO.SOID
AND EO.EPTNUMBER = EL.EPTNUMBER
AND EL.SOID LIKE TO_CHAR((
Select
SUBSTR(TO_CHAR(SR.SOID), 1, LENGTH(SR.SOID) - 1)
FROM
SORECORD SR, SOAPPKEY SK
WHERE
SK.APPKEYNAME = 'MSISDN'
AND SK.appkeyvalue = '07996703863'
AND SK.SOID = SR.SOID
AND SR.SOTYPE = 'NAC'
AND trunc(SR.Receipttimestamp) = date '2020-01-07')) || '%'
AND EL.SOID > TO_NUMBER((
Select
SUBSTR(TO_CHAR(SR.SOID), 1, LENGTH(SR.SOID) - 1)
FROM
SORECORD SR, SOAPPKEY SK
WHERE
SK.APPKEYNAME = 'MSISDN'
AND SK.appkeyvalue = '07996703863'
AND SK.SOID = SR.SOID
AND SR.SOTYPE = 'NAC'
AND trunc(SR.Receipttimestamp) = date '2020-01-07') || '0');
spool off
and the command used in cmd:
sqlplus <username>/<password>@sid @<path of spool txt>
Now I am getting CSV data like this:
5764377,5764371,1,EEVMS,
Tue Jan 07 08:29:49:887 2020SOAP MESSAGE SENT:
<S:Envel
5764377,5764375,1,EEVMS,
Tue Jan 07 08:30:49:900 2020SOAP MESSAGE SENT:
<S:Envel
5764377,5764376,1,EEVMS,
Tue Jan 07 08:31:50:003 2020SOAP MESSAGE SENT:
<S:Envel
Now I am facing two issues with this:
I am getting only partial data from EL.LOGTEXT, whose datatype is CLOB
The utl_raw.cast_to_varchar2(dbms_lob.substr(SR.SODATA, 3000, 1)) AS NAC_DATA column is missing from the CSV altogether
Here is a snapshot of the query output in SQL Developer. I actually want the data in this tabular format in my CSV:
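For what it's worth, the truncated LOGTEXT output is consistent with SQL*Plus's default LONG fetch size of only 80 bytes, and wide rows can also be cut off or wrapped by the default line width. Raising both at the top of the spool script is the usual first thing to try; a sketch (the exact limits are arbitrary here):

```sql
set long 100000        -- bytes of each CLOB/LONG value to fetch (default is 80)
set longchunksize 100000
set linesize 32767     -- widen the line so trailing columns are not wrapped mid-value
set pagesize 0
set feed off
set term off
```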
The title pretty much says it all. I wrote this big, ugly query that works; however, it takes forever to run. I have no experience whatsoever with optimizing SQL, and I am pretty new to the language and its functions. I realize none of you will be familiar with the environment I am working in, but I was just wondering whether any basic things jump out at you. Thanks!
SELECT MAX(INVOICE.invoice_id) invoice_id, MAX(invoice.vendor_num) vendor_num, max(invoice.vendor_name) vendor_name,
MAX(invoice.po_number) po_number, MAX(invoice.invoice_date) invoice_date, MAX(invoice.invoice_num) invoice_num,
MAX(invoice.terms_name) terms_name, MAX(invoice.invoice_amount) invoice_amount, MAX(invoice.amount_applicable_to_discount) amount_applicable_to_discount,
MAX(invoice.amount_paid) amount_paid, MAX(invoice.payment_date) payment_date, MAX(invoice.document_id) document_id,
MAX(invoice.filename) filename,
STAMPS.page_markups_view_id, STAMPS.TEXT, STAMPS.TOOL_NAME
FROM
(SELECT DISTINCT inv.invoice_id,
vendor.segment1 vendor_num,
vendor.vendor_name,
MAX(poh.segment1) po_number,
inv.invoice_date,
inv.invoice_num,
terms.name terms_name,
inv.invoice_amount,
inv.amount_applicable_to_discount,
inv.amount_paid,
pmt.check_date payment_date,
path.document_id,
vendor.segment1 || '-' || inv.invoice_num || '.pdf' filename
FROM apps.ap_invoices_all inv,
apps.ap_invoice_distributions_all dist,
apps.po_distributions_all podi,
apps.ap_invoice_payment_history_v pmt,
apps.fnd_attached_docs_form_vl fnd,
markview.mv_page_image_paths path,
apps.po_vendors vendor,
apps.po_headers_all poh,
apps.ap_terms terms
WHERE inv.invoice_id = to_number(fnd.pk1_value)
AND inv.invoice_id = dist.invoice_id
AND poh.po_header_id(+) = podi.po_header_id
AND podi.po_distribution_id(+) = dist.po_distribution_id
AND fnd.file_name = to_char(path.document_id)
AND inv.invoice_id = pmt.invoice_id
AND fnd.category_description = 'MarkView Document'
AND fnd.entity_name = 'AP_INVOICES'
AND inv.vendor_id = poh.vendor_id(+)
AND inv.terms_id = terms.term_id
AND inv.vendor_id = vendor.vendor_id
AND path.platform_name = 'UNIX_FS_TO_DOC_SERVER'
AND pmt.void = 'N'
and inv.invoice_id = 1908784
GROUP BY
inv.invoice_id,
vendor.segment1 ,
vendor.vendor_name,
inv.invoice_date,
inv.invoice_num,
terms.name ,
inv.invoice_amount,
inv.amount_applicable_to_discount,
inv.amount_paid,
path.document_id,
pmt.check_date,
vendor.segment1 || '-' || inv.invoice_num || '.pdf'
) INVOICE,
( SELECT mp.document_id,
moi.markup_object_id,
moi.page_markups_view_id,
moi.text,
mvt.tool_name,
mp.page_id
FROM markview.mv_markup_object moi,
markview.mv_tool mvt,
markview.mv_page_markups_view mpmv,
markview.mv_page mp
WHERE moi.tool_id = mvt.tool_id
AND mp.page_id = mpmv.page_id
AND mpmv.page_markups_view_id = moi.page_markups_view_id
AND mvt.tool_id IN
(
SELECT mvt.tool_id
FROM markview.mv_tool
WHERE mvt.tool_name IN ( 'Green Text',
'Blue Sticky Note' ) )) STAMPS
WHERE invoice.document_id = stamps.document_id(+)
GROUP BY
page_markups_view_id, TEXT, TOOL_NAME
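One generic thing that may jump out: the comma joins and the nested IN subquery make the STAMPS block hard to read, and that subquery appears to just restate the tool_name filter (its mvt alias refers to the outer query anyway). A rewrite sketch of that block alone, over the same tables, with the filter folded in:

```sql
SELECT mp.document_id,
       moi.markup_object_id,
       moi.page_markups_view_id,
       moi.text,
       mvt.tool_name,
       mp.page_id
  FROM markview.mv_markup_object moi
  JOIN markview.mv_tool mvt
    ON mvt.tool_id = moi.tool_id
  JOIN markview.mv_page_markups_view mpmv
    ON mpmv.page_markups_view_id = moi.page_markups_view_id
  JOIN markview.mv_page mp
    ON mp.page_id = mpmv.page_id
 WHERE mvt.tool_name IN ('Green Text', 'Blue Sticky Note')
```

Beyond readability, the usual starting points are checking that the join columns (invoice_id, pk1_value, document_id) are indexed and avoiding to_number/to_char conversions inside join predicates where possible, since they can prevent index use.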
I'm using SpagoBI with an Oracle DBMS, but when I want to get values where the year is between 2010 and 2014 I get the error: right parenthesis missing.
select (sum(d.taux_depot *100)/count(r.trimestre) ) as taux , trimestre as trimestre
from datamart_cnss d , ref_temps r
where d.ID_TEMPS = r.ID_TEMPS
and (case when $P{anneecnss}=123 then (r.annee between 2010 and 2014 ) else $P{anneecnss} end) = r.annee
and (case when To_CHAR($P{regimecnss})=123 then To_CHAR(d.id_regime) else To_CHAR($P{regimecnss}) end) = To_CHAR(d.id_regime)
and (case when To_CHAR($P{bureau_cnss})=123 then To_CHAR(d.id_bureau) else To_CHAR($P{bureau_cnss}) end) = To_CHAR(d.id_bureau)
group by trimestre
order by trimestre asc
Thank you
This is not a valid construct:
case when $P{anneecnss}=123 then (r.annee between 2010 and 2014 ) else $P{anneecnss} end
You cannot have a condition inside the then part, only a value or an expression that you can then compare with something else.
To apply that filter selectively you don't need a case expression; use and and or instead. I think this is equivalent:
where d.ID_TEMPS = r.ID_TEMPS
and (($P{anneecnss} = 123 and r.annee between 2010 and 2014)
or ($P{anneecnss} != 123 and $P{anneecnss} = r.annee))
and ($P{regimecnss} = 123 or To_CHAR($P{regimecnss}) = To_CHAR(d.id_regime))
and ($P{bureau_cnss} = 123 or To_CHAR($P{bureau_cnss}) = To_CHAR(d.id_bureau))
...