Power BI: Track a list of Customers Over Time - reporting

I've got a pickle with a Power BI report. I have a set of customers I want to track. This set was collected by filtering on several columns as of our start date; that is, they all matched the same criteria on a certain date.
The criteria were NumLicenses under 10,000, %Adherence below .1, and NumFeatures >= 3.
In a Database, this would look like this:
╔═══════════════╦═══════════╦═════════════╦════════════╦═════════════╗
║ Customer Name ║ StartDate ║ NumLicenses ║ %Adherence ║ NumFeatures ║
╠═══════════════╬═══════════╬═════════════╬════════════╬═════════════╣
║ Customer A    ║ 2/21/2018 ║        6000 ║        .08 ║           5 ║
║ Customer B    ║ 2/21/2018 ║        4400 ║        .01 ║           4 ║
║ Customer C    ║ 2/21/2018 ║        2150 ║        .07 ║           4 ║
╚═══════════════╩═══════════╩═════════════╩════════════╩═════════════╝
I want to track this set of customers over time, so I want to see how the same set is doing the next week. In the database, that would look like this:
╔═══════════════╦═══════════╦═════════════╦════════════╦═════════════╗
║ Customer Name ║ StartDate ║ NumLicenses ║ %Adherence ║ NumFeatures ║
╠═══════════════╬═══════════╬═════════════╬════════════╬═════════════╣
║ Customer A    ║ 2/28/2018 ║        6000 ║        .11 ║           7 ║
║ Customer B    ║ 2/28/2018 ║        4400 ║        .01 ║           4 ║
║ Customer C    ║ 2/28/2018 ║        2150 ║        .07 ║           2 ║
╚═══════════════╩═══════════╩═════════════╩════════════╩═════════════╝
So, with my current criteria as filters in PowerBI, Customers A and C would not show up in reporting on week 2 because they no longer fit the criteria.
I do not have access to the database from where this data is being pulled; I can only update the report itself and none of the queries.
I am interested in being able to see how the set of customers from Week 1 is faring on week 2 (and week 3, and so forth) even though they no longer match the criteria in the filters. I am also interested in seeing the same information on new customers that join in later weeks with stats matching the initial criteria.
My issue here is that I'm not sure how to calculate a column that flags a customer if they matched criteria on a certain date, because this seems to filter all of the data to only that date.
Is what I'm asking clear, and is it possible?

It's simple to set up a calculated column that checks the criteria per row:
Criteria = IF(Customers[NumLicenses] < 10000 &&
              Customers[Adherence] < 0.1 &&
              Customers[NumFeatures] >= 3,
              "Meets", "Fails")
Given this column, you can create another calculated column that checks whether the customer has ever met the criteria (this works because "Meets" sorts after "Fails", so MAX returns "Meets" whenever any of that customer's rows meets the criteria):
CriteriaEverMet =
CALCULATE(MAX(Customers[Criteria]),
          ALLEXCEPT(Customers, Customers[CustomerName])) = "Meets"
Here's another formula that gives the same result:
CriteriaEverMet =
"Meets" IN CALCULATETABLE(VALUES(Customers[Criteria]),
                          ALLEXCEPT(Customers, Customers[CustomerName]))
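Outside Power BI, the same two-step logic (a row-level flag, then a per-customer "ever met" flag that ignores the date) can be sketched in pandas. This is only an illustration of the idea, not part of the report itself; the sample data is the two weeks from the question:

```python
import pandas as pd

# The two weekly snapshots from the question, stacked into one table
df = pd.DataFrame({
    "CustomerName": ["Customer A", "Customer B", "Customer C"] * 2,
    "StartDate":    ["2/21/2018"] * 3 + ["2/28/2018"] * 3,
    "NumLicenses":  [6000, 4400, 2150, 6000, 4400, 2150],
    "Adherence":    [0.08, 0.01, 0.07, 0.11, 0.01, 0.07],
    "NumFeatures":  [5, 4, 4, 7, 4, 2],
})

# Row-level check: the "Criteria" calculated column
df["Meets"] = (
    (df["NumLicenses"] < 10000)
    & (df["Adherence"] < 0.1)
    & (df["NumFeatures"] >= 3)
)

# Per-customer "ever met" flag, ignoring the date
# (the role ALLEXCEPT over CustomerName plays in the DAX version)
df["CriteriaEverMet"] = df.groupby("CustomerName")["Meets"].transform("max")
```

Filtering on `CriteriaEverMet` instead of the raw criteria keeps Customers A and C visible in week 2 even though their week-2 rows fail the check.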


About Oracle SQL process

I want to follow a SQL transaction in Oracle. We have a software tool on an Oracle 10g system. The program is slowing down somewhere and I want to find that part. I could not see the event log. What do you think I can do? Is there 3rd-party software?
Thanks.
Oracle offers tracing tools; see the documentation - Using Application Tracing Tools in the Performance Tuning Guide.
TKPROF is the reliable one: it will show you exactly what is going on when you use the application, along with timings, so - once you examine the result - you'll be able to take further steps and tune your code.
Sample output (just to show what I'm referring to):
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1     29.60      60.68     266984      43776     131172      28144
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2     29.60      60.68     266984      43776     131172      28144
                              ^^^^^
                              This!
So, if you find that something takes a minute (60 seconds, right?) to complete, that might be suspicious.

Cross lookup in oracle without creating a table

I have a list of 20 records mapping year to a number, from 2001 to 2021. For a couple of reasons these cannot be loaded into a table, and I do not have permissions to create temporary tables. This lookup means I can't just run a single query in Oracle; I have to export and join with a script. Is there a way I could just do the lookup in memory? I could write a CASE WHEN statement to handle each of the 20 cases, but is there some smoother way to check values against a list in Oracle when you can't write to a table in between?
If I understood you correctly, a CTE might help:
SQL> with years as
2 (select 2000 + level as year
3 from dual
4 connect by level <= 21
5 )
6 select year
7 from years
8 /
YEAR
----------
2001
2002
2003
2004
<snip>
2020
2021
21 rows selected.
SQL>
You'd now join years with other tables, e.g.
with years as
...
select y.year, e.hiredate
from years y join employees e on e.year = y.year
where ...
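For intuition, the CONNECT BY LEVEL trick is just a row generator. Here is the same sequence in plain Python (nothing Oracle-specific, only an illustration of what the CTE produces):

```python
# "select 2000 + level from dual connect by level <= 21" generates
# LEVEL = 1..21, i.e. the years 2001..2021 -- the 21 rows shown above.
years = [2000 + level for level in range(1, 22)]
```

In the query, that generated set then behaves like any other table you can join against.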

How to get BigQuery table expiration date via query / cli / go?

I can't find a way to extract table expiration date that is not via the console (https://console.cloud.google.com/).
We maintain thousands of tables in BQ and we want to enforce the use of table expiration date - so the only way is to collect the data automatically.
Is this possible via query/CLI/Go/Python/Perl/whatever?
This can be done by querying INFORMATION_SCHEMA.TABLE_OPTIONS:
SELECT * FROM `my_project.my_dataset.INFORMATION_SCHEMA.TABLE_OPTIONS`
where option_name='expiration_timestamp'
The value will be in the option_value column.
If you want to extract the expiration time from a number of tables in a dataset, you can refer to this documentation [1] (the same query provided by @EldadT), which returns the catalog of tables in a dataset with the expiration time option.
Therefore, if you want to create a script in Python to get the result of this query, you can also check the BigQuery client library [2] to run that query and get the expiration time for each table in a dataset.
[1] https://cloud.google.com/bigquery/docs/tables#example_1_2
[2] https://cloud.google.com/bigquery/docs/reference/libraries#client-libraries-usage-python
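A minimal Python sketch of that approach, assuming the google-cloud-bigquery client library is installed and credentials are configured (the project and dataset names are placeholders):

```python
def build_expiration_query(project: str, dataset: str) -> str:
    # option_name is the filter key; the timestamp itself lives in option_value
    return (
        f"SELECT table_name, option_value "
        f"FROM `{project}.{dataset}.INFORMATION_SCHEMA.TABLE_OPTIONS` "
        f"WHERE option_name = 'expiration_timestamp'"
    )

def fetch_expirations(project: str, dataset: str) -> dict:
    # Requires: pip install google-cloud-bigquery, plus application credentials
    from google.cloud import bigquery
    client = bigquery.Client(project=project)
    rows = client.query(build_expiration_query(project, dataset)).result()
    return {row.table_name: row.option_value for row in rows}
```

Looping `fetch_expirations` over all datasets in the project would give the automated inventory the question asks for.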
You can use BigQuery CLI:
bq show mydataset.mytable
Output:
  Last modified          Schema        Total Rows   Total Bytes   Expiration Time   Partitioning   Clustered Fields   Labels
 ----------------- ------------------- ------------ ------------- ----------------- -------------- ------------------ --------
  16 Aug 10:42:13   |- col_1: integer   7            106           21 Aug 10:42:13
                    |- col_2: string

display data in table with column name

I have a table with 6 locations, with staff allocated to an office. How do I display them in a Windows Forms table with the column name, but not display a column if it is empty? Say I have the following:
office1   office2   office3   office4   office5   office6
-------   -------   -------   -------   -------   -------
Dave      Bob       Ed
Alan      Jeff      John
Tom

ORACLE - insert records by multiple users on PK table

In oracle I have a table say EMPLOYEE(EMPID,EMPNAME) with primary key on EMPID.
Two users are having insert privileges to this table say USER1 and USER2.
USER1 inserts a record with EMPID=1 and EMPNAME='XYZ' and does not commit. If USER2 then tries to insert the same record, EMPID=1 and EMPNAME='XYZ', the screen hangs until USER1 commits or rolls back.
Is there an option for both users to insert this record without any hang, so that the user who commits second gets the PK violation error?
Thanks,
Niju Jose
In a single-user database, the user can modify data in the database without concern for other users modifying the same data at the same time. However, in a multiuser database, the statements within multiple simultaneous transactions can update the same data. Transactions executing at the same time need to produce meaningful and consistent results.
Isolation Level    Dirty Read     Nonrepeatable Read   Phantom Read
----------------   ------------   ------------------   ------------
Read uncommitted   Possible       Possible             Possible
Read committed     Not possible   Possible             Possible
Repeatable read    Not possible   Not possible         Possible
Serializable       Not possible   Not possible         Not possible
In your case, Oracle acquires a row-level lock. The insert case is simpler: inserted rows are locked, but they are also not seen by other users because they are not committed. When the user commits, the locks are released too, so other users can view these rows, update them, or delete them.
Read here about Data concurrency and consistency.
Also see here for more details on the insert mechanism.
To explain more, I reproduce Florin's answer here.
For example, let tableA(col1 number, col2 number) be a table with this data in it:
col1 | col2
1 | 10
2 | 20
3 | 30
If user John issues at time1:
update tableA set col2=11 where col1=1;
it will lock row 1.
At time2, user Mark issues:
update tableA set col2=22 where col1=2;
The update will work, because row 2 is not locked.
Now the table looks like this in the database:
col1 | col2
1 | 11 --locked by john
2 | 22 --locked by mark
3 | 30
For Mark, the table is (he does not see the uncommitted changes):
col1 | col2
1 | 10
2 | 22
3 | 30
For John, the table is (he does not see Mark's uncommitted changes):
col1 | col2
1 | 11
2 | 20
3 | 30
If Mark tries at time3:
update tableA set col2=12 where col1=1;
his session will hang until time4, when John issues a commit. (A rollback would also unlock the rows, but the changes would be lost.)
The table is (in the db, at time4):
col1 | col2
1 | 11
2 | 22 --locked by mark
3 | 30
Immediately after John's commit, row 1 is unlocked and Mark's update will do its job:
col1 | col2
1 | 12 --locked by mark
2 | 22 --locked by mark
3 | 30
Let's say Mark issues a rollback at time5:
col1 | col2
1 | 11
2 | 20
3 | 30
So, your hang is due to the fact that the Oracle engine is waiting for a commit or rollback from USER1 before giving a definitive response to USER2.
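The block-then-fail sequence can be modeled with a toy sketch in plain Python (not Oracle: per-key locks stand in for row locks, and the duplicate-key check only happens once the lock is obtained, which is exactly why the second session must wait):

```python
import threading

class ToyEmployeeTable:
    """Toy model of the behavior: a second insert of the same key blocks
    until the first session commits, then fails with a PK violation."""
    def __init__(self):
        self.committed = {}     # EMPID -> EMPNAME, visible to all sessions
        self.row_locks = {}     # EMPID -> lock held between insert and commit
        self.guard = threading.Lock()

    def insert(self, empid, empname):
        with self.guard:
            lock = self.row_locks.setdefault(empid, threading.Lock())
        lock.acquire()          # a second session blocks here ("the hang")
        if empid in self.committed:
            lock.release()
            raise KeyError(f"PK violation on EMPID={empid}")
        return (empid, empname, lock)   # uncommitted row; lock still held

    def commit(self, pending):
        empid, empname, lock = pending
        self.committed[empid] = empname
        lock.release()          # other sessions may now proceed
```

With this model, USER1's insert returns immediately; a thread playing USER2 blocks inside insert() until USER1 commits, and only then raises the PK violation, mirroring the behavior the question describes.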
