I am new to Tableau and one of my tasks is to show the top 10 brands.
I have 3 columns: Profit, Sales, and NPS.
The data looks something like this:
| | Profits | Sales | NPS |
| --- | --- | --- | --- |
| Nike | 10m EUR | 32m EUR | 0.91 |
| Adidas | 6m EUR | 21m EUR | 0.88 |
| Levi | 12m EUR | 27m EUR | 0.94 |
I know how to select the top 10 brands based on Profits, Sales, or NPS individually, but how do I make it so that when the user sorts by Profits, the view shows the 10 most (or least) profitable brands depending on the sort direction (ascending or descending)? The same should apply to Sales and NPS.
Outcome when sorting by Profits (desc):
| | Profits | Sales | NPS |
| --- | --- | --- | --- |
| Levi | 12m EUR | 27m EUR | 0.94 |
| Nike | 10m EUR | 32m EUR | 0.91 |
| Adidas | 6m EUR | 21m EUR | 0.88 |
Outcome when sorting by NPS (asc):
| | Profits | Sales | NPS |
| --- | --- | --- | --- |
| Zignov | -23K EUR | 193K EUR | -0.85 |
| R&R | -94K EUR | 202K EUR | -0.74 |
| Bumble | -133K EUR | 89K EUR | -0.69 |
Current Implementation: Right now, the dashboard displays the top 10 brands by Profits. When I sort them by NPS, it re-sorts the same 10 brands instead of pulling the top 10 by NPS from the complete dataset.
This can be achieved with a properly scoped RANK(). Ranks are table calculations in Tableau, so they can only be used with an aggregated field, and they are functionally similar to a SQL analytic (window) expression.
I helped out another user in the Tableau forum with a near identical problem to yours, my answer is the second one here: https://community.tableau.com/s/question/0D54T00001Ii0QYSAZ/actions-are-not-working-in-top-n-customers-view-while-selecting-any-data-point-in-other-views
The difference for you is that you'll be best off creating two near-identical calcs: one running descending to create the sort, the other running ascending to act as the filter.
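As a rough sketch (the [Sort Field] parameter and the field names here are assumptions based on your columns, not taken from your workbook), the pair of calcs could look something like this:

```
// Rank (Sort) — descending; drives the order of the view
RANK(
  CASE [Sort Field]
    WHEN 'Profit' THEN SUM([Profit])
    WHEN 'Sales'  THEN SUM([Sales])
    WHEN 'NPS'    THEN AVG([NPS])
  END, 'desc')

// Rank (Filter) — the identical expression with 'asc' instead of 'desc';
// drag to Filters, keep values <= 10, and set "Compute Using" to the
// brand dimension so the rank runs over all brands
```

Because table-calculation filters are applied after the view is computed, the filter calc sees the complete brand list rather than an already-filtered subset, which is what fixes the behaviour you describe.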
Let me know how you get on.
Steve
Hello Team,
I would like to know how to extract a trial balance from my journal entry data using Laravel 9 Eloquent:
My Vouchers Table
| id |voucher_date| debit | credit| amount |
|----|------------|-------|-------|-----------|
| 1 | 2021-09-01 | 8 | 2 | 5000.000 |
| 6 | 2021-09-22 | 22 | 17 | 4750.000 |
| 8 | 2021-09-05 | 8 | 3 | 1485.000 |
| 9 | 2021-08-10 | 8 | 6 | 108.000 |
| 10 | 2021-07-07 | 8 | 23 | 98756.000 |
|11 | Etc. | ... |...... |........ |
Accounts table
| id | name | desc | status |
|----|-----------------------------------|-----------------------------------|--------|
| 1 | Assets | Current Assets | 1 |
| 2 | Stockholders equity | Stockholders or Owners equity | 1 |
| 3 | Liability | Liabilities related accounts | 1 |
| 4 | Operating Revenues | Operating Revenues | 1 |
| 5 | Operating Expenses | Operating Expenses | 1 |
| 6 | Non-operating revenues and gains | Non-operating revenues and gains | 1 |
| 7 | Non-operating expenses and losses | Non-operating expenses and losses | 1 |
| 8 | Etc. | More accounts....... | 1 |
My Desired output is like this: (Just an Example)
| Date | Account | Debit | Credit |
|------------|----------------------------------|---------:|----------:|
| 2021-09-01 | Stockholders equity | 0.00 | 5000.00 |
| 2021-09-05 | Liability | 0.00 | 1485.00 |
| 2021-08-10 | Non-operating revenues and gains | 0.00 | 108.00 |
| 2021-07-07 | Land | 0.00 | 98756.00 |
| 2021-02-25 | Land | 21564.00 | 0.00 |
| 2018-07-22 | Land | 3666.00 | 0.00 |
| 2018-05-14 | Non-operating revenues and gains | 0.00 | 489.00 |
| 2018-09-16 | Equipment | 692.00 | 0.00 |
| 2021-04-18 | Non-operating revenues and gains | 4986.00 | 0.00 |
| 2020-04-19 | Land | 4956.00 | 0.00 |
| 2019-03-15 | Buildings Asset | 0.00 | 4988.00 |
| 2019-12-04 | Inventory | 0.00 | 7946.00 |
| 2019-08-25 | Stockholders equity | 0.00 | 19449.00 |
| | | | |
| | Balance |36,990.00 |36,990.00 |
You need to assign a proper foreign key in the vouchers table; then you can simply join to get the desired output. You mentioned that you are using debit and credit as foreign keys, but how can those columns be used to uniquely identify rows in the vouchers table?
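For the lookup itself, a sketch of the underlying SQL (table and column names taken from the post; it assumes the credit column holds the id of the account being credited and debit the account being debited):

```sql
-- Credit side of the trial balance: resolve the credited account's name
SELECT v.voucher_date AS date,
       a.name         AS account,
       0.00           AS debit,
       v.amount       AS credit
FROM vouchers v
JOIN accounts a ON a.id = v.credit

UNION ALL

-- Debit side: same join, against the debit column
SELECT v.voucher_date, a.name, v.amount, 0.00
FROM vouchers v
JOIN accounts a ON a.id = v.debit;
```

In Eloquent this maps to two query-builder joins (e.g. `DB::table('vouchers')->join('accounts', 'accounts.id', '=', 'vouchers.credit')`) combined with `unionAll()`; the balance row at the bottom is just the sum of each column.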
I'm building a small stocks/portfolio tracker and I'm having some trouble retrieving and counting cells of my transactions. Below you can find the dummy data in my transactions table of my database.
// Transactions table
ID | Name | Symbol | Currency | amount_bought | amount_sold | price | commission | bought | portfolio_id
_____________________________________________________________________________________________________________________
1 | Ocugen Inc | FRA.2H51 | EUR | 55 | NULL | 0.29 | 7.51 | 1 | 1
2 | Tesla, Inc | NASDAQ.TSLA | EUR | 5 | NULL | 654.87 | 4.23 | 1 | 1
3 | Ocugen Inc | FRA.2H51 | EUR | NULL | 40 | 1.31 | 7.55 | 0 | 1
I'm using a boolean named bought (final column) in order to "identify" my transaction as either being a sold or bought stock. Next, I want to retrieve all my transactions using my portfolio_id and group them on their name, symbol and currency in order to output the following:
// Desired result:
Name | Symbol | Currency | amount_current | commission_total | bought_total | sales_total
_____________________________________________________________________________________________________
Ocugen Inc | FRA.2H51 | EUR | 15 | 15.06 | 15.95 | 52.4
Tesla, Inc | NASDAQ.TSLA | EUR | 5 | 4.23 | 3274.35 | 0
Currently my code works exactly as I wanted, except for how the rows are grouped. Because I'm using a CASE to calculate the total buys and sales of a single stock, I'm forced to include the bought column in my groupBy(). As a result, my rows are also grouped on bought in addition to name, symbol and currency:
// Current result:
name | symbol | currency | amount_current | commission_total | bought_total | sales_total
_____________________________________________________________________________________________________
Ocugen Inc | FRA.2H51 | EUR | 15 | 7.51 | 15.95 | 0
Tesla, Inc | NASDAQ.TSLA | EUR | 5 | 4.23 | 3274.35 | 0
Ocugen Inc | FRA.2H51 | EUR | NULL | 7.55 | 0 | 52.4
Below you can find my code that generates the result above.
$transactions = Transaction::where('portfolio_id', $portfolio->id)
    ->groupBy('name', 'symbol', 'currency', 'bought')
    ->select([
        'name',
        'symbol',
        'currency',
        DB::raw('sum(amount_bought) - sum(amount_sold) as amount_current'),
        DB::raw('sum(commission) AS commission_total'),
        DB::raw('case when bought = 1 then sum(price) * sum(amount_bought) else 0 end as bought_total'),
        DB::raw('case when bought = 0 then sum(price) * sum(amount_sold) else 0 end as sales_total')
    ])
    ->get();
How can I group my transactions on the stock name, symbol and currency and calculate their totals without grouping them on the bought column?
I don't really know Laravel, but you should be able to use:
$transactions = Transaction::where('portfolio_id', $portfolio->id)
    ->groupBy('name', 'symbol', 'currency')
    ->select([
        'name',
        'symbol',
        'currency',
        // coalesce() guards against groups where every amount is NULL
        // (e.g. a stock with buys but no sales), which would otherwise
        // make the whole difference NULL
        DB::raw('sum(coalesce(amount_bought, 0)) - sum(coalesce(amount_sold, 0)) as amount_current'),
        DB::raw('sum(commission) AS commission_total'),
        DB::raw('sum(case when bought = 1 then price * amount_bought else 0 end) as bought_total'),
        DB::raw('sum(case when bought = 0 then price * amount_sold else 0 end) as sales_total')
    ])
    ->get();
That is, remove bought from the group by and make the case the argument to sum().
I have two tables, 'locations' and 'markets', with a many-to-many relationship between them on the column 'market_id'. A report-level filter has been applied on the column 'entity' from the 'locations' table. Now I need to distinctly count 'location_id' from the 'markets' table where 'active = TRUE'. How can I write a DAX measure such that the distinct count of location_id changes dynamically with the selection made in the report-level filter?
Below is an example of the tables:
locations:
| location_id | market_id | entity | active |
|-------------|-----------|--------|--------|
| 1 | 10 | nyc | true |
| 2 | 20 | alaska | true |
| 2 | 20 | alaska | true |
| 2 | 30 | miami | false |
| 3 | 40 | dallas | true |
markets:
| location_id | market_id | active |
|-------------|-----------|--------|
| 2 | 20 | true |
| 2 | 20 | true |
| 5 | 20 | true |
| 6 | 20 | false |
I'm fairly new to Power BI. Any help will be appreciated.
Here you go:
DistinctLocations = CALCULATE(DISTINCTCOUNT(markets[location_id]), markets[active] = TRUE())
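If the report-level filter on locations[entity] does not reach the markets table through the model relationship (common with many-to-many setups), one option is to push the surviving market ids across explicitly with TREATAS. This is a sketch; it assumes both tables expose market_id exactly as posted:

```
DistinctLocations =
CALCULATE (
    DISTINCTCOUNT ( markets[location_id] ),
    markets[active] = TRUE (),
    TREATAS ( VALUES ( locations[market_id] ), markets[market_id] )
)
```

TREATAS applies whatever market_ids remain after the entity selection on locations as a filter on markets, so the distinct count follows the report-level filter.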
So, I'm a BIRT beginner, and I just tried to get a really simple report from one of my tables in a Postgres DB.
I defined a flat table as data source, which looks like:
+----------------+--------+----------+-------+--------+
| date | store | product | value | color |
+----------------+--------+----------+-------+--------+
| 20160101000000 | store1 | productA | 5231 | red |
| 20160101000000 | store1 | productB | 3213 | green |
| 20160101000000 | store2 | productX | 4231 | red |
| 20160101000000 | store3 | productY | 3213 | green |
| 20160101000000 | store4 | productZ | 1223 | green |
| 20160101000000 | store4 | productK | 3113 | yellow |
| 20160101000000 | store4 | productE | 213 | green |
| .... | | | | |
| 20160109000000 | store1 | productA | 512 | green |
+----------------+--------+----------+-------+--------+
So I would like to add a table / crosstab to my BIRT report which creates a table (followed by a page break) for EVERY store, looking like:
**Store 1**
+----------------+----------+----------+----------+-----+
| | productA | productB | productC | ... |
+----------------+----------+----------+----------+-----+
| 20160101000000 | 3120 | 1231 | 6433 | ... |
| 20160102000000 | 6120 | 1341 | 2121 | ... |
| 20160103000000 | 1120 | 5331 | 1231 | ... |
+----------------+----------+----------+----------+-----+
--- PAGE BREAK ---
....
So what I tried first was to get the standard CrossTab tutorial template of BIRT to work.
I defined the data source and created a data cube with dimension groups for 'store' and 'product', with 'value' as the SUM / detail data; for this example I just selected ONE day.
But the result looks like this:
+--------+----------+----------+----------+----------+-----+----------+
| | productA | productC | productD | productE | ... | productZ |
+--------+----------+----------+----------+----------+-----+----------+
| Store1 | 213 | | 3234 | 897 | ... | 6767 |
| Store2 | 513 | 2213 | 1233 | | ... | 845 |
| Store3 | 21 | | | 32 | ... | |
| Store4 | 123 | 222 | 142 | | ... | |
+--------+----------+----------+----------+----------+-----+----------+
That's because not every product is sold in every store, but the crosstab builds its columns from ALL available products.
So I just have no idea how to dynamically generate different tables with a different (and also dynamic) number of columns.
The second step would then be to get the dates (days) working.
But thanks in advance for every hint or tutorial link on question one ;-)
You can just add a table bound to the complete data source. Select the table and add a group; group by StoreID. You can set the page-break options for each grouping: set the 'after' property to "always excluding last".
BIRT will add a group header; you can add multiple group-header rows to get the layout you're after.
For crosstabs it works in a similar way. After you have added the crosstab to your page, set up the groups on rows and columns, and added the summaries, you can view the data. Select the crosstab, open the Row Area properties, select the page-group settings and add a new page break. Choose the group you want to break on (your StoreID group) and select after: "always excluding last".
I have data like this in a Hive table:
+-------------------+-------+---------+--------+
| _c0 | name | value0 | value1 |
+-------------------+-------+---------+--------+
| 2015-10-07 13:01 | john | 10.0 | 100 |
| 2015-10-07 13:20 | john | 20.0 | 200 |
| 2015-10-07 13:41 | john | 15.0 | 300 |
| 2015-10-07 14:00 | john | 30.0 | 300 |
| 2015-10-07 14:20 | john | 60.0 | 200 |
| 2015-10-07 14:40 | john | 30.0 | 400 |
I need to get hourly averages.
| 2015-10-07 13:00 | john | 15.0 | 200 |
| 2015-10-07 14:00 | john | 40.0 | 300 |
I have an idea of how to do it with a PARTITION BY / OVER clause in PostgreSQL, but I'm not sure how to do this in Hive. One idea would be to split the datetime into date and hour (e.g. "2015-10-07 13") and use GROUP BY with the avg function, but that's probably not the best way.
Any ideas?
You should do it the way you suggested. If you just want the average by date and hour (and probably name), partitioning with an OVER clause is not necessary.
Query:
select date, hour, name, avg(value0) avg0, avg(value1) avg1
from (
    select split(_c0, ' ')[0] date
         , split(split(_c0, ' ')[1], '\\:')[0] hour
         , name
         , value0
         , value1
    from db.table
) x
group by date, hour, name
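A shorter variant (assuming _c0 is always formatted 'yyyy-MM-dd HH:mm', as in the sample data) truncates the timestamp to the hour with substr(), which avoids the subquery:

```sql
-- substr(_c0, 1, 13) keeps 'yyyy-MM-dd HH', e.g. '2015-10-07 13'
select substr(_c0, 1, 13) as date_hour,
       name,
       avg(value0) as avg0,
       avg(value1) as avg1
from db.table
group by substr(_c0, 1, 13), name;
```

On the sample rows this yields 15.0 / 200 for the 13:00 bucket and 40.0 / 300 for 14:00, matching the desired output.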