Using SSRS, I have data with duplicate values in Field1. I need to get only one value for each month.
Field1 | Date       |
----------------------
30     | 01.01.1990 |
30     | 01.01.1990 |
30     | 01.01.1990 |
50     | 02.01.1990 |
50     | 02.01.1990 |
50     | 02.01.1990 |
50     | 02.01.1990 |
40     | 03.01.1990 |
40     | 03.01.1990 |
40     | 03.01.1990 |
It should be an SSRS expression taking the average value of each month, or maybe there are other solutions to get the requested data with an SSRS expression. The requested data in a table:
30 | 01.01.1990 |
50 | 02.01.1990 |
40 | 03.01.1990 |
Hoping someone can help.
There is no SumDistinct function in SSRS, which is a real gap (although CountDistinct does exist). So you obviously can't achieve what you want the easy way. You have two options:
Implement a new stored procedure with SELECT DISTINCT, returning a reduced set of fields so that the rows you need are no longer duplicated. You then use this stored procedure to build a new dataset for your table (see the sketch after these two options). This approach may not be applicable in your case, though.
The other option is to implement your own function, which saves the aggregation state and performs a distinct sum. Take a look at this page; it contains examples of the code you need.
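A minimal sketch of the first option, assuming a SQL Server source; the dbo.SourceTable name is a placeholder for your actual table:

-- Hypothetical stored procedure: DISTINCT collapses the duplicate rows,
-- so the report dataset gets exactly one Field1 value per date.
CREATE PROCEDURE dbo.GetDistinctMonthlyValues
AS
BEGIN
    SET NOCOUNT ON;
    SELECT DISTINCT Field1, [Date]
    FROM dbo.SourceTable;
END

Point a new dataset at this procedure and the table needs no distinct aggregation at the report level.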
Let's say I have multiple columns coming from two different files, like this:
USERNAME | AGE | GENDER | CHILDREN
Joe | 23 | male | 2
Annie | 45 | female | 5
And another one like this :
USERNAME | AGE |
Jonathan | 33 |
Mike | 41 |
And I want to merge the data of the columns that have the same name into one table like this, while keeping the data of the columns that are unique to each file:
USERNAME | AGE | GENDER | CHILDREN
Joe | 23 | male | 2
Annie | 45 | female | 5
Jonathan | 33 | |
Mike | 41 | |
Sorry if the answer is obvious; I'm new to Talend. Thanks.
What tool is available to you?
The Append function in SAS, for example, can do this for you.
You can use the same append approach in Python, R, or whichever other language you intend to use.
For Talend:
Copy the complete subjob1 – copy me sub job and paste it to create a second sub job.
Link the two sub jobs using an onSubjobOK link.
Open tFixedFlowInput, and change "Records from first subjob" to "Records from second subjob".
Open tFileOutputDelimited on the new sub job, and tick the Append option.
Use a tUnite component to accomplish that.
Here is the link to the documentation: https://help.talend.com/r/fr-FR/8.0/orchestration/tunite
Your flow would be:
tFileInput1 (Excel or CSV) ----------------------------------------------+
                                                                         |
                                                                         +--> tUnite --> tLogRow
tFileInput2 (Excel or CSV) --> tMap (add empty GENDER & CHILDREN fields)-+
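For comparison, if the two files were staged into database tables, the same merge is a plain UNION ALL, padding the columns the second file lacks with NULLs (the file1 and file2 table names are hypothetical):

-- file1(username, age, gender, children) and file2(username, age)
-- mirror the two input files.
SELECT username, age, gender, children
FROM file1
UNION ALL
-- Pad the missing columns so both sides have the same shape.
SELECT username, age, NULL AS gender, NULL AS children
FROM file2;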
I want to create a table analysis in AWS QuickSight that shows the number of new users per day and also the total number of users that have registered up until that day, for a specified month.
The following sample table is what I want to achieve in Quicksight.
It shows the daily register count for March:
+-----------+----------------------+----------------------+
| | Daily Register Count | Total Register Count |
+-----------+----------------------+----------------------+
| March 1st | 2 | 42 |
+-----------+----------------------+----------------------+
| March 2nd | 5 | 47 |
+-----------+----------------------+----------------------+
| March 3rd | 3 | 50 |
+-----------+----------------------+----------------------+
| March 4th | 8 | 58 |
+-----------+----------------------+----------------------+
| March 5th | 2 | 60 |
+-----------+----------------------+----------------------+
The "Total Register Count" column above should show the total count of users registered from the beginning up until March 1st, and then for each row it should be incremented with the value from "Daily Register Count"
I'm absolutely scratching my head trying to implement "Total Register Count". I have had some success using the runningSum function, but I need to be able to filter my dataset by month, and runningSum won't count the rows outside of the filtered date range.
My dataset is very simple, it looks like this:
+----+-------------+---------------+
| id | email | registered_at |
+----+-------------+---------------+
| 1  | aaa@aaa.com | 2020-01-01    |
+----+-------------+---------------+
| 2  | bbb@aaa.com | 2020-01-01    |
+----+-------------+---------------+
| 3  | ccc@aaa.com | 2020-01-03    |
+----+-------------+---------------+
| 4  | abc@aaa.com | 2020-01-04    |
+----+-------------+---------------+
| 5  | def@bbb.com | 2020-02-01    |
+----+-------------+---------------+
I hope someone can help me with this.
Thank you!
I am new to QuickSight, but the way I was able to get "Total Register Count" was by creating a calculated field called count and assigning it the fixed value of 1.
Then I created a second calculated field "Total Register Count" with the following formula
runningSum(sum(count), [{registered_at} ASC], [])
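For comparison, if the dataset is backed by a SQL source, the same two columns can be precomputed upstream with a window function (the users table name is an assumption; the columns come from the dataset above):

-- Daily registrations plus a running total across all days.
SELECT
    registered_at,
    COUNT(*) AS daily_register_count,
    SUM(COUNT(*)) OVER (ORDER BY registered_at) AS total_register_count
FROM users  -- hypothetical table holding the rows shown above
GROUP BY registered_at
ORDER BY registered_at;

Because the running total is materialized before any month filter is applied, filtering the result to March no longer resets the count.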
It sounds as if the countOver function would work well for you. You'll need to partition your count by the day of the month (using the extract function). Here is a link to the countOver function documentation:
https://docs.aws.amazon.com/quicksight/latest/user/countOver-function.html
This is called a Level Aware Aggregation in QuickSight. Here is additional information on that:
https://docs.aws.amazon.com/quicksight/latest/user/level-aware-aggregations.html
Here is information on the extract function:
https://docs.aws.amazon.com/quicksight/latest/user/extract-function.html
If I were to take a stab at your formula, it would look like this:
countover(ID,[extract('DD',registered_at)],PRE_FILTER)
Your table would have the registered_at field as the date.
I am setting up a query where I need to rank multiple columns. I was able to sort the first column in descending order and insert an index column. However, I am not able to rank the other columns.
I have included an example below:
Table to show agent performance
Agent  | surveys | rank | outcalls | total calls | outcalls/total calls | rank
Dallas | 80%     | 1    | 50       | 80          | 62.5%                | ?
May    | 75%     | 2    | 90       | 100         | 90.0%                | ?
Summer | 60%     | 3    | 60       | 75          | 80.0%                | ?
So basically, from the example above, I was able to add an index column that ranks the surveys. How can I rank the outcalls/total calls column while still maintaining the rank in the other columns?
In this case, a simple approach would be to sort on outcalls/total calls, add another index column, and then sort on the first rank column if you want to revert to your starting order.
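If the data lives in a database rather than in the query editor, window functions produce all the ranks in one pass, independent of the sort order (the agent_performance table name is an assumption based on the example):

-- Each RANK() uses its own ordering, so neither rank disturbs the other.
SELECT
    agent,
    surveys,
    RANK() OVER (ORDER BY surveys DESC) AS surveys_rank,
    outcalls,
    total_calls,
    1.0 * outcalls / total_calls AS outcalls_ratio,
    RANK() OVER (ORDER BY 1.0 * outcalls / total_calls DESC) AS outcalls_rank
FROM agent_performance;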
I have a table that contains some 640M records, and I'm trying to create an index.
The manner in which I want to select records involves an index like this:
CREATE INDEX index_i9 ON table_a (ORA_HASH(placard_bcd, 128), event);
However, that still returns about 3-4 million records, and from my testing it takes a lengthy amount of time (~12 minutes or so).
Is this a bad idea as an index? I don't think getting 3-4m records should take that long.
Any ideas?
Edit (adding more info):
The table has a bunch of columns but I don't know if I need to list all of them:
table_a
container     NUMBER(19)   NOT NULL,
placard_bcd   VARCHAR2(30) NOT NULL,
event         VARCHAR2(5)  NOT NULL,
bin_number    NUMBER(3),
...
...
It takes about 12 minutes to return all of the records selected via the index above, i.e. all 3-4 million records.
The query used looks something like this:
select placard_bcd, event, bin_number
from table_a
where ora_hash(placard_bcd, 128) = 105
  and event in ('CLOS', 'PASG', 'BUILD');
The explain plan provided is this:
Plan hash value: 4185630329
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 35074 | 4212K| 338 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| TABLE_A | 35074 | 4212K| 338 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | TABLE_A_I9 | 14030 | | 14 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access(("EVENT_TYPE"='BUILD' OR "EVENT_TYPE"='CLOS' OR "EVENT_TYPE"='PASG') AND
ORA_HASH("PLACARD_BCD",128)=105)
Everything seems correct, but it is still taking a while to return the records.
Say I have an order table which contains multiple time columns (spend_time, expire_time, withdraw_time).
Usually I query the table on each of these columns independently, so how do I create the partitions?
order_no | spend_time | expire_time | withdraw_time | spend_amount
A001 | 2017/5/1 | 2017/6/1 | 2017/6/2 | 100
A002 | 2017/4/1 | 2017/4/19 | 2017/4/25 | 500
A003 | 2017/3/1 | 2017/3/19 | 2017/3/25 | 1000
Usually the business need is to calculate the total spend_amount between certain spend_time, expire_time, or withdraw_time values, or a combination of the three.
But cross-combining the 3 time dimensions (each with about 1,000 partitions) can produce a huge number of partitions (1000*1000*1000). Is that OK and efficient?
My solution is to create 3 tables, each partitioned by a different time column. Is this an efficient way to solve the problem?
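A minimal sketch of that three-table idea, assuming Hive-style partitioning (all names are illustrative); each copy is partitioned by the one column it is usually filtered on, so a query on one time dimension only prunes that table's partitions:

-- One copy of the order data per query dimension; a third table
-- partitioned by withdraw_time would follow the same pattern.
CREATE TABLE orders_by_spend (
    order_no      STRING,
    expire_time   DATE,
    withdraw_time DATE,
    spend_amount  DECIMAL(10, 2)
)
PARTITIONED BY (spend_time DATE);

CREATE TABLE orders_by_expire (
    order_no      STRING,
    spend_time    DATE,
    withdraw_time DATE,
    spend_amount  DECIMAL(10, 2)
)
PARTITIONED BY (expire_time DATE);

-- Partition pruning then applies to the filtered dimension:
SELECT SUM(spend_amount)
FROM orders_by_spend
WHERE spend_time BETWEEN '2017-03-01' AND '2017-05-01';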