VS 2010 reporting services grouping

I want to load the list of the groups as well as the data into two separate datatables (or one, but I don't see how that's possible). Then I want to apply the grouping like this:
Groups
A
B
Bar
C
Car
Data
Ale
Beer
Bartender
Barry
Coal
Calm
Carbon
The final result after grouping should be like this.
*A
Ale
*B
*Bar
Bartender
Barry
Beer
*C
Calm
*Car
Carbon
Coal
I only have a grouping list, not the levels or anything else. The items falling under a certain group are the ones that start with the same letters as the group's name. The indentation is not a must. Hopefully my example clarifies what I need but am unable to name, which is why I cannot find anything similar on Google.
The key things here are:
1. Grouping by a provided list of groups
2. There can be unlimited layers of grouping

Since every record has its children, the query should also fetch a father for each record. Then there is a nice trick in the advanced grouping tab: choosing the father's column yields as many higher-level groups as needed, recursively. I learnt about that from http://blogs.microsoft.co.il/blogs/barbaro/archive/2008/12/01/creating-sum-for-a-group-with-recursion-in-ssrs.aspx
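To get that father column from nothing but the list of group names, one option is to treat the longest strict prefix that is itself a group as the parent. A sketch of that idea (the table and column names are assumptions, matching the query below):

select g.category,
       (select max(p.category)            -- among prefixes of the same string, the
        from groupTable p                 -- lexicographic max is also the longest
        where g.category like p.category + '%'
          and p.category <> g.category) as father
from groupTable g

For the sample groups this yields B as the father of Bar and C as the father of Car, with NULL for the top-level groups.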

I suggest reporting from a query like this:
select gtop.category top_category,
       gsub.category sub_category,
       dtab.category data_category
from groupTable gtop
join groupTable gsub
  on gsub.category like gtop.category + '%'
left join dataTable dtab
  on dtab.category like gsub.category + '%'
where len(gtop.category) = 1
  and not exists
      -- drop a data row from the top-level group when a deeper
      -- subgroup also matches it, so each row appears only under
      -- its most specific group
      (select null
       from groupTable gchk
       where gsub.category = gtop.category
         and gchk.category like gsub.category + '%'
         and gchk.category <> gsub.category
         and dtab.category like gchk.category + '%')
Build the report with groups on top_category and sub_category, and headings for both groups. You will probably want to hide the sub_category heading row when sub_category = top_category.
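Hand-evaluating that query against the sample groups and data above gives rows like this (a sketch to show where each row lands):

top_category  sub_category  data_category
A             A             Ale
B             B             Beer
B             Bar           Bartender
B             Bar           Barry
C             C             Coal
C             C             Calm
C             Car           Carbon

which is exactly the grouping shown in the question once the report groups are applied.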

Related

In Google Sheets - find word within cell, return cell without word

I have a Google Sheet with a column of item names (e.g. "Amy Dress", "Brooke Tshirt", etc.). Some of these items have a prefix, JK or JL ("JK - Amy Dress", "JL - Brooke Dress"), in addition to the non-prefixed versions. I'm trying to find a way to search for a prefix ("JK - ") and return the item name associated with that prefix in a different column.
Search for "JK - ", find "JK - Amy Dress", return "Amy Dress". Please help!
I tried LOOKUP and MATCH, but this is too complicated for my skill set.
You can try to use a Google Sheets QUERY. Based on the table above, the query you'll have to use is:
=query(A:B;"select * where B Starts with 'JK'";-1)
If you want to query only the B column, use B:B as the range:
=query(B:B;"select * where B Starts with 'JK'";-1)
The query automatically creates a new "table" with all the values you need.
If you want to make it customizable use the following query:
=query(A:B;"select * where B Starts with '"&$G1&"'";-1)
In this case, instead of "JK", we search for values that start with the content of cell G1. So if you type JK in cell G1, you will obtain the same result as before.
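To also strip the prefix off the returned names, as the question asks, a sketch using REGEXMATCH and REGEXEXTRACT may help (the "JK - " pattern and the B:B range are assumptions):

=ARRAYFORMULA(REGEXEXTRACT(FILTER(B:B; REGEXMATCH(B:B; "^JK - ")); "^JK - (.*)$"))

FILTER keeps only the rows whose value starts with "JK - ", and REGEXEXTRACT returns the captured group after the prefix, e.g. "Amy Dress".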
Hope it helps.

PowerBI: Slicer to filter a table only when more than 1 value is selected

I have a table with 5 categories and units displayed in 2 types, Actual and Budget.
I want to filter this table only when 2 or more values are selected in the slicer.
I thought of adding a measure, but I don't know how to write the IF statement exactly:
Measure = IF(COUNTROWS(ALLSELECTED(Report[Shipment Group])) = 1, "Something which would not filter the units", SELECTEDVALUE(Report[Units], SUM(Report[Units])))
Not sure if this is the correct approach. I would like to know if any other approach is possible. Any help would be appreciated. Thank you in advance.
This is a bit of an odd request, but I think I have something that works.
First, you need to create a separate table for your slicer values (or else you can't control filtering how you want). You can hit the new table button and define it as follows:
Groups = VALUES(Report[Shipment Group])
Set your slicer to use Groups[Shipment Group] instead of Report[Shipment Group].
Define your new measure as follows:
Measure = IF(COUNTROWS(ALLSELECTED(Groups[Shipment Group])) = 1,
SUM(Report[Units]),
SUMX(FILTER(Report,
Report[Shipment Group] IN VALUES(Groups[Shipment Group])),
Report[Units]))
or equivalently
Measure = IF(COUNTROWS(ALLSELECTED(Groups[Shipment Group])) = 1,
SUM(Report[Units]),
CALCULATE(SUM(Report[Units]),
FILTER(Report,
Report[Shipment Group] IN VALUES(Groups[Shipment Group]))))
Note: Double check that Power BI has not automatically created a relationship between the Groups and Report tables. You don't want that.
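If the FILTER form feels heavy, an equivalent sketch using TREATAS to push the slicer selection onto the Report table (again assuming, per the note above, no relationship between Groups and Report) would be:

Measure = IF(COUNTROWS(ALLSELECTED(Groups[Shipment Group])) = 1,
SUM(Report[Units]),
CALCULATE(SUM(Report[Units]),
TREATAS(VALUES(Groups[Shipment Group]),
Report[Shipment Group])))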

How to design querying multiple tags on analytics database

I would like to store custom tags on each user purchase transaction; for example, if a user bought shoes, the tags are "SPORTS", "NIKE", "SHOES", "COLOUR_BLACK", "SIZE_12", ...
These tags are what the seller is interested in querying back to understand the sales.
My idea is: whenever a new tag comes in, create a new code for it (something like a hash code, but sequential), where the codes start with the 26 letters "a-z", then continue "aa, ab, ac ... zz", and so on. Then keep all the tags given in one transaction in a single varchar column called tag, separated by "|".
Let us assume the mapping is (at the application level):
"SPORTS" = a
"TENNIS" = b
"CRICKET" = c
...
...
"NIKE" = z //Brands company
"ADIDAS" = aa
"WOODLAND" = ab
...
...
SHOES = ay
...
...
COLOUR_BLACK = bc
COLOUR_RED = bd
COLOUR_BLUE = be
...
SIZE_12 = cq
...
So, storing the above purchase transaction, the tag will be tag="|a|z|ay|bc|cq|", and the seller can search for the number of SHOES sold by adding the WHERE condition tag LIKE '%|ay|%'. Now the problem is that I cannot use an index (the sort key in the Redshift DB) for a LIKE that starts with '%'. So how do I solve this issue, given that I might have 100 million records? I don't want a full table scan.
Is there any solution to fix this?
Update_1:
I have not followed the bridge table concept (cross-reference table), since I want to perform a GROUP BY on the results after searching for the specified tags. My solution gives only one row when two tags match in a single transaction, but a bridge table would give me two rows, and then my SUM() would be doubled.
I got a suggestion like the one below: put
EXISTS (SELECT 1 FROM transaction_tag WHERE tag_id = 'zz' AND trans_id = tr.trans_id)
in the WHERE clause once for each tag (note: this assumes tr is an alias for the transaction table in the surrounding query).
I have not followed this, since I have to perform AND and OR conditions on the tags, for example ("SPORTS" AND "ADIDAS"), or "SHOES" AND ("NIKE" OR "ADIDAS").
Update_2:
I have not followed the bit field approach, since I don't know whether Redshift supports it, and also because my system will have a minimum of 3,500 tags; allocating one bit for each results in about 437 bytes per transaction, even though at most 5 tags can be given for a transaction. Any optimisation here?
Solution_1:
I have thought of adding min (SMALLINT) and max (SMALLINT) value columns alongside the tags column, and applying the index (sort key) to those.
So, something like this:
"SPORTS" = a = 1
"TENNIS" = b = 2
"CRICKET" = c = 3
...
...
"NIKE" = z = 26
"ADIDAS" = aa = 27
So my column values are
`tag="|a|z|ay|bc|cq|"` //sorted?
`minTag=1`
`maxTag=95` //for cq
And the query for searching SHOES (ay = 51) is:
minTag <= 51 AND maxTag >= 51 AND tag LIKE '%|ay|%'
And the query for searching SHOES (ay = 51) AND SIZE_12 (cq = 95) is:
minTag <= 51 AND maxTag >= 95 AND tag LIKE '%|ay|%|cq|%'
Will this give any benefit? Kindly suggest any alternatives.
You can implement auto-tagging while the files get loaded to S3. Tagging at the DB level is too late in the process; it is tedious and involves a lot of hard-coding.
1. While loading to S3, tag the object using the AWS s3api, capturing the tag key and value dynamically as parameters, for example:
aws s3api put-object-tagging --bucket <bucket> --key <key> --tagging "TagSet=[{Key=Adidas,Value=AY}]"
2. Load the tags into DynamoDB as a metadata store.
3. Load the data into Redshift using the S3 COPY command.
You can store the tags column as a varchar bit mask, i.e. a strictly defined sequence of 1s and 0s, so that if a purchase is marked by a tag there is a 1 at that tag's position, and a 0 otherwise. For every row, you will have a sequence of 0s and 1s of the same length as the number of tags you have. This sequence is sortable; you would still need to look into the middle of it, but you would know at exactly which position to look, so you don't need LIKE, just substring. For further optimization, you could convert this bit mask to integer values (it will be unique for each sequence) and match on that, but AFAIK Redshift doesn't support that out of the box yet; you would have to define the rules yourself.
UPD: It looks like the best option here is to keep the tags in a separate table and create an ETL process that unwraps the tags into a tabular structure of (order_id, tag_id), distributed by order_id and sorted by tag_id. Optionally, you can create a view that joins this table with the order table. Then lookups for orders with a particular tag, and further aggregations of those orders, should be efficient. There is no silver bullet for optimizing this in a flat table, at least none I know of that would not bring a lot of unnecessary complexity compared to the "relational" solution.
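A sketch of that bridge-table layout, and of a query combining tags with AND/OR (all table and column names here are assumptions; the EXISTS form also addresses Update_1, since it yields at most one row per transaction and therefore does not double the SUM()):

-- one row per (transaction, tag), distributed and sorted for tag lookups
CREATE TABLE transaction_tag (
    trans_id BIGINT,
    tag_id   VARCHAR(4)
)
DISTSTYLE KEY DISTKEY (trans_id)
SORTKEY (tag_id);

-- "SHOES AND (NIKE OR ADIDAS)", grouped without double counting
SELECT tr.some_group, SUM(tr.amount)
FROM transaction tr
WHERE EXISTS (SELECT 1 FROM transaction_tag tt
              WHERE tt.trans_id = tr.trans_id AND tt.tag_id = 'ay')          -- SHOES
  AND EXISTS (SELECT 1 FROM transaction_tag tt
              WHERE tt.trans_id = tr.trans_id AND tt.tag_id IN ('z', 'aa'))  -- NIKE OR ADIDAS
GROUP BY tr.some_group;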

Oracle query with two patterns in one expression

Input:
TABLE NAME: SEARCH_RECORD

Column A   Column B   Column C   Column D
ID         CODE       WORD       CODE/WORD
------------------------------------------
123        666Ani     RAT        666Ani/RAT
124        777Cae     CAT        777Cae/CAT
I need a query that checks this as a LIKE case:
if I search with column B LIKE '%6A%' or column C LIKE '%A%', it will give a result.
Suppose I want to do the LIKE based on a column D search.
User will search with '%6A%'/'%AT%' (the / will always be given by the user).
Expected output:
666Ani/RAT
So, I need a query for the above that gets the ID as output (a CASE query is preferable).
Need your valuable suggestion.
It can't be done with a simple LIKE.
It should work if the pattern looks like '%6A%/%AT%'; that is a valid pattern.
So you can write: columnD like '%6A%/%AT%', or columnD like first_pattern||'/'||second_pattern if the two parts come in as different variables.
Another approach, if you know for sure that there is exactly one / (you can check how many there are), is to use two LIKEs, with substr to get the first and then the second part of the search string:
where
columnB like substr(match_string, 1, instr(match_string, '/') - 1)
and
columnC like substr(match_string, instr(match_string, '/') + 1)
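Put together as a complete statement returning the ID (a sketch; the column names and the :match_string bind variable are assumptions):

SELECT id
FROM search_record
WHERE code LIKE SUBSTR(:match_string, 1, INSTR(:match_string, '/') - 1)
  AND word LIKE SUBSTR(:match_string, INSTR(:match_string, '/') + 1);

With :match_string = '%6A%/%AT%', this matches the 666Ani/RAT row and returns 123.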

Pig - how to select only some values from the list (not just simple distinct)?

Let's say I have input_file.txt (user_id, event_code, event_date):
1,a,1
1,b,2
2,a,3
2,b,4
2,b,5
2,b,6
2,c,7
2,b,8
As you can see, user_id = 2 has events like this: abbbcb.
I'd like to have a result like this:
1,{(a,1),(b,2)}
2,{(a,2),(b,6),(c,7),(b,8)}
So when there are several consecutive events with the same code, I'd like to keep only the last one of the run.
Can you please share any hints?
Regards
Pawel
The main thing you are describing is what GROUP BY does.
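Assuming the input was loaded along these lines (a sketch; the schema just follows the question):

A = LOAD 'input_file.txt' USING PigStorage(',') AS (user_id:int, event_code:chararray, event_date:int);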
In this case:
B = GROUP A BY user_id;
Gets your records together by user_id. Your data will now look like this:
1,{(a,1),(b,2)}
2,{(a,2),(b,6),(c,7),(b,8)}
You say you only want the last one (I assume you mean the one with the greatest event_date). To do this, you can do a nested FOREACH with an ORDER BY to sort by date, and then take the first one with LIMIT. Note that this has arbitrary behavior when there are ties.
C = FOREACH B {
    DA = ORDER A BY event_date DESC;
    DB = LIMIT DA 1;
    GENERATE FLATTEN(group), FLATTEN(DB.event_code), FLATTEN(DB.event_date);
}
Your data should now look like this:
1,b,2
2,b,8
Another option would be to use a UDF to write some custom behavior on the groups given by GROUP BY:
B = GROUP A BY user_id;
C = FOREACH B GENERATE YourUDFThatYouBuilt(group, A);
In that UDF you'd write whatever custom behavior you want (in this case, returning the tuple with the greatest date).
It seems like you could use the DistinctBy UDF from Apache DataFu to achieve this. This UDF, given a bag, returns the first instance found for a given field; in your case the field you care about is event_code. But we have to reverse the order, as you actually want the last instance.
One clarification though. Correct me if I'm wrong, but I think the intended output is:
1,{(a,1),(b,2)}
2,{(a,3),(b,6),(c,7),(b,8)}
That is, the a event for user 2 has date 3, not 2; there is no (a,2) event in the input.
Here's how you can do it:
-- pass in 1 because we want distinct by event code (position 1)
define DistinctBy datafu.pig.bags.DistinctBy('1');

C = FOREACH (GROUP A BY user_id) {
    -- reverse so we can take the last event code occurrence
    A_reversed = ORDER A BY event_date DESC;
    -- use DistinctBy to get the first tuple having an occurrence of a field value
    A_distinct_by_code = DistinctBy(A_reversed);
    -- put back in order again
    A_ordered = ORDER A_distinct_by_code BY event_date ASC;
    GENERATE group AS user_id, A_ordered.(event_code, event_date);
}
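Note that DistinctBy lives in the DataFu jar, which has to be registered before the define can resolve it; something like the following, where the jar path and version are assumptions:

REGISTER 'datafu-pig-1.3.0.jar';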
