Referencing a table given in a column as text - Power Query

This should be an easy question. I'm fairly new to Power Query.
I have a report table with a column "Queries" that contains the names of queries I have in my workbook.
I wish to add a column that counts the number of rows in each of those queries.
The formula I use is =Table.AddColumn(Source, "RowCount", each Table.RowCount([Query]))
My report table looks like this:
| Queries   | RowCount |
| --------- | -------- |
| Qry Apple |          |
| Qry Orang |          |
However, I am getting the error:
Expression.Error: We cannot convert the value "Qry Apple" to type Table.
Details:
Value=Qry Apple
Type=Type
Does anyone know how to solve this?
Thanks!

= Table.AddColumn(Source, "Row Count", each Table.RowCount(Expression.Evaluate([Query],#sections[Section1])))
It seems like this is one of those things that requires random obscure knowledge about the structure of Power Query. Expression.Evaluate needs to know the "environment" in which to resolve the string, and it appears the queries in a file sit in a record called Section1 inside the intrinsic #sections record.
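If it helps to see what that environment actually contains, here is a tiny sketch (purely illustrative; the Section1 name comes from the answer above and nothing here is specific to your workbook):

let
    // the queries defined in the current file appear as fields of this record
    Env = #sections[Section1],
    // the field names are the query names Expression.Evaluate can resolve
    QueryNames = Record.FieldNames(Env)
in
    QueryNames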

I found a solution to this from Chris Webb's BI Blog: Expression.Evaluate() In Power Query/M.
Basically, we need to use Expression.Evaluate in order to read the text in the [Query] column as a table. Note also that you need to pass #shared as the environment parameter so it has access to the necessary definitions. (For more details, see the linked blog and the references it gives.)
= Table.AddColumn(Source, "RowCount", each Table.RowCount(Expression.Evaluate([Query], #shared)))
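Put together, a minimal query might look like the sketch below ("Report" and the [Query] column are placeholders for whatever your workbook table and column are actually called):

let
    // load the workbook table that holds the query names as text
    Source = Excel.CurrentWorkbook(){[Name="Report"]}[Content],
    // resolve each name to the query it refers to, then count its rows
    AddCount = Table.AddColumn(Source, "RowCount",
        each Table.RowCount(Expression.Evaluate([Query], #shared)))
in
    AddCount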

Finally found the answer, after a year of practicing.
I tried using Expression.Evaluate as suggested but to no avail, hence I don't think the function can properly convert a text value into a table. I stand to be corrected.
The solution does make use of #sections; thanks so much to @Alexis Olson and @Wedge for the brilliant idea!
I used the Record.Field function to "get" the tables into a column as table objects, then finished it off with the Table.RowCount function. For clarity, I split them into two steps.
So here it is:
let
    Source = Excel.CurrentWorkbook(),
    MyTables = Source{[Name="MyTables"]}[Content],
    GetTblObj = Table.AddColumn(MyTables, "MyTables", each Record.Field(#sections[Section1], [Query])),
    RowCount = Table.AddColumn(GetTblObj, "RowCount", each Table.RowCount([MyTables]))
in
    RowCount

Related

Crystal - Compare Strings that do not fully match

I am having some trouble with a query in Crystal 2008. I have two tables with columns that are loosely related; both contain addresses. One table's column is just a street name, while the other is a street name plus some additional info. I want to find all records where these have the same street name and only show those. Example below:
| Address | AddressB         |
| ------- | ---------------- |
| 123 St  | 123 St, ABC City |
| 123 St  | 345 St, ABC City |
I have tried using a formula such as the one below:
if({AddressB} startswith {Address}) then {AddressB} else 'ERROR'
I have also tried this with LIKE as well as * wildcards. Nothing seems to work. I will admit I am pretty amateurish with SQL and Crystal, so formulas are a new frontier for me when writing reports. Also, I should note that the tables are linked appropriately with inner joins.
Any help would be greatly appreciated!
This should work. Perhaps your {Address} column is padded with spaces, so try:
IF ({AddressB} startswith Trim({Address})) THEN {AddressB} ELSE 'ERROR'
Test the effect of replacing the reference to the column name with the static text value that you "think" is in that column.
If you get a different behavior, what you think is in that column is not what is actually in that column. For example, the column might contain non-printable characters. You can get rid of those using the Replace function.
If you don't get a different behavior, then show us the expression with the static text values. That would allow us to replicate the behavior and understand the situation.
Note: the problem might be in your table join logic. If you have no join condition, then all records in TableA would join to all the records in TableB. In that case, you need to place the fields in the detail section to get a proper sense of what is being compared to what. Or rethink your join logic: perhaps you should move one table to a subreport, or use a SQL Expression, instead of trying to include both tables in the main report.

Combine Tables matching a name pattern in Power Query

I am trying to combine many tables whose names match a pattern.
So far, I have extracted the table names from #shared and have them in a list.
What I haven't been able to do is loop over this list and transform it into a list of tables that can be combined.
e.g. Name is the list with the table names:
Source = Table.Combine( { List.Transform(Name, each #shared[_] )} )
The error is:
Expression.Error: We cannot convert a value of type List to type Text.
Details:
Value=[List]
Type=[Type]
I have tried many ways but I am missing some kind of type transformation.
I was able to transform this list of table names into a list of tables with:
T1 = List.Transform(Name, each Expression.Evaluate(_, #shared))
However, the Expression.Evaluate feels like an ugly hack. Is there a better way for this transformation?
With this list of tables, I tried to combine them with:
Source = Table.Combine(T1)
But I got the error:
Expression.Error: A cyclic reference was encountered during evaluation.
If I extract a table from the list by index (e.g. T1{2}) it works. So, in this line of thinking, I would need some kind of loop to append them.
Steps illustrating the problem. The objective is to append (Table.Combine) every table named T_\d+_Mov:
1. Filter the matching table names into a table.
2. Convert that table to a list.
3. Convert the names in the list to the real tables.
Now I just need to combine them, and this is where I am stuck.
It is important to note that I don't want to use VBA for this.
It would be easier to recreate the query from VBA by scanning ThisWorkbook.Queries(), but it would not be a clean reload when adding or removing tables.
The final solution, as suggested by @Michal Palko, was:
CT1 = Table.FromList(T1, Splitter.SplitByNothing(), {"Name"}, null, ExtraValues.Ignore),
EC1 = Table.ExpandTableColumn(CT1, "Name", Table.ColumnNames(CT1{0}[Name]) )
where T1 was the previous step.
The only caveat is that the first table must have all of the columns, or the missing ones will be skipped.
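One possible way around that caveat (an untested sketch building on the step above) is to expand using the union of every nested table's column names rather than only the first table's:

EC1 = Table.ExpandTableColumn(CT1, "Name",
          List.Union(List.Transform(CT1[Name], Table.ColumnNames)))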
I think there might be an easier way, but given your approach, try converting your list to a table (column) and then expanding that column:
Alternatively use Table.Combine(YourList)
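For completeness, here is a minimal end-to-end sketch (untested) that avoids Expression.Evaluate by looking the names up in #shared with Record.Field. M has no regular expressions, so the filter below only approximates the T_\d+_Mov pattern; also, if the combining query's own name matched the pattern, evaluating it would presumably reproduce the cyclic-reference error mentioned above.

let
    // every name defined in the file (queries, functions, parameters) is a field of #shared
    AllNames = Record.FieldNames(#shared),
    // rough stand-in for the T_\d+_Mov pattern; adjust to your naming convention
    MovNames = List.Select(AllNames, each Text.StartsWith(_, "T_") and Text.EndsWith(_, "_Mov")),
    // look each name up directly instead of evaluating it as an expression
    MovTables = List.Transform(MovNames, each Record.Field(#shared, _)),
    Combined = Table.Combine(MovTables)
in
    Combined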

Creating advanced SUMIF() calculations in Quicksight

I have a couple of joined Athena tables in Quicksight. The data looks something like this:
| Ans_Count | ID | Alias |
| --------- | -- | ----- |
| 10        | 1  | A     |
| 10        | 1  | B     |
| 10        | 1  | C     |
| 20        | 2  | D     |
| 20        | 2  | E     |
| 20        | 2  | F     |
I want to create a calculated field that sums the Ans_Count column based on distinct IDs only; i.e., in the example above the result should be 30.
How do I do that? Thanks!
Are you looking for the sum before or after applying a filter?
sumIf(Ans_Count, ID) may be what you're looking for.
If you need to always return the result of the sum, regardless of the filter on the visual, look at the sumOver() function.
You can use distinctCountOver at PRE_AGG level to count unique number of values for a given partition. You could use that count to drive the sumIf condition as well.
Example : distinctCountOver(operand, [partition fields], PRE_AGG)
More details about what the visual's group-by specification will be, and an example where there are duplicate IDs, would help in giving a specific solution.
It might even be as simple as minOver(Ans_Count, [ID], PRE_AGG) and using SUM aggregation on top of it in the visual.
If you want another column with the values repeated, use sumOver(Ans_Count, [ID], PRE_AGG). Or, if you want to aggregate via QuickSight, you would use sumOver(sum(Ans_Count), [ID]).
I agree with the above suggestions to use sumOver(sum(Ans_Count), [ID]).
I have yet to understand the use cases for pre_agg, so if anyone has concrete examples please share them!
Another suggestion would be to do a sumOver + partition by in your table (if possible) before uploading the dataset, then check whether the results match QuickSight's aggregations. I find QuickSight can be tricky with calculated fields, aggregations, and nested ifs, so I've been doing calculations in SQL where possible before bringing the data into QuickSight, to have a better grasp of what the outputs should look like. This is obviously an extra step, but it can help in understanding how QuickSight pulls off calculations and comes up with figures (as the documentation doesn't always give much), and in spotting things that don't look right (I've had a few) before you share your analysis with a wider group.

How do I return multiple columns of data using ImportXML in Google Spreadsheets?

I'm using ImportXML in a Google Spreadsheet to access the user_timeline method in the Twitter API. I'd like to extract the created_at and text fields from the response and create a two-column display of the results.
Currently I'm doing this by calling the API twice, with
=ImportXML("http://twitter.com/status/user_timeline/matthewsim.xml?count=200","/statuses/status/created_at")
in the cell at the top of one column, and
=ImportXML("http://twitter.com/status/user_timeline/matthewsim.xml?count=200","/statuses/status/text")
in another.
Is there a way for me to create this display with a single call?
ImportXML supports using the xpath | separator to include as many queries as you like.
=ImportXML("http://url"; "//#author | //#catalogid| //#publisherid")
However it does not expand the results into multiple columns. You get a single column of repeating triplets (or however many attributes you've selected) as shown below in column A.
The following is deprecated
2015.06.16: continue is not available in "the new Google Sheets" (see: The Google Documentation for continue).
However you don't need to use the automatically inserted CONTINUE() function to place your results.
=CONTINUE($A$2, (ROW()-ROW($A$2)+1)*$A$1-B$1, 1)
Placed in B2, that should cleanly fill down and right to give you sane column data.
ImportXML is in A2.
A3 and below are how the CONTINUE() functions are automatically filled in.
A1 is the number of attributes.
B1:D1 are the attribute index for their columns.
Another way to convert the rows of =CONTINUE() into columns is to use transpose():
=transpose(importxml("http://url","//a | //b | //c"))
Just concatenate your queries with "|"
=ImportXML("http://twitter.com/status/user_timeline/matthewsim.xml?count=200","/statuses/status/created_at | /statuses/status/text")
I posed this question to the Google Support Forum and this was a solution that worked for me:
=ArrayFormula(QUERY(QUERY(IFERROR(IF({1,1,0},IF({1,0,0},INT((ROW(A:A)-1)/2),MOD(ROW(A:A)-1,2)),IMPORTXML("http://example.com","//td/a | //td/a/@href"))),"select min(Col3) where Col3 <> '' group by Col1 pivot Col2",0),"offset 1",0))
Replace the contents of IMPORTXML with your data and query and see if that works for you.
Apparently, this attempts to invoke the IMPORTXML function only once. It's a solution for now, at least.
Here's the full thread.
This is the best solution (NOT MINE), posted in the comments below. To be honest, I'm not sure how it works. Perhaps @Pandora, the original poster, could provide an explanation.
=ArrayFormula(iferror(hlookup(1,{1;ARRAY},(row(A:A)+1)*2-transpose(sort(row(A1:A2)+0,1,0)))))
This is a very ugly solution and doesn't even explain how it works. At least I couldn't get it to work due to multiple errors, e.g. too many parameters for IF (because an array is used). A shorter solution can be found here: =ArrayFormula(iferror(hlookup(1,{1;ARRAY},(row(A:A)+1)*2-transpose(sort(row(A1:A2)+0,1,0))))) where "ARRAY" can be replaced with the IMPORTXML function. This function can be used for as many XPaths as one wants. – Pandora Mar 7 '19 at 15:51
In particular, it would be good to know how to modify the formula to accommodate more columns.

Linq stored procedure with dynamic results

So I'm extremely new to LINQ in .NET 3.5 and have a question. I used to use a custom class that would handle the following results from a stored procedure:
Set 1: ID Name Age
Set 2: ID Address City
Set 3: ID Product Price
With my custom class, I would have received back from the database a single DataSet with 3 DataTables inside of it with columns based on what was returned from the DB.
My question is, how do I achieve this with LINQ? I'm going to need to hit the database one time and return multiple sets with different types of data in them.
Also, how would I use LINQ to return a dynamic number of sets depending on the parameters (could get 1 set back, could get N back)?
I've looked at this article, but didn't find anything explaining multiple sets (just a single set that could be dynamic or a single scalar value and a single set).
Any articles/comments will help.
Thanks
I believe this is what you're looking for
Linq to SQL Stored Procedures with Multiple Results - IMultipleResults
I'm not very familiar with LINQ myself but here is MSDN's site on LINQ Samples that might be able to help you out.
EDIT: I apologize, I somehow missed the title where you mentioned you wanted help using LINQ with stored procedures. My answer below does not address that at all, and unfortunately I haven't had the need to use sprocs with LINQ, so I'm unsure whether it will help.
LINQ to SQL is able to hydrate multiple sets of data into an object graph while hitting the database once. However, I don't think LINQ is going to achieve what you ultimately want, which as far as I can tell is a completely dynamic set of data defined outside of the query itself. Perhaps I am misunderstanding the question; maybe it would help if you provided some sample code that your existing application is using?
Here is a quick example of how I could hydrate an anonymous type with a single database call; maybe it will help:
var query = from p in db.Products
            select new
            {
                Product = p,
                NumberOfOrders = p.Orders.Count(),
                // OrderByDescending needs a key selector, e.g. the order date
                LastOrderDate = p.Orders.OrderByDescending(o => o.OrderDate).Take(1).Select(o => o.OrderDate),
                Orders = p.Orders
            };
