I am connecting to a SharePoint list, which returns a table where many columns contain nested tables.
When I expand such a column, the resulting columns mix plain values with nested tables: in one row a cell holds a value (e.g. an email title), and in another it holds a [Table].
Because not all of the values are tables, the expand button is no longer available.
Clicking through one of these table values discards the rest of the data, and inside it there are further nested tables or records.
How can I expand all columns down to the deepest level of nesting? The depth varies between columns, too.
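One way to approach this in M is a recursive helper that keeps expanding any column which still contains nested tables or records, first wrapping the mixed scalar cells into single-row tables so the expand step succeeds. This is only a sketch: it assumes the step that loads the SharePoint list is called Source, and the names ExpandAll and "Value" are mine, not from the list.

    let
        ExpandAll = (tbl as table) as table =>
            let
                // Columns that still contain at least one nested table or record.
                NestedColumns = List.Select(
                    Table.ColumnNames(tbl),
                    (name) => List.AnyTrue(
                        List.Transform(
                            Table.Column(tbl, name),
                            (v) => v is table or v is record))),
                Result =
                    if List.IsEmpty(NestedColumns) then tbl
                    else
                        let
                            col = NestedColumns{0},
                            // Make every cell in the column a table: records become
                            // one-row tables, scalars become a one-column table.
                            Normalized = Table.TransformColumns(
                                tbl,
                                {{col, (v) =>
                                    if v is table then v
                                    else if v is record then Table.FromRecords({v})
                                    else #table({"Value"}, {{v}})}}),
                            InnerNames = List.Distinct(
                                List.Combine(
                                    List.Transform(
                                        Table.Column(Normalized, col),
                                        Table.ColumnNames))),
                            Expanded = Table.ExpandTableColumn(
                                Normalized, col, InnerNames,
                                List.Transform(InnerNames, (n) => col & "." & n)),
                            // Recurse until no nested columns are left.
                            Deeper = @ExpandAll(Expanded)
                        in
                            Deeper
            in
                Result,
        Output = ExpandAll(Source)
    in
        Output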
In Power BI, I am using a Python script to generate one table. The generated table is a nested table: each row value is actually a table. Now I want to use DAX to copy one specific row value (which is a table), namely the "dataset_filtered" table shown below.
What's the DAX code for this? Or any good suggestions?
The nested table
This is M rather than DAX.
Click the drop-down arrow in the name column and filter to the row you want.
Click the expand arrows in the Value column to expand the table.
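Written out in M, those two steps look roughly like the sketch below. It assumes the previous step is called Source and that the two columns are named Name and Value (as they usually are for a script output), and it pulls the inner column names from the first matching row:

    let
        // Keep only the row whose Name is "dataset_filtered".
        FilteredRows = Table.SelectRows(Source, each [Name] = "dataset_filtered"),
        // Expand the nested table stored in that row's Value column.
        Expanded = Table.ExpandTableColumn(
            FilteredRows,
            "Value",
            Table.ColumnNames(FilteredRows{0}[Value]))
    in
        Expanded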
I have a list of tables in Power Query, produced by partitioning an original big table. I am trying to perform a nested join on each of the tables. Each of the 3 tables will have its own nested join, although the table being joined to and the key column are different for each one. E.g. the list contains table1, table2, table3; table1 is to be joined to Table A with table1[Host] as the key, table2 to Table B with table2[Location] as the key, and table3 to Table C with table3[Name] as the key.
I can do the joins on each table separately (as opposed to working with the list of tables) and then combine the output, but that gets unwieldy when there are many tables to join. I was wondering if there is a "neater" way to do so.
On a related note, is it possible to invoke a different function for each table in a list of tables, and if so, how?
Cheers
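One reasonably compact pattern is to pair each table in the list with the join it needs and then map over those pairs with List.Transform. A sketch, assuming table1/table2/table3 and Table A/B/C already exist as queries (written TableA, TableB, TableC here) and that the lookup tables use the same key column names; JoinSpecs, Joined and Combined are names I made up:

    let
        // One entry per partition: {partition, its key, lookup table, lookup key}.
        JoinSpecs = {
            {table1, "Host",     TableA, "Host"},
            {table2, "Location", TableB, "Location"},
            {table3, "Name",     TableC, "Name"}
        },
        Joined = List.Transform(
            JoinSpecs,
            (spec) => Table.NestedJoin(
                spec{0}, {spec{1}}, spec{2}, {spec{3}}, "Joined", JoinKind.LeftOuter)),
        // The column sets differ per table, so Table.Combine fills the gaps with nulls.
        Combined = Table.Combine(Joined)
    in
        Combined

Invoking a different function per table works the same way: zip the list of tables with a list of functions and apply each pair, e.g. List.Transform(List.Zip({Tables, Funcs}), (pair) => pair{1}(pair{0})), where Tables and Funcs are lists you define yourself.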
I have a BIRT Excel report with 10 columns and a query that fetches the data for all 10 columns.
However, based on one of the input parameters, I need to display just 8 columns. I am able to hide the remaining 2 columns, but I would like to remove those 2 columns from the report entirely so that the user does not see the hidden columns.
I tried to change the query, but I am unable to set the select list dynamically.
Is there a way, either in the query or in BIRT, to remove a few columns based on an input condition?
You cannot delete the columns, but it's sufficient to hide them dynamically using the column's visibility expression.
E.g. if your table column shows the data set column NAME and you want to hide the column when NAME is empty for all rows:
Add an aggregation (let's call it MAX_NAME) to the table, with the aggregation function MAX and the expression NAME. Then, in the table column's visibility expression, use !row["MAX_NAME"] as the expression.
After dragging and dropping the dataset, right-click on the column header and select the delete column option.
Consider the following scenario:
Main control table: 100 rows (a denormalized table with multiple processing IDs).
A set of 10 parent tables populated from the control table.
A set of 10 child tables populated from the parent tables.
For daily processing:
We need to delete the data from the child tables first, then from the parent tables, and from the control table last.
Then we insert data into the control table using multiple insert statements, since it is denormalized.
Is this possible in one mapping?
One suggestion is to use a SQL Transform and just execute the SQL statements one after the other.
Is there an alternative way of handling this?
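For reference, the statements that a SQL Transform (or a script) would have to run boil down to something like the sketch below; every table name here is a placeholder, not one of your real tables:

    -- Delete order matters: children first, then parents, then the control table.
    DELETE FROM child_table_01;    -- repeat for child_table_02 .. child_table_10
    DELETE FROM parent_table_01;   -- repeat for parent_table_02 .. parent_table_10
    DELETE FROM control_table;

    -- Reload the denormalized control table with multiple inserts.
    INSERT INTO control_table (processing_id, attr_1, attr_2)
    SELECT processing_id, attr_1, attr_2 FROM staging_source_a;

    INSERT INTO control_table (processing_id, attr_1, attr_2)
    SELECT processing_id, attr_1, attr_2 FROM staging_source_b;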
I need to query table 1 to find all orders and their created dates (the key is order number and date).
In table 2 (the key is also order number and date), I check whether the order exists for a given date.
For this I am scanning table 1 and, for each record, checking whether it exists in table 2. Is there a better way to do this?
In this situation, where your key is identical for both tables, it makes sense to have a single table in which you store the data for both Table 1 and Table 2. That way you can do a single scan over your data and know straight away whether the data exists for both criteria.
Even more so if you want to use this data in MapReduce: you would simply scan that single table. If you only want to get the relevant rows, you can define a filter on the Scan. For example, in the case where some rows will not be populated in Table 2 at all, you could simply use a ColumnPrefixFilter.
If, however, you do need to keep this data separate in 2 tables, you could pre-split both tables with the same region boundaries. This will be helpful for the query you are aiming for (load all rows from Table 1 where a row exists in Table 2); essentially it is a map-side join. You could define multiple inputs in your MapReduce job, and since the region borders are the same, the splits will be such that each mapper receives the corresponding rows from both tables. You would probably need to implement your own multiple-input format for that (the MultiTableInputFormat class recently introduced in 0.96 does not seem to do that map-side join).
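To make the single-table suggestion a bit more concrete, here is a rough Java sketch of a MapReduce scan that uses a ColumnPrefixFilter so that only rows which also carry "Table 2" columns reach the mapper. The table name "orders", the qualifier prefix "t2_" and the row-key layout are assumptions for illustration only:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class OrdersWithTable2Data {

        // The mapper only ever sees rows that passed the scan filter,
        // i.e. rows that have at least one column starting with "t2_".
        static class OrderMapper extends TableMapper<Text, NullWritable> {
            @Override
            protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
                    throws IOException, InterruptedException {
                // Assumed row-key layout: orderNumber + separator + date.
                context.write(new Text(Bytes.toString(rowKey.get())), NullWritable.get());
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "orders-that-exist-in-table-2");
            job.setJarByClass(OrdersWithTable2Data.class);

            // Only columns whose qualifier starts with "t2_" pass the filter;
            // rows without any such column are skipped entirely.
            Scan scan = new Scan();
            scan.setCaching(500);
            scan.setCacheBlocks(false);
            scan.setFilter(new ColumnPrefixFilter(Bytes.toBytes("t2_")));

            TableMapReduceUtil.initTableMapperJob(
                    "orders", scan, OrderMapper.class, Text.class, NullWritable.class, job);
            job.setOutputFormatClass(NullOutputFormat.class);
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }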