I have table one with 5 columns and table two with 7 columns.
How can I append them into a new table with only the common columns (that is, the columns with common names)?
I tried all sorts of "Append Queries" in the Query Editor but it seems that it only works with exactly identical tables.
In DAX, UNION seems to have the same limitation.
Use Append Queries -> Append as New to create a new table.
The underlying M function, Table.Combine(), matches columns by name and fills in null for columns that exist in only one of the tables:
Table.Combine({
    Table.FromRecords({[Name = "David", Phone = "01235667886"]}),
    Table.FromRecords({[Email = "me@gmail.com", Phone = "01124892522"]})
})
> Name   Phone        Email
> David  01235667886  null
>        01124892522  me@gmail.com
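To use it from the Query Editor, you can paste the same pattern into the Advanced Editor; a minimal sketch, assuming your two queries are named Table1 and Table2 (substitute your own query names):

let
    // Table.Combine matches columns by name; missing columns are filled with null
    Combined = Table.Combine({Table1, Table2})
in
    Combined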
On my table (point 1), for each table of the grouped rows (point 2) I am trying to get a new row inserted (point 3) at the beginning of each table, with the value in the column "Metadata1" equal to the value from "Column2" of original row number 2 (counting from 0).
Link to excel file:
https://filebin.net/cnb4pia0vvkg937g
It's hard to know how much of your requirement is generic or specific, but this should do it:
TransformFirst = Table.TransformColumns(#"PriorStepName", {{"Count", each
    #table(
        {"Column1", "Column2", "Metadata1"},
        {{_{0}[Column1], _{0}[Column2], _{2}[Column2]}}
    ) & _
}}),
Together = Table.Combine(TransformFirst[Count])
in
    Together
This modifies all the tables in the Count column to include an extra row made up of Row0/Col1, Row0/Col2, and Row2/Col2.
It then combines all those tables into one table.
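For reference, this assumes #"PriorStepName" is a grouping step whose Count column holds the nested tables; a minimal sketch of such a step (step, column, and query names are placeholders):

// group rows by Column1, keeping each group's full subtable in a "Count" column
#"PriorStepName" = Table.Group(Source, {"Column1"}, {{"Count", each _, type table}}),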
I'm trying to speed up the following query
create table tab2 parallel 24 nologging compress for query high as
select /*+ parallel(24) index(a ix_1) index(b ix_2)*/
a.usr
,a.dtnum
,a.company
,count(distinct b.usr) as num
,count(distinct case when b.checked_1 = 1 then b.usr end) as num_che_1
,count(distinct case when b.checked_2 = 1 then b.usr end) as num_che_2
from tab a
join tab b on a.company = b.company
and b.dtnum between a.dtnum-1 and a.dtnum-0.0000000001
group by a.usr, a.dtnum, a.company;
by using these indexes:
create index ix_1 on tab(usr, dtnum, company);
create index ix_2 on tab(usr, company, dtnum, checked_1, checked_2);
but the execution plan tells me that it's going to be an index full scan for both indexes, and the calculation takes very long (1 day is not enough).
About the data: table tab has over 3 million records. None of the single columns is unique. The unique values here are pairs of (usr, dtnum), where dtnum is a date with time written as a number in the format yyyy,mmddhh24miss. Columns checked_1 and checked_2 have values from the set (null, 0, 1, 2). Company holds an id for a company.
Since each (usr, dtnum) pair is unique, it can only have one value of checked_1, checked_2, and company. Each user can be in multiple pairs with different dtnum.
Edit
@Roberto Hernandez: I've attached the picture with the execution plan. As for parallel 24: in our company we are told to create tables with the options 'parallel [num] nologging compress for query high'. I'm using 24, but I'm no expert in this field.
@Sayan Malakshinov: http://sqlfiddle.com/#!4/40b6b/2 Here I've simplified things by giving data with checked_1 = checked_2, but in real life this may not be true.
@scaisEdge:
For
create index my_id1 on tab (company, dtnum);
create index my_id2 on tab (company, dtnum, usr);
I get
For table tab, your join condition is based on the columns
company, dtnum
so your index should primarily be based on these columns:
create index my_id1 on tab (company, dtnum);
The indexes you are using are useless, because they don't contain the columns used in the join/where condition in the leftmost position.
You can also add usr in the rightmost position, to avoid the need for table access and let the DB engine retrieve all the information from the index values:
create index my_id1 on tab (company, dtnum, usr, checked_1, checked_2);
Indexes (bitmap or otherwise) are not that useful for this execution. If you look at the execution plan, the optimizer thinks the group-by is going to reduce the output to 1 row. This results in serialization (PX SELECTOR), so I would question the quality of your statistics. What you may need is to create a column group on the three group-by columns, to improve the cardinality estimate of the group by.
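A column group is created with extended statistics; a minimal sketch, assuming the table and column names from the question:

-- Define a column group (extension) on the three GROUP BY columns
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(user, 'TAB', '(USR, DTNUM, COMPANY)') FROM dual;
-- Regather statistics so the optimizer picks up the column group
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'TAB')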
I have a table [Table 1] with three columns,
OrganizationName, FieldName, and Acres, with data as follows:
OrganizationName | FieldName | Acres
ABC              | F1        | 0.96
ABC              | F1        | 0.96
ABC              | F1        | 0.64
I want to calculate the sum of the distinct values of Acres
(e.g. 0.96 + 0.64) in DAX.
One of the problems with doing what you want is that many measures rely on filters and not actual table expressions. So getting a distinct list of values and then filtering the table by those values just gives you the whole table back.
The iterator functions are handy and operate on table expressions, so try SUMX:
TotalDistinctAcreage = SUMX(DISTINCT(Table1[Acres]),[Acres])
This will generate a one-column table containing only the distinct values of Acres, and then add them up; with the sample data above, that is 0.96 + 0.64 = 1.6. Note that this only looks at the Acres column, so if different fields and organizations had the same acreage, that acreage would still be counted only once in this sum.
If instead you want to add up the acreage simply on distinct rows, then just make a small change:
TotalAcreageOnDistinctRows = SUMX(DISTINCT(Table1),[Acres])
Hope it helps.
Ok, you added these requirements:
Thank You. :) However, I want to add Distinct values of Acres for a
Particular Fieldname. Is this possible? – Pooja 3 hours ago
The easiest way really is just to go ahead and slice or filter the original measure that I gave you. But if you have to apply the filter context in DAX, you can do it like this:
Measure =
SUMX(
    FILTER(
        SUMMARIZE( Table1, Table1[FieldName], Table1[Acres] )
        , [FieldName] = "<put the name of your specific field here>"
    )
    , [Acres]
)
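For comparison, slicing the original measure instead (the simpler route mentioned above) might look like this; "F1" is a hypothetical field name:

FieldAcreage = CALCULATE( [TotalDistinctAcreage], Table1[FieldName] = "F1" )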
I need a query to get duplicate entries from table A on the basis of two columns (Acol2 and Acol3), plus Bcol3 from table B where A.Acol4 = B.Bcol2.
Therefore, two requirements:
1. Select duplicate entries (on the basis of columns Acol2 & Acol3) and Bcol3 from table A and table B.
2. Where A.Acol4 = B.Bcol2.
I am able to write a query to get the duplicate entries, but I am unable to get Bcol3 with condition 2.
Create the join and then group by to find the duplicates:
SELECT a.acol2, a.acol3, b.bcol3
FROM a
JOIN b
  ON a.acol4 = b.bcol2
GROUP BY a.acol2, a.acol3, b.bcol3
HAVING COUNT(*) > 1
I have a T-SQL query that joins multiple tables, and I am using it in SSRS as a dataset query. I am only selecting two columns, ID and Names. I have three records with the same ID value but three different Names values. In SSRS, I am getting the first Names value, but I need to concatenate all three values that share an ID and have them in one cell on a table.
How would I go about doing that?
I am using Lookup to combine cube + SQL.
I'm pulling ID straight from a table, but using a CASE statement for Names to define an alias.
You can accomplish this in T-SQL either by using PIVOT to get them as separate columns, which you can then combine in the report cell, or by using one of these concatenation methods to get all the names in one column.
For example, you can do this:
SELECT SomeTableA.Id,
       STUFF(
           -- build a comma-separated list of Names for this Id
           (SELECT ',' + SomeTableB.Names AS [text()]
            FROM SomeTable SomeTableB
            WHERE SomeTableB.Id = SomeTableA.Id
            FOR XML PATH('')),
           1, 1, '')  -- strip the leading comma
       AS ConcatenatedNames
FROM SomeTable SomeTableA
INNER JOIN AnotherTable
    ON SomeTableA.Id = AnotherTable.SomeId
...
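Alternatively, on SQL Server 2017 or later, STRING_AGG does the same concatenation with less ceremony; a sketch against the same hypothetical SomeTable:

-- one row per Id, with all Names joined into a single comma-separated value
SELECT Id,
       STRING_AGG(Names, ',') AS ConcatenatedNames
FROM SomeTable
GROUP BY Id;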