I have ever-changing header rows for a list of 1000+ shops (the column order changes) and I need to combine the first two rows into one header.
This is a simplified example table with 3 distinct shops and 4 weeks of data (the real data is more than 40,000 rows with 88 columns).
Column0  | Column1  | Column2 | Column3   | Column4  | Column5   | Column6   | Column8  | Column11  | Index | Shopbrand
Product  | 00/00182 | Week    | ProductA  | ProductA | ProductA  | ProductB  | ProductB | ProductB  | 359   | ShopBrand0
Datatype | 00/00182 |         | Sc Amount | Sc Value | Sc Profit | Sc Amount | Sc Value | Sc Profit | 360   | ShopBrand0
Week     | 00/00182 | 202201  |           |          |           |           |          |           | 361   | ShopBrand0
Week     | 00/00182 | 202202  | 4         | 11,96    | 4         |           |          |           | 362   | ShopBrand0
Week     | 00/00182 | 202203  | 5         | 14,95    | 8         |           |          |           | 363   | ShopBrand0
Week     | 00/00182 | 202204  | 1         | 6,49     | 1,5       |           |          |           | 364   | ShopBrand0
Product  | 00/00205 | Week    | ProductA  | ProductA | ProductA  | ProductB  | ProductB | ProductB  | 400   | ShopBrand0
Datatype | 00/00205 |         | Sc Amount | Sc Value | Sc Profit | Sc Amount | Sc Value | Sc Profit | 401   | ShopBrand0
Week     | 00/00205 | 202201  |           |          |           |           |          |           | 402   | ShopBrand0
Week     | 00/00205 | 202202  |           |          |           |           |          |           | 403   | ShopBrand0
Week     | 00/00205 | 202203  | 1         | 5,09     | 0,79      | 1         | 6,49     | 1,5       | 404   | ShopBrand0
Week     | 00/00205 | 202204  | 0         | 0        | -19,19    | 1         | 6,49     | -10       | 405   | ShopBrand0
Product  | 00/09002 | Week    | ProductA  | ProductA | ProductA  | ProductB  | ProductB | ProductB  | 42557 | ShopBrand1
Datatype | 00/09002 |         | Sc Amount | Sc Value | Sc Profit | Sc Amount | Sc Value | Sc Profit | 42558 | ShopBrand1
Week     | 00/09002 | 202201  | 2         | 11,1     | 3,22      | 4         | 23,36    | 5,88      | 42559 | ShopBrand1
Week     | 00/09002 | 202202  | 5         | 25,45    | 3,95      |           |          |           | 42560 | ShopBrand1
Week     | 00/09002 | 202203  | 3         | 14,97    | 2,09      | 2         | 8,98     | 0,48      | 42561 | ShopBrand1
Week     | 00/09002 | 202204  | 2         | 8,98     | 0,48      | 7         | 33,83    | 3,88      | 42562 | ShopBrand1
Note that every shop id (Column1) has TWO header rows, where the "product + datatype" values (Column3 onwards) need to be combined.
I can group by shop to get individual tables (the real data has roughly 1200 shops), but how can I efficiently combine the two rows so that e.g. Column3 gets the header "ProductA : Sc Amount"?
Note that, due to the data source, ProductA is NOT always in Column3; for some shops a different product comes first, so I cannot simply use a fixed header for all the shops. I need to go through it for each shop individually.
This is the first part of the code with a bit more table data (no row combining yet, just grouping):
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText(
"rZnNbtw2FIVfJZg1B+X9kUQuE/QBCgRoF0YWRuKiRdo4COxF3r7iJeURZ3itU8CAxxZk+ZAiv3N5SN/dnX778fjl+fPTKZxi/CVGSrxe/vHw8HX90X73/ujyg3spU16/f/zr8fuHH/ffvsR3p0/h7vTr/dP908/vD32j5cHP797/+/j87ale/37/z/NDvVw1//z7CXtE5jhotL3TrkGOzJFqy1dfMhMuUS50/RCFPLfrnRLjSrJeTEVJQy4/U68kuJL1Z/3MQcsMUJh6KcWlppHUzQ2ZJ1xy3nem3E0hl3eNQZNpzbjW4kzggkskRyLhEtmRGOE/lqDojbPWGzFwmTVZcLrJoXvB6SbGpn/BMSdxeoXzTepI4FzT5EjgHNPsSOD4koPvguNLDr4Lji8ZX+1zlpCuygfXKcZh5iOYz+VJSTjK7KCccJSZ939ZO2OFMAaxspNwhtlhOOEM82WQqY7RknstHGZ2YE44zOzAnHCY2YE54TCzA3PCYWanFiccX4ljiYzzKg6vGedV2JHAMRUH04xjKk6pzTid4tCZcTrFoTOP6LzKtDXCvFmm1Tii4DrT1kZPb5VpNR5xUxt8JdNqPOJmJzFGT+MRNzuJ8mjpyRRijS9W3m4WBY1HIO00rVq3z5lyK5wXybOlKI1HYO0kx3GmXC0hl8KlwYq6xqNCuBMtj1pwaomWglViu0WhNCfBVjSNR7Vxp9qV18FAHtXInVQ6kDqqlTup3F4sB1uF1nneZr4PrkpH1fMiSuMCrIS7oM++t69IuBvsUTaUbZtyZgq6vSSHbORp2/wRbhEnCCvhjnCCsBLuACcIK+G8O0FYCYfbCcJKONROEFbCYaZxdlDG0WUHXcbRtUdvDbTTwtm9Db0tbp55LczJ1HBindirjBPLXQ3nOVg6vM3ByjjCTvxVxhF24q8yjrATf5VxhJ34q4wj3MffSpHFvVg3Oyo4y04OVsFZdnKwCo6wHOzbVHCAnTSrgsPmpFkVHDYnzaqMYOvTbI7xTU9oV5tNXas0zLOt2dOb5dm12TRodjdercmXRNuHp+3QlSXYaE4hpaabcd2OLJudKagFwHoAu8r1h8mvyxUOLRdojY4cXtJRf8JZdAnX1aFIaUGkplN5ef3+vPl13fKKZfCEgp1UTsHOnexgYql5JrZYswoLLjyfhhl6MIWrruK6S+swp1CZqDnMOpxDHfJpm7gJ101b52Kw3LhmRPIHYsaF84bp2tOKqb2Djc2aR+26EleER0YcC9d9TuXfpnEK82aS3JbUwG0dnXGnEe1GYq4j4WxmijBuNf8c+WqXwNOCO65P0GM13Gd9mB7CuuDuovGOz+4uldKpnrQWXdxctJsPqvNhpfQK1lYOFtxdFsGbsJ2Cc+gW76KGe6p27XrCOzHcR24unxbcM24ynxbcIM45dBHBzcDbhpK4HT1fpnDdW1qB0SCNjYT7geWiYolW2kJ0sv15XTyWVscS7oyXs+qr5FXuploh1vltHkm4R/rgPmhgVcOdYW98u5MYy+K+4GUoUsd0qndyK7UJNwinYW+bAet6GcyTRRf3Cuf/MQq4eyT2Iuct5IyFcUfJPtzZzbqo2xo5VVo5GKxFGHeZbC7ryR/2N+Me68/UW8qx38pLRMi4tWQLd1eTPioPqzDuLbmcbA7/v1TUcG/JZehSXWAtb7S026KCtFyQb8z16T8=",
BinaryEncoding.Base64), Compression.Deflate)),
let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Column0 = _t, Column1 = _t, Column2 = _t, Column3 = _t, Column4 = _t, Column5 = _t, Column6 = _t, Column8 = _t, Column11 = _t, Index = _t, Shopbrand = _t]),
#"Grouped Rows" = Table.Group(Source, {"Column1"},
{{"storetables",
each _, type table [Column0=nullable text, Column1=nullable text, Column2=nullable text, Column3=nullable text, Column4=nullable text, Column5=nullable text, Column6=nullable text, Column8=nullable text, Column11=nullable text, Index=nullable text, Shopbrand=nullable text]}})
in
#"Grouped Rows"
Basically I need a more efficient version of what I already came up with. On smaller test data I cobbled together the code fragment below, inspired by https://community.powerbi.com/t5/Desktop/Power-Query-Help/m-p/381272, but on the real 40,000+ row table it was still running after 100 minutes without feedback, so I stopped it.
Table.FromRecords(
    // Converts a list of records into a table
    Table.TransformRows(
// Creates a list by applying the transform operation to each row in table
#"table",
(row) =>
let
// Keep only Cols that need transforming => remove the following:
TransformTheseColumns = List.RemoveItems(
// Removes all occurrences of the given values in the list2 from list1
Record.FieldNames(row),
//Returns the names of the fields in the record
{"Column0","Column1","Column2","Index","Shopbrand"}
),
Transforms = List.Transform(
TransformTheseColumns,
(name) =>
{name,(cell) =>
if Text.Contains(row[Column0],"Product")
then
cell
& " : "
& Table.FirstValue(
Table.SelectColumns(
Table.SelectRows(
#"table-empty-rows-removed",
each
[Index] = row[Index] + 1
),
name
)
)
else
cell
}
)
in
Record.TransformFields(
// Returns a record after applying transformations specified
row, Transforms
)
)
)
Another idea I came up with was to combine just the two header rows, which leaves me with a list, but I don't know how to re-add this as the first row of the table. It might be a good start for a more efficient solution though, i.e. running through all the grouped tables, extracting the first two rows, combining them and putting them back into the table.
= List.Transform( List.Zip( Table.ToRows ( just_two_headers )), each Lines.ToText( _, " : "))
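Something along these lines is what I am imagining, but it is only an untested sketch; shop_table here just stands for one of the grouped per-shop tables and is not a step name from the code above:
// sketch: build the combined names from the first two rows of one per-shop table
just_two_headers = Table.FirstN(shop_table, 2),
NewNames = List.Transform(
    List.Zip(Table.ToRows(just_two_headers)),
    each Text.Combine(List.RemoveNulls(_), " : ")),
// drop the two header rows and apply the combined names to the remaining rows
renamed = Table.RenameColumns(
    Table.Skip(shop_table, 2),
    List.Zip({Table.ColumnNames(shop_table), NewNames}))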
The outcome should be, for each shop, a single header row with combined names such as "ProductA : Sc Amount", "ProductA : Sc Value", and so on.
To combine row1/row2 and make that the column names of all columns:
#"NewNames" = Table.AddColumn(Table.Transpose(Table.FirstN(Source,2)), "Custom", each Text.Trim([Column1]&":"&[Column2]))[Custom],
#"Rename"=Table.RenameColumns( Table.Skip(Source,2), List.Zip( { Table.ColumnNames( Source ), #"NewNames" } ) )
or same as above, but then specify the first 3 column names individually as special cases:
FirstFew={"A","B","C"},
#"NewNames1" = Table.AddColumn(Table.Transpose(Table.FirstN(Source,2)), "Custom", each Text.Trim([Column1]&":"&[Column2]))[Custom],
#"NewNames" = FirstFew & List.Skip(#"NewNames1",List.Count(FirstFew)),
#"Rename"=Table.RenameColumns( Table.Skip(Source,2), List.Zip( { Table.ColumnNames( Source ), #"NewNames" } ) )
updated answer
I don't understand why you are bothering with the grouping. It seems you can do it in one shot and then apply a filter to remove the bad rows:
<snip>
#"Reordered Columns" = Table.ReorderColumns(Source, {"Index","Shopbrand","Column0","Column1","Column2","Column3","Column4","Column5","Column6","Column8","Column11" }),
NewNames = Table.AddColumn(Table.Transpose(Table.FirstN( #"Reordered Columns", 2)),"Custom",each Text.Trim([Column1]& " : "& [Column2]))[Custom],
rename_headers = Table.RenameColumns(Table.Skip( #"Reordered Columns", 2),List.Zip({Table.ColumnNames( #"Reordered Columns"),{"Index","Shopbrand","delete-able","shop-nr","Week"}& List.Skip(NewNames, 5)})),
#"Duplicated Column" = Table.DuplicateColumn(rename_headers, "shop-nr", "shop-nr - Copy"),
#"Reordered Columns2" = Table.ReorderColumns(#"Duplicated Column",{"shop-nr", "Index", "Shopbrand", "delete-able", "shop-nr - Copy", "Week"}),
#"Filtered Rows" = Table.SelectRows(#"Reordered Columns2", each ([#"delete-able"] = "Week"))
in #"Filtered Rows"
Try
Transpose table
Concatenate first 2 columns
Transpose back
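A rough sketch of those three steps (untested; ShopTable is a placeholder for one shop's table, and null header cells may need replacing with "" before the merge):
Transposed = Table.Transpose(ShopTable),
// after Table.Transpose the former rows become columns named Column1, Column2, ...
Merged = Table.CombineColumns(Transposed, {"Column1", "Column2"}, Combiner.CombineTextByDelimiter(" : "), "Merged"),
BackAgain = Table.Transpose(Merged),
// the merged row is now the first row again, so it can be promoted to headers
Promoted = Table.PromoteHeaders(BackAgain)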
This is my final solution, incorporating #horseyride's answer above.
let
Source = Table.FromRows(
Json.Document(
Binary.Decompress(
Binary.FromText(
"rZnNbtw2FIVfJZg1B+X9kUQuE/QBCgRoF0YWRuKiRdo4COxF3r7iJeURZ3itU8CAxxZk+ZAiv3N5SN/dnX778fjl+fPTKZxi/CVGSrxe/vHw8HX90X73/ujyg3spU16/f/zr8fuHH/ffvsR3p0/h7vTr/dP908/vD32j5cHP797/+/j87ale/37/z/NDvVw1//z7CXtE5jhotL3TrkGOzJFqy1dfMhMuUS50/RCFPLfrnRLjSrJeTEVJQy4/U68kuJL1Z/3MQcsMUJh6KcWlppHUzQ2ZJ1xy3nem3E0hl3eNQZNpzbjW4kzggkskRyLhEtmRGOE/lqDojbPWGzFwmTVZcLrJoXvB6SbGpn/BMSdxeoXzTepI4FzT5EjgHNPsSOD4koPvguNLDr4Lji8ZX+1zlpCuygfXKcZh5iOYz+VJSTjK7KCccJSZ939ZO2OFMAaxspNwhtlhOOEM82WQqY7RknstHGZ2YE44zOzAnHCY2YE54TCzA3PCYWanFiccX4ljiYzzKg6vGedV2JHAMRUH04xjKk6pzTid4tCZcTrFoTOP6LzKtDXCvFmm1Tii4DrT1kZPb5VpNR5xUxt8JdNqPOJmJzFGT+MRNzuJ8mjpyRRijS9W3m4WBY1HIO00rVq3z5lyK5wXybOlKI1HYO0kx3GmXC0hl8KlwYq6xqNCuBMtj1pwaomWglViu0WhNCfBVjSNR7Vxp9qV18FAHtXInVQ6kDqqlTup3F4sB1uF1nneZr4PrkpH1fMiSuMCrIS7oM++t69IuBvsUTaUbZtyZgq6vSSHbORp2/wRbhEnCCvhjnCCsBLuACcIK+G8O0FYCYfbCcJKONROEFbCYaZxdlDG0WUHXcbRtUdvDbTTwtm9Db0tbp55LczJ1HBindirjBPLXQ3nOVg6vM3ByjjCTvxVxhF24q8yjrATf5VxhJ34q4wj3MffSpHFvVg3Oyo4y04OVsFZdnKwCo6wHOzbVHCAnTSrgsPmpFkVHDYnzaqMYOvTbI7xTU9oV5tNXas0zLOt2dOb5dm12TRodjdercmXRNuHp+3QlSXYaE4hpaabcd2OLJudKagFwHoAu8r1h8mvyxUOLRdojY4cXtJRf8JZdAnX1aFIaUGkplN5ef3+vPl13fKKZfCEgp1UTsHOnexgYql5JrZYswoLLjyfhhl6MIWrruK6S+swp1CZqDnMOpxDHfJpm7gJ101b52Kw3LhmRPIHYsaF84bp2tOKqb2Djc2aR+26EleER0YcC9d9TuXfpnEK82aS3JbUwG0dnXGnEe1GYq4j4WxmijBuNf8c+WqXwNOCO65P0GM13Gd9mB7CuuDuovGOz+4uldKpnrQWXdxctJsPqvNhpfQK1lYOFtxdFsGbsJ2Cc+gW76KGe6p27XrCOzHcR24unxbcM24ynxbcIM45dBHBzcDbhpK4HT1fpnDdW1qB0SCNjYT7geWiYolW2kJ0sv15XTyWVscS7oyXs+qr5FXuploh1vltHkm4R/rgPmhgVcOdYW98u5MYy+K+4GUoUsd0qndyK7UJNwinYW+bAet6GcyTRRf3Cuf/MQq4eyT2Iuct5IyFcUfJPtzZzbqo2xo5VVo5GKxFGHeZbC7ryR/2N+Me68/UW8qx38pLRMi4tWQLd1eTPioPqzDuLbmcbA7/v1TUcG/JZehSXWAtb7S026KCtFyQb8z16T8=",
BinaryEncoding.Base64
),
Compression.Deflate
)
),
let
_t = ((type nullable text) meta [Serialized.Text = true] )
in
type table [Column0 = _t,Column1 = _t,Column2 = _t,Column3 = _t,Column4 = _t,Column5 = _t,Column6 = _t,Column8 = _t,Column11 = _t,Index = _t,
Shopbrand = _t
]
),
#"Reordered Columns" = Table.ReorderColumns(
Source, {"Index","Shopbrand","Column0","Column1","Column2","Column3","Column4","Column5","Column6","Column8","Column11" }
),
// Grouping and expanding based on this solution here https://stackoverflow.com/a/73800993/1440255
#"Grouped Rows" = Table.Group(
#"Reordered Columns",
{
"Column1"
},
{{
"storetables",
each
_,
type table [Column0 = nullable text,Column1 = nullable text,Column2 = nullable text,Column3 = nullable text,Column4 = nullable text,Column5 = nullable text,Column6 = nullable text,Column8 = nullable text,Column11 = nullable text,Index = nullable text,Shopbrand = nullable text]
}}
),
// helper, only for debugging, to see one of the store tables
#"00/00182" = #"Grouped Rows"{[Column1 = "00/00182"]}[storetables],
// go through all tables in storetables and join the first two rows and use them as new headers
#"newshop" = Table.TransformColumns(
#"Grouped Rows",
{
{
"storetables",
each
let
// join the first two rows with " : "
NewNames = Table.AddColumn(
Table.Transpose(Table.FirstN(_, 2)),
"Custom",
each Text.Trim([Column1]& " : "& [Column2])
)[Custom],
// rename the headers, but keep the first 5 from the fixed list below
rename_headers = Table.RenameColumns(
Table.Skip(_, 2),
List.Zip(
{Table.ColumnNames(_),
{
"Index","Shopbrand","delete-able","shop-nr","Week"
}
& List.Skip(NewNames, 5)
}
)
)
in
rename_headers
}
}
),
// Expand all storetables back into one beautiful table
#"expand storetables" =
let
ColumnNames = Table.ColumnNames(
Table.Combine(#"newshop"[storetables])
),
ExpandColumns = Table.ExpandTableColumn(
#"newshop",
"storetables",
ColumnNames
)
in
ExpandColumns
in
#"expand storetables"
I have some users with different game_id values.
For each user, I want to find the difference between the 2 smallest distinct dates and write the result in another column.
In this example:
for user_id = 1, it should be 4 (difference between 13/2/2001 and 17/2/2001)
for user_id = 2, we don't have any result because we have just 1 date,
for user_id = 3, it should be 3
and for user_id = 4, it should be 3. (difference between 5/10/2003 and 8/10/2003)
How can I calculate it in Power BI?
This does the trick with M code, if you feel like using that.
It groups on user_id, gets the 2 lowest dates from the unique values, and calculates the difference. It then merges the result back onto the original data as a new column.
let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{{"account_charging_dates", type date}}),
#"Grouped Rows" = Table.Group(#"Changed Type", {"user_id"}, {{"Diff", each try Number.From( List.MinN(List.Distinct([account_charging_dates]),2){1} - List.MinN(List.Distinct([account_charging_dates]),2){0}) otherwise null}}),
#"Merged Queries" = Table.NestedJoin(#"Changed Type",{"user_id"},#"Grouped Rows",{"user_id"},"Table1",JoinKind.LeftOuter),
#"Expanded Table1" = Table.ExpandTableColumn(#"Merged Queries", "Table1", {"Diff"}, {"Diff"})
in #"Expanded Table1"
I've spent hours and hours on this, but no luck. Can anyone help?
Issue: I have two tables, one with source data and one with lookup data. I need to count the number of times the lookup data appears in the corresponding column of the main table. The column names could be anything, but they always have a counterpart in the other table.
In the attached example, There are four columns (Month/Shape/Animal/Mineral) in the first table, and the same columns in the lookup table. The lookup table contains items that do not belong in the respective columns (i.e. "Triangle" is not a month, and "Kitten" is not a shape).
I'm trying to use Power Query to count the number of invalid entries in each column. I'm thinking that "Group By" is the way to go, but I'm still stuck.
Input Data
-Month- -Shape- -Animal- -Mineral-
January Square Puppy Gold
February Circle Kitten Quartz
Square Triangle October Amber
Circle Kitten October Lead
Square Puppy Horse Gold
Circle Puppy Goldfish Silver
May Hexagon Pig Gold
Invalid Values
-Month- -Shape- -Animal- -Mineral-
Square Kitten July Kitten
Circle Puppy October Puppy
Triangle Pig January Pig
Trying to end-up with this table:
Field Invalid Entry Count
-Month- 4
-Shape- 3
-Animal- 2
-Mineral- 0
I have also attached a screenshot and sample workbook. Any help/guidance/assistance would be ~greatly~ appreciated.
https://drive.google.com/file/d/1VpgwySRra9-gStlmtjJum3w2JySPyrt5/view?usp=sharing
Thank you!
Load the Invalid Values table, add an index column, select the index column and unpivot the other columns. Close and Load. Let's call this Table2.
Load the Input Data table, add an index column, select the index column and unpivot the other columns. Merge in Table2, matching on Attribute and Value with a left outer join. Expand the resulting column to extract the Index column from Table2. Filter out the nulls. Group and count on Attribute.
Table2 = Invalid Values code
let Source = Excel.CurrentWorkbook(){[Name="Table2"]}[Content],
#"Added Index" = Table.AddIndexColumn(Source, "Index", 0, 1),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Added Index", {"Index"}, "Attribute", "Value")
in #"Unpivoted Other Columns"
Table 1 = Input Data code
let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Added Index" = Table.AddIndexColumn(Source, "Index", 0, 1),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Added Index", {"Index"}, "Attribute", "Value"),
#"Merged Queries" = Table.NestedJoin(#"Unpivoted Other Columns",{"Value", "Attribute"},Table2,{"Value", "Attribute"},"Table2",JoinKind.LeftOuter),
#"Expanded Table2" = Table.ExpandTableColumn(#"Merged Queries", "Table2", {"Index"}, {"Index.1"}),
#"Filtered Rows" = Table.SelectRows(#"Expanded Table2", each ([Index.1] <> null)),
#"Grouped Rows" = Table.Group(#"Filtered Rows", {"Attribute"}, {{"Invalid Entry Count", each Table.RowCount(Table.Distinct(_)), type number}})
in #"Grouped Rows"
Let's say I have a table like this:
Var1 Group
1 A
2 B
3 A
4 B
5 A
I wish to have a table like this:
A B
1 2
3 4
5
How can I achieve this specifically using Power Query in Excel?
Try the below:
Right-click the Group column and do Group By ...
Use the default options, then change the code in the formula box to resemble the below, especially the part after the each:
= Table.Group(Source, {"Group"}, {{"xx", each Table.AddIndexColumn(_, "Index", 1, 1), type table}})
Click the arrows atop the new column and expand Var1 and Index
Click the Group column, then Transform ... Pivot Column ... and choose Var1 as the Values Column
Remove the extra Index column
Full sample code:
let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Grouped Rows" = Table.Group(Source, {"Group"}, {{"xx", each Table.AddIndexColumn(_, "Index", 1, 1), type table}}),
#"Expanded xx" = Table.ExpandTableColumn(#"Grouped Rows", "xx", {"Var1", "Index2"}, {"Var1", "Index"}),
#"Pivoted Column" = Table.Pivot(#"Expanded xx", List.Distinct(#"Expanded xx"[Group]), "Group", "Var1", List.Sum),
#"Removed Columns" = Table.RemoveColumns(#"Pivoted Column",{"Index2"})
in #"Removed Columns"
I'd appreciate any help and suggestions.
I have a table structured as follows:
Date, Product Code ,Result ,Schedule
Day1, A ,0 ,0
Day2, A ,20 ,100
Day3, A ,200 ,100
How can I add a new column [Different] which resets by Product Code?
Date ,Product Code ,Result ,Schedule ,Different
Day1 ,A ,0 ,0 ,0
Day2 ,A ,20 ,100 ,-80
Day3 ,A ,200 ,100 ,20
where Different = previous row's Different + Result - Schedule.
Thank you.
Paste the code below into Home ... Advanced Editor, save, and name it fnRunningSum.
It will create a function that does a cumulative running total on a column named Amount.
(MyTable as table) =>
let Source = Table.Buffer(MyTable),
MyColumn="Amount",
TableType = Value.Type(Table.AddColumn(Source, "Cumul", each null, type number)),
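// seed the running total with 0, add each Amount value in turn, then List.Skip drops the seed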
Cumulative = List.Skip(List.Accumulate(Table.Column(Source,MyColumn),{0},(cumulative,MyColumn) => cumulative & {List.Last(cumulative) + MyColumn})),
Cumu = Table.FromColumns(Table.ToColumns(Source)&{Cumulative},TableType)
in Cumu
Load your data into Power Query, here assumed to be in range Table1. Paste the code below into Home ... Advanced Editor...
What it does is (1) add a new column that is Result - Schedule, (2) group on Product Code and cumulatively sum the new column, (3) expand to get the columns back.
let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Added Custom" = Table.AddColumn(Source, "Amount", each [Result]-[Schedule]),
#"Grouped Rows" = Table.Group(#"Added Custom" , {"Product Code"}, {{"AllData", fnRunningSum}}),
#"Expanded AllData" = Table.ExpandTableColumn(#"Grouped Rows", "AllData", {"Date", "Result", "Schedule", "Cumul"}, {"Date", "Result", "Schedule", "Different"})
in #"Expanded AllData"
Try to achieve that with Column From Examples, and type what you want to see in the new column. It usually works fine for me.
Documentation in case you have never used it before:
https://learn.microsoft.com/en-us/power-bi/desktop-add-column-from-example