I'm having trouble with dynamic table sorting. I'm reading a table via a dynamic field symbol. How can I sort this table by a certain field of that table (after the SELECT)? I know for a fact that this field is in the table, but since it's dynamic I can't simply use "sort table by field".
What are the alternatives?
You can sort
FIELD-SYMBOLS <product_list> TYPE STANDARD TABLE.
by a single column with
CONSTANTS category TYPE char30 VALUE 'CATEGORY'.
SORT <product_list> BY (category).
and by multiple columns with
DATA(category_and_price) = VALUE abap_sortorder_tab( ( name = 'CATEGORY' )
                                                     ( name = 'PRICE'
                                                       descending = abap_true ) ).
SORT <product_list> BY (category_and_price).
as described in the ABAP Keyword Documentation article SORT itab.
I've got a field with jsonb tags: [{"value": "tag1"}]
I need to do something like update table1 set tags = tags - '{"value": "tag1"}', but this doesn't work.
What query should I execute to delete an element from the array?
Assuming your table looks like
CREATE TABLE public.hasjsonb (
    id INT8 NOT NULL,
    hash JSONB NULL,
    CONSTRAINT hasjsonb_pkey PRIMARY KEY (id ASC)
)
you can do this with the following statement:
INSERT INTO hasjsonb(id, hash)
(SELECT id,
        array_to_json(array_remove(array_agg(json_array_elements(hash->'tags')),
                                   '{"value": "tag1"}'))
 FROM hasjsonb
 GROUP BY id
)
ON CONFLICT(id) DO UPDATE SET hash = jsonb_set(hasjsonb.hash, array['tags'], excluded.hash);
The actual json operation here is straightforward, if longwinded. We're nesting the following functions:
hash->'tags' -- extract the json value for the "tags" key
json_array_elements -- treat the elements of this json array like rows in a table
array_agg -- just kidding, treat them like a regular SQL array
array_remove -- remove the problematic tag
array_to_json -- convert it back to a json array
What's tricky is that json_array_elements isn't allowed in the SET part of an UPDATE statement, so we can't just do SET hash = jsonb_set(hash, array['tags'], <that function chain>). Instead, my solution uses it in a SELECT statement, where it is allowed, then inserts the result of the SELECT back into the table. Every attempted insert will hit the ON CONFLICT clause, so we get to do that UPDATE SET using the already-computed json array.
Another approach here could be to use string manipulation, but that's fragile as you need to worry about commas appearing inside objects nested in your json.
You can use json_remove_path to remove the element if you know its index statically by passing an integer.
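For instance, a minimal sketch (assuming the element you want to drop sits at index 1, counting from 0, and that the index is supplied as a path element):
SELECT json_remove_path('[{"value": "tag2"}, {"value": "tag1"}]'::JSONB, ARRAY['1']);
-- expected result: [{"value": "tag2"}]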
Otherwise, we can do a simpler subquery to filter array elements and then json_agg to build a new array.
create table t (tags jsonb);
insert into t values ('[{"value": "tag2"}, {"value": "tag1"}]');
Then we can remove the tag which has {"value": "tag1"} like:
UPDATE t
SET tags = (
    SELECT json_agg(tag)
    FROM (
        SELECT *
        FROM ROWS FROM (json_array_elements(tags)) AS d (tag)
    )
    WHERE tag != '{"value": "tag1"}'
);
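To sanity-check the result (with the sample row inserted above, only the other tag should remain):
SELECT tags FROM t;
-- expected: [{"value": "tag2"}]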
I am dealing with a column that has text values mixed with Table data.
I would like to operate on this column by
if value = Table: aggregate the Table by combining its values with a comma separator
if value is plain text: keep the original value
The result would be
I use this function to aggregate the values for Table, but I got an error when the value is text:
= Table.AggregateTableColumn(#"Lignes filtrées1", "value.1", {{"Element:Text", each Text.Combine(List.Transform(_, (x) => Text.From(x)), ", "), "Desired Result"}})
Would you have some tips to help me with this problem?
Thanks in advance.
It is hard to tell what you have in your table.
As an example, if the embedded table looked like this, with a single column,
then you could add a column like
#"Added Custom" = Table.AddColumn(#"PriorStepName", "Custom", each try Text.Combine([Column1][TableColumn1],",") otherwise [Column1])
and get
If your table is more complicated, you'd probably have to unpivot it first, or otherwise give us a hint how to transform it into a single cell.
I have a question about the MAP data type. Say I have a column labels (labels MAP(RECORD(value STRING, contentType STRING))) in myTable, where the "labels" column is a MAP and each map value is a RECORD.
I want to query the table for all the rows where a key of "labels" starts with a particular value ("xxx.*").
I've tried this, but I am wondering if there is a better way to do it:
Select labels.keys($key >='xxx') as keys,
labels.values($key >='xxx') as values
from myTable where labels.keys() >=any ('xxx')
You can try
select * from myTableName t
where exists t.labels.keys(starts_with($key, 'xxx'));
or
select f.labels.keys(regex_like($key,'xxx.*')) as keys,
f.labels.values(regex_like($key,'xxx.*')) as values
from myTable f
I also suggest changing from MAP to ARRAY, which supports path filters to get the matched entries. In the previous examples, the order between the returned values and keys is not guaranteed.
select labels[regex_like($element.label, 'xxx.*')] from myTable
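For reference, a hypothetical DDL for that ARRAY-based design, matching the query above (the id column and the label field are illustrative assumptions, not taken from your schema):
CREATE TABLE myTable (
    /* assumption: "label" holds what used to be the map key; "id" is an illustrative primary key */
    id INTEGER,
    labels ARRAY(RECORD(label STRING, value STRING, contentType STRING)),
    PRIMARY KEY (id)
);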
How can I use a column value as a column name? I've tried this:
SELECT TableX.(
SELECT OdTable.columnamecell
from OdTable
where 1 =1
AND OdTable.KeyValue = TableX.SomeValue
) as MyValue
,TableX.OtherValue as OtherValue
, TableX.SomeValue
from TableX
WHERE 1 = 1
Or to say it another way: Can I use a table column value as a column name for another query or sub-query?
To clarify: the table OdTable has a column whose values are column names in another table.
No, and Yes. You can't do this with "standard" SQL; all table and column names must be known, as literals, when the query is compiled; they can't be provided at runtime. What you want is called "dynamic SQL"; sometimes it is the only solution to a problem, but most of the time it is used when it is not necessary. It has several disadvantages (security risk, performance penalty, difficulty to maintain, ...)
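As an illustration only, here is a minimal dynamic SQL sketch. It assumes SQL Server (sp_executesql); other engines have equivalents such as EXECUTE IMMEDIATE in Oracle or PREPARE/EXECUTE in MySQL. It also simplifies the problem to a single looked-up column name, whereas your query implies a potentially different column per row:
-- Assumption: we want the column named by one specific OdTable row ('some-key' is a placeholder).
DECLARE @colname SYSNAME, @sql NVARCHAR(MAX);

SELECT @colname = columnamecell
FROM OdTable
WHERE KeyValue = 'some-key';

-- Build and run the query; QUOTENAME guards against injection through the looked-up name.
SET @sql = N'SELECT ' + QUOTENAME(@colname) + N' AS MyValue, OtherValue, SomeValue FROM TableX;';
EXEC sp_executesql @sql;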
I have an SSRS report and I need to filter a static table that I created inside the report based on a parameter. There is no data source for this table; I'm entering the data manually.
The tablix contains 3 columns.
How can I filter the columns based on the parameter?
I tried, for example, =@param1 in the expression, but it doesn't work.
So far I've only managed to filter when the expression uses data source fields.
Do you literally have a table with a number of values in it written directly into the report? If so, I don't think you will be able to perform any filtering on it, as effectively all you've done is write data into textboxes that are displayed.
I would imagine your best option would be to instead create a new dataset and populate this with your static data, e.g.
SELECT 'A' AS Letter, 'English' AS Language
UNION
SELECT 'B' AS Letter, 'French' AS Language
UNION
SELECT 'A' AS Letter, 'German' AS Language
To give you a table as follows
Letter | Language
-------+----------
A      | English
B      | French
A      | German
That you could then filter on Letter = A
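Alternatively, you could let the dataset query do the filtering itself. This is only a sketch: it assumes you map your report parameter to a query parameter named @Letter on the dataset's Parameters tab.
-- @Letter is assumed to be mapped from the report parameter.
SELECT Letter, Language
FROM (
    SELECT 'A' AS Letter, 'English' AS Language
    UNION ALL
    SELECT 'B' AS Letter, 'French' AS Language
    UNION ALL
    SELECT 'A' AS Letter, 'German' AS Language
) AS StaticData
WHERE Letter = @Letter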
So essentially you have a Tablix that has 3 columns pre-populated with information you have manually entered into the text boxes themselves? Since you've already entered that data, I don't believe there is a way to filter that at run time. That data is hard coded in essence. The Filter ability in SSRS is used as a WHERE clause so it restricts what is brought forth into the report from the query.
I would create a data source connection to a dummy database, create a DataSet, and create a query that fills a temporary table with all the information that you've manually entered. Once you've created the temporary table and inserted values into it, you can then perform a SELECT with a parameter. Your Tablix will only be populated with information that matches the parameter. Something to the effect of this:
CREATE TABLE #TempTable (
    ID INT
    ,Name VARCHAR(MAX)
    ,Email VARCHAR(MAX)
)

INSERT INTO #TempTable (
    ID
    ,Name
    ,Email
)
VALUES (
    1
    ,'Bob'
    ,'bob@email.com'
)
,(
    2
    ,'Frank'
    ,'frank@email.com'
)
,(
    3
    ,'Jim'
    ,'jim@email.com'
)

SELECT
    *
FROM
    #TempTable
WHERE
    ID = @ID

DROP TABLE #TempTable
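If you want to try that dataset query outside the report, keep in mind that SSRS supplies @ID through the dataset's parameter mapping; for a quick manual test in Management Studio you would declare it yourself, for example:
-- Illustrative only: run before the SELECT above; with the sample data it returns just the 'Frank' row.
DECLARE @ID INT = 2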