Delete a column and add a column with a default value - Amplify GraphQL DynamoDB AppSync

I want to delete a whole column at once. With thousands of items, deleting them one by one is time consuming, so is there a better way to do it?
My next question is that I need to add a new column with a default value after deleting.
i.e.:
+----+---+---+---+
| id | x | y | z |
+----+---+---+---+
|    | A | B | C |
|    | A | B | C |
|    | A | B | C |
|    | A | B | C |
+----+---+---+---+
If the above is the table with thousands of rows, I want to delete "z" and add "newColumn" with a default value of "D", as below:
+----+---+---+-----------+
| id | x | y | newColumn |
+----+---+---+-----------+
|    | A | B | D         |
|    | A | B | D         |
|    | A | B | D         |
|    | A | B | D         |
+----+---+---+-----------+
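DynamoDB has no real column concept, so there is no single "drop column" call: every item has to be rewritten. In practice you would page through a Scan and write the items back with boto3's `batch_writer()`. Below is a minimal Python sketch of just the per-item transformation (the attribute names `z`, `newColumn`, and the default `"D"` are taken from the question; the scan/batch-write plumbing is assumed and omitted):

```python
def migrate_item(item, drop_attr="z", new_attr="newColumn", default="D"):
    """Return a copy of the item without drop_attr and with new_attr defaulted."""
    migrated = {k: v for k, v in item.items() if k != drop_attr}
    migrated[new_attr] = default
    return migrated

# Example items as they would come back from a DynamoDB Scan page.
items = [
    {"id": "1", "x": "A", "y": "B", "z": "C"},
    {"id": "2", "x": "A", "y": "B", "z": "C"},
]
migrated = [migrate_item(it) for it in items]
# Each migrated item would then be written back, e.g. with
# table.batch_writer() and put_item, paging the Scan until done.
```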


Merge concept in Oracle

**Before Merge:**
***Source table as SRC***
+---+---+----+
| A | B | C  |
+---+---+----+
| a | b | 10 |
| c | d | 20 |
| w | x | 30 |
| w | y | 40 |
| w | z | 50 |
+---+---+----+
***Target table as TGT***
+---+---+------+
| D | E | F    |
+---+---+------+
| a | b | null |
| c | e | null |
| w | m | null |
| w | n | null |
| w | o | null |
+---+---+------+
***After Merge:***
***Target table as TGT***
+---+---+----+
| D | E | F  |
+---+---+----+
| a | b | 10 |
| c | e | 20 |
| w | m | 50 |
+---+---+----+
I have mentioned 2 tables above: one as the source table and the other as the target table. I want to merge the two tables and store the result in the target table in Oracle.
Logic:
1st logic: find the rows where column A matches column D and column B matches column E, and exactly one match is found.
Ex: A.a = D.a and B.b = E.b, then copy the C value into column F, i.e. F = 10.
2nd logic: if the 1st logic finds nothing, find the rows where column A matches column D but column B does not match column E, and exactly one match is found.
Ex: A.c = D.c, then copy the C value into column F, i.e. F = 20.
3rd logic: if neither the 1st nor the 2nd logic applies, find the rows where column A matches column D (column B not matching column E) with multiple matches found, then copy the highest C value into column F.
Ex: A.w = D.w matches 3 rows; out of those we select the highest value, i.e. 50, store it in column F of one row (F = 50), and remove the other two rows.
After merging, the number of rows is reduced. How do I write this using the MERGE concept in Oracle?
You can use a correlated subquery as follows:
UPDATE tgt t
SET t.f = (SELECT COALESCE(MAX(CASE WHEN s.b = t.e THEN s.c END), MAX(s.c))
           FROM src s
           WHERE s.a = t.d)
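To make the three-rule precedence concrete, here is a small Python simulation of the same matching logic: exact match on both columns first, then a unique match on the first column, then the maximum over multiple matches with the surplus target rows dropped. Note that the correlated UPDATE fills F but does not remove rows; this sketch also models the row reduction from the example:

```python
def merge(src, tgt):
    """src: list of (a, b, c); tgt: list of (d, e, f). Returns merged target rows."""
    result, collapsed = [], set()
    for d, e, f in tgt:
        exact = [c for a, b, c in src if a == d and b == e]
        partial = [c for a, b, c in src if a == d]
        if len(exact) == 1:        # 1st logic: both columns match
            result.append((d, e, exact[0]))
        elif len(partial) == 1:    # 2nd logic: only the first column matches
            result.append((d, e, partial[0]))
        elif partial:              # 3rd logic: multiple matches -> keep the max, one row
            if d not in collapsed:
                collapsed.add(d)
                result.append((d, e, max(partial)))
        else:
            result.append((d, e, f))
    return result

src = [("a", "b", 10), ("c", "d", 20), ("w", "x", 30), ("w", "y", 40), ("w", "z", 50)]
tgt = [("a", "b", None), ("c", "e", None), ("w", "m", None), ("w", "n", None), ("w", "o", None)]
merged = merge(src, tgt)
```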

How to split a row where there are 2 values in each cell, separated by a carriage return?

Someone gives me a file with, sometimes, inadequate data.
The data should look like this:
+---------+-----------+--------+
| Name | Initial | Age |
+---------+-----------+--------+
| Jack | J | 43 |
+---------+-----------+--------+
| Nicole | N | 12 |
+---------+-----------+--------+
| Mark | M | 22 |
+---------+-----------+--------+
| Karine | K | 25 |
+---------+-----------+--------+
Sometimes it comes like this, though:
+---------+-----------+--------+
| Name | Initial | Age |
+---------+-----------+--------+
| Jack | J | 43 |
+---------+-----------+--------+
| Nicole | N | 12 |
| Mark | M | 22 |
+---------+-----------+--------+
| Karine | K | 25 |
+---------+-----------+--------+
As you can see, Nicole and Mark are put in the same row, and the values are separated by a carriage return.
I can split by row, but that duplicates the data:
+---------+-----------+--------+
| Nicole | N | 12 |
| | M | 22 |
+---------+-----------+--------+
| Mark | N | 12 |
| | M | 22 |
+---------+-----------+--------+
This makes me lose the fact that Mark is associated with the second row of data.
(The data here is purely an example.)
One way to do this is to transform each cell into a list by doing a Text.Split on the line feed / carriage return symbol.
TextSplit = Table.TransformColumns(Source,
    {
        {"Name", each Text.Split(_, "#(lf)"), type text},
        {"Initial", each Text.Split(_, "#(lf)"), type text},
        {"Age", each Text.Split(_, "#(lf)"), type text}
    }
)
Now each column is a list of lists which you can combine into one long list using List.Combine and you can glue these columns together to make table with Table.FromColumns.
= Table.FromColumns(
{
List.Combine(TextSplit[Name]),
List.Combine(TextSplit[Initial]),
List.Combine(TextSplit[Age])
},
{"Name", "Initial", "Age"}
)
Putting this together, the whole query looks like this:
let
    Source = <Your data source>,
    TextSplit = Table.TransformColumns(Source, {{"Name", each Text.Split(_,"#(lf)"), type text}, {"Initial", each Text.Split(_,"#(lf)"), type text}, {"Age", each Text.Split(_,"#(lf)"), type text}}),
    FromColumns = Table.FromColumns({List.Combine(TextSplit[Name]), List.Combine(TextSplit[Initial]), List.Combine(TextSplit[Age])}, {"Name", "Initial", "Age"})
in
    FromColumns
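The same split-and-recombine idea can be sketched in Python for clarity (the embedded `\n` stands in for Power Query's `#(lf)` line feed; column names are the ones from the example):

```python
def expand_rows(rows, cols):
    """Split each cell on newline and emit one output row per split index."""
    out = []
    for row in rows:
        parts = {c: row[c].split("\n") for c in cols}
        height = max(len(v) for v in parts.values())
        for i in range(height):
            out.append({c: parts[c][i] for c in cols})
    return out

rows = [
    {"Name": "Jack", "Initial": "J", "Age": "43"},
    {"Name": "Nicole\nMark", "Initial": "N\nM", "Age": "12\n22"},
    {"Name": "Karine", "Initial": "K", "Age": "25"},
]
expanded = expand_rows(rows, ["Name", "Initial", "Age"])
```

Unlike a plain split-by-row, the i-th fragment of each cell stays paired with the i-th fragments of the other cells, so Mark keeps his own initial and age.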

Spring Data Pagination GroupBy

I need a solution for the question below using Spring Data / Spring Boot.
I have records in my MySQL table, something like:
+----+-------------+------------+
| id | Postcontent | postdate   |
+----+-------------+------------+
| 1  | A           | 2013-01-31 |
| 2  | B           | 2013-01-31 |
| 3  | C           | 2013-01-30 |
| 4  | D           | 2013-01-30 |
| 5  | E           | 2013-01-29 |
| 6  | F           | 2013-01-29 |
+----+-------------+------------+
and I would like to show it in something like
2013-01-31
A
B
2013-01-30
C
D
2013-01-29
E
F
I am able to show the data like the above with GROUP BY, but I am not able to add pagination to the grouped result.
Can anyone help me with how to add pagination on top of GROUP BY?
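One common approach is to paginate over the distinct dates rather than over the rows, then fetch the posts belonging to each date on the page. In Spring Data this would typically be two queries (a paged `SELECT DISTINCT postdate ... ORDER BY postdate DESC`, then a `WHERE postdate IN (...)` lookup); here is a minimal Python sketch of just the idea:

```python
def page_of_groups(posts, page, size):
    """posts: list of (id, content, date). Returns {date: [contents]} for one page of dates."""
    dates = sorted({d for _, _, d in posts}, reverse=True)  # newest first
    window = dates[page * size:(page + 1) * size]           # paginate the dates
    return {d: [c for _, c, pd in posts if pd == d] for d in window}

posts = [
    (1, "A", "2013-01-31"), (2, "B", "2013-01-31"),
    (3, "C", "2013-01-30"), (4, "D", "2013-01-30"),
    (5, "E", "2013-01-29"), (6, "F", "2013-01-29"),
]
first_page = page_of_groups(posts, page=0, size=2)
```

Paginating the dates keeps each group intact on a page, which is usually what a grouped listing needs; paginating the raw rows would split a date's posts across pages.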

MDX - filter empty outside of selected range

The cube is populated with data divided along a time dimension (period) which represents a month.
Following query:
select non empty {[Measures].[a], [Measures].[b], [Measures].[c]} on columns,
{[Period].[Period].ALLMEMBERS} on rows
from MyCube
returns:
+--------+----+---+--------+
| Period | a | b | c |
+--------+----+---+--------+
| 2 | 3 | 2 | (null) |
| 3 | 5 | 3 | 1 |
| 5 | 23 | 2 | 2 |
+--------+----+---+--------+
Removing non empty
select {[Measures].[a], [Measures].[b], [Measures].[c]} on columns,
{[Period].[Period].ALLMEMBERS} on rows
from MyCube
Renders:
+--------+--------+--------+--------+
| Period | a | b | c |
+--------+--------+--------+--------+
| 1 | (null) | (null) | (null) |
| 2 | 3 | 2 | (null) |
| 3 | 5 | 3 | 1 |
| 4 | (null) | (null) | (null) |
| 5 | 23 | 2 | 2 |
| 6 | (null) | (null) | (null) |
+--------+--------+--------+--------+
What I would like to get is all records from period 2 to period 5: the first occurrence of a value in measure "a" denotes the start of the range, and the last occurrence the end of the range.
This works, but I need the range to be calculated dynamically at runtime by the MDX:
select non empty {[Measures].[a], [Measures].[b], [Measures].[c]} on columns,
{[Period].[Period].&[2] :[Period].[Period].&[5]} on rows
from MyCube
desired output:
+--------+--------+--------+--------+
| Period | a | b | c |
+--------+--------+--------+--------+
| 2 | 3 | 2 | (null) |
| 3 | 5 | 3 | 1 |
| 4 | (null) | (null) | (null) |
| 5 | 23 | 2 | 2 |
+--------+--------+--------+--------+
I tried looking at first/last-value functions but just couldn't compose them into the query properly. Has anyone had this issue before? It should be pretty common, seeing as I want a continuous financial report without skipping months where nothing is going on. Thanks.
Maybe try playing with the NonEmpty / Head / Tail functions in a WITH clause:
WITH
SET [First] AS
{HEAD(NONEMPTY([Period].[Period].MEMBERS, [Measures].[a]))}
SET [Last] AS
{TAIL(NONEMPTY([Period].[Period].MEMBERS, [Measures].[a]))}
SELECT
{
[Measures].[a]
, [Measures].[b]
, [Measures].[c]
} on columns,
[First].ITEM(0).ITEM(0)
:[Last].ITEM(0).ITEM(0) on rows
FROM MyCube;
To debug a custom set, i.e. to see what members it returns, you can do something like this:
WITH
SET [First] AS
{HEAD(NONEMPTY([Period].[Period].MEMBERS, [Measures].[a]))}
SELECT
{
[Measures].[a]
, [Measures].[b]
, [Measures].[c]
} on columns,
[First] on rows
FROM MyCube;
Reading your comment about Children, I think this is also an alternative: add an extra [Period] level:
WITH
SET [First] AS
{HEAD(NONEMPTY([Period].[Period].[Period].MEMBERS
, [Measures].[a]))}
SET [Last] AS
{TAIL(NONEMPTY([Period].[Period].[Period].MEMBERS
, [Measures].[a]))}
SELECT
{
[Measures].[a]
, [Measures].[b]
, [Measures].[c]
} on columns,
[First].ITEM(0).ITEM(0)
:[Last].ITEM(0).ITEM(0) on rows
FROM MyCube;
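The effect of the HEAD/TAIL range above can be illustrated outside MDX: find the first and last rows where measure `a` is non-null and keep everything in between, including the empty periods inside the range. A small Python sketch using the question's data:

```python
def trim_to_nonempty(rows, measure="a"):
    """Keep rows from the first to the last row where `measure` is non-null."""
    hits = [i for i, r in enumerate(rows) if r[measure] is not None]
    return rows[hits[0]:hits[-1] + 1] if hits else []

rows = [
    {"period": 1, "a": None}, {"period": 2, "a": 3},
    {"period": 3, "a": 5},    {"period": 4, "a": None},
    {"period": 5, "a": 23},   {"period": 6, "a": None},
]
trimmed = trim_to_nonempty(rows)
```

Period 4 survives even though it is empty, because it lies inside the first-to-last non-empty range, which is exactly what a continuous month-by-month report needs.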

hive rows preceding unexpected behavior

Given this ridiculously simple data set:
+--------+-----+
| Bucket | Foo |
+--------+-----+
| 1 | A |
| 1 | B |
| 1 | C |
| 1 | D |
+--------+-----+
I want to see the value of Foo in the previous row:
select
foo,
max(foo) over (partition by bucket order by foo rows between 1 preceding and 1 preceding) as prev_foo
from
...
Which gives me:
+--------+-----+----------+
| Bucket | Foo | Prev_Foo |
+--------+-----+----------+
| 1 | A | A |
| 1 | B | A |
| 1 | C | B |
| 1 | D | C |
+--------+-----+----------+
Why do I get 'A' back for the first row? I would expect it to be null. It's throwing off calculations where I'm looking for that null. I can work around it by adding a row_number() in there, but I'd prefer to handle it with fewer calculations.
Use the LAG function to get the previous row's value:
LAG(foo) OVER (PARTITION BY bucket ORDER BY foo) AS prev_foo
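LAG looks back a fixed number of rows within the partition and returns a default (NULL unless one is specified) when no such row exists, which gives the expected null on the first row. A quick Python model of that behaviour:

```python
def lag(values, offset=1, default=None):
    """SQL LAG(): the value `offset` rows earlier, or `default` at the partition start."""
    return [values[i - offset] if i >= offset else default
            for i in range(len(values))]

prev_foo = lag(["A", "B", "C", "D"])
```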
