how do you use a liquibase parameter in an insert statement? - spring-boot

There is a Liquibase parameter in Spring Boot, say:
spring.liquibase.parameters.val1 = value1
I want to use this parameter in a sql file like this:
insert into table1 (name, value) values ("nameOfValue", ${val1});
Unfortunately, the only combination that has worked so far is wrapping the parameter in three single quotes - '''${val1}''' (which yields 'value1') - and then stripping the first and last single quote with a substring operation.
Is there a more clean way of using liquibase parameters in an INSERT statement in SQL changeset files?

It looks like you don't have to do anything special to use a parameter from the properties, no matter which changeset format you choose.
All of the following will result in valid insert statements.
SQL changeset
--changeset me:2
insert into test1 (id, name) values (1, 'name 1');
insert into test1 (id, name) values (3, '${val1}');
YAML changeset
- changeSet:
    id: 2
    author: me
    changes:
      - sql:
          endDelimiter: ;
          sql: >-
            insert into test1 (id, name) values (1, 'name 1');
            insert into test1 (id, name) values (3, '${val1}');
XML changeset:
<changeSet id="2" author="me">
    <sql endDelimiter=";">
        insert into test1 (id, name) values (1, 'name 1');
        insert into test1 (id, name) values (3, '${val1}');
    </sql>
</changeSet>

Assuming your inserts are in an xxx.sql file, it is IMPORTANT to tell Liquibase that your SQL is formatted. You can do that by adding
--liquibase formatted sql at the top of your file
example: inserts.sql
--liquibase formatted sql
--changeset Greg:1
insert into table1 (name, value) values ('nameOfValue', '${val1}');
References:
Example Changelogs: SQL Format
Liquibase Works with Plain Old SQL
GitHub demo: liquibase-jpa-parameters

Related

Add the data to the tables by using Oracle APEX

Q: Add the data to the tables. Be sure to use the sequences for the PKs.
I need to add data into the table that I made, but there is an error that says "SQL command not properly ended".
Code:
INSERT INTO actors(actor_id, stage_name, first_name, last_name, birth_date)
VALUES(actor_id_seq.NEXTVAL, 'Brad Pitt', 'William', 'Pitt', TO_DATE('18-DEC-1963','DD-MON-YYYY'));
INSERT INTO actors(actor_id, stage_name, first_name, last_name, birth_date)
VALUES(actor_id_seq.NEXTVAL, 'Amitabh Bachchan', 'Amit', 'Srivastav', TO_DATE('11-10-1942','DD-MM-YYYY'));
INSERT INTO actors(actor_id, stage_name, first_name, last_name, birth_date)
VALUES(actor_id_seq.NEXTVAL, 'Aamir Khan', 'Aamir', 'Hussain Khan', TO_DATE('14 March 1965','DD Month YYYY'));
INSERT INTO actors(actor_id, stage_name, first_name, last_name, birth_date)
VALUES(actor_id_seq.NEXTVAL, 'Akshay Kumar', 'Rajiv', 'Bhatia', TO_DATE('09/09/1967','DD/MM/YYYY'));
I tested the code you posted (on my database, using SQL*Plus); it is correctly written and there's nothing wrong with it.
I presume you're using SQL Workshop. If so: it runs statement by statement, so you'd have to highlight one INSERT and run it, then highlight the next and run it, and so forth.
Alternatively, enclose the whole code into a BEGIN-END block (making it an anonymous PL/SQL block) and run it all at once, e.g.
begin
    insert into actors (actor_id, ...) values (actor_id_seq.nextval, ...);
    insert into actors ...
    insert into actors ...
end;
/
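For the record, here is the same block assembled from the INSERT statements in the question (nothing new beyond the begin/end wrapper):
begin
    insert into actors (actor_id, stage_name, first_name, last_name, birth_date)
    values (actor_id_seq.nextval, 'Brad Pitt', 'William', 'Pitt', to_date('18-DEC-1963', 'DD-MON-YYYY'));
    insert into actors (actor_id, stage_name, first_name, last_name, birth_date)
    values (actor_id_seq.nextval, 'Amitabh Bachchan', 'Amit', 'Srivastav', to_date('11-10-1942', 'DD-MM-YYYY'));
    insert into actors (actor_id, stage_name, first_name, last_name, birth_date)
    values (actor_id_seq.nextval, 'Aamir Khan', 'Aamir', 'Hussain Khan', to_date('14 March 1965', 'DD Month YYYY'));
    insert into actors (actor_id, stage_name, first_name, last_name, birth_date)
    values (actor_id_seq.nextval, 'Akshay Kumar', 'Rajiv', 'Bhatia', to_date('09/09/1967', 'DD/MM/YYYY'));
end;
/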

How to insert large amount of data into a ClickHouse DB?

I have an instance of a ClickHouse server running and I have successfully connected to it through a client. I'm using Tabix.io to run my queries. I have created a DB and a table called "names". I want to insert a lot of randomly generated names into that table. I know that running multiple commands like this:
insert into names (id, first_name, last_name) values (1, 'Stephana', 'Bromell');
insert into names (id, first_name, last_name) values (2, 'Babita', 'Leroux');
insert into names (id, first_name, last_name) values (3, 'Pace', 'Christofides');
...
insert into names (id, first_name, last_name) values (999, 'Ralph', 'Jackson');
is not supported, and therefore only the first query is executed. In other words, only Stephana Bromell appears in the "names" table.
What is the ClickHouse alternative for inserting larger amounts of data?
Use multiple value tuples, separated by commas, in a single insert:
insert into names (id, first_name, last_name) values
    (1, 'Stephana', 'Bromell'),
    (2, 'Babita', 'Leroux'),
    (3, 'Pace', 'Christofides'),
    (999, 'Ralph', 'Jackson');
How about batch inserting using the HTTP interface with CSV?
Create a CSV file (names.csv) with this content:
1,Stephana,Bromell
2,Babita,Leroux
3,Pace,Christofides
...
999,Ralph,Jackson
Then call the HTTP API:
curl -i -X POST \
    -T "./names.csv" \
    'http://localhost:8123/?query=INSERT%20INTO%20names%20FORMAT%20CSV'
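If the goal is simply a large volume of generated rows, you can also let the server produce them with INSERT ... SELECT. A minimal sketch, assuming the names table from the question; the first_N/last_N strings are placeholders rather than real random names:
-- system.numbers is a built-in ClickHouse table that yields consecutive integers
insert into names (id, first_name, last_name)
select
    number + 1,
    concat('first_', toString(number)),
    concat('last_', toString(number))
from system.numbers
limit 999;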

execute SQL query stored in a table

I have a table in which one column stores either a SQL query returning ids or a comma-separated list of ids.
Create the table to store the query or the ids (separated by commas):
create table test1
(
    name          varchar(20) primary key,
    stmt_or_value varchar(500),
    type          varchar(50)
);
insert into test1 (name, stmt_or_value, type)
values ('first', 'select id from data where id = 1;', 'SQL_QUERY');
insert into test1 (name, stmt_or_value, type)
values ('second', '1,2,3,4', 'VALUE');
The data table is as follows:
create table data
(
    id      number,
    subject varchar(500)
);
insert into data (id, subject) values (1, 'test subject1');
insert into data (id, subject) values (2, 'test subject2');
insert into data (id, subject) values (3, 'test subject2');
I am not able to formulate a query that will return values after either executing the stored SQL or parsing the stored ids, based on the value of name.
select id, subject
from data
where id in( EXECUTE IMMEDIATE stmt_or_value
where type='SQL_QUERY'
and name = 'first') or
( parse and return ids
from stmt_or_value
where type='VALUE'
and name = 'second')
Could you please help me with this?
Parsing the comma-separated values is done; I basically need help with the first part of the query below:
( EXECUTE IMMEDIATE stmt_or_value
where type='SQL_QUERY'
and name = 'first')
This seems a very peculiar requirement, and one which will be difficult to solve in a robust fashion. STMT_OR_VALUE is the embodiment of the One Column Two Usages anti-pattern. Furthermore, resolving STMT_OR_VALUE requires flow control logic and the use of dynamic SQL. Consequently it cannot be a pure SQL solution: you need to use PL/SQL to assemble and execute the dynamic query.
Here is a proof of concept for a solution. I have opted for a function which you can call from SQL. It depends on one assumption: every query string you insert into TEST1.STMT_OR_VALUE has a projection of a single numeric column and every value string is a CSV of numeric data only. With this proviso it is simple to construct a function which either executes a dynamic query or tokenizes the string into a series of numbers; both of which are bulk collected into a nested table:
create or replace function get_ids (p_name in test1.name%type)
    return sys.odcinumberlist
is
    l_rec        test1%rowtype;
    return_value sys.odcinumberlist;
begin
    select * into l_rec
    from test1
    where name = p_name;

    if l_rec.type = 'SQL_QUERY' then
        -- execute a query
        execute immediate l_rec.stmt_or_value
            bulk collect into return_value;
    else
        -- tokenize a string
        select xmltab.tkn
        bulk collect into return_value
        from ( select l_rec.stmt_or_value as stmt_or_value from dual ) t
           , xmltable( 'for $text in ora:tokenize($in, ",") return $text'
                       passing t.stmt_or_value as "in"
                       columns tkn number path '.'
                     ) xmltab;
    end if;

    return return_value;
end;
/
Note there is more than one way of executing a dynamic SQL statement and a multiplicity of ways to tokenize a CSV into a series of numbers. My decisions are arbitrary: feel free to substitute your preferred methods here.
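For example, the xmltable tokenizer above could be swapped for a regexp_substr/connect by tokenizer; a sketch over the same sample CSV from the question:
-- split '1,2,3,4' into one numeric row per token
select to_number(regexp_substr('1,2,3,4', '[^,]+', 1, level)) as tkn
from dual
connect by level <= regexp_count('1,2,3,4', '[^,]+');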
This function can be invoked with a table() call:
select *
from data
where id in ( select * from table(get_ids('first'))) -- execute query
or id in ( select * from table(get_ids('second'))) -- get string of values
/
The big benefit of this approach is it encapsulates the logic around the evaluation of STMT_OR_VALUE and hides use of Dynamic SQL. Consequently it is easy to employ it in any SQL statement whilst retaining readability, or to add further mechanisms for generating a set of IDs.
However, this solution is brittle. It will only work if the values in the test1 table obey the rules. That is, not only must they be convertible to a stream of single numbers but the SQL statements must be valid and executable by EXECUTE IMMEDIATE. For instance, the trailing semi-colon in the question's sample data is invalid and would cause EXECUTE IMMEDIATE to hurl. Dynamic SQL is hard not least because it converts compilation errors into runtime errors.
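For instance, the 'first' row in the question stores 'select id from data where id = 1;'. Running that through EXECUTE IMMEDIATE raises ORA-00911: invalid character, so the statement must be stored without the trailing semi-colon:
insert into test1 (name, stmt_or_value, type)
values ('first', 'select id from data where id = 1', 'SQL_QUERY');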
Following is the set up data used for this example:
create table test1
(
    test_id       number primary key,
    stmt_or_value varchar(500),
    test_type     varchar(50)
);
insert into test1 (test_id, stmt_or_value, test_type)
values (1, 'select id from data where id = 1', 'SQL_QUERY');
insert into test1 (test_id, stmt_or_value, test_type)
values (2, '1,2,3,4', 'VALUE');
insert into test1 (test_id, stmt_or_value, test_type)
values (3, 'select id from data where id = 5', 'SQL_QUERY');
insert into test1 (test_id, stmt_or_value, test_type)
values (4, '3,4,5,6', 'VALUE');
select * from test1;
TEST_ID  STMT_OR_VALUE                     TEST_TYPE
      1  select id from data where id = 1  SQL_QUERY
      2  1,2,3,4                           VALUE
      3  select id from data where id = 5  SQL_QUERY
      4  3,4,5,6                           VALUE
create table data
(
    id      number,
    subject varchar(500)
);
insert into data (id, subject) values (1, 'test subject1');
insert into data (id, subject) values (2, 'test subject2');
insert into data (id, subject) values (3, 'test subject3');
insert into data (id, subject) values (4, 'test subject4');
insert into data (id, subject) values (5, 'test subject5');
select * from data;
ID  SUBJECT
 1  test subject1
 2  test subject2
 3  test subject3
 4  test subject4
 5  test subject5
Below is the solution:
declare
    sql_stmt clob; -- to store the dynamic SQL
    type o_rec_typ is record (id data.id%type, subject data.subject%type);
    type o_tab_typ is table of o_rec_typ;
    o_tab o_tab_typ; -- to store the output records
begin
    -- the SELECT below generates the required dynamic SQL
    -- and stores it in the clob variable
    with stmts as (
        select (listagg(stmt_or_value, ' union all ') within group (order by stmt_or_value)) || ' union all ' s
        from test1 t
        where test_type = 'SQL_QUERY')
    select q'{select id, subject
              from data
              where id in (}' ||
           nullif(s, ' union all ') || q'{
              select distinct to_number(regexp_substr(s, '[^,]+', 1, l)) id
              from (
                    select level l, s
                    from (select listagg(stmt_or_value, ',') within group (order by stmt_or_value) s
                          from test1
                          where test_type = 'VALUE') inp
                    connect by level <= length(regexp_replace(s, '[^,]+')) + 1))}' stmt
    into sql_stmt
    from stmts;
    -- execute the statement, then fetch and display the output
    execute immediate sql_stmt bulk collect into o_tab;
    for i in o_tab.first .. o_tab.last
    loop
        dbms_output.put_line('id: ' || o_tab(i).id || ' subject: ' || o_tab(i).subject);
    end loop;
end;
Output:
id: 1 subject: test subject1
id: 2 subject: test subject2
id: 3 subject: test subject3
id: 4 subject: test subject4
id: 5 subject: test subject5
Learnings:
Avoid using keywords for table and column names.
Design application tables effectively to serve current and reasonably foreseeable future requirements.
The above SQL will work; still, it is wise to consider reviewing the table design, because the complexity of the code will keep increasing as requirements change in the future.
Learned how to convert comma-separated values into records: https://asktom.oracle.com/pls/apex/f?p=100:11:::NO::P11_QUESTION_ID:9538583800346706523
declare
    my_sql varchar2(1000);
    v_num  number;
    v_num1 number;
begin
    select stmt_or_value into my_sql from test1 where type = 'SQL_QUERY';
    execute immediate my_sql into v_num;
    select id into v_num1 from data where id = v_num;
    dbms_output.put_line(v_num1);
end;
Answer for part 1. Please check.

Inner query not throwing error in postgres

There is a scenario in which we retrieve a result from an inner query and use it to perform some operation.
create table test1(key integer, value varchar);
insert into test1 values(1,'value 1');
insert into test1 values(2,'value 2');
insert into test1 values(3,'value 3');
The second table:
create table test2(key1 integer, valuein varchar);
insert into test2 values (2,'value inside is 2');
insert into test2 values (4,'value inside is 4');
insert into test2 values (5,'value inside is 5');
The query below returns a result, but in my view it should give an error:
select * from test1 where key in
(select key from test2)
because the key column does not exist in the test2 table.
Yet it returns a result in Postgres, while in Oracle it gives this error:
ORA-00904: "KEY": invalid identifier
00904. 00000 - "%s: invalid identifier"
This is the correct behavior as specified in the SQL standard. The inner query has access to all columns of the outer query, and because test1 has a column named key (which, by the way, is a horrible name for a column) the inner select is valid.
See this discussion on the Postgres mailing list:
http://postgresql.nabble.com/BUG-13336-Unexpected-result-from-invalid-query-td5850684.html
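A simple way to defend against this is to qualify the column with its table name inside the subquery; the typo then fails loudly instead of silently correlating with the outer table:
select * from test1
where key in (select test2.key from test2);
-- ERROR:  column test2.key does not exist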

creating SQL script

I'm trying to create an SQL script to automate inserting values into a table.
I have a table table1 with 2 columns: key, value.
I want to insert a few rows:
INSERT INTO table1 (key, value) VALUES ("tomato1","random_value_1")
INSERT INTO table1 (key, value) VALUES ("tomato2","random_value_2")
INSERT INTO table1 (key, value) VALUES ("tomato3","random_value_3")
INSERT INTO table1 (key, value) VALUES ("tomato4","random_value_4")
How can I put this into a script that I can execute from the command line?
Thanks
Save it as a file with a .sql extension, then run it from the command line with a SQL connection tool such as SQL*Plus (you don't indicate which database you are on), e.g. something like sqlplus user/password@db @yourfile.sql.
You should also combine inserts to the same table into one statement, as it is much faster, like so:
INSERT INTO table1 (key, value) VALUES
    ('tomato1', '$1'),
    ('tomato2', '$2'),
    ('tomato3', '$3'),
    ('tomato4', '$4');
