I have an instance of a ClickHouse server running, and I have successfully connected to it through a client. I'm using Tabix.io to run my queries. I have created a DB and a table called "names", and I want to insert a large number of randomly generated names into that table. I know that running multiple commands like this:
insert into names (id, first_name, last_name) values (1, 'Stephana', 'Bromell');
insert into names (id, first_name, last_name) values (2, 'Babita', 'Leroux');
insert into names (id, first_name, last_name) values (3, 'Pace', 'Christofides');
...
insert into names (id, first_name, last_name) values (999, 'Ralph', 'Jackson');
is not supported, and therefore only the first statement is executed. In other words, only Stephana Bromell appears in the "names" table.
What is the ClickHouse alternative for inserting larger amounts of data?
Use multiple value tuples in a single INSERT:
insert into names (id, first_name, last_name) values
(1, 'Stephana', 'Bromell'),
(2, 'Babita', 'Leroux'),
(3, 'Pace', 'Christofides'),
(999, 'Ralph', 'Jackson');
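If the goal is simply a large volume of generated rows, the data can also be produced server-side with ClickHouse's own functions (a minimal sketch, assuming a reasonably recent ClickHouse version; randomPrintableASCII produces random ASCII strings rather than realistic names):
-- generate 999 rows with random 8- and 10-character strings
INSERT INTO names (id, first_name, last_name)
SELECT number + 1, randomPrintableASCII(8), randomPrintableASCII(10)
FROM system.numbers
LIMIT 999;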
Alternatively, how about batch inserting over the HTTP interface with CSV?
Create a CSV file (names.csv) with the following content:
1,Stephana,Bromell
2,Babita,Leroux
3,Pace,Christofides
...
999,Ralph,Jackson
Then call the HTTP API:
curl -i -X POST \
-T "./names.csv" \
'http://localhost:8123/?query=INSERT%20INTO%20names%20FORMAT%20CSV'
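The same CSV file can also be loaded through the native client (a sketch, assuming clickhouse-client is installed and the server runs with default settings):
clickhouse-client --query="INSERT INTO names FORMAT CSV" < names.csv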
There is a Liquibase parameter in Spring Boot, say:
spring.liquibase.parameters.val1 = value1
I want to use this parameter in a sql file like this:
insert into table1 (name, value) values ("nameOfValue", ${val1});
Unfortunately, the only combination that has worked so far is wrapping the parameter in three single quotes, '''${val1}''' (which yields 'value1'), and then removing the first and last single quote with a substring operation.
Is there a more clean way of using liquibase parameters in an INSERT statement in SQL changeset files?
It looks like you don't have to do anything special to insert a parameter from the properties, no matter which changeset format you choose.
All of the following will result in valid insert statements.
SQL changeset
--changeset me:2
insert into test1 (id, name) values (1, 'name 1');
insert into test1 (id, name) values (3, '${val1}');
YAML changeset
- changeSet:
    id: 2
    author: me
    changes:
      - sql:
          endDelimiter: ;
          sql: >-
            insert into test1 (id, name) values (1, 'name 1');
            insert into test1 (id, name) values (3, '${val1}');
XML changeset:
<changeSet id="2" author="me">
    <sql endDelimiter=";">
        insert into test1 (id, name) values (1, 'name 1');
        insert into test1 (id, name) values (3, '${val1}');
    </sql>
</changeSet>
Assuming your inserts are done in an xxx.sql file, it is IMPORTANT that you tell Liquibase your SQL is formatted. You can do that by adding
--liquibase formatted sql
at the top of your file.
example: inserts.sql
--liquibase formatted sql
--changeset Greg:1
insert into table1 (name, value) values ('nameOfValue', '${val1}');
References:
Example Changelogs: SQL Format and Liquibase Works with Plain Old SQL
GitHub demo: liquibase-jpa-parameters
Q: Add the data to the tables. Be sure to use the sequences for the PKs.
I need to add data to the table I created, but I get an error that says "SQL command not properly ended".
Code:
INSERT INTO actors(actor_id, stage_name, first_name, last_name, birth_date)
VALUES(actor_id_seq.NEXTVAL, 'Brad Pitt', 'William', 'Pitt', TO_DATE('18-DEC-1963','DD-MON-YYYY'));
INSERT INTO actors(actor_id, stage_name, first_name, last_name, birth_date)
VALUES(actor_id_seq.NEXTVAL, 'Amitabh Bachchan', 'Amit', 'Srivastav', TO_DATE('11-10-1942','DD-MM-YYYY'));
INSERT INTO actors(actor_id, stage_name, first_name, last_name, birth_date)
VALUES(actor_id_seq.NEXTVAL, 'Aamir Khan', 'Aamir', 'Hussain Khan', TO_DATE('14 March 1965','DD Month YYYY'));
INSERT INTO actors(actor_id, stage_name, first_name, last_name, birth_date)
VALUES(actor_id_seq.NEXTVAL, 'Akshay Kumar', 'Rajiv', 'Bhatia', TO_DATE('09/09/1967','DD/MM/YYYY'));
I tested the code you posted (on my database, using SQL*Plus); it is correctly written, and there's nothing wrong with it.
I presume you're using SQL Workshop. If so: it runs statement by statement, so you'd have to highlight one INSERT and run it, then highlight the next and run it, and so forth.
Or, enclose the whole script in BEGIN-END (making it an anonymous PL/SQL block) and run it all at once, e.g.
begin
insert into actors (actor_id, ...) values (actor_id_seq.nextval, ...);
insert into actors ...
insert into actors ...
end;
/
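A complete version of that block, built from the statements in the question (assuming the actors table and the actor_id_seq sequence exist as described):
begin
    insert into actors (actor_id, stage_name, first_name, last_name, birth_date)
    values (actor_id_seq.nextval, 'Brad Pitt', 'William', 'Pitt', to_date('18-DEC-1963', 'DD-MON-YYYY'));
    insert into actors (actor_id, stage_name, first_name, last_name, birth_date)
    values (actor_id_seq.nextval, 'Amitabh Bachchan', 'Amit', 'Srivastav', to_date('11-10-1942', 'DD-MM-YYYY'));
    insert into actors (actor_id, stage_name, first_name, last_name, birth_date)
    values (actor_id_seq.nextval, 'Aamir Khan', 'Aamir', 'Hussain Khan', to_date('14 March 1965', 'DD Month YYYY'));
    insert into actors (actor_id, stage_name, first_name, last_name, birth_date)
    values (actor_id_seq.nextval, 'Akshay Kumar', 'Rajiv', 'Bhatia', to_date('09/09/1967', 'DD/MM/YYYY'));
end;
/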
I'm trying to copy data from a table called accounts into an empty table called accounts_by_area_code. I have the following fields in accounts_by_area_code: acct_num INT, first_name STRING, last_name STRING, phone_number STRING. The table is partitioned by areacode (the first 3 digits of phone_number).
I need to use a SELECT statement to extract the area code into an INSERT INTO TABLE command to copy the specified columns to the new table, dynamically partitioning by area code.
This is my last attempt:
impala-shell -q "INSERT INTO TABLE accounts_by_areacode (acct_num, first_name, last_name, phone_number, areacode) PARTITION (areacode) SELECT STRLEFT (phone_number,3) AS areacode FROM accounts;"
This generates ERROR: AnalysisException: Column permutation and PARTITION clause mention more columns (5) than the SELECT / VALUES clause and PARTITION clause return (1). I'm not convinced I have even the basic syntax correct, so any help would be great as I'm new to Impala.
Impala creates partitions dynamically based on the data, so I am not sure why you want to create an empty table with partitions; they will be created automatically when new data is inserted.
Still, I think you can create the empty partitions like this:
impala-shell -q "INSERT INTO TABLE accounts_by_areacode (acct_num) PARTITION (areacode)
SELECT CAST(NULL as STRING), STRLEFT (phone_number,3) AS areacode FROM accounts;"
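For the original goal of copying all the columns, the column permutation plus the PARTITION clause must together match what the SELECT returns, with the partition key computed last. A sketch against the schema described above (note that areacode appears only in the PARTITION clause, not in the column list):
impala-shell -q "INSERT INTO TABLE accounts_by_areacode (acct_num, first_name, last_name, phone_number) PARTITION (areacode)
SELECT acct_num, first_name, last_name, phone_number, STRLEFT (phone_number,3) AS areacode FROM accounts;"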
I'm trying to do a bulk insert (SQL Server 2008) into a table but the insert must ignore any duplicate already in the table.
The simplified table will look like this with existing values.
TBL_STOCK
id | Stock
---------------
1 | S1
2 | S2
3 | S3
Now I want to do a bulk insert that looks like
INSERT INTO TBL_STOCK (Id, Stock)
VALUES
(3, 'S3'),
(4, 'S4'),
(5, 'S5')
This works but will cause duplicate entries
How do I go about ignoring duplicate entries in the Stock column?
By "ignoring duplicate entries", you mean avoiding them in TBL_STOCK, right ?
I might be a bit late, but have you tried the following:
-- temporary table to stage the incoming rows
CREATE TABLE #TempStock (Id INT, Stock VARCHAR(10));

INSERT INTO #TempStock (Id, Stock)
VALUES
(3, 'S3'),
(4, 'S4'),
(5, 'S5');

-- copy only the rows whose Stock value is not already present
INSERT INTO TBL_STOCK (Id, Stock)
SELECT t.Id, t.Stock
FROM #TempStock t
WHERE NOT EXISTS (SELECT 1 FROM TBL_STOCK s WHERE s.Stock = t.Stock);

DROP TABLE #TempStock;
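On SQL Server 2008 and later, the temporary table can also be skipped by filtering the VALUES list directly through a table value constructor (a sketch, assuming Stock is the column that must stay unique):
INSERT INTO TBL_STOCK (Id, Stock)
SELECT v.Id, v.Stock
FROM (VALUES (3, 'S3'), (4, 'S4'), (5, 'S5')) AS v (Id, Stock)
WHERE NOT EXISTS (SELECT 1 FROM TBL_STOCK s WHERE s.Stock = v.Stock);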
I'm trying to create an SQL script to automate inserting values into a table.
I have a table table1 with 2 columns: key, value.
I want to insert a few rows:
INSERT INTO table1 (key, value) VALUES ("tomato1","random_value_1")
INSERT INTO table1 (key, value) VALUES ("tomato2","random_value_2")
INSERT INTO table1 (key, value) VALUES ("tomato3","random_value_3")
INSERT INTO table1 (key, value) VALUES ("tomato4","random_value_4")
How can I put this into a shell script that I can execute from the command line?
Thanks
Save it as a file with a .sql extension.
Then run it from the command line with a SQL connection tool like SQL*Plus (you don't indicate which database you are on).
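For example, a minimal wrapper script (a sketch; the client and connection details are placeholders for whatever database you actually use):
#!/bin/sh
# Oracle: run the script with SQL*Plus
sqlplus -s user/password@mydb @inserts.sql
# MySQL alternative:
# mysql -u user -p mydb < inserts.sql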
You should also combine inserts to the same table into a single statement, as it is much faster, like so (see the shell sketch after this snippet for filling in the $1..$4 placeholders):
INSERT INTO table1 (key, value) VALUES
('tomato1', '$1'),
('tomato2', '$2'),
('tomato3', '$3'),
('tomato4', '$4');