I want to run a join query on two very large tables.
What is the equivalent RethinkDB syntax for this SQL?
SELECT t1.uuid, t1.timestamp, t2.name
FROM t1
JOIN t2 ON t1.uuid=t2.uuid AND t1.timestamp=t2.timestamp
For reference, this is the SQL for the tables:
CREATE TABLE t1(
id INT NOT NULL AUTO_INCREMENT,
uuid CHAR(30) NOT NULL,
timestamp CHAR(30) NOT NULL,
PRIMARY KEY(id)) ENGINE=INNODB;
CREATE TABLE t2(
id INT NOT NULL AUTO_INCREMENT,
uuid CHAR(30) NOT NULL,
timestamp CHAR(30) NOT NULL,
name CHAR(30) NOT NULL,
PRIMARY KEY(id));
The quick and dirty solution would be:
r.table("t1").innerJoin(
r.table("t2"),
function (doc1, doc2) {
return doc1("uuid").eq(doc2("uuid"))
.and(doc1("timestamp").eq(doc2("timestamp")));
}).zip()
But you may want to create a compound index on those fields and use eqJoin, which is much more efficient on large tables:
r.table("t1").indexCreate(
"myIndex", [r.row("uuid"), r.row("timestamp")])
r.table("t2").indexCreate(
"myIndex", [r.row("uuid"), r.row("timestamp")])
r.table("t1").eqJoin(
"myIndex",
r.table("t2"),
{index: "myIndex"}
).zip()
For example, I have created a table:
CREATE DATABASE es_db;
USE es_db;
DROP TABLE IF EXISTS es_table;
CREATE TABLE es_table (
id BIGINT(20) UNSIGNED NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY unique_id (id),
client_name VARCHAR(32) NOT NULL,
modification_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
insertion_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
Now assume I have to select the rows whose modification time is greater than a time I give as input.
Consider this query, for example:
SELECT *, UNIX_TIMESTAMP(modification_time) AS unix_ts_in_secs
FROM es_table
WHERE (UNIX_TIMESTAMP(modification_time) > :sql_last_modifiedvalue
AND modification_time < NOW())
ORDER BY modification_time ASC
Is there a way to translate this to a native query? I can achieve the same with JdbcTemplate, but I would like to know if it is possible with a native query.
I have a Spring Boot application and I am trying to initialize some data on application startup.
This is my application properties:
#Database connection
spring.datasource.url=jdbc:h2:mem:test_db
spring.datasource.username=...
spring.datasource.password=...
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.initialize=true
spring.datasource.schema=schema.sql
spring.datasource.data=data.sql
#Hibernate configuration
#spring.jpa.hibernate.ddl-auto = none
This is schema.sql:
CREATE TABLE IF NOT EXISTS `Person` (
`id` INTEGER PRIMARY KEY AUTO_INCREMENT,
`first_name` VARCHAR(50) NOT NULL,
`age` INTEGER NOT NULL,
PRIMARY KEY(`id`)
);
and this is data.sql:
INSERT INTO `Person` (
`id`,
`first_name`,
`age`
) VALUES (
1,
'John',
20
);
But I got 'Syntax error in SQL statement' on application startup:
19:08:45.642 6474 [main] INFO o.h.tool.hbm2ddl.SchemaExport - HHH000476: Executing import script '/import.sql'
19:08:45.643 6475 [main] ERROR o.h.tool.hbm2ddl.SchemaExport - HHH000388: Unsuccessful: CREATE TABLE Person (
19:08:45.643 6475 [main] ERROR o.h.tool.hbm2ddl.SchemaExport - Syntax error in SQL statement "CREATE TABLE PERSON ( [*]"; expected "identifier"
Syntax error in SQL statement "CREATE TABLE PERSON ( [*]"; expected "identifier"; SQL statement:
I can't understand what's wrong with this SQL.
Try this code. Remove PRIMARY KEY(id) and execute it.
CREATE TABLE IF NOT EXISTS `Person` (
`id` INTEGER PRIMARY KEY AUTO_INCREMENT,
`first_name` VARCHAR(50) NOT NULL,
`age` INTEGER NOT NULL
);
This error results from the structure of the CREATE TABLE declaration.
It also occurs when you have an extra comma at the end of your SQL declaration, with no column declaration following the comma. For example:
CREATE TABLE IF NOT EXISTS `Person` (
`id` INTEGER PRIMARY KEY AUTO_INCREMENT,
`first_name` VARCHAR(50) NOT NULL,
`age` INTEGER NOT NULL, -- note: this line ends with a comma
);
That's because CREATE TABLE expects a list of the columns that will be created along with the table, and the first token of each column definition is its identifier. As the H2 documentation shows, a column declaration follows this structure:
identifier datatype <constraints> <autoincrement> <functions>
Thus, in your case, as @budthapa and @Vishwanath Mataphati have mentioned, you could simply remove the PRIMARY KEY(id) line from the CREATE TABLE declaration. Moreover, you have already stated that id is a primary key on the first line of the column definitions.
If you do not have a statement like the PRIMARY KEY declaration, be sure to check for an extra comma following your last column declaration.
Try this; it drops the backticks around the table and column names and the duplicate PRIMARY KEY line:
CREATE TABLE IF NOT EXISTS Person (
id INTEGER PRIMARY KEY AUTO_INCREMENT,
first_name VARCHAR(50) NOT NULL,
age INTEGER NOT NULL
);
I added the lines below to application.properties and it worked for me:
spring.jpa.properties.hibernate.globally_quoted_identifiers=true
spring.jpa.properties.hibernate.globally_quoted_identifiers_skip_column_definitions = true
What helped in my case was removing the single quotes from the table name in my insert query.
I had to change this:
INSERT INTO 'translator' (name, email) VALUES ('John Smith', 'john@mail.com');
to this:
INSERT INTO translator (name, email) VALUES ('John Smith', 'john@mail.com');
You set the id column to auto-increment, so you can't insert a new record with an explicit id.
Try:
INSERT INTO `Person` (
`first_name`,
`age`
) VALUES (
'John',
20
);
I ran into the same issue. I fixed it with these application.properties entries:
spring.jpa.properties.hibernate.connection.charSet=UTF-8
spring.jpa.properties.hibernate.hbm2ddl.import_files_sql_extractor=org.hibernate.tool.hbm2ddl.MultipleLinesSqlCommandExtractor
The issue was with multi-line statements and the default encoding.
I'm new to Postgres and even newer to understanding how EXPLAIN works. The query below is typical; I just replace the date:
explain
select account_id,
security_id,
market_value_date,
sum(market_value) market_value
from market_value_history mvh
inner join holding_cust hc on hc.id = mvh.owning_object_id
where
hc.account_id = 24766
and market_value_date = '2015-07-02'
and mvh.created_by = 'HoldingLoad'
group by account_id, security_id, market_value_date
order by security_id, market_value_date;
Attached is a screenshot of the EXPLAIN output.
The holding_cust table has 2 million rows and the market_value_history table has 163 million rows.
Below are the table definitions and indexes for market_value_history and holding_cust:
I'd appreciate any advice you may be able to give me on tuning this query.
CREATE TABLE public.market_value_history
(
id integer NOT NULL DEFAULT nextval('market_value_id_seq'::regclass),
market_value numeric(18,6) NOT NULL,
market_value_date date,
holding_type character varying(25) NOT NULL,
owning_object_type character varying(25) NOT NULL,
owning_object_id integer NOT NULL,
created_by character varying(50) NOT NULL,
created_dt timestamp without time zone NOT NULL,
last_modified_dt timestamp without time zone NOT NULL,
CONSTRAINT market_value_history_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.market_value_history
OWNER TO postgres;
-- Index: public.ix_market_value_history_id
-- DROP INDEX public.ix_market_value_history_id;
CREATE INDEX ix_market_value_history_id
ON public.market_value_history
USING btree
(owning_object_type COLLATE pg_catalog."default", owning_object_id);
-- Index: public.ix_market_value_history_object_type_date
-- DROP INDEX public.ix_market_value_history_object_type_date;
CREATE UNIQUE INDEX ix_market_value_history_object_type_date
ON public.market_value_history
USING btree
(owning_object_type COLLATE pg_catalog."default", owning_object_id, holding_type COLLATE pg_catalog."default", market_value_date);
CREATE TABLE public.holding_cust
(
id integer NOT NULL DEFAULT nextval('holding_cust_id_seq'::regclass),
account_id integer NOT NULL,
security_id integer NOT NULL,
subaccount_type integer,
trade_date date,
purchase_date date,
quantity numeric(18,6),
net_cost numeric(18,2),
adjusted_net_cost numeric(18,2),
open_date date,
close_date date,
created_by character varying(50) NOT NULL,
created_dt timestamp without time zone NOT NULL,
last_modified_dt timestamp without time zone NOT NULL,
CONSTRAINT holding_cust_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.holding_cust
OWNER TO postgres;
-- Index: public.ix_holding_cust_account_id
-- DROP INDEX public.ix_holding_cust_account_id;
CREATE INDEX ix_holding_cust_account_id
ON public.holding_cust
USING btree
(account_id);
-- Index: public.ix_holding_cust_acctid_secid_asofdt
-- DROP INDEX public.ix_holding_cust_acctid_secid_asofdt;
CREATE INDEX ix_holding_cust_acctid_secid_asofdt
ON public.holding_cust
USING btree
(account_id, security_id, trade_date DESC);
-- Index: public.ix_holding_cust_security_id
-- DROP INDEX public.ix_holding_cust_security_id;
CREATE INDEX ix_holding_cust_security_id
ON public.holding_cust
USING btree
(security_id);
-- Index: public.ix_holding_cust_trade_date
-- DROP INDEX public.ix_holding_cust_trade_date;
CREATE INDEX ix_holding_cust_trade_date
ON public.holding_cust
USING btree
(trade_date);
Two things:
As Dmitry pointed out, you should look at creating an index on the market_value_date field. It's possible that after that you get a completely different query plan, which may or may not bring up other bottlenecks, but it should certainly remove this seq scan (see the sketch after this list).
Minor (since I doubt it affects performance), but secondly: if you aren't enforcing the field length by design, you may want to change the created_by field to TEXT. As can be seen in the plan, it casts all created_by values to TEXT for this query.
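A minimal sketch of both suggestions (the index name is just an example; the column names come from the table definition above):
-- Suggestion 1: index the date column used in the WHERE clause.
CREATE INDEX ix_market_value_history_date
ON public.market_value_history (market_value_date);
-- Suggestion 2: drop the length limit on created_by if it is not needed by design.
ALTER TABLE public.market_value_history
ALTER COLUMN created_by TYPE text;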
I don't have much experience with this. I got this error when I tried to insert into a table.
Here's the code:
CREATE TABLE factory
(idfactory INT NOT NULL,
location_id INT NOT NULL,
owner INT NOT NULL,
CONSTRAINT factory_id_pk PRIMARY KEY(idfactory),
CONSTRAINT f_location_id_fk FOREIGN KEY(location_id) REFERENCES location(idLocation),
CONSTRAINT s_owner_id_fk FOREIGN KEY(owner) REFERENCES employees(idEmployee));
CREATE TABLE location
(idLocation INT NOT NULL,
Name VARCHAR(45),
region_id INT NOT NULL,
CONSTRAINT location_id_pk PRIMARY KEY(idLocation),
CONSTRAINT p_location_id_fk FOREIGN KEY(region_id) REFERENCES region(idRegion));
CREATE TABLE employees
(idEmployee INT NOT NULL,
Name VARCHAR(20) NOT NULL,
location_id INT NOT NULL,
email VARCHAR(45),
CONSTRAINT emp_id_pk PRIMARY KEY(idEmployee),
CONSTRAINT emp_loc_fk FOREIGN KEY(location_id) REFERENCES location(idLocation));
Insert:
INSERT INTO factory(factory_id_sequence.NEXTVAL,43,23);
And I got this error. I can't see what the mistake is.
Thanks a lot!
You need to have the VALUES keyword in the insert statement:
INSERT INTO factory VALUES (factory_id_sequence.NEXTVAL,43,23);
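It is also safer to list the columns explicitly (the column names come from the CREATE TABLE above), so the statement keeps working if the column order ever changes:
INSERT INTO factory (idfactory, location_id, owner)
VALUES (factory_id_sequence.NEXTVAL, 43, 23);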
I've used the CREATE VIEW command to create a view (obviously) and join multiple tables. The CREATE VIEW command works perfectly, but when I try to update the view RentalInfoOct, I receive the error "ORA-01779: cannot modify a column which maps to a non key-preserved table".
CREATE VIEW RentalInfoOct
(branch_no, branch_name, customer_no, customer_name, item_no, rental_date)
AS
SELECT i.branchNo, b.branchName, r.customerNo, c.customerName, i.itemNo, r.dateFrom
FROM item i
INNER JOIN rental r
ON i.itemNo = r.itemNo
INNER JOIN branch b
ON i.branchNo = b.branchNo
INNER JOIN customer c
ON r.customerNo = c.customerNo
WHERE r.dateFrom
BETWEEN to_date('10-01-2009','MM-DD-YYYY')
AND to_date('10-31-2009','MM-DD-YYYY')
My update command:
UPDATE RentalInfoOct
SET item_no = '3'
WHERE customer_name = 'April Alister'
AND branch_name = 'Kingsway'
AND rental_date = '10/28/2009'
I'm not sure if this will help in solving the problem, but here are my CREATE TABLE commands:
CREATE TABLE Branch
(
branchNo SMALLINT NOT NULL,
branchName VARCHAR(20) NOT NULL,
branchAddress VARCHAR(40) NOT NULL,
PRIMARY KEY (BranchNo)
);
--Item Table Definition
CREATE TABLE Item
(
branchNo SMALLINT NOT NULL,
itemNo SMALLINT NOT NULL,
itemSize VARCHAR(8) NOT NULL,
price DECIMAL(6,2) NOT NULL,
PRIMARY KEY (ItemNo, BranchNo),
FOREIGN KEY (BranchNo) REFERENCES Branch ON DELETE CASCADE,
CONSTRAINT VALIDAMT
CHECK (price > 0)
);
-- Customer Table Definition
CREATE TABLE Customer
(
customerNo SMALLINT NOT NULL,
customerName VARCHAR(15) NOT NULL,
customerAddress VARCHAR(40) NOT NULL,
customerTel VARCHAR(10),
PRIMARY KEY (CustomerNo)
);
-- Rental Table Definition
CREATE TABLE Rental
(
branchNo SMALLINT NOT NULL,
customerNo SMALLINT NOT NULL,
dateFrom DATE NOT NULL,
dateTo DATE,
itemNo SMALLINT NOT NULL,
PRIMARY KEY (BranchNo, CustomerNo, dateFrom),
FOREIGN KEY (BranchNo) REFERENCES Branch(BranchNo) ON DELETE CASCADE,
FOREIGN KEY (CustomerNo) REFERENCES Customer(CustomerNo) ON DELETE CASCADE,
CONSTRAINT CORRECTDATES CHECK (dateTo > dateFrom OR dateTo IS NULL)
);
See: Oracle: multiple table updates => ORA-01779: cannot modify a column which maps to a non key-preserved table
You're attempting to update a view with joins, but the join conditions are not based on a uniqueness constraint, which creates the possibility that multiple view rows map back to a single row in one table.
It seems like you need a Unique Key - Foreign Key relationship between the columns your join condition is based on.
EDIT: I just saw your edit. Changing r.branchNo = b.branchNo to i.branchNo = b.branchNo should go a long way. Not sure how well r.customerNo = c.customerNo will work out.
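If the view cannot be made key-preserved, another option is to update the base table directly instead of the view. A sketch, assuming customer names and branch names are unique (the column names are taken from the CREATE TABLE statements above):
UPDATE rental
SET itemNo = 3
WHERE customerNo = (SELECT customerNo FROM customer WHERE customerName = 'April Alister')
AND branchNo = (SELECT branchNo FROM branch WHERE branchName = 'Kingsway')
AND dateFrom = TO_DATE('10/28/2009', 'MM/DD/YYYY');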