I am having a problem with a sequence in Postgres and JPA:
Caused by: javax.persistence.EntityExistsException:
Exception Description: The sequence named [shp_users_seq] is setup incorrectly. Its increment does not match its pre-allocation size.
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.persist(EntityManagerImpl.java:443)
at com.sun.enterprise.container.common.impl.EntityManagerWrapper.persist(EntityManagerWrapper.java:269)
at base.data.provider.beans.session.DAOImpl.createUser(DAOImpl.java:18)
The rule is: the increment size of the sequence must match the value of allocationSize in JPA.
I had this setting, which is wrong:
@SequenceGenerator(name = "User_Seq_Gen",
sequenceName = "shp_users_seq", allocationSize = 999)
I corrected it according to this:
@SequenceGenerator(name = "User_Seq_Gen",
sequenceName = "shp_users_seq", allocationSize = 1)
because the increment size of the sequence is 1:
shopper=> \d shp_users_seq;
Sequence "public.shp_users_seq"
Column | Type | Value
---------------+---------+---------------------
sequence_name | name | shp_users_seq
last_value | bigint | 1
start_value | bigint | 1
increment_by | bigint | 1
max_value | bigint | 9223372036854775807
min_value | bigint | 1
cache_value | bigint | 1
log_cnt | bigint | 0
is_cycled | boolean | f
is_called | boolean | t
Try to set the start value of the sequence to at least the same value as the allocationSize attribute of the @SequenceGenerator annotation:
CREATE SEQUENCE seq_name
...
START WITH 100;
See http://dev.eclipse.org/mhonarc/lists/eclipselink-users/msg03461.html for details on this.
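Alternatively, if pre-allocation is wanted for performance, the sequence's increment can be raised to match a larger allocationSize instead of dropping allocationSize to 1. A minimal sketch, assuming a value of 50 is chosen for both sides:
ALTER SEQUENCE shp_users_seq INCREMENT BY 50;
-- and keep the annotation in sync:
-- @SequenceGenerator(name = "User_Seq_Gen",
--     sequenceName = "shp_users_seq", allocationSize = 50)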
I am running a Docker image of Vertica on Windows. I have created a table in Vertica with this schema (student_id is the primary key):
dbadmin@d1f942c8c1e0(*)=> \d testschema.student;
List of Fields by Tables
Schema | Table | Column | Type | Size | Default | Not Null | Primary Key | Foreign Key
------------+---------+------------+-------------+------+---------+----------+-------------+-------------
testschema | student | student_id | int | 8 | | t | t |
testschema | student | name | varchar(20) | 20 | | f | f |
testschema | student | major | varchar(20) | 20 | | f | f |
(3 rows)
student_id is a primary key. I am testing loading data from a CSV file using the COPY command.
First I used INSERT: insert into testschema.student values (1,'Jack','Biology');
Then I created a CSV file in the /home/dbadmin/vertica_test directory:
vi student.csv
2,Kate,Sociology
3,Claire,English
4,Jack,Biology
5,Mike,Comp. Sci
Then I ran this command
copy testschema.student from '/home/dbadmin/vertica_test/student.csv' delimiter ',' rejected data as table students_rejected;
I tested the result
select * from testschema.student - shows 5 rows
select * from students_rejected; - no rows
Then I created another CSV file with bad data in the /home/dbadmin/vertica_test directory:
vi student_bad.csv
bad_data_type_for_student_id,UnaddedStudent, UnaddedSubject
6,Cassey,Physical Education
Then I loaded the data from the bad CSV file:
copy testschema.student from '/home/dbadmin/vertica_test/student_bad.csv' delimiter ',' rejected data as table students_rejected;
Then I tested the output
select * from testschema.student - shows 6 rows <-- only one row got added. all ok
select * from students_rejected; - shows 1 row <-- bad row's entry is here. all ok
all looks good
Then I added the bad data again without the rejected data option
copy testschema.student from '/home/dbadmin/vertica_test/student_bad.csv' delimiter ',';
But now the entry with student id 6 got added again!!
student_id | name | major
------------+--------+--------------------
1 | Jack | Biology
2 | Kate | Sociology
3 | Claire | English
4 | Jack | Biology
5 | Mike | Comp. Sci
6 | Cassey | Physical Education <--
6 | Cassey | Physical Education <--
Shouldn't this have been rejected?
If you created your students with a command of this type:
DROP TABLE IF EXISTS students;
CREATE TABLE students (
student_id int
, name varchar(20)
, major varchar(20)
, CONSTRAINT pk_students PRIMARY KEY(student_id)
);
that is, without the explicit keyword ENABLED, then the primary key constraint is disabled. That is, you can happily insert duplicates, but will run into an error if you later want to join to the students table via the primary key column.
With the primary key constraint enabled ...
[...]
, CONSTRAINT pk_students PRIMARY KEY(student_id) ENABLED
[...]
I think you get the desired effect.
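If the table already exists, the constraint can also be switched on in place rather than recreating the table; a minimal sketch, using the constraint name from above:
ALTER TABLE students ALTER CONSTRAINT pk_students ENABLED;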
The whole scenario:
DROP TABLE IF EXISTS students;
CREATE TABLE students (
student_id int
, name varchar(20)
, major varchar(20)
, CONSTRAINT pk_students PRIMARY KEY(student_id) ENABLED
);
INSERT INTO students
SELECT 1,'Jack' ,'Biology'
UNION ALL SELECT 2,'Kate' ,'Sociology'
UNION ALL SELECT 3,'Claire','English'
UNION ALL SELECT 4,'Jack' ,'Biology'
UNION ALL SELECT 5,'Mike' ,'Comp. Sci'
UNION ALL SELECT 6,'Cassey','Physical Education'
;
-- out OUTPUT
-- out --------
-- out 6
COMMIT;
COPY students FROM STDIN DELIMITER ','
REJECTED DATA AS TABLE students_rejected;
6,Cassey,Physical Education
\.
-- out vsql:/home/gessnerm/._vfv.sql:4: ERROR 6745:
-- out Duplicate key values: 'student_id=6'
-- out -- violates constraint 'dbadmin.students.pk_students'
SELECT * FROM students;
-- out student_id | name | major
-- out ------------+--------+--------------------
-- out 1 | Jack | Biology
-- out 2 | Kate | Sociology
-- out 3 | Claire | English
-- out 4 | Jack | Biology
-- out 5 | Mike | Comp. Sci
-- out 6 | Cassey | Physical Education
SELECT * FROM students_rejected;
-- out node_name | file_name | session_id | transaction_id | statement_id | batch_number | row_number | rejected_data | rejected_data_orig_length | rejected_reason
-- out -----------+-----------+------------+----------------+--------------+--------------+------------+---------------+---------------------------+-----------------
-- out (0 rows)
And the only reliable check seems to be the ANALYZE_CONSTRAINTS() call ...
ALTER TABLE students ALTER CONSTRAINT pk_students DISABLED;
-- out Time: First fetch (0 rows): 7.618 ms. All rows formatted: 7.632 ms
COPY students FROM STDIN DELIMITER ','
REJECTED DATA AS TABLE students_rejected;
6,Cassey,Physical Education
\.
-- out Time: First fetch (0 rows): 31.790 ms. All rows formatted: 31.791 ms
SELECT * FROM students;
-- out student_id | name | major
-- out ------------+--------+--------------------
-- out 1 | Jack | Biology
-- out 2 | Kate | Sociology
-- out 3 | Claire | English
-- out 4 | Jack | Biology
-- out 5 | Mike | Comp. Sci
-- out 6 | Cassey | Physical Education
-- out 6 | Cassey | Physical Education
SELECT * FROM students_rejected;
-- out node_name | file_name | session_id | transaction_id | statement_id | batch_number | row_number | rejected_data | rejected_data_orig_length | rejected_reason
-- out -----------+-----------+------------+----------------+--------------+--------------+------------+---------------+---------------------------+-----------------
-- out (0 rows)
SELECT ANALYZE_CONSTRAINTS('students');
-- out Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-- out -------------+------------+--------------+-----------------+-----------------+---------------
-- out dbadmin | students | student_id | pk_students | PRIMARY | ('6')
-- out (1 row)
I'm testing Greenplum (which is based on Postgres) with a table of this form:
CREATE TABLE whiteglove (bigint BIGINT,varbinary bytea,boolean BOOLEAN,date DATE,decimal DECIMAL,double float,real REAL,integer INTEGER,smallint SMALLINT,timestamp TIMESTAMP,tinyint smallint,varchar VARCHAR)
Then I try to insert this row using the Postgres JDBC driver:
INSERT INTO whiteglove VALUES (100000,'68656c6c6f',TRUE,'10/10/2020',0.5,1.234567,1.234,10,2,'4/14/2015 7:32:33PM',2,'hello')
which fails with the following error
org.postgresql.util.PSQLException: ERROR: date/time field value out of range: "10/10/2020"
Hint: Perhaps you need a different "datestyle" setting.
Position: 57
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2532)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2267)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:312)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:448)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:369)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:310)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:296)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:273)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:268)
If I take that same query and execute it from the terminal using psql, it passes without problems:
dev=# select * from whiteglove ;
bigint | varbinary | boolean | date | decimal | double | real | integer | smallint | timestamp | tinyint | varchar
--------+-----------+---------+------+---------+--------+------+---------+----------+-----------+---------+---------
(0 rows)
dev=# INSERT INTO whiteglove VALUES (100000,'68656c6c6f',TRUE,'10/10/2020',0.5,1.234567,1.234,10,2,'4/14/2015 7:32:33PM',2,'hello');
INSERT 0 1
dev=# select * from whiteglove ;
bigint | varbinary | boolean | date | decimal | double | real | integer | smallint | timestamp | tinyint | varchar
--------+------------+---------+------------+---------+----------+-------+---------+----------+---------------------+---------+---------
100000 | 68656c6c6f | t | 2020-10-10 | 0.5 | 1.234567 | 1.234 | 10 | 2 | 2015-04-14 19:32:33 | 2 | hello
(1 row)
Any pointers on why I'm getting this out-of-range error?
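One hedged workaround, not a root-cause diagnosis: ISO 8601 literals are unambiguous under any DateStyle setting, so rewriting the statement with them should sidestep whatever datestyle the driver session ends up with:
-- same row as above, with unambiguous ISO 8601 date/timestamp literals
INSERT INTO whiteglove VALUES (100000, '68656c6c6f', TRUE, DATE '2020-10-10',
0.5, 1.234567, 1.234, 10, 2, TIMESTAMP '2015-04-14 19:32:33', 2, 'hello');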
I have a table named table_listnames whose structure is given below:
mysql> desc table_listnames;
+-------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(255) | NO | | NULL | |
+-------+--------------+------+-----+---------+----------------+
2 rows in set (0.04 sec)
It has sample data as shown:
mysql> select * from table_listnames;
+----+------------+
| id | name |
+----+------------+
| 6 | WWW |
| 7 | WWWwww |
| 8 | WWWwwws |
| 9 | WWWwwwsSSS |
| 10 | asdsda |
+----+------------+
5 rows in set (0.00 sec)
I have a requirement where, if the name is not found in the table, I need to insert it; otherwise do nothing.
I am achieving it this way:
String sql = "INSERT INTO table_listnames (name) SELECT name FROM (SELECT ?) AS tmp WHERE NOT EXISTS (SELECT name FROM table_listnames WHERE name = ?) LIMIT 1";
pst = dbConnection.prepareStatement(sql);
pst.setString(1, salesName);
pst.setString(2, salesName);
pst.executeUpdate();
Is it possible to know the id of the record for the given name in this case?
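One simple option (a sketch, not the only approach): because the statement may insert nothing, LAST_INSERT_ID() is only meaningful when a row was actually added, so a follow-up lookup by name covers both cases:
-- 'WWW' stands in for the salesName parameter;
-- works whether the row already existed or was just inserted
SELECT id FROM table_listnames WHERE name = 'WWW';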
I want to create random numbers between 1 and 99,999,999.
I am using the following code:
SELECT CAST(RAND() * 100000000 AS INT) AS [RandomNumber]
However, my results are always 7 or 8 digits long, which means I never see a value lower than 1,000,000.
Is there any way to generate random numbers within a defined range?
RAND returns a pseudo-random float value from 0 through 1, exclusive.
So RAND() * 100000000 does exactly what you need. However, assuming every number between 1 and 99,999,999 has equal probability, 99% of the results will be 7 or 8 digits long, simply because those numbers are far more common:
+--------+-------------------+----------+------------+
| Length | Range | Count | Percent |
+--------+-------------------+----------+------------+
| 1 | 1-9 | 9 | 0.000009 |
| 2 | 10-99 | 90 | 0.000090 |
| 3 | 100-999 | 900 | 0.000900 |
| 4 | 1000-9999 | 9000 | 0.009000 |
| 5 | 10000-99999 | 90000 | 0.090000 |
| 6 | 100000-999999 | 900000 | 0.900000 |
| 7 | 1000000-9999999 | 9000000 | 9.000000 |
| 8 | 10000000-99999999 | 90000000 | 90.000001 |
+--------+-------------------+----------+------------+
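For an inclusive integer range there is also the usual scale-and-shift formula; a minimal sketch, assuming the 1 to 99,999,999 target range from the question:
DECLARE @Lower INT = 1, @Upper INT = 99999999;
-- FLOOR keeps every value in the range equally likely; ROUND would halve
-- the probability of the two endpoints
SELECT CAST(FLOOR(RAND() * (@Upper - @Lower + 1)) + @Lower AS INT) AS RandomNumber;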
I created a function that might help. You will need to pass it the RAND() function for it to work.
CREATE FUNCTION [dbo].[RangedRand]
(
@MyRand float
,@Lower bigint = 0
,@Upper bigint = 999
)
RETURNS bigint
AS
BEGIN
DECLARE @Random BIGINT
SELECT @Random = ROUND(((@Upper - @Lower) * @MyRand + @Lower), 0)
RETURN @Random
END
GO
--Here is how it works.
--Create a test table for Random values
CREATE TABLE #MySample
(
RID INT IDENTITY(1,1) Primary Key
,MyValue bigint
)
GO
-- Let's use the function to populate the value column
INSERT INTO #MySample
(MyValue)
SELECT dbo.RangedRand(RAND(), 0, 100)
GO 1000
-- Let's look at what we get.
SELECT RID, MyValue
FROM #MySample
--ORDER BY MyValue -- Use this "Order By" to see the distribution of the random values
-- Let's use the function again to get a random row from the table
DECLARE @MyMAXID int
SELECT @MyMAXID = MAX(RID)
FROM #MySample
SELECT RID, MyValue
FROM #MySample
WHERE RID = dbo.RangedRand(RAND(), 1, @MyMAXID)
DROP TABLE #MySample
--I hope this helps.
I have a field that is defined as follows:
class Subcategory extends BaseSubcategory {}
abstract class BaseSubcategory extends Doctrine_Record
{
public function setTableDefinition()
{
// ...
$this->hasColumn('meta_description', 'string', 255);
// ...
}
// ...
}
Here's what the table looks like:
mysql> DESCRIBE subcategory;
+----------------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
[...]
| meta_description | varchar(255) | YES | | NULL | |
[...]
+----------------------+------------------+------+-----+---------+----------------+
10 rows in set (0.00 sec)
Here's my code to save a record:
$m = new Subcategory;
// ...
$m->meta_description = null;
$m->save();
I'm getting the following validation error:
* 1 validator failed on meta_description (length)
Why is this happening?
The code samples above do not tell the whole story. I was being misled by an earlier save, in which the meta_description field was being overloaded with over 255 characters. False alarm!