Update unique values into Postgres Array - spring

I have a situation where I update a Postgres table column of type bigint[]. The array should contain only unique numbers, no matter how often the update query is fired.
The query is as follows:
UPDATE book_shelf
SET book_id = book_id || array[CAST(:bookID AS BIGINT)], updated_at = now()
WHERE user_id = :userID AND shelf_name = :shelfName
When the above query is fired, it appends the number to the array even if it is already present, which I don't want. The array should hold only unique values. Please help me.

You can check whether the value already exists in the array before appending it:
UPDATE book_shelf
SET book_id = CASE
        WHEN CAST(:bookID AS BIGINT) = ANY(book_id) THEN book_id
        ELSE ARRAY_APPEND(book_id, CAST(:bookID AS BIGINT))
    END,
    updated_at = now()
WHERE user_id = :userID AND shelf_name = :shelfName
Of course, if updated_at should only be set when book_id actually changes, put the check in the WHERE clause instead, so the row is not updated unnecessarily:
UPDATE book_shelf
SET book_id = ARRAY_APPEND(book_id, CAST(:bookID AS BIGINT)), updated_at = now()
WHERE user_id = :userID
AND shelf_name = :shelfName
AND NOT CAST(:bookID AS BIGINT) = ANY(book_id)
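If duplicates have already crept into existing rows, a one-time cleanup can remove them. This is only a sketch, assuming no concurrent writers are appending at the same time:

```sql
-- One-time cleanup: rebuild each array with duplicates removed.
-- Note: ARRAY_AGG(DISTINCT ...) does not preserve the original element order.
UPDATE book_shelf
SET book_id = (SELECT ARRAY_AGG(DISTINCT b) FROM UNNEST(book_id) AS b)
WHERE book_id IS NOT NULL;
```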

Related

Getting ORA-00001 unique constraint violated error when calling a trigger

create or replace TRIGGER "DB"."TRIG_PERIOD_TRUANCY_INS_UPD"
AFTER UPDATE OR INSERT
ON AT_PERIOD_ATTENDANCE_RECORDS
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
IF UPDATING THEN
delete at_period_truancy where period_attendance_records_id = :old.period_attendance_records_id;
END IF;
insert into at_period_truancy (period_attendance_records_id, district_number, school_id, student_id, calendar_date, school_year, minutes)
select :new.period_attendance_records_id, :new.district_number, :new.school_id, :new.student_id, :new.calendar_date, :new.school_year,
(case when :new.attendance_status = 'A' then period.end_time - period.begin_time
when coalesce(:new.tardy_time_in_time, period.begin_time) - period.begin_time >
period.end_time - coalesce(:new.tardy_time_out_time, period.end_time)
then coalesce(:new.tardy_time_in_time, period.begin_time) - period.begin_time
else period.end_time - coalesce(:new.tardy_time_out_time, period.end_time) end)*24*60
from ca_calendar cal
inner join ca_school_calendar calendar
on (cal.district_number = calendar.district_number
and cal.calendar_id = calendar.calendar_id )
inner join sc_class_meeting_pattern meeting
on (calendar.cycle_day_cd = meeting.cycle_day_cd)
inner join sc_class class
on (class.school_scheduling_param_id = meeting.school_scheduling_param_id
and class.class_id = meeting.class_id)
inner join sc_period_info period
on (meeting.school_scheduling_param_id = period.school_scheduling_param_id
and meeting.period = period.period)
where :new.district_number = cal.district_number
and cal.is_active_ind = 1
and :new.school_id = cal.school_id
and :new.school_year = cal.school_year
and :new.calendar_type_cd = cal.calendar_type_cd
and :new.track_number = cal.track_number
and :new.calendar_date = calendar.calendar_date
and :new.school_id = class.school_id
and :new.class_id = class.class_id
and 1 in (select use_in_truancy_report_ind
from enum_at_absence_reason_code
where district_number = :new.district_number
and school_id = :new.school_id
and value = :new.absence_reason_code
union all
select use_in_truancy_report_ind
from enum_at_tardy_reason_code
where district_number = :new.district_number
and school_id = :new.school_id
and value = :new.tardy_reason_code);
END TRIG_PERIOD_TRUANCY_INS_UPD;
This is the trigger I am using. When the update statement runs, this trigger is invoked, and when I pass tardy_reason_code as 'UN' this error occurs. It executes without any issues when tardy_reason_code has other values.
The trigger inserts into the at_period_truancy table.
As Oracle raises ORA-00001 (unique constraint violated), you are trying to insert a primary (or unique) key value which already exists in the table.
You didn't post the CREATE TABLE statement, so it is difficult to guess which columns make up the primary key, but you should know it: check which values are already in the table, compare them to the values being inserted, and you'll see the conflict. Also check whether the SELECT feeding the INSERT can return more than one row for a given :new.period_attendance_records_id; a join that fans out would produce duplicate keys within a single statement.
Maybe you'll have to modify the primary key (add more columns? abandon the current primary key and use a sequence or identity column instead?), or change the way you insert values into the table.
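As a hypothetical starting point (the real key columns are unknown here, so adjust the column list accordingly), a query like this shows which key values are already duplicated in the target table:

```sql
-- Hypothetical diagnostic, assuming period_attendance_records_id is (part of) the key.
SELECT period_attendance_records_id, COUNT(*)
FROM at_period_truancy
GROUP BY period_attendance_records_id
HAVING COUNT(*) > 1;
```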

Using a raw query in the where method causes an error in query generation

I have an Eloquent query:
Role::where("(company_id = Auth::user()->company_id or name = 'admin')and id in(2,3)")->pluck('name');
I expected the generated SQL to be:
select `name` from `roles` where ( company_id = 1 or name = admin ) and id IN (2, 3) and `roles`.`deleted_at` is null
But it executes as
select `name` from `roles` where ( company_id = 1 or name = admin ) and id IN (2, 3) is null and `roles`.`deleted_at` is null
Can anyone explain why the extra is null condition is applied to the query?
Note: I am using soft deletes.
You should use whereRaw instead of where. When where receives a single string argument, Laravel treats the whole string as a column name and compares it against null, which is where the extra is null comes from:
Role::whereRaw("(company_id = Auth::user()->company_id or name = 'admin')and id in(2,3)")->pluck('name');
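Note also that inside the double-quoted string, Auth::user()->company_id is not interpolated by PHP, so the database receives it literally. A safer sketch uses whereRaw's parameter bindings (assuming the same table and columns as above):

```php
Role::whereRaw(
    "(company_id = ? or name = 'admin') and id in (2, 3)",
    [Auth::user()->company_id]
)->pluck('name');
```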

How to get records from a table if it does not exist in another related table laravel query builder

I have two tables
tb_teachers
-----------
id
name
school_id
tb_class_to_teachers
--------------------
id
teacher_id
class_id
assigned_status
school_id
How can I get records from tb_teachers whose (teacher_id, school_id) pair does not exist in the related table tb_class_to_teachers, using the Laravel query builder?
Assuming you have set up your relationship correctly, you can use doesntHave:
$teachers = App\Teachers::doesntHave('class')->get();
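For reference, a minimal sketch of the relationship that doesntHave('class') assumes on the Teachers model (the model and foreign-key names here are assumptions; PHP 7+ allows reserved words such as class as method names):

```php
// In App\Teachers (hypothetical model name and foreign key):
public function class()
{
    return $this->hasMany(ClassToTeacher::class, 'teacher_id');
}
```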
try this, using a NOT EXISTS subquery that matches on both teacher_id and school_id:
$teachers = DB::table('tb_teachers')
    ->whereNotExists(function ($query) {
        $query->select(DB::raw(1))
              ->from('tb_class_to_teachers')
              ->whereColumn('tb_class_to_teachers.teacher_id', 'tb_teachers.id')
              ->whereColumn('tb_class_to_teachers.school_id', 'tb_teachers.school_id');
    })
    ->get();

How can I optimize these Postgres queries and DB performance?

I need some help optimizing the following two queries, which are similar but select slightly different data. Here is my table definition:
CREATE TABLE public.rates (
rate_id bigserial NOT NULL,
prefix varchar(50),
rate_name varchar(30),
rate numeric(8,6),
intrastate_cost numeric(8,6),
interstate_cost numeric(8,6),
status char(3) DEFAULT 'act'::bpchar,
min_duration integer,
call_increment integer,
connection_cost numeric(8,6),
rate_type varchar(3) DEFAULT 'lcr'::character varying,
owner_type varchar(10),
start_date timestamp WITHOUT TIME ZONE,
end_date timestamp WITHOUT TIME ZONE,
rev integer,
ratecard_id integer,
/* Keys */
CONSTRAINT rates_pkey
PRIMARY KEY (rate_id)
) WITH (
OIDS = FALSE
);
and the two queries I am using:
SELECT
rates.* ,
rc.ratecard_prefix ,
rc.default_lrn ,
rc.lrn_lookup_method ,
customers.customer_id ,
customers.balance ,
customers.channels AS customer_channels ,
customers.cps AS customer_cps ,
customers.balance AS customer_balance
FROM
rates
JOIN ratecards rc
ON rc.card_type = 'customer' AND
rc.ratecard_id = rates.ratecard_id
JOIN customers
ON rc.customer_id = customers.customer_id
WHERE
customers.status = 'act' AND
rc.status = 'act' AND
rc.customer_id = 'AB8KA191' AND
owner_type = 'customer' AND
'17606109973' LIKE concat (rc.ratecard_prefix, rates.prefix, '%') AND
rates.status = 'act' AND
now() BETWEEN rates.start_date AND
rates.end_date AND
customers.balance > 0
ORDER BY
LENGTH(PREFIX) DESC LIMIT 1;
and the second one,
SELECT
*
FROM
rates
JOIN ratecards rc
ON rc.card_type = 'carrier' AND
rc.ratecard_id = rates.ratecard_id
JOIN carriers
ON rc.carrier_id = carriers.carrier_id
JOIN carrier_switches cswitch
ON carriers.carrier_id = cswitch.carrier_id
WHERE
rates.intrastate_cost < 0.011648 AND
owner_type = 'carrier' AND
'16093960411' LIKE concat (rates.prefix, '%') AND
rates.status = 'act' AND
carriers.status = 'act' AND
now() BETWEEN rates.start_date AND
rates.end_date AND
rates.intrastate_cost <> -1 AND
cswitch.enabled = 't' AND
rates.rate_type = 'lrn' AND
rates.min_duration >= 6
ORDER BY
rates.intrastate_cost ASC,
LENGTH(rates.prefix) DESC,
cswitch.priority DESC
I created an index on the field owner_type (not shown in the schema above), but query performance is not what I expected. CPU usage on the DB server becomes too high and everything starts to slow down. The EXPLAIN output for the first query is here and for the second one here.
When the table holds few records things work fine, naturally, but as the record count grows the CPU usage climbs. I currently have around 341,821 records in the table.
How can I improve query execution, or change the queries, in order to speed things up?
I have set enable_bitmapscan = off because it seems to give better performance; when it is on, every index scan is followed by a bitmap heap scan.
Things did ease up a little by changing the query to:
SELECT
rates.*,
rc.ratecard_prefix,
rc.default_lrn,
rc.lrn_lookup_method,
customers.customer_id,
customers.balance,
customers.channels AS customer_channels,
customers.cps AS customer_cps,
customers.balance AS customer_balance
FROM
rates
JOIN ratecards rc
ON rc.card_type = 'customer' AND
rc.ratecard_id = rates.ratecard_id
JOIN customers
ON rc.customer_id = customers.customer_id
WHERE
customers.status = 'act' AND
rc.status = 'act' AND
rc.customer_id = 'AB8KA191' AND
owner_type = 'customer' AND
(CONCAT (rc.ratecard_prefix, rates.prefix) IN ('16026813306',
'1602681330',
'160268133',
'16026813',
'1602681',
'160268',
'16026',
'1602',
'160',
'16',
'1')) AND
rates.status = 'act' AND
now() BETWEEN rates.start_date AND
rates.end_date AND
customers.balance > 0
ORDER BY
LENGTH(PREFIX) DESC LIMIT 1
Postgres.conf is here
But each Postgres process still takes around 25%+ CPU. I am now also using pgbouncer for connection pooling, but it is not helping.
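As a hedged suggestion (the right index depends on the EXPLAIN output and data distribution, which can only be guessed at here), a composite index covering the equality filters of the first query might reduce the scan cost:

```sql
-- Sketch only: a composite index matching the first query's equality predicates.
-- The column order assumes ratecard_id is the most selective filter.
CREATE INDEX idx_rates_card_status_owner
    ON rates (ratecard_id, status, owner_type);
```

The LIKE predicate on the concatenated prefix cannot use a plain btree index, so the goal here is only to narrow the candidate rows before that filter runs.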

Oracle Update statement with if conditions

I'm trying to merge three update statements into one.
"UPDATE DOT_WORKS SET START_DATE = :StartDate WHERE ID = :WorksId and END_DATE IS NULL;"
"UPDATE DOT_WORKS SET WORKS_TYPE = :WorksType WHERE ID = WorksId and WORKS_GROUP = :WorksGroup;"
"UPDATE DOT_WORKS SET WORKS_CONNECTION = :WorksConn WHERE ID = WorksId and WORKS_PLACE = :WorksPlace;"
I'm wondering whether there is a way to do that. The reason is to save round trips to the database: it's more efficient to call the database once instead of three times.
Thanks!
UPDATE DOT_WORKS
SET START_DATE = case when END_DATE IS NULL then :StartDate else START_DATE end,
WORKS_TYPE = case when WORKS_GROUP = :WorksGroup then :WorksType else WORKS_TYPE end,
WORKS_CONNECTION = case when WORKS_PLACE = :WorksPlace then :WorksConn else WORKS_CONNECTION end
WHERE ID = :WorksId
and
(
END_DATE IS NULL OR
WORKS_GROUP = :WorksGroup OR
WORKS_PLACE = :WorksPlace
)
