class Defect < ApplicationRecord
  has_many :work_orders, dependent: :destroy
end

class WorkOrder < ApplicationRecord
  belongs_to :defect
  before_save :default_values

  def default_values
    self.running_number = defect.work_orders.maximum(:running_number).to_i + 1 if new_record?
  end
end
Ideally, the code works like this:
Defect A
- Work Order running_number 1
- Work Order running_number 2
- Work Order running_number 3
Defect B
- Work Order running_number 1
- Work Order running_number 2
- Work Order running_number 3
However, when multiple users concurrently save different WorkOrder objects that belong to the same defect, the running_number goes haywire, because the maximum running_number is computed from already-saved data only.
How do I make the running_number save properly?
The issue is that your concurrent saves read the same maximum running_number, so you get duplicate running_numbers for the work orders.
You can solve it in two ways:
1. Set a unique constraint on running_number and defect_id, and retry on conflict.
2. Acquire a lock on the work_orders table until you've committed the new work order.
To set the unique constraint in a Rails migration: add_index :work_orders, [:defect_id, :running_number], unique: true. Then just retry the save when it raises a uniqueness error.
Assuming you're using Postgres:
begin
  # ... build the work order
  work_order.save
rescue ActiveRecord::RecordNotUnique
  # ActiveRecord wraps PG::UniqueViolation in RecordNotUnique; on retry the
  # before_save callback recomputes running_number from the current maximum
  retry
end
Using retry will re-run the block until no unique violation is raised. This could loop forever if some other unique constraint on the record is being violated, so make sure the error is caused by the running_number and nothing else.
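A bounded version of that retry is safer. Here is a minimal plain-Ruby sketch (with_bounded_retries and its parameters are illustrative names, not Rails API; in the Rails code above you would pass ActiveRecord::RecordNotUnique as the error class):

```ruby
# Retry the block up to max_attempts times when the given error class is
# raised; after that, re-raise instead of looping forever.
def with_bounded_retries(max_attempts, error_class)
  attempts = 0
  begin
    yield
  rescue error_class
    attempts += 1
    retry if attempts < max_attempts
    raise
  end
end
```

This keeps the convenience of retry while turning a potential infinite loop into a loud failure after a few attempts.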
The other way is to acquire a lock to prevent the race condition. As it's a database table that is the shared resource, you acquire a table lock to ensure no other process is using the work_orders table while you are calculating the next running_number and saving the record.
Assuming you're using Postgres (see the explicit-locking docs):
ActiveRecord::Base.transaction do
  ActiveRecord::Base.connection.execute('LOCK work_orders IN ACCESS EXCLUSIVE MODE')
  # save while the lock is held, so the before_save callback
  # computes the maximum running_number safely
  work_order.save
end
Acquiring a table lock in this mode prevents all access to the table from other connections. It is released when the transaction commits or rolls back, but if the Ruby process is killed before the transaction block completes, other connections stay blocked until Postgres notices the broken connection and rolls the transaction back.
Related
In our application, a user can be entered into a campaign either once or multiple times, based on the campaign's configuration (single time or multiple times).
There are a User model, a Campaign model, and a campaigns_users join model. If the configuration is single time, campaigns_users should have only one record per user; if the campaign is configured as multiple times, there can be many records for the same user and the same campaign.
Due to concurrent processing, the record is sometimes inserted twice. We have an application-level check for whether the user has already entered the campaign, but in some cases two processes run at the same time, both check the subscription, and the user gets subscribed twice even though the configuration is single time.
class User < ApplicationRecord
  def already_subscribed?(campaign)
    campaign.campaigns_users.exists?(user_id: id)
  end
end
In the job:
def perform(user_id, campaign_id)
  campaign = Campaign.find_by(id: campaign_id)
  user = User.find_by(id: user_id)
  return if campaign.config == 'single' && user.already_subscribed?(campaign)
  # Other logic
end
I have looked for solutions for avoiding duplicate entries, and the usual answer is to add a UNIQUE constraint. But in my case the user can be entered a single time or multiple times depending on the config, so a plain unique index won't work. What is the best way to avoid creating the duplicate record?
I ended up using Postgres advisory locks (https://vladmihalcea.com/how-do-postgresql-advisory-locks-work/), via the https://github.com/ClosureTree/with_advisory_lock gem:
def perform(user_id, campaign_id)
  campaign = Campaign.find_by(id: campaign_id)
  user = User.find_by(id: user_id)
  # Use the ids as the lock name, so other jobs only wait when they touch
  # the same user/campaign pair
  CampaignUser.with_advisory_lock("#{user.id}-#{campaign.id}") do
    return if campaign.config == 'single' && user.already_subscribed?(campaign)
    # Other logic
  end
end
I am updating an existing table with something like this:
def upload_airports_mariadb(city, ...)
  airport = Airport.find_or_create(City: city)
  airport.AirportID = airportid
  airport.TimeZone = timezone
  airport.save
end
While airport is a simple model:
require 'sequel'

class Airport < Sequel::Model
  set_primary_key [:City]
  # Allow setting the primary key via mass assignment.
  unrestrict_primary_key
end
Used in a web UI for a single access per click this is fine, but when the same code runs in a loop for a huge number of updates it is terribly slow.
Is this the price for not reading the whole table, adjusting it, and writing it back in a single blob?
To be honest, I enjoy the object-like feeling and would rather not deal with raw tables.
The observation that makes me hopeful is that the speed depends on my network connection type.
Is there a way to tell Sequel to keep the connection open? Updating 7k values takes up to 40 minutes, which is horribly slow.
Currently I just call:
@db = Sequel.connect("mysql2://path")
Why is it so slow?
class User
  include Mongoid::Document
  field :email, type: String
  validates_uniqueness_of :email
end
Although Mongoid supports atomic operations, I do not see one for insert.
Since User.create is not atomic, it seems that 2 Users could be created with the same email address simultaneously.
So, what is a good way to ensure that 2 users do not register the same email address simultaneously?
I can see one solution is to use a unique DB index, but are there any other good ways of doing this?
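The unique index really is the reliable option, and Mongoid lets you declare it on the model itself. A sketch, assuming Mongoid's index macro (the index still has to be built, e.g. with rake db:mongoid:create_indexes):

```ruby
class User
  include Mongoid::Document
  field :email, type: String
  validates_uniqueness_of :email

  # The validation only narrows the race window with an extra query; the
  # unique index is what actually stops two simultaneous inserts from both
  # succeeding.
  index({ email: 1 }, { unique: true })
end
```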
I have two models, User and Client, with the following relationship:
User has_many :clients
Client belongs_to :user
How can I make all registered users get :client_id => "1" by default?
So, you want all new users to default to the first client. You don't want to hard-code a default id number; you want to set it to the id of the first Client in your clients table.
In your users_controller#new action, all you have to do is set the client_id field to the id of the first client, like so:
class UsersController < ApplicationController
  ...
  def new
    @user = User.new(:client_id => Client.first.id)
  end
end
This way, unless the user explicitly changes the value themselves, every new User record points to the first Client record in the database when it is saved.
The reason you don't want to hard-code 1 is that if you ever destroy that client, a client_id of 1 will point to a non-existent record, and the association will break for every user created after that. Even if you think that situation will never happen, it's better to write your code so it can handle it than to assume it never will.
According to this (older) post, these Rails 3 dynamic finders have race conditions. Something like
User.find_or_create_by_username(:username => 'uuu', :password => 'xxx')
could create two records under some conditions, according to the post.
Is this still relevant for Rails 3.0+? Thanks
Yes, it is. Between the time the find is executed and the object is created, a second statement can execute in parallel; there is no exclusive lock.
The best way to prevent this is to add a uniqueness validation in your model and a unique index in your database. That way, the database will raise an error if you try to create two records with the same fields.
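That create-then-rescue pattern can be sketched framework-agnostically (find_or_create and the injected error class are illustrative stand-ins; in Rails the rescued class would be ActiveRecord::RecordNotUnique):

```ruby
# Try to find the row; if absent, create it. If the create loses a race and
# trips the unique index, the row must exist now, so find it again.
def find_or_create(find, create, duplicate_error)
  find.call || create.call
rescue duplicate_error
  find.call
end
```

The unique index is the real guarantee; the rescue just turns the losing writer's error into a successful lookup.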