Using existing entries in an Ecto changeset function - validation

I have a business_hours table (using MySQL) which contains the following fields:
location_id (foreign key)
starting_time (float [e.g. 17 = 5pm, 17.5 = 5:30pm])
ending_time (float)
day (integer [0 = monday, 6 = sunday])
My question is: how would I go about creating an Ecto changeset function to validate that a new business_hours entry does not overlap any existing entry? I know how I would check for this in general (see below), but I am unsure how to integrate it into a changeset function to validate this server-side. Two entries a and b overlap when:
a.starting_time < b.ending_time and a.ending_time > b.starting_time and a.day == b.day and a.location_id == b.location_id

Remember that a changeset is a struct like any other, and the built-in validation functions accept an %Ecto.Changeset{} struct as input. You can write your own functions that also accept it as input, and you can add errors or values by modifying the changeset manually or by using the provided helper functions (e.g. Ecto.Changeset.add_error/3).
For example, in your Ecto schema module, the convention is to do validation within a changeset/2 function (but you could put this logic anywhere):
def changeset(existing_data, attrs) do
  existing_data
  # force_changes: true is a helpful option if you need to have all
  # values present for evaluation even if they are unchanged from what's
  # already in the database
  |> cast(attrs, [:location_id, :starting_time, :ending_time, :day], force_changes: true)
  |> validate_required([:location_id, :starting_time, :ending_time, :day])
  |> foreign_key_constraint(:location_id)
  |> custom_business_logic()
end
# This function may require force_changes: true
defp custom_business_logic(
       %Ecto.Changeset{changes: %{starting_time: starting_time, ending_time: ending_time, day: day}} = changeset
     ) do
  case check_for_overlap(starting_time, ending_time, day) do
    :ok -> changeset
    _ -> add_error(changeset, :starting_time, "Uh oh... this time overlapped with some other location")
  end
end

# Fall-through clause so the function doesn't raise when the relevant
# fields are absent from changes
defp custom_business_logic(changeset), do: changeset
defp check_for_overlap(starting_time, ending_time, day) do
  # do custom logic check here, e.g. query the database
  # return :ok or {:error, "Something"}
end
From your description, you may need to have your custom function query the database, so that query could be housed in one of the custom validation functions.
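For example, check_for_overlap could run an exists-style query using the overlap condition from the question. This is only a sketch: it assumes a BusinessHours schema and a Repo module (names not given in the question), and adds location_id as a fourth argument since the uniqueness is per location:

```elixir
import Ecto.Query

# Sketch only: BusinessHours and Repo are assumed module names.
defp check_for_overlap(starting_time, ending_time, day, location_id) do
  overlaps? =
    BusinessHours
    |> where([b], b.day == ^day and b.location_id == ^location_id)
    |> where([b], b.starting_time < ^ending_time and b.ending_time > ^starting_time)
    |> Repo.exists?()

  if overlaps?, do: {:error, "overlaps an existing entry"}, else: :ok
end
```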

Related

InfluxDB: How to convert a field to a tag in InfluxDB v2.0

We need to convert a field to a tag in InfluxDB v2.0 but are not able to find a proper solution. Can someone help me achieve this?
The solution we found was to create a new measurement by altering the fields and tags of the existing measurement, but we are not able to achieve this using the Flux language.
Using the Flux query below we can copy the data from one measurement to another, but we are not able to change the field to a tag while adding data to the new measurement.
from(bucket: "bucket_name")
  |> range(start: -10y)
  |> filter(fn: (r) => r._measurement == "cu_om")
  |> aggregateWindow(every: 5s, fn: last, createEmpty: false)
  |> yield(name: "last")
  |> set(key: "_measurement", value: "cu_om_new1")
  |> to(org: "org_name", bucket: "bucket_name")
Any help appreciated.
You're almost there with your original code; the to() function has extra parameters that allow this.
If you already have a column whose values should become a tag, you can list it in tagColumns in to().
Also, the new tag(s) must be string(s).
|> to(
  bucket: "NewBucketName",
  tagColumns: ["NewTagName"],
  fieldFn: (r) => ({ "SomeValue": r._value })
)
Have a look at writing pivoted data to InfluxDB, maybe that's what you need. Using this method, you have control over which columns are written as fields and which as tags:
Use experimental.to() to write pivoted data to InfluxDB. Input data must have the following columns:
_time
_measurement
All columns in the group key other than _time and _measurement are written to InfluxDB as tags. Columns not in the group key are written to InfluxDB as fields.
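Putting that together, a sketch of the pivoted-write approach might look like the following ("former_field_name" is a placeholder for the field you want to promote, and remember its values must be strings to be valid tag values):

```flux
import "experimental"

from(bucket: "bucket_name")
  |> range(start: -10y)
  |> filter(fn: (r) => r._measurement == "cu_om")
  // pivot turns each field into its own column
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> set(key: "_measurement", value: "cu_om_new1")
  // columns in the group key (other than _time and _measurement) are
  // written as tags, so group by the column you want to become a tag
  |> group(columns: ["former_field_name"], mode: "by")
  |> experimental.to(org: "org_name", bucket: "bucket_name")
```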

How to update attributes while doing a duplicate in Rails

We are creating a scheduling app and attempting to build a copy/paste function for schedules from week to week. I am trying to figure out how to duplicate a schedule for a certain period of time while updating the attributes upon paste. Right now I can copy the schedule, but when running it in Postman, the dates and times stay exactly the same (as we would expect with a .dup). I believe it would be best to set the start/end times to nil, and then upon paste the attributes would get updated at that time?
Here is the function I have so far:
def copy
  set_calendar
  if params["start_date"] && params["end_date"]
    start_date = params["start_date"].to_date
    end_date = params["end_date"].to_date
    if @calendar.users.owners.include?(current_user) || @calendar.users.managers.include?(current_user)
      @past_shifts = Shift.where(calendar_id: @calendar.id, start_time: start_date.beginning_of_day..end_date.end_of_day).to_a
      @past_shifts.each do |past_shift|
        shift = past_shift.dup
        shift.users = past_shift.users
        shift.update(shift_params)
        shift.save
      end
      render json: "copied", status: :ok
    else
      render json: "You do not have access to copy shifts", status: :unauthorized
    end
  else
    render json: @usershift.errors, status: :unprocessable_entity
  end
end
The shift.update(shift_params) is the part that needs to update the start and end times. Here are the shift params:
def shift_params
  params.permit(:start_time, :end_time, :calendar_id, :capacity, :published)
end
As far as relationship setups go, this method lives in the shifts controller. Shift has many users through usershifts, user has many shifts through usershifts, and the usershift model belongs to both.
Just curious: are you sure the params for your copy method contain values for start_time and end_time? If so, why not use them directly:
shift.start_time = params['start_time']
shift.end_time = params['end_time']
Using shift_params you would also update the other three attributes (:calendar_id, :capacity, :published); it's not clear that is necessary in this case.
Using the shift.update method here is not reasonable: it works with an existing record and saves the updated attributes to the database. In your case the record is new, and you save all the changes by calling shift.save later.
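If the goal is just "copy everything, then override the pasted week's times", the pattern is: dup, assign the new times, save. A plain-Ruby sketch of the semantics (no Rails; OpenStruct stands in for the model, and all values are made up):

```ruby
require "ostruct"

past_shift = OpenStruct.new(start_time: "2023-01-02 09:00",
                            end_time:   "2023-01-02 17:00",
                            capacity:   3)

shift = past_shift.dup                  # copies all attributes
shift.start_time = "2023-01-09 09:00"   # override only the pasted times
shift.end_time   = "2023-01-09 17:00"

# capacity carries over from the original; the original stays untouched
copied_capacity = shift.capacity
original_start  = past_shift.start_time
```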

Rails 4: custom validations with external records

Hi, I have a model called PurchasingGroup; a purchasing group has many Goals.
The Goal model has two attributes: no_of_users and discount.
I need to validate that the goals are consecutive. For example, if I create a Goal with no_of_users = 10 and discount = 15, then the next goal I create must have greater values; otherwise I have to show an error to the user. Right now I'm doing the validation in the create action of the controller. I know that is bad practice, so I want to know how to write this validation properly; I could not achieve it using custom validations at the model level.
I need to access the purchasing group and then check whether the last goal's values are greater than or equal to the values of the new goal.
Below is the validation I have in the controller; it works, but I want to do it right:
def create
  respond_to do |format|
    @purchasing_group = PurchasingGroup.find params[:purchasing_group_id]
    @goal = Goal.new goal_params
    @error_messages = ""
    if not @purchasing_group.goals.empty?
      if @purchasing_group.goals.last.no_of_users >= @goal.no_of_users
        @error_messages = "The goals are consecutive! No. Users: must be greater than the previous goal value"
      end
      if @purchasing_group.goals.last.discount >= @goal.discount
        @error_messages = "#{@error_messages}\nThe goals are consecutive! discount: must be greater than the previous goal value"
      end
    end
    # if there are no errors then we save the object
    if @error_messages.empty?
      if @goal.save
        @goal.update_attributes purchasing_group_id: params[:purchasing_group_id]
      end
    end
    # In a js template I handle the errors; that is not relevant for this question.
    format.js
  end
end
If I understood you right, then:
validate :count_no_of_users

private

def count_no_of_users
  last_goal = PurchasingGroup.find(purchasing_group_id).goals.last
  return if last_goal.nil?
  errors.add(:no_of_users, "Should be more than #{last_goal.no_of_users}") if no_of_users <= last_goal.no_of_users
end
Do the same for discount; you can check both in a single validation or in separate ones.

Execution Order In a Method with Rails

I have a question regarding the execution order in a Rails method. Here is what I do in my model:
def update_fields
  FillMethods.fill_info("BadgeUser", "Badge", self.id, self.badge_id, "badge_")
  self.reload
  self.update_attributes(:completed => true) if self.badge_goal == 0
end
I have an object; I apply FillMethods.fill_info to it to fill some fields and save it. Then I want to check whether badge_goal == 0 to mark it as completed.
The issue is that if I don't call self.reload, self is not updated: the changes are in the database, but self is the same as before the FillMethods call. If I do call self.reload, self is correct and can be marked as completed.
My question is: will Ruby wait for FillMethods to finish before reloading self, or will it reload self before FillMethods is done?
Why is self not correct if I don't call self.reload?
FillMethods is a module in my lib directory.
Thanks for your help.
That's my fill method:
module FillMethods
  def self.fill_info(model1, model2, id1, id2, string)
    parameters1 = model1.constantize.attr_accessible[:default].to_a.map { |s| s.dup }
    parameters1.delete_if { |s| !s.start_with?(string) }
    parameters2 = parameters1.map { |s| s.split(string)[1] }
    h = Hash[parameters1.zip parameters2]

    object1 = model1.constantize.find(id1)
    object2 = model2.constantize.find(id2)

    h.each do |parameter1, parameter2|
      object1.update_attribute(parameter1.to_sym, object2.send(parameter2))
    end

    return object1
  end
end
The goal of the method is to fill the BadgeUser table with all the badge info.
For each column in my Badge table (like name) I have a badge_-prefixed column (like badge_name) in my BadgeUser table.
Thanks,
I cannot be sure without seeing the code, but judging from the parameters you pass, I guess that FillMethods.fill_info retrieves the record from the DB again, using the third parameter, id. It then changes the record and stores it back.
Your object (self) has no way, under ActiveRecord or similar, of knowing that the DB was modified somewhere, somehow.
Note that you are retrieving the same record from the DB some three times instead of once.
If you change FillMethods.fill_info to instead accept a record (self itself), and modify it, then self would be in the new state.
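To see why the reload is needed even though execution is sequential, here is a tiny plain-Ruby sketch (DB and BadgeUser are illustrative stand-ins, not your real classes): the row changes in the "database", but the already-loaded object keeps its old snapshot until it reloads.

```ruby
DB = { 1 => { badge_goal: 1 } }  # stand-in for the database

class BadgeUser
  attr_reader :id, :badge_goal

  def initialize(id)
    @id = id
    reload
  end

  # re-read the row, like ActiveRecord's reload
  def reload
    @badge_goal = DB[@id][:badge_goal]
    self
  end
end

user = BadgeUser.new(1)
DB[1][:badge_goal] = 0                # FillMethods updates the row directly
stale_value = user.badge_goal         # still 1: the in-memory snapshot is stale
fresh_value = user.reload.badge_goal  # 0 after reloading
```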
Addendum
Ruby code is executed sequentially in a single thread unless you explicitly start a new one, so yes, fill_info is executed completely before the rest of update_fields continues.

What is the best way to pre-filter user access for SQLAlchemy queries?

I have been looking at the SQLAlchemy recipes on their wiki, but I don't know which one best implements what I am trying to do.
Every row in my tables has a user_id associated with it. Right now, for every query, I filter by the id of the currently logged-in user, then by the criteria I am interested in. My concern is that developers might forget to add this filter to a query (a huge security risk). Therefore, I would like to set a global filter based on the current user's admin rights to limit what the logged-in user can see.
Appreciate your help. Thanks.
Below is a simplified, redefined query constructor that filters all model queries (including relations). You can pass it as the query_cls parameter to sessionmaker. The user ID parameter doesn't need to be global, since the session is constructed at a point where it is already available.
class HackedQuery(Query):

    def get(self, ident):
        # Use default implementation when there is no condition
        if not self._criterion:
            return Query.get(self, ident)
        # Copied from the Query implementation with some changes.
        if hasattr(ident, '__composite_values__'):
            ident = ident.__composite_values__()
        mapper = self._only_mapper_zero(
            "get() can only be used against a single mapped class.")
        key = mapper.identity_key_from_primary_key(ident)
        if ident is None:
            if key is not None:
                ident = key[1]
        else:
            from sqlalchemy import util
            ident = util.to_list(ident)
        if ident is not None:
            columns = list(mapper.primary_key)
            if len(columns) != len(ident):
                raise TypeError("Number of values doesn't match number "
                                "of columns in primary key")
            params = {}
            for column, value in zip(columns, ident):
                params[column.key] = value
            return self.filter_by(**params).first()


def QueryPublic(entities, session=None):
    # It's not directly related to the problem, but is useful too.
    query = HackedQuery(entities, session).with_polymorphic('*')
    # The version for several entities needs thorough testing, so we
    # don't use it yet.
    assert len(entities) == 1, entities
    # _class_to_mapper is a private helper from sqlalchemy.orm.util
    cls = _class_to_mapper(entities[0]).class_
    public_condition = getattr(cls, 'public_condition', None)
    if public_condition is not None:
        query = query.filter(public_condition)
    return query
It works for single-model queries only, and there is a lot of work left to make it suitable for other cases. I'd like to see an elaborated version, since this is must-have functionality for most web applications. It uses a fixed condition stored in each model class, so you will have to modify it to your needs.
Here is a very naive implementation that assumes the logged-in user is stored on the attribute/property self.current_user.
class YourBaseRequestHandler(object):

    @property
    def current_user(self):
        """The current user logged in."""

    def query(self, session, entities):
        """Use this method instead of :method:`Session.query()
        <sqlalchemy.orm.session.Session.query>`.
        """
        return session.query(entities).filter_by(user_id=self.current_user.id)
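For comparison, modern SQLAlchemy (1.4+) supports this idea out of the box: a do_orm_execute event handler can attach a with_loader_criteria option to every ORM select, so the user_id filter cannot be forgotten. A minimal, self-contained sketch (the Note model and CURRENT_USER_ID are illustrative, not from the question):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base, with_loader_criteria

Base = declarative_base()

class Note(Base):
    __tablename__ = "notes"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, nullable=False)
    body = Column(String)

CURRENT_USER_ID = 1  # would come from the request/session in a real app

@event.listens_for(Session, "do_orm_execute")
def limit_to_current_user(state):
    # Runs for every ORM statement; adds the filter to all selects.
    if state.is_select:
        state.statement = state.statement.options(
            with_loader_criteria(Note, Note.user_id == CURRENT_USER_ID)
        )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Note(user_id=1, body="mine"), Note(user_id=2, body="theirs")])
    session.commit()
    rows = session.query(Note).all()

print([n.body for n in rows])  # only the current user's rows survive
```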
I wrote an SQLAlchemy extension that I think does what you are describing: https://github.com/mwhite/multialchemy
It does this by proxying changes to the Query._from_obj and QueryContext._froms properties, which is where the tables to select from ultimately get set.
