Execution Order In a Method with Rails - ruby

I have a question regarding the execution order in a Rails method. Here is what I do in my model:
def update_fields
  FillMethods.fill_info("BadgeUser", "Badge", self.id, self.badge_id, "badge_")
  self.reload
  self.update_attributes(:completed => true) if self.badge_goal == 0
end
I have an object; I apply FillMethods to it to fill some of its fields and save it. Then I want to check whether badge_goal == 0 to mark it as completed.
The issue is that if I don't call self.reload, the self object is not updated: the new values are in the database, but self is the same as it was before FillMethods ran. If I do call self.reload, self is correct and it can be marked as completed correctly.
My question is: will Ruby wait for FillMethods to finish before reloading self, or will it reload self before FillMethods is done?
Why is self not correct if I don't call self.reload?
The FillMethods is a module in my lib directory.
Thanks for your help,
That's my fill method:
module FillMethods
  def self.fill_info(model1, model2, id1, id2, string)
    parameters1 = model1.constantize.attr_accessible[:default].to_a.map { |s| s.dup }
    parameters1.delete_if { |s| !s.start_with?(string) }
    parameters2 = parameters1.map { |s| s.split(string)[1] }
    h = Hash[parameters1.zip parameters2]

    object1 = model1.constantize.find(id1)
    object2 = model2.constantize.find(id2)

    h.each do |parameter1, parameter2|
      object1.update_attribute(parameter1.to_sym, object2.send(parameter2))
    end

    return object1
  end
end
The goal of the method is to fill the BadgeUser table with all the badge info.
For each column in my Badge table (like name), I have a corresponding badge_-prefixed column (badge_name) in my BadgeUser table.
Thanks,

I cannot be sure without seeing the code, but judging from the parameters you pass, I guess that FillMethods.fill_info retrieves the record from the db again, using the third parameter id. It then changes that record and stores it back.
Your object (self) has no way, under ActiveRecord or similar, of knowing that the db was modified somewhere, somehow.
Note that you are retrieving the same record from the db about three times instead of once.
If you instead change FillMethods.fill_info to accept a record (self itself) and modify it, then self would already be in the new state.
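For example, here is a minimal sketch of that change (not the original module; it keeps the column-mapping idea, but assigns the values onto the record you already hold and saves once):

module FillMethods
  # Sketch: operates on the records themselves, so the caller's object (self)
  # is mutated in place and no reload is needed afterwards.
  def self.fill_info(object1, object2, string)
    parameters = object1.class.attr_accessible[:default].to_a.map { |s| s.dup }
    parameters.delete_if { |s| !s.start_with?(string) }

    parameters.each do |parameter1|
      parameter2 = parameter1.split(string)[1]
      object1.send("#{parameter1}=", object2.send(parameter2))
    end

    object1.save
    object1
  end
end

# In the model, something like:
#   FillMethods.fill_info(self, Badge.find(badge_id), "badge_")
# self then already carries the new values.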
Addendum
Ruby code is executed sequentially in a single thread unless you explicitly start a new thread, so yes, .fill_info is executed before continuing with the rest of update_fields.

Related

How to update attributes while doing a duplicate in Rails

We are creating a scheduling app and attempting to create a copy/paste function for schedules from week to week. I am trying to figure out how to duplicate a schedule for a certain period of time while updating the attributes upon paste. Right now I can copy the schedule, but when running it in Postman, the dates and times stay exactly the same (as we would expect with a .dup). I believe it would be best to set the start/end times to nil and then maybe have the attributes get updated upon paste?
Here is the function I have so far:
def copy
  set_calendar
  if params["start_date"] && params["end_date"]
    start_date = params["start_date"].to_date
    end_date = params["end_date"].to_date
    if @calendar.users.owners.include?(current_user) || @calendar.users.managers.include?(current_user)
      @past_shifts = Shift.where(calendar_id: @calendar.id, start_time: start_date.beginning_of_day..end_date.end_of_day).to_a
      if @past_shifts.present?
        @past_shifts.each do |past_shift|
          shift = past_shift.dup
          shift.users = past_shift.users
          shift.update(shift_params)
          shift.save
        end
        render json: "copied", status: :ok
      else
        render json: @usershift.errors, status: :unprocessable_entity
      end
    else
      render json: "You do not have access to copy shifts", status: :unauthorized
    end
  end
end
The shift.update(shift_params) is the part that needs to update the start and end times. Here are the shift params:
def shift_params
  params.permit(:start_time, :end_time, :calendar_id, :capacity, :published)
end
As far as relationship setups go, this method lives in the shifts controller. Shift has many users through usershifts, user has many shifts through usershifts, and the usershift model belongs to both; see the sketch below.
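A rough sketch of those associations as described (class names assumed from the description, not copied from the actual app):

class Shift < ActiveRecord::Base
  has_many :usershifts
  has_many :users, through: :usershifts
end

class User < ActiveRecord::Base
  has_many :usershifts
  has_many :shifts, through: :usershifts
end

class Usershift < ActiveRecord::Base
  belongs_to :shift
  belongs_to :user
end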
Just curious - are you sure params for your copy method contains values for start_time and end_time? If so, why not use them directly:
shift.start_time = params['start_time']
shift.end_time = params['end_time']
With shift_params you would also update the other 3 attributes: :calendar_id, :capacity and :published. Not sure if that is necessary in this case.
Using the shift.update method here is not reasonable: it works with an existing record and saves the updated attributes to the database. In your case the record is new, and you save all the changes by calling shift.save later.
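Put together, the copy loop might then look roughly like this (a sketch, assuming start_time and end_time really are present in params):

@past_shifts.each do |past_shift|
  shift = past_shift.dup
  shift.users = past_shift.users
  # set the new times directly on the unsaved duplicate instead of calling update
  shift.start_time = params['start_time']
  shift.end_time = params['end_time']
  shift.save
end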

Ruby check if a record exists in MongoDB

Just a simple check with the Ruby driver whether a record exists in the database. Something like:
main = db.collection(main)
record = main.find("record" => name)
if record?
  puts record exist
else
  dont exist
end
This doesn't work - can someone tell me how to do it?
The following will print true or false depending on whether the record exists:
puts main.record.where(record: name).exists?
This can be done with the collection method "find" as you did, but the selector must be a hash.
col = db.collection(main)
record = col.find({:property => value})
Find also accepts an optional hash of options.
Take a look at the documentation. http://api.mongodb.org/ruby/current/Mongo/Collection.html#find-instance_method
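For instance, a minimal sketch with the legacy Ruby driver (assuming db is already a connected Mongo::DB handle and the collection is called "main"):

col = db.collection("main")
record = col.find_one("record" => name)  # returns the matching document hash, or nil

if record
  puts "record exists"
else
  puts "record does not exist"
end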

Exposing "virtual" field in a tastypie view?

I want to create a view using tastypie to expose certain objects of the same type, but with the following three twists:
I need to get the objects using three separate queries;
I need to add a field which doesn't exist in the underlying model, and the value of that field depends on which of the queries it came from; and
The data will be per-user (so I need to hook in to one of the methods that gets a request).
I'm not clear on how to hook into the tastypie lifecycle to accomplish this. The recommended way for adding a "virtual" field is in the dehydrate method, which only knows about the bundle it's operating on.
Even worse, there's no official way to join querysets.
My problem would go away if I could get tastypie to accept something other than a queryset. In that case I could pass it a list of subclasses of my object, with the additional field added.
I'm open to any other sensible solution.
Edit: Added twist 3 - per-user data.
In the latest version you should override the dehydrate method, e.g.:
def dehydrate(self, bundle):
    bundle.data['full_name'] = bundle.obj.get_full_name()
    return bundle
Stumbled over a similar problem here. In my case, items in the list could be "checked" by the user.
When an item is retrieved by AJAX, its checked status is returned with the resource as a normal field.
When an item is saved to the server, the "checked" field from the resource is stored in the user's session.
First I thought the hydrate() and dehydrate() methods would be the best match for this job, but it turned out there are problems with accessing the request object in them. So I went with alter_detail_data_to_serialize()/alter_list_data_to_serialize() and obj_update(). I think there's no need to override obj_create(), since an item can't be checked when it's first created.
Here is the code, but note that it hasn't been properly tested yet.
class ItemResource(ModelResource):
    def get_object_checked_status(self, obj, request):
        if hasattr(request, 'session'):
            session = request.session
            session_data = session.get(get_item_session_key(obj), dict())
            return session_data.get('checked', False)
        return False

    def save_object_checked_status(self, obj, data, request):
        if hasattr(request, 'session'):
            session_key = get_item_session_key(obj)
            session_data = request.session.get(session_key, dict())
            session_data['checked'] = data.pop('checked', False)
            request.session[session_key] = session_data

    # Overridden methods
    def alter_detail_data_to_serialize(self, request, bundle):
        # object > resource
        bundle.data['checked'] = \
            self.get_object_checked_status(bundle.obj, request)
        return bundle

    def alter_list_data_to_serialize(self, request, to_be_serialized):
        # objects > resource
        for bundle in to_be_serialized['objects']:
            bundle.data['checked'] = \
                self.get_object_checked_status(bundle.obj, request)
        return to_be_serialized

    def obj_update(self, bundle, request=None, **kwargs):
        # resource > object
        self.save_object_checked_status(bundle.obj, bundle.data, request)
        return super(ItemResource, self) \
            .obj_update(bundle, request, **kwargs)


def get_item_session_key(obj):
    return 'item-%s' % obj.id
OK, so this is my solution. Code is below.
Points to note:
The work is basically all done in obj_get_list. That's where I run my queries, having access to the request.
I can return a list from obj_get_list.
I would probably have to override all of the other obj_* methods corresponding to the other operations (like obj_get, obj_create, etc) if I wanted them to be available.
Because I don't have a queryset in Meta, I need to provide an object_class to tell tastypie's introspection what fields to offer.
To expose my "virtual" attribute (which I create in obj_get_list), I need to add a field declaration for it.
I've commented out the filters and authorisation limits because I don't need them right now. I'd need to implement them myself if I needed them.
Code:
from tastypie.resources import ModelResource
from tastypie import fields
from models import *

import logging
logger = logging.getLogger(__name__)


class CompanyResource(ModelResource):
    role = fields.CharField(attribute='role')

    class Meta:
        allowed_methods = ['get']
        resource_name = 'companies'
        object_class = CompanyUK
        # should probably have some sort of authentication here quite soon

    # filters does nothing. If it matters, hook them up
    def obj_get_list(self, request=None, **kwargs):
        # filters = {}
        # if hasattr(request, 'GET'):
        #     # Grab a mutable copy.
        #     filters = request.GET.copy()
        # # Update with the provided kwargs.
        # filters.update(kwargs)
        # applicable_filters = self.build_filters(filters=filters)
        try:
            # base_object_list = self.get_object_list(request).filter(**applicable_filters)
            def add_role(role):
                def add_role_company(link):
                    company = link.company
                    company.role = role
                    return company
                return add_role_company

            director_of = map(add_role('director'), DirectorsIndividual.objects.filter(individual__user=request.user))
            member_of = map(add_role('member'), MembersIndividual.objects.filter(individual__user=request.user))
            manager_of = map(add_role('manager'), CompanyManager.objects.filter(user=request.user))
            base_object_list = director_of + member_of + manager_of
            return base_object_list  # self.apply_authorization_limits(request, base_object_list)
        except ValueError, e:
            raise BadRequest("Invalid resource lookup data provided (mismatched type).")
You can do something like this (not tested):
def alter_list_data_to_serialize(self, request, data):
    for index, row in enumerate(data['objects']):
        foo = Foo.objects.filter(baz=row.data['foo']).values()
        bar = Bar.objects.all().values()
        data['objects'][index].data['virtual_field'] = bar
    return data

Interacting With Class Objects in Ruby

How can I interact with objects I've created based on their given attributes in Ruby?
To give some context, I'm parsing a text file that might have several hundred entries like the following:
ASIN: B00137RNIQ
-------------------------Status Info-------------------------
Upload created: 2010-04-09 09:33:45
Upload state: Imported
Upload state id: 3
I can parse the above with regular expressions and use the data to create new objects in a "Product" class:
class Product
  attr_reader :asin, :creation_date, :upload_state, :upload_state_id

  def initialize(asin, creation_date, upload_state, upload_state_id)
    @asin = asin
    @creation_date = creation_date
    @upload_state = upload_state
    @upload_state_id = upload_state_id
  end
end
After parsing, the raw text from above will be stored in an object that looks like this:
[#<Product:0x00000101006ef8 @asin="B00137RNIQ", @creation_date="2010-04-09 09:33:45 ", @upload_state="Imported ", @upload_state_id="3">]
How can I then interact with the newly created class objects? For example, how might I pull all the creation dates for objects with an upload_state_id of 3? I get the feeling I'm going to have to write class methods, but I'm a bit stuck on where to start.
You would need to store the Product objects in a collection. I'll use an array:
product_collection = []

# keep adding parsed products into the collection, as many as there are
product_collection << parsed_product_obj

# next, select the subset where upload_state_id == 3
state_3_products = product_collection.select { |product| product.upload_state_id == 3 }
attr_reader is a declarative way of defining properties/attributes on your Product class, so you can access each value as obj.attribute, as I have done for upload_state_id above.
select picks the elements of the target collection that meet a specific criterion. Each element is assigned to product, and if the block evaluates to true the element is placed in the output collection.
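To get at the creation dates the question asks about, chain map after select; a small sketch (note that the parsed attributes are strings, so the id is converted with to_i before comparing):

creation_dates = product_collection.
  select { |product| product.upload_state_id.to_i == 3 }.
  map(&:creation_date)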

How to remember results from querying SQLAlchemy relations (to implement caching)?

Suppose I have a mapped class Article. It has a relation category that does a query each time I access it (article.category would issue a query to get the category of the article).
How do I proxy the article.category call so that the result is queried from the database, then remembered and then returned?
Thanks, Boda Cydo.
Does SA really issue a query every time you access the relation in the same session? IMO, it should not happen, as the result should get cached automatically within the session.
In order to check that no SQL is issued, just turn on logging and see for yourself:
metadata.bind.echo = 'debug'
c = session.query(Child).first() # issues SELECT on your child table(s)
print "child: ", c
print "c.parent: ", c.parent # issues SELECT on your parent table, then prints
print "c.parent: ", c.parent # just prints (no SQL)
print "c.parent: ", c.parent # just prints (no SQL)
If your code behaves otherwise by default, please provide a code snippet.
In case you really just need to cache the result, see below (a very similar solution to another question you posted):
class MyChild(Base):
    __tablename__ = 'MyChild'
    id = Column(Integer, primary_key=True)
    parent = relation('Parent')
    # ... other mapped properties

    def __init__(self):
        self._parent_cached = None

    @property
    def parent_cached(self):
        if self._parent_cached is None:
            self._parent_cached = self.parent
        return self._parent_cached
But in order to have the result available when your object is detached from the session, you must call this property before detaching. (Also, it does not handle the situation when the parent is None. Do you always have a parent?)
The option with eager loading is simpler, and once you load the object you should have the relation loaded already (the key is lazy=False):
class MyChild(Base):
    __tablename__ = 'MyChild'
    id = Column(Integer, primary_key=True)
    parent = relation('Parent', lazy=False)
    # ... other mapped properties
...
