We are trying to optimize our views. On a page where 40 pictures are loaded with the following code:
= image_tag(product.pictures.first.data.url(:gallery))
we have a load time of 840 ms. If we change it to the following code:
= image_tag("http://bucketname.s3.amazonaws.com/products/#{product.pictures.first.id}/gallery.jpg?1325844462"
we get a load time of 220 ms.
This suggests the s3_path_url interpolation is very slow. Is anybody else experiencing the same problem? For the moment I have created a helper that generates my URLs:
def picture_url(picture, style)
  "http://bucketname.s3.amazonaws.com/products/#{picture.id}/#{style}.jpg"
end
The only problem I have here is that the cache-busting key is missing, and so is the real file extension.
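For illustration, here is a sketch of a helper that carries both, assuming the Paperclip attachment is named data, so the model has the default data_file_name and data_updated_at columns (both are assumptions):

# Sketch only: data_file_name and data_updated_at are the Paperclip defaults
# for an attachment called "data"; adjust if your columns differ.
def picture_url(picture, style)
  ext = File.extname(picture.data_file_name)   # keep the real extension
  stamp = picture.data_updated_at.to_i         # cache-busting key, like Paperclip's ?timestamp
  "http://bucketname.s3.amazonaws.com/products/#{picture.id}/#{style}#{ext}?#{stamp}"
end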
Is there always just one image of each product shown on the gallery page?
What about a cache column in your database? Whenever you create or update an image, you could save this image URL as gallery_picture_url in your database and call it directly like
= image_tag(product.gallery_picture_url)
class Product < ActiveRecord::Base
after_commit: :update_gallery_picture_url
def update_gallery_picture_url
self.update(gallery_picture_url: self.pictures.first.data.url(:gallery)) if self.gallery_picture_present?
end
def gallery_picture_present?
(self.pictures.first.data.url(:gallery) rescue false).present?
end
end
We are creating a scheduling app and attempting to create a copy/paste function for schedules from week to week. I am trying to figure out how to duplicate a schedule for a certain period of time, while updating the attributes upon paste. Right now I can copy the schedule, but when running it in Postman, the dates and times stay exactly the same (as we would expect with .dup). I believe it would be best to set the start/end times to nil and then, upon paste, have the attributes get updated at that time?
Here is the function I have so far:
def copy
  set_calendar
  if params["start_date"] && params["end_date"]
    start_date = params["start_date"].to_date
    end_date = params["end_date"].to_date
    if @calendar.users.owners.include?(current_user) || @calendar.users.managers.include?(current_user)
      @past_shifts = Shift.where(calendar_id: @calendar.id, start_time: start_date.beginning_of_day..end_date.end_of_day).to_a
      if @past_shifts.present? # condition assumed; the original check was left blank
        @past_shifts.each do |past_shift|
          shift = past_shift.dup
          shift.users = past_shift.users
          shift.update(shift_params)
          shift.save
        end
        render json: "copied", status: :ok
      else
        render json: @usershift.errors, status: :unprocessable_entity
      end
    else
      render json: "You do not have access to copy shifts", status: :unauthorized
    end
  end
end
The shift.update(shift_params) is the part that needs to update the start and end times. Here are the shift params:
def shift_params
  params.permit(:start_time, :end_time, :calendar_id, :capacity, :published)
end
As far as relationships go, this method lives in the shifts controller. Shift has many users through usershifts, User has many shifts through usershifts, and the Usershift model belongs to both.
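For reference, the associations described above would look roughly like this (class names are assumed from the question's wording):

class Shift < ActiveRecord::Base
  has_many :usershifts
  has_many :users, through: :usershifts
end

class User < ActiveRecord::Base
  has_many :usershifts
  has_many :shifts, through: :usershifts
end

class Usershift < ActiveRecord::Base
  belongs_to :shift
  belongs_to :user
end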
Just curious - are you sure the params for your copy method contain values for start_time and end_time? If so, why not use them directly:
shift.start_time = params['start_time']
shift.end_time = params['end_time']
If you use shift_params you will also update the other three attributes: :calendar_id, :capacity, and :published. I'm not sure that is necessary in this case.
Using the shift.update method here is not reasonable: it works with an existing record and saves the updated attributes to the database immediately. In your case the record is new, and you save all the changes by calling shift.save later.
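Putting that together, the loop could look like this (a sketch only; it assumes params really do carry the pasted start_time and end_time):

@past_shifts.each do |past_shift|
  shift = past_shift.dup                    # copies attributes, not associations
  shift.users = past_shift.users            # carry the assigned users over explicitly
  shift.start_time = params['start_time']   # assign the pasted times directly
  shift.end_time = params['end_time']
  shift.save                                # a single save; no update on an unsaved record
end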
I have been struggling with this for hours: I just can't figure out a proper way to cache an object queryset result (object = queryset.get()) in order to avoid re-hitting the database on each view request.
This is my current (simplified) code. As you can see, I override get_object() to add some extra data (not only the today variable), check whether the object is in the session, and add the object to the session.
views.py
from datetime import datetime

from django.core.cache import cache
from django.core.cache.utils import make_template_fragment_key
from django.http import Http404
from django.views.generic import DetailView

from myapp import MyModel


class myClassView(DetailView):
    model = MyModel

    def get_object(self, queryset=None):
        if queryset is None:
            queryset = self.get_queryset()
        pk = self.kwargs.get(self.pk_url_kwarg, None)
        if pk is not None:
            queryset = queryset.filter(pk=pk)
        else:
            raise AttributeError("My error message.")
        try:
            today = datetime.today().strftime('%Y%m%d')
            cache_key = make_template_fragment_key('some_name', [pk, today])
            if cache.has_key(cache_key):
                object = self.request.session[cache_key]
                return object
            else:
                object = queryset.get()
                object.id = my_id        # extra data (my_id is set elsewhere in the real code)
                object.today = today
                # Add object to session
                self.request.session[cache_key] = object
        except queryset.model.DoesNotExist:
            raise Http404("Error 404")
        return object
The above only works if I add the following:
settings.py
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.PickleSerializer'
But I don't like this hack, since it is not secure for Django 1.6 and newer versions because, according to How To Use Sessions (the Django 1.7 documentation):
If the SECRET_KEY is not kept secret and you are using the PickleSerializer, this can lead to arbitrary remote code execution
If I don't add the SESSION_SERIALIZER line I get a "django object is not JSON serializable" error. However, my code then breaks elsewhere and I get KeyError errors when trying to pull data from the session. That issue is solved by converting my string keys into integers; before changing the settings file, Django was converting the string keys into integers automatically when the session data was requested.
So, considering this session serializer security issue, I'd prefer another option. I read here and here about caching get_object(), but I just don't get how to fit that into my get_object() bit. I tried...
if cache.has_key(cache_key):
    self._object = super(myClassView, self).get_object(queryset=None)
    return self._object
...but it fails. This seems the best solution so far, but how do I implement it in my code? Or is there a better idea? I'm all ears. Thanks!
You should step back and reassess the situation. What are you trying to achieve?
get_object is the method that gets called in a detail view to fetch one specific object from the database.
The first time you call it, the queryset is evaluated and the result is cached on the queryset, but only for the lifetime of that request.
In order to cache the get_object() result across requests you need a good cache backend like Redis or Memcached in place, so that you can do a simple read-through cache operation:
if cache.has_key(cache_key):
    object = cache.get(cache_key)
    return object
else:
    object = queryset.get(pk=pk)
    cache.set(cache_key, object)
    return object
Note that Django objects are serialized when stored in the cache backend and retrieved as objects when deserialized.
That approach is just a starting point: you cache the object the first time there is a miss.
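For example, applied to the view from the question, the read path could look roughly like this (a sketch only; the key format and the one-day timeout are assumptions, not part of the original code):

# Cache-aside inside get_object(): assumes a configured cache backend and
# the same pk + date based key idea as in the question.
from datetime import datetime

from django.core.cache import cache
from django.views.generic import DetailView

from myapp import MyModel


class myClassView(DetailView):
    model = MyModel

    def get_object(self, queryset=None):
        pk = self.kwargs.get(self.pk_url_kwarg)
        today = datetime.today().strftime('%Y%m%d')
        cache_key = 'mymodel:%s:%s' % (pk, today)        # assumed key format
        obj = cache.get(cache_key)
        if obj is None:                                  # cache miss: hit the database once
            obj = super(myClassView, self).get_object(queryset)
            obj.today = today
            cache.set(cache_key, obj, 60 * 60 * 24)      # keep it for a day
        return obj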
You can also add post_save/post_delete signal handlers to keep the cache in sync every time the model is saved or deleted:
@receiver(post_save, sender=MyModel)
def add_MyModel_to_cache(sender, **kwargs):
    object = kwargs['instance']
    cache.set(cache_key, object)   # cache_key built the same way as in the view

@receiver(post_delete, sender=MyModel)
def remove_MyModel_from_cache(sender, **kwargs):
    cache.delete(cache_key)        # drop the stale entry instead of re-caching a deleted object
You have to carefully review what you want to cache and when, as it is very easy to misjudge your requests.
I just have a simple index page which shows all the items of an ActiveRecord model.
What I'd like is for the table containing the items to be automatically refreshed every X seconds (i.e. reloaded from the DB and re-rendered).
I already redefined the index action so that it renders a partial:
[app/admin/item.rb]
ActiveAdmin.register Item do
  index do
    render :partial => "items_list"
  end
end
And then I have
[app/views/admin/items/_items_list.html.erb]
(I don't mind using ERB or ARB to write the partial)
The list table is rendered correctly when I first load the page.
I'm not sure which JavaScript I should include in the page to refresh the list every X seconds. More specifically, which URL should be called by the JavaScript code?
Do I need to define any custom action in the controller?
Thank you for any advice.
Thomas
I finally managed it.
[app/admin/item.rb]
ActiveAdmin.register Item do
  ...
  index do
    # do nothing; the table will be filled with a partial via JavaScript
  end

  collection_action :items_list do
    @items = Item.all
    render :partial => "items_list"
  end
end
[app/views/admin/_items_list.html.arb]
table_for items do
  column "attr_1"
  column "attr_2"
  column "attr_3"
end
[app/assets/javascripts/items.js]
$(document).ready(function() {
  setInterval(function() {
    $('#index_table_items').load('items/items_list');
  }, 1000);
});
Finally, append the following to app/assets/javascripts/active_admin.js:
//= require items
This way I get the list table updated every second. Also the CSS doesn't get distorted.
I have a question regarding the execution order in a Rails method. Here is what I do in my model:
def update_fields
  FillMethods.fill_info("BadgeUser", "Badge", self.id, self.badge_id, "badge_")
  self.reload
  self.update_attributes(:completed => true) if self.badge_goal == 0
end
I have an object; I apply FillMethods.fill_info to it to fill some fields on the object and save it. Then I want to check whether badge_goal == 0 to mark it as completed.
The issue I have is that if I don't call self.reload, self is not updated: the changes are in the database, but self is the same as it was before FillMethods ran. If I do call self.reload, self is correct and gets marked as completed properly.
My question is: will Ruby wait for FillMethods to finish before reloading self, or will it reload self before FillMethods is done?
Why is self not correct if I don't call self.reload?
The FillMethods is a module in my lib directory.
Thanks for your help,
That's my fill method:
module FillMethods
  def self.fill_info(model1, model2, id1, id2, string)
    parameters1 = model1.constantize.attr_accessible[:default].to_a.map { |s| s.dup }
    parameters1.delete_if { |s| !s.start_with?(string) }
    parameters2 = parameters1.map { |s| s.split(string)[1] }
    h = Hash[parameters1.zip parameters2]
    object1 = model1.constantize.find(id1)
    object2 = model2.constantize.find(id2)
    h.each do |parameter1, parameter2|
      object1.update_attribute(parameter1.to_sym, object2.send(parameter2))
    end
    return object1
  end
end
The goal of the method is to fill the BadgeUser table with all the badge info.
For each column in my Badge table (like name) I have a corresponding badge_name column in my BadgeUser table.
Thanks,
I cannot be sure without seeing the code, but judging from the parameters you pass, I guess that FillMethods.fill_info retrieves the record from the db again, using the third parameter (the id). It then changes the record and stores it back.
Your object (self) has no way, under ActiveRecord or similar, to know that the db was modified somewhere somehow.
Note that you are retrieving the same record from the db some three times instead of once.
If you change FillMethods.fill_info to instead accept a record (self itself), and modify it, then self would be in the new state.
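For example, a version of fill_info that takes the records themselves could look like this (a sketch only; it keeps the attr_accessible/prefix logic from the question and assumes BadgeUser belongs_to :badge):

module FillMethods
  # Copies every "#{prefix}xxx" attribute of target from the matching "xxx" on source,
  # mutating target in place so the caller's self is already up to date.
  def self.fill_info(target, source, prefix)
    names = target.class.attr_accessible[:default].to_a.select { |s| s.start_with?(prefix) }
    names.each { |name| target.send("#{name}=", source.send(name.sub(prefix, ""))) }
    target.save
    target
  end
end

# In the model, no self.reload is needed afterwards:
# FillMethods.fill_info(self, badge, "badge_")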
Addendum
Ruby code is executed sequentially in a single thread unless you explicitly start a new thread, so yes, .fill_info is executed before continuing with the rest of update_fields.
How can I interact with objects I've created based on their given attributes in Ruby?
To give some context, I'm parsing a text file that might have several hundred entries like the following:
ASIN: B00137RNIQ
-------------------------Status Info-------------------------
Upload created: 2010-04-09 09:33:45
Upload state: Imported
Upload state id: 3
I can parse the above with regular expressions and use the data to create new objects in a "Product" class:
class Product
  attr_reader :asin, :creation_date, :upload_state, :upload_state_id

  def initialize(asin, creation_date, upload_state, upload_state_id)
    @asin = asin
    @creation_date = creation_date
    @upload_state = upload_state
    @upload_state_id = upload_state_id
  end
end
After parsing, the raw text from above will be stored in an object that looks like this:
[#<Product:0x00000101006ef8 #asin="B00137RNIQ", #creation_date="2010-04-09 09:33:45 ", #upload_state="Imported ", #upload_state_id="3">]
How can I then interact with the newly created class objects? For example, how might I pull all the creation dates for objects with an upload_state_id of 3? I get the feeling I'm going to have to write class methods, but I'm a bit stuck on where to start.
You would need to store the Product objects in a collection. I'll use an array:
product_collection = []
# keep adding parsed products into the collection, as many as there are
product_collection << parsed_product_obj

# next, select the subset where upload_state_id is 3
# (to_i because the parsed attribute values are strings like "3")
state_3_products = product_collection.select { |product| product.upload_state_id.to_i == 3 }
attr_reader is a declarative way of defining properties/attributes on your Product class, so you can access each value as obj.attribute, as I have done for upload_state_id above.
select returns the elements of the target collection that meet a specific criterion. Each element is assigned to product, and if the block evaluates to true the element is placed in the output collection.
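To pull just the creation dates the question asks about, you can map over that filtered collection:

creation_dates = state_3_products.map(&:creation_date)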