Odoo 10 - Duplicate supplier info for a given product_template

I have:
a) a given product_template_id (e.g. id 100) and
b) a duplicate product_template_id (e.g. id 200) created with the copy() method.
The copy() method copies only the product.template record, so the suppliers for that specific product are not copied.
I would like to duplicate all suppliers for that template, but I am wondering what the right way to do it in Odoo is.
If I understand the model correctly, supplier prices for a given product are stored in the product_supplierinfo table, where each record pointing to a given product_tmpl_id specifies a supplier price/qty for that product_template.
What is the Odoo way to search for all records pointing to a given product_tmpl_id (e.g. 100) and duplicate them with product_tmpl_id changed to the new one (e.g. 200)?

Excerpt from the ORM Documentation:
copy (bool) -- whether the field value should be copied when the record is duplicated (default: True for normal fields, False for One2many and computed fields, including property fields and related fields)
The field you're referring to is seller_ids, whose field definition is below:
seller_ids = fields.One2many('product.supplierinfo', 'product_tmpl_id', 'Vendors')
The copy attribute is not explicitly defined, so it is False by default (as explained in the documentation above). If you want this field to copy along with the other values during the standard product "Duplicate" (copy method), you can do this:
from odoo import fields, models

class ProductTemplate(models.Model):
    _inherit = 'product.template'

    # This only changes the copy attribute of the existing seller_ids field.
    # All other attributes (string, comodel_name, etc.) remain as they are defined in core.
    seller_ids = fields.One2many(copy=True)
Alternatively
If you want to only have the field copied sometimes, you can extend the copy method to look for a specific context value and only copy based on that.
# This may take some tweaking, but here's the general idea
@api.multi
def copy(self, default=None):
    default = dict(default or {})
    # Pop the flag so it is not passed on to create() as a field value.
    copy_sellers = default.pop('copy_sellers', False)
    new_product = super(YourClass, self).copy(default)  # YourClass = your inheriting model, e.g. ProductTemplate
    if copy_sellers:
        # copy() works on one record at a time, so duplicate each supplierinfo line
        for seller in self.seller_ids:
            seller.copy({'product_tmpl_id': new_product.id})
    return new_product

# Whatever calls the copy method will need to include copy_sellers in the default dict
product.copy({'copy_sellers': True})
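If you would rather do exactly what the question describes (find the existing product.supplierinfo records and duplicate them for the new template) without overriding copy(), a minimal sketch along these lines should work from any server-side method that has access to self.env (or env in an Odoo shell); the ids 100 and 200 stand in for the original and duplicated templates:

old_tmpl_id, new_tmpl_id = 100, 200
suppliers = self.env['product.supplierinfo'].search(
    [('product_tmpl_id', '=', old_tmpl_id)])
for supplier in suppliers:
    # copy each supplier line, re-pointing it at the duplicated template
    supplier.copy({'product_tmpl_id': new_tmpl_id})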

Related

SOLVED: Looking for a smarter way to sync and order entries in Laravel/Eloquent pivot table

In my Laravel 5.1 app, I have classes Page (models a webpage) and Media (models an image). A Page contains a collection of Media objects and this relationship is maintained in a "media_page" pivot table. The pivot table has columns for page_id, media_id and sort_order.
A utility form on the site allows an Admin to manually associate one or more Media items to a Page and specify the order in which the Media items render in the view. When the form submits, the Controller receives a sorted list of media ids. The association is saved in the Controller store() and update() methods as follows:
[STORE] $page->media()->attach($mediaIds);
[UPDATE] $page->media()->sync($mediaIds);
This works fine but doesn't allow me to save the sort_order specified in the mediaIds request param. As such, Media items are always returned to the view in the order in which they appear in the database, regardless of how the Admin manually ordered them. I know how to attach extra data for the pivot table when saving a single record, but don't know how to do this (or if it's even possible) when passing an array to attach() or sync(), as shown above.
The only ways I can see to do it are:
loop over the array, calling attach() once for each entry and passing along the current counter index as sort_order.
first detach() all associations and then pass the mediaIds array to attach() or sync(); a side benefit would be that, if rows are re-inserted in the submitted order, it could eliminate the need for a sort_order column at all.
I'm hoping there is an easier solution that requires fewer trips to the database. Or am I just overthinking it and, in reality, doing the loop myself is really no different than letting Laravel do it further down the line when it receives the array?
[SOLUTION] I got it working by reshaping the array as follows. It explodes the comma-delimited 'mediaIds' request param and loops over the resulting array, assigning each media id as the key in the $mediaIds array, setting the sort_order value equal to the key's position within the array.
$rawMediaIds = explode(',', request('mediaIds'));
$mediaIds = [];
foreach ($rawMediaIds as $mediaId) {
    $mediaIds[$mediaId] = ['sort_order' => array_search($mediaId, $rawMediaIds)];
}
And then sorted by sort_order when retrieving the Page's associated media:
public function media() {
    return $this->belongsToMany(Media::class)->orderBy('sort_order', 'asc');
}
You can add data to the pivot table while attaching or syncing, like so:
$mediaIds = [
    1 => ['sort_order' => 'order_for_1'],  // 'order_for_1' stands in for the integer sort position of media id 1
    3 => ['sort_order' => 'order_for_3']
];
//[STORE]
$page->media()->attach($mediaIds);
//[UPDATE]
$page->media()->sync($mediaIds);

Django rest framework mongoengine update new field with default value

I'm using Django REST framework mongoengine. After creating a few documents, I want to add a new field with a default value. Is there a way to do that, or do I need to update the existing documents with a custom function?
Note: I want to fetch the data with a filter on the new field, but since the field does not exist on the old documents, I get empty results.
From what I understand, you are modifying a MongoEngine model (adding a field with a default value) after documents were already inserted, and you are running into issues when filtering your collection on that new field.
Basically you have the following confusing situation:
from mongoengine import *

conn = connect()
conn.test.test_person.insert_one({'age': 5})  # simulate an old document, created before the new field existed

class TestPerson(Document):
    name = StringField(default='John')  # the new field
    age = IntField()

person = TestPerson.objects().first()
assert person.name == "John"                          # the default is applied when the document is loaded...
assert TestPerson.objects(name='John').count() == 0   # ...but not when filtering
In fact, MongoEngine dynamically applies the default value when the field is missing from the underlying pymongo document, but it doesn't account for that when filtering.
The only reliable way to guarantee that filtering will work is to migrate your existing documents.
If it's only a matter of adding a field with a default value, you can do it with MongoEngine: TestPerson.objects().update(name='John')
If you made more substantial or complicated changes to your document structure, then the best option is to drop down to pymongo:
coll = TestPerson._get_collection()
# update_many with an $exists guard only touches documents that are missing the field
coll.update_many({'name': {'$exists': False}}, {'$set': {'name': 'John'}})
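After that migration, the filter from the snippet above behaves as expected (in the single-document example used here):

assert TestPerson.objects(name='John').count() == 1  # the old document is now matched by the filter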

Removing attributes from an activerecord got via .includes

I am having a really weird problem while attempting to do a very simple thing. I am doing an .includes on a model to get a row of data from the database. On the returned object I need to remove certain attributes conditionally. The final aim is to re-insert this row as a new record, based on the changes I make to the attributes using my conditions.
def myUpdate
  dbObj = Obj.includes(:name,
                       :addr1,
                       :addr2,
                       :state,
                       :description).find(params[:id])
  # dbObj.attributes().except('description')
  # dbObj.description = nil
  # dbObj.attributes().delete('description')
  # After setting more attributes, persist this object
end
I tried all possibilities that I could think of, but the attribute is just not getting removed. What am I missing? I am on Ruby on Rails 4.2.
includes is used to include associated tables in your query for join queries and eager loading, not for table attributes. You do not need to do anything special to access an object's attributes.
attributes returns a Hash instance containing the record's attributes as key-value pairs, and operating on it will change only the Hash instance itself, not the record.
There are several ways to update attributes. One of the easiest is using the built-in setter methods given to you by ActiveRecord. If you really want to change attributes using the Hash API, you can store the attributes hash in a variable, manipulate the hash, and pass it as an argument to update, which accepts an attributes hash as its argument.
Using setter methods
def myUpdate
  dbObj = Obj.find(params[:id])
  dbObj.description = 'new_description'
  dbObj.name = 'new_name'
  dbObj.save
end
Using update
def myUpdate
  dbObj = Obj.find(params[:id])
  attributes = dbObj.attributes        # update the object by manipulating the attributes hash (string keys)
  attributes.delete('description')     # this will NOT end up changing the attribute in the DB
  attributes['name'] = nil             # this will successfully set name to NULL in the DB
  dbObj.update(attributes)             # pass the manipulated hash to `update` to persist the changes
end
Deleting fields from the hash will have no effect on the persisted object; update only writes the fields present in the hash whose values have changed.

What is the best way to pre-filter user access for SQLAlchemy queries?

I have been looking at the sqlalchemy recipes on their wiki, but don't know which one is best to implement what I am trying to do.
Every row in my tables has a user_id associated with it. Right now, for every query, I first filter by the id of the currently logged-in user, then by the criteria I am interested in. My concern is that developers might forget to add this filter to a query (a huge security risk). Therefore, I would like to set a global filter based on the current user's admin rights to restrict what the logged-in user can see.
Appreciate your help. Thanks.
Below is a simplified, redefined query constructor that filters all model queries (including relations). You can pass it as the query_cls parameter to sessionmaker. The user ID doesn't need to be global, since the session is constructed at a point where it is already available.
# Note: this targets an older SQLAlchemy and relies on Query internals
# (_criterion, _only_mapper_zero, _class_to_mapper) that may have moved or
# changed in newer releases.
from sqlalchemy import util
from sqlalchemy.orm import Query
from sqlalchemy.orm.util import _class_to_mapper


class HackedQuery(Query):

    def get(self, ident):
        # Use the default implementation when there is no extra condition
        if not self._criterion:
            return Query.get(self, ident)
        # Copied from the Query implementation with some changes.
        if hasattr(ident, '__composite_values__'):
            ident = ident.__composite_values__()
        mapper = self._only_mapper_zero(
            "get() can only be used against a single mapped class.")
        key = mapper.identity_key_from_primary_key(ident)
        if ident is None:
            if key is not None:
                ident = key[1]
        else:
            ident = util.to_list(ident)
        if ident is not None:
            columns = list(mapper.primary_key)
            if len(columns) != len(ident):
                raise TypeError("Number of values doesn't match number "
                                "of columns in primary key")
            params = {}
            for column, value in zip(columns, ident):
                params[column.key] = value
            return self.filter_by(**params).first()


def QueryPublic(entities, session=None):
    # It's not directly related to the problem, but is useful too.
    query = HackedQuery(entities, session).with_polymorphic('*')
    # The version for several entities needs thorough testing, so we
    # don't use it yet.
    assert len(entities) == 1, entities
    cls = _class_to_mapper(entities[0]).class_
    public_condition = getattr(cls, 'public_condition', None)
    if public_condition is not None:
        query = query.filter(public_condition)
    return query
It works for single-model queries only, and there is a lot of work left to make it suitable for other cases. I'd like to see an elaborated version, since this is must-have functionality for most web applications. It uses a fixed condition stored in each model class (public_condition above), so you will have to adapt it to your needs.
Here is a very naive implementation that assumes the logged-in user is stored in a self.current_user attribute/property:
class YourBaseRequestHandler(object):

    @property
    def current_user(self):
        """The currently logged-in user."""
        pass

    def query(self, session, entities):
        """Use this method instead of :method:`Session.query()
        <sqlalchemy.orm.session.Session.query>`.
        """
        return session.query(entities).filter_by(user_id=self.current_user.id)
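On newer SQLAlchemy (1.4+), a leaner way to get a similar global filter is the do_orm_execute session event combined with with_loader_criteria, which injects a WHERE clause into every ORM SELECT the session emits. This is only a minimal sketch; the Document model and the get_current_user_id() helper are illustrative placeholders, not names from the question:

from sqlalchemy import Column, Integer, String, event
from sqlalchemy.orm import Session, declarative_base, with_loader_criteria

Base = declarative_base()

class Document(Base):
    __tablename__ = "document"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, nullable=False)  # owner of the row
    title = Column(String)

def get_current_user_id():
    """Placeholder: return the id of the currently logged-in user (e.g. from the request)."""
    raise NotImplementedError

@event.listens_for(Session, "do_orm_execute")
def _restrict_to_current_user(execute_state):
    # Add the ownership filter to every ORM SELECT unless the caller opts out
    # with execution_options={"include_all": True}.
    if (
        execute_state.is_select
        and not execute_state.is_column_load
        and not execute_state.is_relationship_load
        and not execute_state.execution_options.get("include_all", False)
    ):
        execute_state.statement = execute_state.statement.options(
            with_loader_criteria(
                Document,
                lambda cls: cls.user_id == get_current_user_id(),
                include_aliases=True,
            )
        )

Because the filter is applied at the session level, individual queries cannot silently forget it, which addresses the original security concern.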
I wrote an SQLAlchemy extension that I think does what you are describing: https://github.com/mwhite/multialchemy
It does this by proxying changes to the Query._from_obj and QueryContext._froms properties, which is where the tables to select from ultimately get set.

Interacting With Class Objects in Ruby

How can I interact with objects I've created based on their given attributes in Ruby?
To give some context, I'm parsing a text file that might have several hundred entries like the following:
ASIN: B00137RNIQ
-------------------------Status Info-------------------------
Upload created: 2010-04-09 09:33:45
Upload state: Imported
Upload state id: 3
I can parse the above with regular expressions and use the data to create new objects in a "Product" class:
class Product
  attr_reader :asin, :creation_date, :upload_state, :upload_state_id

  def initialize(asin, creation_date, upload_state, upload_state_id)
    @asin = asin
    @creation_date = creation_date
    @upload_state = upload_state
    @upload_state_id = upload_state_id
  end
end
After parsing, the raw text from above will be stored in an object that looks like this:
[#<Product:0x00000101006ef8 @asin="B00137RNIQ", @creation_date="2010-04-09 09:33:45 ", @upload_state="Imported ", @upload_state_id="3">]
How can I then interact with the newly created class objects? For example, how might I pull all the creation dates for objects with an upload_state_id of 3? I get the feeling I'm going to have to write class methods, but I'm a bit stuck on where to start.
You would need to store the Product objects in a collection. I'll use an array:
product_collection = []
# keep adding parsed products into the collection, as many as there are
product_collection << parsed_product_obj

# next, select the subset where upload_state_id is 3
# (as parsed above, the value is the string "3"; convert with .to_i at parse time if you prefer integers)
state_3_products = product_collection.select { |product| product.upload_state_id == "3" }
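# Pulling just the creation dates for those objects, as the question asks:
creation_dates = state_3_products.map(&:creation_date)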
attr_reader is a declarative way of defining read-only properties/attributes on your Product class, so you can access each value as obj.attribute, like I have done for upload_state_id above.
select returns the elements of the target collection that meet a specific criterion: each element is assigned to product in turn and, if the block evaluates to true, it is placed in the output collection.
