How to find Dependent Rowset in ZF2 - validation

I have two tables (just an example): Cars and Colors.
Each Car has a Color, so I can't delete the color red if there is a car that is red. Easy.
With ZF1 I could easily verify this dependency before deleting a color, using the findDependentRowset() method.
But how can I do this in ZF2?
Is it bad practice to just let the delete fail, then catch the exception and print a message?
Thanks!

There is no direct implementation of findDependentRowset() in ZF2 anymore. ZF2 took a step back from providing a full ORM and instead simply provides functionality for easier query management.
And exactly that would be your approach: you'd either do two queries (query Cars for rows that still reference the Color, then decide whether the Color may be deleted), or do a single query that joins both tables at once. The latter is the faster approach; the former is pretty much what findDependentRowset() did.
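A minimal sketch of the two-query variant with Zend\Db, assuming a cars table with a color_id foreign key and an existing $adapter instance (the names are my assumptions, not from the question):

use Zend\Db\TableGateway\TableGateway;

// First query: is any car still referencing this color?
$cars = new TableGateway('cars', $adapter);
$dependents = $cars->select(array('color_id' => $colorId));

if (count($dependents) > 0) {
    // Refuse the delete and report it instead of letting the query fail
    throw new \DomainException('Cannot delete a color that is still in use.');
}

// Second query: the color is unreferenced, so it is safe to delete
$colors = new TableGateway('colors', $adapter);
$colors->delete(array('id' => $colorId));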
If you want more 'magic' functionality, you'd be best advised to check out one of the many good ORMs out there. Doctrine 2, for example, already has a pretty neat ZF2 integration and appears to be the community standard as far as ZF2 is concerned. You may want to check out https://github.com/doctrine/DoctrineORMModule
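For comparison, a hedged sketch of the same dependency check with Doctrine 2 (a hypothetical Color entity with a mapped one-to-many cars association; none of these names come from the question):

// $entityManager is a configured Doctrine\ORM\EntityManager
$color = $entityManager->find('Application\Entity\Color', $colorId);

if (!$color->getCars()->isEmpty()) {
    // The association is still populated, so refuse the delete
    throw new \DomainException('Cannot delete a color that is still in use.');
}

$entityManager->remove($color);
$entityManager->flush();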

Related

How do I access h2o xgb model input features after saving a model to disk and reloading it?

I'm using h2o's xgboost implementation in Python. I've saved a model to disk and I'm trying to load it later for analysis and prediction. I want to access the list of input features, or, even better, the list of features the model actually used (excluding the ones it decided not to use). The usual advice is to use the varimp function to get the variable importances, and while that does drop features that aren't used in the model, it gives you the importance of the intermediate features created by one-hot encoding the categorical columns, not the original categorical feature names.
I've searched for how to do this and so far I've found the following, but no concrete way to do it:
Someone asking something very similar and being told the feature had been requested in Jira.
Said Jira ticket, which has been marked resolved, but which as far as I can tell says this was implemented internally and is not customer-visible.
A similar ticket requesting this feature (original categorical feature importance) for variable importance heatmaps, which is still open.
Someone else who found an unofficial way to access the columns via model._model_json['output']['names'], but that doesn't exclude the features the model didn't use, and they were told to use a different method that doesn't work if you have saved the model to disk and reloaded it (which I am doing).
The only option I see is to take the varimp feature names, split them on the period character to undo the OHE naming, keep the first part of each split, and then apply a set over everything to get the unique original column names. But I'm hoping there's a better way to do this.

How can I query kendo trees and MVVM models?

According to the docs I should be able to do this ...
$("#tree").data("kendoTreeView").expand(".k-item");
Great if I want to expand everything, but what if I only want to expand nodes where the property "expanded" on my model items is set to true?
Is there a way I can query the tree based on something in the model and then perform an action on all results?
The real answer here is quite long; the short version is that, as with everything Kendo, you spend hours with support only to be given half the solution and told to write the rest yourself.
I got around this problem by using another library (jslinq) to query the model data.
This is yet another frustrating issue with Kendo that really should, at the very least, be offered at some basic level as a core part of the hierarchy data source (essentially an incomplete implementation).

Joomla 2.5 component

Is it possible to make a Joomla 2.5 component without using the TableHelloWorld class and all the field-type stuff shown here, or is it compulsory?
http://docs.joomla.org/Developing_a_Model-View-Controller_Component/2.5/Using_the_database
The system will actually function fairly well without it. All you really need to get something running is a base file named after your component, a controller.php file, and the view, as outlined in this section: http://docs.joomla.org/Developing_a_Model-View-Controller_Component/2.5/Adding_a_view_to_the_site_part
From that you will get something that runs and loads. And if you choose, you can just make raw SQL queries to the database.
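For example, a minimal sketch of such a raw query in Joomla 2.5 (the #__helloworld table and greeting column follow the linked tutorial's example schema; adjust them to your own):

$db = JFactory::getDbo();
$query = $db->getQuery(true);
$query->select('*')->from('#__helloworld');
$db->setQuery($query);
$rows = $db->loadObjectList(); // array of row objects

foreach ($rows as $row)
{
    echo $row->greeting;
}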
That being said, the framework is there to help you, not to hinder you. I've cut a lot of corners over the years, and you almost always end up regretting it later. Feel free to play around with skipping pieces, but remember that there are pieces out there that can help you with all kinds of important things you may not think you need right now. (Input binding, table row hierarchies, and check-in/check-out functionality are just a few that come to mind that I'm glad I didn't have to write myself.)

Solution for Magento mass actions and large number of records problem?

Currently Magento has a problem with the way it handles mass actions. It returns a bit of JS that contains EVERY db id for the current collection and filter, regardless of pagination. This is to support the 'Select All' vs. 'Select All Visible' option in the grid header. This isn't such a problem when you have a smaller number of records, but if you have 850k records (orders in this case) it becomes a serious problem.
My question is, does anyone have an elegant solution to this problem?
I can think of several solutions, each with its own drawbacks, but I'm hoping someone has solved this in a simple manner that works as an add-on module. Paid or Open-Source solutions are both welcome suggestions.
Clarification:
I'm looking for an elegant, drop-in solution to the problem of using the grid widget with 850k+ records in Magento. The stock Magento code makes the bone-headed decision to return the id of every record matched by the current filter, even if they are not being displayed. This is not about offline processing of records; it's about using the grid widget for daily admin tasks.
One possible solution would be to store the results of the filtered search in a temp table and return a reference to that search result. 'Select All' could then pass that reference to a callback for the action instead of the actual ids, which would preserve the current behavior.
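To illustrate, a very rough sketch of that idea (the massaction_selection table and every name here are hypothetical, not existing Magento code): the grid would persist the filtered ids once under a token, and the mass action controller would resolve the token back to ids server-side.

$write = Mage::getSingleton('core/resource')->getConnection('core_write');
$token = uniqid('massaction_', true);

// Store the ids matched by the current filter under a short-lived token
foreach ($collection->getAllIds() as $id) {
    $write->insert('massaction_selection', array(
        'token'     => $token,
        'record_id' => $id,
    ));
}

// The grid JS then only has to carry $token instead of 850k ids.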
So, to ask again, does anyone have a good solution to this already created?
I run heavy operations from within a shell script. I have a generic iterator (products in my case, but the same can be done for anything else), and I only implement a class that performs the action on each product. My product_iterator shell script takes care of looping over the products, processing only x products at a time (to avoid memory leaks).
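A minimal sketch of that batching idea, assuming a Magento 1 shell script based on shell/abstract.php (the class name, page size, and the per-product work are mine):

require_once 'abstract.php';

class My_Product_Iterator extends Mage_Shell_Abstract
{
    public function run()
    {
        $page = 1;
        do {
            // Load only one page of products at a time
            $collection = Mage::getModel('catalog/product')->getCollection()
                ->setPageSize(500)
                ->setCurPage($page);

            foreach ($collection as $product) {
                // perform the actual action on a single product here
            }

            $collection->clear(); // free the loaded items before the next batch
            $page++;
        } while ($page <= $collection->getLastPageNumber());
    }
}

$shell = new My_Product_Iterator();
$shell->run();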
Well, the least invasive solution to the problem is to turn off the 'Select All' option for grids with large numbers of records. This is easily accomplished by extending the grid class and adding the following code:
protected function _prepareMassaction()
{
    // Turn off 'Select All' so the grid no longer serializes every matching
    // id into the page; 'Select All Visible' keeps working as before.
    $this->getMassactionBlock()->setUseSelectAll(false);
    return parent::_prepareMassaction();
}
I think that fbmc has the right general approach. Clearly, even if you did find a way to send back all 850k records, the framework would have a heart attack attempting to deal with them all. Run the logic in a shell script if this is what you need. For some jobs, you may even need to run them in batches or move down to actual SQL logic.
Unfortunately, some parts of the framework were not made to handle this kind of scale. You've found one of them.
Hope that helps!
Thanks,
Joe
Recently I wrote an article about adding a new mass action to the admin grid in Magento.
Hopefully you will like it:
http://www.blog.magepsycho.com/adding-new-mass-action-to-admin-grid-in-magento/
It describes two ways of adding a mass action:
1> Extending the grid's _prepareMassaction() method
2> Using the event core_block_abstract_prepare_layout_before (the more upgrade-proof way; a sketch follows below)
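For the second way, a rough sketch of what such an observer might look like (the class, method, and action names are mine; the event itself and addItem() on the massaction block are stock Magento 1 APIs). The observer would be registered in a custom module's config.xml under the adminhtml area:

class My_Module_Model_Observer
{
    public function addOrderGridMassAction(Varien_Event_Observer $observer)
    {
        $block = $observer->getEvent()->getBlock();

        // Only touch the massaction block of the sales order grid
        if ($block instanceof Mage_Adminhtml_Block_Widget_Grid_Massaction
            && $block->getRequest()->getControllerName() == 'sales_order'
        ) {
            $block->addItem('my_action', array(
                'label' => 'My Custom Action',
                'url'   => $block->getUrl('*/sales_order/myMassAction'),
            ));
        }
    }
}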
Thanks
Regards

LINQ Conflict Detection: Setting UpdateCheck attribute

I've been reading up on LINQ lately in order to start implementing it, and there's one thing about the way it generates UPDATE queries that bothers me.
When the entity code is created automatically using SQLMetal or the Object Relational Designer, apparently all fields of all tables get the attribute UpdateCheck.Always, which means that for every UPDATE and DELETE query I'll get a SQL statement like this:
UPDATE table SET a = 'a' WHERE a='x' AND b='x' ... AND z='x', ad infinitum
Now, call me a purist, but this seems EXTREMELY inefficient to me, and it feels like a bad idea anyway, even if it weren't inefficient. I know the fetch will be done by the clustered primary key, so that's not slow, but SQL still needs to check every field after that to make sure it matches.
Granted, in some very sensitive applications something like this can be useful, but for the typical web app (think Stack Overflow), it seems like UpdateCheck.WhenChanged would be a more appropriate default, and I'd personally prefer UpdateCheck.Never, since LINQ will only update the actual fields that changed, not all fields, and in most real cases, the second person editing something wins anyway.
It does mean that if two people manage to edit the same field of the same row in the small window between reading the row and firing the UPDATE, the conflict won't be detected. But in reality that's a very rare case. And the one scenario we might actually want to guard against, two people changing the same thing, won't be caught by this anyway, because they won't click Submit at the exact same time, so there will be no conflict by the time the second DataContext reads and updates the record (unless the DataContext is kept open and stored in Session while the page is shown, or some other seriously bad idea like that).
However, rare as the case is, I'd really rather not get exceptions in my code every now and then when it happens.
So my first question is, am I wrong in believing this? (again, for "typical" web apps, not for banking applications)
Am I missing some reason why having UpdateCheck.Always as default is a sane idea?
My second question is, can I change this in a civilized way? Is there a way to tell SQLMetal or the ORD which UpdateCheck attribute to set?
I'm trying to avoid a situation where I have to remember to run some homemade tool that regex-edits all the attributes in the generated file, because it's evident that at some point we'll re-run SQLMetal after a DB update, forget to run the tool, and all our code will break in very subtle ways that we probably won't catch while testing in dev.
Any suggestions?
War stories are more than welcome; I'd love to learn from other people's experiences with this.
Thank you very much!
Well, to answer the first question - I agree with you. I'm not a big fan of this "built in" optimistic concurrency, especially if you have timestamp columns or any fields which are not guaranteed to be the same after an update occurs.
To address the second question - I don't know of any way to override SqlMetal's default approach (UpdateCheck = Always), so we ended up writing a tool which sets UpdateCheck = Never for the appropriate columns. We use a batch file to call SqlMetal and afterwards run the tool.
Oh, and while I think of it - it was also a treat to find that SqlMetal models relationships to set the foreign key to null instead of using "Delete On Null" (for join tables in particular). We had to use the same post-generation tool to set these appropriately too.
