Here's what I'd like to do with Lift: I want to build a dynamic shopping cart, with lines able to be added and removed via AJAX calls. The total needs to be wired to the specific lines. Each line would include a number, the length of time for a lease, and a calculated price based on that, so I would have to add wired cells on each addable/removable line as well. So it would look something like this:
Number Length of Lease Price Remove?
(AJAX Textbox) (AJAX Dropdown Select) (Plain Updateable Text) (AJAX Checkbox)
(Another Row)...
+ Add
Total: ______
The problem I'm running into is that I can find resources for building a static page that displays all of this via Wiring. Using the Lift demo site, I can pull up code that lets me add new lines, but it doesn't seem conducive to removing them. (This in general is one of my frustrations with Lift at the moment: a "little extra detail" beyond what a tutorial covers ends up requiring me to completely change tack and spend hours more on work and research, and I want to figure out how I'm probably approaching these problems wrongly!) Alternatively, I can use CSS selectors to create content dynamically, but I don't know how to wire these pieces together effectively.
In addition, all of my attempts end up producing two to three times the amount of code I would have written to simply do some jQuery updates on the page, so I suspect I'm doing something wrong and overcomplicating everything.
What resources would people recommend to set me on the right path?
These are your best resources for learning Lift:
Simply Lift
Lift Cookbook
Lift in Action
For any specific questions, I highly recommend you join us at the Lift Community Google Group. It is the official support channel for Lift. Although a few of us occasionally help out here on Stack Overflow, the best Lift help can be found there.
I need to create a universal web scraper to parse articles on different websites. Of course, I know about XPath, but I want to try to make it universal for any website, regardless of the HTML markup of a page.
I need to determine whether there is an article on the page and, if there is, parse the text of the title, the body, and the tags (if they exist).
Frankly speaking, my knowledge of data science is not very deep, but I assume this task (determining whether the page is an article, and parsing only the needed parts) is possible to solve.
What tools should I use? Any help?
Actually, for the second task, I need to implement something similar to what Google Chrome on mobile does: when a page is not optimised for mobile, it proposes showing the page in an adaptive mode (just the title and the main content).
If you are using Python, some libraries to look at are:
scrapy, which scrapes data and can extract some of the results, and
BeautifulSoup, which is geared more towards the extraction part itself.
It is possible to request a specific version of a website (e.g. the one served to Chrome, Safari, mobile browsers, or old-school systems) by sending a custom header from your scraper.
Have a look at the relevant documentation, and you can get an idea of how to use headers in scrapy here.
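The idea is the same in any HTTP client, so here is a minimal illustration in PHP with curl (the URL and the User-Agent string are placeholders of my own; in scrapy you would set the same header in the request or in the project settings):

$ch = curl_init('https://example.com/some-article'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Pretend to be a mobile browser so the server serves its mobile markup
curl_setopt($ch, CURLOPT_USERAGENT,
    'Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X) '
    . 'AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148');
$html = curl_exec($ch);
curl_close($ch);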
I do not know of any more specialised tools. Your tasks are more analytical, and are not typically performed with models that estimate, e.g., what content is where on a webpage. This might be an interesting research direction, though: to see if you can create a model that generalises across many websites to extract the desired content.
That leads me on to my last point, which is to say that creating a single scraper that works for any website (containing your article type) is not usually possible. People create websites differently, however they see fit, which also means they change them however they see fit. This usually leads to a good scraper requiring constant updates as time (and developers) move on.
EDIT:
Then, if you have lots of labelled examples, it might be possible to train a model. The challenge might be the look-back range of the model. For example, a typical LSTM model is given a parameter that tells it how far to look back into the past; this is stored internally in its memory. In your case, you might be looking for the start and end HTML tags of an article, in order to extract just that part. Those tags could be thousands of words apart, which is something a standard LSTM might not be fit to retain and use.
If you could pose your problem a little differently, then other approaches might be plausible. E.g., you could make it a "question-answer" problem by saying: I have this HTML, where is the article content? If that sounds OK for your use case, have a look here for some model-based approaches.
Is it possible to make a Joomla 2.5 component without using the TableHelloWorld class and all that field-type stuff, like from here? Or is it compulsory?
http://docs.joomla.org/Developing_a_Model-View-Controller_Component/2.5/Using_the_database
The system will function fairly well without it, actually. All you need to get something running is a base file named after your component, a controller.php file, and the view, as outlined in this section: http://docs.joomla.org/Developing_a_Model-View-Controller_Component/2.5/Adding_a_view_to_the_site_part
From that you will get something that runs and loads, and if you choose, you can just make raw SQL queries to the database.
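For example, a minimal sketch of the raw-query route (the table name here is made up; #__ is Joomla's standard table-prefix placeholder):

$db = JFactory::getDbo();
$db->setQuery('SELECT id, title FROM #__mycomponent_items WHERE published = 1');
$items = (array) $db->loadObjectList(); // cast guards against a null result

foreach ($items as $item) {
    echo htmlspecialchars($item->title);
}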
That being said, the framework is there to help you, not to hinder you. I've cut a lot of corners over the years, and almost always you end up regretting it later. Feel free to play around with skipping the pieces, but just remember that there are pieces out there that can help you with all kinds of important things that you may not think you need right now. (Binding input, table row hierarchies, and check-in/check-out functionality are just a few that come to mind that I'm glad I didn't have to make myself.)
I am creating an application with a lot of links. Because the links are contained in table cells, the URLs that Wicket generates tend to get long, which makes the page slower to load.
For example:
2011-06-09 00:00:00.0
I have tried to figure out where to start exploring the encoding/decoding of URLs, but it is rather complex material. My first approach was to just use 'short' names for components (like "t", "f", etc.). I can imagine there is a better approach.
I imagine it would also be possible just to 'number' the links, since the page still exists; I would then end up with something like this:
2011-06-09 00:00:00.0
Are there solutions for my problem already out there, or can anyone point me in the right direction?
If a JavaScript solution is acceptable, you can use a single event listener on the whole table instead of many links in the table.
See this example for inspiration:
https://github.com/svenmeier/apachecon-wicket/tree/master/src/main/java/eu/apachecon/base/ui/performance
Notice how the Ajax behavior transports dynamic extra parameters to the server. It looks for rows only, though; if you need to distinguish between table cells being clicked, you'll have to expand on the idea.
The solution suggested by Sven is the better one.
Here is a solution which you may call fundamental: register your own root IRequestMapper that compresses/uncompresses the URLs generated by the real mappers. See CryptoMapper and HttpsMapper for examples of custom root mappers.
I need two paginations on one page. Is it possible to do this with CodeIgniter? Of course, they must operate independently of one another.
Yes and no. If you want two different pagination visuals (customized renderings of the library), then sure. The problem you'll run into is that, by default, the pagination library pulls the current page out of your $ci->uri->segments() list automatically to determine which page to mark as "active".
I do not know of a way to explicitly override this. Perhaps if you made a MY_Pagination that took an additional $config value for the current page, you could get it to behave like that. I haven't looked at the library's code in a while, so you'd have to do some digging.
Honestly though, I'd suggest you build your own; it's not incredibly hard to do the simple math that determines which page numbers to link.
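To give an idea of how little is involved, here is a rough sketch (all names are illustrative, not CodeIgniter APIs); keeping each pagination's current page in its own query-string parameter is what lets the two operate independently:

function paginate($totalRows, $perPage, $currentPage)
{
    $totalPages  = max(1, (int) ceil($totalRows / $perPage));
    $currentPage = min(max(1, (int) $currentPage), $totalPages); // clamp into range
    $offset      = ($currentPage - 1) * $perPage; // for your LIMIT/OFFSET query

    return array('pages' => $totalPages, 'current' => $currentPage, 'offset' => $offset);
}

// Each pagination reads its page from its own parameter, so they never clash
$articlesPage = isset($_GET['articles_page']) ? $_GET['articles_page'] : 1;
$commentsPage = isset($_GET['comments_page']) ? $_GET['comments_page'] : 1;

$articles = paginate(125, 10, $articlesPage);
$comments = paginate(342, 20, $commentsPage);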
Also, you'll run into issues with CI's pagination library if you want the "current page" part to NOT be the last segment in your URL. This may have been fixed lately, but last time I looked it was what stopped me from using the library altogether.
Bottom line: invest the time in making your own if you want more than its basic functionality. It's simple enough; just make yours reusable if you can.
Currently Magento has a problem with the way it handles mass actions. It returns a bit of JS that contains EVERY db id for the current collection and filter, regardless of pagination. This is to support the 'Select All' vs. 'Select All Visible' option in the grid header. This isn't such a problem when you have a smaller number of records, but if you have 850k records (orders in this case) it becomes a serious problem.
My question is, does anyone have an elegant solution to this problem?
I can think of several solutions, each with its own drawbacks, but I'm hoping someone has solved this in a simple manner that works as an add-on module. Paid or Open-Source solutions are both welcome suggestions.
Clarification:
I'm looking for an elegant, drop-in solution to the problem of using the grid widget with 850k+ records in Magento. The stock Magento code makes the boneheaded decision to return the id of every record matched by the current filter, even if those records are not being displayed. This is not about offline processing of records; it's about using the grid widget for daily admin tasks.
One possible solution would be to store the results of the filtered search in a temp table and return a reference to the search results. Then you could change 'Select All' from using the actual ids to using a specific callback for the action via that reference. This would preserve the current behavior.
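Roughly, the idea might look something like this (a sketch only: massaction_selection is an assumed custom table with token and entity_id columns, and $collection stands for the grid's already-filtered collection):

$resource = Mage::getSingleton('core/resource');
$write    = $resource->getConnection('core_write');
$table    = $resource->getTableName('massaction_selection');
$token    = Mage::helper('core')->uniqHash('massaction_');

// Copy only the matching entity ids into the snapshot table, entirely
// inside MySQL, so PHP never materializes the 850k-row id list
$select = clone $collection->getSelect();
$select->reset(Zend_Db_Select::COLUMNS)
    ->columns(array(
        'token'     => new Zend_Db_Expr($write->quote($token)),
        'entity_id' => 'main_table.entity_id',
    ));
$write->query($write->insertFromSelect($select, $table, array('token', 'entity_id')));

// The mass-action controller then resolves the selection by token:
//   SELECT entity_id FROM massaction_selection WHERE token = ?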
So, to ask again, does anyone have a good solution to this already created?
I'm running heavy operations from within a shell script. I have a generic iterator (in my case over products, but it can be done with anything else), and I only implement a class that performs the action on each product. My product_iterator shell script takes care of looping over the products, while processing only x products at once (to avoid memory leaks).
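In outline, such a script might look like this (a sketch with a made-up class name and batch size, following Magento's shell/abstract.php pattern):

require_once 'abstract.php'; // Magento's shell script base, shell/abstract.php

class My_Shell_ProductWorker extends Mage_Shell_Abstract
{
    public function run()
    {
        $page = 1;
        do {
            // Load one batch at a time so memory use stays bounded
            $products = Mage::getModel('catalog/product')->getCollection()
                ->setPageSize(500)
                ->setCurPage($page);
            $lastPage = $products->getLastPageNumber();

            foreach ($products as $product) {
                $this->_process($product);
            }

            $products->clear(); // release loaded items before the next batch
            $page++;
        } while ($page <= $lastPage);
    }

    protected function _process($product)
    {
        // the per-product action goes here
    }
}

$worker = new My_Shell_ProductWorker();
$worker->run();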
Well, the least invasive solution to the problem is to turn off the 'Select All' option for grids with large numbers of records. This is easily accomplished by extending the grid class and adding the following code:
protected function _prepareMassaction()
{
    // Turn off 'Select All' so the grid no longer dumps every matching
    // record id into the page's JavaScript
    $this->getMassactionBlock()->setUseSelectAll(false);
    return parent::_prepareMassaction();
}
I think that fbmc has the right general approach. Clearly, even if you did find a way to send back all 850k records, the framework would have a heart attack attempting to deal with them all. Run the logic in a shell script if this is what you need. For some jobs, you may even need to run them in batches or move down to actual SQL logic.
Unfortunately, some parts of the framework were not made to handle this kind of scale. You've found one of them.
Hope that helps!
Thanks,
Joe
Recently I wrote an article about adding a new mass action to an admin grid in Magento.
Hopefully you will like it:
http://www.blog.magepsycho.com/adding-new-mass-action-to-admin-grid-in-magento/
It describes two ways of adding a mass action:
1. Extending the grid layout's _prepareMassaction() method
2. Using the event core_block_abstract_prepare_layout_before (a more upgrade-proof way; sketched below)
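A rough sketch of the second approach (the class, action id, and URL here are placeholders of mine, and the observer is assumed to be registered for that event in config.xml):

class My_Module_Model_Observer
{
    public function addOrderGridMassAction(Varien_Event_Observer $observer)
    {
        $block = $observer->getEvent()->getBlock();

        // React only to the mass-action block of the sales order grid
        if ($block instanceof Mage_Adminhtml_Block_Widget_Grid_Massaction
            && $block->getRequest()->getControllerName() == 'sales_order'
        ) {
            $block->addItem('my_action', array(
                'label' => 'My Custom Action',
                'url'   => $block->getUrl('*/mycontroller/massAction'),
            ));
        }
    }
}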
Thanks & Regards