How can I import a block of RTML into a Yahoo merchant store?

I've put together a block of RTML that I would like to add to a product page. Now that I've got it, I'm trying to figure out how to place that whole block of RTML into the product's template. That may not be the right way to do it.
My desired result is that the HTML of the product page is rendered plus the HTML that would be rendered by my added block of RTML.
What I have tried: as far as I can tell, I can only use the editor to add RTML one line/update at a time, which means anything complicated would take forever.

Yes Kristian, you're absolutely right. Operators in RTML take a very, very long time to insert because the store editor requires a page refresh for each one. If you have a lot of HTML to update, you can use the RTML template uploader at http://www.yourstorewizards.com/rtml-transfer-utility.html. While it is free to "download" templates, you are charged for "uploading" them. This is because you aren't truly uploading or downloading templates: what the program does is move your mouse around the screen very quickly, doing all the clicking for you.
At the very least, you can use the MULTI operator to group other operators together. That way you can quickly duplicate operators/HTML: take a bunch of TEXT operators, duplicate them inside a MULTI, and then duplicate the MULTI. This is probably one of the most efficient ways to build large blocks of HTML.
Another idea: print out a script tag using a TEXT operator, and insert your HTML with JavaScript. Obviously you won't want to do this all the time, but if you're looking to insert a very large piece of HTML, this may be your best bet.
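To make that concrete, here is a loose sketch of what such a TEXT operator could print into the page (this is the emitted script, not RTML itself; the container ID and markup are made up for illustration):

<script>
// Hypothetical output of a single TEXT operator: one script block
// that injects a large chunk of HTML once the page has loaded.
document.addEventListener('DOMContentLoaded', function () {
  var container = document.getElementById('product-extras'); // assumed placeholder in the template
  if (container) {
    container.innerHTML =
      '<div class="promo">' +
        '<h3>Free shipping over $50</h3>' +
        '<p>Continental US only.</p>' +
      '</div>';
  }
});
</script>

The advantage is that the entire HTML payload lives in one operator instead of dozens.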

Related

Comic navigation in Joomla?

I have a Joomla site and would like to integrate some old unfinished webcomics into it, so I can pick them up where I stopped, in a CMS that won't leave me in an absolute frothing rage (thanks, WordPress).
I've got some experience with Joomla and I believe it would be a pretty good platform for managing multiple comics... except for the small issue of horrid navigation between pages/articles. Joomla's integrated article navigation is a humble but passable start, but if you intend to use categories to organize chapters, getting from the end of one to the beginning of the next is painful. This is a pity, as Joomla's category and article management options are beautiful for archiving and presentation, and adding Gantry 5 means a great deal of control over the reading experience. Basically, Joomla has pretty much everything I want except the navigation.
Ideally, what I'd like to be able to accomplish for comic navigation in Joomla is:
Clickable full-article-image leading to next article/page
Prev/next article buttons (already available)
Prev/next category buttons (do we have those?)
The latter two in a module I can choose where to publish (optional)
And that's it, basically. I understand that implementing the first could be very hard without some major template customization, in which case I'd be willing to insert the image as a link in the article body, but only if there were one single snippet I could use, like the one that generates the next-category-article button, because I'm not willing to create hundreds of menu items to generate links page by page.
So is any of this doable?
This is a quick answer but too much for a comment. I'm assuming, since you are on SO, that you don't mind coding (as opposed to just configuring).
I think you need to do two things. First, create a pagination.php override for your template. This lets you fully control what the pagination looks like: you can have images, special CSS and JS, whatever you want. You can also add "first" and "last" options.
Second, make a new plugin that replaces the core pagenavigation plugin and also generates the previous/next category links. (Or you could make one that only does category navigation, depending on what you want.) However, it seems to me that data on the sibling links is already being generated in the content category model, so you might be able to use that. (Check the code; I think there was never a UI for it, but it is there. Even if it isn't, siblings are very easy to obtain in nested sets.)
The other thing to consider, if you go that route, is restructuring it as a module that gets the current article ID and category ID from JInput. You might also be able to use JPagination. The important thing, however, is to handle caching the way the existing pagination does: cache the whole list in the order you want, so you are not running so many queries and slowing your site down. You may want to look at the categories and category modules to get some ideas about the queries.
Hope that gets you started; it is definitely something you can do without too much trouble.

Best way to group scripts or styles in the Head without a div?

Twofold question.
Using a web template that loads new pages via AJAX, rather than by pointing the browser to a new URL, how would one go about adding new script or style tags to the page head as necessary?
I think jQuery's $('head').append(script); works just fine, but (and here's the second half) if I wanted to group all of the scripts that are page-dependent away from those that are universal across the site, how would I do that?
Using a div sounds enticing, since I could print it at the initial page render and just call $('head div.external').append(script), but a div is still probably a bad idea inside the head.
Is there any other element or method I could use to group these? The goal is to easily remove all of them when the user navigates away from the current page. A smaller consideration is that Google Analytics instructs you to place its snippet at the end of the head tag (I'll be honest, I don't know why being the last script matters), and I don't want my scripts to interfere. Do these even need to be in the head? Could I place a div at the end of the body and use that?
Surely the easiest way to do this, if it is necessary, is with classes:
$(script).addClass('external').appendTo('head');
then later:
$('script.external').remove();
I'm honestly not sure what this would achieve, however.
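For what it's worth, a minimal sketch of that full cycle for AJAX navigation might look like this (the URLs and function names are hypothetical; plain DOM insertion is used so the script nodes reliably stay in the head where the class selector can find them):

// Add page-specific scripts, tagged with a class.
function loadPageScripts(urls) {
  urls.forEach(function (url) {
    var s = document.createElement('script');
    s.src = url;
    s.className = 'external'; // mark as page-specific
    document.head.appendChild(s);
  });
}

// Remove everything page-specific on navigation.
function unloadPageScripts() {
  $('head script.external').remove();
}

// When the AJAX navigation swaps pages:
unloadPageScripts();
loadPageScripts(['/js/gallery.js']); // hypothetical page-specific file

One caveat: removing the script elements does not undo anything they already executed (globals, event handlers and timers keep working), so the benefit is organizational rather than functional, which is probably why it is hard to say what it achieves.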

How to retrieve plain text from a formatted website to use in UIWebView

Not sure if what I want to do is possible, but what I am hoping to do is gather certain pieces of text from a website, remove the header, footer, background and all formatting, and place it into my application in a scroll view or something similar...
I'll give you an example: imagine I was making Wikipedia's iPhone app. I'd want to download the article on dogs without the header, sidebars, etc., just the text. How would I go about doing this?
I understand that I have not provided any example code or described what I've tried, but that's because in this case I'm lost! That doesn't mean I want full chunks of code either; any help will do. If this doesn't work, I will just have to make a mobile-optimised version of the webpages I want to include in my app.
Thanks
(Edit: the term I was trying to use was 'strip the web page of its HTML'.)
You may be going about this the wrong way, or perhaps even asking the wrong question.
Does the target website have an API or datafeed of some kind?
Can you get the information you need in JSON or XML format directly from the site?
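For the Wikipedia example specifically, there is an API that can return plain-text extracts directly, so no HTML stripping is needed. A quick browser-style sketch of the request (parameter names from memory; check the MediaWiki API docs before relying on them):

// Fetch the plain-text extract of the 'Dog' article.
var url = 'https://en.wikipedia.org/w/api.php' +
  '?action=query&prop=extracts&explaintext=1&format=json' +
  '&origin=*' + // allows anonymous cross-origin requests from a browser
  '&titles=' + encodeURIComponent('Dog');

var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.onload = function () {
  var pages = JSON.parse(xhr.responseText).query.pages;
  for (var id in pages) {
    console.log(pages[id].extract); // the article text, no markup
  }
};
xhr.send();

In an iPhone app the same request would go through the native networking APIs instead, but the shape of the response is the same.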
I think you've misunderstood the technology. HTML is merely the framework on which the formatting and data are hung.
Parsing the HTML page seems like an awfully big headache, and I doubt you'll ever get it to work reliably, because almost all sites these days are partially or wholly generated on the server side; the page you receive is only the end result.
Some sites keep the information in memory and others fetch it dynamically through AJAX, for example, which means that simply parsing the downloaded HTML will get you zero data.
Another issue to be aware of is that simply copying the data from generated websites may open you up to copyright issues.
You have to parse the HTML code, search for the part that you want, and "throw away" the parts you do not need. This is more or less brute force, and the code of the website must not change or you are screwed, so you have to write the parser by hand with this method. But maybe there is an Atom or RSS feed you can parse instead. That is much easier, and you are not dependent on the website layout, because the RSS/Atom feed is just the data. For parsing RSS you could try NSXMLParser.
You then have to build a valid HTML page out of the data and present it in the UIWebView.
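One hedged shortcut, if the page is already loaded in a UIWebView: let the page's own DOM do the extraction. UIWebView can run a JavaScript snippet via stringByEvaluatingJavaScriptFromString: and hand the result back as a string; something like the following, where the '#content' ID is an assumption about the target site's markup:

// Run inside the loaded page; the value of this expression
// comes back to the app as a string.
(function () {
  var main = document.getElementById('content'); // assumed main-content container
  if (!main) return '';
  return main.innerText || main.textContent; // plain text, tags stripped
})();

You would still need to pick the right container per site, so the brittleness the other answers mention applies here too.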

Saving WYSIWYG editor content with Ajax

I am writing a CMS (on .NET) and have structured the whole page to work client-side.
There is a treeview that lets you add/remove/move items and define their names in the configured languages. For each language I save the names of the category, but when there is HTML content associated with it, I run into the JavaScript serializer problem: it finds the content too long to be serialized.
What would be the best approach to make something like this work? Should I change everything to work with postbacks, or try to manually call __doPostBack for the editor content (which I don't want)? Thank you in advance.
I guess it would be great to implement auto-save on a time interval that submits only the diff between the current state and the previous save. That will do the trick when the user edits manually, though not for copy/paste, of course. That is, if we are talking about really big data to save.
Otherwise, you need to find some way to compress the data before submitting: JSON + Base64, etc.
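A minimal sketch of that interval-based auto-save, assuming a hypothetical /autosave endpoint and a getEditorHtml() accessor for the editor content (this version sends the full content on change; submitting a real diff against the last save is the refinement suggested above):

var lastSaved = '';

// Assumed accessor; depends on the WYSIWYG editor in use.
function getEditorHtml() {
  return document.getElementById('editor').innerHTML;
}

setInterval(function () {
  var current = getEditorHtml();
  if (current === lastSaved) return; // nothing changed, skip the request

  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/autosave'); // hypothetical endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () {
    if (xhr.status === 200) lastSaved = current; // commit on success
  };
  xhr.send(JSON.stringify({ html: current }));
}, 30000); // every 30 seconds

Posting straight over XMLHttpRequest may also sidestep the serializer limit, though the server still has to accept a large request body.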

What algorithms could I use to identify content on a web page?

I have a web page loaded up in the browser (i.e. its DOM and element positioning are both accessible to me), and I want to find the block element (or a sorted list of such elements) that likely contains the most content, as in a continuous block of text. The goal is to exclude things like menus, headers, footers and such.
This is my personal favorite: VIPS: a Vision-based Page Segmentation Algorithm
First, if you need to parse a web page, I would use HTMLAgilityPack to transform it to XML. It will speed everything up and enable you, using a simple XPath, to go directly to the BODY.
After that, you iterate over all the divs (you can get all the DIV elements in a list from the Agility Pack) and take whatever you want.
There's a simple technique to do this, based on analysing how "noisy" the HTML is, i.e., what the ratio of markup to displayed text is across the page. The Easy Way to Extract Useful Text from Arbitrary HTML describes this technique, giving some Python code to illustrate it.
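Since the question already has the DOM available in the browser, here is a rough in-browser version of that idea: score each block element by how much of its markup is visible text, weighted by text length so short-but-clean elements like menus don't win (the tag list and weighting are my assumptions, not the article's exact recipe):

// Score candidate blocks by text density times text length.
function scoreBlocks() {
  var blocks = document.querySelectorAll('div, article, section, td');
  var scored = [];
  Array.prototype.forEach.call(blocks, function (el) {
    var text = (el.innerText || '').trim();
    var markup = el.innerHTML.length || 1;  // avoid division by zero
    var density = text.length / markup;     // 0..1, higher = less markup noise
    scored.push({ el: el, score: density * text.length });
  });
  return scored.sort(function (a, b) { return b.score - a.score; });
}

// scoreBlocks()[0].el is the likeliest main-content block;
// the full array is the sorted list the question asks for.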
Cf. also the HTML::ContentExtractor Perl module, which implements this idea. If you wanted to use it, it would make sense to clean the HTML first, e.g. using beautifulsoup.
I would recommend Vit Baisa's thesis on web content cleaning; I think he has some code too, but I can't find a link to it. There is also a discussion of this very same problem on the natural language processing LingPipe blog.
