Browser Plugin to fill a large html form with test data - firefox

In one of the flows in a Java web application, I have a form page that captures around 50-odd fields. To test a code change on the last page in this flow, I have to fill in almost all the fields on the pages that come before it (roughly 75 fields in total). This takes a lot of effort in creating the test data and testing the change.
Most of the time I enter the same data in these fields for testing. Any suggestion to automate this? Something like a Firefox plugin which could save the form data within the browser and populate it again the next time I want to?
I tried searching the internet, but I could only find Charles Proxy, which isn't exactly what I need.

You can use Selenium or iMacros for Firefox - https://addons.mozilla.org/en-US/firefox/addon/imacros-for-firefox/.
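If a full recording tool is more than you need, a small script can do the same job from the browser console or a bookmarklet. Below is only a rough sketch: the form selector ('#orderForm') and the storage key ('testFormData') are placeholders you would swap for your own.

```javascript
// Save the current values of a form's fields to localStorage, and restore
// them later. The selector and storage key used here are illustrative only.
function saveFormData(formSelector, storageKey) {
  var data = {};
  var fields = document.querySelectorAll(formSelector + ' input, ' +
                                         formSelector + ' select, ' +
                                         formSelector + ' textarea');
  for (var i = 0; i < fields.length; i++) {
    var el = fields[i];
    if (!el.name) continue;
    if (el.type === 'checkbox' || el.type === 'radio') {
      data[el.name + '::' + el.value] = el.checked;
    } else {
      data[el.name] = el.value;
    }
  }
  localStorage.setItem(storageKey, JSON.stringify(data));
}

function restoreFormData(formSelector, storageKey) {
  var data = JSON.parse(localStorage.getItem(storageKey) || '{}');
  var fields = document.querySelectorAll(formSelector + ' input, ' +
                                         formSelector + ' select, ' +
                                         formSelector + ' textarea');
  for (var i = 0; i < fields.length; i++) {
    var el = fields[i];
    if (!el.name) continue;
    if (el.type === 'checkbox' || el.type === 'radio') {
      el.checked = !!data[el.name + '::' + el.value];
    } else if (data[el.name] !== undefined) {
      el.value = data[el.name];
    }
  }
}

// Usage from the console (or wrapped in javascript: for a bookmarklet):
// saveFormData('#orderForm', 'testFormData');    // after filling the form once
// restoreFormData('#orderForm', 'testFormData'); // on the next run through the flow
```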

Related

jQuery AJAX Load Method - Delay

I'll admit that I'm pretty new to web development (I've only been coding for about a year) and especially green when it comes to JS / jQuery.
A specific web page I've built loads different data based on hovering over certain categories: country clubs, resorts, hotels, etc. When I built the site on my local machine, the javascript function was super quick. However, on the live site, it has a long delay before the data swap happens.
The URL is: http://preferredparkingsolutions.com/client_list.html
Which links to a javascript function at: http://preferredparkingsolutions.com/scripts/clientHover.js
Which replaces the display div (#client_list) by pulling data from a text file.
Is there a better / faster way of doing this?
Yes, this could be optimised by loading the content up-front and caching it. Currently you are doing an HTTP request for each and every hover, even if the user has hovered over that element before, since the AJAX responses aren't being cached. Doing this would be your quickest win.
However, I can't see any case at all for having the content live externally. Is there any reason you're against having the content physically in the page and just using show/hide methods? There are various benefits to this - SEO, for one thing, since Google will find the content.
The external page you are loading is http://preferredparkingsolutions.com/client_list.inc.html, and its content is small and looks like a static page, so why not just load everything up-front and then hide and show the divs? As Utkanos suggested, you will also get an SEO benefit, and you avoid an HTTP request for each and every hover. If you still want to load it externally, at least load it once, cache it, and use the cached version to hide and show the divs.
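A minimal sketch of the caching idea, assuming each hover target carries a data-file attribute naming the file to load (the .category selector and that attribute are guesses; only #client_list appears in the question):

```javascript
// Cache each category's content so it is fetched at most once per page load.
var contentCache = {};

$('.category').on('mouseenter', function () {
  var file = $(this).data('file');              // e.g. "resorts.txt" (assumed attribute)
  if (contentCache[file]) {
    $('#client_list').html(contentCache[file]); // serve from the cache
    return;
  }
  $.get(file, function (html) {
    contentCache[file] = html;                  // remember it for next time
    $('#client_list').html(html);
  });
});
```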

Designing Web Album: CSS or Ajax?

I want to design a web album in which every image has its own title and description, so only one set of image, title, and description is visible at a time. On clicking the Next button, the next set of image, title, and description appears, and so on.
So I am wondering, what would be the best way to build this: HTML or AJAX?
I don't want to use ready-to-use tools such as Lightbox.
Do you want the browser's back button to work? If so, then you should make your life simple and use HTML (since you will only be displaying one image at a time either way).
AJAX implies using HTML. On the other hand, using HTML does not necessarily imply that you need to use AJAX to load content dynamically.
What is the purpose of this project? If you are doing it for the learning experience, you should go with AJAX (from scratch). If you want speed and quality, use an existing web image gallery. If you need to write it yourself, use plain HTML, or an AJAX framework such as Dojo or jQuery, which will save you a lot of pain with cross-browser quirks.
In addition, if you want a button that takes you to the next (or previous) image and you don't know beforehand how many images you will have, then you are looking for dynamic behavior. You can code dynamic logic either on the client side (JavaScript) or on the server side (let's say PHP, to start with).
Also, how do you plan to keep the corresponding (image, title, description) together?
If you only have, say, 3 images, you could hard-code each set into its own HTML file, e.g. 1.html, 2.html, 3.html. Then you would point the forward button in 1.html to 2.html, and so on.
If you didn't want this boring static behavior and wanted something smarter, say you decided on AJAX. Then you would have only one HTML file, and from there (using JavaScript) you would ask your server for the (image, title, description) and load all of that (dynamically, without refreshing the browser) into the same page. The easiest way to get this from the server is by reading a static (XML or JSON) file which contains all the info (image URLs, titles, descriptions). Then, with JavaScript and DOM manipulation, you would remove the old image and add the new one.
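As an illustration of that AJAX approach, here is a minimal sketch. The file name (gallery.json), its structure, and the element ids (photo, title, description, prev, next) are all made up for the example:

```javascript
// Load gallery.json once, then swap image/title/description on Prev/Next.
// Assumed format: [ { "url": "img/1.jpg", "title": "...", "description": "..." }, ... ]
var gallery = [];
var current = 0;

function show(index) {
  var item = gallery[index];
  document.getElementById('photo').src = item.url;
  document.getElementById('title').textContent = item.title;
  document.getElementById('description').textContent = item.description;
}

var xhr = new XMLHttpRequest();
xhr.open('GET', 'gallery.json', true);
xhr.onload = function () {
  gallery = JSON.parse(xhr.responseText);
  show(current);
};
xhr.send();

document.getElementById('next').onclick = function () {
  current = (current + 1) % gallery.length;
  show(current);
};
document.getElementById('prev').onclick = function () {
  current = (current - 1 + gallery.length) % gallery.length;
  show(current);
};
```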
However, this would all be a lot simpler with server-side processing (and it's worth learning). In this case you could have a url which takes a parameter with the image number. eg. http://example.com/gallery/index.php?image=X
Then, before the server responds to the client with the HTML, it would realize that you want to load image X, so it would get the corresponding description, title, and URL and "embed" those into the file. Of course, depending on the number, it would also add the right links for the previous and next buttons. E.g. if the currently displayed image was 9, then the forward button would "dynamically" be determined to link to (X+1): http://example.com/gallery/index.php?image=10

How to retrieve plain text from a formatted website to use in UIWebView

Not sure if what I want to do is possible, but what I am hoping to do is somehow gather certain pieces of text from a website, remove the header, footer, background, all formatting, and place it into my application in a scrollview or something similar...
I'll give you an example... Imagine I was making Wikipedia's iPhone app: I want to download the article on dogs, without the header, sidebars, etc., just the text. How would I go about doing this?
I understand that for this I have not provided any example code or what I've tried or started, but that's just because in this case I'm lost! That doesn't mean I want full chunks of code either. Any help will do. If this doesn't work, I will just have to make a 'mobile optimised' version of the webpages I want to include in my app.
Thanks
(Edit: the term I was trying to use was 'strip the web page of its HTML coding')
You may be going about this the wrong way, or perhaps even asking the wrong question.
Does the target website have an API or datafeed of some kind?
Can you get the information you need in JSON or XML format directly from the site?
I think you've misunderstood the technology. HTML is merely the framework on which the formatting and data are hung.
Parsing the HTML page seems like an awfully big headache, and I doubt you'll ever get it to work reliably, because almost all sites these days are partially or wholly generated on the server side; the page you receive is only the result.
Some sites keep the information in memory and others fetch it dynamically through AJAX, for example, which means that simply trying to get the data by parsing the HTML will get you zero data.
Another issue you should be aware of, though, is that simply copying the data from generated websites may open you up to copyright issues.
You have to parse the HTML, search for the part you want, and "throw away" the parts you do not need. This is more or less brute force, and the site's markup must not change, otherwise you are screwed; with this method you have to write the parser by hand. But maybe there is an Atom or RSS feed you can parse instead. That would be much easier, and you would not depend on the website layout, because the RSS/Atom feed is just about the data. For parsing RSS you could try out NSXMLParser.
And then you have to make a valid HTML page out of the data and present it in the UIWebView.
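If you do end up scraping the rendered page, one pragmatic option is to let the page itself do the stripping: inject a small script into the loaded document (for example via UIWebView's stringByEvaluatingJavaScriptFromString:) and take its return value. This is only a sketch; the #content selector is a guess, since every site names its main content element differently.

```javascript
// Return just the readable text of the page, dropping obvious chrome.
// '#content' is a placeholder selector for the site's main article container.
(function () {
  var container = document.querySelector('#content') || document.body;
  var junk = container.querySelectorAll('header, footer, nav, aside, script, style');
  for (var i = 0; i < junk.length; i++) {
    junk[i].parentNode.removeChild(junk[i]);
  }
  return container.innerText || container.textContent;
})();
```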

For ajax - Hashes vs HTML 5 History API?

Before I launch my site, I want to get my URL structure set in stone. A large number of my pages have tabs on them, and it's a much better user experience if, when changing a tab, I use AJAX to fetch just the relevant changes and update that, rather than reloading the whole page.
Should I use the popular method of just updating the hash of the url for ajax tab changes, or should I just use the HTML 5 history API, and let anyone with browsers that don't support it reload the full page? I've heard people say that websites that use hashes and hashbangs are "breaking the web". Using hashes my urls would look like this: example.com/#popular, and using HTML 5 history my urls would look like this: example.com/?tab=popular.
If you want to serve a different page depending on which tab is selected, then use the HTML 5 history approach. Otherwise just update the hash.
As far as I know, and from my experience, it's really six of one and half a dozen of the other. It's really whatever you prefer, since the end result is the same.
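For what it's worth, the two approaches can be combined with simple feature detection: use pushState where the History API exists and fall back to the hash elsewhere. A rough sketch follows; loadTabViaAjax is a placeholder for whatever your existing tab-loading code does.

```javascript
// Use the History API when available, otherwise fall back to the hash.
function selectTab(name) {
  loadTabViaAjax(name);                     // placeholder: your existing AJAX tab update
  if (window.history && history.pushState) {
    history.pushState({ tab: name }, '', '?tab=' + encodeURIComponent(name));
  } else {
    location.hash = name;                   // e.g. example.com/#popular
  }
}

// Keep the back/forward buttons working for the pushState version.
window.onpopstate = function (event) {
  if (event.state && event.state.tab) {
    loadTabViaAjax(event.state.tab);
  }
};
```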

Facebook Game Function, Optimizing a Call, and Loading Bar

I am attempting to make a Facebook game and trying to replicate a common feature that I find in many other Facebook games (a call to my website and an illusory image that acts as a loading bar).
The function should do the following:
1. User clicks on the button
2. Animated GIF appears (loading bar)
3. Button updates the user's status
4. Animated GIF disappears
5. Facebook Canvas page is updated
The code I currently have can be found at <dead link>
I am having trouble working out Steps 2 and 4.
I need to optimize Step 5.
To clarify what happens in Step 5: I have Box 1, which has my stats, and Box 2, which has my points. I click on Box 1. This should update Box 1 by adding 1 point, and update Box 2 by subtracting a point (clicking on Box 1 concurrently updates both boxes).
I have successfully done this, but it is quite slow. I was wondering if there is an alternative way that may be faster than what I am currently doing.
Script Updated with Mark-up. <dead link>
I've found a quick way to optimize the call. Rather than re-querying data that I have already queried, I will use the first query to grab most of my data instead of querying for it again when I update.
It would help greatly to see the document markup (XHTML) where you have your elements and the calls to your javascript functions.
For steps 2 and 4 I recommend using the visibility attribute rather than display, or having the loading bar in an fb:js-string and using elem.setInnerFbml when you begin loading and once you have your response data, simply update it to the new content (you don't need an explicit loading_finish function in this case).
In your get_skillpoint function, you set parameters in an object and then you specify the action parameter again in the URL you are posting to as a URL query param - you may end up with one value overwriting the other, depending on how you access these values on the server side. I would recommend using different names for these two parameters if they are not the same. Also, why are you trying to send separate GET and POST variable sets? You should put everything in the POST and simply leave out the URL query string. I vaguely remember losing data that way in the past (vaguely, mind you).
If you can post your markup I'll update my answer with any light it sheds on the problem. It might be slow simply because Facebook isn't blindingly fast when it comes to FBJS and AJAX. Also, FBML being returned must be preprocessed by the FB proxy before your app gets it, which adds a bit of lag; it's a bit faster to return JSON and just pull the data needed out of it, then place the appropriate pieces into an existing element or make use of fb:js-string.
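To make Steps 2-4 concrete, here is a generic sketch of the show-spinner / request / hide-spinner pattern in plain JavaScript (not FBJS-specific; the element ids, the /update_points endpoint, and the JSON fields are placeholders):

```javascript
// Step 2: show the loading gif; step 3: fire the request;
// step 4: hide the gif; step 5: update both boxes from one response.
function updateSkillPoint() {
  var spinner = document.getElementById('loading_bar');
  spinner.style.visibility = 'visible';

  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/update_points', true);
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.onload = function () {
    spinner.style.visibility = 'hidden';
    var data = JSON.parse(xhr.responseText);
    document.getElementById('box1').innerHTML = data.stats;   // placeholder ids/fields
    document.getElementById('box2').innerHTML = data.points;
  };
  xhr.send('action=add_point');
}
```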
