I want to know: on web pages like Yahoo, where the news changes every 5 minutes or so, does the browser send an HTTP request to the server for each update? If I want to implement this feature with AJAX, do I need to send an HTTP request again each time? Can someone explain this dynamic structure (like the updating news) for me?
You can easily find out what's going on on the page: for example, use Firefox and install the Firebug addon. After installation, open Firebug and switch to the Network tab (you may have to enable it). Reload the page in question and you'll see every request made, including the timed requests fired later to update the page. The details are pretty thorough.
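To make it concrete: each of those timed updates is an ordinary HTTP request issued by JavaScript in the page. Here is a minimal sketch of the kind of AJAX polling such news pages typically use (the /latest-news endpoint and the 5-minute interval are assumptions for illustration, not Yahoo's actual setup):

// Poll the server every 5 minutes; each call is a normal HTTP request
// and will show up as a new entry in Firebug's Network tab.
function refreshNews() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/latest-news', true); // '/latest-news' is a hypothetical endpoint
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // swap in the fresh markup returned by the server
      document.getElementById('news').innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}
setInterval(refreshNews, 5 * 60 * 1000); // repeat every 5 minutes

So yes: every update means another HTTP request; AJAX just lets the page send it without a full reload.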
Sorry for asking a newbie question here. I've searched on the web, but everyone else seems to know the answer already, so I can't find any definite words. I need to verify in Tamper Data that my Omniture tag fires correctly. Should I look for a call to adobetag.com? Or what other URL should I look for?
If your implementation is Omniture's 3rd party cookie, the request will be sent to 2o7.net or omtrdc.net, depending on the version of SiteCatalyst you are using (and whether or not it is through their TagManager). If it is a first party implementation, it will go to your own domain, the one you worked with Omniture ClientCare to set up.
You can certainly look for those domains in any number of addons or other programs (the built-in net console, Firebug, HttpFox, Charles Proxy - basically anything that can see requests being made), but FYI there are a couple of options that make it easier to see Omniture requests.
If you are using Firefox and have the Firebug addon, there is an extension to Firebug called Omnibug. It adds an extra tab to Firebug that shows you the requests made to Omniture, and even presents them in an easy-to-read format (it also reports other trackers such as GA and WebTrends).
Alternatively, Omniture provides a debugging tool called DigitalPulse. Basically, you create a bookmark and put their JavaScript code snippet in as the location/URL. Then, on the page, you click the bookmark and it pops up a window showing info about any Omniture requests made.
Filter to requests containing /b/ss; those are the requests being sent to Omniture SiteCatalyst.
See these references if you need more information:
http://blogs.adobe.com/digitalmarketing/analytics/validate-your-mobile-app-measurement-implementation/
http://blogs.adobe.com/digitalmarketing/analytics/custom-link-tracking-capturing-user-actions/
http://emptymind.org/validating-page-tags-with-httpfox/
http://tech.groups.yahoo.com/group/webanalytics/message/24232
In short, look for "/b/ss". However, this guy wrote a very in-depth article on how to do that with HttpFox; it's a good read with screenshots: http://www.mikewebguy.com/2013/08/21/httpfox-helps-you-verify-what-data-is-being-sent-to-your-analytics-provider/
We're working on a Joomla site for GiveCamp, and our code is running into some error in production that didn't occur in test.
Nothing is showing up in the server logs, and an error appears on the screen very briefly before being replaced by another error page.
How can we freeze the display to view the first page?
Like stepping through code in a debugger, but in the browser?
The only thing we've come up with is making a video and stopping the playback; there's got to be a smarter way of capturing the web traffic, right?
We're open to using a different browser or adding extensions -- I think we've viewed this in Firefox and Conkeror, so far...
UPDATE: we ended up recording a screencast, then stopping the playback to see the error. I am still hoping to find a better solution, something that would capture the web traffic itself.
I know it might be late since you've already found your solution, but the following might be useful for anyone else having the same problem:
Try using a Firefox extension that captures the response headers, like HttpFox or Live HTTP Headers. These will let you capture everything received from the web server and then go through it to find your message. There are also a bunch of dedicated packet-sniffing tools such as Wireshark, which capture everything. If you're on Windows, there's an awesome app called Fiddler.
We are grabbing our feed at Feedburner using the jQuery jGFeed plugin.
This works great until the moment our users are on an https:// page.
When we try to load the feed on that page, the user gets the message that there is mixed content, protected and unprotected, on the page.
A solution would be to load the feed over https, but Google doesn't allow that; the certificate isn't working:
$.jGFeed('https://feeds.feedburner.com/xxx')
Does anyone know a workaround for this? The way it functions now, we simply cannot serve the feed in our pages when on https.
At this time Feedburner does not offer feeds over SSL (https scheme). The message that you're getting regarding mixed content is by design; in fact, any and all content that is not being loaded from a secured connection will trigger that message, so making sure that all content is loaded over SSL is really your only alternative to avoid that popup.
As I mentioned, Feedburner doesn't offer feeds over SSL, so realistically you'll need to look into porting your feed to another service that DOES offer feeds over SSL. Keep in mind what I said above, however, with respect to your feed's content as well. If you have any embedded content that is not delivered via SSL then that content will also trigger the popup that you're trying to avoid.
This comes up from time to time with other services that don't have an SSL cert (Twitter's API is a bit of a mess that way too.) Brian's comment is correct about the nature of the message, so you've got a few options:
If this is on your server, and the core data is on your server too, then you've got end-to-end SSL capabilities; just point jGFeed to the local RSS feed that FeedBurner's already importing.
Code up a proxy on your server to marshal the call to Feedburner and return the response over SSL (see the sketch after this list).
Find another feed service that supports SSL, and either pass it the original feed or the Feedburner one.
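For the second option, here is a minimal sketch of such a proxy, assuming a Node.js server; the /feed-proxy route, the port, and the example domain are illustrative, not part of the original answers:

// Relay the Feedburner feed so the browser only ever talks to your domain.
// Run this behind your site's SSL termination so /feed-proxy is served over https.
var http = require('http');

http.createServer(function (clientReq, clientRes) {
  if (clientReq.url === '/feed-proxy') {
    // server-to-server fetch over plain HTTP; the browser never sees it
    http.get('http://feeds.feedburner.com/xxx', function (feedRes) {
      clientRes.writeHead(feedRes.statusCode, {
        'Content-Type': feedRes.headers['content-type']
      });
      feedRes.pipe(clientRes); // stream the feed body back to the client
    });
  } else {
    clientRes.writeHead(404);
    clientRes.end();
  }
}).listen(8080);

The page could then fetch https://www.example.com/feed-proxy directly (with jGFeed or plain AJAX), so the browser only ever makes SSL requests and the mixed-content warning never triggers.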
I have started using the paid WordPress theme Schema for several of my blogs. In general, it is a nice theme, fast and SEO friendly. However, my blogs are all on HTTPS, and I noticed that with a Google Feedburner widget in the sidebar, Chrome shows a security error on any secure page that makes an insecure form call.
The fix is really simple: edit the file widget-subscribe.php located at /wp-content/themes/schema/functions/ and replace every occurrence of "http://feedburner.google.com" with "https://feedburner.google.com".
Save the file and clear the cache; your browser will then show a green padlock.
I fixed this on my blog, www.androidloud.com.
I am developing web pages which reference external links/images/stylesheets etc. I have one page which loads fine over HTTPS, but when I apply different external styles, some of them trigger the warning "Contains unauthenticated content".
Don't get me wrong, I understand WHAT this means, but I can't see any reference to any insecure HTTP requests in View Source, Firebug, Live HTTP Headers, or the View Page Info > Media window.
Does anyone have tips, or ideas for plugins or tools, that can identify exactly which items Firefox is not happy with?
Unfortunately this page is not live on the internet so I can't show it to you.
Thanks
You could, theoretically, use a proxy that just logs all requests and forwards them to the server. Of course, that is a very roundabout way of doing this :)
I have used Proxomitron and this showed the file!
Use Firefox to see the media assets: click the lock icon in the status bar when you are on a secure page, then choose Media.
I am using Ruby to screen-scrape a web page (created in ASP.NET) which uses a GridView to display data. I can successfully read the data displayed on page 1 of the grid, but I can't figure out how to move to the next page of the grid to read all the data.
The problem is that the page-number hyperlinks are not normal hyperlinks (with a URL) but javascript hyperlinks that cause a postback to the same page.
An example of the hyperlink (the control ID here is representative):
<a href="javascript:__doPostBack('GridView1','Page$6')">6</a>
I recommend using Watir, a Ruby library designed for browser testing, if you're already using Ruby for processing. For one thing, it gives you a much nicer interface to the DOM elements on the page, and it makes clicking links like this easy:
ie.link(:text, '6').click
Then, of course you have easier methods for navigating the table as well. It's easy enough to automate this process:
(1..total_number_of_pages).each do |next_page|
  ie.link(:text, next_page.to_s).click  # link text is a string, so convert the page number
  # table processing goes here
end
I don't know your use case, but this approach has its advantages and disadvantages. For one thing, it actually runs a browser instance, so if this is something you need to run frequently and quietly in the background in a completely automated way, this may not be the best approach. On the other hand, if it's OK to launch a browser instance, then you don't have to worry about all that postback nonsense, and you can just click the link as if you were a user.
Watir: http://wtr.rubyforge.org/
You'll need to figure out the actual URL.
Option 1a: Open the page in a browser with good developer support (e.g. Firefox with the web development tools) and look through the source to find where __doPostBack is defined. Figure out what URL it's constructing (see the sketch after these options). Note that it might not be in the main page source, but instead in something that the page loads.
Option 1b: Ditto, but have Ruby do it. If you're fetching the page with Net::HTTP you've got the tools to find the definition of __doPostBack already (the body as a string, Ruby's grep, and the ability to request additional files, such as those in script tags).
Option 2: Monitor the traffic between a browser and the page (e.g. with a logging proxy) to find out what the URL is.
Option 3: Ask the owner of the web page.
Option 4: Guess. This may not be as bad as it sounds (e.g. if the original URL ends with "...?page=1" or something) but in general this is the least likely to work.
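For context on options 1a/1b, the __doPostBack function that ASP.NET WebForms emits is usually a small piece of boilerplate along these lines (paraphrased; the exact form and control names vary per page):

// Rough paraphrase of the standard WebForms boilerplate.
// Clicking the '6' pager link calls __doPostBack('GridView1', 'Page$6'):
// it fills two hidden fields and submits the page's single <form> as a POST.
function __doPostBack(eventTarget, eventArgument) {
  var theForm = document.forms[0]; // WebForms pages have one server-side form
  theForm.__EVENTTARGET.value = eventTarget;     // ID of the control that "fired"
  theForm.__EVENTARGUMENT.value = eventArgument; // e.g. 'Page$6' for page 6
  theForm.submit();                              // plain HTTP POST to the same URL
}

So replicating a pager click outside the browser means POSTing the form's fields back to the same URL, with __EVENTTARGET set to the grid's ID, __EVENTARGUMENT set to 'Page$N', and the page's __VIEWSTATE (and, on later ASP.NET versions, __EVENTVALIDATION) values included.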
Edit (in response to your comment on the other question):
Assuming you're using the Net::HTTP library, you can do a postback by replacing your get with a post; note that post also takes the form data as a body, e.g. my_http.post(my_url, form_data) instead of my_http.get(my_url), where form_data is the URL-encoded set of form fields the page expects.
Edit (in response to danieltalsky's answer):
Watir may be a really good solution for you (I'm kicking myself for not having thought of it), but be aware that you may have to manually fire the event or jump through other hoops to get what you want. As a specific gotcha, with any asynchronous fetch like this you need to make sure that the full response has come back before you scrape it; that isn't a problem when you're doing the request inline yourself.
You will have to perform the postback. The data is passed with a form POST back to the server. Like Markus said, use something like Firebug or the Developer Tools in IE 8, plus Fiddler, to watch the traffic. But honestly, this is a web form using the bloated GridView, so you will be in for a fun adventure. ;)
You'll need to do some investigation in order to figure out what HTTP request the javascript execution is performing. I've used the Mozilla browser with the Firebug plugin and also the "Live HTTP Headers" plugin to help determine what is going on. It will likely become clear to you which requests you will need to make in order to traverse to the next page. Make sure you pay attention to any cookies getting set.
I've had really good success using Mechanize for scraping. It wraps all of the HTTP communication, HTML parsing and searching (using Nokogiri), redirection, and holding onto cookies. But it doesn't know how to execute JavaScript, which is why you will need to figure out what HTTP request to perform on your own.