Website script that will click on a button every x seconds - ruby

Is there an easy way of automating clicking on a certain button on a website?
I'm trying to help someone in my company out. The guy basically stares at a booking page the entire day, having to click on a single button to refresh the page. If new booking slots become available, they are only shown once the page refreshes, which means the guy needs to click on the button the entire day...
Is there a way of automating this, maybe with cURL or Ruby?

If you just refresh the browser to make it reload, then you could try a reload plugin like https://addons.mozilla.org/en-US/firefox/addon/115/. If you actually have to click the button, then you could put together a very simple Greasemonkey (https://addons.mozilla.org/en-US/firefox/addon/748/) script which just simulates a mouse click. With jQuery it would be something like
$("#buttonid").click()
triggered by any one of a hundred timer plugins or setTimeout.
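For completeness, a minimal userscript along those lines could look like this; the @include URL, the button id buttonid, and the 30-second interval are all placeholders you'd adapt to the real booking page:

// ==UserScript==
// @name     Auto-click booking refresh
// @include  http://example.com/booking*
// ==/UserScript==

// Simulate a click on the refresh button every 30 seconds.
setInterval(function () {
    var button = document.getElementById("buttonid");
    if (button) {
        button.click();
    }
}, 30000);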

First of all, I think I have understood your question correctly: what you are trying to do here is periodically refresh an external web page (say www.google.com).
If so, do something like this:
Create an HTML page, test.html, and create an iframe inside it, giving it the URL of the web page that you want to refresh:
<iframe src="http://www.google.com">
Then, in the head section, add
<META HTTP-EQUIV="REFRESH" CONTENT="5">
to refresh the page.
Ex:
<html>
<head>
<META HTTP-EQUIV="REFRESH" CONTENT="5">
</head>
<body>
<iframe src="http://www.google.com"></iframe>
</body>
</html>
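If you'd rather reload just the iframe instead of the whole wrapper page, a small script can reset the iframe's src on a timer instead of using the META tag. A minimal sketch, assuming the iframe is given id="target":

<script type="text/javascript">
// Reload only the iframe every 5 seconds by resetting its src.
setInterval(function () {
    var frame = document.getElementById("target");
    frame.src = frame.src;
}, 5000);
</script>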
Hope you find this helpful (as I said, if I understood the question correctly).
Cheers,
sameera

Related

Generate Pinterest Share Button That Specifies URL

I am trying to create a "pinterest share" button, but am running into a snag.
Currently, I have the pinterest button (generated from their Widget Builder) appearing in a Lightbox. (For certain reasons, it must appear this way.)
The issue is the Lightbox code has direct linking on it, so the URL for the lightbox window is something like www.domain.com/#/social/4.
Pinterest is picking up that URL (which has no images since it's just the lightbox) instead of the URL for the main page (www.domain.com).
Does anyone know how I can specify the exact URL to share via the pinterest button?
I have read some posts that said doing this would work:
<img src="//assets.pinterest.com/images/PinExt.png" alt="Pin it" /> <script type="text/javascript" src="http://assets.pinterest.com/js/pinit.js"></script>
However, specifying the URL does not seem to work at all. It appears to be totally ignored and has no impact.
Any ideas?
Thanks in advance!
You can use a standard link and specify all the data in the parameters:
<a href="http://www.pinterest.com/pin/create/button/
?url=http%3A%2F%2Fwww.flickr.com%2Fphotos%2Fkentbrew%2F6851755809%2F
&media=http%3A%2F%2Ffarm8.staticflickr.com%2F7027%2F6851755809_df5b2051c9_z.jpg
&description=Next%20stop%3A%20Pinterest"
data-pin-do="buttonPin"
data-pin-config="above">
<img src="//assets.pinterest.com/images/pidgets/pin_it_button.png" />
</a>
Source: http://developers.pinterest.com/pin_it/
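If you assemble that href in JavaScript, make sure each value is percent-encoded. A quick sketch using the built-in encodeURIComponent (the URLs, the description, and the pin-link element id are placeholders, not values from the question):

// Build the Pin It href with properly encoded parameters.
var pageUrl = "http://www.domain.com/";
var imageUrl = "http://www.domain.com/images/photo.jpg";
var description = "Next stop: Pinterest";

var href = "http://www.pinterest.com/pin/create/button/" +
    "?url=" + encodeURIComponent(pageUrl) +
    "&media=" + encodeURIComponent(imageUrl) +
    "&description=" + encodeURIComponent(description);

// "pin-link" is a placeholder id for the <a> element shown above.
document.getElementById("pin-link").href = href;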
You can try using structured meta data and Rich Pins.

Can I make my ajax website 'crawlable'?

I'm currently building a music-based website and I want to build something like this template. It uses ajax and deep linking. (And it makes use of the History.js library - please notice how there's no '#' in the URLs.)
The reason I want to use these 'ajaxy' methods (or maybe use the template altogether) is so that when music is playing, it will remain un-interrupted as the user navigates the site.
My worry is that my site won't be crawlable by Google, but I think I can modify code in the page source to fix that. If I look at the source code of the template, in the head I see
<meta name="description" content="">
<meta name="author" content="">
<meta name="keywords" content="">
Now if I add this to the head:
<meta name="fragment" content="!">
will that make the site crawlable? Is there other code I need to add on top of this? Or is it just not possible for this template?
I'm following this guide https://developers.google.com/webmasters/ajax-crawling/docs/getting-started, and I'm on step 3. I will of course have to complete the other steps, but I don't know if I'm heading in the right direction or towards a dead end!
Any help would be very much appreciated. Many thanks in advance.
From what you said, it sounds like your site updates the address bar with clean URLs as you navigate via ajax. That's good. The next thing you want to do is make sure those URLs work: if you go directly to a URL, do you see the specific content it represents? And would a crawler also see the correct content without running JavaScript? Progressive enhancement works well for that. The final thing you want to do is make sure bots can pick up those URLs.
I've not played with the fragment meta tag, but it looks like it is only for the home page, and you still need to implement the escaped-fragment page. Maybe it does support other pages, but the article does not cover that.
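To make the escaped-fragment part concrete: once the crawler sees <meta name="fragment" content="!">, it re-requests the page with an _escaped_fragment_ query parameter, and the server has to answer that request with an HTML snapshot. A minimal sketch of the idea, assuming a Node.js/Express server and a hypothetical renderSnapshot helper:

// Sketch only: answer Google's _escaped_fragment_ requests with a snapshot.
// renderSnapshot() is a hypothetical helper returning fully rendered HTML.
var express = require("express");
var app = express();

app.get("/*", function (req, res, next) {
    if (req.query._escaped_fragment_ !== undefined) {
        res.send(renderSnapshot(req.path)); // static HTML for the crawler
    } else {
        next(); // normal ajax-driven page for users
    }
});

app.listen(3000);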

Facebook button for Ajax Pages, how to implement and verify that it works

I wanted to know how I can use the Facebook Like button on my Ajax web application so that it will capture changes in the Open Graph tags for both og:title and og:url. I already created a Facebook app and got an App ID.
What I want to know is the code that I need to put on my website in order for Facebook to capture the changes that I've made to the meta tags which contain the title and url information (i.e. og:title, og:url).
I followed the instructions on Facebook without success. Furthermore, I want to know how I can locally test the Like button to see that it grabs the data from the Open Graph tags properly.
Also worth mentioning that I have jQuery code that automatically alters the Open Graph meta tags to include the relevant information for the current ajax-changed page.
Thanks.
You will need to have a separate url for each different page that you want to allow people to like. I would recommend actually pointing the like button to the physical pages you're trying to return via the og:url tag. To refresh the data that Facebook stores about a given url, pass that url into the linter at http://developers.facebook.com/tools/lint.
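One pattern that follows from that: render the Like button with an explicit data-href for each page, and after every ajax navigation ask the Facebook JS SDK to re-render it. A rough sketch, assuming the SDK is already loaded, with a placeholder #like-container element:

// Re-point the Like button at the current page's canonical url
// after each ajax navigation. "like-container" is a placeholder id.
function updateLikeButton(canonicalUrl) {
    var container = document.getElementById("like-container");
    container.innerHTML = '<div class="fb-like" data-href="' +
        canonicalUrl + '" data-send="false"></div>';
    FB.XFBML.parse(container); // tell the SDK to re-scan this element
}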
I created a rotator file for Facebook share on my dynamic ajax website.
rotator.asp code sample:
<html>
<% lang=request("lang")
id=request("id")
'..some sql to get data...
ogTitle=....
ogImage=....
originalUrl=....
%>
<head>
<meta property="og:title" content="<%=ogTitle%>" />
<meta property="og:image" content="<%=ogImage%>" />
.....
......
<meta http-equiv="refresh" content="0; url=<%=originalUrl%>" />
<!-- don't use a redirect: Facebook doesn't allow a 302 -->
</head>
<body></body>
</html>
For example, the page xxx.com/#!/en/153 will share xxx.com/rotator.asp?lang=en&id=153.

Ajax update in a SharePoint 2010 Webpart

Is there any easy way to make an Ajax call in a web part?
I have a web part with a button, and I want that when the user pushes the button, it executes a server function without reloading the page, and then, if all is OK, executes a callback function. I thought that the best way is with an AJAX call.
But when I've looked for how to do it, I've only found complicated tutorials that I don't really understand (and most are for old versions of SharePoint). Any help? What is the best way to start? Thanks
Have you tried using jQuery with a Content Editor Web Part? I've done this before and it's rather easy. Here is a step-by-step for how I do it.
Download jQuery.
Upload jQuery to SiteAssets in SharePoint.
Upload the coded file (see below) with the AJAX calls.
Point to the coded file via the Content Editor Web Part.
It should work!
Here is a basic example of how it should work.
<html>
<head>
<script src="<point to jquery file>"></script>
<script type="text/javascript">
// When the DOM is ready, load the server page's output into #main.
$(document).ready(function () {
    $('#main').load('<RELATIVE URL TO SERVER PAGE>');
});
</script>
</head>
<body>
<div id="main"></div>
</body>
</html>
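If the button should call back into SharePoint itself rather than load a static page, SharePoint 2010's REST endpoint at /_vti_bin/listdata.svc also works with jQuery. A hedged sketch, where the "Bookings" list name and the element ids are made up for illustration:

<script type="text/javascript">
// On click, fetch list items over REST without reloading the page.
$(document).ready(function () {
    $("#refreshButton").click(function () {
        $.ajax({
            url: "/_vti_bin/listdata.svc/Bookings",
            dataType: "json",
            success: function (data) {
                // callback runs only if the request succeeded
                $("#main").text(data.d.results.length + " items loaded");
            }
        });
    });
});
</script>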
If possible I would go for a Visual Web Part in Visual Studio. You basically create a .NET user control, which saves you a lot of manual control definitions etc.
If you are on SharePoint 2007, you might want to take a look at the "SmartPart". It has ajax support and some great tutorials on how to use it.

Ajax magic: How is Kotaku achieving Ajax *and* Google accessibility?

Kotaku has launched a new design without hashbangs. Their site still clearly uses ajax requests, but somehow it is still found through Google and the content shows up in the page source. How do they do it? Their text seems to be contained inside a <script type="text/javascript"> tag, but I don't understand what effect that has, or why they would do that.
(Of course, the first page request may just trigger a static, server-side constructed response. But check other articles: the site does load JSON through an ajax request, with no page refresh.)
Have a look at this site for example:
http://kotaku.com/5800326/read-some-of-new-tomb-raider-game-right-now
No hashes, a very well-formed URL, and it appears in Google. I have read the Google Ajax guide, and as far as I understand it, Google only requests an HTML snapshot if you use #! inside your URL.
For your convenience, I have made a screenshot that shows how the text looks inside the Chrome debugger: (what does "ganjaAjaxContent" mean?)
If you search for this article, it is the first match in Google.
Being able to do ajax without having to worry about Google search would be excellent.
Kotaku and the other Gawker sites are doing a number of things for SEO:
Submitting XML sitemaps for all of their content
http://kotaku.com/sitemap_today.xml
http://kotaku.com/sitemap.xml
Correct use of title and description tags for Google and Facebook
<title>Read Some of New Tomb Raider Game Right Now</title>
<meta name="fragment" content="!">
<meta name="title" content="Read Some of New Tomb Raider Game Right Now" />
<meta name="description" content="Upcoming Tomb Raider reboot doesn't have a release date yet, but website Siliconera apparently has the game's script and published what's reportedly an excerpt from it. Check it out. [Siliconera]" />
<meta property="og:title" content="Read Some of New Tomb Raider Game Right Now" />
<meta property="og:description" content="Upcoming Tomb Raider reboot doesn't have a release date yet, but website Siliconera apparently has the game's script and published what's reportedly an excerpt from it." />
Displaying HTML post content when Javascript is turned off (inspect the <div class="post-body quick-post"></div> element)
So you're right: Google's first visit loads the semantic, accessible, server-side constructed page. While Google can crawl hashbang pages, it doesn't need to, because all of the pages are indexed via the sitemap.xml.
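The address-bar half of that trick is the HTML5 History API: load the new content over ajax, then push the real URL. A bare-bones sketch of the pattern (the #content id and the use of jQuery are illustrative, not Kotaku's actual code):

// Load a fragment over ajax, then put the real url in the address bar.
function loadContent(url) {
    $.get(url, function (html) {
        $("#content").html(html);
    });
}

function navigate(url) {
    loadContent(url);
    history.pushState({ url: url }, "", url);
}

// Handle back/forward by re-loading the stored url.
window.onpopstate = function (event) {
    if (event.state) {
        loadContent(event.state.url);
    }
};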
Hope this answers all of your questions.
P.S. Having said all this, hashbangs are still bad for the web:
http://www.tbray.org/ongoing/When/201x/2011/02/09/Hash-Blecch
http://isolani.co.uk/blog/javascript/BreakingTheWebWithHashBangs
http://blog.benward.me/post/3231388630
