Dashing - Twitter search term not updating - Ruby

I am new to Dashing and have managed to work a lot out using the internet. However, I am now at a loss as to why my widget doesn't update to the new search_term when I change it in the twitter.rb file.
I am using the default twitter.rb file with a couple of amendments. Firstly, I have included my tokens and authorisation keys from twitter.com, and secondly, I have added an extra line to print more information when something fails in the Twitter::Error rescue.
This is my current code (minus the keys & tokens):
search_term = URI::encode('#weather')

SCHEDULER.every '2m', :first_in => 0 do |job|
  begin
    tweets = twitter.search("#{search_term}")
    if tweets
      tweets = tweets.map do |tweet|
        { name: tweet.user.name, body: tweet.text, avatar: tweet.user.profile_image_url_https }
      end
      send_event('twitter_mentions', comments: tweets)
    end
  rescue Twitter::Error => e
    puts "Twitter Error: #{e}"
    puts "\e[33mFor the twitter widget to work, you need to put in your twitter API keys in the jobs/twitter.rb file.\e[0m"
  end
end
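(For reference, the part of the file I left out is just the standard client setup at the top of jobs/twitter.rb; it looks roughly like this, with placeholders in place of my real keys and tokens:)
require 'twitter'

twitter = Twitter::REST::Client.new do |config|
  config.consumer_key = 'YOUR_CONSUMER_KEY'
  config.consumer_secret = 'YOUR_CONSUMER_SECRET'
  config.access_token = 'YOUR_ACCESS_TOKEN'
  config.access_token_secret = 'YOUR_ACCESS_TOKEN_SECRET'
end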
I have restarted Dashing; I have even rebooted the box it is on, but all to no avail. I am at a total loss.
Any help would be greatly appreciated.

Related

Customize Instagram widget on a Dashing.io dashboard

I have set up a dashboard using dashing with a number of (mostly) existing widgets. That worked so far - see production dashboard here (work in progress).
Now I would like to have an Instagram widget that displays the n latest images taken by a given username.
I have found a widget that displays images by longitude and latitude, and I was also able to get my tokens configured, so I can talk to the Instagram API.
Here's the code of my current widget, originally from @mjamieson's gist on GitHub:
require 'instagram'
require 'rest-client'
require 'json'

# Instagram Client ID from http://instagram.com/developer
Instagram.configure do |config|
  config.client_id = ENV['INSTAGRAM_CLIENT_ID']
  config.client_secret = ENV['INSTAGRAM_CLIENT_SECRET']
end

# Latitude, Longitude for location
instadash_location_lat = '45.429522'
instadash_location_long = '-75.689613'

SCHEDULER.every '10m', :first_in => 0 do |job|
  photos = Instagram.media_search(instadash_location_lat, instadash_location_long)
  if photos
    photos.map do |photo|
      { photo: "#{photo.images.low_resolution.url}" }
    end
  end
  send_event('instadash', photos: photos)
end
I got this to work, but would like to modify the given API call to only display images taken by me / a user of my choice. Unfortunately, I don't understand Ruby or JSON well enough to figure out what the Instagram API documentation wants me to do.
I found the following URL:
https://api.instagram.com/v1/users/{user-id}/media/recent/?access_token={acces-token}
and tried it (with my credentials filled in). It correctly returned JSON data, including my images (among other data).
How can I modify the given code to display images by username instead of location?
Any help is greatly appreciated.
You'll need an access_token to get content from a specific user. Take a look at the sample application on the gem's page.
It seems you need something like this:
# here we take access token from session, assuming you already got it
# sometime before and stored it there for future use
client = Instagram.client(:access_token => session[:access_token])
photos = client.user_recent_media
And here is an example of how to get this access_token using OAuth2 browser authorization and a Sinatra app:
require "sinatra"
require "instagram"
enable :sessions
CALLBACK_URL = "http://localhost:4567/oauth/callback"
Instagram.configure do |config|
config.client_id = "YOUR_CLIENT_ID"
config.client_secret = "YOUR_CLIENT_SECRET"
# For secured endpoints only
#config.client_ips = '<Comma separated list of IPs>'
end
get "/" do
'Connect with Instagram'
end
get "/oauth/connect" do
redirect Instagram.authorize_url(:redirect_uri => CALLBACK_URL)
end
get "/oauth/callback" do
response = Instagram.get_access_token(params[:code], :redirect_uri => CALLBACK_URL)
session[:access_token] = response.access_token
redirect "/nav"
end
Solution
require 'sinatra'
require 'instagram'

# Instagram Client ID from http://instagram.com/developer
Instagram.configure do |config|
  config.client_id = ENV['INSTAGRAM_CLIENT_ID']
  config.client_secret = ENV['INSTAGRAM_CLIENT_SECRET']
  config.access_token = ENV['INSTAGRAM_ACCESS_TOKEN']
end

user_id = ENV['INSTAGRAM_USER_ID']

SCHEDULER.every '2m', :first_in => 0 do |job|
  photos = Instagram.user_recent_media("#{user_id}")
  if photos
    photos.map! do |photo|
      { photo: "#{photo.images.low_resolution.url}" }
    end
  end
  send_event('instadash', photos: photos)
end
Explanation
1.) In addition to the client_id and client_secret I had defined before, I just needed to add my access_token to the Instagram.configure section.
2.) The SCHEDULER was working correctly, but needed to call Instagram.user_recent_media("#{user_id}") instead of Instagram.media_search(instadash_location_lat, instadash_location_long).
3.) To do that, I had to set a second, previously missing variable for user_id.
Now the call gets recent media filtered by user ID and outputs it into the Dashing widget.
Thanks for the participation and hints! They pointed me in the right direction in the documentation and helped me figure it out myself.

Google AdWords API error: invalid_grant

I'm receiving this error trying to authenticate with the AdWords API using a service account and JWT with the Ruby API library.
I am copying the example provided, but it just doesn't seem to work.
/home/michael/.rvm/gems/ruby-2.1.2/gems/signet-0.5.1/lib/signet/oauth_2/client.rb:941:in `fetch_access_token': Authorization failed. Server message: (Signet::AuthorizationError)
{
"error" : "invalid_grant"
}
adwords_api.yml
---
# This is an example configuration file for the AdWords API client library.
# Please fill in the required fields, and copy it over to your home directory.
:authentication:
  # Authentication method, methods currently supported: OAUTH2, OAUTH2_JWT.
  :method: OAUTH2_JWT
  # Auth parameters for OAUTH2_JWT method. See:
  # https://developers.google.com/accounts/docs/OAuth2ServiceAccount
  :oauth2_issuer: 43242...apps.googleusercontent.com
  :oauth2_secret: 'notasecret'
  # You can provide path to a file with 'oauth2_keyfile' or the key itself with
  # 'oauth2_key' option.
  :oauth2_keyfile: /home/.../google-api-key.p12
  # To impersonate a user set prn to an email address.
  :oauth2_prn: my@email.com
  # Other parameters.
  :developer_token: ua...w
  :client_customer_id: 123-123-1234
  :user_agent: test-agent
:service:
  # Only production environment is available now, see: http://goo.gl/Plu3o
  :environment: PRODUCTION
:connection:
  # Enable to request all responses to be compressed.
  :enable_gzip: false
  # If your proxy connection requires authentication, make sure to include it in
  # the URL, e.g.: http://user:password@proxy_hostname:8080
  # :proxy: INSERT_PROXY_HERE
:library:
  :log_level: INFO
test.rb
#!/usr/bin/env ruby

require 'adwords_api'

def use_oauth2_jwt()
  adwords = AdwordsApi::Api.new
  adwords.authorize()
  campaign_srv = adwords.service(:CampaignService, API_VERSION)
  selector = {
    :fields => ['Id', 'Name', 'Status'],
    :ordering => [
      {:field => 'Name', :sort_order => 'ASCENDING'}
    ]
  }
  response = campaign_srv.get(selector)
  if response and response[:entries]
    campaigns = response[:entries]
    campaigns.each do |campaign|
      puts "Campaign ID %d, name '%s' and status '%s'" %
          [campaign[:id], campaign[:name], campaign[:status]]
    end
  else
    puts 'No campaigns were found.'
  end
end

if __FILE__ == $0
  API_VERSION = :v201409
  begin
    use_oauth2_jwt()
  # HTTP errors.
  rescue AdsCommon::Errors::HttpError => e
    puts "HTTP Error: %s" % e
  # API errors.
  rescue AdwordsApi::Errors::ApiException => e
    puts "Message: %s" % e.message
    puts 'Errors:'
    e.errors.each_with_index do |error, index|
      puts "\tError [%d]:" % (index + 1)
      error.each do |field, value|
        puts "\t\t%s: %s" % [field, value]
      end
    end
  end
end
This is going to be difficult to answer definitively, as it's authorisation-based, so the error message is a glorified "not authorised" message.
All I can really do is suggest a few things to check (acknowledging you've probably gone through these already):
Your developer token is definitely showing as 'Approved'? (You can check this in the client centre - through the settings cog, then Account, then AdWords API Centre.)
You have registered an application through the Google Developer Console.
You (or the owner of the account you are trying to access) have authorised your application - probably by following this guide and definitely seeing one of these things at some point.
If you have checked all of these, then the only other thing I can suggest is a post to the official forum, where they tend to be helpful and often take authorisation issues 'offline' to have a look at the actual SOAP requests etc. (I have found this much quicker and easier than trying to wade through the levels of AdWords 'support'.)
Good luck!
After several more hours of fiddling, I finally got it working by setting the oauth2_prn to the primary email on the MCC and Google Apps for Business account.
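For anyone hitting the same error: the only change needed was to that one line in my adwords_api.yml (the address below is a placeholder for the actual primary email of the MCC / Google Apps for Business account):
:authentication:
  # To impersonate a user set prn to an email address.
  :oauth2_prn: admin@yourcompany.com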

Having problems with Ruby file from Dashing

I am having trouble with twitter_user.rb, which is supposed to get the number of tweets, followers, and following of a given Twitter username.
I assume that I am supposed to replace TWITTER_USERNAME in line 9 with the Twitter username that I am interested in. I did that and started Dashing, but I got:
scheduler caught exception:
undefined method '[]' for nil:NilClass
/.../jobs/twitter_user.rb:19:in 'block in <top (required)>'
It looks like the problem is with line 19, which is:
tweets = /profile["']>[\n\t\s]*<strong>([\d.,]+)/.match(response.body)[1].delete('.,').to_i
Can anybody tell me what is going on and how to fix it?
Your assumption is incorrect. The program is looking for an environment variable called TWITTER_USERNAME that is set to the relevant user name. If that variable doesn't exist, the code uses foobugs instead.
If you would rather modify the code than set up an environment variable, then change
twitter_username = ENV['TWITTER_USERNAME'] || 'foobugs'
to
twitter_username = 'myusername'
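Alternatively, you can keep the ENV lookup as it is and simply make sure the variable is set before Dashing starts. As a rough sketch (the placement is up to you, for example near the top of the job file), you could default it in Ruby like this:
ENV['TWITTER_USERNAME'] ||= 'myusername'   # only takes effect if the variable is not already set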
This is untested code, but it gives the general idea of how it should have been written. If you clone the source from the original page, you can adjust it for your own purposes (i.e. fix it):
require 'nokogiri'
doc = Nokogiri::XML(content)
tweets = doc.at('profile strong').text.delete('.,').to_i
following = doc.at('following strong').text.delete('.,').to_i
followers = doc.at('followers strong').text.delete('.,').to_i
The above three lines can be reduced to something like:
tweets, following, followers = %w[profile following followers].map { |tag|
  doc.at("#{ tag } strong").text.delete(',.').to_i
}
Again, without a usable sample of the XML/HTML I can't do much more, but as a practice we (programmers) shouldn't use regular expressions to try to parse XML or HTML. It's much too easy to break a pattern with either of those types of files.
I managed to solve the same issue for myself by using the Twitter API instead to pull out the relevant information. It seems the web page had changed too much for the scraping to work, and it could also stop working again without notice, as various people have already said...
This is the solution I used:
#### Get your twitter keys & secrets:
#### https://dev.twitter.com/docs/auth/tokens-devtwittercom
Twitter.configure do |config|
  config.consumer_key = 'YOUR_CONSUMER_KEY'
  config.consumer_secret = 'YOUR_CONSUMER_SECRET'
  config.oauth_token = 'YOUR_OAUTH_TOKEN'
  config.oauth_token_secret = 'YOUR_OAUTH_SECRET'
end

twitter_username = 'foobugs'

MAX_USER_ATTEMPTS = 10
user_attempts = 0

SCHEDULER.every '10m', :first_in => 0 do |job|
  begin
    tw_user = Twitter.user("#{twitter_username}")
    if tw_user
      tweets = tw_user.statuses_count
      followers = tw_user.followers_count
      following = tw_user.friends_count
      send_event('twitter_user_tweets', current: tweets)
      send_event('twitter_user_followers', current: followers)
      send_event('twitter_user_following', current: following)
    end
  rescue Twitter::Error => e
    user_attempts = user_attempts + 1
    puts "Twitter error #{e}"
    puts "\e[33mFor the twitter_user widget to work, you need to put in your twitter API keys in the jobs/twitter_user.rb file.\e[0m"
    sleep 5
    retry if user_attempts < MAX_USER_ATTEMPTS
  end
end
I resolved it by substituting this line:
followers = /<strong>([\d.]+)<\/strong> Follower/.match(response.body)[0].delete('.,').to_i
with these two:
followers_count_metadata = /followers_count":[\d]+/.match(response.body)
followers = /[\d]+/.match(followers_count_metadata.to_s).to_s

Element not found in the cache - perhaps the page has changed since it was looked up in Selenium Ruby web driver?

I am trying to write a crawler that crawls all links from a loaded page and logs all request and response headers, along with the response body, in some file, say XML or txt. I am opening all links from the first loaded page in a new browser window so I won't get this error:
Element not found in the cache - perhaps the page has changed since it was looked up
I want to know what an alternative way would be to make requests and receive responses from all links, and then locate input elements and submit buttons in all opened windows.
I am able to do the above to some extent, except when the opened window has a common site search box, like the one on http://www.testfire.net in the upper right corner.
What I want to do is omit such common boxes so that I can fill the other inputs with values using WebDriver's i.send_keys "value" method and not get this error:
ERROR: Element not found in the cache - perhaps the page has changed since it was looked up.
What is the way to detect and distinguish input tags in each opened window so that the value does not get filled repeatedly in common input tags that appear on most pages of the website?
My code is the following:
require 'rubygems'
require 'selenium-webdriver'
require 'timeout'

class Clicker
  def open_new_window(url)
    @driver = Selenium::WebDriver.for :firefox
    @url = @driver.get " http://test.acunetix.com "
    @link = Array.new(@driver.find_elements(:tag_name, "a"))
    @windows = Array.new(@driver.window_handles())
    @link.each do |a|
      a = @driver.execute_script("var d=document,a=d.createElement('a');a.target='_blank';a.href=arguments[0];a.innerHTML='.';d.body.appendChild(a);return a", a)
      a.click
    end
    i = @driver.window_handles
    i[0..i.length].each do |handle|
      @driver.switch_to().window(handle)
      puts @driver.current_url()
      inputs = Array.new(@driver.find_elements(:tag_name, 'input'))
      forms = Array.new(@driver.find_elements(:tag_name, 'form'))
      inputs.each do |i|
        begin
          i.send_keys "value"
          puts i.class
          i.submit
        rescue Timeout::Error => exc
          puts "ERROR: #{exc.message}"
        rescue Errno::ETIMEDOUT => exc
          puts "ERROR: #{exc.message}"
        rescue Exception => exc
          puts "ERROR: #{exc.message}"
        end
      end
      forms.each do |j|
        begin
          j.send_keys "value"
          j.submit
        rescue Timeout::Error => exc
          puts "ERROR: #{exc.message}"
        rescue Errno::ETIMEDOUT => exc
          puts "ERROR: #{exc.message}"
        rescue Exception => exc
          puts "ERROR: #{exc.message}"
        end
      end
    end
    # Switch back to the original window
    @driver.switch_to().window(i[0])
  end
end

ol = Clicker.new
url = ""
ol.open_new_window(url)
Please guide me: how can I get all request and response headers with the response body, using Selenium WebDriver or using http.set_debug_output from Ruby's net/http?
Selenium is not one of the best options to use to attempt to build a "web-crawler". It can be too flaky at times, especially when it comes across unexpected scenarios. Selenium WebDriver is a great tool for automating and testing expectations and user interactions.
Instead, good old-fashioned curl would probably be a better option for web-crawling. Also, I am pretty sure there are some Ruby gems that might help you web-crawl - just do a Google search!
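As a rough illustration of the non-Selenium route, Ruby's standard net/http can dump the raw request and response (headers plus body) via set_debug_output, which the question already mentions; the URL below is just the example site from the question:
require 'net/http'

uri = URI('http://www.testfire.net/')
http = Net::HTTP.new(uri.host, uri.port)
http.set_debug_output($stdout)        # logs the full wire exchange to stdout
response = http.get(uri.request_uri)  # make the request; headers/body appear in the debug output
puts response.code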
But to answer the actual question, if you were to use Selenium WebDriver:
I'd work out a filtering algorithm where you can add the HTML of an element that you interact with to a variable array. Then, when you go on to the next window/tab/link, it would check against the variable array and skip the element if it finds a matching HTML value.
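A minimal sketch of that filtering idea, assuming @driver is already set up (outerHTML is readable through attribute on most drivers):
seen_html = []   # markup of every element we have already interacted with

@driver.find_elements(:tag_name, 'input').each do |input|
  html = input.attribute('outerHTML')   # use the element's markup as its identity
  next if seen_html.include?(html)      # skip inputs already seen in earlier windows
  seen_html << html
  input.send_keys 'value'
end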
Unfortunately, SWD does not support getting request headers and responses with its API. The common work-around is to use a third-party proxy to intercept the requests.
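For example, with an intercepting proxy listening on localhost:8080 (a hypothetical address), the driver could be pointed at it like this, and the proxy - not Selenium - would record every request and response:
proxy = Selenium::WebDriver::Proxy.new(:http => 'localhost:8080', :ssl => 'localhost:8080')
profile = Selenium::WebDriver::Firefox::Profile.new
profile.proxy = proxy
@driver = Selenium::WebDriver.for :firefox, :profile => profile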
============
Now I'd like to address a few issues with your code.
I'd suggest that before iterating over the links, you add @default_current_window = @driver.window_handle. This will allow you to always return to the correct window at the end of your script when you call @driver.switch_to.window(@default_current_window).
In your @link iterator, instead of iterating over all the possible windows that could be displayed, use @driver.switch_to.window(@driver.window_handles.last). This will switch to the most recently displayed new window (and it only needs to happen once per link click!), as sketched below.
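Put together, the window handling would look roughly like this (a sketch reusing the variable names from your code):
@default_current_window = @driver.window_handle   # remember the starting window

@link.each do |a|
  a.click
  @driver.switch_to.window(@driver.window_handles.last)   # jump to the newest window
  # ... fill inputs / submit forms here ...
end

@driver.switch_to.window(@default_current_window)   # return to where we started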
You can DRY up your inputs and form code by doing something like this:
inputs = []
inputs << @driver.find_elements(:tag_name => "input")
inputs << @driver.find_elements(:tag_name => "form")
inputs.flatten!

inputs.each do |i|
  begin
    i.send_keys "value"
    i.submit
  rescue => e
    puts "ERROR: #{e.message}"
  end
end
Please note how I just added all of the elements you wanted SWD to find into a single array variable that you iterate over. Then, when something bad happens, a single rescue is needed (I assume you don't want to automatically quit from there, which is why you just want to print the message to the screen).
Learning to DRY up your code and use external gems will help you achieve a lot of what you are trying to do, and at a faster pace.

Using Open-URI to fetch XML and the best practice in case of problems with a remote URL not returning/timing out?

Current code works as long as there is no remote error:
def get_name_from_remote_url
  cstr = "http://someurl.com"
  getresult = open(cstr, "UserAgent" => "Ruby-OpenURI").read
  doc = Nokogiri::XML(getresult)
  my_data = doc.xpath("/session/name").text
  # => 'Fred' or 'Sam' etc
  return my_data
end
But what if the remote URL times out or returns nothing? How do I detect that and return nil, for example?
And does Open-URI give a way to define how long to wait before giving up? This method is called while a user is waiting for a response, so how do we set a maximum timeout before we give up and tell the user "sorry, the remote server we tried to access is not available right now"?
Open-URI is convenient, but that ease of use means it hides access to a lot of the configuration details that other HTTP clients like Net::HTTP allow.
It depends on what version of Ruby you're using. For 1.8.7 you can use the Timeout module. From the docs:
require 'timeout'
require 'open-uri'

getresult = nil   # define outside the block so it is still visible after the timeout block
begin
  status = Timeout::timeout(5) {
    getresult = open(cstr, "UserAgent" => "Ruby-OpenURI").read
  }
rescue Timeout::Error => e
  puts e.to_s
end
Then check the length of getresult to see if you got any content:
if getresult.nil? || getresult.empty?
  puts "got nothing from url"
end
If you are using Ruby 1.9.2 you can add a :read_timeout => 10 option to the open() method.
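For example (a sketch using the same variables as above):
getresult = open(cstr, 'UserAgent' => 'Ruby-OpenURI', :read_timeout => 10).read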
Also, your code could be tightened up and made a bit more flexible. This will let you pass in a URL or default to the currently used one. Also, read Nokogiri's NodeSet docs to understand the difference between xpath, /, css and at, %, at_css, at_xpath:
def get_name_from_remote_url(cstr = 'http://someurl.com')
  doc = Nokogiri::XML(open(cstr, 'UserAgent' => 'Ruby-OpenURI'))
  # xpath returns a NodeSet which has to be iterated over:
  # my_data = doc.xpath('/session/name').text # => 'Fred' or 'Sam' etc
  # at returns a single node:
  doc.at('/session/name').text
end
