I am trying to batch-upload images to Redmine and link each of them to a certain wiki page.
The docs (Rest_api, Using the REST API with Ruby) mention some aspects, but the examples fail in various ways. I also tried to derive ideas from the source code, without success.
Can anyone provide a short example that shows how to upload and link an image from within Ruby?
This is a bit tricky as both the attachments and wiki APIs are relatively new, but I have done something similar in the past. Here is a minimal working example using rest-client:
require 'rest_client'
require 'json'

key        = '5daf2e447336bad7ed3993a6ebde8310ffa263bf'
upload_url = "http://localhost:3000/uploads.json?key=#{key}"
wiki_url   = "http://localhost:3000/projects/some_project/wiki/some_wiki.json?key=#{key}"

img = File.new('/some/image.png', 'rb') # open in binary mode

# First we upload the image to get an attachment token
response = RestClient.post(upload_url, img,
                           :multipart    => true,
                           :content_type => 'application/octet-stream')
token = JSON.parse(response)['upload']['token']

# Redmine will throw validation errors if you do not send
# wiki content when attaching the image, so we just get
# the current content and send it back unchanged.
wiki_text = JSON.parse(RestClient.get(wiki_url))['wiki_page']['text']

response = RestClient.put(wiki_url,
  :attachments => {
    :attachment1 => {            # the hash key gets thrown away - the name doesn't matter
      :token       => token,
      :filename    => 'image.png',
      :description => 'Awesome!' # optional
    }
  },
  :wiki_page => {
    :text => wiki_text           # original wiki text
  }
)
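Since you want to batch-upload, you can wrap those two calls in a loop. A rough sketch, assuming you keep your own mapping of wiki pages to local image paths (the uploads hash below is hypothetical):

# Hypothetical mapping of wiki page names to the images that belong on them
uploads = {
  'some_wiki'  => ['/some/image.png', '/some/other_image.png'],
  'other_wiki' => ['/some/third_image.png']
}

uploads.each do |page, paths|
  page_url  = "http://localhost:3000/projects/some_project/wiki/#{page}.json?key=#{key}"
  wiki_text = JSON.parse(RestClient.get(page_url))['wiki_page']['text']

  # Upload every image first and collect the attachment tokens
  attachments = {}
  paths.each_with_index do |path, i|
    response = RestClient.post(upload_url, File.new(path, 'rb'),
                               :multipart    => true,
                               :content_type => 'application/octet-stream')
    token = JSON.parse(response)['upload']['token']
    attachments["attachment#{i}"] = {
      :token    => token,
      :filename => File.basename(path)
    }
  end

  # Attach everything to the page in a single PUT, resending the current text
  RestClient.put(page_url,
                 :attachments => attachments,
                 :wiki_page   => { :text => wiki_text })
end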
I have a Ruby web app that sends email via Mailgun.
My Mailgun account & gem are properly set up and I can send emails manually (via curl, for instance).
The API key and the API base URL (https sandbox domain) are stored in environment variables.
When I attempt to send emails from the app like this:
def initialize(mailer: nil)
  @mailer = mailer || Mailgun::Client.new(ENV['MAILGUN_API_KEY'])
end
then:
def call(user)
  mailer.send_message(ENV['MAILGUN_SANDBOX'], { from:    '...',
                                                to:      user.email,
                                                subject: '...',
                                                text:    "..." })
end
When I run the app locally with Sinatra (localhost:xxxx), I get a Mailgun::CommunicationError at /.../..., 301 Moved Permanently: ... nginx, pointing to this line:
mailer.send_message(ENV['MAILGUN_SANDBOX'], ...
Any idea why that happens? I've researched the issue for hours but couldn't find a clue on what to do next.
Thanks!
I ran into this same issue. If you have already fixed this then hopefully this can help someone else.
I switched over to the message builder for ease of use and to be able to render my HTML, but I'm pretty sure it will still send with the :text format you have set up.
Switching over to the proper domain values in the .env file is what solved my issue. You need two different domain values to use Mailgun. The first is the full domain for your sandbox: ENV['MAILGUN_DOMAIN'] is the sandbox domain with the full https://api.mailgun.net/v3/sandboxXXXXxxxXXXXXX.mailgun.org prefix, used for most of the mail formats.
You also need the last half of that full domain to send messages. That's just sandboxXXXXxxxXXXXXX.mailgun.org, which is passed into the MessageBuilder (or other send_message) call. When I had them mixed up, or used the same value for both, I kept getting this error. Once I separated the two in my development.rb and some_mailer.rb, I could send mail without a problem.
Below is my file setup, for reference. I'm pretty new to all of this, but this is how I'm set up and it's working for me, so hopefully it helps.
# .env
MAILGUN_DOMAIN='https://api.mailgun.net/v3/sandboxXXXXxxxXXXXXX.mailgun.org'
MAILGUN_SEND_DOMAIN='sandboxXXXXxxxXXXXXX.mailgun.org'
# development.rb
ActionMailer::Base.smtp_settings = {
  :authentication => :plain,
  :address        => "smtp.mailgun.org",
  :port           => 587,
  :domain         => ENV['MAILGUN_DOMAIN'],
  :user_name      => ENV['MAILGUN_USERNAME'],
  :password       => ENV['MAILGUN_PASSWORD']
}
# some_mailer.rb
def some_mail_notification(user)
  @user = user
  mg_client = Mailgun::Client.new ENV['MAILGUN_KEY']
  mb_obj = Mailgun::MessageBuilder.new
  mb_obj.from "email@testing.com", { 'first' => 'Customer', 'last' => 'Support' }
  mb_obj.add_recipient :to, @user.email, { 'first' => @user.first_name, 'last' => @user.last_name }
  mb_obj.subject "Your Recent Purchase on Some Site"
  mb_obj.body_html "#{render 'some_mail_notification.html.erb'}"
  mg_client.send_message("sandboxXXXXxxxXXXXXX.mailgun.org", mb_obj)
end
I left the send_message call above with the literal sandbox domain, but you can set that as an environment variable in the .env file instead.
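For example, a small sketch reusing the MAILGUN_SEND_DOMAIN variable from the .env above:

# some_mailer.rb - same call as above, but reading the send domain from the environment
mg_client.send_message(ENV['MAILGUN_SEND_DOMAIN'], mb_obj)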
I am using the Fog gem to generate presigned URLs. I can do this successfully to get read access to the file. Here's what I do:
fog_s3 = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => key,
  :aws_secret_access_key => secret
})

object_path = 'foo.wav'
expiry = Date.new(2014,2,1).to_time.to_i
url = fog_s3.directories.new(:key => bucket).files.new(:key => object_path).url(expiry, path_style: true)
But this doesn't work when I try to upload the file. Is there a way to specify the HTTP verb so that it's a PUT and not a GET?
EDIT I see a method, put_object_url, which might help, but I don't know how to access it.
Thanks
EDIT based upon your suggestion:
It helped: it got me a PUT, not a GET. However, I'm still having issues. I added a content type:
headers = { "Content-Type" => "audio/wav" }
options = { path_style: true }
object_path = 'foo.wav'
expiry = Date.new(2014,2,1).to_time.to_i
url = fog_s3.put_object_url(bucket, object_path, expiry, headers, options)
but the URL does not contain Content-Type in it. When I generate the URL from JavaScript in the browser, it does include Content-Type and that seems to work. Is this an issue with Fog, or is my header incorrect?
I think put_object_url is indeed what you want. If you follow the url method back to where it is defined, you can see that it is built on a similar underlying method called get_object_url, here: https://github.com/fog/fog/blob/dc7c5e285a1a252031d3d1570cbf2289f7137ed0/lib/fog/aws/models/storage/files.rb#L83. You should be able to do something similar by calling put_object_url on the fog_s3 object you already created above. It should end up looking like this:
headers = {}
options = { path_style: true }
url = fog_s3.put_object_url(bucket, object_path, expires, headers, options)
Note that, unlike get_object_url, there is an extra headers argument snuck in there (which you can use to set things like Content-Type, I believe).
Hope that sorts it for you, but just let me know if you have further questions. Thanks!
Addendum
Hmm, it seems there may be a bug related to this after all (I'm wondering now how much this portion of the code has been exercised). I think you should be able to work around it, though I'm not certain. I suspect you can just duplicate the value as a query param in the options as well. Could you try something like this?
headers = query = { 'Content-Type' => 'audio/wav' }
options = { path_style: true, query: query }
url = fog_s3.put_object_url(bucket, object_path, expires, headers, options)
Hopefully that fills in the blanks for you (and if so we can think some more about fixing that behavior within fog if it makes sense to do so). Thanks!
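For completeness, here is a sketch of actually consuming the signed PUT URL from Ruby with Net::HTTP, assuming url came from the put_object_url call above and that the Content-Type sent here matches the one used when signing:

require 'net/http'
require 'uri'

uri = URI.parse(url)
request = Net::HTTP::Put.new(uri.request_uri)
request['Content-Type'] = 'audio/wav'   # must match the signed header
request.body = File.binread('foo.wav')

response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
  http.request(request)
end
puts response.code # 200 means the upload succeeded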
Instead of using put_object_url, might I suggest you try the bucket.files.create action, which takes a hash of Fog file attributes and returns a Fog::Storage::AWS::File.
I prefer to break it down into a few more steps; here is an example:
fog_s3 = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => key,
  :aws_secret_access_key => secret
})

# Define the filename
ext = :wav
filename = "foo.#{ext.to_s}"

# Path to your audio file
path = "/"

# Define your expiry as a number of seconds
expiry = 1.day.to_i

# Initialize the bucket to store to
fog_bucket = fog_s3.directories.get(bucket)

file = {
  :key           => "#{filename}",
  :body          => IO.read("#{path}#{filename}"),
  :content_type  => Mime::Type.lookup_by_extension(ext),
  :cache_control => "public, max-age=#{expiry}",
  :expires       => CGI.rfc1123_date(Time.now + expiry),
  :public        => true
}

# Returns a Fog::Storage::AWS::File
file = fog_bucket.files.create(file)

# Now retrieve the public_url
url = file.public_url
Note: for subdirectories, check out the :prefix option for an AWS bucket.
Fog File Documentation:
The optional attributes are listed at the bottom of the page :) http://rubydoc.info/gems/fog/Fog/Storage/AWS/File
Hopefully the example will help explain the steps in creating a fog file... Cheers! :)
I'm creating a custom strategy for the Nimble.com API. As they're using OAuth, it's pretty simple.
require 'omniauth-oauth2'

module OmniAuth
  module Strategies
    class Nimble < OmniAuth::Strategies::OAuth2
      option :name, "nimble"

      option :client_options, {
        :site          => "https://api.nimble.com",
        :authorize_url => '/oauth/authorize',
        :token_url     => '/oauth/token'
      }

      # option :access_token_options, {
      #   :mode       => :query,
      #   :param_name => :access_token
      # }

      option :provider_ignores_state, true

      uid { raw_info['email'] }

      info do
        {
          'uid'   => raw_info['email'],
          'name'  => raw_info['name'],
          'email' => raw_info['email']
        }
      end

      extra do
        { 'raw_info' => raw_info }
      end

      def raw_info
        access_token.options[:mode] = :query
        access_token.options[:param_name] = :access_token
        @raw_info ||= access_token.get('/api/users/myself/', { :parse => :json }).parsed
      end
    end
  end
end
For passing tokens, they need the access_token parameter in the URL. When I set the options directly in the raw_info method, as in the sample, it works fine.
When I try to specify these options in the access_token_options hash (like in the commented-out section), the parameters are not passed to the token. I'm not very good with Ruby, so I couldn't figure out from the library sources how to correctly pass parameters to access_token in OmniAuth OAuth2 descendants.
I'd like to do it the "right way", so that access_token is initialised with the correct options; please, someone, point me in the right direction.
Thank you!
I've explored several existing strategies (GitHub, Foursquare), and it looks like it's normal practice to modify the access token options directly.
So I'll stay with it :)
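That said, if you do want the options to live on the token itself, one pattern used by other strategies (omniauth-facebook does something like this) is to override build_access_token and merge the strategy's access_token_options into the freshly built token. A sketch, not tested against Nimble:

# Inside the Nimble strategy class

option :access_token_options, {
  :mode       => :query,
  :param_name => :access_token
}

protected

# Merge the configured options into the OAuth2::AccessToken right after
# it is built, so every request made through it appends ?access_token=...
def build_access_token
  super.tap do |token|
    token.options.merge!(access_token_options)
  end
end

def access_token_options
  # OmniAuth stores option keys as strings; the oauth2 gem expects symbols
  options.access_token_options.inject({}) { |h, (k, v)| h[k.to_sym] = v; h }
end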
I've been reading the docs for the Google Calendar API and the google-api-ruby-client library, but I'm having a lot of trouble understanding them.
I have a Rails application with a front end that lets users create objects called Events, which are saved in a database on my server. What I would like is this: after these Events are saved in the database, I want to call the Google Calendar API to create an event on a Google Calendar that the server created and that only the server can modify.
I'm having lots of issues figuring out how to authenticate with the API using the Ruby library. It doesn't make sense for me to use OAuth2, because I don't need to authorize anything with the user; I'm not interested in their data. I looked into Service Accounts (http://code.google.com/p/google-api-ruby-client/wiki/ServiceAccounts), but it looks like Google Calendar is not supported by Service Accounts.
Anyone have any ideas? This is the code I was experimenting with (using Service Accounts):
@client = Google::APIClient.new(:key => 'my_api_key')

path_to_key_file = '/somepath/aaaaaa-privatekey.p12'
passphrase = 'my_pass_phrase'
key = Google::APIClient::PKCS12.load_key(path_to_key_file, passphrase)

asserter = Google::APIClient::JWTAsserter.new(
  'blah_blah@developer.gserviceaccount.com',
  'https://www.googleapis.com/auth/calendar',
  key)

# To request an access token, call authorize:
@client.authorization = asserter.authorize()

calendar = @client.discovered_api('calendar', 'v3')

event = {
  'summary'  => 'Appointment',
  'location' => 'Somewhere',
  'start'    => {
    'dateTime' => '2012-06-03T10:00:00.000-07:00'
  },
  'end'      => {
    'dateTime' => '2012-06-03T10:25:00.000-07:00'
  },
  'attendees' => [
    {
      'email' => 'attendeeEmail'
    },
    # ...
  ]
}

result = @client.execute!(:api_method => calendar.events.insert,
                          :parameters => {'calendarId' => 'primary'},
                          :body       => JSON.dump(event),
                          :headers    => {'Content-Type' => 'application/json'})
Then of course I get this error message: Google::APIClient::ClientError (The user must be signed up for Google Calendar.), because the Service Account does not support Google Calendar.
I think you'll still need a real Google user to host the calendar instance. But once you've created the calendar under your identity, you can share it with the service account. In the sharing settings for the calendar, just use the email address of the service account (my service account's ends with @developer.gserviceaccount.com). With the right sharing permissions, your service account can create/alter the event info without messing with your personal identity. From there, you can share the calendar with more people (or make it public) so they can consume the mirrored events.
The other hitch I've run into is that it seems you can only authorize() the service account once per expiration period. You'll have to save the token you get and reuse it for the next hour, and then fetch a new one.
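For example, once the calendar is shared with the service account, a sketch of inserting an event (reusing the asserter and event hash from the question; the calendar ID below is a placeholder you copy from the calendar's settings page):

# Placeholder: the ID of the calendar you created and then shared with the
# service account (shown under "Calendar Address" in the calendar's settings).
shared_calendar_id = 'your_calendar_id@group.calendar.google.com'

client = Google::APIClient.new
client.authorization = asserter.authorize # reuse the JWTAsserter from the question
calendar = client.discovered_api('calendar', 'v3')

result = client.execute!(:api_method => calendar.events.insert,
                         :parameters => { 'calendarId' => shared_calendar_id }, # not 'primary'
                         :body       => JSON.dump(event),
                         :headers    => { 'Content-Type' => 'application/json' })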
I don't know anything about Ruby. But it seems like understanding the underlying REST queries would help debug your problem. I've documented them here: http://www.tqis.com/eloquency/googlecalendar.htm
I was having trouble with this too and finally got a handle on it. The bottom line is that Google Calendar API v3 requires OAuth, and you need to set up an app/project through the Google Developer Console and then request OAuth permission on the target Google account. Once authorization is granted, you'll want to save the refresh token and use it on subsequent calls to get new access tokens (which expire!). I wrote a detailed blog post about this here: http://www.geekytidbits.com/google-calendar-api-from-ruby/ and this is my example script that should hopefully help you understand the flow:
# gem install 'google-api-client'
require 'google/api_client'

# Set up the auth client
client_secrets = Google::APIClient::ClientSecrets.load # client_secrets.json must be present in the current directory!
auth_client = client_secrets.to_authorization
auth_client.update!(
  :scope           => 'https://www.googleapis.com/auth/calendar',
  :access_type     => "offline", # will make refresh_token available
  :approval_prompt => 'force',
  :redirect_uri    => 'http://www.myauthorizedredirecturl.com'
)

refresh_token_available = File.exist?('refresh_token.txt')

if !refresh_token_available
  # OAuth URL - this is the url that will prompt a Google Account owner to give access to this app.
  puts "Navigate browser to: '#{auth_client.authorization_uri.to_s}' and copy/paste auth code after redirect."

  # Once the authorization_uri (above) is followed and authorization is given, a redirect will be made
  # to http://www.myauthorizedredirecturl.com (defined above) and include the auth code in the request url.
  print "Auth code: "
  auth_client.code = gets
else
  # If authorization has already been given and the refresh token saved previously, simply set the refresh token here.
  auth_client.refresh_token = File.read('refresh_token.txt')
end

# Now, get our access token, which is what we will need to work with the API.
auth_client.fetch_access_token!

if !refresh_token_available
  # Save refresh_token for next time.
  # Note: auth_client.refresh_token is only available the first time after OAuth permission is granted.
  # If you need it again, the Google Account owner would have to deauthorize your app and you would have to request access again.
  # Therefore, it is important that the refresh token is saved after authenticating the first time!
  File.open('refresh_token.txt', 'w') { |file| file.write(auth_client.refresh_token) }
  refresh_token_available = true
end

api_client = Google::APIClient.new
cal = api_client.discovered_api('calendar', 'v3')

# Get Event List
puts "Getting list of events..."
list = api_client.execute(:api_method => cal.events.list,
                          :authorization => auth_client,
                          :parameters => {
                            'maxResults' => 20,
                            'timeMin'    => '2014-06-18T03:12:24-00:00',
                            'q'          => 'Meeting',
                            'calendarId' => 'primary' })
puts "Fetched #{list.data.items.count} events..."

# Update Event
puts "Updating first event from list..."
update_event = list.data.items[0]
update_event.description = "Updated Description here"
result = api_client.execute(:api_method => cal.events.update,
                            :authorization => auth_client,
                            :parameters => { 'calendarId' => 'primary', 'eventId' => update_event.id },
                            :headers => { 'Content-Type' => 'application/json' },
                            :body_object => update_event)
puts "Done with update."

# Add New Event
puts "Inserting new event..."
new_event = cal.events.insert.request_schema.new
new_event.start = { 'date' => '2015-01-01' } # All-day event
new_event.end = { 'date' => '2015-01-01' }
new_event.description = "Description here"
new_event.summary = "Summary here"
result = api_client.execute(:api_method => cal.events.insert,
                            :authorization => auth_client,
                            :parameters => { 'calendarId' => 'primary' },
                            :headers => { 'Content-Type' => 'application/json' },
                            :body_object => new_event)
puts "Done with insert."
Is there any example of a WSDL parser using SOAP4R? I'm trying to list all the operations of a WSDL file but I can't figure it out :( Can you point me to a tutorial?
Thx
Maybe this isn't the answer you want, but I recommend switching to Savon. For example, your task looks like this snippet (taken from Savon's GitHub page):
require "savon"
# create a client for your SOAP service
client = Savon::Client.new("http://service.example.com?wsdl")
client.wsdl.soap_actions
# => [:create_user, :get_user, :get_all_users]
# execute a SOAP request to call the "getUser" action
response = client.request(:get_user) do
  soap.body = { :id => 1 }
end
response.body
# => { :get_user_response => { :first_name => "The", :last_name => "Hoff" } }
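Note that the snippet above uses the older Savon 1.x API. If you end up on Savon 2.x, the equivalent calls look roughly like this (same example WSDL URL assumed):

require "savon"

client = Savon.client(wsdl: "http://service.example.com?wsdl")

# list the operations declared in the WSDL
client.operations
# => [:create_user, :get_user, :get_all_users]

# call one of them
response = client.call(:get_user, message: { id: 1 })
response.body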