I am new to Sinatra and am trying to implement the following:
A REST service with a get route whose action block can be provided later. Something like:
class C1
  get '/something' do
    <some action to be provided later>
  end
  post '/something' do
    <some action to be provided later>
  end
end
C1.new
C1.get_block = { "Hello from get" }
C1.post_block = { "Hello from post" }
Is it possible to do something like the above? I am designing an intercepting service that can be used to perform different actions depending on conditions.
The following should do the trick. I've added arrays so you can register multiple blocks to be executed upon GET/POST requests.
require 'sinatra/base'

class C1 < Sinatra::Base
  @@get_blocks = []
  @@post_blocks = []

  def self.register_get(&block)
    @@get_blocks << block
  end

  def self.register_post(&block)
    @@post_blocks << block
  end

  get '/something' do
    # Run every registered block; join the results to form the response body.
    @@get_blocks.map { |block| block.call(params) }.join
  end

  post '/something' do
    @@post_blocks.map { |block| block.call(params) }.join
  end
end
C1.new
C1.register_get { |params| "Hello from get" }
C1.register_post { |params| "Hello from post" }
I think @MartinKonecny answered the question using a very nice dynamic and effective approach...
...But please allow me to suggest a different approach, assuming that the code itself is static and that it is activated according to a set of conditions.
I know that using the Plezi framework you could alternate controllers for the same route, so that if one controller doesn't answer the request, the next one is tested.
I believe that Sinatra could be used in the same way, so that when the first route fails, the second route is attempted.
A short demonstration of the concept (not actual app code), using the Plezi framework, would look something like this:
require 'plezi'
listen

class C1
  def index
    "Controller 1 answers"
  end
  def hello
    false
  end
end

class C2
  def show
    "Controller 2 shows your request for: #{params[:id]}"
  end
  def hello
    'Hello World!'
  end
end

route '/(:id)', C1
route '/(:id)', C2

exit # exit the terminal to test this code.
This way, for the path localhost:3000/, the C1 class answers. But the C1 route fails the RESTful request for the path localhost:3000/1, so the C2 route answers that request, attempting to show the object with id == 1.
This is easy to see when accessing the localhost:3000/hello route - the first route fails, and the second one is then attempted.
If you don't need to define the blocks dynamically, perhaps this approach - which I assume is also available in Sinatra - will be easier to code and maintain.
If this isn't available using Sinatra, you could probably imitate this approach using a "routing" class as a controller on your Sinatra app.
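For reference, Sinatra's pass can produce similar fall-through behavior between routes: a route can hand the request off to the next matching route. A rough sketch (the condition here is made up for illustration):

require 'sinatra'

# First route: answers only when its condition holds, otherwise passes on.
get '/:id' do
  pass unless params[:id] == 'special'  # hypothetical condition
  "Controller 1 answers"
end

# Second route: reached whenever the first one calls pass.
get '/:id' do
  "Controller 2 shows your request for: #{params[:id]}"
end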
Another approach would be to specify the conditions in a case statement, calling different methods according to each situation.
class C1
  get '/something' do
    case
    when true # condition
      # do something
    when true # condition
      # do something
    when true # condition
      # do something
    when true # condition
      # do something
    else
      # do something
    end
  end
end
I am fairly new to Ruby and would like to understand how class instance variables behave in the case of multiple parallel requests.
I have a method inside my controller class which is called every time for each request for a specific operation (create in this case):
class DeployProvision
  def self.create(data)
    raise "Input JSON not received." unless data
    # $logger.info input_data.inspect
    failure = false
    response_result = ""
    response_status = "200"
    @validator = SchemaValidate.new
    validation = @validator.validate_create_workflow(data.to_json)
  end
end
This method is called as DeployProvision.create(data).
I am a little confused about how the @validator class instance variable behaves when multiple requests come in. Is it shared among multiple requests? Is it a good idea to declare this as a class instance variable instead of a local variable?
I am working on an existing code base and would like to understand the intent of creating @validator as a class instance variable instead of a local variable.
You can write an ultra-simple script like this:
require 'sinatra'
class Foo
  def self.bar
    @test = Time.now
    puts @test
  end
end

get '/' do
  Foo.bar
end
and you'll see it accomplishes nothing, because with every call you're creating a new instance of Time (SchemaValidate in your code).
If you used memoization and had something like @validator ||= SchemaValidate.new, you would have one instance of SchemaValidate stored between requests.
I don't think that'd change anything in terms of performance, and I have no idea why anyone would do something like that.
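To illustrate the memoized case, here is a variant of the toy Foo script above; it would keep printing the same timestamp for every request served by the same process:

require 'sinatra'

class Foo
  def self.bar
    # Memoized: created on the first call, then reused for later requests in this process.
    @test ||= Time.now
    puts @test
  end
end

get '/' do
  Foo.bar
  'check your console output'
end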
You can have some fun with ultra-simple scripts with sinatra to test how it behaves.
Good luck with this code!
I have a simple MySQL wrapper class which will run a query and return results.
class Rsql
  def initialize(db)
    @client = Mysql2::Client
    @db = db
  end

  def execute_query()
    client = @client.new(@db)
    client.query("select 1")
  end
end
I want to test some stuff involving the results of the query, but I don't want to actually connect to a database to get the results. I tried this test, but it doesn't work:
RSpec.describe Rsql do
  it "does it" do
    mock_database = double
    rsql = Rsql.new(mock_database)

    mock_mysql_client = double
    allow(mock_mysql_client).to receive(:query).and_return({"1" => 1})
    allow_any_instance_of(Mysql2::Client).to receive(:new).and_return(mock_mysql_client)

    expect(rsql.execute_query).to eq({"1" => 1})
  end
end
Replacing allow_any_instance_of() with allow() works. I was under the impression that allow_any_instance_of() was some kind of global "pretend this class behaves in this way across the entire program", whereas allow() is for specific instances of a class.
Can someone explain this behavior to me? I'm new to RSpec, so I apologize if the answer is blatantly obvious. I tried searching for an answer, but I couldn't come up with the right search string to find one. Maybe I don't know enough to know when I've found it.
As of RSpec 3.3, any_instance is discouraged and not recommended for use in your tests.
From the docs:
any_instance is the old way to stub or mock any instance of a class
but carries the baggage of a global monkey patch on all classes. Note
that we generally recommend against using this feature.
You should only need to use allow(some_obj) going forward and the documentation has some great examples (see here).
Such as:
RSpec.describe "receive_messages" do
it "configures return values for the provided messages" do
dbl = double("Some Collaborator")
allow(dbl).to receive_messages(:foo => 2, :bar => 3)
expect(dbl.foo).to eq(2)
expect(dbl.bar).to eq(3)
end
end
Edit: if you really want to use any_instance, do so like this:
allow_any_instance_of(Mysql2::Client).to receive(:something)
Edit 2: your exact stub doesn't work because you're not stubbing an instance; you're stubbing before the object is initialized. In that case you would do allow(Mysql2::Client).to receive(:new).
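Applied to the spec from the question, that would look roughly like this (a sketch, untested):

RSpec.describe Rsql do
  it "does it" do
    mock_database = double
    mock_mysql_client = double
    allow(mock_mysql_client).to receive(:query).and_return({"1" => 1})
    # Stub the class-level :new, since execute_query instantiates the client itself.
    allow(Mysql2::Client).to receive(:new).and_return(mock_mysql_client)

    rsql = Rsql.new(mock_database)
    expect(rsql.execute_query).to eq({"1" => 1})
  end
end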
This Rsql class looks like a service:
class Rsql
  def initialize(db)
    @client = Mysql2::Client
    @db = db
  end

  def execute_query()
    client = @client.new(@db)
    client.query("select 1")
  end
end
Let's create a test for it. We should test the execute_query function with subject(),
and to create clients in the DB we can use let!, like this:
let!(:client1) do
FactoryBot.create(...
With this approach we shouldn't need a double or anything like that.
require 'rails_helper'

RSpec.describe RsqlTest do
  subject(:clients) do
    Rsql.execute_query()
  end

  context 'select' do
    let!(:client1) do
      FactoryBot.create(...
    end

    it 'should return records' do
      expect(clients).to include(client1)
    end
  end
end
My goal is to set an instance variable using AFMotion's AFMotion::HTTP.get method.
I've set up a Post model. I would like to have something like:
class Post
  ...
  def self.all
    response = AFMotion::HTTP.get("localhost/posts.json")
    objects = JSON.parse(response)
    results = objects.map { |x| Post.new(x) }
  end
end
But according to the docs, AFMotion requires some sort of block syntax that looks and behaves like an async JavaScript callback. I am unsure how to use that.
I would like to be able to call
@posts = Post.all in the ViewController. Is this just a Rails dream? Thanks!
Yeah, the base syntax is async, so you don't have to block the UI while you're waiting for the network to respond. The syntax is simple: place all the code you want to run in the block.
class Post
  ...
  def self.all
    AFMotion::HTTP.get("localhost/posts.json") do |result|
      if result.success?
        p "You got JSON data"
        # feel free to parse this data into an instance var
        # (result.object holds the response payload here)
        objects = JSON.parse(result.object)
        @results = objects.map { |x| Post.new(x) }
      elsif result.failure?
        p result.error.localizedDescription
      end
    end
  end
end
Since you mentioned Rails: yeah, the logic here is a little different. You'll need to place the code you want to run (on completion) inside the async block. If it's going to change often, or has nothing to do with your model, then pass a &block into your method and use that to call back when it's done, as in the sketch below.
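A minimal sketch of that block-passing approach (the parsing details and result.object are assumptions; adjust for your AFMotion setup):

class Post
  def self.all(&block)
    AFMotion::HTTP.get("localhost/posts.json") do |result|
      if result.success?
        posts = JSON.parse(result.object).map { |attrs| Post.new(attrs) }
        block.call(posts)
      else
        block.call([])
      end
    end
  end
end

# In the view controller:
Post.all do |posts|
  @posts = posts
  # refresh your table view here
end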
I hope that helps!
Is there a way to keep track of variables that are created when using let?
I have a series of tests, some of which use let(:server) { #blah blah }. Part of the blah is to wait for the server to start up so that it is in a decent state before it is used.
The issue comes when I'm done with that test. I want to kill the server using server.kill(). This would be almost perfect if I could say something to the effect of
after(:each) { server.kill }
But this would create the server, wasting all the resources and time needed to start it, only to kill it immediately if the server hadn't been used in the preceding test. Is there a way to keep track of the server and only clean it up if it has been used?
I've come across a similar problem. A simple way to solve this is to set an instance variable in the let block to track whether the object was created:
describe MyTest do
  before(:each) { @created_server = false }

  let(:server) {
    @created_server = true
    Server.new
  }

  after(:each) { server.kill if @created_server }
end
What I would do is something like this:
describe MyTest do
let(:server) { Server.new }
context "without server" do
## dont kill the server in here.
end
context "with server" do
before do
server
end
after(:each) { server.kill }
it {}
it {}
end
end
This is definitely a hack:
describe "cleanup for let" do
let(:expensive_object) {
ExpensiveObject.new
}
after(:context) {
v = __memoized[:expensive_object]
v.close if v
}
end
I figured that rspec had to be storing these lazy values somewhere the instance could access them, and __memoized is that place.
With a helper, it becomes a bit tidier:
def cleanup(name, &block)
  after(:context) do
    v = __memoized[name]
    instance_exec(v, &block) if v
  end
end

describe "cleanup for let" do
  let(:expensive_object) {
    ExpensiveObject.new
  }

  cleanup(:expensive_object) { |v|
    v.close
  }
end
There's still room for improvement, though. I think I would rather not have to type the object's name twice, so something like this would be nicer:
describe "cleanup for let" do
let(:expensive_object) {
ExpensiveObject.new
}.cleanup { |v|
v.close
}
end
I'm not sure I can do that without hacking rspec to pieces, but maybe if rspec themselves saw the benefit of it, something could be done in core...
Edit: Changed to using instance_exec because rspec started whining if things were called from the wrong context, and changed cleanup to be after(:context), because apparently this is the level it's memoising at.
Just write a small decorator that handles both explicit and implicit starting of the server and lets you determine whether the server has been started.
Imagine this to be the real server that needs to be started:
class TheActualServer
  def initialize
    puts 'Server starting'
  end

  def operation1
    1
  end

  def operation2
    2
  end

  def kill
    puts 'Server stopped'
  end
end
The reusable decorator could look like this:
class ServiceWrapper < BasicObject
  def initialize(&start_procedure)
    @start_procedure = start_procedure
  end

  def started?
    !!@instance
  end

  def instance
    @instance ||= @start_procedure.call
  end

  alias start instance

  private

  def method_missing(method_name, *arguments)
    instance.public_send(method_name, *arguments)
  end

  def respond_to?(method_name)
    super || instance.respond_to?(method_name)
  end
end
Now you can apply this in your specs like the following:
describe 'something' do
  let(:server) do
    ServiceWrapper.new { TheActualServer.new }
  end

  specify { expect(server.operation1).to eql 1 }
  specify { expect(server.operation2).to eql 2 }
  specify { expect(123).to be_a Numeric }

  context 'when server is running' do
    before(:each) { server.start }

    specify { expect('abc').to be_a String }
    specify { expect(/abc/).to be_a Regexp }
  end

  after(:each) { server.kill if server.started? }
end
When a method is called on the decorator, it will run its own implementation if one exists. For example, if #started? is called, it will answer whether the actual server has been started or not. If the decorator doesn't have its own implementation of that method, it will delegate the method call to the wrapped server instance. If it doesn't have a reference to an instance of the actual server at that point, it will run the provided start_procedure to get one and memoize it for future calls.
If you put all the posted code into a file called server_spec.rb you can then run it with:
rspec server_spec.rb
The output will be like this:
something
Server starting
Server stopped
should eql 1
Server starting
Server stopped
should eql 2
should be a kind of Numeric
when server is running
Server starting
Server stopped
should be a kind of String
Server starting
Server stopped
should be a kind of Regexp
Finished in 0.00165 seconds (files took 0.07534 seconds to load)
5 examples, 0 failures
Note that in examples 1 and 2, methods on the server are called, and therefore you see the output of the server that is implicitly started by the decorator.
In example 3 there is no interaction with the server at all, therefore you don't see the server's output in the log.
Then again, in examples 4 and 5, there is no direct interaction with the server object in the example code, but the server is explicitly started through a before block, which can also be seen in the output.
I have a Sinatra based REST service app and I would like to call one of the resources from within one of the routes, effectively composing one resource from another. E.g.
get '/someresource' do
  otherresource = get '/otherresource'
  # do something with otherresource, return a new resource
end

get '/otherresource' do
  # etc.
end
A redirect will not work since I need to do some processing on the second resource and create a new one from it. Obviously I could a) use RestClient or some other client framework, or b) structure my code so all of the logic for otherresource is in a method and just call that; however, it feels like it would be much cleaner if I could just re-use my resources from within Sinatra using its DSL.
Another option (I know this isn't answering your actual question) is to put your common code (even the template render) within a helper method, for example:
helpers do
  def common_code( layout = true )
    @title = 'common'
    erb :common, :layout => layout
  end
end

get '/foo' do
  @subtitle = 'foo'
  common_code
end

get '/bar' do
  @subtitle = 'bar'
  common_code
end

get '/baz' do
  @subtitle = 'baz'
  @common_snippet = common_code( false )
  erb :large_page_with_common_snippet_injected
end
Sinatra's documentation covers this - essentially you use the underlying rack interface's call method:
http://www.sinatrarb.com/intro.html#Triggering%20Another%20Route
Triggering Another Route
Sometimes pass is not what you want, instead
you would like to get the result of calling another route. Simply use
call to achieve this:
get '/foo' do
  status, headers, body = call env.merge("PATH_INFO" => '/bar')
  [status, headers, body.map(&:upcase)]
end

get '/bar' do
  "bar"
end
I was able to hack something up by making a quick and dirty rack request and calling the Sinatra (a rack app) application directly. It's not pretty, but it works. Note that it would probably be better to extract the code that generates this resource into a helper method instead of doing something like this. But it is possible, and there might be better, cleaner ways of doing it than this.
#!/usr/bin/env ruby
require 'rubygems'
require 'stringio'
require 'sinatra'

get '/someresource' do
  resource = self.call(
    'REQUEST_METHOD' => 'GET',
    'PATH_INFO' => '/otherresource',
    'rack.input' => StringIO.new
  )[2].join('')
  resource.upcase
end

get '/otherresource' do
  "test"
end
If you want to know more about what's going on behind the scenes, I've written a few articles on the basics of Rack you can read. There is What is Rack? and Using Rack.
This may or may not apply in your case, but when I’ve needed to create routes like this, I usually try something along these lines:
%w(main other).each do |uri|
  get "/#{uri}" do
    @res = "hello"
    @res.upcase! if uri == "other"
    @res
  end
end
Building on AboutRuby's answer, I needed to support fetching static files in lib/public as well as query parameters and cookies (for maintaining authenticated sessions). I also chose to raise exceptions on non-200 responses (and handle them in the calling functions).
If you trace Sinatra's self.call method in sinatra/base.rb, it takes an env parameter and builds a Rack::Request with it, so you can dig in there to see what parameters are supported.
I don't recall all the conditions of the return statements (I think there were some Ruby 2 changes), so feel free to tune to your requirements.
Here's the function I'm using:
def get_route url
  fn = File.join(File.dirname(__FILE__), 'public'+url)
  return File.read(fn) if (File.exist?fn)

  base_url, query = url.split('?')
  begin
    result = self.call('REQUEST_METHOD' => 'GET',
                       'PATH_INFO' => base_url,
                       'QUERY_STRING' => query,
                       'rack.input' => StringIO.new,
                       'HTTP_COOKIE' => @env['HTTP_COOKIE'] # Pass auth credentials
                      )
  rescue Exception=>e
    puts "Exception when fetching self route: #{url}"
    raise e
  end
  raise "Error when fetching self route: #{url}" unless result[0]==200 # status
  return File.read(result[2].path) if result[2].is_a? Rack::File
  return result[2].join('') rescue result[2].to_json
end