rails persist objects over requests in development mode - ruby

I am trying to interact with Matlab.Application.Single win32ole objects in my Rails application. The problem I am running into is that while I am developing my application, each separate request reloads my win32ole objects, so I lose the connection to my original Matlab instances and new instances are made. Is there a way to persist live objects between requests in Rails? Or is there a way to reconnect to my Matlab.Application.Single instances?
In production mode I use module variables to store my connections between requests, but in development mode module variables are reloaded on every request.
here is a snippet of my code
require 'win32ole'

module Calculator
  @engine2 = nil
  @engine3 = nil

  def self.engine2
    if @engine2.nil?
      @engine2 = WIN32OLE.new("Matlab.Application.Single")
      @engine2.execute("run('setup_path.m')")
    end
    @engine2
  end

  def self.engine3
    if @engine3.nil?
      @engine3 = WIN32OLE.new("Matlab.Application.Single")
      @engine3.execute("run('setup_path.m')")
    end
    @engine3
  end

  def self.load_CT_image(file)
    Calculator.engine2.execute("spm_image('Init','#{file}')")
  end

  def self.load_MR_image(file)
    Calculator.engine3.execute("spm_image('Init','#{file}')")
  end
end
I am then able to use my code in my controllers like this:
Calculator.load_CT_image('Post_Incident_CT.hdr')
Calculator.load_MR_image('Post_Incident_MRI.hdr')

You can keep an app-wide object in a constant that won't be reset for every request. Add this to a new file in config/initializers/:
ENGINE_2 = WIN32OLE.new("Matlab.Application.Single")
You might also need to include the .execute("run('setup_path.m')") line here as well (I'm not familiar with WIN32OLE). You can then assign that object to your instance variables in your Calculator module (just replace the WIN32OLE.new("Matlab.Application.Single") call with ENGINE_2), or simply refer to the constants directly.
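For example, a rough sketch of that initializer plus a trimmed-down module (the file name and the ENGINE_3 constant are my assumptions; the setup_path call is carried over from your question):

# config/initializers/matlab_engines.rb  (file name is an assumption)
require 'win32ole'

ENGINE_2 = WIN32OLE.new("Matlab.Application.Single")
ENGINE_2.execute("run('setup_path.m')")

# ENGINE_3 mirrors the question's second engine
ENGINE_3 = WIN32OLE.new("Matlab.Application.Single")
ENGINE_3.execute("run('setup_path.m')")

Calculator can then just refer to the constants, which survive code reloading in development mode:

module Calculator
  def self.load_CT_image(file)
    ENGINE_2.execute("spm_image('Init','#{file}')")
  end

  def self.load_MR_image(file)
    ENGINE_3.execute("spm_image('Init','#{file}')")
  end
end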
I know this is beyond the scope of your question, but you have a lot of duplicated code here, and you might want to think about creating a class or module to manage your Matlab instances -- spinning up new ones as needed, and shutting down old ones that are no longer in use.
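For instance, a very rough sketch of such a manager (the class, its method names, and the Quit call are assumptions, not anything from your code):

class MatlabEngineManager
  def initialize
    @engines = {}
  end

  # Lazily create a named MATLAB instance on first use
  def engine(name)
    @engines[name] ||= begin
      engine = WIN32OLE.new("Matlab.Application.Single")
      engine.execute("run('setup_path.m')")
      engine
    end
  end

  # Shut down an instance that is no longer in use
  def shutdown(name)
    engine = @engines.delete(name)
    engine.Quit if engine # Quit is the usual COM call to close MATLAB; verify it for your setup
  end
end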

Related

Using "helper" method in view

I made a helper in my Rails project that makes a request to an external API and gets a certain value from it.
def show_CoinPrice coin
  begin
    coinTicker = JSON.parse(HTTParty.get("https://www.mercadobitcoin.net/api/#{coin}/ticker/").body)
    "R$ #{coinTicker["ticker"]["last"].to_number.round}"
  rescue
    "---"
  end
end
However, I have doubts about whether this is good practice (this code caused slowness in my view). Is there something I can do better, whether with a controller or something else?
Thanks.
I think you should request the external API from the client side (JS), because fetching the API with a Rails helper runs while your server is rendering the view, which blocks the response.
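If you do keep the request on the server, another common mitigation (not part of the answer above) is to cache the external call so the view is not blocked on every render. A minimal sketch using Rails.cache; the cache key, the expiry, and the use of .to_f in place of the question's .to_number are assumptions:

def show_CoinPrice coin
  # Serve a recently fetched ticker from the cache instead of hitting
  # the external API on every render
  coinTicker = Rails.cache.fetch("coin_price/#{coin}", expires_in: 1.minute) do
    JSON.parse(HTTParty.get("https://www.mercadobitcoin.net/api/#{coin}/ticker/").body)
  end
  "R$ #{coinTicker["ticker"]["last"].to_f.round}"
rescue
  "---"
end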

How can I run a method at the end of a file from a gem?

I have a gem (e.g. mygem) and as is normal, I add mygem to a file by putting require "mygem" at the top. What if I have a method in mygem called finish_jobs and I want it to run in the following location:
require "mygem"
# code, code code
finish_jobs
How would I do that without forcing the user to add the method every time they use the gem?
Specifically, what I am trying to do is write a server app (with rack) and I need the methods in the body of the file to be processed before the server is started.
This is certainly possible.
Why not just add the code directly into the Gem (since it sounds like it is under your control and is not an external dependency)?
module MyGem
  def printSomething
    p 2 + 2
  end
  module_function :printSomething

  printSomething()
  # => 4
end
If this isn't what you had in mind, let me know and I can update the solution.
Also, see Kernel#at_exit
A more explanatory guide on Kernel#at_exit
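For reference, a minimal sketch of the at_exit approach; a block registered inside the gem runs after the user's file has finished executing (Sinatra's classic style starts its server the same way):

# lib/mygem.rb
module MyGem
  def self.finish_jobs
    # start the rack server, flush queued work, etc.
  end
end

# Registered when the user does `require "mygem"`; the block runs once
# the main script (and everything after the require) has finished.
at_exit { MyGem.finish_jobs }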
I don't know of a way to do what you're describing.
One workaround would be to provide an API which accepts a block. This approach allows you to run code after the user's setup without exposing implementation details to them.
A user could call your library method, providing a block to set up their server:
require "mygem"
MyGem.code_code_code {
# user's code goes here
}
Then, your library code would:
1) Accept the block
2) Run it, then call the gem's finalizer
Here's an example implementation:
module MyGem
  # Run some user-provided code by `yield`-ing the block,
  # then run the gem's finalizer
  def self.code_code_code
    # Execute the block:
    yield
    # Finalize:
    finish_jobs
  end
end
This way, you can accept code from the user but still control setup and finalization.
I hope it helps!

Rails with mutex on class variable, rake task and cron

Sorry for such a big question. I do not have much experience with Rails threads and mutexes.
I have a class, shown below, which is used by different controllers to get the license for each customer.
Customers and their licenses get added and removed every hour. An API is available to get all customers and their licenses.
I plan to create a rake task to call update_set_customers_licenses, run hourly via a cronjob.
I have the following questions:
1) Even with a mutex, there is still a potential problem: my rake task can run while an update is already in progress. Any idea how to solve this?
2) My design below writes the JSON out to a file; this is done for safety, as the API is not that reliable. As can be seen, it never reads the file back, so in essence the file write is useless. I tried to implement a file read, but together with the mutex and the rake task it gets really confusing. Any pointers will help here.
class Customer
  @@customers_to_licenses_hash = nil
  @@last_updated_at = nil
  @@mutex = Mutex.new

  CUSTOMERS_LICENSES_FILE = "#{Rails.root}/tmp/customers_licenses"

  def self.cached_license_with_customer(customer)
    Rails.cache.fetch('customer') { self.license_with_customer(customer) }
  end

  def self.license_with_customer(customer)
    @@mutex.synchronize do
      license = @@customers_to_licenses_hash[customer]
      if license
        return license
      elsif(@@customers_to_licenses_hash.nil? || Time.now.utc - @@last_updated_at > 1.hours)
        updated = self.update_set_customers_licenses
        return @@customers_to_licenses_hash[customer] if updated
      else
        return nil
      end
    end
  end

  def self.update_set_customers_licenses
    updated = nil
    file_write = File.open(CUSTOMERS_LICENSES_FILE, 'w')
    results = self.get_active_customers_licenses
    if results
      @@customers_to_licenses_hash = results
      file_write.print(results.to_json)
      @@last_updated_at = Time.now.utc
      updated = true
    end
    file_write.close
    updated
  end

  def self.get_active_customers_licenses
    # HTTP GET through the API
    # returns a hash of records
  end
end
I'm pretty sure it's the case that every time Rails loads, the environment is "fresh" and has no concept of "state" in between instances. That is to say, a mutex in one Ruby instance (the one handling a request to Rails) has no effect on a second Ruby instance (another request to Rails or, in this case, a rake task).
If you follow the data upstream, you'll find that the common root of every instance that can be used to synchronize them is the database. You could use transactional blocks or maybe a manual flag you set and unset in the database.
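As a sketch of the manual-flag idea, one option is a small ActiveRecord-backed lock; the SyncLock model, its table, and the lock name below are all hypothetical:

class SyncLock < ActiveRecord::Base
  # table: sync_locks, column: name (string)

  # Serializes the block across processes (web workers, rake tasks) by
  # holding a row-level lock (SELECT ... FOR UPDATE) for its duration.
  def self.acquire(name)
    transaction do
      row = where(:name => name).first || create(:name => name)
      row.lock!  # re-reads the row with SELECT ... FOR UPDATE
      yield
    end
  end
end

# In the hourly rake task (and anywhere else that refreshes the licenses):
# SyncLock.acquire("customer_licenses") { Customer.update_set_customers_licenses }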

How much should interfaces of elements in Page Objects be abstracted?

I have a page object called LineItemsPage
class LineItemsPage
  attr_accessor :add_line_item_button

  def initialize(test_env)
    @browser = test_env[:browser]
    @action_bar = @browser.div(:id => 'lineitems_win').div(:class => 'window-body').div(:class => 'actionbar')
    @add_line_item_button = @action_bar.img(:class => 'button add')
  end

  def method_missing(sym, *args, &block)
    @browser.send sym, *args, &block
  end
end
I use it like so:
When /^I click on Add Item and enter the following values:$/ do |table|
  @line_items_page = LineItemsPage.new(@test_env)
  @line_items_page.add_line_item_button.when_present.click
end
I'm wondering if I should be abstracting the click, by adding something like the following to my LineItemsPage class:
def add_item
  self.add_line_item_button.when_present.click
end
And then using it like so:
@line_items_page.add_item
I'm looking for best practices, either with regard to Page Objects in particular or Ruby in general. I feel that encapsulating the interface by using add_item() is going a bit far, but I'm wondering if I'm unaware of issues I might run into down the road if I don't do that.
Personally, I try to make my page object methods be in the domain language with no reference to the implementation.
I used to do something like @line_items_page.add_line_item_button.when_present.click, however it caused problems in the following scenarios:
1) The add-line-item control was changed from a button to a link.
2) The process for adding a line item changed - say it's now done by a right-click, or it has become a two-step process (like opening some dropdown and then clicking the add line).
In either case, you would have to locate all the places you add line items and update them. If you had all the logic in the add_item page-object method, you would only have to update the one place (see the sketch below).
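For illustration, a sketch of how add_item can absorb such a change while the step definitions stay the same (the two-step dropdown flow and its locators are invented):

class LineItemsPage
  # Steps keep calling add_item; only this method changes when the UI does.
  def add_item
    # Old UI: a single image button
    #   @add_line_item_button.when_present.click
    # Hypothetical new UI: open a menu, then click the entry
    @browser.div(:class => 'actionbar').span(:class => 'menu').when_present.click
    @browser.a(:text => 'Add Line Item').when_present.click
  end
end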
From an implementation perspective, I have found that Cheezy's page object accessors work pretty well. However, for image buttons (or any of your app's custom controls), I would add additional methods to the PageObject::Accessors module. Or if they are one off controls, you can add the methods directly to the specific page object.
Update - Reply to Comment Regarding Some Starting Points:
I have not come across too much documentation, but here are a couple links that might help:
1) The Cheezy Page Object project wiki - Gives a simple example to get started
2) Cheezy's blog posts where the page object gem first started. Note that the content here might not be exactly how the gem is currently implemented, but I think it gives a good foundation to understanding what he is trying to achieve. This in turn makes it easier to understand what is happening when you have to open up and modify the gem to fit you needs.

Ruby TCP "bot" using EventMachine - implementing a command dispatcher

I've crafted a basic TCP client using EventMachine. Code:
# run.rb
EventMachine::run do
  EventMachine::connect $config_host, $config_port, Bot
end

# bot.rb
module Bot
  def post_init
    # log us in and do any other spinup.
    sleep(1)
    send_data $config_login + "\n"
    EventMachine.add_periodic_timer($config_keepalive_duration) { send_data $config_keepalive_str + "\n" }
    @valid_command = /^<#{$config_passphrase}:([^>:]+):(#\d+)>(.*)$/
  end

  def receive_data(data)
    if(ma = @valid_command.match(data))
      command, user, args = ma[1,3]
      args.strip!
      command.downcase!
      p "Received: #{command}, #{user}, #{args}"
      # and.. here we handle the command.
    end
  end
end
This all works quite well. The basic idea is that it should connect, listen for specially formatted commands, and execute the command; in executing the command, there may be any number of "actions" taken that result in various data sent by the client.
But, for my next trick, I need to add the ability to actually handle the commands that Bot receives.
I'd like to do this using a dynamic library of event listeners, or something similar; i.e., I have an arbitrary number of plugins that can register to listen for a specific command and get a callback from bot.rb. (Eventually, I'd like to be able to reload these plugins without restarting the bot, too.)
I've looked at the ruby_events gem and I think it makes sense, but I'm having a little trouble figuring out the best way to architect things. My questions include:
I'm a little puzzled about where to attach ruby_events listeners - the gem just extends Object, so it isn't obvious how to implement it.
Bot is a module, so I can't just call Bot.send_data from one of the plugins to send data - how can I interact with the EM connection from one of the plugins?
I'm completely open to any architectural revisions or suggestions of other gems that make what I'm trying to do easier, too.
I'm not sure exactly what you're trying to do, but one common pattern for this in EM is to define the command handlers as callbacks. The command logic can then be pushed up out of the Bot module itself, which just handles the basic socket communication and command parsing. Think of how a web server dispatches to an application - the web server doesn't do the work, it just dispatches.
For example, something like this:
EM::run do
  bot = EM::connect $config_host, $config_port, Bot

  bot.on_command_1 do |user, *args|
    # handle command_1 in a block
  end

  # or if you want to modularize it, something like this,
  # where Command2Handler = lambda {|user, *args| ... }
  bot.on_command_2(&Command2Handler)
end
So then you just need to implement #on_command_1, #on_command_2, etc. in your Bot, which is just a matter of storing the procs in instance variables and then calling them once you parse the commands out of the data passed to receive_data; a sketch follows below.
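A rough sketch of that storing-and-dispatching part, reusing the question's regex; the handler names and the assumption that the matched command string maps directly to a registered key are mine:

module Bot
  # EM::connect returns the connection object, so these can be called
  # on the `bot` value above to register handlers
  def on_command_1(&block)
    (@handlers ||= {})["command_1"] = block
  end

  def on_command_2(&block)
    (@handlers ||= {})["command_2"] = block
  end

  def receive_data(data)
    if (ma = @valid_command.match(data))
      command, user, args = ma[1, 3]
      args.strip!
      command.downcase!
      # Dispatch to the registered handler for this command, if any;
      # splitting args on whitespace is an assumption
      handler = (@handlers || {})[command]
      handler.call(user, *args.split) if handler
    end
  end
end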
A good, very readable example of this in production is TwitterStream.
