I'm reading Redis sets within an EventMachine reactor loop using a suitable Redis EM gem ('em-hiredis' in my case), and I have to check whether some Redis sets contain members, in a cascade. My aim is to get the name of the first set that is not empty:
require 'eventmachine'
require 'em-hiredis'

def fetch_queue
  @redis.scard('todo').callback do |scard_todo|
    if scard_todo.zero?
      @redis.scard('failed_1').callback do |scard_failed_1|
        if scard_failed_1.zero?
          @redis.scard('failed_2').callback do |scard_failed_2|
            if scard_failed_2.zero?
              @redis.scard('failed_3').callback do |scard_failed_3|
                if scard_failed_3.zero?
                  EM.stop
                else
                  queue = 'failed_3'
                end
              end
            else
              queue = 'failed_2'
            end
          end
        else
          queue = 'failed_1'
        end
      end
    else
      queue = 'todo'
    end
  end
end

EM.run do
  @redis = EM::Hiredis.connect "redis://#{HOST}:#{PORT}"
  # How to get the value of fetch_queue?
  foo = fetch_queue
  puts foo
end
My question is: how can I tell EM to return the value of 'queue' from 'fetch_queue' so I can use it in the reactor loop? A simple "return queue = 'todo'", "return queue = 'failed_1'", etc. in fetch_queue results in an "unexpected return (LocalJumpError)" error.
Please, for the love of debugging, use some more methods; you wouldn't factor other code like this, would you?
Anyway, this is essentially what you probably want to do, so you can both factor and test your code:
require 'eventmachine'
require 'em-hiredis'

# This is a simple class that represents an extremely simple, linear state
# machine. It just walks the "from" parameter one by one, until it finds a
# non-empty set by that name. When a non-empty set is found, the given callback
# is called with the name of the set.
class Finder
  def initialize(redis, from, &callback)
    @redis = redis
    @from = from.dup
    @callback = callback
  end

  def do_next
    # If the from list is empty, we terminate, as we have no more steps.
    unless @current = @from.shift
      return EM.stop # or @callback.call :error, whatever
    end

    @redis.scard(@current).callback do |scard|
      if scard.zero?
        do_next
      else
        @callback.call @current
      end
    end
  end
  alias go do_next
end

EM.run do
  @redis = EM::Hiredis.connect "redis://#{HOST}:#{PORT}"

  finder = Finder.new(@redis, %w[todo failed_1 failed_2 failed_3]) do |name|
    puts "Found non-empty set: #{name}"
  end

  finder.go
end
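If you don't want finding nothing to shut the reactor down, a small variation (a sketch, not the only way) is to signal the miss through the same callback and let the caller decide:

# Hypothetical variation of Finder#do_next: report a miss via the
# callback (with nil) instead of stopping the reactor directly.
def do_next
  unless @current = @from.shift
    return @callback.call(nil) # nil means "every set was empty"
  end

  @redis.scard(@current).callback do |scard|
    scard.zero? ? do_next : @callback.call(@current)
  end
end

finder = Finder.new(@redis, %w[todo failed_1]) do |name|
  name ? puts("Found non-empty set: #{name}") : EM.stop
end
finder.go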
I'm building a small game in Ruby to practice programming. So far everything has gone well, but I'm trying to implement multiplayer support. I can connect to the server and I can send information, but when I try to read from the server it just freezes and my screen goes completely black. I can't find the cause; I've read the documentation for the gem I'm using for TCP, so maybe I missed something, but if any of you have some insight I would really appreciate it.
here's the repo if this code isn't enough:
https://github.com/jaypitti/ruby-2d-gosu-game
here's the client-side code:
class Client
  include Celluloid::IO

  def initialize(server, port)
    begin
      @socket = TCPSocket.new(server, port)
    rescue
      $error_message = "Cannot find game server."
    end
  end

  def send_message(message)
    @socket.write(message) if @socket
  end

  def read_message
    @socket.readpartial(4096) if @socket
  end
end
here's the game server:
require 'celluloid/autostart'
require 'celluloid/io'

class Server
  include Celluloid::IO
  finalizer :shutdown

  def initialize(host, port)
    puts "Starting Server on #{host}:#{port}."
    @server = TCPServer.new(host, port)
    @objects = Hash.new
    @players = Hash.new
    async.run
  end

  def shutdown
    @server.close if @server
  end

  def run
    loop { async.handle_connection @server.accept }
  end

  def handle_connection(socket)
    _, port, host = socket.peeraddr
    user = "#{host}:#{port}"
    puts "#{user} has joined the arena."

    loop do
      data = socket.readpartial(4096)
      data_array = data.split("\n")
      if data_array and !data_array.empty?
        begin
          data_array.each do |row|
            message = row.split("|")
            if message.size == 10
              case message[0]
              when 'obj'
                @players[user] = message[1..9] unless @players[user]
                @objects[message[1]] = message[1..9]
              when 'del'
                @objects.delete message[1]
              end
            end

            response = String.new
            @objects.each_value do |obj|
              (response << obj.join("|") << "\n") if obj
            end
            socket.write response
          end
        rescue Exception => exception
          puts exception.backtrace
        end
      end # end data
    end # end loop
  rescue EOFError => err
    player = @players[user]
    puts "#{player[3]} has left"
    @objects.delete player[0]
    @players.delete user
    socket.close
  end
end

server, port = ARGV[0] || "0.0.0.0", ARGV[1] || 1234
supervisor = Server.supervise(server, port.to_i)

trap("INT") do
  supervisor.terminate
  exit
end

sleep
it just freezes and my screen goes completely black. I can't find the cause
A good trick is to attach to your process with either rbspy or rbtrace to see what is going on when it is stuck.
You can also try reducing the dependencies here a bit first and doing this with a simple thread pool before going fully async with Celluloid or EventMachine.
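For instance, a minimal sketch of the same accept loop on a fixed-size thread pool (the echo handler and pool size are placeholders, not from the original code):

require 'socket'

# Fixed pool of worker threads pulling accepted sockets off a queue.
server = TCPServer.new("0.0.0.0", 1234)
jobs   = Queue.new

workers = 4.times.map do
  Thread.new do
    while (socket = jobs.pop)
      begin
        loop { socket.write(socket.readpartial(4096)) } # echo; replace with real handling
      rescue EOFError, IOError
        socket.close
      end
    end
  end
end

loop { jobs << server.accept }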
First of all, you should not be rescuing Exception all over the place. Wrapping long begin/rescue blocks around nested iterators is begging for trouble.
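A minimal sketch of narrowing it at the point where you read (read_chunk is a made-up helper, not part of the original code):

# Rescue only the disconnect errors we actually expect from the socket.
def read_chunk(socket, user)
  socket.readpartial(4096)
rescue EOFError, IOError, Errno::ECONNRESET => e
  puts "#{user} disconnected (#{e.class})"
  socket.close
  nil # caller drops this client when it sees nil
end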
It sounds like a threading, memory, and/or CPU issue, but that's just a guess. Try to monitor your resources, or use some performance-checking gems. But for the love of Satoshi Nakamoto, please write some test coverage and see your methods fail miserably, then fix them!
Some of these may help:
group :development do
  gem 'bullet', require: false
  gem 'flamegraph', require: false
  gem 'memory_profiler', require: false
  gem 'rack-mini-profiler', require: false
  gem 'seed_dump'
  gem 'stackprof', require: false
  gem 'traceroute', require: false
end
I have this simple code:
require 'json'

module Html
  class JsonHelper
    attr_accessor :path

    def initialize(path)
      @path = path
    end

    def add(data)
      old = JSON.parse(File.read(path))
      merged = old.merge(data)
      File.write(path, merged.to_json)
    end
  end
end
and this spec (reduced as much as I could while still working):
require 'html/helpers/json_helper'

describe Html::JsonHelper do
  let(:path) { "/test/data.json" }
  subject { described_class.new(path) }

  describe "#add(data)" do
    before(:each) do
      allow(File).to receive(:write).with(path, anything) do |path, data|
        @saved_string = data
        @saved_json = JSON.parse(data)
      end
      subject.add(new_data)
    end

    let(:new_data) { { oldestIndex: 100 } }
    let(:old_data) { { "test" => 'testing', "old" => 50 } }

    def stub_old_json
      allow(File).to receive(:read).with(path).and_return(@data_before.to_json)
    end

    context "when given data is not present" do
      before(:each) do
        puts "HERE"
        binding.pry
        @data_before = old_data
        stub_old_json
      end

      it "adds data" do
        expect(@saved_json).to include("oldestIndex" => 100)
      end

      it "doesn't change old data" do
        expect(@saved_json).to include(old_data)
      end
    end
  end
end
HERE never gets printed, binding.pry doesn't stop execution, and the tests fail with the message: No such file or directory # rb_sysopen - /test/data.json
This all means that before(:each) never gets executed.
Why?
How can I fix it?
It does not print the desired message because it fails in the first before block (see the RSpec docs about execution order).
It fails because you provided an absolute path, so it is checking /test/data.json.
Either use a path relative to the test, i.e. ../data.json (just guessing),
or the full path.
In the case of Rails:
Rails.root.join('path_to_folder_with_data_json', 'data.json')
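If the file is meant to exist on disk, a minimal sketch of both options (the fixture path here is made up):

# Relative to the spec file:
let(:path) { File.expand_path('fixtures/data.json', __dir__) }

# Or, in a Rails app:
let(:path) { Rails.root.join('spec', 'fixtures', 'data.json').to_s }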
I am building a simple web spider using Sidekiq and Mechanize.
When I run this for one domain, it works fine. When I run it for multiple domains, it fails. I believe the reason is that web_page gets overwritten when another Sidekiq worker instantiates it, but I am not sure if that's true or how to fix it.
# my scrape_search controller's create action searches on google.
def create
  @scrape = ScrapeSearch.build(keywords: params[:keywords], profession: params[:profession])
  agent = Mechanize.new
  scrape_search = agent.get('http://google.com/') do |page|
    search_result = page.form...
    search_result.css("h3.r").map do |link|
      result = link.at_css('a')['href'] # Narrowing down to real search results
      @domain = Domain.new(some params)
      ScrapeDomainWorker.perform_async(@domain.url, @domain.id, remaining_keywords)
    end
  end
end
I'm creating a Sidekiq job per domain. Most of the domains I'm looking for should contain just a few pages, so there's no need for sub-jobs per page.
This is my worker:
class ScrapeDomainWorker
  include Sidekiq::Worker
  ...

  def perform(domain_url, domain_id, keywords)
    @domain = Domain.find(domain_id)
    @domain_link = @domain.protocol + '://' + domain_url
    @keywords = keywords

    # First we scrape the homepage and get the first links
    @domain.to_parse = ['/'] # to_parse is an array of PATHS to parse for the domain
    mechanize_path('/')
    @domain.verified << '/' # verified is an Array field containing valid domain paths
    get_paths(@web_page) # Now we should have to_scrape populated with homepage links

    @domain.scraped = 1 # Loop counter
    while @domain.scraped < 100
      @domain.to_parse.each do |path|
        @domain.to_parse.delete(path)
        @domain.scraped += 1
        mechanize_path(path) # We create a Nokogiri HTML doc with mechanize for the valid path
        ...
        get_paths(@web_page) # Fire this to repopulate to_scrape !!!
      end
    end
    @domain.save
  end

  def mechanize_path(path)
    agent = Mechanize.new
    begin
      @web_page = agent.get(@domain_link + path)
    rescue Exception => e
      puts "Mechanize Exception for #{path} :: #{e.message}"
    end
  end

  def get_paths(web_page)
    # This works when I scrape a single domain, but fails with ".gsub for nil" when I scrape a few domains.
    paths = web_page.links.map { |link| link.href.gsub((@domain.protocol + '://' + @domain.url), "") }
    paths.uniq.each do |path|
      @domain.to_parse << path
    end
  end
end
This works when I scrape a single domain, but fails with ".gsub for nil" on web_page when I scrape a few domains.
You can wrap your code in another class, and then create an object of that class within your worker:
class ScrapeDomainWrapper
  def initialize(domain_url, domain_id, keywords)
    # ...
  end

  def mechanize_path(path)
    # ...
  end

  def get_paths(web_page)
    # ...
  end
end
And your worker:
class ScrapeDomainWorker
  include Sidekiq::Worker

  def perform(domain_url, domain_id, keywords)
    ScrapeDomainWrapper.new(domain_url, domain_id, keywords)
  end
end
Also, bear in mind that Mechanize::Page#links may be nil.
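A minimal sketch of guarding against that inside get_paths (assuming the wrapper keeps the original method and instance variable names):

# Hypothetical nil guards: skip pages that failed to load, and skip
# links without an href, instead of calling .gsub on nil.
def get_paths(web_page)
  return if web_page.nil? || web_page.links.nil?

  prefix = @domain.protocol + '://' + @domain.url
  paths = web_page.links.map { |link| link.href && link.href.gsub(prefix, "") }.compact
  paths.uniq.each { |path| @domain.to_parse << path }
end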
I am implementing a simple program in Celluloid that ideally will run a few actors in parallel, each of which will compute something, and then send its result back to a main actor, whose job is simply to aggregate results.
Following this FAQ, I introduced a SupervisionGroup, like this:
module Shuffling
  class AggregatorActor
    include Celluloid

    def initialize(shufflers)
      @shufflerset = shufflers
      @results = {}
    end

    def add_result(result)
      @results.merge! result
      @shufflerset = @shufflerset - result.keys
      if @shufflerset.empty?
        self.output
        self.terminate
      end
    end

    def output
      puts @results
    end
  end

  class EvalActor
    include Celluloid

    def initialize(shufflerClass)
      @shuffler = shufflerClass.new
      self.async.runEvaluation
    end

    def runEvaluation
      # computation here, which yields result
      Celluloid::Actor[:aggregator].async.add_result(result)
      self.terminate
    end
  end

  class ShufflerSupervisionGroup < Celluloid::SupervisionGroup
    shufflers = [RubyShuffler, PileShuffle, VariablePileShuffle, VariablePileShuffleHuman].to_set
    supervise AggregatorActor, as: :aggregator, args: [shufflers.map { |sh| sh.new.name }]

    shufflers.each do |shuffler|
      supervise EvalActor, as: shuffler.name.to_sym, args: [shuffler]
    end
  end

  ShufflerSupervisionGroup.run
end
I terminate the EvalActors after they're done, and I also terminate the AggregatorActor when all of the workers are done.
However, the supervision thread stays alive and keeps the main thread alive. The program never terminates.
If I send .run! to the group instead, then the main thread terminates right after it, and nothing works.
What can I do to terminate the group (or, in group terminology, finalize it, I suppose) after the AggregatorActor terminates?
What I did in the end was change the AggregatorActor to have a wait_for_results method:
class AggregatorActor
  include Celluloid

  def initialize(shufflers)
    @shufflerset = shufflers
    @results = {}
  end

  def wait_for_results
    sleep 5 while not @shufflerset.empty?
    self.output
    self.terminate
  end

  def add_result(result)
    @results.merge! result
    @shufflerset = @shufflerset - result.keys
    puts "Results for #{result.keys.inspect} recorded, remaining: #{@shufflerset.inspect}"
  end

  def output
    puts @results
  end
end
And then I got rid of the SupervisionGroup (since I didn't need supervision, i.e. rerunning of failed actors), and I used it like this:
shufflers = [RubyShuffler, PileShuffle, VariablePileShuffle, VariablePileShuffleHuman, RiffleShuffle].to_set

Celluloid::Actor[:aggregator] = AggregatorActor.new(shufflers.map { |sh| sh.new.name })
shufflers.each do |shuffler|
  Celluloid::Actor[shuffler.name.to_sym] = EvalActor.new shuffler
end

Celluloid::Actor[:aggregator].wait_for_results
That doesn't feel very clean; it would be nice if there were a cleaner way, but at least this works.
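For what it's worth, a possibly cleaner variant is Celluloid's futures, which let the calling thread block until each result arrives without sleep-polling. A sketch, assuming the evaluation can be exposed as a plain method that returns its result:

class EvalActor
  include Celluloid

  def initialize(shuffler_class)
    @shuffler = shuffler_class.new
  end

  def run_evaluation
    # computation here, which yields result
    result
  end
end

# future.run_evaluation returns immediately; .value blocks until done.
futures = shufflers.map { |sh| EvalActor.new(sh).future.run_evaluation }
futures.each { |f| puts f.value }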
I'm converting a generator over from RubiGen and would like to make it so the group of tasks in Thor::Group does not complete if a condition isn't met.
The RubiGen generator looked something like this:
def initialize(runtime_args, runtime_options = {})
  super
  usage if args.size != 2
  @name = args.shift
  @site_name = args.shift
  check_if_site_exists
  extract_options
end

def check_if_site_exists
  unless File.directory?(File.join(destination_root, 'lib', 'sites', site_name.underscore))
    $stderr.puts "******No such site #{site_name} exists.******"
    usage
  end
end
So it would show a usage banner and exit if the site hadn't been generated yet.
What is the best way to recreate this using Thor?
This is my task:
class Page < Thor::Group
  include Thor::Actions
  source_root File.expand_path('../templates', __FILE__)

  argument :name
  argument :site_name
  argument :subtype, :optional => true

  def create_page
    check_if_site_exists
    page_path = File.join('lib', 'sites', "#{site_name}")
    template('page.tt', "#{page_path}/pages/#{name.underscore}_page.rb")
  end

  def create_spec
    base_spec_path = File.join('spec', 'isolation', "#{site_name}")
    if subtype.nil?
      spec_path = base_spec_path
    else
      spec_path = File.join("#{base_spec_path}", 'isolation')
    end
    template('functional_page_spec.tt', "#{spec_path}/#{name.underscore}_page_spec.rb")
  end

  protected

  def check_if_site_exists # :nodoc:
    $stderr.puts "#{site_name} does not exist." unless File.directory?(File.join(destination_root, 'lib', 'sites', site_name.underscore))
  end
end
After looking through the generators for the spree gem, I added a method that runs first, checks for the site, prints an error message to the console, and exits with code 1 if the site is not found. The code looks something like this:
def check_if_site_exists
  unless File.directory?('path/to/site') # placeholder path
    say "site does not exist."
    exit 1
  end
end
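Another option (a sketch, not from the spree generators): raise Thor::Error instead of calling exit directly; Thor prints the message to stderr and aborts the run. Depending on your Thor version you may need to override exit_on_failure? for the process to exit with a non-zero status:

class Page < Thor::Group
  include Thor::Actions

  # Tells Thor to exit(1) when a Thor::Error is raised; on some older
  # versions this is already the behavior by default.
  def self.exit_on_failure?
    true
  end

  protected

  def check_if_site_exists
    site_dir = File.join(destination_root, 'lib', 'sites', site_name.underscore)
    raise Thor::Error, "No such site #{site_name} exists." unless File.directory?(site_dir)
  end
end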