I have a problem with sleep in a loop in Ruby - ruby

I'm writing a chat bot for Twitch as a hobby. I want the bot to post a word in chat on a timer, e.g. write "hello", wait 10 seconds, then write it again.
I got that working on its own, but my code also has an answer system: when anyone writes !hey in chat, the bot answers hello to them. When I tried to add the timed message inside that same loop, it stopped working.
until @socket.eof? do
  message = @socket.gets
  puts message
  if message.match(/PRIVMSG ##{@channel} :(.*)$/)
    a = Time.now
    b = Time.now + 5
    while a < b do
      write_to_chat("!prime")
      a += 4
      sleep(20)   # this sleep blocks the whole read loop
    end
  end
  if message.match(/PRIVMSG ##{@channel} :(.*)$/)
    content = $~[1]
    username = message.match(/@(.*)\.tmi\.twitch\.tv/)[1]
    if content.include? 'theoSea'
      write_to_chat(" theoAse theoAse theoAse")
    end
  end
end

I solved it with a scheduler.
I wanted code that writes to chat every 5 minutes. I tried lots of methods, but the sleep command blocks the read loop.
Then I found this link: "https://github.com/jmettraux/rufus-scheduler"
Everything is explained there; you can use that gem.
Thanks for everything, guys.
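For reference, here is a minimal sketch of how the scheduler approach can look; it assumes the write_to_chat method and @socket loop from the question, and runs the timed message in the scheduler's own thread so the chat-reading loop is never put to sleep:

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# post the timed message every 5 minutes, independently of the read loop
scheduler.every '5m' do
  write_to_chat('!prime')
end

# the normal answer loop keeps running and never blocks on sleep
until @socket.eof? do
  message = @socket.gets
  # ... handle !hey and the other commands here ...
end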

Related

How to request a channel's videos in Ruby

I'm trying to request a channel's videos for a Discord notification, which means I request the data once per minute.
So far I can only reach them through search, and that costs 100 quota units per request.
This is just a hobby project and I don't want to pay for extra quota.
Does anyone have an idea how I can get the last uploaded videos without using search?
Here is my code, if you're interested:
require "yt"
Yt.configure do |config|
config.api_key = 'mykey'
end
channel1 = Yt::Channel.new id: 'UCQjhFO_CA5e1-Ymopbfmx5Q'
videos = channel1.videos
video2 = videos.where(id:' UCQjhFO_CA5e1-Ymopbfmx5Q').map(&:id)
puts video2
Thanks.
I solved it with playlists.
First you should create a playlist and add all the videos to that playlist; after that you can use this code:
require "yt"

Yt.configure do |config|
  config.api_key = 'Yourkey'
end

item  = Yt::Playlist.new(id: 'PLj-AxDUBuKVyraMkOwGloWO_zcUB0XNH-').playlist_items.first
item1 = item.video_id   # id of the first item in the playlist
It costs 5 quota units per request.
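As a usage sketch (the playlist id is the one above; notify_discord is a hypothetical helper standing in for whatever posts to Discord), the per-minute check can simply compare the newest playlist item against the last id it saw:

require "yt"

Yt.configure { |config| config.api_key = 'Yourkey' }

playlist  = Yt::Playlist.new(id: 'PLj-AxDUBuKVyraMkOwGloWO_zcUB0XNH-')
last_seen = nil

loop do
  newest = playlist.playlist_items.first.video_id
  if newest != last_seen
    notify_discord(newest)   # hypothetical helper that sends the Discord notification
    last_seen = newest
  end
  sleep 60                   # poll once per minute, as in the question
end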

How to push messages from unacked to ready

My question is similar to one asked previously, but that question never got an answer. I have a consumer that calls a web service for each message. If that web service does not respond for some reason, I want the consumer not to process the RabbitMQ message but to re-enqueue it so it can be processed later. My consumer is the following:
require File.expand_path('../config/environment.rb', __FILE__)

conn = Rabbit.connect
conn.start

ch = conn.create_channel
x  = ch.exchange("d_notification_ex", :type => "x-delayed-message",
                 :arguments => { "x-delayed-type" => "direct" })
q  = ch.queue("d_notification_q", :durable => true)
q.bind(x)

p 'Wait ....'
q.subscribe(:manual_ack => true, :block => true) do |delivery_info, properties, body|
  datos = JSON.parse(body)
  if datos['status'] == 'request'
    # call the web service with the parsed JSON
    result = Notification.send_payment_notification(datos.to_json)
  else
    # call the web service with the raw body
    result = Notification.send_payment_notification(body)
  end
  # If the web server is down, result is nil and the message is never acked,
  # so RabbitMQ leaves it in the Unacked state and it is not processed later.
  # I want it to go back to the queue and be evaluated again afterwards.
  unless result.nil?
    ch.ack(delivery_info.delivery_tag)
  end
end
(Screenshot of the RabbitMQ management UI omitted.)
Is there some way that, instead of removing the element from the queue, the statement ch.ack(delivery_info.delivery_tag) could leave it so it can be processed later? Any ideas? Thanks
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Try this:
if result.nil?
  ch.nack(delivery_info.delivery_tag)
else
  ch.ack(delivery_info.delivery_tag)
end
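A hedged note on this suggestion: as far as I know, Bunny's Channel#nack takes the requeue flag as its third argument and it defaults to false, so if the goal is to push the message back to Ready it may need to be passed explicitly, along the lines of:

if result.nil?
  # multiple = false, requeue = true: put the message back in the Ready state
  ch.nack(delivery_info.delivery_tag, false, true)
else
  ch.ack(delivery_info.delivery_tag)
end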
I decided to send the data back to the queue with a "producer inside the consumer" style; my code now looks like this:
if result.eql? 'ok'
  ch.ack(delivery_info.delivery_tag)
else
  if datos['count'] < 5
    datos['count'] += 1
    d_time = 1000
    # republish through the delayed exchange; the copy becomes Ready again after d_time ms
    x.publish(datos.to_json, :persistent => true, :headers => { "x-delay" => d_time })
  end
end
However, I was forced to include one more attribute in the JSON, count, so that the message does not stay in an infinite retry cycle.
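For completeness, a hypothetical sketch of the producer side (not shown in the question) that seeds this count field when the message is first published:

# hypothetical producer-side snippet: count starts at zero so the
# consumer's datos['count'] < 5 check works on the first redelivery
payload = { 'status' => 'request', 'count' => 0 }
x.publish(payload.to_json, :persistent => true)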

Using parfor and labSend/labReceive

I want to run two MATLAB scripts in parallel for a project and communicate between them. The purpose is to have one script do image analysis and send the results to the other, which uses them for further calculations (time consuming, but not related to the task of finding things in the images). Since both tasks are time consuming, and should preferably run in real time, I believe parallelization is necessary.
To get a feel for how this should be done, I created a test script to find out how to communicate between the two.
The first script takes user input using the built-in function input, then sends it to the other with labSend; the second receives it and prints it.
function [blarg] = inputStuff(blarg)
mpiInit(); % added because of the error message, but it does not help...
for i = 1:2
    labBarrier; % added because of the error message
    inp = input('Enter a number to write');
    labSend(inp);
    if (inp == 0)
        break;
    else
        i = 1;
    end
end
end
function [blarg] = testWrite(blarg)
mpiInit(); % added because of the error message, but it does not help
par = 0;
if (blarg == 0)
    par = 1;
end
for i = 1:10
    if (par == 1)
        labBarrier
        delta = labReceive();
        i = 1;
    else
        delta = input('Enter number to write');
    end
    if (delta == 0)
        break;
    end
    s = strcat('This lab no', num2str(labindex), '. Delta is = ')
    delta
end
end
%% This is the file test_parfor.m
funlist = {@inputStuff, @testWrite};
matlabpool(2);
mpiInit(); % added because of the error message, but it does not help
parfor i = 1:2
    funlist{i}(0);
end
matlabpool close;
Then, when the code is run, the following error message appears:
Starting matlabpool using the 'local' profile ... connected to 2 labs.
Error using parallel_function (line 589)
The MPI implementation has not yet been loaded. Please
call mpiInit.
Error stack:
testWrite.m at 11
Error in test_parfor (line 8)
parfor i=1:2
Calling the method mpiInit does not help (called as shown in the code above).
Nowhere in the examples MathWorks provides in the documentation, or on their website, is this error shown or explained.
Any help is appreciated!
You would typically use constructs such as labSend, labReceive and labBarrier within an spmd block, rather than a parfor block.
parfor is intended for implementing embarrassingly parallel algorithms, in other words algorithms that consist of multiple independent tasks that can be run in parallel and do not require communication between tasks.
I'm stretching my knowledge here (perhaps someone more expert can correct me), but as I understand things, parfor does not set up an MPI ring for communication between workers, which is probably the explanation for the (rather uninformative) error message you're getting.
An spmd block enables communication between workers using labSend, labReceive and labBarrier. There are quite a few examples of using them all in the documentation.
Sam is right that the MPI functionality is not enabled during parfor, only during spmd. You need to do something more like this:
spmd
funlist{labindex}(0);
end
(Sam is also quite right that the error message you saw is pretty unhelpful)

Rate Exceeding in workflow_execution polling

I am currently trying to modify a plugin for posting metrics to New Relic via AWS. I have successfully made the plugin post metrics from SWF to New Relic (not originally in the plugin), but I run into a problem if the program runs for too long.
When the program has been running for about 10 minutes I get the following error:
Error occurred in poll cycle: Rate exceeded
I believe this is coming from my polling SWF for the workflow executions:
domain.workflow_executions.each do |execution|
  starttime = execution.started_at
  endtime   = execution.closed_at
  isOpen    = execution.open?
  status    = execution.status

  if endtime != nil
    running_workflow_runtime_total += (endtime - starttime)
    number_of_completed_executions += 1
  end

  if status.to_s == "open"
    openCount += 1
  elsif status.to_s == "completed"
    completedCount += 1
  elsif status.to_s == "failed"
    failedCount += 1
  elsif status.to_s == "timed_out"
    timed_outCount += 1
  end
end
This is called in a polling cycle every 60 seconds.
Is there a way to set the polling rate? Or another way to get the workflow executions?
Thanks. Here's a link to the Ruby SDK for SWF => link
The issue is likely that you are creating a large number of workflow executions and each iteration through the loop in workflow_executions is causing a lookup, which eventually is exceeding your rate limit.
This could also be getting a bit expensive, so be careful.
It's not clear what you're really trying to do, so I can't tell you how to fix it unless you post all your code (or the parts around calls to SWF).
You can see here:
https://github.com/aws/aws-sdk-ruby/blob/05d15cd1b6037e98f2db45f8c2597014ee376a59/lib/aws/simple_workflow/workflow_execution_collection.rb
that a call is made to SWF for each workflow in the collection.
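One low-tech way to stay under the limit, sketched here under the assumption that the per-execution attribute reads (status, closed_at, and so on) are what trigger the throttling, is simply to spread those reads out across the 60-second poll cycle rather than firing them back to back:

# Minimal throttling sketch, not the plugin's actual code; 'domain' and the
# counters are the ones from the question.
pause = 0.2   # seconds between executions; tune so a full pass fits your rate limit

domain.workflow_executions.each do |execution|
  status  = execution.status     # each of these attribute reads can hit the SWF API
  endtime = execution.closed_at
  # ... accumulate the same counters as in the question ...
  sleep pause                    # throttle ourselves instead of hitting "Rate exceeded"
end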

Ruby: begin, sleep, retry: where to put incrementer

I have a method rate_limited_follow that takes my Twitter user account and follows all the users in an array users. Twitter has strict rate limits, so the method deals with that by sleeping for 15 minutes and then retrying. (I didn't write this method; I got it from the Twitter Ruby gem API docs.) You'll notice that it checks the number of attempts against MAX_ATTEMPTS.
My users array has about 400 users that I'm trying to follow. It adds 15 users at a time (when the rate limit kicks in), then sleeps for 15 minutes. Since I set the MAX_ATTEMPTS constant to 3 (just to test it), I expected it to stop trying once it had added 45 users (3 times 15), but it has gone past that, continuing to add about 15 users every fifteen minutes. So it seems as if num_attempts is somehow staying below 3, even though it has gone through this cycle more than 3 times. Is there something I don't understand about the code? Once sleep finishes and it hits retry, where does execution resume? Is there some reason num_attempts isn't incrementing?
Calling the method in the loop
>> users.each do |i|
?> rate_limited_follow(myuseraccount, i)
>> end
Method definition with constant
MAX_ATTEMPTS = 3

def rate_limited_follow(account, user)
  num_attempts = 0
  begin
    num_attempts += 1
    account.twitter.follow(user)
  rescue Twitter::Error::TooManyRequests => error
    if num_attempts <= MAX_ATTEMPTS
      sleep(15 * 60) # minutes * 60 seconds
      retry
    else
      raise
    end
  end
end
Each call to rate_limited_follow resets your number of attempts; or, to rephrase, you are keeping track of attempts per user rather than attempts over your entire array of users.
Hoist num_attempts' initialization out of rate_limited_follow, so that it isn't reset on each call, and you'll get the behavior you're looking for.
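As a minimal sketch of that idea (inlining the loop for brevity; myuseraccount and users are the objects from the question, and the counter now counts rate-limit hits across the whole run rather than per user):

MAX_ATTEMPTS = 3
num_attempts = 0   # hoisted: shared across all users, never reset per call

users.each do |user|
  begin
    myuseraccount.twitter.follow(user)
  rescue Twitter::Error::TooManyRequests
    num_attempts += 1
    if num_attempts <= MAX_ATTEMPTS
      sleep(15 * 60)   # wait out the rate-limit window, then retry this user
      retry
    else
      raise            # give up after MAX_ATTEMPTS rate-limit hits in total
    end
  end
end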
