Here's a piece of code from my IRC bot. It checks for new tweets from a specific account and then posts each new one to the channel.
Now, is there a better way to check for new tweets? That for i in 1..999999999 feels a bit suboptimal, and DDoS-y.
tweetzsr = {}
xzsr = {}
zsr = {}

on :message, ".start zsr" do |m|
  if zsr[m.channel] == true
    m.reply "already doing it.."
  else
    m.reply "ok."
    zsr[m.channel] = true
    for i in 1..99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
      sleep 60
      puts "#{xzsr[m.channel]} = x, tweetzsr = #{tweetzsr[m.channel]}"
      tweetzsr[m.channel] = Twitter.user_timeline("twiitterracount").first.text
      if xzsr[m.channel] == tweetzsr[m.channel]
        nil
      else
        m.reply "#{tweetzsr[m.channel]} - via twitter feed"
        xzsr[m.channel] = tweetzsr[m.channel]
      end
    end
  end
end
First of all, for an infinite loop use the loop method.
The best fit for this kind of thing is Twitter's streaming API: your client opens a single long-lived connection, and Twitter then pushes any new data to it as it arrives. For that there is a gem called TweetStream.
Example usage:
TweetStream::Client.new.userstream do |status|
  m.reply "#{status.text} - via twitter"
end
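In practice the streaming client also needs OAuth credentials, and to watch a specific account (rather than your own timeline) you would follow its numeric user id. A rough, untested sketch; the keys and account_id below are placeholders, not values from your setup:

TweetStream.configure do |config|
  config.consumer_key       = "YOUR_CONSUMER_KEY"
  config.consumer_secret    = "YOUR_CONSUMER_SECRET"
  config.oauth_token        = "YOUR_OAUTH_TOKEN"
  config.oauth_token_secret = "YOUR_OAUTH_TOKEN_SECRET"
  config.auth_method        = :oauth
end

# follow takes numeric user ids and yields each new status as it arrives
TweetStream::Client.new.follow(account_id) do |status|
  m.reply "#{status.text} - via twitter feed"
end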
Is that your infinite loop?
Change it to this to improve sanity
loop do
  sleep 60
  newtweet[m.channel] = Twitter.user_timeline("twitteracount").first.text
  next if oldtweet[m.channel] == newtweet[m.channel]
  m.reply "#{newtweet[m.channel]} - via twitter"
  oldtweet[m.channel] = newtweet[m.channel]
end
You're missing the two end keywords.
I'm pretty new to Ruby, having decided to try my hand at another programming language. I have been trying to get to grips with sockets, an area I'm not too familiar with in general. I have created a basic 'game' that lets a player move an icon around the screen, and I am now trying to turn it into something multiplayer using TCP sockets.
At the moment, when a player launches the game as a host, it creates a server. If the player starts a game as a client, it connects to the server. Currently it only works on the same machine, but the connection is established successfully: when a client connects, the server creates a username and sends it back to the client, which then uses it to create a player.
The problem comes when I try to communicate between the server and the client: messages appear to be sent from the server to the client, but only partially, and they are often truncated at either the beginning or the end.
If anyone could advise on what is causing this, it would be greatly appreciated. I am using Ruby with Gosu and Celluloid-IO.
SERVER CLASS
require 'src/Game.rb'
require 'celluloid/io'

class Server
  include Celluloid::IO
  finalizer :shutdown

  def initialize
    puts("server init")
    #super
    @playersConnected = Array.new
    @actors = Array.new
    @server = TCPServer.new("0.0.0.0", 28888)
    #@server.open(8088)
    puts @server
    async.run
  end

  def run
    loop {
      async.handle_connection @server.accept
    }
  end

  def readMessage(socket, player)
    msg = socket.recv(30)
    data = msg.split "|"
    @playersConnected.each do |p|
      if p.getUser() == player
        p.setX(data[1])
        p.setY(data[2])
      end
      #puts "END"
    end
  end

  def handle_connection(socket)
    _, port, host = socket.peeraddr
    user = "#{host}:#{port}"
    puts "#{user} has joined the game"
    @playersConnected.push(Player.new(user))
    socket.send "#{user}", 0
    #socket.send "#{Player}|#{user}", 0
    #socket.send "#{port}", 0
    puts "PLAYER LIST"
    @playersConnected.each do |player|
      puts player
    end
    Thread.new {
      loop {
        readMessage(socket, user)
        #divide message
        #find array index
        #update character position in array
      }
    }
    Thread.new {
      loop {
        @playersConnected.each do |p|
          msg = p.getUser() + "|" + "#{p.getX}" + "|" + "#{p.getY}"
          socket.send(msg, 0)
        end
      }
    }
  end
end

Server.new
CLIENT CLASS
require 'src/Game.rb'

class Client < Game
  #finalizer :shutdown

  def initialize
    super
    @socket = TCPSocket.new("localhost", 28888)
    #188.222.55.241
    while @player.nil?
      @player = Player.new(@socket.recv(1024))
      puts @player
      puts @player.getUser()
      @player.warp(0, 0)
      pulse
    end
  end

  def pulse
    Thread.new {
      loop {
        msg = @player.getUser() + "|" + "#{@player.getX()}" + "|" + "#{@player.getY()}"
        @socket.write(msg)
      }
    }
    Thread.new {
      loop {
        msg = @socket.recv(1024)
        data = msg.split "|"
        puts data[0]
        match = false
        @players.each do |player|
          if player.getUser() == data[0]
            puts "MATCHX"
            player.setX(data[1])
            player.setY(data[2])
            match = true
          end
          if match == false
            p = Player.new(data[0])
            #p.warp(data[1],data[2])
            @players.push(p)
          end
        end
        puts "end"
      }
    }
  end
end

Client.new.show
Side note: there is also a Host class, which mimics the Client class, except that it calls the server. I am aware this is a terrible way to do things; I intend to fix it once I overcome the current issue.
Many thanks in advance
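A note on the truncation symptom, in case it helps: TCP is a byte stream, so a single recv call is not guaranteed to line up with a single send; writes can be split apart or run together on the wire. One common remedy is to delimit each message explicitly, for example with a newline, and read whole lines on the receiving side. A rough, untested sketch of that idea (user, x and y are placeholders for whatever currently goes into msg, not names from the code above):

# Sending side: terminate every message with a newline so the reader
# can tell where one message ends and the next begins.
socket.puts("#{user}|#{x}|#{y}")

# Receiving side: gets reads up to the next newline, so each call
# returns exactly one complete message (or nil when the peer disconnects).
while (line = socket.gets)
  name, new_x, new_y = line.chomp.split("|")
  # ...update the matching player with name, new_x, new_y...
end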
I'm trying to build a sub-domain brute forcer for use with my clients - I work in security/pen testing.
Currently I am able to get Resolv to look up around 70 hosts in 10 seconds, give or take, and I wanted to know if there is a way to get it to do more. I have seen alternative scripts out there, mainly Python based, that can achieve far greater speeds than this. I don't know how to increase the number of requests Resolv makes in parallel, or whether I should split the list up. Please note I have put Google's DNS servers in the sample code, but I will be using internal ones for live usage.
My rough code for debugging this issue is:
require 'resolv'

def subdomains
  puts "Subdomain enumeration beginning at #{Time.now.strftime("%H:%M:%S")}"
  subs = []
  domains = File.open("domains.txt", "r") # list of domain names, line by line
  Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])
  File.open("tiny.txt", "r").each_line do |subdomain|
    subdomain.chomp!
    domains.each do |d|
      puts "Checking #{subdomain}.#{d}"
      ip = Resolv.new.getaddress "#{subdomain}.#{d}" rescue ""
      if ip != nil
        subs << subdomain+"."+d << ip
      end
    end
  end
  test = subs.each_slice(4).to_a
  test.each do |z|
    if !z[1].nil? and !z[3].nil?
      puts z[0] + "\t" + z[1] + "\t\t" + z[2] + "\t" + z[3]
    end
  end
  puts "Finished at #{Time.now.strftime("%H:%M:%S")}"
end

subdomains
domains.txt is my list of client domain names, for example google.com, bbc.co.uk, apple.com, and tiny.txt is a list of potential subdomain names, for example ftp, www, dev, files, upload. Resolv will then look up files.bbc.co.uk, for example, and let me know if it exists.
One thing is that you create a Resolv instance configured with the Google nameservers but never use it; the getaddress call is made on a brand-new Resolv instance, so that instance is presumably using the default nameservers and not the Google ones. You could change the code to something like this:
resolv = Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])
# ...
ip = resolv.getaddress "#{subdomain}.#{d}" rescue ""
In addition, I suggest using the File.readlines method to simplify your code:
domains = File.readlines("domains.txt").map(&:chomp)
subdomains = File.readlines("tiny.txt").map(&:chomp)
Also, you're rescuing a failed lookup into an empty string, but on the next line you test for not nil, so every result passes that check, and I don't think that's what you want.
I've refactored your code, but not tested it. Here is what I came up with; it may be clearer:
def subdomains
  puts "Subdomain enumeration beginning at #{Time.now.strftime("%H:%M:%S")}"

  domains = File.readlines("domains.txt").map(&:chomp)
  subdomains = File.readlines("tiny.txt").map(&:chomp)
  resolv = Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])

  valid_subdomains = subdomains.each_with_object([]) do |subdomain, valid_subdomains|
    domains.each do |domain|
      combined_name = "#{subdomain}.#{domain}"
      puts "Checking #{combined_name}"
      ip = resolv.getaddress(combined_name) rescue nil
      valid_subdomains << "#{combined_name}#{ip}" if ip
    end
  end

  valid_subdomains.each_slice(4).each do |z|
    if z[1] && z[3]
      puts "#{z[0]}\t#{z[1]}\t\t#{z[2]}\t#{z[3]}"
    end
  end

  puts "Finished at #{Time.now.strftime("%H:%M:%S")}"
end
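Note that this is just the method definition; as in your original script, you would still need the require at the top of the file and a call to the method at the bottom to actually run it:

require 'resolv'

# ... def subdomains ... (method definition as above)

subdomains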
Also, you might want to check out the dnsruby gem (https://github.com/alexdalitz/dnsruby). It might do what you want to do better than Resolv.
[Note: I've rewritten the code so that it fetches the IP addresses in chunks. Please see https://gist.github.com/keithrbennett/3cf0be2a1100a46314f662aea9b368ed. You can modify the RESOLVE_CHUNK_SIZE constant to balance performance with resource load.]
I've rewritten this code using the dnsruby gem (written mainly by Alex Dalitz in the UK, and contributed to by myself and others). This version uses asynchronous message processing so that all requests are being processed pretty much simultaneously. I've posted a gist at https://gist.github.com/keithrbennett/3cf0be2a1100a46314f662aea9b368ed but will also post the code here.
Note that since you are new to Ruby, there are lots of things in the code that might be instructive to you, such as method organization, use of Enumerable methods (e.g. the amazing 'partition' method), the Struct class, rescuing a specific Exception class, %w, and Benchmark.
NOTE: LOOKS LIKE STACK OVERFLOW ENFORCES A MAXIMUM MESSAGE SIZE, SO THIS CODE IS TRUNCATED. GO TO THE GIST IN THE LINK ABOVE FOR THE COMPLETE CODE.
#!/usr/bin/env ruby

# Takes a list of subdomain prefixes (e.g. %w(ftp xyz)) and a list of domains (e.g. %w(nytimes.com afp.com)),
# creates the subdomains combining them, fetches their IP addresses (or nil if not found).

require 'dnsruby'
require 'awesome_print'

RESOLVER = Dnsruby::Resolver.new(:nameserver => %w(8.8.8.8 8.8.4.4))

# Experiment with this to get fast throughput but not overload the dnsruby async mechanism:
RESOLVE_CHUNK_SIZE = 50

IpEntry = Struct.new(:name, :ip) do
  def to_s
    "#{name}: #{ip ? ip : '(nil)'}"
  end
end

def assemble_subdomains(subdomain_prefixes, domains)
  domains.each_with_object([]) do |domain, subdomains|
    subdomain_prefixes.each do |prefix|
      subdomains << "#{prefix}.#{domain}"
    end
  end
end

def create_query_message(name)
  Dnsruby::Message.new(name, 'A')
end

def parse_response_for_address(response)
  begin
    a_answer = response.answer.detect { |a| a.type == 'A' }
    a_answer ? a_answer.rdata.to_s : nil
  rescue Dnsruby::NXDomain
    return nil
  end
end

def get_ip_entries(names)
  queue = Queue.new
  names.each do |name|
    query_message = create_query_message(name)
    RESOLVER.send_async(query_message, queue, name)
  end

  # Note: although map is used here, the record in the output array will not necessarily correspond
  # to the record in the input array, since the order of the messages returned is not guaranteed.
  # This is indicated by the lack of block variable specified (normally w/map you would use the element).
  # That should not matter to us though.
  names.map do
    _id, result, error = queue.pop
    name = _id
    case error
    when Dnsruby::NXDomain
      IpEntry.new(name, nil)
    when NilClass
      ip = parse_response_for_address(result)
      IpEntry.new(name, ip)
    else
      raise error
    end
  end
end

def main
  # domains = File.readlines("domains.txt").map(&:chomp)
  domains = %w(nytimes.com afp.com cnn.com bbc.com)

  # subdomain_prefixes = File.readlines("subdomain_prefixes.txt").map(&:chomp)
  subdomain_prefixes = %w(www xyz)

  subdomains = assemble_subdomains(subdomain_prefixes, domains)

  start_time = Time.now

  ip_entries = subdomains.each_slice(RESOLVE_CHUNK_SIZE).each_with_object([]) do |ip_entries_chunk, results|
    results.concat get_ip_entries(ip_entries_chunk)
  end

  duration = Time.now - start_time

  found, not_found = ip_entries.partition { |entry| entry.ip }

  puts "\nFound:\n\n"; puts found.map(&:to_s); puts "\n\n"
  puts "Not Found:\n\n"; puts not_found.map(&:to_s); puts "\n\n"

  stats = {
    duration:        duration,
    domain_count:    ip_entries.size,
    found_count:     found.size,
    not_found_count: not_found.size,
  }
  ap stats
end

main
I'm trying to practice pulling data from APIs with Ruby. I'm trying to pull video game news from Steam.
Below is my code.
The idea is that when the program is run, the user is prompted to enter a game ID between 200 and 440.
Anything outside that range doesn't exist, and the numbers aren't continuous.
Anyway, I'm trying to pass the gameID variable into the string:
"http://api.steampowered.com/ISteamNews/GetNewsForApp/v0002/?appid=#{gameID}&count=5&maxlength=300&format=json"
The string is wrapped in a function. When I try to run the program, the error says wrong number of arguments (0 for 1).
What am I doing wrong, and what am I missing? Many thanks in advance as usual :)
*been doing nothing but asking questions so far, hope to contribute someday once I get better :)
require 'json'
require 'HTTParty'

puts "----------------------------------------------------------"
puts "Welcome to my practice"
puts "The purpose of this exercise is to use the SteamAPI"
puts "to pull videogame news from Steam"
puts "----------------------------------------------------------"

reset = true

while reset
  puts "Please enter a game ID between 200 - 440"
  gameID = gets.to_i
  if gameID < 200
    puts "--Invalid input--"
    reset = true
  elsif gameID > 400
    puts "--Invalid input--"
    reset = true
  else
    reset = false
  end
end

puts "--------------------Loading API----------------------------"

def get_news( gameID )
  string = "http://api.steampowered.com/ISteamNews/GetNewsForApp/v0002/?appid=#{gameID}&count=5&maxlength=300&format=json"
  page = HTTParty.get( string )
  browse = page["appnews"]["newsitems"]
  browse.map do |content|
    {title: content["title"], contents: content["contents"]}
  end
end

def display_story( content )
  puts "Title: #{content[:title]}"
  puts "--------------------"
  puts " #{content[:contents]}"
  puts "--------------------"
end

get_news.each do |content|
  display_story( content )
end
You've defined get_news to take a gameID argument, but you haven't passed it anything. At the end of your file, you need get_news(gameID).each.
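In other words, the final loop at the bottom of the script becomes:

get_news( gameID ).each do |content|
  display_story( content )
end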
I wrote this auto-reply bot in Ruby; it is supposed to auto-reply with Cleverbot messages when I'm away:
require "cleverbot"
require "cinch"
$client = Cleverbot::Client.new
def get_answer(text)
reply = $client.write text
return reply
end
bot = Cinch::Bot.new do
configure do |c|
c.nick = "mybotsnickname"
c.server = "my.irc.testserver"
c.channels = ["#mychannel"]
end
on :message do |m|
m.reply m.user
m.reply get_answer(m.message)
end
end
bot.start
It works fine, but the session id changes with every message. What do I have to change to keep it? The best case scenario is that every user who writes to me gets a different session id at Cleverbot, so they have individual conversations.
I'm pretty new to Ruby.
I used: https://github.com/benmanns/cleverbot
and https://github.com/cinchrb/cinch
Comparing this to the structure of my cinch bot, I'd try the following:
1) Make get_answer a helper block and place it inside the bot = Cinch::Bot.new block:
helpers do
  def get_answer(text)
    reply = $client.write text
    return reply
  end
end
2) Replace
on :message do |m|
with
on :message do |m, text|
3) Replace
m.reply get_answer(m.message)
with
m.reply get_answer(text)
I suspect this should work, but I'm relatively new to Ruby as well.
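On the session id itself, one thing worth trying (untested, and assuming each Cleverbot::Client instance keeps its own conversation state, which is what the single global $client in your script suggests) is to hold one client per nick instead of one shared client, so every user gets an individual conversation:

# Inside the Cinch::Bot.new block: one Cleverbot client per IRC nick.
helpers do
  def get_answer(nick, text)
    $clients ||= Hash.new { |hash, key| hash[key] = Cleverbot::Client.new }
    $clients[nick].write text
  end
end

on :message do |m|
  m.reply m.user
  m.reply get_answer(m.user.nick, m.message)
end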
I have a database full of URLs that I need to test HTTP response time for on a regular basis. I want to have many worker threads combing the database at all times for a URL that hasn't been tested recently, and if it finds one, test it.
Of course, this could cause multiple threads to snag the same URL from the database. I don't want this. So, I'm trying to use Mutexes to prevent this from happening. I realize there are other options at the database level (optimistic locking, pessimistic locking), but I'd at least prefer to figure out why this isn't working.
Take a look at this test code I wrote:
threads = []
mutex = Mutex.new

50.times do |i|
  threads << Thread.new do
    while true do
      url = nil
      mutex.synchronize do
        url = URL.first(:locked_for_testing => false, :times_tested.lt => 150)
        if url
          url.locked_for_testing = true
          url.save
        end
      end
      if url
        # simulate testing the url
        sleep 1
        url.times_tested += 1
        url.save
        mutex.synchronize do
          url.locked_for_testing = false
          url.save
        end
      end
    end
    sleep 1
  end
end

threads.each { |t| t.join }
Of course there is no real URL testing here. But what should happen is that, at the end of the day, each URL ends up with times_tested equal to 150, right?
(I'm basically just trying to make sure the mutexes and worker-thread mentality are working.)
But each time I run it, a few odd URLs here and there end up with times_tested equal to a much lower number, say 37, and locked_for_testing frozen on "true".
Now as far as I can tell from my code, if any URL gets locked, it will have to unlock. So I don't understand how some URLs are ending up "frozen" like that.
There are no exceptions and I've tried adding begin/ensure but it didn't do anything.
Any ideas?
I'd use a Queue and a master thread to pull the work you want done. If you have a single master, you control what's getting accessed. This isn't perfect, but it's not going to blow up because of concurrency; remember, if you aren't locking the database, a mutex doesn't really help you if something else accesses the db.
code completely untested
require 'thread'

queue = Queue.new
keep_running = true
# trap cntrl_c or something to reset keep_running

master = Thread.new do
  while keep_running
    # check if we need some work to do
    if queue.size == 0
      urls = URL.all(:times_tested.lt => 150)
      urls.each do |u|
        queue << u.id
      end
    end
    # keep from spinning the queue
    sleep(0.1)
  end
end

workers = []
50.times do
  workers << Thread.new do
    while keep_running
      # get an id
      id = queue.shift
      url = URL.get(id)
      # do something with the url
      url.save
      sleep(0.1)
    end
  end
end

workers.each do |w|
  w.join
end
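For the "trap cntrl_c" comment near the top, a minimal sketch of what that could look like, placed right after keep_running = true (note the workers may still sit blocked in queue.shift on an empty queue, so this only covers the flag itself):

# Flip the flag on Ctrl-C so the master and the workers fall out of
# their while keep_running loops on the next pass.
Signal.trap("INT") do
  keep_running = false
end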