Multiplayer game - Elixir channels - websocket

I'm a beginner in Elixir.
I have an Elixir application for a multiplayer game that simply replicates a received command to all players connected to the channel. This works, but there is some latency when the command is replicated. What is the best option to remove the latency?
To replicate the command to all players connected to the channel, I use the broadcast function. Is it the best function for this?
See the following code:
defmodule GameWeb.GameChannel do
  use GameWeb, :channel

  alias Game.GameState
  alias Game.Error

  # join to topic game:*
  def join("game:" <> code, %{"email" => email}, socket) do
    case Map.has_key?(GameState.games(), code) do
      true ->
        socket = assign(socket, :player, 2)

        game =
          code
          |> GameState.get_game()
          |> Map.put(:player2, %{:email => email, :score => 0})
          |> GameState.update_game()

        socket = assign(socket, :game, game)
        {:ok, game, socket}

      false ->
        socket = assign(socket, :player, 1)

        game =
          GameState.create_game(code)
          |> Map.put(:player1, %{:email => email, :score => 0})
          |> GameState.update_game()

        socket = assign(socket, :game, game)
        {:ok, game, socket}
    end
  end

  # topic not found
  def join(_topic, _payload, _socket) do
    {:error, Error.get(:resource_not_found)}
  end

  def handle_in("playerAction", payload, socket) do
    broadcast!(socket, "playerAction", Map.put(payload, :from_player, socket.assigns.player))
    {:noreply, socket}
  end
end
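For what it's worth, broadcast!/3 is the standard way to fan an event out to every subscriber of a topic, and the broadcast itself is usually not the bottleneck. One source of perceived latency with this pattern is that the acting player also waits for its own event to travel to the server and back. If your client already applies its own action locally when it pushes it (an assumption about your client code), you can skip echoing the event back to the sender with Phoenix.Channel.broadcast_from!/3. A minimal sketch of that variant:

def handle_in("playerAction", payload, socket) do
  # Send the event to every other player on the topic, but not back to the sender,
  # which is assumed to have already applied the action optimistically on its own screen.
  broadcast_from!(socket, "playerAction", Map.put(payload, :from_player, socket.assigns.player))
  {:noreply, socket}
end

This does not speed up delivery to the other players, but it removes one round trip for the player who performed the action.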

Absinthe - How to put_session in resolver function?

I'm using Absinthe and have a sign in mutation. When users send over valid credentials, I'd like to set a session cookie in the response via put_session.
The problem I'm facing is that I'm not able to access the conn from within a resolver function. That tells me that I'm not supposed to update the connection's properties from within a resolver.
Is it possible to do this with Absinthe? What are some alternative solutions?
It looks like one solution is:
1. In the resolver, resolve either an {:ok, _} or an {:error, _} as normal.
2. Add middleware after the resolver that pattern matches the resolution.value returned from step 1 and updates the GraphQL context.
3. Use the before_send feature of Absinthe (which has access to both the GraphQL context and the connection) to put_session before sending the response.
Code Example
Mutation:
mutation do
  @desc "Authenticate a user."
  field :login, :user do
    arg(:email, non_null(:string))
    arg(:password, non_null(:string))
    resolve(&Resolvers.Accounts.signin/3)

    middleware(fn resolution, _ ->
      case resolution.value do
        %{user: user, auth_token: auth_token} ->
          Map.update!(
            resolution,
            :context,
            &Map.merge(&1, %{auth_token: auth_token, user: user})
          )

        _ ->
          resolution
      end
    end)
  end
end
Resolver:
defmodule AppWeb.Resolvers.Accounts do
  alias App.Accounts

  def signin(_, %{email: email, password: password}, _) do
    if user = Accounts.get_user_by_email_and_password(email, password) do
      auth_token = Accounts.generate_user_session_token(user)
      {:ok, %{user: user, auth_token: auth_token}}
    else
      {:error, "Invalid credentials."}
    end
  end
end
Router:
defmodule AppWeb.Router do
  use AppWeb, :router

  pipeline :api do
    plug(:accepts, ["json"])
    plug(:fetch_session)
  end

  scope "/" do
    pipe_through(:api)

    forward("/api", Absinthe.Plug,
      schema: AppWeb.Schema,
      before_send: {__MODULE__, :absinthe_before_send}
    )

    forward("/graphiql", Absinthe.Plug.GraphiQL,
      schema: AppWeb.Schema,
      before_send: {__MODULE__, :absinthe_before_send}
    )
  end

  def absinthe_before_send(conn, %Absinthe.Blueprint{} = blueprint) do
    if auth_token = blueprint.execution.context[:auth_token] do
      put_session(conn, :auth_token, auth_token)
    else
      conn
    end
  end

  def absinthe_before_send(conn, _) do
    conn
  end
end
Not sure why you want to use a session; can't this be solved using a bearer token?
Please disregard the interfaces. :-)
Mutation.
object :user_token_payload do
  field(:user, :user)
  field(:token, :string)
end

object :login_user_mutation_response, is_type_of: :login_user do
  interface(:straw_hat_mutation_response)
  field(:errors, list_of(:straw_hat_error))
  field(:successful, non_null(:boolean))
  field(:payload, :user_token_payload)
end
Resolver.
def authenticate_user(args, _) do
  case Accounts.authenticate_user(args) do
    {:ok, user, token} -> MutationResponse.succeeded(%{user: user, token: token})
    {:error, message} -> MutationResponse.failed(StrawHat.Error.new(message))
  end
end
Now the client can pass along that token with the Authorization header, and pick it up with a plug.
defmodule MyAppWeb.Plugs.Context do
  import Plug.Conn

  alias MyApp.Admission

  def init(opts), do: opts

  def call(conn, _) do
    case build_context(conn) do
      {:ok, context} -> put_private(conn, :absinthe, %{context: context})
      _ -> put_private(conn, :absinthe, %{context: %{}})
    end
  end

  @doc """
  Return the current user context based on the authorization header.
  """
  def build_context(conn) do
    auth_header =
      get_req_header(conn, "authorization")
      |> List.first()

    if auth_header do
      "Bearer " <> token = auth_header

      case Admission.get_token_by_hash(token) do
        nil -> :error
        token -> {:ok, %{current_user: token.user}}
      end
    else
      :error
    end
  end
end
Then add the plug to your pipeline:
plug(MyAppWeb.Plugs.Context)
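For context, that plug goes into whichever router pipeline your GraphQL routes are piped through; a minimal sketch, assuming an :api pipeline shaped like the one in the first answer's router:

pipeline :api do
  plug(:accepts, ["json"])
  # Runs before Absinthe.Plug, so the context is set for every GraphQL request.
  plug(MyAppWeb.Plugs.Context)
end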
Then you can pick up the current user in your resolvers like so.
def create_note(%{input: input}, %{context: %{current_user: user}}) do
end

Ruby: Sending string over TCP Socket

I'm pretty new to Ruby, having decided to try my hand at another programming language. I have been trying to get to grips with sockets, which is an area I'm not too familiar with in general. I have created a basic 'game' that allows a player to move his icon around the screen; however, I am trying to make this into something that could be multiplayer using TCP sockets.
At the moment, when a player launches the game as a host, it creates a server. If the player starts a game as a client, it connects to the server. Currently, it only works on the same machine, but the connection has been established successfully, and when connecting as a client, the server creates a username, sends this back to the client, which then uses it to create a player.
The problem comes when I try to communicate between the server and the client: the messages appear to be sent from server to client, but only partially, and they appear to be largely truncated at either the beginning or the end.
If anyone could advise on what is causing this, it would be greatly appreciated. I am using Ruby with Gosu and Celluloid-IO.
SERVER CLASS
require 'src/Game.rb'
require 'celluloid/io'

class Server
  include Celluloid::IO

  finalizer :shutdown

  def initialize
    puts("server init")
    # super
    @playersConnected = Array.new
    @actors = Array.new
    @server = TCPServer.new("0.0.0.0", 28888)
    # @server.open(8088)
    puts @server
    async.run
  end

  def run
    loop {
      async.handle_connection @server.accept
    }
  end

  def readMessage(socket, player)
    msg = socket.recv(30)
    data = msg.split("|")
    @playersConnected.each do |p|
      if p.getUser() == player
        p.setX(data[1])
        p.setY(data[2])
      end
      # puts "END"
    end
  end

  def handle_connection(socket)
    _, port, host = socket.peeraddr
    user = "#{host}:#{port}"
    puts "#{user} has joined the game"
    @playersConnected.push(Player.new(user))
    socket.send "#{user}", 0
    # socket.send "#{Player}|#{user}", 0
    # socket.send "#{port}", 0
    puts "PLAYER LIST"
    @playersConnected.each do |player|
      puts player
    end
    Thread.new {
      loop {
        readMessage(socket, user)
        # divide message
        # find array index
        # update character position in array
      }
    }
    Thread.new {
      loop {
        @playersConnected.each do |p|
          msg = p.getUser() + "|" + "#{p.getX}" + "|" + "#{p.getY}"
          socket.send(msg, 0)
        end
      }
    }
  end

  Server.new
end
CLIENT CLASS
require 'src/Game.rb'

class Client < Game
  # finalizer :shutdown

  def initialize
    super
    @socket = TCPSocket.new("localhost", 28888)
    # 188.222.55.241
    while @player.nil?
      @player = Player.new(@socket.recv(1024))
      puts @player
      puts @player.getUser()
      @player.warp(0, 0)
      pulse
    end
  end

  def pulse
    Thread.new {
      loop {
        msg = @player.getUser() + "|" + "#{@player.getX()}" + "|" + "#{@player.getY()}"
        @socket.write(msg)
      }
    }
    Thread.new {
      loop {
        msg = @socket.recv(1024)
        data = msg.split("|")
        puts data[0]
        match = false
        @players.each do |player|
          if player.getUser() == data[0]
            puts "MATCHX"
            player.setX(data[1])
            player.setY(data[2])
            match = true
          end
          if match == false
            p = Player.new(data[0])
            # p.warp(data[1], data[2])
            @players.push(p)
          end
        end
        puts "end"
      }
    }
  end

  Client.new.show
end
Side note: there is also a Host class, which mimics the Client class, only it also starts the server. I am aware this is a terrible way to do things; I intend to fix it once I overcome the current issue.
Many thanks in advance.

Elixir - Check if string is empty

I am playing with Elixir and the Phoenix Framework for the first time, after following this tutorial.
I have a simple client/server app.
chat/lib/chat_web/room_channel.ex:
defmodule ChatWeb.RoomChannel do
  use Phoenix.Channel

  def join("room:lobby", _message, socket) do
    {:ok, socket}
  end

  def join("room:" <> _private_room_id, _params, _socket) do
    {:error, %{reason: "unauthorized"}}
  end

  def handle_in("new_msg", %{"body" => body}, socket) do
    broadcast! socket, "new_msg", %{body: body}
    {:noreply, socket}
  end
end
I want to block empty incoming messages (where body is an empty string):
def handle_in("new_msg", %{"body" => body}, socket) do
  # I guess the code should be here..
  broadcast! socket, "new_msg", %{body: body}
  {:noreply, socket}
end
How can I do that?
I want to block empty incoming messages (body is empty string)
You can add a guard clause for this: either when body != "" or when byte_size(body) > 0.
def handle_in("new_msg", %{"body" => body}, socket) when body != "" do
  ...
end
Now this function will only match if body is not "".
If you also want to handle the empty body case, you can add two clauses like this (no need for the guard clause anymore, since an empty body is caught by the first clause before the second is ever tried):
def handle_in("new_msg", %{"body" => ""}, socket) do
  # broadcast error here
end

def handle_in("new_msg", %{"body" => body}, socket) do
  # broadcast normal here
end
You can use the answer proposed by @Dogbert, but to be 100% sure the string is not empty (or only whitespace), you can wrap the broadcast! in a private helper function, or just wrap it in an if or unless (negative if) expression.
unless String.trim(body) == "" do
  broadcast! socket, "new_msg", %{body: body}
end
If you want to return an error message, you can use something a bit more complex, e.g.:
if String.trim(body) != "" do
  broadcast! socket, "new_msg", %{body: body}
else
  broadcast! socket, "error_msg", %{body: "Body is empty"}
end

How to use Phoenix.Channel.reply/2 for async reply to channel push

I'm trying to extend the example of Phoenix.Channel.reply/2 from the Phoenix documentation into a fully working example of asynchronously replying to Phoenix channel/socket push events:
Taken from https://hexdocs.pm/phoenix/Phoenix.Channel.html#reply/2:
def handle_in("work", payload, socket) do
  Worker.perform(payload, socket_ref(socket))
  {:noreply, socket}
end

def handle_info({:work_complete, result, ref}, socket) do
  reply ref, {:ok, result}
  {:noreply, socket}
end
I've reworked the example as follows:
room_channels.ex
...
def handle_in("work", job, socket) do
  send worker_pid, {self, job}
  {:noreply, socket}
end

def handle_info({:work_complete, result}, socket) do
  broadcast socket, "work_complete", %{result: result}
  {:noreply, socket}
end
...
worker.ex
...
receive do
  {pid, job} ->
    result = perform(job) # stub
    send pid, {:work_complete, result}
end
...
This solution works, but it doesn't rely on generating and passing a socket_ref with socket_ref(socket) and on Phoenix.Channel.reply/2. Instead it relies on Phoenix.Channel.broadcast/3.
The documentation implies that reply/2 is specifically used for this scenario of asynchronously replying to socket push events:
reply(arg1, arg2)
Replies asynchronously to a socket push.
Useful when you need to reply to a push that can’t otherwise be
handled using the {:reply, {status, payload}, socket} return from your
handle_in callbacks. reply/3 will be used in the rare cases you need
to perform work in another process and reply when finished by
generating a reference to the push with socket_ref/1.
When I generate and pass a socket_ref and rely on Phoenix.Channel.reply/2 for asynchronous replies to the socket push, I can't get it to work at all:
room_channels.ex
...
def handle_in("work", job, socket) do
  send worker_pid, {self, job, socket_ref(socket)}
  {:noreply, socket}
end

def handle_info({:work_complete, result, ref}, socket) do
  reply ref, {:ok, result}
  {:noreply, socket}
end
...
worker.ex
...
receive do
  {pid, job, ref} ->
    result = perform(job) # stub
    send pid, {:work_complete, result, ref}
end
...
My room_channels.ex handle_info function is called but reply/2 doesn't seem to send a message down the socket. I see no stacktrace on stderr or any output on stdout to indicate an error either. What is more, keeping track of a socket_ref seems to just add overhead to my code.
What is the benefit of using a socket_ref and reply/2 over my solution with broadcast/3 and how can I get the solution with reply/2 to work?
I was wrong, the example with Phoenix.Channel.reply/2 will work:
room_channels.ex
...
def handle_in("work", job, socket) do
  send worker_pid, {self, job, socket_ref(socket)}
  {:noreply, socket}
end

def handle_info({:work_complete, result, ref}, socket) do
  reply ref, {:ok, result}
  {:noreply, socket}
end
...
worker.ex
...
receive do
  {pid, job, ref} ->
    result = perform(job) # stub
    send pid, {:work_complete, result, ref}
end
...
In my implementation I made the mistake of sending a synchronous reply to the event push with the return value of {:reply, :ok, socket} instead of {:noreply, socket}.
On closer inspection of the websocket frames sent from the server to the client, I found that the browser did receive my server replies from reply ref, {:ok, result}, but that the associated callback was never called.
It seems that Phoenix's Socket.js client library accepts at most a single reply per push event.
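For reference, the same async-reply pattern can be sketched without a hand-rolled receive loop by starting a task from the channel process; Worker.perform/1 below is just a stand-in for whatever does the real work:

def handle_in("work", job, socket) do
  channel_pid = self()
  ref = socket_ref(socket)

  # Run the work off the channel process, then hand the result and push reference back to it.
  Task.start(fn ->
    result = Worker.perform(job)
    send(channel_pid, {:work_complete, result, ref})
  end)

  {:noreply, socket}
end

def handle_info({:work_complete, result, ref}, socket) do
  reply(ref, {:ok, result})
  {:noreply, socket}
end

The channel returns {:noreply, socket} immediately, and the single reply is sent later via reply/2, which is what the JavaScript client expects.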
Hope I'm not too late to this conversation.
I managed to use your code above and got the callback to work in the JavaScript.
The trick is to listen to the phx_reply event. Inside priv/static/app.js, each Channel JavaScript object has a list of predefined events in CHANNEL_EVENTS and is set up to listen for the reply event in the following code block:
this.on(CHANNEL_EVENTS.reply, function (payload, ref) {
  _this2.trigger(_this2.replyEventName(ref), payload);
});
What I did was, within the channel's on callback, listen for the phx_reply event:
channel.on("phx_reply", (data) => {
  console.log("DATA ", data);
  // Process the data
});
This has been tested on Elixir 1.3 and Phoenix 1.2
Hope that helps!

Am I using EventMachine in the right way?

I am using ruby-smpp and Redis to build a queue-based background worker that sends SMPP messages.
I am wondering if I am using EventMachine in the right way. It works, but it doesn't feel right.
#!/usr/bin/env ruby

# Sample SMS gateway that can receive MOs (mobile originated messages) and
# DRs (delivery reports), and send MTs (mobile terminated messages).
# MTs are, in the name of simplicity, entered on the command line in the format
# <sender> <receiver> <message body>
# MOs and DRs will be dumped to standard out.

require 'smpp'
require 'redis/connection/hiredis'
require 'redis'
require 'yajl'
require 'time'

LOGFILE = File.dirname(__FILE__) + "/sms_gateway.log"
PIDFILE = File.dirname(__FILE__) + '/worker_test.pid'

Smpp::Base.logger = Logger.new(LOGFILE)
# Smpp::Base.logger.level = Logger::WARN

REDIS = Redis.new

class MbloxGateway
  # MT id counter.
  @@mt_id = 0

  # expose SMPP transceiver's send_mt method
  def self.send_mt(sender, receiver, body)
    if sender =~ /[a-z]+/i
      source_addr_ton = 5
    else
      source_addr_ton = 2
    end

    @@mt_id += 1
    @@tx.send_mt(('smpp' + @@mt_id.to_s), sender, receiver, body, {
      :source_addr_ton => source_addr_ton
      # :service_type => 1,
      # :source_addr_ton => 5,
      # :source_addr_npi => 0,
      # :dest_addr_ton => 2,
      # :dest_addr_npi => 1,
      # :esm_class => 3,
      # :protocol_id => 0,
      # :priority_flag => 0,
      # :schedule_delivery_time => nil,
      # :validity_period => nil,
      # :registered_delivery => 1,
      # :replace_if_present_flag => 0,
      # :data_coding => 0,
      # :sm_default_msg_id => 0
    })
  end

  def logger
    Smpp::Base.logger
  end

  def start(config)
    # Write this worker's pid to a file
    File.open(PIDFILE, 'w') { |f| f << Process.pid }

    # The transceiver sends MT messages to the SMSC. It needs a storage with Hash-like
    # semantics to map SMSC message IDs to your own message IDs.
    pdr_storage = {}

    # Run EventMachine in a loop so we can reconnect when the SMSC drops our connection.
    loop do
      EventMachine::run do
        @@tx = EventMachine::connect(
          config[:host],
          config[:port],
          Smpp::Transceiver,
          config,
          self # delegate that will receive callbacks on MOs and DRs and other events
        )

        # Let the connection start before we check for messages
        EM.add_timer(3) do
          # Maybe there is some better way to do this. IDK, but it works!
          EM.defer do
            loop do
              # Pop a message
              message = REDIS.lpop 'messages:send:queue'
              if message # If there is a message, process it and check the queue again
                message = Yajl::Parser.parse(message, :check_utf8 => false) # Parse the message from JSON to a Ruby hash
                if !message['send_after'] or (message['send_after'] and Time.parse(message['send_after']) < Time.now)
                  self.class.send_mt(message['sender'], message['receiver'], message['body']) # Send the message
                  REDIS.publish 'log:messages', "#{message['sender']} -> #{message['receiver']}: #{message['body']}" # Push the message to the redis queue so we can listen to the channel
                else
                  REDIS.lpush 'messages:queue', Yajl::Encoder.encode(message)
                end
              else # If there is no message, sleep for a second
                sleep 1
              end
            end
          end
        end
      end
      sleep 2
    end
  end

  # ruby-smpp delegate methods
  def mo_received(transceiver, pdu)
    logger.info "Delegate: mo_received: from #{pdu.source_addr} to #{pdu.destination_addr}: #{pdu.short_message}"
  end

  def delivery_report_received(transceiver, pdu)
    logger.info "Delegate: delivery_report_received: ref #{pdu.msg_reference} stat #{pdu.stat}"
  end

  def message_accepted(transceiver, mt_message_id, pdu)
    logger.info "Delegate: message_accepted: id #{mt_message_id} smsc ref id: #{pdu.message_id}"
  end

  def message_rejected(transceiver, mt_message_id, pdu)
    logger.info "Delegate: message_rejected: id #{mt_message_id} smsc ref id: #{pdu.message_id}"
  end

  def bound(transceiver)
    logger.info "Delegate: transceiver bound"
  end

  def unbound(transceiver)
    logger.info "Delegate: transceiver unbound"
    EventMachine::stop_event_loop
  end
end

# Start the Gateway
begin
  puts "Starting SMS Gateway. Please check the log at #{LOGFILE}"

  # SMPP properties. These parameters work well with the Logica SMPP simulator.
  # Consult the SMPP spec or your mobile operator for the correct settings of
  # the other properties.
  config = {
    :host => 'server.com',
    :port => 3217,
    :system_id => 'user',
    :password => 'password',
    :system_type => 'type', # default given according to SMPP 3.4 Spec
    :interface_version => 52,
    :source_ton => 0,
    :source_npi => 1,
    :destination_ton => 1,
    :destination_npi => 1,
    :source_address_range => '',
    :destination_address_range => '',
    :enquire_link_delay_secs => 10
  }
  gw = MbloxGateway.new
  gw.start(config)
rescue Exception => ex
  puts "Exception in SMS Gateway: #{ex} at #{ex.backtrace.join("\n")}"
end
Some easy steps to make this code more EventMachine-ish:
Get rid of the blocking Redis driver, use em-hiredis
Stop using defer. Pushing work out to threads with the Redis driver will make things even worse as it relies on locks around the socket it's using.
Get rid of the add_timer(3)
Get rid of the inner loop; replace it by rescheduling a block for the next event loop using EM.next_tick. The outer one is somewhat unnecessary. You shouldn't loop around EM.run either; it's cleaner to properly handle a disconnect by doing a reconnect in your unbound method, by calling @@tx.reconnect, instead of stopping and restarting the event loop.
Don't sleep, just wait. EventMachine will tell you when new things come in on a network socket.
Here's how the core code around EventMachine would look with some of the improvements:
def start(config)
  File.open(PIDFILE, 'w') { |f| f << Process.pid }
  pdr_storage = {}

  EventMachine::run do
    @@tx = EventMachine::connect(
      config[:host],
      config[:port],
      Smpp::Transceiver,
      config,
      self
    )

    # Use a local here: Ruby does not allow assigning a constant inside a method.
    redis = EM::Hiredis.connect

    pop_message = lambda do
      redis.lpop 'messages:send:queue' do |message|
        if message # If there is a message, process it and check the queue again
          message = Yajl::Parser.parse(message, :check_utf8 => false) # Parse the message from JSON to a Ruby hash
          if !message['send_after'] or (message['send_after'] and Time.parse(message['send_after']) < Time.now)
            self.class.send_mt(message['sender'], message['receiver'], message['body'])
            redis.publish 'log:messages', "#{message['sender']} -> #{message['receiver']}: #{message['body']}"
          else
            redis.lpush 'messages:queue', Yajl::Encoder.encode(message)
          end
        end
        EM.next_tick(&pop_message)
      end
    end

    # Kick off the first pop; each completed pop schedules the next one.
    pop_message.call
  end
end
Not perfect, and it could use some cleaning up too, but this is more what it should look like in an EventMachine manner. No sleeps, avoid using defer if possible, and don't use network drivers that potentially block; implement the traditional loop by rescheduling things on the next reactor tick. In terms of Redis, the difference is not that big, but it's more EventMachine-y this way imho.
Hope this helps. Happy to explain further if you still have questions.
You're doing blocking Redis calls in EM's reactor loop. It works, but isn't the way to go. You could take a look at em-hiredis to properly integrate Redis calls with EM.
