How can I access data inside a map in Elixir? - phoenix-framework

I have a simple chat app written using the Phoenix framework.
I want to access some data inside the socket.
This is the method I'm using for that:
def join("room:" <> _user, _, socket) do
  IO.inspect socket
  send self(), :after_join
  {:ok, socket}
end
It gives a nice map with all the details.
What is the best way to get all rooms (topic: "room:Testuser") available using this method?
This is the sample result shown in the console:
[info] JOIN room:Testuser to PhoenixChat.RoomChannel
Transport: Phoenix.Transports.WebSocket
Parameters: %{}
%Phoenix.Socket{assigns: %{user: "Testuser"}, channel: PhoenixChat.RoomChannel,
channel_pid: #PID<0.409.0>, endpoint: PhoenixChat.Endpoint,
handler: PhoenixChat.UserSocket, id: nil, joined: false,
pubsub_server: PhoenixChat.PubSub, ref: nil,
serializer: Phoenix.Transports.WebSocketSerializer, topic: "room:Testuser",
transport: Phoenix.Transports.WebSocket, transport_name: :websocket,
transport_pid: #PID<0.375.0>}
[info] Replied room:Testuser :ok

The thing you are tinkering with is not a map per se. It is what we usually call a struct! A struct is a map with well-defined fields (similar to objects you may know from other languages).
As you have already discovered, when you inspect it you can read all of its key-value pairs.
When you want to access a field of a struct, you can say struct.field. Please read the tutorial on the Elixir website for more information.
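For example, with the socket printed above, dot access inside join/3 might look like this (a minimal sketch; the field names come straight from the inspected %Phoenix.Socket{}):
def join("room:" <> _user, _, socket) do
  # dot access works because a struct is a map with a fixed set of keys
  IO.inspect socket.topic         # => "room:Testuser"
  IO.inspect socket.assigns.user  # => "Testuser"

  # pattern matching works too, since a struct is a map underneath
  %Phoenix.Socket{topic: topic} = socket
  IO.puts "joined #{topic}"

  send self(), :after_join
  {:ok, socket}
end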

Related

What's the right way to use PonyORM with FastAPI?

For a personal project I am using PonyORM with FastAPI; is there a classy way to keep a db_session through the whole async lifecycle of an endpoint call?
The PonyORM documentation talks about using the decorator and yield, but that didn't work for me, so after looking at other GitHub projects I found this workaround, which is working fine.
But I don't really know what's happening behind the scenes, or why Pony's documentation isn't accurate about the async topic.
def _enter_session():
    session = db_session(sql_debug=True)
    Request.pony_session = session
    session.__enter__()

def _exit_session():
    session = getattr(Request, 'pony_session', None)
    if session is not None:
        session.__exit__()

@app.middleware("http")
async def add_pony(request: Request, call_next):
    _enter_session()
    response = await call_next(request)
    _exit_session()
    return response
and then in a dependency, for example:
async def current_user(
        username: str = Depends(current_user_from_token)) -> User:
    with Request.pony_session:
        # db actions
and in an endpoint call:
@router.post("/token", response_model=Token)
async def login_for_access_token(
        request: Request,
        user_agent: Optional[str] = Header(None),
        form_data: OAuth2PasswordRequestForm = Depends()):
    status: bool = authenticate_user(
        form_data.username,
        form_data.password,
        request.client.host,
        user_agent)
@db_session
def authenticate_user(
        username: str,
        password: str,
        client_ip: str = 'Undefined',
        client_app: str = 'Undefined'):
    user: User = User.get(email=username)
If you guys have a better way or a good explanation, I would love to hear about it :)
I'm one of the PonyORM developers and a FastAPI user.
The problem with async and Pony is that Pony uses transactions, which in our understanding should be atomic. We also use a thread-local cache that could be picked up by another session if the context switches to another coroutine.
I agree that we should add information about this to the documentation.
To be sure everything will be okay, use db_session as a context manager and make sure you don't have async calls inside that block of code (see the sketch below).
If your endpoints are not asynchronous, you can also use the db_session decorator for them.
In Pony we agree that using ContextVar instead of Local should help with some cases.
The answer in one sentence is: use small, short-lived sessions and don't interrupt them with async.
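A minimal, self-contained sketch of that advice (the User entity, the SQLite binding, and the route are illustrative, not from the question):
from fastapi import FastAPI
from pony.orm import Database, Required, db_session

app = FastAPI()
db = Database()

class User(db.Entity):
    name = Required(str)

db.bind(provider="sqlite", filename="app.db", create_db=True)
db.generate_mapping(create_tables=True)

# A plain (non-async) endpoint: the transaction opens and closes inside
# the handler, and no await can suspend the coroutine mid-session.
@app.get("/users/{user_id}")
def read_user(user_id: int):
    with db_session:
        return User[user_id].to_dict()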
Try using a standard FastAPI dependency:
from fastapi import Depends

async def get_pony():
    with db_session(sql_debug=True) as session:
        yield session

async def current_user(
        username: str = Depends(current_user_from_token),
        pony_session = Depends(get_pony)) -> User:
    with pony_session:
        # db actions

How can I send messages to specific client using Faye Websockets?

I've been working on a web application which is essentially a web messenger, using Sinatra. My goal is to have all messages encrypted using PGP and to have full-duplex communication between clients using Faye WebSocket.
My main problem is being able to send messages to a specific client using Faye. On top of this, every message in a chatroom is saved twice, once per person, since it is PGP-encrypted.
So far I've thought of starting up a new socket object for every client and storing them in a hash, but I don't know if this approach is the most efficient one. I have seen that Socket.IO, for example, allows you to emit to a specific client, but it seems Faye WebSockets does not? I am also considering a pub/sub model, but once again I am not sure.
Any advice is appreciated, thanks!
I am iodine's author, so I might be biased in my approach.
I would consider naming a channel by the user ID (i.e. user1...user201983) and sending the message to the user's channel.
I think Faye will support this. I know that when using iodine's native WebSockets and built-in pub/sub, this is quite effective.
"So far I've thought of starting up a new socket object for every client and storing them in a hash..."
This is a very common mistake, often seen in simple examples.
It works only in single-process environments, and then you will have to recode the whole logic in order to scale your application.
The channel approach allows you to scale using Redis or any other pub/sub service without recoding your application's logic.
Here's a quick example you can run from the Ruby terminal (irb). I'm using plezi.io just to make it a bit shorter to code:
require 'plezi'
require 'json'

class Example
  def index
    "Use Websockets to connect."
  end

  # reject connections that don't carry an :id in the URL
  def pre_connect
    if !params[:id]
      puts "an attempt to connect without credentials was made."
      return false
    end
    true
  end

  # each client subscribes to a channel named after its own ID
  def on_open
    subscribe channel: params[:id]
  end

  # relay an incoming message to the channel named in its "to" field
  def on_message data
    begin
      msg = JSON.parse(data)
      if !msg["to"] || !msg["data"]
        puts "JSON message error", data
        return
      end
      msg["from"] = params[:id]
      publish channel: msg["to"].to_s, message: msg.to_json
    rescue => e
      puts "JSON parsing failed!", e.message
    end
  end
end

Plezi.route "/", Example
Iodine.threads = 1
exit # in irb, exiting triggers Plezi's at_exit hook and starts the server
To test this example, use a JavaScript client, maybe something like this:
// in browser tab 1
var id = 1;
ws = new WebSocket("ws://localhost:3000/" + id);
ws.onopen = function(e) { console.log("opened connection"); };
ws.onclose = function(e) { console.log("closed connection"); };
ws.onmessage = function(e) { console.log(e.data); };
ws.send_to = function(to, data) {
  this.send(JSON.stringify({to: to, data: data}));
}.bind(ws);

// in browser tab 2
var id = 2;
ws = new WebSocket("ws://localhost:3000/" + id);
ws.onopen = function(e) { console.log("opened connection"); };
ws.onclose = function(e) { console.log("closed connection"); };
ws.onmessage = function(e) { console.log(e.data); };
ws.send_to = function(to, data) {
  this.send(JSON.stringify({to: to, data: data}));
}.bind(ws);

// send a message from tab 2 to tab 1
ws.send_to(1, "hello!");

gRPC context on the client side

I am building a client/server system in Go, using gRPC and protobuf (and with a gRPC gateway to REST).
I use metadata in the context on the server side to carry authentication data from the client, and that works perfectly well.
Now, I'd like the server to set some metadata keys/values so that the client can get them alongside the response. How can I do that? Using SetHeader and SendHeader? Ideally, I'd like every single response from the server to carry that metadata (some kind of UnaryInterceptor, but on the response rather than the request?).
Here is the code for the server and for the client.
I finally found my way: https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md
So basically, grpc.SetHeader() + grpc.SendHeader() and grpc.SetTrailer() are exactly what I was looking for. On the client side, the grpc.Header() and grpc.Trailer() functions need to be passed to the RPC call, and their argument is a metadata.MD object to be filled.
On the client side, define your receiving metadata:
var header, trailer metadata.MD
Then, pass it to the SomeRPCCall() unary RPC:
response, err := client.SomeRPCCall(
    context.Background(),
    &proto.MyMessage{},
    grpc.Header(&header),
    grpc.Trailer(&trailer),
)
And now, you can check what's in your metadata:
for key, value := range header {
    fmt.Printf("%s => %s", key, value)
}
for key, value := range trailer {
    fmt.Printf("%s => %s", key, value)
}
On the server side, you can:
force the data to be sent right after the RPC is received (but before it's processed):
grpc.SendHeader(ctx, metadata.New(map[string]string{"my-key": "my-value"}))
or set & send the metadata at the end of the RPC process (along with the Status):
grpc.SetTrailer(ctx, metadata.New(map[string]string{"my-key": "my-value"}))
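For the "attach it to every response" part of the question, a server-side unary interceptor can stage the metadata after each handler runs; here is a minimal sketch (the interceptor name and the my-key/my-value pair are illustrative):
package main

import (
    "context"

    "google.golang.org/grpc"
    "google.golang.org/grpc/metadata"
)

// metadataInterceptor lets the handler run, then stages metadata that is
// sent with the response headers of every unary RPC.
func metadataInterceptor(
    ctx context.Context,
    req interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (interface{}, error) {
    resp, err := handler(ctx, req)
    // SetHeader only stages the metadata; it is flushed when the
    // response (or the status, on error) is written.
    _ = grpc.SetHeader(ctx, metadata.New(map[string]string{"my-key": "my-value"}))
    return resp, err
}

func main() {
    server := grpc.NewServer(grpc.UnaryInterceptor(metadataInterceptor))
    _ = server // register your services and call Serve() as usual
}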

Save Google Cloud Speech API operation(job) object to retrieve results later

I'm struggling to use the Google Cloud Speech API with the Ruby client (v0.22.2).
I can execute long-running jobs and can get results if I use
job.wait_until_done!
but this locks up a server for what can be a long period of time.
According to the API docs, all I really need is the operation name (id).
Is there any way of creating a job object from the operation name and retrieving it that way?
I can't seem to create a functional new job object, such as to use the id from @grpc_op.
What I want to do is something like:
speech = Google::Cloud::Speech.new(auth_credentials)
job = speech.recognize_job file, options
saved_job = job.to_json # or some element of that object such that I can retrieve it
Later, I want to do something like:
job_object = Google::Cloud::Speech::Job.new(saved_job)
job.reload!
job.done?
job.results
Really hoping that makes sense to somebody.
I'm struggling quite a bit with Google's Ruby clients, on the basis that everything seems to be translated into objects which are much more complex than the ones required to use the API.
Is there some trick that I'm missing here?
You can monkey-patch this functionality into the version you are using, but I would advise upgrading to google-cloud-speech 0.24.0 or later. With those more current versions you can use Operation#id and Project#operation to accomplish this.
require "google/cloud/speech"
speech = Google::Cloud::Speech.new
audio = speech.audio "path/to/audio.raw",
encoding: :linear16,
language: "en-US",
sample_rate: 16000
op = audio.process
# get the operation's id
id = op.id #=> "1234567890"
# construct a new operation object from the id
op2 = speech.operation id
# verify the jobs are the same
op.id == op2.id #=> true
op2.done? #=> false
op2.wait_until_done!
op2.done? #=> true
results = op2.results
Update: Since you can't upgrade, you can monkey-patch this functionality into an older version using the workaround described in GoogleCloudPlatform/google-cloud-ruby#1214:
require "google/cloud/speech"
# Add monkey-patches
module Google
Module Cloud
Module Speech
class Job
def id
#grpc.name
end
end
class Project
def job id
Job.from_grpc(OpenStruct.new(name: id), speech.service).refresh!
end
end
end
end
end
# Use the new monkey-patched methods
speech = Google::Cloud::Speech.new

audio = speech.audio "path/to/audio.raw",
                     encoding: :linear16,
                     language: "en-US",
                     sample_rate: 16000

job = audio.recognize_job

# get the job's id
id = job.id #=> "1234567890"

# construct a new job object from the id
job2 = speech.job id

# verify the jobs are the same
job.id == job2.id #=> true

job2.done? #=> false
job2.wait_until_done!
job2.done? #=> true
results = job2.results
OK, I have a very ugly way of solving the issue.
Get the id of the operation from the job object:
operation_id = job.grpc.grpc_op.name
Get an access token to manually use the REST API:
require "googleauth"
require "stringio"

json_key_io = StringIO.new(ENV["GOOGLE_CLOUD_SPEECH_JSON_KEY"])
authorisation = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: json_key_io,
  scope: "https://www.googleapis.com/auth/cloud-platform"
)
token = authorisation.fetch_access_token!
Make an API call to retrieve the operation details.
This will return with a "done" => true parameter once the results are in, and will include the results. If "done" => true isn't there, you'll have to poll again later until it is.
HTTParty.get(
  "https://speech.googleapis.com/v1/operations/#{operation_id}",
  headers: { "Authorization" => "Bearer #{token['access_token']}" }
)
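If you have to poll, a simple loop over that call could look like this (a sketch; the 5-second interval is arbitrary, and the "response"/"results" keys assume the v1 long-running recognize operation shape):
require "httparty"

response = nil
loop do
  response = HTTParty.get(
    "https://speech.googleapis.com/v1/operations/#{operation_id}",
    headers: { "Authorization" => "Bearer #{token['access_token']}" }
  )
  break if response.parsed_response["done"]
  sleep 5 # wait before polling again
end
# once done, the transcription lives under the operation's "response" key
results = response.parsed_response.dig("response", "results")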
There must be a better way of doing that. It seems such an obvious use case for the Speech API.
Anyone from Google in the house who can explain a much simpler/cleaner way of doing it?

(REDDIT) Error trying to subscribe to subreddits via API

I know that Snoo seems to be unmaintained, but I wanted to use a Ruby framework since I'm trying to improve my Ruby skills.
I'm trying to add some functionality, starting with subscribing and unsubscribing to subreddits. Link to the API doc.
My first attempt was with the built-in post method, which returned a 404 error:
def subscribe(subreddit)
  logged_in?
  post('/api/subscribe.json', body: { uh: @modhash, action: 'sub', sr: subreddit, api_type: 'json' })
end
Since the built-in post method was giving me a 404, I decided to try the HTTParty post method:
def subscribe(subreddit)
  logged_in?
  HTTParty.post('http://www.reddit.com/api/subscribe.json', body: { uh: @modhash, action: 'sub', sr: subreddit, api_type: 'json' })
end
That returns this:
pry(main)> reddit.subscribe('/r/nba')
=> {"json"=>{"errors"=>[["USER_REQUIRED", "please login to do that", nil]]}}
Does anyone know if I need to pass more info in the body or if I'm just sending a badly formed request? Thanks!
Also, before running reddit.subscribe I have verified that I'm logged in with a cookie and a modhash, can access my account info, etc.
Solution found:
def subscribe(subreddit)
  # query the subreddit for its 'about' info and get JSON back
  subreddit_json = self.subreddit_info(subreddit)
  # build the coded unique identifier for the targeted subreddit
  subreddit_id = subreddit_json['kind'] + "_" + subreddit_json['data']['id']
  # send the post request to the server
  server_response = self.class.post('/api/subscribe.json',
    body: { uh: @modhash, action: 'sub', sr: subreddit_id, api_type: 'json' })
end
The Reddit API doesn't accept the subreddit name as the value passed with 'sr' (e.g. sr: '/r/funny'). It requires the subreddit "type" (which is always 't5' for subreddits) and the unique forum id. The parameter passed looks something like sr: "t5_2qo4s". This information is available if you go to your target subreddit and add about.json, e.g. www.reddit.com/r/funny/about.json
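As a standalone sketch of that lookup (run outside the Snoo wrapper; the User-Agent string is arbitrary and the resulting fullname is illustrative):
require 'httparty'

# fetch the subreddit's 'about' data and build its "t5_<id>" fullname
about = HTTParty.get('https://www.reddit.com/r/funny/about.json',
                     headers: { 'User-Agent' => 'my-test-script/0.1' })
json = about.parsed_response
subreddit_id = "#{json['kind']}_#{json['data']['id']}" #=> something like "t5_2qh33"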
