Akka HTTP WebSocket client equivalent of this node.js - websocket

I have some user documentation that expresses how to use a websocket with this node snippet:
var socket = io("HOST:PORT");
socket.on('request-server', function() {
  socket.emit('server-type', 'red')
});
What would the equivalent client code be in Akka HTTP?
I have derived the following from the example in the Akka documentation. It isn't quite what I'd like to write, because:
- I think I need to connect and wait for the request-server event before sending any events, and I don't know how to do that.
- I don't know how to format the TextMessages in the Source to be equivalent to `socket.emit('server-type', 'red')`.
- It only prints "closed".
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher

val incoming: Sink[Message, Future[Done]] = Sink.foreach[Message] {
  case message: TextMessage.Strict => println(message.text)
  case z => println(z)
}

val outgoing = Source(List(TextMessage("'server-type': 'red'")))

val webSocketFlow = Http().webSocketClientFlow(
  WebSocketRequest("ws://localhost:3000/socket.io"))

val (upgradeResponse, closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.right)
    .toMat(incoming)(Keep.both)
    .run()

val connected = upgradeResponse.flatMap { upgrade =>
  if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
    Future.successful(Done)
  } else {
    throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
  }
}

connected.onComplete(println)
closed.foreach(_ => println("closed"))
What is the Akka client equivalent to the given socket.io code?

Your connection is getting closed immediately after the outgoing source runs out of messages, i.e. right after the single message is sent.
Check out Half-Closed Websockets here http://doc.akka.io/docs/akka-http/10.0.0/scala/http/client-side/websocket-support.html#half-closed-websockets
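As described there, the fix is to concatenate a Source.maybe to the outgoing messages so the client never signals completion until you decide to. A minimal sketch against the code above (it only addresses the premature close; the socket.io message formatting is left as-is):

import scala.concurrent.Promise

val outgoing: Source[Message, Promise[Option[Message]]] =
  Source(List(TextMessage("'server-type': 'red'")))
    .concatMat(Source.maybe[Message])(Keep.right)

// Keep.both now also exposes the promise; completing it later with
// promise.success(None) closes the websocket cleanly.
val ((promise, upgradeResponse), closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.both)
    .toMat(incoming)(Keep.both)
    .run()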

Related

Ajax.post --> dom.fetch

I'm trying to use the dom.fetch (or dom.Fetch.fetch) API instead of Ajax.post and have a few problems.
Is this a correct translation from Ajax to fetch?
Ajax.post(
  url = "http://localhost:8080/ajax/myMethod",
  data = byteBuffer2typedArray(Pickle.intoBytes(req.payload)),
  responseType = "arraybuffer",
  headers = Map("Content-Type" -> "application/octet-stream"),
)
dom.fetch(
  "http://localhost:8080/fetch/myMethod",
  new RequestInit {
    method = HttpMethod.POST
    body = byteBuffer2typedArray(Pickle.intoBytes(req.payload))
    // note: Headers takes the header list as a constructor argument;
    // a `new Headers { ... }` block would silently discard it
    headers = new Headers(js.Array(
      js.Array("Content-Type", "application/octet-stream")))
  }
)
A "ReferenceError: fetch is not defined" is thrown on the js side though, same if replacing with dom.Fetch.fetch.
My setup:
Fresh jsdom 19.0.0 with
npm init private
npm install jsdom
project/plugins.sbt
libraryDependencies += "org.scala-js" %% "scalajs-env-jsdom-nodejs" % "1.1.0"
addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.8.0")
build.sbt (in js project)
libraryDependencies += "org.scala-js" %%% "scalajs-dom" % "2.0.0"
jsEnv := new JSDOMNodeJSEnv(JSDOMNodeJSEnv.Config()
  .withArgs(List("--dns-result-order=ipv4first")))
I thought the jsEnv workaround was no longer needed on Scala.js 1.8 (see https://github.com/scala-js/scala-js-js-envs/issues/12#issuecomment-958925883), but it is still needed when I run the Ajax version. With the workaround my Ajax version works fine, so my Node installation seems fine.
The fetch API is only available by default in browser environments, not in Node. node-fetch is also not pulled in (or at least not re-exported) by jsdom, so fetch is not available with the current package/environment setup.
Possible solutions:
- Set the Scala.js side up in such a way that it calls node-fetch on Node.js and fetch in the browser.
- Use XMLHttpRequest, which is available on both platforms.
(Please see the #scala-js channel in the Scala Discord for a related conversation).
Got help on the scala-js channel on Discord from #Aly here and #armanbilge here, who pointed out that:
- fetch is not available by default in Node.js or JSDOM, only in browsers.
- scala-js-dom provides typesafe access to browser APIs, not Node.js APIs.
The distinction between browser APIs and Node APIs wasn't clear to me before, although it is well described in step 6 of the scala-js tutorial.
So dom.fetch of the scala-js-dom API works when running a JS program in a browser, but not when running a test that uses the Node jsEnv(ironment)! To fetch in a test, one would have to npm install node-fetch and use node-fetch, maybe by making a facade with scala-js.
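If you do want fetch under Node for tests, such a facade might look roughly like this (a hypothetical sketch: the import target and the reuse of the scala-js-dom Response type are my assumptions, not something from the discussion):

import scala.scalajs.js
import scala.scalajs.js.annotation.JSImport
import org.scalajs.dom.{RequestInfo, RequestInit, Response}

// Assumes `npm install node-fetch` and that node-fetch's response object is
// close enough to the browser Response for the calls we make on it.
@js.native
@JSImport("node-fetch", JSImport.Default)
object NodeFetch extends js.Object {
  def apply(info: RequestInfo, init: RequestInit): js.Promise[Response] = js.native
}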
Since I want my code to work for both browser (scala-js-dom) and test (Node.js), I ended up falling back to simply using the Ajax.post implementation with XMLHttpRequest:
// Imports assumed by this snippet (Pickle comes from boopickle; interface,
// port and slothReq are defined in the surrounding code):
import scala.concurrent.{Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.scalajs.js.typedarray.{ArrayBuffer, TypedArrayBuffer}
import scala.scalajs.js.typedarray.TypedArrayBufferOps._
import org.scalajs.dom
import boopickle.Default._

case class PostException(xhr: dom.XMLHttpRequest) extends Exception {
  def isTimeout: Boolean = xhr.status == 0 && xhr.readyState == 4
}

val url = s"http://$interface:$port/ajax/" + slothReq.path.mkString("/")
val byteBuffer = Pickle.intoBytes(slothReq.payload)
val requestData = byteBuffer.typedArray().subarray(byteBuffer.position, byteBuffer.limit)

val req = new dom.XMLHttpRequest()
val promise = Promise[dom.XMLHttpRequest]()

req.onreadystatechange = { (e: dom.Event) =>
  if (req.readyState == 4) {
    if ((req.status >= 200 && req.status < 300) || req.status == 304)
      promise.success(req)
    else
      promise.failure(PostException(req))
  }
}

req.open("POST", url) // (I only need to POST)
req.responseType = "arraybuffer"
req.timeout = 0
req.withCredentials = false
req.setRequestHeader("Content-Type", "application/octet-stream")
req.send(requestData)

promise.future.recover {
  case PostException(xhr) =>
    val msg = xhr.status match {
      case 0 => "Ajax call failed: server not responding."
      case n => s"Ajax call failed: XMLHttpRequest.status = $n."
    }
    println(msg)
    xhr
}.flatMap { req =>
  val raw = req.response.asInstanceOf[ArrayBuffer]
  val dataBytes = TypedArrayBuffer.wrap(raw.slice(1))
  Future.successful(dataBytes)
}

Akka.Net Clustering Simple Explanation

I'm trying to set up a simple cluster using akka.net.
The goal is to have a server receiving requests and akka.net processing them through its cluster.
For testing and learning I created a simple WCF service that receives a math equation, and I want to send this equation off to be solved.
I have one server project and one client project.
The configuration on the server side is:
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    debug {
      receive = on
      autoreceive = on
      lifecycle = on
      event-stream = on
      unhandled = on
    }
    deployment {
      /math {
        router = consistent-hashing-group #round-robin-pool # routing strategy
        routees.paths = [ "/user/math" ]
        virtual-nodes-factor = 8
        #nr-of-instances = 10 # max number of total routees
        cluster {
          enabled = on
          max-nr-of-instances-per-node = 2
          allow-local-routees = off
          use-role = math
        }
      }
    }
  }
  remote {
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 8081
      hostname = "127.0.0.1"
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of seed node
  }
}
On the client side the configuration is like this:
akka {
  actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  remote {
    log-remote-lifecycle-events = DEBUG
    log-received-messages = on
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 0
      hostname = "127.0.0.1"
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
    roles = ["math"] # roles this member is in
  }
  actor.deployment {
    /math {
      router = round-robin-pool # routing strategy
      routees.paths = ["/user/math"]
      nr-of-instances = 10 # max number of total routees
      cluster {
        enabled = on
        allow-local-routees = on
        use-role = math
        max-nr-of-instances-per-node = 10
      }
    }
  }
}
The cluster connection seems to be made correctly: I see the status [UP], and the association with the role "math" appears on the server side.
Even following the WebCrawler example, I haven't managed to get a message delivered; I always get deadletters.
I tried like this:
actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");
or
var actor = sys.ActorSelection("/user/math");
Does anyone know a good tutorial, or could someone help me?
Thanks
Some remarks:
First: assuming you're sending work from the server to the client, you are effectively remote-deploying actors on your client.
That means only the server node needs the actor.deployment config section.
The client only needs the default cluster config (and your role setting, of course).
Second: try to make it simpler first. Use a round-robin-pool instead; it's much simpler. Get that working, and work your way up from there.
That way it's easier to eliminate configuration/network/other issues.
Your usage actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math"); is correct.
A sample of how your round-robin-pool config could look:
deployment {
  /math {
    router = round-robin-pool # routing strategy
    nr-of-instances = 10 # max number of total routees
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 2
      allow-local-routees = off
      use-role = math
    }
  }
}
Try this out. And let me know if that helps.
Edit:
OK, after looking at your sample, some things I changed:
- ActorManager -> Process: you're creating a new router actor per request. Don't do that; create the router actor once and reuse the IActorRef.
- Got rid of the minimal cluster size settings in the MathAgentWorker project.
- Since you're not using remote actor deployment, I changed the round-robin-pool to a round-robin-group.
After that it worked.
Also remember that if you're using the consistent-hashing-group router you need to specify the hashing key. There are various ways to do that; in your sample I think the easiest way would be to wrap the message you're sending to your router in a ConsistentHashableEnvelope, as sketched below. Check the docs for more information.
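For illustration, the envelope looks like this in JVM Akka's Scala API, which Akka.NET mirrors with its own ConsistentHashableEnvelope class (a sketch with made-up message and key names, not code from the sample):

import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope

// All messages carrying the same hashKey are routed to the same routee.
// `router` is the router reference created once and reused;
// `Equation` and `sessionId` are hypothetical.
router ! ConsistentHashableEnvelope(message = Equation("2+2"), hashKey = sessionId)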
Finally the akka deployment sections looked like this:
deployment {
  /math {
    router = round-robin-group # routing strategy
    routees.paths = ["/user/math"]
    cluster {
      enabled = on
      allow-local-routees = off
      use-role = math
    }
  }
}
On the MathAgentWorker I only changed the cluster section, which now looks like this:
cluster {
  seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
  roles = ["math"] # roles this member is in
}
And the only thing ActorManager.Process does is:
return await Program.Instance.RouterInstance.Ask<TResult>(msg, TimeSpan.FromSeconds(10));

Using side effects in Akka Streams to implement commands received from a websocket

I want to be able to click a button on a website, have it represent a command, issue that command to my program via a websocket, have my program process that command (which will produce a side effect), and then return the results of that command to the website to be rendered.
The websocket would be responsible for pushing state changes, applied by different actors, that fall within the user's view.
Example: changing AI instructions via the website. This modifies some values, which would get reported back to the website. Other users might change other AI instructions, or the AI might react to current conditions by changing position, requiring the client to update the screen.
I was thinking I could have an actor responsible for updating the client with changed information, and just have the receiving stream update the state with the changes?
Is this the right library to use? Is there a better method to achieve what I want?
You can use akka-streams and akka-http for this just fine. Here is an example that uses an actor as the handler:
package test

import akka.actor.{Actor, ActorRef, ActorSystem, Props, Stash, Status}
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.{Flow, Sink, Source, SourceQueueWithComplete}
import akka.stream.{ActorMaterializer, OverflowStrategy, QueueOfferResult}
import akka.pattern.pipe

import scala.concurrent.{ExecutionContext, Future}
import scala.io.StdIn

object Test extends App {
  implicit val actorSystem = ActorSystem()
  implicit val materializer = ActorMaterializer()
  implicit def executionContext: ExecutionContext = actorSystem.dispatcher

  val routes =
    path("talk") {
      get {
        val handler = actorSystem.actorOf(Props[Handler])
        val flow = Flow.fromSinkAndSource(
          Flow[Message]
            .filter(_.isText)
            .mapAsync(4) {
              case TextMessage.Strict(text) => Future.successful(text)
              case TextMessage.Streamed(textStream) => textStream.runReduce(_ + _)
            }
            .to(Sink.actorRefWithAck[String](handler, Handler.Started, Handler.Ack, Handler.Completed)),
          Source.queue[String](16, OverflowStrategy.backpressure)
            .map(TextMessage.Strict)
            .mapMaterializedValue { queue =>
              handler ! Handler.OutputQueue(queue)
              queue
            }
        )
        handleWebSocketMessages(flow)
      }
    }

  val bindingFuture = Http().bindAndHandle(routes, "localhost", 8080)
  println("Started the server, press enter to shutdown")
  StdIn.readLine()
  bindingFuture
    .flatMap(_.unbind())
    .onComplete(_ => actorSystem.terminate())
}

object Handler {
  case object Started
  case object Completed
  case object Ack
  case class OutputQueue(queue: SourceQueueWithComplete[String])
}

class Handler extends Actor with Stash {
  import context.dispatcher

  override def receive: Receive = initialReceive

  def initialReceive: Receive = {
    case Handler.Started =>
      println("Client has connected, waiting for queue")
      context.become(waitQueue)
      sender() ! Handler.Ack
    case Handler.OutputQueue(queue) =>
      println("Queue received, waiting for client")
      context.become(waitClient(queue))
  }

  def waitQueue: Receive = {
    case Handler.OutputQueue(queue) =>
      println("Queue received, starting")
      context.become(running(queue))
      unstashAll()
    case _ =>
      stash()
  }

  def waitClient(queue: SourceQueueWithComplete[String]): Receive = {
    case Handler.Started =>
      println("Client has connected, starting")
      context.become(running(queue))
      sender() ! Handler.Ack
      unstashAll()
    case _ =>
      stash()
  }

  case class ResultWithSender(originalSender: ActorRef, result: QueueOfferResult)

  def running(queue: SourceQueueWithComplete[String]): Receive = {
    case s: String =>
      // do whatever you want here with the received message
      println(s"Received text: $s")
      val originalSender = sender()
      queue
        .offer("some response to the client")
        .map(ResultWithSender(originalSender, _))
        .pipeTo(self)
    case ResultWithSender(originalSender, result) =>
      result match {
        case QueueOfferResult.Enqueued => // okay
          originalSender ! Handler.Ack
        case QueueOfferResult.Dropped => // due to the OverflowStrategy.backpressure this should not happen
          println("Could not send the response to the client")
          originalSender ! Handler.Ack
        case QueueOfferResult.Failure(e) =>
          println(s"Could not send the response to the client: $e")
          context.stop(self)
        case QueueOfferResult.QueueClosed =>
          println("Outgoing connection to the client has closed")
          context.stop(self)
      }
    case Handler.Completed =>
      println("Client has disconnected")
      queue.complete()
      context.stop(self)
    case Status.Failure(e) =>
      println(s"Client connection has failed: $e")
      e.printStackTrace()
      queue.fail(new RuntimeException("Upstream has failed", e))
      context.stop(self)
  }
}
There are lots of places here that could be tweaked, but the basic idea remains the same. Alternatively, you could implement the Flow[Message, Message, _] required by the handleWebSocketMessages() method using a GraphStage. Everything used above is also described in detail in the akka-streams documentation.
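To poke at the endpoint by hand, a small client along these lines should work (a sketch, assuming the server above is running on localhost:8080):

import akka.http.scaladsl.model.ws.WebSocketRequest

// Opens a websocket to /talk, sends one message and prints the responses;
// Source.maybe keeps the client side open so the server can keep pushing.
val (upgradeResponse, _) = Http().singleWebSocketRequest(
  WebSocketRequest("ws://localhost:8080/talk"),
  Flow.fromSinkAndSource(
    Sink.foreach[Message](m => println(s"from server: $m")),
    Source.single(TextMessage("hello")).concat(Source.maybe[Message])
  )
)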

How to listen for ACK packets on ephemeral ports

I need to write a TFTP client implementation to send a file from a Windows Phone 8.1 to a piece of hardware.
Because I need to be able to support Windows 8.1, I need to use the Windows.Networking.Sockets classes.
I'm able to send my write request packet, but I am having trouble receiving the ACK packet (seen in Wireshark). This ACK packet is sent to an "ephemeral port" according to the TFTP specification, but that port is blocked according to Wireshark.
I know how to use sockets on a specific port, but I don't know how to receive ACK packets sent to different (ephemeral) ports. I need to use the port of that ACK packet to continue the TFTP communication.
How would I be able to receive the ACK packets and continue the communication on a different port? Do I need to bind the socket to multiple ports? I've been trying to find answers in the Microsoft docs and on Google, but other implementations gave me no luck so far.
For reference, my current implementation:
try {
  hostName = new Windows.Networking.HostName(currentIP);
} catch (error) {
  WinJS.log && WinJS.log("Error: Invalid host name.", "sample", "error");
  return;
}

socketsSample.clientSocket = new Windows.Networking.Sockets.DatagramSocket();
socketsSample.clientSocket.addEventListener("messagereceived", onMessageReceived);
socketsSample.clientSocket.bindEndpointAsync(new Windows.Networking.HostName(hostName), currentPort);
WinJS.log && WinJS.log("Client: connection started.", "sample", "status");

socketsSample.clientSocket.connectAsync(hostName, serviceName).done(function () {
  WinJS.log && WinJS.log("Client: connection completed.", "sample", "status");
  socketsSample.connected = true;

  var remoteFile = "test.txt";
  var tftpMode = Modes.Octet;
  var sndBuffer = createRequestPacket(Opcodes.Write, remoteFile, tftpMode);

  if (!socketsSample.clientDataWriter) {
    socketsSample.clientDataWriter =
      new Windows.Storage.Streams.DataWriter(socketsSample.clientSocket.outputStream);
  }

  var writer = socketsSample.clientDataWriter;
  var reader;
  var stream;
  writer.writeBytes(sndBuffer);

  // The call to storeAsync sends the actual contents of the writer
  // to the backing stream.
  writer.storeAsync().then(function () {
    // For the in-memory stream implementation we are using, the flushAsync call
    // is superfluous, but other types of streams may require it.
    return writer.flushAsync();
  });
}, onError);
Finally found the issue.
Instead of connectAsync I used getOutputStreamAsync, and now the client socket receives messages. That makes sense: a connected DatagramSocket only accepts datagrams from the exact remote host and port it was connected to, so the server's ACK, which arrives from a freshly chosen ephemeral port (the TFTP transfer ID), never reached the messagereceived handler. Sending through getOutputStreamAsync leaves the socket unconnected, so replies from any remote port are delivered.
Some code:
tftpSocket.clientSocket.getOutputStreamAsync(new Windows.Networking.HostName(self.hostName), tftpSocket.serviceNameConnect).then(function (stream) {
  console.log("Client: connection completed.", "sample", "status");

  // use the stream that was created when calling getOutputStreamAsync
  var writer = new Windows.Storage.Streams.DataWriter(stream);
  // keep the writer; in case we need to close sockets we also close the writer
  tftpSocket.clientDataWriter = writer;

  writer.writeBytes(sndBytes);

  // The call to storeAsync sends the actual contents of the writer
  // to the backing stream.
  writer.storeAsync().then(function () {
    // For the in-memory stream implementation we are using, the flushAsync call
    // is superfluous, but other types of streams may require it.
    return writer.flushAsync();
  });
}, self.onError);

Connection between js and akka-http websockets fails 95% of the time

I'm trying to set up a basic connection between an akka-http websocket server and simple JavaScript.
Roughly 1 out of 20 connections succeeds; the rest fail. I have no idea why the setup of the connection is so unreliable.
Application.scala:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer
import services.WebService

import scala.concurrent.Await
import scala.concurrent.duration._
import java.util.concurrent.TimeoutException

object Application extends App {
  implicit val system = ActorSystem("api")
  implicit val materializer = ActorMaterializer()
  import system.dispatcher

  val config = system.settings.config
  val interface = config.getString("app.interface")
  val port = config.getInt("app.port")

  val service = new WebService

  val binding = Http().bindAndHandle(service.route, interface, port)
  try {
    Await.result(binding, 1 second)
    println(s"server online at http://$interface:$port/")
  } catch {
    case exc: TimeoutException =>
      println("Server took too long to start up, shutting down")
      system.shutdown()
  }
}
WebService.scala:
import actors.{PublisherActor, SubscriberActor}
import akka.actor.{Props, ActorSystem}
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives
import akka.stream.Materializer
import akka.stream.scaladsl.{Source, Flow}

import scala.concurrent.duration._

class WebService(implicit fm: Materializer, system: ActorSystem) extends Directives {
  import system.dispatcher

  system.scheduler.schedule(15 second, 15 second) {
    println("Timer message!")
  }

  def route =
    get {
      pathSingleSlash {
        getFromResource("web/index.html")
      } ~
      path("helloworld") {
        handleWebsocketMessages(websocketActorFlow)
      }
    }

  def websocketActorFlow: Flow[Message, Message, Unit] =
    Flow[Message].collect({
      case TextMessage.Strict(msg) =>
        println(msg)
        TextMessage.Strict(msg.reverse)
    })
}
client side:
<input type="text" id="inputMessage"/><br/>
<input type="button" value="Send!" onClick="sendMessage()"/><br/>
<span id="response"></span>
<script type="application/javascript">
var connection;
function sendMessage() {
connection.send(document.getElementById("inputMessage").value);
}
document.addEventListener("DOMContentLoaded", function (event) {
connection = new WebSocket("ws://localhost:8080/helloworld");
connection.onopen = function (event) {
connection.send("connection established");
};
connection.onmessage = function (event) {
console.log(event.data);
document.getElementById("response").innerHTML = event.data;
}
});
</script>
If the connection to the server fails, I get a timeout message after 5 seconds which says the following:
[DEBUG] [07/23/2015 07:59:54.517] [api-akka.actor.default-dispatcher-27] [akka://api/user/$a/flow-76-3-publisherSource-prefixAndTail] Cancelling akka.stream.impl.MultiStreamOutputProcessor$SubstreamOutput#a54778 (after: 5000 ms)
No matter if the connection fails or succeeds, I always get the following log message:
[DEBUG] [07/23/2015 07:59:23.849] [api-akka.actor.default-dispatcher-4] [akka://api/system/IO-TCP/selectors/$a/0] New connection accepted
Look at that error message carefully... it is coming from a source I would not have expected: some "MultiStreamOutputProcessor", when you would only expect to handle a single stream.
That tells me, along with the look of websocketActorFlow, that maybe you are receiving messages that aren't caught by the flow's collect, so you're ending up with substreams you never expected.
So instead of it "only working some of the time," maybe it is "working most of the time but unable to handle all of the input as demanded in the flow," leaving you with un-selectable streams that have to die off first.
See if you can either a) get a grip on the streams so you don't end up with stragglers, b) band-aid it by adjusting timeouts, or c) detect such occurrences and cancel the downstream processing.
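For option (a), one way to make sure every incoming message is fully consumed, a sketch assuming the dangling substreams come from non-strict (streamed) messages that the collect never drains (not verified against this exact setup):

import akka.http.scaladsl.model.ws.{BinaryMessage, Message, TextMessage}
import akka.stream.scaladsl.{Flow, Sink}
import scala.concurrent.Future

def websocketActorFlow: Flow[Message, Message, Unit] =
  Flow[Message].mapAsync(1) {
    case TextMessage.Strict(msg) =>
      println(msg)
      Future.successful(TextMessage.Strict(msg.reverse))
    case TextMessage.Streamed(textStream) =>
      // drain the substream instead of leaving it dangling
      textStream.runFold("")(_ + _).map(msg => TextMessage.Strict(msg.reverse))
    case bm: BinaryMessage =>
      // consume and discard binary frames so their substreams complete too
      bm.dataStream.runWith(Sink.ignore)
      Future.successful(TextMessage.Strict(""))
  }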
https://groups.google.com/forum/#!topic/akka-user/x-tARRaJ0LQ
akka {
  stream {
    materializer {
      subscription-timeout {
        timeout = 30s
      }
    }
  }
}
http://grokbase.com/t/gg/akka-user/1561gr0jgt/debug-message-cancelling-akka-stream-impl-multistreamoutputprocessor-after-5000-ms
