I am trying to make Aedes work as an MQTT broker AND a WebSocket server, based on this doc: https://github.com/moscajs/aedes/blob/master/docs/Examples.md
I am not sure what I am supposed to understand from it. Ideally, I want the listener fired regardless of whether the client is a WebSocket client or an MQTT client.
Is it possible to do something like:
server.broadcast('foo/bar', {data:''})
so that all clients, WebSocket and MQTT, receive the message? The doc is not very clear, and I am surprised that websocket-stream is used. It is very low level, right?
Here is some server-side code:
const port = 1883

const mongoPersistence = require('aedes-persistence-mongodb')
const aedes = require('aedes')({
  persistence: mongoPersistence({
    url: 'mongodb://127.0.0.1/aedes-test',
    // Optional ttl settings
    ttl: {
      packets: 300, // Number of seconds
      subscriptions: 300
    }
  }),
  authenticate: (client, username, password, callback) => {
  },
  authorizePublish: (client, packet, callback) => {
  },
  authorizeSubscribe: (client, packet, callback) => {
  }
});

//const server = require('net').createServer(aedes.handle);
const httpServer = require('http').createServer()
const ws = require('websocket-stream')
ws.createServer({ server: httpServer }, aedes.handle)

httpServer.listen(port, function () {
  Logger.debug('Aedes listening on port: ' + port)
  aedes.publish({ topic: 'aedes/hello', payload: "I'm broker " + aedes.id })
});
It should just be a case of starting both servers with the same aedes object, as follows:
const port = 1883
const wsPort = 8883

const mongoPersistence = require('aedes-persistence-mongodb')
const aedes = require('aedes')({
  persistence: mongoPersistence({
    url: 'mongodb://127.0.0.1/aedes-test',
    // Optional ttl settings
    ttl: {
      packets: 300, // Number of seconds
      subscriptions: 300
    }
  }),
  authenticate: (client, username, password, callback) => {
  },
  authorizePublish: (client, packet, callback) => {
  },
  authorizeSubscribe: (client, packet, callback) => {
  }
});

// Plain MQTT over TCP
const server = require('net').createServer(aedes.handle);

// MQTT over WebSocket
const httpServer = require('http').createServer()
const ws = require('websocket-stream')
ws.createServer({ server: httpServer }, aedes.handle)

server.listen(port, function () {
  Logger.debug('Aedes MQTT listening on port: ' + port)
})

httpServer.listen(wsPort, function () {
  Logger.debug('Aedes MQTT-WS listening on port: ' + wsPort)
  aedes.publish({ topic: 'aedes/hello', payload: "I'm broker " + aedes.id })
});
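As for the broadcast part of the question: there is no server.broadcast() in aedes, but aedes.publish() should give you exactly that. The broker routes by topic, so every client subscribed to the topic receives the message whether it connected over plain TCP or over WebSocket. A minimal sketch (the topic and payload are made up, and the payload has to be a string or a Buffer):

// Deliver to every subscriber of 'foo/bar', regardless of transport.
aedes.publish({
  topic: 'foo/bar',
  payload: JSON.stringify({ data: '' }),
  qos: 0,
  retain: false
}, (err) => {
  if (err) Logger.debug('broadcast failed: ' + err.message)
})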
I'm using graphql-ws https://www.npmjs.com/package/graphql-ws to manage my WebSocket connection, but I am unable to figure out how to handle a dropped connection. Once my internet drops (toggling Wi-Fi) or my computer sleeps, all subscriptions drop and the WebSocket never reconnects.
closed never gets called. Everything else works as expected; just the disconnects are an issue.
createClient({
  retryAttempts: 5,
  shouldRetry: () => true,
  url: "ws://localhost:8080",
  on: {
    connected: () => {
      console.log("CONNECTED");
    },
    closed: () => {
      console.log("CLOSED");
    },
    error: (e) => {
      console.log(e);
    },
  },
});
You can use keepAlive, ping, and pong as a trigger to restart your connection, and keep retryAttempts infinite.
Here is my attempt at keeping the socket alive:
// Assumes: import { createClient, CloseCode } from 'graphql-ws';
// activeSocket and timedOut live in an enclosing scope so the handlers can share them.
let activeSocket;
let timedOut;

createClient({
  url: 'wss://$domain/v1/graphql',
  retryAttempts: Infinity,
  shouldRetry: () => true,
  keepAlive: 10000,
  connectionParams: () => {
    const access_token = getAccessTokenFunction();
    return {
      headers: {
        Authorization: `Bearer ${access_token || ''}`
      }
    };
  },
  on: {
    connected: (socket) => {
      activeSocket = socket; // to be used at pings & pongs
      // Get the access token expiry time and set a timer to close the socket
      // once the token expires... Since retryAttempts is Infinity, the client
      // will try to reconnect again by getting a fresh token.
      const token_expiry_time = getTokenExpiryDate();
      const current_time = Math.round(+new Date() / 1000);
      const difference_time = (token_expiry_time - current_time) * 1000;
      if (difference_time > 0) {
        setTimeout(() => {
          if (socket?.readyState === WebSocket.OPEN) {
            socket.close(CloseCode.Forbidden, "Forbidden");
          }
        }, difference_time);
      }
    },
    ping: (received) => {
      if (!received) {
        // ping was sent: wait 5 seconds for the pong, then close the connection
        timedOut = setTimeout(() => {
          if (activeSocket?.readyState === WebSocket.OPEN)
            activeSocket.close(4408, 'Request Timeout');
        }, 5000);
      }
    },
    pong: (received) => {
      if (received) clearTimeout(timedOut); // pong received, clear the close timeout
    }
  }
})
I have the following code for the Socket.IO client:
import { io } from "socket.io-client";

const token = window.localStorage.getItem('TOKEN') || window.sessionStorage.getItem('TOKEN')

const ioSocket = io("xxxx", {
  autoConnect: false,
  reconnection: true,
  reconnectionDelay: 500,
  reconnectionAttempts: 10,
  reconnectionDelayMax: 10000,
  reconnectionAttempts: Infinity,
  transportOptions: {
    polling: {
      extraHeaders: {
        authorization: `${token}`,
      },
    },
  },
});

export const socket = ioSocket
Before the user has logged in, the token is not available, and when the user performs the login action the page is not refreshed (I am using Vue 3 with vue-router), so the connection is never established. Once I manually refresh the page, the connection is created. Is there a way to try to connect again after some period of time?
This is the code I tried in order to connect manually:
socket.on("connect", () => {
console.log(socket.id);
});
onMounted(() => {
if(!socket.connected)
{
socket.connect();
}
socket.emit("join", import.meta.env.VITE_SOCKET_ROOM, (message) => {
console.log(message);
});
})
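One approach (a sketch with a hypothetical helper name, not a drop-in fix) is to stop capturing the token at module load time and instead connect explicitly once a token exists, either from the login handler or on a retry timer. This assumes the server can read the token from the Socket.IO auth payload; if it must stay in the polling extraHeaders, update those options the same way before calling connect().

// Hypothetical helper: call it after login, or let it poll until a token appears.
function connectWhenTokenAvailable(retryMs = 2000) {
  const token = window.localStorage.getItem('TOKEN') || window.sessionStorage.getItem('TOKEN');
  if (!token) {
    // No token yet (user not logged in): try again later.
    setTimeout(() => connectWhenTokenAvailable(retryMs), retryMs);
    return;
  }
  socket.auth = { token }; // sent with the next connection attempt (Socket.IO v3+)
  if (!socket.connected) {
    socket.connect();
  }
}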
I have an issue where a subscription can't be unsubscribed.
Before we start, this is my setup: Apollo Client (graphql-ws) <-> Apollo Server (graphql-ws). On the server, I built a custom PubSub instead of using the one provided.
As you can see here, the client has sent a complete request to the server with the id. However, the server is still sending more data to it. I have read somewhere that you have to send GQL_STOP, aka STOP, instead; however, complete is what Apollo Client is sending.
A bit of code:
Client subscription:
export const useGetDataThroughSubscription = (
  resourceIds: number[],
  startDate?: Date,
  endDate?: Date
) => {
  const variables = {
    startTime: startDate?.toISOString() ?? '',
    endTime: endDate?.toISOString() ?? '',
    resourceIds,
  };
  return useGetDataSubscription({
    variables,
    ...
  })
}
Server pubsub:
import { EventEmitter, on } from 'events';

const createPubSub = <TopicPayload extends { [key: string]: unknown }>(
  emitter: EventEmitter = new EventEmitter()
) => ({
  publish: <Topic extends Extract<keyof TopicPayload, string>>(
    topic: Topic,
    payload: TopicPayload[Topic]
  ) => {
    emitter.emit(topic as string, payload);
  },
  async *subscribe<Topic extends Extract<keyof TopicPayload, string>>(
    topic: Topic,
    retrievalFunc: (value: TopicPayload[Topic]) => Promise<any>
  ): AsyncIterableIterator<TopicPayload[Topic]> {
    const asyncIterator = on(emitter, topic);
    for await (const [value] of asyncIterator) {
      const data = await retrievalFunc(value);
      yield data;
    }
  },
});
Server subscribing to the event:
const resolver: Resolvers = {
  Subscription: {
    [onGetAllLocationsEvent]: {
      async *subscribe(_a, _b, ctx) {
        const locations = await ...;
        yield locations;
        const iterator = ctx.pubsub.subscribe(
          onGetAllLocationsEvent,
          async (id: number) => {
            const location = ...;
            return location;
          }
        );
        for await (const data of iterator) {
          if (data) {
            yield [data];
          }
        }
      },
      resolve: (payload) => payload,
    },
  },
};
In this one, if I return the iterator instead of using the for loop, the server sends back a complete and stops the subscription altogether. That's great, but I want to keep the connection open until the client stops listening.
And the server publishes with:
ctx.pubsub.publish(onGetAllResourcesEvent, resource.id);
So how should I deal with this?
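One thing worth checking (a sketch under assumptions, not a confirmed fix): when graphql-ws receives complete it calls return() on the async iterator it got from subscribe, which runs the generator's finally blocks, so the emitter listener has to be detached there or publish() will keep feeding data. Assuming Node 16+ (where events.on accepts an AbortSignal), the subscribe method of the createPubSub above could make that teardown explicit:

// Drop-in for the subscribe method of createPubSub above (same generics, same emitter).
async *subscribe<Topic extends Extract<keyof TopicPayload, string>>(
  topic: Topic,
  retrievalFunc: (value: TopicPayload[Topic]) => Promise<any>
): AsyncIterableIterator<TopicPayload[Topic]> {
  const ac = new AbortController();
  try {
    // The signal ties the lifetime of the emitter listener to this generator.
    for await (const [value] of on(emitter, topic, { signal: ac.signal })) {
      yield await retrievalFunc(value);
    }
  } finally {
    ac.abort(); // runs when the client completes; removes the emitter listener
  }
}

The same reasoning applies to the for await loop in the resolver: its cleanup only runs if the outer iterator's return() actually reaches it, so logging in a finally block there can confirm whether the complete is propagating at all.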
I tried to make example-event.ts work as an HTTP server. The example is a simple counter; you can subscribe to the count as an event.
It works until a client asks for the value a second time (long poll).
EventSource ready
Emitted change 1
[binding-http] HttpServer on port 8080 received 'GET /eventsource' from [::ffff:192.168.0.5]:58268
[binding-http] HttpServer on port 8080 replied with '200' to [::ffff:192.168.0.5]:58268
[binding-http] HttpServer on port 8080 received 'GET /eventsource/events/onchange' from [::ffff:192.168.0.5]:58268
[core/exposed-thing] ExposedThing 'EventSource' subscribes to event 'onchange'
[core/content-serdes] ContentSerdes serializing to application/json
Emitted change 2
[binding-http] HttpServer on port 8080 replied with '200' to [::ffff:192.168.0.5]:58268
[binding-http] HttpServer on port 8080 closed Event connection
[core/exposed-thing] ExposedThing 'EventSource' unsubscribes from event 'onchange'
[binding-http] HttpServer on port 8080 received 'GET /eventsource/events/onchange' from [::ffff:192.168.0.5]:58268
[core/exposed-thing] ExposedThing 'EventSource' subscribes to event 'onchange'
[core/content-serdes] ContentSerdes serializing to application/json
[core/content-serdes] ContentSerdes serializing to application/json
Emitted change 3
events.js:377
throw er; // Unhandled 'error' event
^
Error [ERR_STREAM_WRITE_AFTER_END]: write after end
at new NodeError (internal/errors.js:322:7)
at writeAfterEnd (_http_outgoing.js:694:15)
at ServerResponse.end (_http_outgoing.js:815:7)
at SafeSubscriber._next (C:\xxx\node_modules\@node-wot\binding-http\dist\http-server.js:721:45)
at SafeSubscriber.__tryOrUnsub (C:\xxx\node_modules\rxjs\Subscriber.js:242:16)
at SafeSubscriber.next (C:\xxx\node_modules\rxjs\Subscriber.js:189:22)
at Subscriber._next (C:\xxx\node_modules\rxjs\Subscriber.js:129:26)
at Subscriber.next (C:\xxx\node_modules\rxjs\Subscriber.js:93:18)
at Subject.next (C:\xxx\node_modules\rxjs\Subject.js:55:25)
at Object.ExposedThing.emitEvent (C:\xxx\node_modules\@node-wot\core\dist\exposed-thing.js:53:50)
Emitted 'error' event on ServerResponse instance at:
at writeAfterEndNT (_http_outgoing.js:753:7)
at processTicksAndRejections (internal/process/task_queues.js:83:21) {
code: 'ERR_STREAM_WRITE_AFTER_END'
}
I am a little confused by all the subscribing and unsubscribing, but that may be because long polling is used.
My code:
Server side:
const Servient = require('@node-wot/core').Servient;
const HttpServer = require('@node-wot/binding-http').HttpServer;
const Helpers = require('@node-wot/core').Helpers;

// create Servient and add HTTP binding with port configuration
let servient = new Servient();
servient.addServer(new HttpServer({}));

// internal state, not exposed as Property
let counter = 0;

servient.start().then((WoT) => {
  WoT.produce({
    title: 'EventSource',
    events: {
      onchange: {
        data: { type: 'integer' },
      },
    },
  })
    .then((thing) => {
      console.log('Produced ' + thing.getThingDescription().title);
      thing.expose().then(() => {
        console.info(thing.getThingDescription().title + ' ready');
        setInterval(() => {
          ++counter;
          thing.emitEvent('onchange', counter);
          console.info('Emitted change ', counter);
        }, 5000);
      });
    })
    .catch((e) => {
      console.log(e);
    });
});
Client side:
const servient = new Wot.Core.Servient();
servient.addClientFactory(new Wot.Http.HttpClientFactory());
const helpers = new Wot.Core.Helpers(servient);
const addr = 'http://192.168.0.5:8080/eventsource';

getTd(addr);

function getTd(addr) {
  servient.start().then((thingFactory) => {
    helpers
      .fetch(addr)
      .then((td) => {
        thingFactory.consume(td).then((thing) => {
          showEvents(thing);
        });
      })
      .catch((error) => {
        window.alert('Could not fetch TD.\n' + error);
      });
  });
}

function showEvents(thing) {
  let td = thing.getThingDescription();
  for (let evnt in td.events) {
    if (td.events.hasOwnProperty(evnt)) {
      document.getElementById("events").innerHTML = "waiting...";
      thing
        .subscribeEvent(evnt, (res) => {
          document.getElementById("events").innerHTML = res;
        })
        .catch((err) => window.alert('error: ' + err));
    }
  }
}
The problem also occurs if I use example-event-client.ts or just send GET requests to "http://192.168.0.5:8080/eventsource/events/onchange" from the browser.
What do I have to do to make the example work?
I'm using:
Aurora Serverless Data API (Postgres)
TypeORM with typeorm-aurora-data-api-driver
AWS Lambda with the Serverless Framework (TypeScript, webpack)
I'm connecting to the db as described on GitHub:
const connection = await createConnection({
  type: 'aurora-data-api-pg',
  database: 'test-db',
  secretArn: 'arn:aws:secretsmanager:eu-west-1:537011205135:secret:xxxxxx/xxxxxx/xxxxxx',
  resourceArn: 'arn:aws:rds:eu-west-1:xxxxx:xxxxxx:xxxxxx',
  region: 'eu-west-1'
})
And this is how I use it inside my Lambda function:
export const testConfiguration: APIGatewayProxyHandler = async (event, _context) => {
  let response;
  try {
    const connectionOptions: ConnectionOptions = await getConnectionOptions();
    const connection = await createConnection({
      ...connectionOptions,
      entities,
    });
    const userRepository = connection.getRepository(User);
    const users = await userRepository.find();
    response = {
      statusCode: 200,
      body: JSON.stringify({ users }),
    };
  } catch (e) {
    response = {
      statusCode: 500,
      body: JSON.stringify({ error: 'server side error' }),
    };
  }
  return response;
};
The first time I execute it, it works just fine.
But on the second and subsequent invocations I get an error:
AlreadyHasActiveConnectionError: Cannot create a new connection named "default", because connection with such name already exist and it now has an active connection session.
So, what is the proper way to manage this connection?
Should it somehow be reused?
I've found some solutions for plain RDS, but the whole point of the Aurora Serverless Data API is that you don't have to manage the connection.
When you try to establish a connection, you need to check whether there is already an existing connection you can reuse. This is my Database class used to handle connections:
import {
  Connection,
  ConnectionManager,
  ConnectionOptions,
  createConnection,
  getConnectionManager,
} from 'typeorm';
import { SnakeNamingStrategy } from 'typeorm-naming-strategies';

export default class Database {
  private connectionManager: ConnectionManager;

  constructor() {
    this.connectionManager = getConnectionManager();
  }

  async getConnection(): Promise<Connection> {
    const CONNECTION_NAME = 'default';
    let connection: Connection;

    if (this.connectionManager.has(CONNECTION_NAME)) {
      logMessage(`Database.getConnection()-using existing connection::: ${CONNECTION_NAME}`);
      connection = await this.connectionManager.get(CONNECTION_NAME);
      if (!connection.isConnected) {
        connection = await connection.connect();
      }
    } else {
      logMessage('Database.getConnection()-creating connection ...');
      logMessage(`DB host::: ${process.env.DB_HOST}`);
      const connectionOptions: ConnectionOptions = {
        name: CONNECTION_NAME,
        type: 'postgres',
        port: 5432,
        logger: 'advanced-console',
        logging: ['error'],
        host: process.env.DB_HOST,
        username: process.env.DB_USERNAME,
        database: process.env.DB_DATABASE,
        password: process.env.DB_PASSWORD,
        namingStrategy: new SnakeNamingStrategy(),
        entities: Object.keys(entities).map((module) => entities[module]),
      };
      connection = await createConnection(connectionOptions);
    }

    return connection;
  }
}
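For completeness, here is a sketch of how the Lambda handler from the question could go through this class instead of calling createConnection() directly (hypothetical wiring, reusing the handler and User entity from above). Creating the Database instance outside the handler lets warm invocations of the same container reuse the existing connection instead of failing with AlreadyHasActiveConnectionError:

// Sketch: the instance lives outside the handler, so a warm container reuses the connection.
const database = new Database();

export const testConfiguration: APIGatewayProxyHandler = async (_event, _context) => {
  try {
    const connection = await database.getConnection(); // returns the existing connection when present
    const users = await connection.getRepository(User).find();
    return { statusCode: 200, body: JSON.stringify({ users }) };
  } catch (e) {
    return { statusCode: 500, body: JSON.stringify({ error: 'server side error' }) };
  }
};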