I'm trying to generate a step load for a WebSocket performance test with k6.
Regular settings like
export let options = {
  stages: [
    { duration: "0m30s", target: 10 },
    { duration: "0m30s", target: 10 },
    { duration: "0m30s", target: 0 },
  ],
};
don't work for k6. I tried --vus 10 -i 10,
but it just goes through the scenario 10 times and then sleeps until the end of the 10 minutes.
Then I tried k6 run --vus 5 --stage 3m:10,5m:10,10m:35,1m30s:0, but the result is almost the same. How do I create an active load with a step pattern for WebSocket testing, so that each VU reconnects every time the flow is done?
Test flow:
import ws from "k6/ws";
import { check } from "k6";
export default function() {
const url = "ws://URL:8000/";
const params = { tags: { my_tag: "hello" } };
const response = ws.connect(url, params, function(socket) {
socket.on("open", function open() {
console.log("connected");
socket.send(Date.now());
var url = "ws://URL:8000/";
var response = ws.connect(url, null, function(socket) {
socket.on('open', function() {
socket.send('Hello');
socket.send('How are you?');
});
socket.on("close", () => console.log("disconnected"));
socket.on("error", (e) => {
if (e.error() != "websocket: close sent") {
console.log("An unexpected error occured: ", e.error());
}
});
check(response, { "status is 101": r => r && r.status === 101 });
})
})
})
}
As mentioned in the documentation, ws.connect blocks until the connection is closed, either by calling socket.close() or by the server closing the connection on the other end (or due to an error while doing something else).
So if you just want to send the 2 messages there, you should call socket.close(). You can also close after receiving something from the other side:
socket.on("message", function(msg) {
console.log("got: "+ msg);
socket.close();
});
But if you don't need to reconnect and instead just want to send more messages, you should probably use a timer:
socket.setInterval(function() {
socket.send('Hello');
socket.send('How are you?');
}, 1500);
This will send the messages every 1.5s.
I don't know why you open another WebSocket connection inside the first one, but that might be a mistake on your part, and it is likely to make the code hard to read if you don't use a different variable name from socket.
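To tie this back to the step-load question: if the flow closes the socket when it is done, the default function returns and each VU starts a new iteration (and a new connection), so a stages ramp produces the step pattern. A minimal sketch, reusing the placeholder URL from the question:

import ws from "k6/ws";
import { check } from "k6";

// Step pattern: ramp up to 10 VUs, hold, then ramp down. Adjust durations/targets as needed.
export let options = {
  stages: [
    { duration: "30s", target: 10 },
    { duration: "30s", target: 10 },
    { duration: "30s", target: 0 },
  ],
};

export default function () {
  const response = ws.connect("ws://URL:8000/", null, function (socket) {
    socket.on("open", function () {
      socket.send("Hello");
      socket.send("How are you?");
    });
    // Close once the server answers, so this iteration ends and the VU reconnects.
    socket.on("message", function (msg) {
      console.log("got: " + msg);
      socket.close();
    });
    socket.on("error", (e) => {
      if (e.error() != "websocket: close sent") {
        console.log("An unexpected error occurred: ", e.error());
      }
    });
  });
  check(response, { "status is 101": (r) => r && r.status === 101 });
}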
I wasted a lot of time trying this with k6, but it is still not supported today: k6 does not support socket.io testing. Check the link attached.
https://github.com/grafana/k6/issues/667
You can go for Artillery instead, as suggested in the official socket.io docs:
https://socket.io/docs/v4/load-testing/
I'm getting this error message when trying to connect to an AWS Neptune DB from a Lambda:
2022-05-05T18:36:04.114Z e0c9ee4c-0e1d-49c7-ad05-d8bab79d3ea6 WARN Determining whether retriable error: Server error: {
"requestId": "some value",
"code": "TimeLimitExceededException",
"detailedMessage": "A timeout occurred within the script or was otherwise cancelled directly during evaluation of [some value]"
} (598)
The timeout happens consistently after 20s.
It's not clear what's causing this. Things I've tried:
increasing the Lambda memory in case it's just a hardware problem, but no luck
increasing the Neptune query timeout from 20s to 60s, but the request still times out at 20s.
This is the code of the lambda that tries to initialize the connection:
import { driver, structure } from 'gremlin';
import { getUrlAndHeaders } from 'gremlin-aws-sigv4/lib/utils';
const getConnectionDetails = () => {
if (process.env['USE_IAM'] == 'true') {
return getUrlAndHeaders(
process.env['CLUSTER_ENDPOINT'],
process.env['CLUSTER_PORT'],
{},
'/gremlin',
'wss'
);
} else {
const database_url =
'wss://' +
process.env['CLUSTER_ENDPOINT'] +
':' +
process.env['CLUSTER_PORT'] +
'/gremlin';
return { url: database_url, headers: {} };
}
};
const getConnection = () => {
const { url, headers } = getConnectionDetails();
const c = new driver.DriverRemoteConnection(url, {
mimeType: 'application/vnd.gremlin-v2.0+json',
headers: headers,
});
c._client._connection.on('close', (code, message) => {
console.info(`close - ${code} ${message}`);
if (code == 1006) {
console.error('Connection closed prematurely');
throw new Error('Connection closed prematurely');
}
});
return c;
};
This was working previously using more powerful hardware (r4.2xlarge) for the Neptune DB, but I changed that to a t3.medium to minimize cost, and it seems that's when the problem started. I find it hard to believe that this hardware change alone would cause the connection to time out, and it's odd that it continues to time out at exactly 20s. Any ideas?
Once parameter group values are changed, the instance you are connecting to still needs to be restarted for them to take effect. You can do this:
From the AWS Console (web page) for Neptune
From the CLI using aws neptune reboot-db-instance
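The same reboot can also be triggered programmatically. Here is a minimal sketch using the AWS SDK for JavaScript v3; the region and instance identifier are placeholders, not values from the question:

// Reboot the Neptune instance so the new parameter group values
// (e.g. the 60s query timeout) actually take effect.
import { NeptuneClient, RebootDBInstanceCommand } from '@aws-sdk/client-neptune';

const neptune = new NeptuneClient({ region: 'us-east-1' }); // your region
neptune
  .send(new RebootDBInstanceCommand({ DBInstanceIdentifier: 'my-neptune-instance' })) // your instance id
  .then(() => console.log('Reboot initiated'))
  .catch((err) => console.error('Reboot failed', err));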
I'm trying to make socket.io-client work in a Svelte front-end app to talk to an existing API server that already uses socket.io. After a number of challenges, I managed to make this work, but only with SvelteKit's preview mode, not in dev mode. Could someone with knowledge of these explain why, or suggest what I need to do to get it connecting in dev?
svelte 3.34.0
sveltekit next-169
socket.io(-client) 4.2.0
Basic code as follows, currently within a file $lib/db.js where I define a few stores that are pulled into the layout for general use:
import { io } from "socket.io-client";
import { browser } from '$app/env';
const initSocket = async () => {
console.log('creating socket...');
let socket = io('http://192.168.1.5:4000', { 'connect timeout': 5000 });
socket.on("connect", () => {
// always works in preview...
console.log('socket created with ID:', socket.id);
});
socket.on("connect_error", (error) => {
// permanently fired in dev...
console.error('Failed to connect', error);
});
socket.on("error", (error) => {
console.error('Error on socket', error);
});
socket.on("foo", data => {
// works in preview when server emits a message of type 'foo'..
console.log("FOO:", data);
});
};
if (browser) {
initSocket();
}
// stores setup and exports omitted..
With svelte-kit preview --host, I see the socket creation log message with the socket ID, and the same can be seen on the API server, where it logs the same ID. The socket works and data is received as expected.
With svelte-kit dev --host, however, the log message from socket.on("connect") is never output, and I just see an endless stream of error messages in the browser console from the socket.on("connect_error") call:
Failed to connect Error: xhr poll error
at XHR.onError (transport.js:31)
at Request.<anonymous> (polling-xhr.js:93)
at Request.Emitter.emit (index.js:145)
at Request.onError (polling-xhr.js:242)
at polling-xhr.js:205
Importantly, there is no attempt to actually contact the server at all. The server never receives a connection request, and wireshark/tcpdump confirm that no packet is ever transmitted to 192.168.1.5:4000.
Obviously, having to rebuild and re-run preview mode on each code change makes development pretty painful. Does anyone have insight into what the issue is here, or suggestions on how to proceed?
I had a similar problem and solved it by adding this code to svelte.config.js:
const config = {
kit: {
vite: {
resolve: {
alias: {
"xmlhttprequest-ssl": "./node_modules/engine.io-client/lib/xmlhttprequest.js",
},
},
},
},
};
The solution was provided by a comment in the Vite issue tracker.
Problem:
I'm using Cypress with Angular and Apollo GraphQL. I'm trying to mock the GraphQL server so I can write my tests using custom responses. The issue here is that all GraphQL calls go to a single endpoint, and Cypress doesn't yet have full network support by default to distinguish between these calls.
An example scenario would be:
access /accounts/account123
when the API is hit, two GraphQL calls are sent out - a getAccountDetails query and a getVehicles query
Tried:
Using one stub of the GraphQL endpoint per test. Not working, as it stubs all calls with the same stub.
Changing the app so that the query is appended 'on the go' to the URL, where I could intercept it in Cypress and therefore have a unique URL for each query. Not possible, as I can't change the app.
My only bet seems to be intercepting the XHR call, but I can't get it working. I tried all the XHR options outlined here, with no luck (it picks only the stub declared last and uses it for all calls): https://github.com/cypress-io/cypress-documentation/issues/122.
The answer from this question uses Fetch and therefore doesn't apply:
Mock specific graphql request in cypress when running e2e tests
Anyone got any ideas?
With Cypress 6.0, route and route2 are deprecated in favor of intercept. As written in the docs (https://docs.cypress.io/api/commands/intercept.html#Aliasing-individual-GraphQL-requests), you can mock the GraphQL requests in this way:
cy.intercept('POST', '/api', (req) => {
if (req.body.operationName === 'operationName') {
req.reply({ fixture: 'mockData.json'});
}
});
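Building on that snippet, here is a hedged sketch for this specific scenario: give each operation its own alias so the two calls fired by /accounts/account123 can be stubbed and awaited independently. The operation names and fixture file names are assumptions for illustration:

cy.intercept('POST', '/api', (req) => {
  if (req.body.operationName === 'getAccountDetails') {
    req.alias = 'getAccountDetails'; // dynamic alias per operation
    req.reply({ fixture: 'accountDetails.json' });
  } else if (req.body.operationName === 'getVehicles') {
    req.alias = 'getVehicles';
    req.reply({ fixture: 'vehicles.json' });
  }
});
cy.visit('/accounts/account123');
cy.wait(['@getAccountDetails', '@getVehicles']);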
For anyone else hitting this issue, there is a working solution with the newer Cypress release using cy.route2().
The requests are sent to the server, but the responses are stubbed/altered on return.
Later edit:
I noticed that the code version below doesn't alter the status code. If you need that, I'd recommend the version I left as a comment below.
Example code:
describe('account details', () => {
it('should display the account details correctly', () => {
cy.route2(graphEndpoint, (req) => {
let body = req.body;
if (body == getAccountDetailsQuery) {
req.reply((res) => {
res.body = getAccountDetailsResponse,
res.status = 200
});
} else if (body == getVehiclesQuery) {
req.reply((res) => {
res.body = getVehiclesResponse,
res.status = 200
});
}
}).as('accountStub');
cy.visit('/accounts/account123').wait('@accountStub');
});
});
Both your query and response should be in string format.
This is the cy command I'm using:
import * as hash from 'object-hash';
Cypress.Commands.add('stubRequest', ({ request, response, alias }) => {
const previousInterceptions = Cypress.config('interceptions');
const expectedKey = hash(
JSON.parse(
JSON.stringify({
query: request.query,
variables: request.variables,
}),
),
);
if (!(previousInterceptions || {})[expectedKey]) {
Cypress.config('interceptions', {
...(previousInterceptions || {}),
[expectedKey]: { alias, response },
});
}
cy.intercept('POST', '/api', (req) => {
const interceptions = Cypress.config('interceptions');
const receivedKey = hash(
JSON.parse(
JSON.stringify({
query: req.body.query,
variables: { ...req.body.variables },
}),
),
);
const match = interceptions[receivedKey];
if (match) {
req.alias = match.alias;
req.reply({ body: match.response });
}
});
});
With that, it is possible to stub exact request queries and variables:
import { MUTATION_LOGIN } from 'src/services/Auth';
...
cy.stubRequest({
request: {
query: MUTATION_LOGIN,
variables: {
loginInput: { email: 'test@user.com', password: 'test@user.com' },
},
},
response: {
data: {
login: {
accessToken: 'Bearer FakeToken',
user: {
username: 'Fake Username',
email: 'test@user.com',
},
},
},
});
...
Cypress.config is what makes it possible: it is a kind of global key/value getter/setter available in tests, which I'm using to store the interceptions, keyed by a hash of the expected request, together with their fake responses.
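As a small illustration of the getter/setter behaviour the command above relies on (the key and values below are placeholders):

// Store an arbitrary key/value pair during a test or custom command...
Cypress.config('interceptions', { someRequestHash: { alias: 'login', response: {} } });
// ...and read it back later from anywhere in the same run.
const stored = Cypress.config('interceptions');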
This helped me: https://www.autoscripts.net/stubbing-in-cypress/
But I'm not sure where the original source is.
A "fix" that I use is to create multiple aliases, with different names, on the same route, with wait on the alias between the different names, as many as requests you have.
I guess you can use aliases as already suggested in Answer by #Luis above like this. This is given in documentation too. Only thing you need to use here is multiple aliases as you have multiple calls and have to manage the sequence between them . Please correct me if i understood you question in other way ??
cy.route({
method: 'POST',
url: 'abc/*',
status: 200,
response: {whatever response is needed in mock }
}).as('mockAPI')
// HERE YOU SHOULD WAIT till the mockAPI is resolved.
cy.wait('@mockAPI')
I am trying to integrate socket.io with Strapi, but unfortunately I have been unable to do so; there is no proper tutorial or documentation covering this aspect.
I followed along with the only resource I found online which is:
https://medium.com/strapi/strapi-socket-io-a9c856e915a6
But I think the article is outdated. I can't seem to run the code mentioned in it without running into tonnes of errors.
Below is my attempt to implement it. I have been trying to connect to it through a Chrome websocket plugin (Smart Websocket Client), but I am not getting any response when I run the server.
I'm totally in the dark. Any help will be appreciated.
module.exports = ()=> {
// import socket io
var io = require('socket.io')(strapi.server)
console.log(strapi.server) //undefined
// listen for user connection
io.on('connect', socket => {
socket.send('Hello!');
console.log("idit")
// or with emit() and custom event names
socket.emit('greetings', 'Hey!', { 'ms': 'jane' }, Buffer.from([4, 3, 3, 1]));
// handle the event sent with socket.send()
socket.on('message', (data) => {
console.log(data);
});
// handle the event sent with socket.emit()
socket.on('salutations', (elem1, elem2, elem3) => {
console.log(elem1, elem2, elem3);
});
});
};
So I found the solution. Yay. I'll put it here just in case anybody needs it.
bootstrap.js
module.exports = async () => {
process.nextTick(() =>{
var io = require('socket.io')(strapi.server);
io.on('connection', async function(socket) {
console.log(`a user connected`)
// send message on user connection
socket.emit('hello', JSON.stringify({ message: 'Hello from Strapi' })); // the original snippet called a project-specific service here (strapi.services.profile.update(...)); substitute your own call
// listen for user diconnect
socket.on('disconnect', () =>{
console.log('a user disconnected')
});
});
strapi.io = io; // register socket io inside strapi main object to use it globally anywhere
})
};
Found this at: https://github.com/strapi/strapi/issues/5869#issuecomment-619508153
Apparently, strapi.server is not available when the server starts, so you have to use process.nextTick, which waits for strapi.server to be initialized.
I'll also add a few questions that I faced when setting this up.
How do I connect from an external client like Nuxt, Vue, or React?
You just have to connect through "http://localhost:1337", which is the usual address for Strapi.
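For any plain browser client, a minimal sketch (assuming socket.io-client is installed and the bootstrap code above emits the 'hello' event) looks like this:

import { io } from 'socket.io-client';

const socket = io('http://localhost:1337'); // Strapi's default address
socket.on('hello', (msg) => console.log(msg)); // 'hello' is emitted by the bootstrap code above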
I am using Nuxt as my client side, and this is how I set up socket.io on the client side.
I first installed nuxt-socket-io through npm.
Then I edited the nuxt.config file as per its documentation:
modules:[
...
'nuxt-socket-io',
...
],
io: {
// module options
sockets: [
{
name: 'main',
url: 'http://localhost:1337',
},
],
},
And then I finally added a listener in one of my pages:
created() {
this.socket = this.$nuxtSocket({})
this.socket.on('hello', (msg, cb) => {
console.log('SOCKET HI')
console.log(msg)
})
},
And it works.
A clean way to integrate third-party services into Strapi is to use hooks. They are loaded once during the server boot. In this case, we will create a local hook.
The following example works with strapi@3.6.
Create a hook for socket.io at ./hooks/socket.io/index.js
module.exports = strapi => {
return {
async initialize() {
const ioServer = require('socket.io')(strapi.server, {
cors: {
origin: process.env['FRONT_APP_URL'],
methods: ['GET', 'POST'],
/* ...other cors options */
}
})
ioServer.on('connection', function(socket) {
socket.emit('hello', `Welcome ${socket.id}`)
})
/* HANDLE CLIENT SOCKET LOGIC HERE */
// store the server.io instance to global var to use elsewhere
strapi.services.ioServer = ioServer
},
}
}
Enable the new hook in order for Strapi to load it - ./config/hook.js
module.exports = {
settings: {
'socket.io': {
enabled: true,
},
},
};
That's done. You can access the websocket server inside ./config/functions/bootstrap.js or models' lifecycle hooks.
// ./api/employee/models/employee.js
module.exports = {
lifecycles: {
async afterUpdate(result, params, data) {
strapi.services.ioServer.emit('update:employee', result)
},
},
};
For those who are looking for the answer using Strapi version 4:
var io = require("socket.io")(strapi.server.httpServer)
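For completeness, here is a minimal sketch of where that line would live in Strapi v4 - the bootstrap function of ./src/index.js. This is an adaptation of the v3 examples above, not official Strapi documentation:

// ./src/index.js (Strapi v4)
module.exports = {
  bootstrap({ strapi }) {
    var io = require('socket.io')(strapi.server.httpServer, {
      cors: { origin: process.env['FRONT_APP_URL'], methods: ['GET', 'POST'] }, // adjust CORS to your front end
    });
    io.on('connection', (socket) => {
      socket.emit('hello', `Welcome ${socket.id}`);
    });
    strapi.io = io; // keep a global reference, as in the v3 examples above
  },
};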
Good day! I would like to implement a convenient method for uploading multiple files to an SFTP server, with a callback for each file as it is uploaded.
I have already implemented some code that works, but I noticed a memory leak that prevents the connection to the SFTP server from closing successfully after all the uploads.
It is not critical for me to constantly open and close the connection.
I tweaked the code a little bit from here: how do I send (put) multiple files using nodejs ssh2-sftp-client?
code:
function sftpPutFiles(config, files, pathToDir, callbackStep, callbackFinish, callbackError) {
let Client = require('ssh2-sftp-client');
let PromisePool = require('es6-promise-pool');
const sendFile = (config, pathFrom, pathTo) => {
return new Promise(function (resolve, reject) {
let sftp = new Client();
console.log(pathFrom, pathTo);
sftp.on('keyboard-interactive', (name, instructions, instructionsLang, prompts, finish) => { finish([config.password]); });
sftp.connect(config).then(() => {
return sftp.put(pathFrom, pathTo);
}).then(() => {
console.log('finish '+pathTo);
callbackStep(pathTo);
sftp.end();
resolve(pathTo);
}).catch((err) => {
console.log(err, 'catch error');
callbackError(err);
});
});
};
// Create a pool.
let indexFile = 0;
let pool = new PromisePool(() => {
while (indexFile < files.length) {
let file = files[indexFile];
indexFile++;
return sendFile(config, file.path, `${pathToDir}/${file.name}`);
}
return null;
}, 10);
pool.start().then(function () {
console.log({"message":"OK"}); // res.send('{"message":"OK"}');
callbackFinish();
});
}
Usage:
input.addEventListener('change', function (e) {
e.preventDefault();
sftpPutFiles(
{host: '192.168.2.201', username: 'crestron', password: 'ehAdmin'},
this.files,
`./Program01/test/`,
pathTo => {
let tr = document.createElement('tr');
let bodyTable = document.querySelector('.body');
tr.innerHTML = `<td>${bodyTable.children.length+1}</td><td>${pathTo}</td><td>OK</td>`;
bodyTable.appendChild(tr);
}, () => {
alert('All files uploaded');
},
err => {
alert('Error: ' + err);
}
);
});
If there is an error uploading a file to the SFTP server, the connection does not close and I cannot reconnect when I open the custom console. I would like to rewrite the code with RxJS for better maintainability; I think that would solve the problem of closing the connection and keep the application responsive.
Make sure you're using the latest version of ssh2-sftp-client - there have been a fair number of updates recently, including fixes to handle errors more consistently and to ensure connections are closed correctly (v4.1.0).
You are using sftp.on('keyboard-interactive', ...). Nothing in the module emits events of this type, so this listener will never fire.
If you just want to upload files, use the fastPut() method. It is much faster. Make sure the destination path includes the remote file name and not just the remote directory.
Have a look at Promise.all(). You could use this instead of the promise-pool and I think it would be a lot cleaner. Something like (untested)
const path = require('path');
const Client = require('ssh2-sftp-client');

let localPath = '/path/to/src-dir';
let remotePath = '/path/to/dst-dir';
let files = ['file1.txt', 'file2.txt', 'file3.txt'];
let client = new Client();
client.connect(config)
.then(() => {
let promises = [];
files.forEach(f => {
let from = path.join(localPath, f);
let to = path.join(remotePath, f);
promises.push(client.fastPut(from, to));
});
return Promise.all(promises);
}).then(res => { // res is an array of resolved promise results
client.end();
}).catch(err => {
// deal with error
});
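Since the original question also asked for a per-file callback and complained that the connection never closes after an error, here is a hedged variant of the sketch above using Promise.allSettled and a finally block. The function and parameter names mirror the question's code but are otherwise illustrative:

const Client = require('ssh2-sftp-client');

async function sftpPutFiles(config, files, remoteDir, callbackStep) {
  const client = new Client();
  await client.connect(config);
  try {
    // allSettled keeps going when one upload fails, so every file gets a result
    const results = await Promise.allSettled(
      Array.from(files).map(async (file) => {
        const remotePath = `${remoteDir}/${file.name}`;
        await client.fastPut(file.path, remotePath);
        callbackStep(remotePath); // per-file progress callback
        return remotePath;
      })
    );
    return results; // array of { status: 'fulfilled' | 'rejected', ... }
  } finally {
    await client.end(); // the connection is closed even if an upload throws
  }
}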