I'm using Crossbar to test WebSockets and long polling.
But every time I try using long polling as the default transport, whatever settings I set, I get a "connection lost" every 2 seconds in my console.
By the way, it works perfectly with WebSockets.
Here are the settings I want to test:
On the server side:
{
    "lp": {
        "type": "longpoll",
        "options": {
            "request_timeout": 0,
            "session_tiemout": 0,
            "queue_limit_bytes": 0,
            "queue_limit_messages": 0
        }
    }
}
On the client side:
var connection = new autobahn.Connection({
    transports: [{
        url: [my url],
        type: "longpoll",
        max_retries: 1,
        initial_retry_delay: 1,
        retry_delay_growth: 3,
        retry_delay_jitter: 3
    }], ...
I'm using Python on the server side and Chrome 43 as my default browser (also tested on Firefox).
Is something wrong in my settings?
Sorry, I cannot replicate this. I'm using the longpoll example (https://github.com/crossbario/crossbarexamples/tree/master/longpoll) and have modified the config and the connection data to mirror what you list here. (I assume that the "tiemout" is just a typo here, since Crossbar.io doesn't start with this.)
This works fine in Chrome 43.
My best guess is that the problem is with something you didn't list.
My suggestion: Start from the example, and see whether this works for you.
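For reference, here is the same transport block with the spelling fixed and finite limits filled in; the values are illustrative, not recommendations (this is a config fragment, not the example repo's exact file):

```json
{
    "lp": {
        "type": "longpoll",
        "options": {
            "request_timeout": 10,
            "session_timeout": 30,
            "queue_limit_bytes": 131072,
            "queue_limit_messages": 100
        }
    }
}
```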
I'm trying to do a performance test on a SPA with a frontend in React, deployed with Netlify.
As a backend we're using Hasura Cloud GraphQL (std version) https://hasura.io/, where everything from the client goes directly through Hasura to the DB.
The DB is Postgres, hosted on Heroku (Standard 0 tier).
We're hoping to be able to handle around 800 simultaneous users.
The problem is that I'm lost about how to do it, or whether I'm doing it correctly, seeing how most of our operations are "subscriptions/mutations" that I had to transform into queries. I tried doing those tests with k6 and JMeter, but I'm not sure I'm doing them properly.
k6 test
At first, I did a quick search and collected around 10 subscriptions that are commonly used. Then I tried to create a performance test with k6 (https://k6.io/docs/using-k6/http-requests/), but I wasn't able to create a working subscription test, so I just transformed each subscription into a query and performed an http.post with this setup:
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
    stages: [
        { duration: '30s', target: 75 },
        { duration: '120s', target: 75 },
        { duration: '60s', target: 50 },
        { duration: '30s', target: 30 },
        { duration: '10s', target: 0 }
    ]
};

export default function () {
    var res = http.post(prod,
        JSON.stringify({
            query: listaQueries.GetDesafiosCursosByKey(
                keys.desafioCursoKey
            )
        }), params);
    sleep(1);
}
I did this for every query and ran each test individually. Unfortunately, the numbers I got were bad, and somehow our test environment was getting better times than production. (The only difference, AFAIK, is that we're using Hasura Cloud for production.)
I tried to implement WebSockets, but I couldn't get them working and configured for a stress/load test.
K6 result
Jmeter test
After that, I tried something similar with JMeter, but again I couldn't figure out how to set up a subscription test (after a while, I read in a blog that JMeter doesn't support it:
https://qainsights.com/deep-dive-into-graphql-in-jmeter/ ), so I simply transformed all subscriptions into queries and tried to do the same, but the numbers I was getting were different and much higher than with k6.
Jmeter query Config 1
Jmeter query config 2
Jmeter thread config
Questions
I'm not sure if I'm doing this correctly, or whether transforming every subscription into a query and performing an HTTP request is a valid approach. (At least I know that those queries return the data correctly.)
Should I just increase the number of VUs/threads until I get a constant timeout, to simulate a stress test? Some tests were causing a GraphQL error on the website, and others were producing a
""WARN[0059] Request Failed error="Post \"https://xxxxxxx-xxxxx.herokuapp.com/v1/graphql\": EOF""
in the k6 console.
Or should I just give up on k6/JMeter and look for another tool to perform these tests?
Thank you in advance, and sorry for my English and explanation; I'm a complete newbie at this.
I'm not sure if i'm doing it correctly, if transforming every
subscription into a query and perform a http request is a correct
approach for it. (At least I know that those queries return the data
correctly).
Ideally you would be using WebSocket as that is what actual clients will most likely be using.
For code samples, check out the answer here.
Here's a more complete example utilizing a main.js entry script with modularized subscription code in subscriptions/bikes-brands.js. It also uses the Httpx library to set a global request header:
// main.js
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.5/index.js';
import { getBikeBrandsByIdSub } from './subscriptions/bikes-brands.js';

const session = new Httpx({
    baseURL: `http://54.227.75.222:8080`
});

const pauseMin = 2;
const pauseMax = 6;

export const options = {};

export default function () {
    session.addHeader('Content-Type', 'application/json');
    getBikeBrandsByIdSub(1);
}
// subscriptions/bikes-brands.js
import ws from 'k6/ws';

// defined here so the module is self-contained
const wsUri = 'wss://54.227.75.222:8080/v1/graphql';

/* using string concatenation */
export function getBikeBrandsByIdSub(id) {
    const query = `
        subscription getBikeBrandsByIdSub {
            bikes_brands(where: {id: {_eq: ${id}}}) {
                id
                brand
                notes
                updated_at
                created_at
            }
        }
    `;

    const subscribePayload = {
        id: "1",
        payload: {
            extensions: {},
            operationName: "query",
            query: query,
            variables: {},
        },
        type: "start",
    };

    const initPayload = {
        payload: {
            headers: {
                "content-type": "application/json",
            },
            lazy: true,
        },
        type: "connection_init",
    };

    console.debug(JSON.stringify(subscribePayload));

    // start a WS connection
    const res = ws.connect(wsUri, initPayload, function (socket) {
        socket.on('open', function () {
            console.debug('WS connection established!');
            // send the connection_init:
            socket.send(JSON.stringify(initPayload));
            // send the subscription:
            socket.send(JSON.stringify(subscribePayload));
        });

        socket.on('message', function (message) {
            let messageObj;
            try {
                messageObj = JSON.parse(message);
            } catch (err) {
                console.warn('Unable to parse WS message as JSON: ' + message);
                return;
            }
            if (messageObj.type === 'data') {
                console.log(`${messageObj.type} message received by VU ${__VU}: ${Object.keys(messageObj.payload.data)[0]}`);
            }
            console.log(`WS message received by VU ${__VU}:\n` + message);
        });
    });
}
Should i just increase the number of VUS/threads until i get a
constant timeout to simulate a stress test?
Timeouts and errors that only happen under load are signals that you may be hitting a bottleneck somewhere. Do you only see the EOFs under load? These are basically the server sending back incomplete responses/closing connections early which shouldn't happen under normal circumstances.
My expectation is that your test should replicate real user activity as closely as possible. I doubt that real users will be sending requests to GraphQL directly, and a well-behaved load test must replicate real-life application usage as closely as possible.
So I believe you should move to the HTTP protocol level and mimic the network footprint of a real browser instead of trying to come up with individual GraphQL queries.
With regards to the JMeter vs. k6 differences: it might be that k6 produces higher throughput on the same hardware when firing requests at maximum speed, as evidenced by the benchmark in the Open Source Load Testing Tools 2021 article. However, given that you're trying to simulate real users using real browsers, and real users don't hammer the application non-stop (they need some time to "think" between operations), you should be getting the same number of requests from both load testing tools. If JMeter doesn't give you the load you want, make sure to follow JMeter Best Practices and/or consider running it in distributed mode.
My goal is to add a token to the Socket.IO reconnection from the client (it works fine on the first connection, but the query is null on reconnection if the server restarted while the client stayed on).
The documentation indicates I need to use the Manager to customize the reconnection behavior (and add a query parameter).
However, I'm having trouble figuring out how to use this Manager: I can't find a way to connect to the server.
What I was using without Manager (works fine):
this.socket = io({
    query: {
        token: 'abc',
    }
});
Version with the Manager:
const manager = new Manager(window.location, {
    hostname: "localhost",
    path: "/socket.io",
    port: "8080",
    query: {
        auth: "123"
    }
});
So I tried many approaches for the URL (nothing, '', 'http://localhost:8080', 'http://localhost:8080/socket.io'), as well as adding these lines to the options:
hostname: "localhost",
path: "/socket.io",
port: "8080"
But I couldn't connect.
The documentation indicates the default URL is:
url (String) (defaults to window.location)
For some reason, using window.location as the URL refreshes the page infinitely, no matter whether I pass it to the io() factory or to new Manager.
I am using socket.io-client 3.0.3.
Could someone explain to me what I'm doing wrong?
Thanks!
Updating to 3.0.4 solved the initial problem, which was to be able to send the token in the initial query.
I also found this code in the doc, which solves the problem:
this.socket.io.on('reconnect_attempt', () => {
    this.socket.io.opts.query = {
        token: 'fgh'
    };
});
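That workaround can be wrapped in a small helper (a sketch; attachReconnectToken and getToken are hypothetical names, and socket is a socket.io-client socket whose underlying Manager is exposed as socket.io):

```javascript
// Refresh the handshake query before every reconnection attempt.
// socket.io is the Manager; its opts.query is what the client
// sends with the next handshake.
function attachReconnectToken(socket, getToken) {
    socket.io.on('reconnect_attempt', () => {
        socket.io.opts.query = { token: getToken() };
    });
}
```

Called once right after io(...), this keeps the token current even if the server restarts in between.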
However, it doesn't solve the problem of the Manager, which just doesn't work. I feel like it should be removed from the docs. I illustrated the problem in this repo:
https://github.com/Yvanovitch/socket.io/blob/master/examples/chat/public/main.js
Currently I have a videochat web app using WebRTC, written in React, and deployed on an AWS EC2 instance. The videochat works with two users on two different computers on the same local or internet network, and we can easily talk to and see each other.
However, when I try to videochat with another user who is on a different network, the videochat stops working, and I get an error message in my Chrome browser console like this:
Uncaught (in promise) DOMException: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Error processing ICE candidate
and the other user gets:
ICE failed, add a STUN server and see about:webrtc for more details
I believe the issue is with the TURN server. However, I have set up the TURN server using coturn (https://github.com/coturn/coturn) on an AWS EC2 instance, and it seems to work when I test it on https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/ with the same credentials and check for relay candidates.
I deployed the TURN server using instructions from this stackoverflow post:
How to create stun turn server instance using AWS EC2
I have also allowed inbound port access for UDP and TCP for a large range of ports on AWS security groups.
Some relevant code; this function processes the responses I get back from a WebRTC signalling server:
/**
 * Parse a broadcast message and reply back with
 * the appropriate details
 */
receiveBroadcast(packetObject) {
    var payload
    try {
        payload = JSON.parse(packetObject.Payload)
    } catch (err) {
        payload = packetObject.Payload
    }

    if (payload.Type === 'Ice Offer') {
        // Set the remote description and construct an ICE answer
        var icePacket = new this.rtcSessionDescription({
            type: payload.IcePacket.type,
            sdp: payload.IcePacket.sdp,
        })
        this.peerConnection.setRemoteDescription(icePacket, function () {
            this.peerConnection.createAnswer(this.onCreateAnswerSuccess.bind(this), this.onCreateSessionDescriptionError)
        }.bind(this), this.onSetSessionDescriptionError)
    } else if (payload.Type === 'Ice Answer') {
        // Set the remote description
        var icePacket = new this.rtcSessionDescription({
            type: payload.IcePacket.type,
            sdp: payload.IcePacket.sdp,
        })
        this.peerConnection.setRemoteDescription(icePacket, function () {
            this.onSetRemoteSuccess()
        }.bind(this), this.onSetSessionDescriptionError)
    } else if (payload.Type === 'Ice Candidate') {
        console.log('ICE payload :')
        console.log(payload)
        // Add the candidate to the list of ICE candidates
        var candidate = new this.rtcIceCandidate({
            sdpMLineIndex: payload.sdpMLineIndex,
            sdpMid: payload.sdpMid,
            candidate: payload.candidate,
        })
        // surface rejected candidates instead of an unhandled rejection
        this.peerConnection.addIceCandidate(candidate).catch(function (err) {
            console.log('addIceCandidate error: ' + err)
        })
    }
}
It's mainly the last line that is not working.
I set up console.logs to see what the process looks like:
Local stream set
bundle.js:1 Video Set
bundle.js:1 setRemoteDescription complete
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "candidate:0 1 UDP 2122252543 xxx.xxx.x.xx 57253 typ host", sdpMid: "0"}
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "candidate:1 1 UDP 2122187007 xx.xxx.x.xx 53622 typ host", sdpMid: "0"}
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "candidate:2 1 TCP 2105524479 xxx.xxx.x.xx 9 typ host tcptype active", sdpMid: "0"}
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "candidate:3 1 TCP 2105458943 xx.xxx.x.xx 9 typ host tcptype active", sdpMid: "0"}
bundle.js:1 ICE payload :
bundle.js:1 {Type: "Ice Candidate", sdpMLineIndex: 0, candidate: "", sdpMid: "0"}
Figured it out; it was a silly mistake: the config JSON I was using to specify the ICE servers had an extra layer, and WebRTC just couldn't process it. However, the WebRTC error messages are pretty much unusable and not very informative.
So for any future people stuck on debugging WebRTC, these are the steps I figured out and the resources I used, so that others can better debug their problems:
1) Use Chrome
2) Open chrome://webrtc-internals/ in a new tab
3) Open your videochat app in another tab and observe what's happening in chrome://webrtc-internals/
4) Make sure not to close the tab with the videochat app; if you do, chrome://webrtc-internals will refresh
5) Open a working videochat app like https://morning-escarpment-67980.herokuapp.com/, which is built from this GitHub repo: https://github.com/nguymin4/react-videocall
6) Compare the differences between your app's chrome://webrtc-internals and the working app's
7) Use this resource to help understand error messages and more details: https://blog.codeship.com/webrtc-issues-and-how-to-debug-them/
Hopefully this helps.
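To illustrate the "extra layer" mistake described above: RTCPeerConnection expects iceServers as a top-level key of its configuration object (the server URLs and credentials below are placeholders):

```javascript
// Wrong: the ICE server list is wrapped in an extra object layer,
// so RTCPeerConnection never sees `iceServers`.
const badConfig = {
    config: {
        iceServers: [{ urls: 'stun:stun.example.com:3478' }]
    }
};

// Right: a flat config, passed as new RTCPeerConnection(goodConfig).
const goodConfig = {
    iceServers: [
        { urls: 'stun:stun.example.com:3478' },
        {
            urls: 'turn:turn.example.com:3478',
            username: 'user',        // placeholder credentials
            credential: 'password'
        }
    ]
};
```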
So far I've managed to create two webhooks using their official gem (https://github.com/bigcommerce/bigcommerce-api-ruby) with the following events:
store/order/statusUpdated
store/app/uninstalled
The destination URL is a localhost tunnel managed by ngrok (the https version).
status_update_hook = Bigcommerce::Webhook.create(
  connection: connection,
  headers: { is_active: true },
  scope: 'store/order/statusUpdated',
  destination: 'https://myapp.ngrok.io/bigcommerce/notifications'
)
uninstall_hook = Bigcommerce::Webhook.create(
  connection: connection,
  headers: { is_active: true },
  scope: 'store/app/uninstalled',
  destination: 'https://myapp.ngrok.io/bigcommerce/notifications'
)
The webhooks seem to be active and correctly created, as I can retrieve and list them:
Bigcommerce::Webhook.all(connection:connection)
I manually created an order in my store dashboard, but no matter which state I change it to, or how many times, no notification is fired. Am I missing something?
The exception that I'm seeing in the logs is:
ExceptionMessage: true is not a valid header value
The "is_active" flag should be sent as part of the request body. Your headers, if you choose to include them, would be an arbitrary key/value pair that you can check at runtime to verify the hook's origin.
Here's an example request body:
{
    "scope": "store/order/*",
    "headers": {
        "X-Custom-Auth-Header": "{secret_auth_password}"
    },
    "destination": "https://app.example.com/orders",
    "is_active": true
}
Hope this helps!
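To make that concrete on the Ruby side, the attributes should be shaped so that is_active is a top-level body field, with headers reserved for your own verification pair (the header name and secret below are illustrative):

```ruby
# Webhook attributes as they should appear in the request body:
# is_active at the top level, headers reserved for a custom
# key/value pair that BigCommerce echoes back on each callback.
webhook_body = {
  scope: 'store/order/statusUpdated',
  destination: 'https://myapp.ngrok.io/bigcommerce/notifications',
  is_active: true,
  headers: {
    'X-Custom-Auth-Header' => 'my-secret-value' # illustrative secret
  }
}
```

Note that is_active: true is no longer inside headers, which is what triggered the "true is not a valid header value" exception.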
I've got my WDS running on port 9000, and the webpack bundles are located under /dist/. I've got a back-end server running on port 55555.
Is there a way to get WDS to ignore (i.e., proxy to 55555) every call except those starting with /dist/?
I've got the following:
devServer: {
    port: 9000,
    proxy: {
        "/dist": "http://localhost:9000",
        "/": "http://localhost:55555"
    }
}
Trouble is, the root ("/") rule just overrides everything...
Thanks for any advice you can offer.
UPDATE:
I've gotten a little farther with the following:
proxy: {
    "/": {
        target: "http://localhost:55555",
        bypass: function (req, res, proxyOptions) {
            return (req.url.indexOf("/dist/") !== -1);
        }
    }
},
But the bypass just seems to kill the connection. I was hoping it would tell the (9000) server not to proxy when the condition is true. Does anybody know a good source explaining "bypass"?
Webpack allows glob syntax for these patterns. As a result, you should be able to use an exclusion to match "all-but-dist".
Something like this may work (sorry I don't have webpack in front of me at the moment):
devServer: {
    port: 9000,
    proxy: {
        "!/dist/**/*": "http://localhost:55555"
    }
}
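Alternatively, if the glob exclusion doesn't work in your webpack version: per the webpack-dev-server docs, bypass should return null/undefined to continue proxying, and return a path to serve that request from the dev server instead; returning a boolean true (as in the update above) isn't one of the documented return values. A sketch of the intended logic, using the paths from the question:

```javascript
// Bypass function for the "/" proxy rule. Returning the request's
// own URL serves it from the dev server (your /dist/ bundles);
// returning null continues proxying to the backend on :55555.
function bypass(req /*, res, proxyOptions */) {
    if (req.url.indexOf('/dist/') === 0) {
        return req.url; // serve locally from webpack's output
    }
    return null; // forward everything else to the backend
}

// devServer: {
//     port: 9000,
//     proxy: {
//         "/": { target: "http://localhost:55555", bypass: bypass }
//     }
// }
```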