Node.js Request filestream EMFILE error - Windows

request.post({url: 'http://service.com/upload', form: {data: fs.createReadStream('T:/a.png')}});
After running this every second for ~10 minutes I get this error: Error: EMFILE (too many open files)
Why? Am I supposed to be manually closing these streams? Why wouldn't request do that? And if it doesn't, how do I close them?
EDIT: I'm not making a lot of async calls; only 1 request is active at a time.
EDIT: No streams are ever being closed; it errors out exactly after I create 2046 streams. Why aren't they being closed?

Look, it depends on what kind of service you are building.
If, for example, you execute that fs.createReadStream many times asynchronously (like thousands of times in parallel) without the streams ever being consumed or closed, Node.js will throw that error once you hit the limit on open file descriptors.
Try to debug your application like this, keeping count of the number of open streams:
var fs = require("fs");

var streams = 0;
var i;

try {
  for (i = 0; i < 10000; i++) {
    streams++;
    // 'end' fires only once the stream has actually been read to completion,
    // so the counter only goes down for streams that get consumed
    fs.createReadStream('./file.png').once('end', function () { streams--; });
  }
} catch (e) {
  console.log(streams);
}
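To address the "how do I close them?" part of the question: a file descriptor opened by fs.createReadStream is released either when the stream is fully consumed or when it is destroyed. Here is a minimal sketch of both, my own illustration rather than part of the original answer (it reuses the T:/a.png path from the question):

var fs = require('fs');

// Option 1: consume the stream; the descriptor is released once it has been read.
var consumed = fs.createReadStream('T:/a.png');
consumed.once('close', function () { console.log('descriptor released after consumption'); });
consumed.resume(); // or pipe it into the upload request, which also consumes it

// Option 2: explicitly close a stream you no longer need.
var abandoned = fs.createReadStream('T:/a.png');
abandoned.destroy(); // closes the underlying file descriptor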

Related

Load/stress test in a SPA with Hasura Cloud Graphql as a backend and subscriptions

I'm trying to do a performance test on a SPA with a frontend in React, deployed with Netlify.
As a backend we're using Hasura Cloud GraphQL (std version) https://hasura.io/, where everything from the client goes directly through Hasura to the DB.
The DB is Postgres hosted on Heroku (Standard 0 tier).
We're hoping to be able to handle around 800 simultaneous users.
The problem is that I'm at a loss about how to do it, or whether I'm doing it correctly, seeing how most of our operations are subscriptions/mutations that I had to transform into queries. I tried doing those tests with k6 and JMeter but I'm not sure if I'm doing them properly.
k6 test
At first, I did a quick search and collected around 10 subscriptions that are commonly used. Then I tried to create a performance test with k6 https://k6.io/docs/using-k6/http-requests/, but I wasn't able to create a working subscription test, so I just transformed each subscription into a query and performed an http.post with this setup:
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '30s', target: 75 },
    { duration: '120s', target: 75 },
    { duration: '60s', target: 50 },
    { duration: '30s', target: 30 },
    { duration: '10s', target: 0 }
  ]
};

export default function () {
  // prod, listaQueries, keys and params are defined elsewhere in my project
  var res = http.post(prod,
    JSON.stringify({
      query: listaQueries.GetDesafiosCursosByKey(
        keys.desafioCursoKey
      )}), params);
  sleep(1)
}
I did this for every query and ran each test individually. Unfortunately, the numbers I got were bad, and somehow our test environment was getting better times than production (the only difference, as far as I know, is that production uses Hasura Cloud).
I tried to implement WebSockets, but I couldn't get them to work and configure them for a stress/load test.
(Screenshot: k6 results)
JMeter test
After that, I tried something similar with JMeter, but again I couldn't figure out how to set up a subscription test (after a while, I read in a blog post that JMeter doesn't support it:
https://qainsights.com/deep-dive-into-graphql-in-jmeter/ ), so I simply transformed all subscriptions into queries and tried to do the same, but the numbers I was getting were different and much higher than with k6.
(Screenshots: JMeter query configs 1 and 2, and thread group config)
Questions
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is a correct approach. (At least I know that those queries return the data correctly.)
Should I just increase the number of VUs/threads until I get a constant timeout, to simulate a stress test? Some tests were causing a GraphQL error on the website, and others were producing
"WARN[0059] Request Failed error="Post \"https://xxxxxxx-xxxxx.herokuapp.com/v1/graphql\": EOF""
in the k6 console.
Or should I just give up on k6/JMeter and look for another tool to perform these tests?
Thank you in advance, and sorry for my English and my explanation, but I'm a complete newbie at this.
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is a correct approach. (At least I know that those queries return the data correctly.)
Ideally you would be using WebSocket as that is what actual clients will most likely be using.
For code samples, check out the answer here.
Here's a more complete example utilizing a main.js entry script with modularized subscription code in subscriptions/bikes-brands.js. It also uses the Httpx library to set a global request header:
// main.js
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.5/index.js';
import { getBikeBrandsByIdSub } from './subscriptions/bikes-brands.js';

const session = new Httpx({
  baseURL: `http://54.227.75.222:8080`
});

const wsUri = 'wss://54.227.75.222:8080/v1/graphql';
const pauseMin = 2;
const pauseMax = 6;

export const options = {};

export default function () {
  session.addHeader('Content-Type', 'application/json');
  getBikeBrandsByIdSub(1);
}
// subscriptions/bikes-brands.js
import ws from 'k6/ws';

// WebSocket endpoint (defined here so this module is self-contained)
const wsUri = 'wss://54.227.75.222:8080/v1/graphql';

/* using string concatenation */
export function getBikeBrandsByIdSub(id) {
  const query = `
    subscription getBikeBrandsByIdSub {
      bikes_brands(where: {id: {_eq: ${id}}}) {
        id
        brand
        notes
        updated_at
        created_at
      }
    }
  `;

  const subscribePayload = {
    id: "1",
    payload: {
      extensions: {},
      operationName: "getBikeBrandsByIdSub",
      query: query,
      variables: {},
    },
    type: "start",
  }

  const initPayload = {
    payload: {
      headers: {
        "content-type": "application/json",
      },
      lazy: true,
    },
    type: "connection_init",
  };

  console.debug(JSON.stringify(subscribePayload));

  // start a WS connection
  const res = ws.connect(wsUri, initPayload, function (socket) {
    socket.on('open', function () {
      console.debug('WS connection established!');
      // send the connection_init:
      socket.send(JSON.stringify(initPayload));
      // send the subscription:
      socket.send(JSON.stringify(subscribePayload));
    });

    socket.on('message', function (message) {
      let messageObj;
      try {
        messageObj = JSON.parse(message);
      }
      catch (err) {
        console.warn('Unable to parse WS message as JSON: ' + message);
        return;
      }
      if (messageObj.type === 'data') {
        console.log(`${messageObj.type} message received by VU ${__VU}: ${Object.keys(messageObj.payload.data)[0]}`);
      }
      console.log(`WS message received by VU ${__VU}:\n` + message);
    });
  });
}
Should I just increase the number of VUs/threads until I get a constant timeout, to simulate a stress test?
Timeouts and errors that only happen under load are signals that you may be hitting a bottleneck somewhere. Do you only see the EOFs under load? These are basically the server sending back incomplete responses or closing connections early, which shouldn't happen under normal circumstances.
My expectation is that your test should replicate real user activity as closely as possible. I doubt that real users will be sending requests to GraphQL directly, and a well-behaved load test must replicate real-life application usage as closely as possible.
So I believe you should move to the HTTP protocol level and mimic the network footprint of a real browser instead of trying to come up with individual GraphQL queries.
With regards to the JMeter and k6 differences, it might be the case that k6 produces higher throughput on the same hardware when running requests at maximum speed, as evidenced by the benchmark in the Open Source Load Testing Tools 2021 article. However, since you're trying to simulate real users using real browsers, and real users don't hammer the application non-stop (they need some time to "think" between operations), you should be getting the same number of requests from both load testing tools. If JMeter doesn't give you the load you want to conduct, make sure to follow JMeter Best Practices and/or consider running it in distributed mode.
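To illustrate the "think time" point, here is a minimal k6 sketch, my own and not taken from the answers above (the URL and query body are placeholders), that paces each virtual user with a randomized pause between iterations instead of hammering the endpoint:

import http from 'k6/http';
import { sleep } from 'k6';

const pauseMin = 2; // seconds of think time after each iteration
const pauseMax = 6;

export const options = {
  stages: [{ duration: '60s', target: 50 }],
};

export default function () {
  // placeholder endpoint and query body
  http.post('https://example.com/v1/graphql',
    JSON.stringify({ query: '{ __typename }' }),
    { headers: { 'Content-Type': 'application/json' } });

  // random pause so virtual users behave more like real ones
  sleep(pauseMin + Math.random() * (pauseMax - pauseMin));
}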

What defines the log type (Default, Alert, Error, Critical, etc) on logs out of a Cloud Run container instance?

I have an Express server that is hosted on Cloud Run in a Docker container.
This is the screen where we can view the logs that come out of the deployed instance.
What defines the "type" of the log message, as in Alert, Critical, Error, Warning, Debug, Info, Notice and Default?
If I log with console.error, will it show up as an Error?
What is the documentation on this subject?
UPDATE: Trying to log an error with the type Error
const logError = (msg: string | Error) => console.error(`[test:error] ${msg}`);

const testError = (): void => {
  try {
    throw new Error("TEST ERROR");
  }
  catch (err) {
    const someError = new Error("HELLO ERROR");
    console.log(someError);
    console.error(someError);
    logError(err);
    logError("ERROR STRING MSG");
  }
};
These were the results: not a single log with the type Error. Is this not supposed to be triggered by our code? When should it happen?
I'd like to filter logged messages from my catch blocks in some situations, and I was hoping to filter by the Error log type. I guess I'll have to add the [error] string flag and filter for that.
How do people usually handle this?
Just install and use the Stackdriver Node.js library if your goal is to send logs to Google Stackdriver (now Cloud Logging, part of Google Cloud's operations suite).
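As a rough sketch of what that looks like with the @google-cloud/logging client (the log name and message below are placeholders): the severity field on each entry is what determines the type you see in the log viewer, so writing with severity 'ERROR' yields an Error entry rather than Default.

// npm install @google-cloud/logging
const { Logging } = require('@google-cloud/logging');

const logging = new Logging();
const log = logging.log('my-service-log'); // placeholder log name

async function reportError(message) {
  // severity is what Cloud Logging uses to classify the entry (DEBUG, INFO, ERROR, ...)
  const metadata = { resource: { type: 'global' }, severity: 'ERROR' };
  const entry = log.entry(metadata, { message });
  await log.write(entry);
}

reportError('something went wrong').catch(console.error);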

React Router code split "randomly" fails at loading chunks

I am struggling with an issue with react-router + webpack code split + service worker (or cache).
Basically the issue is the following: the code split is working properly, but from time to time I get error reports from customers at sentry.io such as:
"Dynamic page loading failed Error: Loading chunk 19 failed."
My react-router code is the following:
const errorLoading = (err) => {
  console.error('Dynamic page loading failed', err);
};

export default (
  <Route path="/" component={App}>
    <IndexRoute
      getComponent={(nextState, cb) => {
        System.import('./containers/home/home')
          .then((module) => { cb(null, module.default); })
          .catch(errorLoading);
      }}
    />
  </Route>
);
For my ServiceWorker I use OfflinePlugin with the following configuration:
new OfflinePlugin({
  cacheName: 'cache-name',
  cacheMaps: [
    {
      match: function (requestUrl) {
        return new URL('/', location);
      },
      requestTypes: ['navigate']
    }
  ],
  externals: [
    'assets/images/logos/slider.png',
    'assets/images/banners/banner-1-320.jpg',
    'assets/images/banners/banner-1-480.jpg',
    'assets/images/banners/banner-1-768.jpg',
    'assets/images/banners/banner-1-1024.jpg',
    'assets/images/banners/banner-1-1280.jpg',
    'assets/images/banners/banner-1-1400.jpg'
  ],
  responseStrategy: 'network-first', // one of my failed attempts to fix this issue
  ServiceWorker: {
    output: 'my-service-worker.js'
  }
})
The issue is not browser-related, because I have reports from IE11, Safari, Chrome, etc.
Any clues on what I might be doing wrong or how I can fix this issue?
Edit 2: I ended up using chunks with hashes and doing a window.location.reload() inside errorLoading's catch(), so when the browser fails to load a chunk it reloads the window and fetches the new file.
<Route path="about"
getComponent={(location, callback) => {
System.import('./about')
.then(module => { callback(null, module.default) })
.catch(() => {
window.location.reload()
})
}}
/>
It happens to me too, and I don't think I have a proper solution yet, but what I noticed is that this usually happens when I deploy a new version of the app: the hashes of the chunks change, and when I try to navigate to another address (chunk) the old chunk no longer exists (it seems as if it wasn't cached) and I get the error.
I managed to reproduce this by removing the service worker that caches stuff and deploying a new version (which I guess simulates a user without the service worker running?):
1. remove the service worker code
2. unregister the service worker in devtools
3. reload the page
4. deploy a new app version
5. navigate to another chunk (for instance from /home to /about)
In my case it appears the error occurs when the old files are no longer cached (hence no longer available) and the user doesn't reload the page to request the new ones. Reloading 'fixes' the issue because the app then has the new chunk names, which load correctly.
Something else I tried was naming the chunk files without their hashes, so instead of 3.something.js they were only 3.js. When I deployed a new version the chunks were obviously still there, but this is not a good solution, because the files would then be cached by the browser instead of by the caching plugin.
Edit: same setup as you, using sw-precache-webpack-plugin.
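For reference, here is a minimal webpack sketch, my own illustration rather than anything from the answers above, of where the chunk naming discussed here is configured. With a hash in chunkFilename every deploy produces new chunk URLs (which is why stale clients need the reload fallback); dropping the hash hands caching over to the browser, as described above.

// webpack.config.js (illustrative excerpt)
module.exports = {
  output: {
    path: __dirname + '/dist',
    filename: '[name].[chunkhash].js',
    // lazy-loaded chunks get content-based names, e.g. 3.abc123.js
    chunkFilename: '[name].[chunkhash].js',
    publicPath: '/'
  }
};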

Cyclejs Read/write websocket driver?

I'm new to Cycle.js and I'm looking for WebSocket support, but I don't see any (apart from the read-only WebSocket driver in the docs and some 0.1.2 Node-side npm package).
Am I supposed to create my own driver, or am I missing something?
Thanks in advance
Does this page help you?
https://cycle.js.org/drivers.html
Specifically the example code mentioned:
import xs from 'xstream';

function WSDriver(/* no sinks */) {
  // keep the connection in a closure so both start() and stop() can reach it
  let connection;
  return xs.create({
    start: listener => {
      connection = new WebSocket('ws://localhost:4000');
      connection.onerror = (err) => {
        listener.error(err)
      }
      connection.onmessage = (msg) => {
        listener.next(msg)
      }
    },
    stop: () => {
      connection.close();
    },
  });
}
If you add a sink, this can become both a write and read driver. From their documentation:
Most drivers, like the DOM Driver, take sinks (to describe a write) and return sources (to catch reads). However, we might have valid cases for write-only drivers and read-only drivers.
For instance, the one-liner log driver we just saw above is a write-only driver. Notice how it is a function that does not return any stream, it simply consumes the sink msg$ it receives.
Other drivers only create source streams that emit events to the main(), but don’t take in any sink from main(). An example of such would be a read-only Web Socket driver, drafted below:
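The read-only draft it refers to is the WSDriver shown above. Building on that, here is a rough sketch of a read/write WebSocket driver, my own illustration following the usual Cycle.js driver conventions rather than anything official: it consumes a sink stream of outgoing messages and returns a source stream of incoming ones.

import xs from 'xstream';

function makeWSDriver(url) {
  // outgoing$ is the sink stream of messages main() wants to send
  return function WSDriver(outgoing$) {
    const connection = new WebSocket(url);

    // write side: send every message emitted by main()
    // (a production driver would buffer messages until the 'open' event fires)
    outgoing$.addListener({
      next: msg => connection.send(typeof msg === 'string' ? msg : JSON.stringify(msg)),
      error: () => {},
      complete: () => {},
    });

    // read side: expose incoming messages as a source stream
    return xs.create({
      start: listener => {
        connection.onerror = err => listener.error(err);
        connection.onmessage = msg => listener.next(msg);
      },
      stop: () => {
        connection.close();
      },
    });
  };
}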

Removing console.log from React Native app

Should you remove the console.log() calls before deploying a React Native app to the stores? Are there some performance or other issues that exist if the console.log() calls are kept in the code?
Is there a way to remove the logs with some task runner (in a similar fashion to web-related task runners like Grunt or Gulp)? We still want them during our development/debugging/testing phase but not on production.
Well, you can always do something like:
if (!__DEV__) {
  console.log = () => {};
}
So every console.log would be invalidated as soon as __DEV__ is not true.
The Babel transpiler can remove console statements for you with the following plugin:
npm i babel-plugin-transform-remove-console --save-dev
Edit .babelrc:
{
  "env": {
    "production": {
      "plugins": ["transform-remove-console"]
    }
  }
}
And console statements are stripped out of your code.
source: https://hashnode.com/post/remove-consolelog-statements-in-production-in-react-react-native-apps-cj2rx8yj7003s2253er5a9ovw
I believe best practice is to wrap your debug code in statements such as:
if (__DEV__) {
  console.log();
}
This way, it only runs when you're running within the packager or emulator. More info here...
https://facebook.github.io/react-native/docs/performance#using-consolelog-statements
I know this question has already been answered, but just wanted to add my own two-bits. Returning null instead of {} is marginally faster since we don't need to create and allocate an empty object in memory.
if (!__DEV__) {
  console.log = () => null
}
This is obviously extremely minimal but you can see the results below
// return empty object
console.log = () => {}
console.time()
for (var i=0; i<1000000; i++) console.log()
console.timeEnd()
// returning null
console.log = () => null
console.time()
for (var i=0; i<1000000; i++) console.log()
console.timeEnd()
Although the difference is more pronounced when tested elsewhere.
Honestly, in the real world this will probably have no significant benefit; I just thought I would share.
I tried it using babel-plugin-transform-remove-console, but the above solutions didn't work for me.
If someone else is also trying to do it using babel-plugin-transform-remove-console, they can use this one.
npm i babel-plugin-transform-remove-console --save-dev
Edit babel.config.js:
module.exports = (api) => {
  const babelEnv = api.env();
  const plugins = [];
  if (babelEnv !== 'development') {
    plugins.push(['transform-remove-console']);
  }
  return {
    presets: ['module:metro-react-native-babel-preset'],
    plugins,
  };
};
I have found the following to be a good option, since there is no need to log even when __DEV__ === true if you are not also remote debugging.
In fact, I have found certain versions of RN/JavaScriptCore/etc. to come to a near halt when logging (even just strings), which is not the case with Chrome's V8 engine.
// only true if remote debugging is enabled
const debuggingIsEnabled = (typeof atob !== 'undefined');

if (!debuggingIsEnabled) {
  console.log = () => {};
}
Check if remote JS debugging is enabled
Using Sentry for tracking exceptions automatically disables console.log in production, but it also uses it for tracking logs from the device, so you can see the latest logs in the Sentry exception details (breadcrumbs).
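A minimal sketch of that setup, assuming the @sentry/react-native SDK (the DSN below is a placeholder); its default integrations record console calls as breadcrumbs that show up on captured exceptions:

// npm install @sentry/react-native
import * as Sentry from '@sentry/react-native';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
});

// console output is recorded as a breadcrumb and attached to captured exceptions
console.log('checkout started');
Sentry.captureException(new Error('payment failed'));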
