Parse Live Query Memory Leak - parse-platform

I'm using Parse Live Query (in JS and Vue.js) to update/replace an array on each create. I noticed, however, that Live Query is causing a memory leak. As I'm running 3 live queries at a time, after a day of running I get up to +2GB of memory and an error page with "out of memory".
Here is an example of how I open one of my subscriptions:
Parse.initialize(this.parse.initialize)
Parse.serverURL = this.parse.serverURL
const snapshots = Parse.Object.extend("snapshots");
const query = new Parse.Query(snapshots)
const subscriptionSnapshots = await query.subscribe()
subscriptionSnapshots.on('open', () => {
    console.log(' -> Snapshots subscription opened');
});
I don't know if it's related, but in my browser console I see the following warnings:
[Violation] 'message' handler took <N>ms
parse.min.js:13 [Violation] 'message' handler took 164ms
Any ideas where the leak comes from and how I can stop it?
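For anyone debugging the same symptom: subscriptions that are opened repeatedly but never closed keep their handlers (and every event they receive) alive. A minimal cleanup sketch, assuming a Vue 2 beforeDestroy hook (the hook name is an assumption; use your framework's teardown equivalent):
// Keep a reference so the subscription can be closed on teardown.
const subscriptionSnapshots = await query.subscribe();

// ...later, e.g. in beforeDestroy():
subscriptionSnapshots.unsubscribe(); // stop receiving events for this query
Parse.LiveQuery.close();             // optionally close the shared LiveQuery WebSocket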

Related

Blazor Server Side - Error: WebSocket closed with status code: 1006 () after 15 seconds

I have a Blazor Server Side app that throws "Error: WebSocket closed with status code: 1006 ()" on the client side after 15 seconds while waiting for a SQL query to complete.
The SQL query populates a report, so it sometimes takes up to 30 seconds to generate.
I need to either increase the 15-second timeout or keep the connection alive long enough for the query to complete.
Does anyone know how I can extend the timeout or keep the page alive long enough for the query to complete?
Edit:
Adding:
endpoints.MapBlazorHub(opts => opts.WebSockets.CloseTimeout = new TimeSpan(1, 1, 1));
to Startup.cs seems to increase the timeout to 30 seconds.
Unfortunately this doesn't seem to be enough of a timeout, and changing the TimeSpan value any higher does not seem to increase the timeout further.
Thanks
I managed to "fix" this issue by doing a few things.
First I added
endpoints.MapBlazorHub(opts => opts.WebSockets.CloseTimeout = new TimeSpan(1, 1, 1));
to
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
in Startup.cs.
This didn't completely work; it just increased the timeout from 15 seconds to 30 seconds.
After playing around a bit, I updated the project from .NET 5 to .NET 6 and updated all my NuGet packages.
This seemed to help the most, as the report now populates faster (under 30 seconds).
If I try to generate too large a report, one that takes over 30 seconds, I now end up with a new error:
Error: Connection disconnected with error 'Error: Server timeout elapsed without receiving a message from the server.'.
If I keep refreshing the page, the large reports do seem to eventually load.
For now the above fix helps me with my initial report issue.
If anyone has a real solution to this, please let me know.
Edit (12/12/2022):
I finally seem to have a fix for this.
In _Host.cshtml, I added the following inside the <body> tag:
<body>
    <script src="_framework/blazor.server.js" autostart="false"></script>
    <script>
        Blazor.start({
            configureSignalR: function (builder) {
                let c = builder.build();
                c.serverTimeoutInMilliseconds = 3000000;
                c.keepAliveIntervalInMilliseconds = 1500000;
                builder.build = () => {
                    return c;
                };
            }
        });
    </script>
</body>
This, coupled with the "endpoints" update, seems to have solved my issue correctly.
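As a side note, SignalR's guidance is to keep serverTimeoutInMilliseconds at least double keepAliveIntervalInMilliseconds, which the values above satisfy (3,000,000 vs 1,500,000). The builder.build override simply makes Blazor use the pre-built, reconfigured connection instead of constructing a fresh one.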
You can try handling the long-running call over your own socket with ClientWebSocket (the standard .NET client), for example:
ClientWebSocket ws;
protected override async Task OnAfterRenderAsync(bool firstRender)
{
    if (firstRender)
    {
        // ClientWebSocket periodically sends ping frames to keep the connection open.
        ws = new ClientWebSocket();
        ws.Options.KeepAliveInterval = TimeSpan.FromSeconds(30);
        await ws.ConnectAsync(new Uri("ws://localhost:5000/mywebsocket"), CancellationToken.None);
    }
}
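Note that a raw ClientWebSocket like this lives outside Blazor's SignalR circuit, so it isn't subject to the circuit's timeout, but it also doesn't get Blazor's automatic reconnect handling; treat it as a sketch rather than a drop-in replacement.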

How to fetch a message with the ID, without having it in cache

So I programmed a system that saves the IDs of certain channels (plus messages) to a file. After restarting the bot, the IDs are read and reused, for example to fetch a specific message in order to edit it. Unfortunately, the cache is undefined or empty, or an error occurs. Could somebody show me how to fetch a message which isn't in the cache?
Example code for an existing message and channel:
const {Discord, Client, Intents, Permissions, MessageEmbed} = require('discord.js');
const bot = new Client({
    intents: [
        Intents.FLAGS.GUILDS,
        Intents.FLAGS.GUILD_MEMBERS,
        Intents.FLAGS.GUILD_MESSAGES,
        Intents.FLAGS.GUILD_MESSAGE_REACTIONS,
        Intents.FLAGS.GUILD_MESSAGE_TYPING,
        Intents.FLAGS.GUILD_PRESENCES,
        Intents.FLAGS.DIRECT_MESSAGES,
        Intents.FLAGS.DIRECT_MESSAGE_REACTIONS,
        Intents.FLAGS.DIRECT_MESSAGE_TYPING,
        Intents.FLAGS.GUILD_VOICE_STATES
    ]
});
bot.channels.cache.get('597102497420673025').messages.cache.get('959413368911966251').embeds;
Error:
var test = bot.channels.cache.get('597102497420673025').messages.cache.get('959413368911966251').embeds;
^
TypeError: Cannot read properties of undefined (reading 'embeds')
at Client.<anonymous> (D:\E-verysBot\index.js:2288:99)
at Client.emit (node:events:402:35)
at WebSocketManager.triggerClientReady (D:\E-verysBot\node_modules\discord.js\src\client\websocket\WebSocketManager.js:384:17)
at WebSocketManager.checkShardsReady (D:\E-verysBot\node_modules\discord.js\src\client\websocket\WebSocketManager.js:367:10)
at WebSocketShard.<anonymous> (D:\E-verysBot\node_modules\discord.js\src\client\websocket\WebSocketManager.js:189:14)
at WebSocketShard.emit (node:events:390:28)
at WebSocketShard.checkReady (D:\E-verysBot\node_modules\discord.js\src\client\websocket\WebSocketShard.js:475:12)
at WebSocketShard.onPacket (D:\E-verysBot\node_modules\discord.js\src\client\websocket\WebSocketShard.js:447:16)
at WebSocketShard.onMessage (D:\E-verysBot\node_modules\discord.js\src\client\websocket\WebSocketShard.js:301:10)
at WebSocket.onMessage (D:\E-verysBot\node_modules\ws\lib\event-target.js:199:18)
Node.js v17.3.0
You fetch the message, which also caches it:
// with async/await (must be inside an async function)
const channel = client.channels.cache.get('channelId');
const message = await channel.messages.fetch('messageId');
return message.embeds;
// with promise chaining (also asynchronous)
const channel = client.channels.cache.get('channelId');
channel.messages.fetch('messageId').then(message => {
    return message.embeds;
});
I hope this fixes your problem!
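If the channel itself isn't cached yet either (which can happen right after a restart), it can be fetched the same way. A minimal sketch (inside an async function) using discord.js's channels.fetch(), reusing the IDs from the question:
// Fetch the channel from the API when it is not in the cache,
// then fetch the message from that channel.
const channel = await client.channels.fetch('597102497420673025');
const message = await channel.messages.fetch('959413368911966251');
console.log(message.embeds);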

I am trying to increase performance of my code and switching from XMLHttpRequest to fetch requests using asynchronous code

I want to confirm that I am not slowing down the code below. My goal is to increase the performance of the application. I was considering Promise.all, but I'm not sure it is necessary: as the code is written now, I believe all the fetch requests are running simultaneously?
The test functions don't need to wait for each other. I don't want to create a funnel where each test function waits for the previous one to finish, or where each fetch request waits for the previous one to finish. The goal is to have them all running together. Is my code doing that at the moment? Thank you for your assistance.
function test1() {
    // some code here
    fetch(URL)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
        .then(subTest1 => { /* work with the data here */ })
        .catch(error => { console.log('Request failed', error); });
}
function test2() {
    // some code here
    fetch(URL)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
        .then(subTest2 => { /* work with the data here */ })
        .catch(error => { console.log('Request failed', error); });
    fetch(URL)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
        .then(subTest3 => { /* work with the data here */ })
        .catch(error => { console.log('Request failed', error); });
    fetch(URL)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
        .then(subTest4 => { /* work with the data here */ })
        .catch(error => { console.log('Request failed', error); });
}
//....and hundreds like the above test functions down here
const checkStatusandContentType = async response => {
    const isJson = response.headers.get('content-type')?.includes('application/json');
    const isHtml = response.headers.get('content-type')?.includes('text/html');
    const data = isJson ? await response.json()
        : isHtml ? await response.text()
        : null;
    // check for error response
    if (!response.ok) {
        // get error message from body or default to response status
        const error = (data && data.message) || response.status;
        return Promise.reject(error);
    }
    return data;
}
const HtmlToObject = data => {
    const stringified = data;
    const processCode = stringified.substring(stringified.lastIndexOf("<X1>") + 4, stringified.indexOf("</X1>"));
    // CONTENT EXTRACT
    data = JSON.parse(processCode);
    return data;
};
TL;DR: fetch and XMLHttpRequest perform the same.
You said you want to increase your application's performance. It's usually wise to decide how you will measure performance before you start optimizing.
Your performance measurement may be for just one desktop user with an excellent connection to the network. Or it may be for hundreds of mobile devices using your app at the same time.
Browser HTML/JavaScript apps using XMLHttpRequest (XHR for short) or fetch requests are often designed to display something useful to the user first, and then use the received data to make that display even more useful. If your measure of performance is how long it takes for a single user to see something useful, you may already have acceptable performance. But if you have hundreds of mobile users, performance is harder to define.
You asked whether XHR or fetch has better performance. From the point of view of your server they are the same: both generate identical requests that your server must satisfy, and it can't tell them apart. fetch requests are simply easier to code, as you have discovered.
Your code runs many requests. You showed us three, but you said you have many more. Browsers restrict the number of concurrent outbound requests, and the rest wait for an available slot. Most browsers allow six concurrent requests to any given domain, and around ten overall.
So your concurrent fetch operations (or concurrent XHR operations, it doesn't matter which) will hit your server with at most six connections at once. That's fine for low-volume applications with good bandwidth. But if your app scales up to many users or must work over limited (mobile) bandwidth, you should consider whether you will overload your users' networks or your server.
Reducing the number of requests coming from your browser app, and perhaps returning more complete information in each request, is a good way to manage this server and network load.
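If you do want to coordinate the requests instead of firing them independently, here is a short sketch (inside an async function) using Promise.allSettled and reusing the helpers above; the URLs are placeholders:
// Start every fetch at once and wait for all of them to settle,
// so one failed request doesn't reject the whole batch.
const urls = ['/data/1', '/data/2', '/data/3']; // placeholder URLs
const results = await Promise.allSettled(
    urls.map(url =>
        fetch(url)
            .then(checkStatusandContentType)
            .then(HtmlToObject)
    )
);
results.forEach((result, i) => {
    if (result.status === 'fulfilled') {
        // work with result.value here
    } else {
        console.log('Request failed', urls[i], result.reason);
    }
});
The requests still run concurrently; allSettled only changes when your code gets to look at the combined results.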

Parse.com with multiple live queries best practices

I'm using multiple live queries in my JS app and was wondering what the best practices are on this subject.
I read somewhere that opening multiple WebSockets at a time should be avoided. What are the recommendations when using Parse live queries?
Currently, I'm calling the following function 3 times (changing the document and function names):
startWebserver1: async function(param) {
    Parse.initialize(this.parse.initialize)
    Parse.serverURL = this.parse.serverURL
    const snapshots = Parse.Object.extend("myDocument1");
    const query = new Parse.Query(snapshots)
    const subscriptionSnapshots = await query.subscribe()
    subscriptionSnapshots.on('open', () => {
        console.log(' -> Snapshots subscription opened');
    });
}
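For what it's worth, the Parse JS SDK multiplexes all subscriptions from the same client over a single LiveQuery WebSocket, so calling subscribe() on several queries should not open several sockets. A sketch of subscribing to several classes in one place (class names taken from the question; this assumes the default shared LiveQueryClient):
// Initialize once, then subscribe to each class over the shared connection.
Parse.initialize(this.parse.initialize)
Parse.serverURL = this.parse.serverURL

const classNames = ["myDocument1", "myDocument2", "myDocument3"];
const subscriptions = await Promise.all(
    classNames.map(name => new Parse.Query(Parse.Object.extend(name)).subscribe())
);
subscriptions.forEach((sub, i) => {
    sub.on('open', () => console.log(` -> ${classNames[i]} subscription opened`));
});
Note also that Parse.initialize and Parse.serverURL only need to be set once, not in each function.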

Telegram Bot Random Images (How to send random images with Telegram-Bot)

const TeleBot = require('telebot');
const bot = new TeleBot({
    token: 'i9NhrhCQGq7rxaA' // Telegram Bot API token.
});
bot.on(/^([Hh]ey|[Hh]oi|[Hh]a*i)$/, function (msg) {
    return bot.sendMessage(msg.from.id, "Hello Commander");
});
var Historiepics = ['Schoolfotos/grr.jpg', 'Schoolfotos/boe.jpg', 'Schoolfotos/tobinsexy.jpg'];
console.log('Historiepics');
console.log(Math.floor(Math.random() * Historiepics.length));
var foto = Historiepics[Math.floor(Math.random() * Historiepics.length)];
bot.on(/aap/, (msg) => {
    return bot.sendPhoto(msg.from.id, foto);
});
bot.start();
The result I'm getting from this is just one picture every time; if I ask for another random picture, it keeps showing me the same one without change.
I recently figured this out, so I'll drop an answer for anyone that runs into this issue.
The problem is with Telegram's cache. They cache images server-side so that they don't have to make multiple requests to the same URL. This protects them from potentially getting blacklisted for too many requests, and makes things snappier.
Unfortunately, if you're using an API like The Cat API, this means you will be sending the same image over and over again. The simplest solution is to make the link a little different every time, most easily accomplished by including the current epoch time as part of the URL.
For your example in JavaScript, this can be accomplished with the following modification:
bot.on(/aap/, (msg) => {
    let epoch = (new Date).getTime();
    return bot.sendPhoto(msg.from.id, foto + "?time=" + epoch);
});
Or something similar. The main point is that as long as the URL is different, you won't receive a cached result. The other option is to download the file and then send it locally; this is what Telebot does if you pass the serverDownload option into sendPhoto.
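There is also a simpler issue in the code from the question: foto is computed once at startup, so every match of /aap/ reuses the same array element. Picking inside the handler gives a fresh random photo per message; a minimal sketch based on the question's own variables:
bot.on(/aap/, (msg) => {
    // Pick a new random photo on every message instead of once at startup.
    const foto = Historiepics[Math.floor(Math.random() * Historiepics.length)];
    return bot.sendPhoto(msg.from.id, foto);
});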
