Including i18n translations in jade.renderFile() WITHOUT express - internationalization

I have an app where I've been doing translations using the old i18n __('...') syntax in my Jade files, but now I am moving my emailing functions outside of my Express controllers.
Now that I render them with jade.renderFile(), __('...') is not recognized anymore. Is there a way to include i18n in calls to renderFile?
Jade/Pug, I suppose.
Thank you!

Sorry for the late reply!
If you still care about this issue, try the following code:
app.post('/render', (req, res) => {
  var options = {};
  options.__ = res.__; // forward the __ function that i18n attaches to the response
  jade.renderFile('code.jade', options, function (err, html) {
    if (err) throw err;
    console.log(html); // the completed HTML contains the translated i18n values
  });
});
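If the template is rendered entirely outside of a request (for example from a standalone email job), another option is to configure the i18n package directly and pass its __ helper into the template locals yourself. This is only a sketch assuming the i18n-node package; the locales and directory are placeholders for your own setup:
var i18n = require('i18n');
var jade = require('jade');

i18n.configure({
  locales: ['en', 'fr'],             // assumption: the locales you actually support
  directory: __dirname + '/locales'  // assumption: where your translation files live
});

i18n.setLocale('fr'); // pick the recipient's locale

var html = jade.renderFile('code.jade', {
  __: i18n.__.bind(i18n) // templates can keep calling __('key')
});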

Related

Using MS BotFramework NodeJS sdk WITHOUT LUIS

I am currently working on a project where visitors are normally using both English and Chinese to talk to each other.
Since LUIS does not support multiple languages very well (yes, I know it can in certain ways, but I want a better service), I would like to build my own neural network exposed as a REST API so that, when someone submits their text, we can predict the "intent" ourselves while still using the MS Bot Framework (Node.js).
By doing this we can bypass MS LUIS and use our own language understanding service.
Here are my two questions:
Has anyone done that before? Any GitHub link I can reference to?
If I did that, what is the BotFramework API I should use? There is a recognizer called "Custom Recognizer" and I wonder if it really works.
Thank you very much in advance for all your help.
Another option, apart from Alexandru's suggestions, is to add a middleware that calls the NLP service of your choice every time the bot receives a chat message/request.
Botbuilder allows middleware functions to be applied before any dialog is handled; I've put together sample code below for a better understanding.
const builder = require('botbuilder'); // assumes botbuilder v3.x; connector is created elsewhere

const bot = new builder.UniversalBot(connector, function (session) {
  // pass to root dialog
  session.replaceDialog('root_dialog');
});

// custom middleware
bot.use({
  botbuilder: specialCommandHandler
});

// dummy call to your NLP service
let callNLP = (text) => {
  return new Promise((resolve, reject) => {
    // do your NLP service API call here and resolve the result
    resolve({});
  });
};
let specialCommandHandler = (session, next) => {
  // user message here
  let userMessage = session.message.text;
  callNLP(userMessage).then(nlpResult => {
    // you can save your NLP result to the session
    session.conversationData.nlpResult = nlpResult;
    // this will continue to the bot dialog, in this case the root dialog
    next();
  }).catch(err => {
    // handle errors
    console.error(err);
  });
};
// root dialog
bot.dialog('root_dialog', [(session, args, next) => {
  // your NLP call result
  let nlpResult = session.conversationData.nlpResult;
  // do any operations with the result here, e.g. redirect to a
  // specific dialog based on the intent/entities
}]);
For a Node.js Bot Framework implementation you have at least two options:
Use LuisRecognizer as a starting point to create your own recognizer (a sketch follows below this list). This approach works with single-intent NLUs and entity arrays (just like LUIS);
Create a SimpleDialog with a single handler function that calls the desired NLU API;
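As a rough sketch of the first option (this assumes botbuilder v3 and a callNLP helper like the one above that resolves to an object with intent, score, and entities fields):
// register a custom recognizer on the bot; botbuilder calls recognize()
// for every incoming message and routes to the matching intent dialog
bot.recognizer({
  recognize: function (context, done) {
    callNLP(context.message.text).then(function (result) {
      // botbuilder expects at least { score, intent } in the callback result
      done(null, {
        score: result.score || 0.0,
        intent: result.intent,
        entities: result.entities || []
      });
    }).catch(function (err) {
      done(err, { score: 0.0, intent: null });
    });
  }
});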

Auto-updates to Electron

I'm looking to add an auto-update feature to an Electron installation that I have, however I am finding it difficult to find any resources on the web.
I've built a self-contained application using Adobe AIR before, and it seemed a lot easier to write update code that checked a URL and automatically downloaded and installed the update across Windows and Mac OS X.
I am currently using the electron-boilerplate for ease of build.
I have a few questions:
How do I debug the auto-update feature? Do I set up a local connection and test through that using a local Node server, or can I use any web server?
In terms of signing the application, I am only looking to run apps on Mac OS X and particularly Windows. Do I have to sign the applications in order to run auto-updates? (I managed to do this with Adobe AIR using a local certificate.)
Are there any good resources that detail how to implement the auto-update feature? I'm having difficulty finding good documentation on how to do this.
I am also new to Electron, but I think there is no simple auto-update in electron-boilerplate (which I also use). Electron's auto-updater uses the Squirrel.Windows installer, which you also need to integrate into your solution in order to use it.
I am currently trying to use this:
https://www.npmjs.com/package/electron-installer-squirrel-windows
And more info can be found here:
https://github.com/atom/electron/blob/master/docs/api/auto-updater.md
https://github.com/squirrel/squirrel.windows
EDIT: I just opened the project to try it for a while, and it looks like it works. It's pretty straightforward. These are pieces from my gulpfile.
In current configuration, I use electron-packager to create a package.
var Q = require('q');
var gulpUtil = require('gulp-util');
var packager = require('electron-packager');

var createPackage = function () {
  var deferred = Q.defer();
  packager({
    // OPTIONS
  }, function done(err, appPath) {
    if (err) {
      gulpUtil.log(err);
    }
    deferred.resolve();
  });
  return deferred.promise;
};
Then I create an installer with electron-installer-squirrel-windows.
var squirrelBuilder = require('electron-installer-squirrel-windows');

var createInstaller = function () {
  var deferred = Q.defer();
  squirrelBuilder({
    // OPTIONS
  }, function (err) {
    if (err) {
      gulpUtil.log(err);
    }
    deferred.resolve();
  });
  return deferred.promise;
};
Also you need to add some code for the Squirrel to your electron background/main code. I used a template electron-squirrel-startup.
if(require('electron-squirrel-startup')) return;
The whole thing is described in the electron-installer-squirrel-windows npm documentation mentioned above. That bit of documentation looks sufficient to get started.
Now I am working on Electron branding through Squirrel and on creating the appropriate gulp scripts for automation.
You could also use standard Electron's autoUpdater module on OS X and my simple port of it for Windows: https://www.npmjs.com/package/electron-windows-updater
I followed this tutorial and got it working with my Electron app, although it needs to be signed to work, so you would need:
certificateFile: './path/to/cert.pfx'
In the task config.
and:
"build": {
"win": {
"certificateFile": "./path/to/cert.pfx",
"certificatePassword": "password"
}
},
In the package.json
Are there any good resources that detail how to implement the auto-update feature? As I'm having difficulty finding some good documentation on how to do this.
You don't have to implement it yourself. You can use the autoUpdater provided by Electron and just set a feed URL. You need a server that provides the update information in a format compliant with the Squirrel protocol.
There are a couple of self-hosted ones (https://electronjs.org/docs/tutorial/updates#deploying-an-update-server) or a hosted service like https://www.update.rocks
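A minimal sketch of that approach (the feed URL is a placeholder; whatever server it points at must speak the Squirrel protocol):
const { app, autoUpdater } = require('electron');

// assumption: an update server reachable at this URL
const feedURL = 'https://updates.example.com/update/' + process.platform + '/' + app.getVersion();
autoUpdater.setFeedURL({ url: feedURL });

// check once on startup, then restart into the new version once it has downloaded
app.on('ready', () => autoUpdater.checkForUpdates());
autoUpdater.on('update-downloaded', () => autoUpdater.quitAndInstall());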
Question 1:
I use Postman to validate that my auto-update server URLs return the response I am expecting. When I know that the URLs provide the expected results, I know I can use those URLs within the Electron's Auto Updater of my Application.
Example of testing Mac endpoint with Postman:
Request:
https://my-server.com/api/macupdates/checkforupdate.php?appversion=1.0.5&cpuarchitecture=x64
JSON Response when there is an update available:
{
  "url": "https://my-server.com/updates/darwin/x64/my-electron-app-x64-1.1.0.zip",
  "name": "1.1.0",
  "pub_date": "2021-07-03T15:17:12+00:00"
}
Question 2:
Yes, your Electron app must be code signed to use the auto-update feature on Mac. On Windows I'm not sure, because my Windows Electron app is code signed and I did not try without it. Either way, it is recommended to sign your app even if auto-update could work without it, not only for security reasons but mainly because otherwise your users will get scary warnings from Windows when they install your app for the first time, and they might just delete it right away.
Question 3:
For good documentation, start with the official Electron autoUpdater documentation; as of 2021-07-07 it is really good.
The hard part is figuring out how to make things work for Mac. For Windows it's a matter of minutes and you are done. In fact...
For Windows auto-update, it is easy to setup - you just have to put the RELEASES and nupkg files on a server and then use that URL as the FeedURL within your Electron App's autoUpdater. So if your app's update files are located at https://my-server.com/updates/win32/x64/ - you would point the Electron Auto Updater to that URL, that's it.
For Mac auto-update, the Electron autoUpdater needs a URL that returns a JSON response in a very specific format, containing the absolute URL of the latest .zip file of your app. Sadly, you can't just put your app's files on a server and expect it to work the way it does on Windows. Instead, the feed URL you give the autoUpdater must be an endpoint that returns that JSON response whenever a newer version is available.
The way you achieve this can be anything but I use PHP just because that's the server I already paid for.
So in summary, on Mac, even if your files are located at https://my-server.com/updates/darwin/x64/ you will not give that URL to the Electron autoUpdater's feed URL. Instead, you will provide another URL, one that returns the expected JSON response.
Here's an example of my main.js file for the Electron main process of my App:
// main.js (Electron main process)
const { app, autoUpdater } = require('electron');
const log = require('electron-log'); // assumption: any logger will do here
const isMac = process.platform === 'darwin';

function registerAutoUpdater() {
  const appVersion = app.getVersion();
  const os = require('os');
  const cpuArchitecture = os.arch();
  const domain = 'https://my-server.com';
  const windowsURL = `${domain}/updates/win32/x64`;
  const macURL = `${domain}/api/macupdates/checkforupdate.php?appversion=${appVersion}&cpuarchitecture=${cpuArchitecture}`;

  // init the autoUpdater with the proper update feed URL
  const autoUpdateURL = isMac ? macURL : windowsURL;
  autoUpdater.setFeedURL({url: autoUpdateURL});
  log.info('Registered autoUpdateURL = ' + (isMac ? 'macURL' : 'windowsURL'));

  // initial checkForUpdates
  autoUpdater.checkForUpdates();

  // automatic checkForUpdates loop every 2 hours
  setInterval(() => {
    autoUpdater.checkForUpdates();
  }, 7200000);
}
And here's an example of the checkforupdate.php file that returns the expected JSON response back to the Electron Auto Updater:
<?php
//FD Electron App Mac auto update API endpoint.
// The way Squirrel.Mac works is by checking a given API endpoint to see if there is a new version.
// If there is no new version, the endpoint should return HTTP 204. If there is a new version,
// however, it will expect a HTTP 200 JSON-formatted response, containing a url to a .zip file:
// https://github.com/Squirrel/Squirrel.Mac#server-support
$clientAppVersion = $_GET["appversion"] ?? null;
if (!isValidVersionString($clientAppVersion)) {
    http_response_code(204);
    exit();
}

$clientCpuArchitecture = $_GET["cpuarchitecture"] ?? null;
$latestVersionInfo = getLatestVersionInfo($clientAppVersion, $clientCpuArchitecture);
if (!isset($latestVersionInfo["versionNumber"])) {
    http_response_code(204);
    exit();
}
// Real logic starts here when the basics did not fail
$isUpdateAvailable = isUpdateAvailable($clientAppVersion, $latestVersionInfo["versionNumber"]);
if ($isUpdateAvailable) {
    http_response_code(200);
    header('Content-Type: application/json;charset=utf-8');
    $jsonResponse = array(
        "url" => $latestVersionInfo["directZipFileURL"],
        "name" => $latestVersionInfo["versionNumber"],
        "pub_date" => date('c', $latestVersionInfo["createdAtUnixTimeStamp"]),
    );
    echo json_encode($jsonResponse);
} else {
    // no update: must respond with a status code of 204 No Content.
    http_response_code(204);
}
exit();
// End of execution.
// Everything below here is function declarations.
function getLatestVersionInfo($clientAppVersion, $clientCpuArchitecture): array {
    // override path if the client requests an arm64 build
    if ($clientCpuArchitecture === 'arm64') {
        $directory = "../../updates/darwin/arm64/";
        $baseUrl = "https://my-server.com/updates/darwin/arm64/";
    } else if (!$clientCpuArchitecture || $clientCpuArchitecture === 'x64') {
        $directory = "../../updates/darwin/";
        $baseUrl = "https://my-server.com/updates/darwin/";
    }

    // default name with version 0.0.0 avoids failing
    $latestVersionFileName = "Finance D - Tenue de livres-darwin-x64-0.0.0.zip";
    $arrayOfFiles = scandir($directory);
    foreach ($arrayOfFiles as $file) {
        if (is_file($directory . $file)) {
            $serverFileVersion = getVersionNumberFromFileName($file);
            if (isVersionNumberGreater($serverFileVersion, $clientAppVersion)) {
                $latestVersionFileName = $file;
            }
        }
    }

    return array(
        "versionNumber" => getVersionNumberFromFileName($latestVersionFileName),
        "directZipFileURL" => $baseUrl . rawurlencode($latestVersionFileName),
        "createdAtUnixTimeStamp" => filemtime(realpath($directory . $latestVersionFileName))
    );
}
function isUpdateAvailable($clientVersion, $serverVersion): bool {
    return
        isValidVersionString($clientVersion) &&
        isValidVersionString($serverVersion) &&
        isVersionNumberGreater($serverVersion, $clientVersion);
}

function getVersionNumberFromFileName($fileName) {
    // extract the version number with a regex replacement
    return preg_replace("/Finance D - Tenue de livres-darwin-(x64|arm64)-|\.zip/", "", $fileName);
}

function removeAllNonDigits($semanticVersionString) {
    // use a regex replacement to keep only numeric characters in the semantic version string
    return preg_replace("/\D+/", "", $semanticVersionString);
}

function isVersionNumberGreater($serverFileVersion, $clientFileVersion): bool {
    // receives two semantic versions (1.0.4) and compares their numeric value (104)
    // true when the server version is greater than the client version (105 > 104)
    return removeAllNonDigits($serverFileVersion) > removeAllNonDigits($clientFileVersion);
}

function isValidVersionString($versionString) {
    // true when the string matches semantic version numbering: 0.0.0
    return preg_match("/\d\.\d\.\d/", $versionString);
}

Very new to express and mongodb with gridfs-stream

and thanks in advance for any help you guys can give me. I am trying to use mongodb, mongoose, gridfs-stream, and express to store image files in MongoDB.
I do not want to store the file in a folder. I want gfs.createWriteStream to take the file directly from the Express app.post handler. app.post is currently saving some strings through a schema model into MongoDB.
The form that routes to app.post contains strings and a input for img file.
I tried to do this(dnimgfront being the name of the input):
dnimgfront:req.pipe(gfs.createWriteStream('req.body.dnimgfront'))
I am getting back:
TypeError: Object function Grid(db, mongo) {
if (!(this instanceof Grid)) {
return new Grid(db, mongo);
}
mongo || (mongo = Grid.mongo ? Grid.mongo : undefined);
My problem is I have to be able to store the img and the strings from the same app.post save function. Any ideas?
What I'm using to connect to mongodb is:
mongoose.connect('mongodb://localhost/veeltaxi');
var con=mongoose.connection;
con.once('open', function(){
console.log('connected to mongodb succesfully!')
});
Thanks in advance for your help.
First, you need an instance of gridfs-stream:
var GridStream = require('gridfs-stream');
var mongodb = require('mongodb');
// gridfs-stream wants the native db object, not the mongoose connection wrapper
var gfs = new GridStream(mongoose.connection.db, mongodb);
At which point you can create the write stream and pipe:
var writeStream = gfs.createWriteStream({
  filename: filename,     // the name of the file
  content_type: mimetype, // somehow get the mimetype from the request
  mode: 'w'               // overwrite
});
req.pipe(writeStream);
However, at this time the Node.js MongoDB driver is at version 2.0, which uses Streams2, so there is little reason to use gridfs-stream if you're on that version of the mongodb lib. See the source code for v2.0.13. FYI, the stream implemented in the updated driver is a Duplex stream. (Aside: gridfs-stream has its share of problems, and while it is still maintained, updates are not very frequent.)
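As a rough sketch of going through the driver directly instead (this assumes the 2.0 driver's GridStore API and that db is an open native connection such as mongoose.connection.db; check the v2.0.13 source linked above for the exact close/flush semantics):
var mongodb = require('mongodb');

// open a GridStore in write mode and pipe the upload into its duplex stream
var gridStore = new mongodb.GridStore(db, filename, 'w', { content_type: mimetype });
gridStore.open(function (err, gs) {
  if (err) throw err;
  req.pipe(gs.stream());
});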

why does socket.io error 500 with express.io?

Why does socket.io now give a 500 (Internal Server Error) with express.io?
client side:
$(document).ready(function () {
  $.getScript("http://www.mysite.com:8000/socket.io/socket.io.js", function () {
    var socket = io.connect('http://www.mysite.com:8000'); //<<--error
    socket.emit('ready');
  });
});
server side:
var express = require('express.io')
  , engine = express().http().io();
engine.use(express.cookieParser());
engine.use(express.session({secret: 'monkey'}));
engine.all('/', function (req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
  next();
});
engine.get('/', function (req, res) {
  req.session.loginDate = new Date().toString()
  res.sendfile(__dirname)
});
engine.listen(8000);
engine.io.route('ready', function (socket) {
  console.log('hellooooooooooo');
});
I am following the docs at https://github.com/techpines/express.io. I have only changed two things: the cross-domain headers, and the app is called engine instead. I just can't see the problem. Has anyone else got this to work?
Note: it's not using express.js, it's using express.io (more compatible with socket.io).
It's like socket.io is not listening there on the server, even though engine = express().http().io(); and io is socket.io.
I faced a similar problem, but I fixed it by copying and pasting the express.io sample code, and it worked. Then I compared the two versions to see what the problem could be and noticed that the order of the middleware matters.
This order results in an error:
static
cookieParser
session
But when I followed the code provided in the sample, I found that this order works (see the sketch after this list):
cookieParser
session
static
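In other words (a minimal sketch; the static path is an assumption):
// works: cookieParser and session registered before static
app.use(express.cookieParser());
app.use(express.session({secret: 'monkey'}));
app.use(express.static(__dirname + '/public'));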
Hopefully this will also help you.
I believe the posted example is failing because you're calling res.sendfile(__dirname) without supplying a filename.
This is from the express.io sample; notice it uses res.sendfile(__dirname + '/client.html'):
express = require('express.io')
app = express().http().io()

// Setup your sessions, just like normal.
app.use(express.cookieParser())
app.use(express.session({secret: 'monkey'}))

// Session is automatically setup on initial request.
app.get('/', function(req, res) {
  req.session.loginDate = new Date().toString()
  res.sendfile(__dirname + '/client.html')
})

can't seem to get progress events from node-formidable to send to the correct client over socket.io

So I'm building a multipart form uploader over ajax on node.js, and sending progress events back to the client over socket.io to show the status of their upload. Everything works just fine until I have multiple clients trying to upload at the same time. Originally what would happen is while one upload is going, when a second one starts up it begins receiving progress events from both of the forms being parsed. The original form does not get affected and it only receives progress updates for itself. I tried creating a new formidable form object and storing it in an array along with the socket's session id to try to fix this, but now the first form stops receiving events while the second form gets processed. Here is my server code:
var http = require('http'),
    formidable = require('formidable'),
    fs = require('fs'),
    io = require('socket.io'),
    mime = require('mime'),
    forms = {};

var server = http.createServer(function (req, res) {
  if (req.url.split("?")[0] == "/upload") {
    console.log("hit upload");
    if (req.method.toLowerCase() === 'post') {
      socket_id = req.url.split("sid=")[1];
      forms[socket_id] = new formidable.IncomingForm();
      form = forms[socket_id];
      form.addListener('progress', function (bytesReceived, bytesExpected) {
        progress = (bytesReceived / bytesExpected * 100).toFixed(0);
        socket.sockets.socket(socket_id).send(progress);
      });
      form.parse(req, function (err, fields, files) {
        file_name = escape(files.upload.name);
        fs.writeFile(file_name, files.upload, 'utf8', function (err) {
          if (err) throw err;
          console.log(file_name);
        });
      });
    }
  }
});
var socket = io.listen(server);
server.listen(8000);
If anyone could be any help on this I would greatly appreciate it. I've been banging my head against my desk for a few days trying to figure this one out, and would really just like to get this solved so that I can move on. Thank you so much in advance!
Can you try putting console.log(socket_id);
after form = forms[socket_id]; and
after progress = (bytesReceived / bytesExpected * 100).toFixed(0);, please?
I get the feeling that you might have to wrap that socket_id in a closure, like this:
form.addListener(
'progress',
(function(socket_id) {
return function (bytesReceived, bytesExpected) {
progress = (bytesReceived / bytesExpected * 100).toFixed(0);
socket.sockets.socket(socket_id).send(progress);
};
})(socket_id)
);
The problem is that you aren't declaring socket_id and form with var, so they're actually global.socket_id and global.form rather than local variables of your request handler. Consequently, separate requests step over each other since the callbacks are referring to the globals rather than being proper closures.
rdrey's solution works because it bypasses that problem (though only for socket_id; if you were to change the code in such a way that one of the callbacks referenced form you'd get in trouble). Normally you only need to use his technique if the variable in question is something that changes in the course of executing the outer function (e.g. if you're creating closures within a loop).
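For illustration, a minimal sketch of the request handler with the variables made local (same structure as the original, just with var added):
if (req.method.toLowerCase() === 'post') {
  var socket_id = req.url.split("sid=")[1];   // local to this request
  var form = new formidable.IncomingForm();   // local to this request
  forms[socket_id] = form;
  form.addListener('progress', function (bytesReceived, bytesExpected) {
    var progress = (bytesReceived / bytesExpected * 100).toFixed(0);
    socket.sockets.socket(socket_id).send(progress); // each closure now sees its own socket_id
  });
  form.parse(req, function (err, fields, files) {
    var file_name = escape(files.upload.name);
    fs.writeFile(file_name, files.upload, 'utf8', function (err) {
      if (err) throw err;
      console.log(file_name);
    });
  });
}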
