socketio-jwt: error building TypeScript when importing the library [Cannot find namespace 'SocketIO'.] - socket.io

Hello, I am working on a NestJS (v9) app with chat functionality, and I want to be able to authorize the tokens sent by the user while creating the WebSocket connection.
The app has a build script: "build": "npm run remove-build && tsc -p tsconfig.build.json"
When I run this script with yarn build, it gives me the following error:
yarn run build stdout:
$ npm run remove-build && tsc -p tsconfig.build.json
> remove-build
> rm -rf dist
../../node_modules/socketio-jwt/types/index.d.ts(26,14): error TS2503: Cannot find namespace 'SocketIO'.
../../node_modules/socketio-jwt/types/index.d.ts(63,38): error TS2503: Cannot find namespace 'SocketIO'.
I found that if I remove this line
// import { authorize } from 'socketio-jwt';
the error disappears, but unfortunately I need to use this authorize method.
So my question is: how can I solve this issue?
[NOTE] I have these packages in my package.json:
"socket.io": "^4.5.2",
"socket.io-redis": "^6.1.1",
"socketio-jwt": "^4.6.2",
The client should be connecting using this code:
const socket = io('http://localhost:1080', {
  transports: ['websocket'],
  query: {
    token: TOKEN,
  },
});
The server should authorize the socket using:
import { authorize } from 'socketio-jwt';
public createIOServer(port: number, options?: ServerOptions): any {...
  const server = super.createIOServer(port, options);
  server.adapter(this.redisAdapter);
  server.use(
    authorize({
      decodedPropertyName: 'token',
      handshake: true,
      secret: secret,
    }) as any,
  );
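The error comes from socketio-jwt's bundled typings, which still reference the global SocketIO namespace provided by the old @types/socket.io; socket.io v4 ships its own types and no longer defines that namespace. Two workarounds worth trying (assumptions on my part, not from the socketio-jwt docs): set "skipLibCheck": true in tsconfig.build.json, or add a small ambient shim like the hypothetical file below so the legacy namespace resolves to the v4 types.
// socketio-jwt-shim.d.ts (hypothetical file; make sure tsconfig.build.json picks it up)
// Maps the legacy global `SocketIO` namespace expected by socketio-jwt's typings
// onto the types that ship with socket.io v4.
declare namespace SocketIO {
  type Socket = import('socket.io').Socket;
  type Server = import('socket.io').Server;
}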

Related

How to run Next.js in AWS Lambda with `experimental-edge` runtime

I'm trying to find a way to run Next.js (v13.0.6) with OG image generation logic (using @vercel/og) in AWS Lambda.
Everything works fine locally (in dev and prod mode), but when I try to execute the Lambda handler I get "statusCode": 500.
It only fails for APIs that involve ImageResponse (and runtime: 'experimental-edge', which @vercel/og requires).
I'm pretty sure the problem is caused by the Edge Runtime not being configured correctly.
Here is my handler code.
next build with output: 'standalone' in next.config.js creates the folder .next/standalone.
Inside standalone, handler.js:
const { parse } = require('url');
const NextServer = require('next/dist/server/next-server').default;
const serverless = require('serverless-http');
const path = require('path');

process.env.NODE_ENV = 'production';
process.chdir(__dirname);

const currentPort = parseInt(process.env.PORT, 10) || 3000;

const nextServer = new NextServer({
  hostname: 'localhost',
  port: currentPort,
  dir: path.join(__dirname),
  dev: false,
  customServer: false,
  conf: {...} // copied from `server.js` in the same folder
});

const requestHandler = nextServer.getRequestHandler();

// this is an AWS Lambda handler that converts the lambda event
// to an http request that the next server can process
const handler = serverless(async (req, res) => {
  // const parsedUrl = parse(req.url, true);
  try {
    await requestHandler(req, res);
  } catch (err) {
    console.error(err);
    res.statusCode = 500;
    res.end('internal server error');
  }
});

module.exports = {
  handler
};
I'm testing it locally with local-lambda, but I get similar results when testing against the deployed AWS Lambda.
What is confusing is that server.js (in .next/standalone) has a similar setup; it only adds an HTTP server on top of it.
Update:
AWS Lambda logs show
ERROR [Error [CompileError]: WebAssembly.compile(): Compiling function #64 failed: invalid value type 'Simd128', enable with --experimental-wasm-simd @+3457 ]
Update 2:
The first error was fixed by selecting Node 16 for the AWS Lambda; now I'm getting this error:
{
"errorType": "Error",
"errorMessage": "write after end",
"trace": [
"Error [ERR_STREAM_WRITE_AFTER_END]: write after end",
" at new NodeError (node:internal/errors:372:5)",
" at ServerlessResponse.end (node:_http_outgoing:846:15)",
" at ServerlessResponse.end (/var/task/node_modules/next/dist/compiled/compression/index.js:22:783)",
" at NodeNextResponse.send (/var/task/node_modules/next/dist/server/base-http/node.js:93:19)",
" at NextNodeServer.handleRequest (/var/task/node_modules/next/dist/server/base-server.js:332:47)",
" at processTicksAndRejections (node:internal/process/task_queues:96:5)",
" at async /var/task/index.js:34:5"
]
}
At the moment of writing, Vercel's runtime: 'experimental-edge' seems to be unstable (I ran into multiple issues with it).
I ended up recreating the @vercel/og lib without the wasm and Next.js dependencies (it can be found here)
and simply using it in AWS Lambda. It depends on @resvg/resvg-js instead of the wasm version; since that uses native binaries, there should not be much performance loss compared to wasm.
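As a rough sketch of what that replacement looks like (the package choices mirror the answer above, but the function names and options here are my assumptions): satori turns the element into an SVG string, and @resvg/resvg-js rasterizes it with a native binary instead of wasm.
import satori from 'satori';
import { Resvg } from '@resvg/resvg-js';
import type { ReactNode } from 'react';

// Render a React element to SVG with satori, then rasterize it to PNG without wasm.
export async function renderOgImage(element: ReactNode, fontData: Buffer): Promise<Buffer> {
  const svg = await satori(element, {
    width: 1200,
    height: 630,
    fonts: [{ name: 'Inter', data: fontData, weight: 400, style: 'normal' }],
  });
  const resvg = new Resvg(svg, { fitTo: { mode: 'width', value: 1200 } });
  return resvg.render().asPng();
}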

Lambda Layers not installing with Serverless

Currently getting the following error with MongoDB:
no saslprep library specified. Passwords will not be sanitized
We are using Webpack so simply installing the module doesn't work (Webpack just ignores it). I found this thread which talks about how to exclude it from Webpack compilations, but then I have to manually load it into every Lambda function which led me to Lambda Layers.
Following the Serverless guide on using Lambda layers allowed me to get my layer published to AWS and included in all of my functions, but for some reason, it doesn't install the modules. If I download the layer using the AWS GUI, I get a folder with just the package.json and package-lock.json files.
My file structure is:
my-project
 |_ layers
    |_ saslprep
       |_ package.json
and my serverless.yml is:
layers:
  saslprep:
    path: layers/saslprep
    compatibleRuntimes:
      - nodejs14.x
This is not my preferred solution as I'd like to use SHA-256, but the way I got around this error/warning was by changing the authMechanism from SCRAM-SHA-256 to SCRAM-SHA-1 in the connection string. serverless-bundle most likely needs to add this dependency to its package to enable support for MongoDB 4.0's SCRAM-SHA-256 (my best guess!).
You can specify this authentication mechanism by setting the authMechanism parameter to the value SCRAM-SHA-1 in the connection string as shown in the following sample code.
const { MongoClient } = require("mongodb");
// Replace the following with values for your environment.
const username = encodeURIComponent("<username>");
const password = encodeURIComponent("<password>");
const clusterUrl = "<MongoDB cluster url>";
const authMechanism = "SCRAM-SHA-1";
// Replace the following with your MongoDB deployment's connection string.
const uri =
  `mongodb+srv://${username}:${password}@${clusterUrl}/?authMechanism=${authMechanism}`;
// Create a new MongoClient
const client = new MongoClient(uri);
// Function to connect to the server
async function run() {
  try {
    // Connect the client to the server
    await client.connect();
    // Establish and verify connection
    await client.db("admin").command({ ping: 1 });
    console.log("Connected successfully to server");
  } finally {
    // Ensures that the client will close when you finish/error
    await client.close();
  }
}
run().catch(console.dir);
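Separately, on why the downloaded layer contains only package.json and package-lock.json (this is my own guess, not part of the answer above): Serverless does not run npm install inside the layer path for you, and for Node.js runtimes Lambda only adds the layer's nodejs/node_modules directory to the module path. The layer folder usually needs to look like this, with npm install run inside the nodejs directory before deploying:
my-project
 |_ layers
    |_ saslprep
       |_ nodejs
          |_ package.json
          |_ node_modules/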

Next-optimized-images: Error "Module parse failed: Unexpected character"

I'm trying to optimize my nextjs page images with next-optimized-images
This is the next.config.js:
module.exports = {
  ...
  withOptimizedImages: withOptimizedImages({
    webpack(config) {
      config.resolve.alias.images = path.join(__dirname, 'public')
      return config
    }
  }),
  ...
Here is how I import images into components:
require(`public/assets/icons/${iconName}`)
My Error:
./public/assets/icons/website/information/hiring-black.svg 1:0
Module parse failed: Unexpected token (1:0)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
I'm using the latest version of next-optimized-images and have tried different guides, but still no luck.
Please help.
Next.js now optimizes images by default.
Refer to next/image.
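For regular raster images, a minimal usage of the built-in component looks roughly like this (the file name here is just an example, not taken from the question):
import Image from 'next/image';

export default function HiringIcon() {
  // width/height are required so Next.js can reserve layout space and optimize the file
  return <Image src="/assets/icons/hiring-black.png" width={48} height={48} alt="hiring" />;
}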
If you need SVGs, try adding the @svgr/webpack loader.
Install: yarn add @svgr/webpack -D
To configure this, update the following in next.config.js:
module.exports = {
  ...
  webpack(config) {
    config.module.rules.push({
      test: /\.svg$/,
      use: ['@svgr/webpack'],
    });
    return config;
  },
  ...
};
Use it as follows:
...
import Star from './star.svg'
...
<Star />
...

NestJS on Heroku always failing to bind port

I am trying to deploy my NestJS REST API on Heroku but I always get the following error:
Web process failed to bind to $PORT within 60 seconds of launch
My configuration is pretty straightforward:
In my main.ts I start my server with:
await app.listen(process.env.PORT || AppModule.port);
I added a Procfile in the root directory of my project which contains:
web: npm run start:prod
My package.json file contains these scripts:
"build": "tsc -p tsconfig.build.json",
"prestart:prod": "rimraf dist && npm run build",
"start:prod": "node dist/main.js",
The process on Heroku builds successfully and prints out these seemingly reassuring lines:
TypeOrmModule dependencies initialized
SharedModule dependencies initialized
AppModule dependencies initialized
But then immediately crashes with:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
I use .env configuration across my application but I removed all HOST and PORT variables (and code references), so I have no clue what could be the cause of this error.
Am I missing something?
EDIT
I am hereby sharing my app.module and main.ts files:
app.module.ts
@Module({
  imports: [
    SharedModule,
    TypeOrmModule.forRootAsync({
      imports: [SharedModule],
      inject: [ConfigService],
      useFactory: async (configService: ConfigService) => ({
        type: 'postgres',
        host: configService.getString('POSTGRES_HOST'),
        port: configService.getNumber('POSTGRES_DB_PORT'),
        username: configService.getString('POSTGRES_USER'),
        password: configService.getString('POSTGRES_PASSWORD'),
        database: configService.getString('POSTGRES_DB'),
        entities: [__dirname + '/**/*.entity{.ts,.js}'],
      } as PostgresConnectionOptions),
    }),
    UserModule,
  ],
  controllers: [
    AppController,
  ],
  providers: [
    AppService,
  ],
})
export class AppModule {
  static port: number;
  static isDev: boolean;

  constructor(configurationService: ConfigService) {
    console.log(process.env.PORT);
    AppModule.port = configurationService.getNumber('PORT');
    AppModule.isDev = configurationService.getBoolean('ISDEV');
  }
}
My configuration.service.ts is a simple utility that reads from .env files:
import * as dotenv from 'dotenv';
import * as path from 'path';

@Injectable()
export class ConfigService {
  constructor() {
    const filePath = path.resolve('.env');
    dotenv.config({
      path: filePath,
    });
  }

  getNumber(key: string): number | undefined {
    return +process.env[key] as number | undefined;
  }

  getBoolean(key: string): boolean {
    return process.env[key] === 'true';
  }

  getString(key: string): string | undefined {
    return process.env[key];
  }
}
And finally my main.ts file:
async function bootstrap() {
  console.log(process.env.PORT);
  const app = await NestFactory.create(AppModule);
  app.enableCors();
  app.useGlobalPipes(new ValidationPipe(), new TimeStampPipe());
  app.use(json({ limit: '5mb' }));
  app.setGlobalPrefix('api/v1');
  await app.listen(process.env.PORT || AppModule.port);
}
bootstrap();
Could it be that my configuration.service.ts is interfering with Heroku's environment variables?
If you are using Fastify instead of Express as your platform, you need to set the host to 0.0.0.0 explicitly, like this:
const port = process.env.PORT || AppModule.port;
const host = '0.0.0.0';
await app.listen(port, host);
This problem is caused by the fastify library. See the related discussion here: Fastify with Heroku.
Just as a summary, be careful with the database connection timeout, which could lead to a global timeout of the Heroku bootstrap as described above.
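Even on the default Express adapter, making the host explicit does no harm and rules the binding out as the culprit; a minimal sketch reusing the variables from the question:
const port = Number(process.env.PORT) || AppModule.port;
// 0.0.0.0 binds to all interfaces so Heroku's router can reach the dyno
await app.listen(port, '0.0.0.0');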

Does Mocha support multiple before hooks for creating independent HTTP servers?

Here is my project structure:
src/
- demo-1/
  - server.ts
  - server.spec.ts
- demo-2/
  - server.ts
  - server.spec.ts
Each server.spec.ts has the setup below:
import * as http from 'http';
import { start } from './server';

let server: http.Server;

before('start server', (done: Done) => {
  server = start(done);
});

after('stop server', (done: Done) => {
  server.close(done);
});

describe('test suites', () => {
  //...
});
Here is my package.json scripts:
"scripts": {
"test": "NODE_ENV=test mocha --timeout=3000 --require=ts-node/register ./src/**/*.spec.ts"
},
When I run npm test, it gives me an error:
1) "before all" hook: start server:
Uncaught Error: listen EADDRINUSE :::4000
at Object._errnoException (util.js:1022:11)
at _exceptionWithHostPort (util.js:1044:20)
at Server.setupListenHandle [as _listen2] (net.js:1351:14)
at listenInCluster (net.js:1392:12)
at Server.listen (net.js:1476:7)
at Function.listen (node_modules/express/lib/application.js:618:24)
at Object.start (src/constructor-types/server.ts:33:14)
at Context.before (src/constructor-types/server.spec.ts:12:12)
at Server.app.listen (src/aliases/server.ts:42:7)
at emitListeningNT (net.js:1378:10)
at _combinedTickCallback (internal/process/next_tick.js:135:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
2) "after all" hook: stop server:
Error: Not running
at Server.close (net.js:1604:12)
at emitCloseNT (net.js:1655:8)
at _combinedTickCallback (internal/process/next_tick.js:135:11)
at Immediate._tickCallback (internal/process/next_tick.js:180:9)
I expect each server.spec.ts to work independently, meaning each server starts and its test suites run one by one, in order to avoid an HTTP port conflict, because these servers use the same HTTP port.
Mocha loads every spec file before running, and hooks declared at a file's top level become root-level hooks, so both servers get created at the same time on the same port.
The workaround is to assign a random port for each file (change the start function to accept a port param):
import * as http from 'http';
import { start } from './server';

// Stay inside the valid, unprivileged port range (1024-65535)
const PORT = Math.floor(Math.random() * 30000) + 30000;

let server: http.Server;

before('start server', (done: Done) => {
  // change start function to accept a port param
  server = start(PORT, done);
});

after('stop server', (done: Done) => {
  server.close(done);
});
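An alternative sketch of the same idea (my own suggestion, assuming start() simply forwards the port and callback to http.Server#listen): pass port 0 so the OS picks a free port, then read the real port back for the requests in the tests.
import * as http from 'http';
import { AddressInfo } from 'net';
import { start } from './server';

let server: http.Server;
let baseUrl: string;

before('start server', (done: Done) => {
  // Port 0 asks the OS for any free port, so spec files can never collide
  server = start(0, () => {
    const { port } = server.address() as AddressInfo;
    baseUrl = `http://localhost:${port}`;
    done();
  });
});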
Another solution is serial-mocha, which supports running tests synchronously. However, the package is old and I don't know if it still works.
