NestJS & Heroku - Connecting to the gateway requires enabling the VPN

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useWebSocketAdapter(new GatewayAdapter(app));
  app.use(cookieParser());
  app.use(helmet());
  await app.listen(process.env.PORT || 8080);
}
bootstrap();
@WebSocketGateway({
  namespace: 'chat'
})
export class ChatGateway {}
Connecting to the backend and calling APIs works fine, but connecting to the websocket requires enabling the VPN. Once the VPN is enabled, a connection is established. This issue only occurs when the app is deployed to Heroku; it works fine on localhost.

Maybe it's a limitation of Heroku's free tier. Try another platform, such as Netlify.

Related

Apollo monorepo - WS connection

I'm working on an Apollo Graphql real time app which is deployed to Heroku and contains both client and server files in the same project (the server is serving the create-react-app build files).
I'm trying to open a websocket connection between the client and the server and I can't figure out how to do so.
I tried to set the ws URI in the client to "wss://", but Apollo returns an error saying that it's not a valid URI.
Previously I had two separate projects (client / server) on Heroku, I set the URI to the path of the server app, and it worked properly.
Thanks very much in advance for any help.
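A sketch of one way around the invalid-URI error, given that client and server live in the same Heroku app: build the WebSocket URI from the page's own location instead of a bare "wss://". The /graphql path is an assumption; use whatever path your subscription server actually listens on.

```javascript
// Sketch: derive a same-origin WebSocket URI from a Location-like object.
// The '/graphql' path is a placeholder for your server's ws endpoint.
function wsUri(loc) {
  // https pages must use wss, plain http pages use ws
  const proto = loc.protocol === 'https:' ? 'wss:' : 'ws:';
  return `${proto}//${loc.host}/graphql`;
}
```

In the browser you would pass wsUri(window.location) as the uri option of the WebSocket link, so the same code works on localhost and on the deployed Heroku app.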

SocketException in .net core 3.1 against SQS endpoint but not in Python or CLI

I'm getting an exception when running the following little console app on a windows 2016 ec2 instance (.NET Core 3.1):
"A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond." System.Exception {System.Net.Sockets.SocketException}
using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;
using Amazon.S3; // unused here; S3 was exercised in a separate test

namespace TestSqsConsole
{
    class Program
    {
        static async Task Main(string[] args)
        {
            AmazonSQSClient _client = new AmazonSQSClient(Amazon.RegionEndpoint.USEast1);
            string url = (await _client.GetQueueUrlAsync(new GetQueueUrlRequest { QueueName = "my-queue" }))?.QueueUrl;
            Console.WriteLine(url);
        }
    }
}
What's odd is that I can run this command through the CLI and it works fine:
aws sqs get-queue-url --queue-name my-queue
I also tried a simple S3 ListObjectsV2Async call from my .NET Core app just to make sure there wasn't some broader issue, and that worked. I tried a couple of different versions of the AWS SDK for .NET and a few different versions of the .NET Core framework. With .NET 4.7.2 I got a slightly more useful exception:
SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.163.7.205:443
That is the IP of the SQS endpoint in my VPC. The EC2 instance and the endpoint are in different subnets, but I thought that should be OK. Lastly, I wrote the equivalent app in Python and that works fine, so I'm very confused.
Well, we figured out what was wrong, but it raises other questions that I don't understand. It turned out that the security group for the SQS endpoint did not have any inbound rules specified. Once we added the appropriate rules, the .NET Core app started working. But how the CLI or the Python app works without these inbound rules is a mystery.
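For reference, an inbound rule like the one that fixed it could be added with a CLI call along these lines; the group ID and CIDR below are placeholders for the endpoint's security group and the VPC range the instance lives in:

```shell
# Placeholders: sg-0123456789abcdef0 is the VPC endpoint's security
# group, 10.163.0.0/16 the CIDR covering the EC2 instance's subnets.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 10.163.0.0/16
```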

How to connect to on-premise Oracle-db using Azure Functions?

I'm trying to create an Azure Function to connect to and query an on-premise Oracle DB. I cannot see that the Oracle Client or an ODBC driver is installed on the servers to handle this.
Are there any solutions to this using JS or Python?
I have tried using the node-odbc driver, but the server is missing the Oracle client.
A partial answer for connecting to an on-premise service like an Oracle DB from Azure Functions: the existing SO thread How to Azure function configure for Site-to-Site Connectivity? has answered it, and you can refer to it. So first, you must make sure network access to the on-premise server is available.
Then, if you want to query the Oracle database via ODBC, the Oracle ODBC driver must be installed on the client side. However, the Oracle ODBC driver is a commercial component, which you need to pay for and install manually in Azure Functions. So even if you want to use JS or Python to connect, I think using Java with the Oracle JDBC driver is a better way to connect to an Oracle DB from Azure Functions, since it avoids the additional installation.
The other way I can think of is to deploy a REST API app as a proxy on your on-premise server, handling query requests from Azure Functions written in JS or Python and connecting to the Oracle DB on their behalf.
Create and deploy your code to Azure Functions as a custom Docker container using a Linux base image, and install the Oracle client libraries in the custom image. Check out my blog on the topic:
https://www.ravitella.com/azure-function-container/
You will need to establish a hybrid connection. See https://learn.microsoft.com/en-us/azure/app-service/app-service-hybrid-connections
Then you can refer to this thread to establish a context:
https://social.msdn.microsoft.com/Forums/en-US/2434fe10-3219-4625-bdbf-b9e96421230b/entity-framework-with-oracle-and-sql
For a Node.js solution, I would add the node-oracledb package and install it using Kudu:
https://oracle.github.io/node-oracledb/doc/api.html#getstarted
// myscript.js
// This example uses Node 8's async/await syntax.
var oracledb = require('oracledb');
var mypw = ... // set mypw to the hr schema password

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection({
      user: "hr",
      password: mypw,
      connectString: "localhost/XEPDB1"
    });
    let result = await connection.execute(
      `SELECT manager_id, department_id, department_name
       FROM departments
       WHERE manager_id = :id`,
      [103], // bind value for :id
    );
    console.log(result.rows);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();

Remote access to Heroku Postgres Hobby tier without SSL

Why am I able to access a hobby-dev Postgres in Heroku without SSL?
This is my Node code:
const { Client } = require('pg');

const connectionString = process.env.DATABASE_URL;
const client = new Client({
  connectionString: connectionString,
  ssl: false
});
heroku pg:credentials:url DATABASE returns sslmode=require, yet I am also able to connect remotely with psql DATABASE_URL without sslmode=require as a query param.
According to the Heroku docs and code samples this shouldn't be the case. The only thing that could explain it is that Heroku does not support encryption at rest on the Hobby tier, so why should it in transit?
I contacted Heroku support, and the response I got is that it's currently not enforced on the server side.
Their data team confirms this is a "known-but-undocumented" thing. The docs are unfortunately misleading, e.g. here and here:
If you leave off sslmode=require you will get a connection error
Encryption at rest has little to do with secure connections to the Postgres instance. After all, 'at rest' is quite the opposite of bytes moving back and forth between the client and the database host.
This is almost certainly due to the configuration of the database host itself, but it's impossible to say for certain. At a guess, the reasoning is to lower the barrier to using the service, since the hobby tier is free and popular with hobbyists and newer developers.
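Even though the server does not enforce TLS, the client can still opt in. A minimal sketch of the node-postgres client options, assuming Heroku's usual self-signed certificate chain (which is why certificate verification is disabled here):

```javascript
// Sketch: request TLS from the client side regardless of what the
// server enforces. rejectUnauthorized: false accepts Heroku's
// self-signed certificates (an assumption about your setup).
function pgClientConfig(connectionString) {
  return {
    connectionString,
    ssl: { rejectUnauthorized: false },
  };
}

// Usage with pg: new Client(pgClientConfig(process.env.DATABASE_URL))
```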

Multiple Express with express-session Applications on Single Server, Different Ports

I am running multiple MEAN applications using express-session on a single server under multiple ports. When I authenticate into application A, the established token for the other (application B) is modified. The authenticated user session for application B is then denied.
How do I configure all of my applications to persist and verify tokens independently? For example, on localhost, application A runs on port 80, and application B runs on port 90. I want a user to be able to authenticate and use application A without disrupting another user's Application B session.
Here is the code in my app.js file that should be relevant to my issue:
// Connect to database
mongoose.connect(config.mongo.uri, config.mongo.options);
var connection = mongoose.createConnection(config.mongo.uri, config.mongo.options);

var app = express();

// enable CORS
app.use(cors());
// enable cookieParser
app.use(cookieParser());
// enable session
app.use(session({
  secret: config.secrets.session,
  resave: false,
  saveUninitialized: true,
  name: 'uniqueSessionId',
  store: new MongoStore({
    mongooseConnection: connection
  })
}));
Cookies (which are used to store the session identifier) are shared across all ports on a given hostname, which is why your apps are interfering with each other.
The documentation of express-session suggests the following:
if you have multiple apps running on the same hostname (this is just the name, i.e. localhost or 127.0.0.1; different schemes and ports do not name a different hostname), then you need to separate the session cookies from each other. The simplest method is to simply set different names per app.
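Following that suggestion, a sketch of per-app session options where each app sets a distinctly named cookie; the app names and the .sid suffix are arbitrary placeholders:

```javascript
// Sketch: build express-session options with a per-app cookie name so
// apps on the same hostname (different ports) don't clobber each other.
function sessionOptions(appName, secret) {
  return {
    secret,                   // per-app session secret
    resave: false,
    saveUninitialized: true,
    name: `${appName}.sid`,   // distinct cookie name per app
  };
}
```

App A (port 80) would call app.use(session(sessionOptions('appA', secretA))) and app B (port 90) app.use(session(sessionOptions('appB', secretB))), so the browser keeps two independent session cookies.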

Resources