Listen to XRPL payments: Google Cloud Run container failed to start and listen to the port - websocket

I have an XRP wallet and I'm trying to monitor incoming payments to this address with WebSockets.
My code works fine locally.
After subscribing to the wallet address, I simply listen for "transaction" events and get the details about the payments made to that address.
But I need it to run 24/7 in the cloud. Nothing heavy here; it is simply a single connection running forever.
The problem is, when I deploy my code to Google Cloud Run, I get the following error:
The user-provided container failed to start and listen on the port defined by the PORT=8080 environment variable.
So here is my question:
Given that I'm listening for "transaction" events via wss://xrplcluster.com/ (with no port of my own to listen on), how should I modify my code to resolve the Cloud Run complaint?
Here is my code; thanks in advance.
import { Client } from 'xrpl';

const client = new Client("wss://xrplcluster.com/");

async function main() {
  await client.connect();

  const response = await client.request({
    command: "subscribe",
    accounts: ["my-wallet-address"]
  });

  client.on("transaction", tx => {
    if (tx.validated) {
      const transaction = {
        amount: tx.transaction.Amount,
        from: tx.transaction.Account,
      };
      console.log(transaction);
    }
  });

  client.on("disconnected", async code => {
    console.log(`The wss client is disconnected with code: ${code}`);
    await client.connect();
  });
}

main();
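
One common way to satisfy that Cloud Run check, without touching the websocket logic, is to also bind a minimal HTTP server to the port Cloud Run provides. The following is only a rough sketch, assuming Node's built-in http module and reusing main() from the code above; the "ok" response and log message are placeholders, not anything Cloud Run requires.

import http from 'node:http';

// Cloud Run injects PORT (8080 by default) and expects the container to
// accept HTTP connections on that port shortly after startup.
const port = process.env.PORT || 8080;

http.createServer((req, res) => {
  // Trivial 200 response so the startup check (and any health probes) pass.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("ok");
}).listen(port, () => {
  console.log(`HTTP server listening on port ${port}`);
});

// The websocket subscription keeps running alongside the HTTP server.
main();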

Related

Svelte/Sveltekit and socket.io-client not working in dev (works in preview)

I'm trying to make socket.io-client work in a Svelte front-end app to talk to an existing API server that already uses socket.io. After a number of challenges I managed to make this work, but I can only get it working with SvelteKit's preview mode and not in dev mode. Could someone with knowledge of these explain why, or suggest what I need to do to get it connecting in dev?
svelte 3.34.0
sveltekit next-169
socket.io(-client) 4.2.0
Basic code follows, currently within a file $lib/db.js where I define a few stores that are pulled into the layout for general use:
import { io } from "socket.io-client";
import { browser } from '$app/env';

const initSocket = async () => {
  console.log('creating socket...');
  let socket = io('http://192.168.1.5:4000', { 'connect timeout': 5000 });

  socket.on("connect", () => {
    // always works in preview...
    console.log('socket created with ID:', socket.id);
  });

  socket.on("connect_error", (error) => {
    // permanently fired in dev...
    console.error('Failed to connect', error);
  });

  socket.on("error", (error) => {
    console.error('Error on socket', error);
  });

  socket.on("foo", data => {
    // works in preview when server emits a message of type 'foo'..
    console.log("FOO:", data);
  });
};

if (browser) {
  initSocket();
}
// stores setup and exports omitted..
With svelte-kit preview --host I see the socket creation log message with the socket ID, and the API server logs the same ID on its side. The socket works and data is received as expected.
With svelte-kit dev --host, however, the log message from the socket.on("connect") handler is never output, and I just see an endless stream of error messages in the browser console from the socket.on("connect_error") handler:
Failed to connect Error: xhr poll error
at XHR.onError (transport.js:31)
at Request.<anonymous> (polling-xhr.js:93)
at Request.Emitter.emit (index.js:145)
at Request.onError (polling-xhr.js:242)
at polling-xhr.js:205
Importantly, there is no attempt to actually contact the server at all. The server never receives a connection request, and Wireshark/tcpdump confirm that no packet is ever transmitted to 192.168.1.5:4000.
Obviously, having to rebuild and re-run preview mode on each code change makes development pretty painful. Does anyone have insight into what the issue is here, or suggestions on how to proceed?
I've had a similar problem; I solved it by adding this code to svelte.config.js:
const config = {
  kit: {
    vite: {
      resolve: {
        alias: {
          "xmlhttprequest-ssl": "./node_modules/engine.io-client/lib/xmlhttprequest.js",
        },
      },
    },
  },
};
The solution was provided by a comment in the Vite issue tracker.

Failure to use Netnut.io proxy with Apify Cheerio scraper

I'm developing a web scraper and I want to integrate a proxy from Netnut into it.
The Netnut integration details given:
Proxy URL: gw.ntnt.io
Proxy Port: 5959
Proxy User: igorsavinkin-cc-any
Proxy Password: xxxxx
Example Rotating IP format (IP:PORT:USERNAME-CC-COUNTRY:PASSWORD):
gw.ntnt.io:5959:igorsavinkin-cc-any:xxxxx
In order to change the country, please change 'any' to your desired
country. (US, UK, IT, DE etc.) Available countries:
https://l.netnut.io/countries
Our IPs are automatically rotated, if you wish to make them Static
Residential, please add a session ID in the username parameter like
the example below:
Username-cc-any-sid-any_number
The code:
Apify.main(async () => {
  const proxyConfiguration = await Apify.createProxyConfiguration({
    proxyUrls: [
      'gw.ntnt.io:5959:igorsavinkin-DE:xxxxx'
    ]
  });

  // Add URLs to a RequestList
  const requestQueue = await Apify.openRequestQueue(queue_name);
  await requestQueue.addRequest({ url: 'https://ip.nf/me.txt' });

  // Create an instance of the CheerioCrawler class - a crawler
  // that automatically loads the URLs and parses their HTML using the cheerio library.
  const crawler = new Apify.CheerioCrawler({
    // Let the crawler fetch URLs from our list.
    requestQueue,
    // To use the proxy IP session rotation logic, you must turn the proxy usage on.
    proxyConfiguration,
    // Activates the Session pool.
    minConcurrency: 10,
    maxConcurrency: 50,
    // On error, retry each page at most once.
    maxRequestRetries: 2,
    // Increase the timeout for processing of each page.
    handlePageTimeoutSecs: 50,
    // Limit to 10 requests per one crawl
    maxRequestsPerCrawl: 1000,
    handlePageFunction: async ({ request, $/*, session*/ }) => {
      const text = $('body').text();
      log.info(text);
      ...
    }
  });

  await crawler.run();
});
The error: RequestError: getaddrinfo ENOTFOUND 5959 5959:80
Seems the crawler mixes up the URL ports 5959 and 80...
ERROR CheerioCrawler: handleRequestFunction failed, reclaiming failed request back to the list or queue {"url":"https://ip.nf/me.txt","retryCount":3,"id":"F32s4Txz0fBUmwd"}
RequestError: getaddrinfo ENOTFOUND 5959 5959:80
    at ClientRequest.request.once (C:\Users\User\Documents\RnD\Node.js\mercateo-scraper\node_modules\got\dist\source\core\index.js:953:111)
    at Object.onceWrapper (events.js:285:13)
    at ClientRequest.emit (events.js:202:15)
    at ClientRequest.origin.emit.args (C:\Users\User\Documents\RnD\Node.js\mercateo-scraper\node_modules\@szmarczak\http-timer\dist\source\index.js:39:20)
    at onerror (C:\Users\User\Documents\RnD\Node.js\mercateo-scraper\node_modules\agent-base\dist\src\index.js:115:21)
    at callbackError (C:\Users\User\Documents\RnD\Node.js\mercateo-scraper\node_modules\agent-base\dist\src\index.js:134:17)
    at processTicksAndRejections (internal/process/next_tick.js:81:5)
Any way out?
Try to use it in this format:
http://username:password@host:port
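For the Apify SDK that means converting Netnut's colon-separated line into a full proxy URL before handing it to createProxyConfiguration. A rough sketch using the host, port and credentials from the question (xxxxx is still the placeholder password, and the username follows the username-cc-country pattern from the Netnut instructions above):

const Apify = require('apify');

Apify.main(async () => {
  // Netnut lists "host:port:username:password"; Apify's proxyUrls option
  // expects standard proxy URLs of the form scheme://user:pass@host:port.
  const proxyConfiguration = await Apify.createProxyConfiguration({
    proxyUrls: [
      'http://igorsavinkin-cc-de:xxxxx@gw.ntnt.io:5959',
    ],
  });
  // ...then pass proxyConfiguration to the CheerioCrawler exactly as in the question.
});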

Websocket request succeeds in writing to DynamoDB but returns Internal Server Error

I created an AWS API Gateway route for Websocket connections. I started with the AWS provided Simple Web Chat templates but have modified it to fit my needs. The API Gateway calls a Lambda function that writes to a DynamoDB table.
I am able to make a websocket connection but when I make my next request to insert some data the data appears successfully in my DynamoDB table but the response I get back is Internal Server Error.
I don't understand what is causing the Internal Server Error. When I look in the CloudWatch logs I just see normal traffic with no errors.
I could use some help understanding what is going wrong or how I can troubleshoot this better.
Here is the Lambda function that is being called:
const AWS = require("aws-sdk");
const customId = require("custom-id");

const ddb = new AWS.DynamoDB.DocumentClient({
  apiVersion: "2012-08-10",
  region: process.env.AWS_REGION,
});

exports.handler = async (event) => {
  const uniqueId = customId({
    randomLength: 1,
  });

  const data = {
    uniqueId: uniqueId,
    members: [
      {
        connectionId: event.requestContext.connectionId,
      },
    ],
    events: [],
    parameters: [],
  };

  const putParams = {
    TableName: process.env.EVENT_TABLE_NAME,
    Item: data,
  };

  try {
    await ddb.put(putParams).promise();
  } catch (err) {
    return {
      statusCode: 400,
      body: "Failed to create: " + JSON.stringify(err),
    };
  }

  return { statusCode: 200, body: putParams };
};
Image of AWS CloudWatch Logs
The error returned by wscat looks like this:
{"message": "Internal server error", "connectionId":"NZxV_ddNIAMCJrw=", "requestId":"NZxafGiyoAMFoAA="}
I just had the same problem. The issue in my case was because API Gateway did not have permission to call the Lambda function in order to process a message arriving from the websocket. The 'internal server error' in this case is API Gateway saying it had some problem when it tried to invoke the Lambda function to handle the websocket message.
I was using CDK to deploy the infrastructure, and I created one WebSocketLambdaIntegration for the connect, disconnect and default websocket handlers, but this doesn't work. You have to create separate WebSocketLambdaIntegration instances even if you are calling the same Lambda function for all websocket events, otherwise CDK does not set the correct permissions.
I could see this was the problem because 1) I was using the same Lambda function for the connect, disconnect and default routes, and 2) in CloudWatch Logs I was only seeing log messages for one of these routes, in this case the 'connect' one. When I sent a message over the websocket, I was not seeing the expected log messages from the Lambda that was supposed to be handling incoming websocket messages. When I disconnected from the websocket, I did not see the expected log messages from the 'disconnect' handler.
This was because CDK had only given Lambda invoke permission to specific routes on the API Gateway websocket stage, and it had only authorised the 'connect' route, not the others.
Fixing the CDK stack so that it correctly assigned permissions, allowing API Gateway to invoke my Lambda for all websocket routes, fixed the problem.
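In CDK terms the working layout looked roughly like the sketch below: one WebSocketLambdaIntegration per route, even though every route calls the same Lambda. This is only an illustration of that pattern, not the exact stack from the answer; it assumes CDK v2 with the aws-apigatewayv2 L2 constructs (older releases ship these as the @aws-cdk/aws-apigatewayv2-alpha packages), and names like ChatStack and WsHandler are placeholders.

const cdk = require('aws-cdk-lib');
const lambda = require('aws-cdk-lib/aws-lambda');
const { WebSocketApi } = require('aws-cdk-lib/aws-apigatewayv2');
const { WebSocketLambdaIntegration } = require('aws-cdk-lib/aws-apigatewayv2-integrations');

class ChatStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    // A single Lambda that handles the connect, disconnect and default routes.
    const handlerFn = new lambda.Function(this, 'WsHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    // A separate integration instance per route; reusing one instance for all
    // three routes is what left some routes without permission to invoke the Lambda.
    new WebSocketApi(this, 'ChatApi', {
      connectRouteOptions: {
        integration: new WebSocketLambdaIntegration('ConnectIntegration', handlerFn),
      },
      disconnectRouteOptions: {
        integration: new WebSocketLambdaIntegration('DisconnectIntegration', handlerFn),
      },
      defaultRouteOptions: {
        integration: new WebSocketLambdaIntegration('DefaultIntegration', handlerFn),
      },
    });
  }
}

module.exports = { ChatStack };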
I see it now. It was the last line of my Lambda function. API Gateway expects the body of a Lambda proxy response to be a string, so returning the raw putParams object produced the Internal Server Error even though the write succeeded. I changed it and now it works fine:
return { statusCode: 200, body: JSON.stringify(putParams) };

How to wait for WebSocket STOMP messages in Cypress.io

In one of my tests I want to wait for WebSocket STOMP messages. Is this possible with Cypress.io?
If the websocket you'd like to access is being established by your application, you could follow this basic process:
Obtain a reference to the WebSocket instance from inside your test.
Attach an event listener to the WebSocket.
Return a Cypress Promise that is resolved when your WebSocket receives the message.
This is a bit difficult for me to test out, absent a working application, but something like this should work:
In your application code:
// assuming you're using stomp-websocket: https://github.com/jmesnil/stomp-websocket
const Stomp = require('stompjs');
// bunch of app code here...
const client = Stomp.client(url);
if (window.Cypress) {
  // running inside of a Cypress test, so expose this websocket globally
  // so that the tests can access it
  window.stompClient = client
}
In your Cypress test code:
cy.window() // yields Window of application under test
  .its('stompClient') // will automatically retry until `window.stompClient` exists
  .then(stompClient => {
    // Cypress will wait for this Promise to resolve before continuing
    return new Cypress.Promise(resolve => {
      const onReceive = () => {
        subscription.unsubscribe() // clean up our subscription
        resolve() // resolve so Cypress continues
      }
      // create a new subscription on the stompClient
      const subscription = stompClient.subscribe("/something/you're/waiting/for", onReceive)
    })
  })

Connect to a sails.js instance via websockets

Is it possible to connect any external application to my sails.js application via WebSockets?
I can use the underlying socket.io embedded in sails.js to talk between the client and server, that's not a problem.
But, I would like to connect another, separate, application to the sails.js server via websockets, and have those two communicate with each other that way, and I am wondering if this is possible?
If so, how can we do this?
Thanks.
Based on SailsJS documentation, we have access to the io object of socket.io, via sails.io.
From that, I just adjusted bootstrap.js:
module.exports.bootstrap = function (cb) {
  sails.io.on('connection', function (socket) {
    socket.on('helloFromClient', function (data) {
      console.log('helloFromClient', data);
      socket.emit('helloFromServer', {server: 'says hello'});
    });
  });
  cb();
};
Then, in my other Node.js application (which could also be another SailsJS application; I will test that later on), I simply connected and sent a message:
var io = require('socket.io-client');
var socket = io.connect('http://localhost:3000');
socket.emit('helloFromClient', {client: 'says hello'});
socket.on('helloFromServer', function (data) {
  console.log('helloFromServer', data);
});
And here are the outputs.
In SailsJS I see:
helloFromClient { client: 'says hello' }
In my client nodejs app I see:
helloFromServer { server: 'says hello' }
So, it seems to be working just fine.
