I am trying to figure out if there is a way to use a unique (per request) session ID in all of the winston logger calls when an HTTP request is made.
Elaboration on the issue:
Consider a scenario where several hundred requests hit a website per minute, and each request passes through different functions that log various messages.
My goal is to log messages including a unique session ID per request using winston logger, until the response is sent.
I generate a unique session ID for the request using app.use(session(...)) from express-session library.
Using morgan, the HTTP logs are printed with a unique session ID like so:
logger = winston.createLogger(...);
const myStream = {
write: (text: string) => {
logger.info(text);
}
}
morgan.token('sessionid', function (req, res) { return req['sessionID'] });
app.use(morgan(':remote-addr - :remote-user [:date[clf]] ":method :url HTTP/:http-version" :status :res[content-length] ":referrer" ":user-agent" ["SESSION_ID :sessionid"]', { stream: myStream }));
However, I also want to use the same session ID in other logger.* calls elsewhere in the code. I am able to do that, but as the number of simultaneous requests increases (using a k6 load test), the session ID gets overwritten by a new session ID from another request.
My code for using the session ID in request in a winston transport is:
public static initializeLogger(appInstance: express.Application) {
if (!appInstance) throw new Error(`Cannot initialize logger. Invalid express.Application instance passed. Logging may not be available`);
appInstance.use((req, res, next) => {
//this.m_sessionID = req["sessionID"];
this.m_logger.clear();
this.m_logger = winston.createLogger({
level: LOG_LEVEL,
levels: winston.config.syslog.levels,
format: winston.format.json(),
transports: [
new winston.transports.Console({ format: winston.format.simple() }),
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'debug.log', level: 'debug' }),
new WinstonCloudWatch({
logGroupName: CLOUDWATCH_LOG_GROUP_NAME,
logStreamName: function () {
let date = new Date().toISOString().split('T')[0];
return 'k-server-logs-' + date;
},
awsRegion: AWS_REGION,
awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
awsSecretKey: process.env.AWS_SECRET_ACCESS_KEY,
retentionInDays: process.env.CLOUDWATCH_LOG_RETENTION_DAYS ? Number(process.env.CLOUDWATCH_LOG_RETENTION_DAYS) : 30,
messageFormatter: (log) => {
return `${JSON.stringify({
message: log.message,
sessionID: req["sessionID"],
level: log.level
})}`
}
})
],
});
next();
});
}
I was hoping that putting the winston logger setup in app.use(...) middleware would configure the CloudWatch transport for the winston logger and pick up req.sessionID as each request comes in.
However, this setup isn't working. If I send even 10 simultaneous requests, this code breaks and the sessionID is incorrectly stamped on logger.* messages and/or duplicated across multiple messages.
I reviewed other implementations such as https://solidgeargroup.com/en/express-logging-global-unique-request-identificator-nodejs/ but could not get it to work.
Hoping for some advice - I am sure my setup is off.
Thank you in advance.
Key hint from https://solidgeargroup.com/en/express-logging-global-unique-request-identificator-nodejs/:
Use express-http-context, which has set and get functions that ensure the unique session ID is available throughout your code.
import httpContext from 'express-http-context';
...
...
logger.add(new WinstonCloudWatch({
level:LOG_LEVEL,
logGroupName: CLOUDWATCH_LOG_GROUP_NAME,
logStreamName: function () {
let date = new Date().toISOString().split('T')[0];
return `${process.env.CLOUDWATCH_LOG_FILE_NAMEPREFIX}-logs-${date}`;
},
awsRegion: AWS_REGION,
awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
awsSecretKey: process.env.AWS_SECRET_ACCESS_KEY,
retentionInDays: process.env.CLOUDWATCH_LOG_RETENTION_DAYS ? Number(process.env.CLOUDWATCH_LOG_RETENTION_DAYS) : 30,
messageFormatter: (log) => {
return `${JSON.stringify({
message: log.message,
sessionID: httpContext.get('reqId'),
level: log.level
})}`
}
}));
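For completeness, here is a minimal sketch of the set side, assuming the middleware is registered before your routes. The 'reqId' key matches the get call above; the uuid fallback is an illustrative choice, not part of the original answer:
import express from 'express';
import httpContext from 'express-http-context';
import { v4 as uuidv4 } from 'uuid';

const app = express();

// httpContext.middleware must be mounted before anything that calls httpContext.set/get
app.use(httpContext.middleware);
app.use((req, res, next) => {
  // store the express-session ID (or any unique ID) for this request
  httpContext.set('reqId', req['sessionID'] ?? uuidv4());
  next();
});
Because express-http-context keeps a separate context per request, concurrent requests no longer overwrite each other's session IDs.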
Related
I am using Apollo GraphQL on a Node.js server. I recently noticed that my requests were taking a lot of time and decided to dig into the issue. I added log timestamps and console logs at various locations in my server to figure out the bottleneck. My server code is as follows:
(async function () {
const app = express();
const httpServer = createServer(app);
const wsServer = new WebSocketServer({
server: httpServer,
path: "/graphql",
});
const serverCleanup = useServer({ schema }, wsServer);
const server = new ApolloServer({
schema,
plugins: [
ApolloServerPluginDrainHttpServer({ httpServer }),
{
async serverWillStart() {
return {
async drainServer() {
await serverCleanup.dispose();
},
};
},
},
myPlugin
],
healthCheckPath: '/health',
async onHealthCheck() {
return
},
});
await server.start();
app.use(
'/',
cors(),
// 50mb is the limit that `startStandaloneServer` uses, but you may configure this to suit your needs
bodyParser.json({ limit: '50mb' }),
// expressMiddleware accepts the same arguments:
// an Apollo Server instance and optional configuration options
expressMiddleware(server, {
context: async ({ req }) => {
let decodedToken
try {
if (env === 'development') {
decodedToken = {
uid: "test"
}
} else {
decodedToken = await verifyIdToken(req.headers?.authorization?.replace('Bearer ', ''))
}
} catch (error) {
decodedToken = null
}
return {
decodedToken,
jwt: decodedToken
}
}
}),
);
await new Promise((resolve) => httpServer.listen({ port: 4000 }, resolve));
console.log(`🚀 Server ready at http://localhost:4000/`);
})()
Then in my GraphQL resolvers I have code similar to this:
const { AuthenticationError } = require('@apollo/server/express4');
const mutations = {
createPost: async(_, { createPostInput }, context) => {
console.log('In graphql mutation')
if (!context.decodedToken || !Object.keys(context.decodedToken).length) {
throw new AuthenticationError('Unauthenticated');
}
console.log('In graphql mutation 2')
return await createPostApi(createPostInput);
}
}
code for "my plugin" passed to apollo server is taken from apollo docs. It only prints logs for various events.
const myPlugin = {
// Fires whenever a GraphQL request is received from a client.
async requestDidStart(requestContext) {
console.log('Request started!');
return {
// Fires whenever Apollo Server will parse a GraphQL
// request to create its associated document AST.
async parsingDidStart(requestContext) {
console.log('Parsing started!');
},
// Fires whenever Apollo Server will validate a
// request's document AST against your GraphQL schema.
async validationDidStart(requestContext) {
console.log('Validation started!');
},
async executionDidStart(requestContext) {
console.log('Execution started!');
},
};
},
};
I have installed the log-timestamp package to print a timestamp for each log, and here is the output:
[2023-01-31T18:23:02.428Z] Request started!
[2023-01-31T18:23:02.430Z] Parsing started!
[2023-01-31T18:23:02.432Z] Validation started!
[2023-01-31T18:23:02.450Z] Execution started!
[2023-01-31T18:23:03.081Z] Request started!
[2023-01-31T18:23:03.081Z] Parsing started!
[2023-01-31T18:23:03.081Z] Validation started!
[2023-01-31T18:23:03.083Z] Execution started!
[2023-01-31T18:23:03.380Z] Request started!
[2023-01-31T18:23:03.381Z] Execution started!
[2023-01-31T18:23:18.290Z] Request started!
[2023-01-31T18:23:18.291Z] Execution started!
[2023-01-31T18:23:22.878Z] Request started!
[2023-01-31T18:23:22.878Z] Execution started!
[2023-01-31T18:23:23.878Z] Request started!
[2023-01-31T18:23:23.878Z] Execution started!
[2023-01-31T18:23:24.869Z] Request started!
[2023-01-31T18:23:24.869Z] Execution started!
[2023-01-31T18:23:30.389Z] Request started!
[2023-01-31T18:23:30.390Z] Execution started!
[2023-01-31T18:23:41.372Z] Request started!
[2023-01-31T18:23:41.373Z] Execution started!
[2023-01-31T18:24:01.046Z] Request started!
[2023-01-31T18:24:01.047Z] Execution started!
[2023-01-31T18:24:02.040Z] Request started!
[2023-01-31T18:24:02.041Z] Execution started!
[2023-01-31T18:24:03.180Z] In graphql mutation
[2023-01-31T18:24:03.180Z] In graphql mutation 2
// logs below this point are from my actual mutation. Actual log output has been redacted
[2023-01-31T18:24:03.180Z] Starting ...
[2023-01-31T18:24:03.181Z] Inside function
[2023-01-31T18:24:03.181Z] Sorting ...
[2023-01-31T18:24:03.181Z] Getting from db ...
[2023-01-31T18:24:03.311Z] Got ...
[2023-01-31T18:24:03.311Z] Creating ... input
[2023-01-31T18:24:03.312Z] Creating ... input
[2023-01-31T18:24:03.312Z] Creating ... Input
[2023-01-31T18:24:03.312Z] Creating in db
[2023-01-31T18:24:03.702Z] Fetching fetching from db
[2023-01-31T18:24:03.756Z] parsing
[2023-01-31T18:24:03.756Z] Starting another thing
[2023-01-31T18:24:03.756Z] In that other thing
[2023-01-31T18:24:03.756Z] Starting a third thing
[2023-01-31T18:24:03.760Z] Creating (db call) ...
[2023-01-31T18:24:03.803Z] Finding (db call) ...
[2023-01-31T18:24:03.836Z] Creating (another db call) ...
[2023-01-31T18:24:03.838Z] Creating (db call) ...
[2023-01-31T18:24:03.838Z] Creating (db call) ...
[2023-01-31T18:24:03.839Z] Creating (db call)...
[2023-01-31T18:24:03.840Z] Finishing ...
As you can see, the request started at 18:23:02 and reached the resolver at 18:24:03, a full minute later. There is no middleware involved, and this is my local machine, so there is no network latency or token-verification wait either. The actual business logic executes within the same second, but the overall time becomes 1 min+. How can I reduce this lag?
Weirdly enough, a simple machine restart fixed the problem, although I still do not understand what caused it in the first place. It could be a problem related to macOS or Apollo Server.
As I understand it, RSocket-JS supports routing messages using encodeCompositeMetadata and encodeRoute; however, I cannot get the server to accept a fireAndForget message. The server constantly logs the following message:
o.s.m.r.a.support.RSocketMessageHandler : No handler for fireAndForget to ''
This is the server mapping I am trying to trigger:
@Controller
public class MockController {
private static final Logger LOGGER = LoggerFactory.getLogger(MockController.class);
@MessageMapping("fire-and-forget")
public Mono<Void> fireAndForget(MockData mockData) {
LOGGER.info("fireAndForget: {}", mockData);
return Mono.empty();
}
}
This is the TypeScript code that's trying to make the connection:
client.connect().subscribe({
onComplete: socket => {
console.log("Connected to socket!")
socket.fireAndForget({
data: { someData: "Hello world!" },
metadata: encodeCompositeMetadata([[MESSAGE_RSOCKET_ROUTING, encodeRoute("fire-and-forget")]])
});
},
onError: error => console.error(error),
onSubscribe: cancel => {/* call cancel() to abort */ }
});
I've also tried adding the route in other ways that I found on the internet (e.g. metadata: String.fromCharCode('route'.length) + 'route'), but none seem to work.
What do I need to do to format the route in a way that the Spring Boot server recognizes it and can route the message correctly?
Binary-only communication when using CompositeMetadata
Please make sure that you have configured your ClientTransport with binary codecs as follows:
new RSocketWebSocketClient(
{
url: 'ws://<host>:<port>'
},
BufferEncoders,
),
With binary encoders, you will be able to properly send your routes using composite metadata.
Also, please make sure that you have configured metadataMimeType as:
...
const metadataMimeType = MESSAGE_RSOCKET_COMPOSITE_METADATA.string; // message/x.rsocket.composite-metadata.v0
new RSocketClient<Buffer, Buffer>({
setup: {
...
metadataMimeType,
},
transport: new RSocketWebSocketClient(
{
url: 'ws://<host>:<port>',
},
BufferEncoders,
),
});
Note: once you enable BufferEncoders, your JSONSerializer will not work and you will need to encode your JSON to binary yourself (I suggest doing that anyway, since future versions will remove support for the Serializers concept completely). Therefore, your request has to be adjusted as in the following example:
client.connect().subscribe({
onComplete: socket => {
console.log("Connected to socket!")
socket.fireAndForget({
data: Buffer.from(JSON.stringify({ someData: "Hello world!" })),
metadata: encodeCompositeMetadata([[MESSAGE_RSOCKET_ROUTING, encodeRoute("fire-and-forget")]])
});
},
onError: error => console.error(error),
onSubscribe: cancel => {/* call cancel() to abort */ }
});
Use the @Payload annotation for your payload on the Spring backend
Also, to handle any data from the client and to let Spring know that the specified parameter argument is your incoming request data, you have to annotate it with the @Payload annotation:
@Controller
public class MockController {
private static final Logger LOGGER = LoggerFactory.getLogger(MockController.class);
@MessageMapping("fire-and-forget")
public Mono<Void> fireAndForget(@Payload MockData mockData) {
LOGGER.info("fireAndForget: {}", mockData);
return Mono.empty();
}
}
I am using the Node.js ws library, to listen to events in user accounts on a 3rd party API. For each user, I open a websocket to listen to the events in the user's account.
Turns out, the 3rd-party API doesn't provide a userID for each event, so if I have 10 websocket connections to user-accounts, I cannot determine which account an event came from.
I have access to a unique userId prior to starting each of my connections.
Is there a way to append or wrap the websocket connection with the userId identifier, to each connection I make, such that when I receive an event, I can access the custom identifier, and subsequently know which user's account the event came from?
The code below is a mix of real code and pseudocode (i.e. customSocket).
const ws = new WebSocket('wss://thirdparty-api.com/accounts', {
port: 8080,
});
ws.send(
JSON.stringify({
action: 'authenticate',
data: {
oauth_token: access_token,
},
})
);
// wrap and attach data here (pseudocode at top-level)
customSocket.add({userId,
ws.send(
JSON.stringify({
action: 'listen',
data: {
streams: ['action_updates'],
},
})
)
})
// listen for wrapper data here, pseudocode at top level
customSocket.emit((customData) {
ws.on('message', function incoming(data) {
console.log('incoming -> data', data.toString());
})
console.log('emit -> customData', customData);
})
Looking at the socket.io library, the namespace feature may solve for this, but I can't determine if that's true or not. Below is an example in their documentation:
// your application has multiple tenants so you want to dynamically create one namespace per tenant
const workspaces = io.of(/^\/\w+$/);
workspaces.on('connection', socket => {
const workspace = socket.nsp;
workspace.emit('hello');
});
// this middleware will be assigned to each namespace
workspaces.use((socket, next) => {
// ensure the user has access to the workspace
next();
});
I found a solution to this which is fairly simple. First create a message handler function:
const eventHandler = (uid, msg) => {
console.log(`${uid} did ${msg}`);
};
Then, when you create the websocket for the given user, wrap the .on event with the handler:
const createSocketForUser = (uid, eventHandler) => {
const socket = new WebSocket(/* ... */);
socket.onmessage = (msg) => {
eventHandler(uid, msg)
};
return socket;
}
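Usage is then just a matter of creating one socket per user; a minimal sketch, where the user IDs are illustrative:
// Each socket's messages arrive tagged with the uid it was created for
const sockets = ['user-1', 'user-2'].map((uid) =>
  createSocketForUser(uid, eventHandler)
);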
Problem:
I’m using Cypress with Angular and Apollo GraphQL. I’m trying to mock the GraphQL server so I can write my tests against custom responses. The issue here is that all GraphQL calls go to a single endpoint, and Cypress doesn’t yet have full network support by default to distinguish between these calls.
An example scenario would be:
access /accounts/account123
when the API is hit, two graph calls are sent out: a getAccountDetails query and a getVehicles query
Tried:
Using one stub of the graph endpoint per test. Not working, as it stubs all calls with the same stub.
Changing the app so that the query name is appended to the URL 'on the go', where I could intercept it in Cypress and therefore have a unique URL for each query. Not possible, as I cannot change the app.
My only bet seems to be intercepting the XHR call, but I can't seem to get it working. I tried all the XHR options outlined here, but with no luck (it picks only the stub declared last and uses that for all calls): https://github.com/cypress-io/cypress-documentation/issues/122.
The answer from this question uses Fetch and therefore doesn't apply:
Mock specific graphql request in cypress when running e2e tests
Anyone got any ideas?
With Cypress 6.0, route and route2 are deprecated in favour of intercept. As written in the docs (https://docs.cypress.io/api/commands/intercept.html#Aliasing-individual-GraphQL-requests), you can mock the GraphQL requests this way:
cy.intercept('POST', '/api', (req) => {
  if (req.body.operationName === 'operationName') {
    req.reply({ fixture: 'mockData.json' });
  }
});
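The same docs page also covers aliasing individual operations so you can wait on them. A minimal sketch along those lines, where the getAccountDetails operation name and the fixture file are assumptions based on the scenario above:
cy.intercept('POST', '/api', (req) => {
  if (req.body.operationName === 'getAccountDetails') {
    // setting req.alias lets the test cy.wait('@getAccountDetails') later
    req.alias = 'getAccountDetails';
    req.reply({ fixture: 'accountDetails.json' });
  }
});
cy.visit('/accounts/account123');
cy.wait('@getAccountDetails');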
For anyone else hitting this issue, there is a working solution with the new Cypress release using cy.route2().
The requests are sent to the server, but the responses are stubbed/altered on return.
Later Edit:
Noticed that the code version below doesn't alter the status code. If you need this, I'd recommend the version I left as a comment below.
Example code:
describe('account details', () => {
  it('should display the account details correctly', () => {
    cy.route2(graphEndpoint, (req) => {
      let body = req.body;
      if (body == getAccountDetailsQuery) {
        req.reply((res) => {
          res.body = getAccountDetailsResponse;
          res.status = 200;
        });
      } else if (body == getVehiclesQuery) {
        req.reply((res) => {
          res.body = getVehiclesResponse;
          res.status = 200;
        });
      }
    }).as('accountStub');
    cy.visit('/accounts/account123').wait('@accountStub');
  });
});
Both your query and response should be in string format.
This is the cy command I'm using:
import * as hash from 'object-hash';
Cypress.Commands.add('stubRequest', ({ request, response, alias }) => {
const previousInterceptions = Cypress.config('interceptions');
const expectedKey = hash(
JSON.parse(
JSON.stringify({
query: request.query,
variables: request.variables,
}),
),
);
if (!(previousInterceptions || {})[expectedKey]) {
Cypress.config('interceptions', {
...(previousInterceptions || {}),
[expectedKey]: { alias, response },
});
}
cy.intercept('POST', '/api', (req) => {
const interceptions = Cypress.config('interceptions');
const receivedKey = hash(
JSON.parse(
JSON.stringify({
query: req.body.query,
variables: { ...req.body.variables },
}),
),
);
const match = interceptions[receivedKey];
if (match) {
req.alias = match.alias;
req.reply({ body: match.response });
}
});
});
With that, it is possible to stub exact request queries and variables:
import { MUTATION_LOGIN } from 'src/services/Auth';
...
cy.stubRequest({
request: {
query: MUTATION_LOGIN,
variables: {
loginInput: { email: 'test@user.com', password: 'test@user.com' },
},
},
response: {
data: {
login: {
accessToken: 'Bearer FakeToken',
user: {
username: 'Fake Username',
email: 'test@user.com',
},
},
},
});
...
Cypress.config is what makes it possible: it is a kind of global key/value getter/setter available in tests, which I'm using to store interceptions keyed by a hash of the expected request, together with their fake responses.
This helped me: https://www.autoscripts.net/stubbing-in-cypress/ (I'm not sure where the original source is).
A "fix" that I use is to create multiple aliases, with different names, on the same route, with wait on the alias between the different names, as many as requests you have.
I guess you can use aliases as already suggested in the answer by @Luis above, like this. This is given in the documentation too. The only thing you need here is multiple aliases, since you have multiple calls and have to manage the sequence between them. Please correct me if I understood your question differently.
cy.route({
  method: 'POST',
  url: 'abc/*',
  status: 200,
  response: { /* whatever response is needed in the mock */ }
}).as('mockAPI')
// HERE YOU SHOULD WAIT till the mockAPI is resolved.
cy.wait('@mockAPI')
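A minimal sketch of that multiple-alias idea, written with the non-deprecated cy.intercept; the operation names and response variables are assumptions carried over from the earlier answers:
cy.intercept('POST', graphEndpoint, (req) => {
  if (req.body.operationName === 'getAccountDetails') {
    req.alias = 'getAccountDetails';
    req.reply({ body: getAccountDetailsResponse });
  } else if (req.body.operationName === 'getVehicles') {
    req.alias = 'getVehicles';
    req.reply({ body: getVehiclesResponse });
  }
});
cy.visit('/accounts/account123');
// wait on each alias in the order the app fires the requests
cy.wait('@getAccountDetails');
cy.wait('@getVehicles');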
I am using Kafka in my project via the kafka-node package.
I have introduced a Meteor method, and inside it I am trying to use the kafka-node module, e.g.:
Meteor.methods
kafka: (topic, message) ->
if(Meteor.isServer)
message = JSON.stringify(message)
kafka = Meteor.npmRequire 'kafka-node'
HighLevelProducer = kafka.HighLevelProducer
Client = kafka.Client
client = new Client
producer = new HighLevelProducer(client)
payloads =[{topic: topic, messages: [message]}]
producer.on 'ready', ->
producer.send payloads, (error,data) ->
if not error
HighLevelConsumer = kafka.HighLevelConsumer
Client = kafka.Client
client = new Client('localhost:2181')
topics = [ { topic: topic } ]
options =
autoCommit: true
fetchMaxWaitMs: 1000
fetchMaxBytes: 1024 * 1024
consumer = new HighLevelConsumer(client, topics, options)
consumer.on 'message',(message) ->
console.log message.value
#Meteor.call 'saveMessage', message.value, (error,data) ->
return
consumer.on 'error', (err) ->
console.log 'error', err
return
producer.on 'error', (err) ->
console.log 'error', err
Everything was fine until I decided to use Meteor.call to invoke a method that saves the message.
It gives me this error:
Meteor code must always run within a Fiber. Try wrapping callbacks
that you pass to non-Meteor libraries with Meteor.bindEnvironment
I tried encapsulating it inside a Fiber and used Meteor.wrapAsync(); neither helped.
Please can you help me? I am having a difficult time solving this issue.
If you're using node style callbacks, you can use Meteor.bindEnvironment around the callback. For example:
let Sockets = new Mongo.Collection('connections');
function createConnection (name) {
check(name, String);
let socket = net.connect(23, '192.168.1.3', Meteor.bindEnvironment(function () {
Sockets.upsert({ name: name }, { $set: { status: 'connected' } });
}));
}
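Applied to the kafka-node consumer from the question, that would look roughly like this (a sketch; only the callback wrapping is the point, the rest of the method body is assumed unchanged):
// Wrap the consumer's message callback so Meteor.call runs inside a Fiber
consumer.on('message', Meteor.bindEnvironment(function (message) {
  console.log(message.value);
  Meteor.call('saveMessage', message.value, function (error, data) {
    if (error) console.log('error', error);
  });
}));
The same wrapping applies to the producer's 'ready' and 'error' callbacks if they ever touch Meteor APIs.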