Get all Seq logs from requests that meet some condition - seq-logging

I'm using Seq to capture logs on a local API and I want to find log messages for slow requests.
Each request writes several logs, and one of them includes the total time the request took. I can use something like RequestTime > 500 to find those logs, but it doesn't include the other logs for those requests (understandably). That tells me which APIs are slow but not why they're slow; the other logs will provide that information.
Is there a way to ask Seq to return all log messages for requests that meet a condition (like the one for total request time above)? They all have a RequestId value that can be used to identify which logs belong to each request.
I'm aware I can export the results of the first query and use an Excel-like tool to grab all the request IDs and build an IN clause. I'm looking for a single-step option if it exists.

There's no single-step option in Seq for this, today.
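As a two-step workaround, you can run the threshold filter first, collect the RequestId values from the results, and then filter on those ids directly. A minimal sketch in Seq's filter syntax (the id values below are hypothetical placeholders):

    RequestTime > 500

Then paste the collected ids into an in filter:

    RequestId in ['0HMAB11AAAA01', '0HMAB11AAAA02']

It's still two queries, but the second filter can go straight into the Seq filter bar, which avoids the spreadsheet step.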

Related

SMB2 QUERY_DIRECTORY Compounded Response

I'm trying to implement a lightweight SMB2 server on a very low-resource system (no dynamic memory allocation). The system also only allows me to iterate directory contents, so I have no idea of the total file count, etc. In response to a QUERY_DIRECTORY request, I think I have two options:
1.) Enumerate the directory twice: first to calculate the total length of the response, and the second time to stream the results back.
2.) My hope is that I can use compounded responses and return a response for each file, using the NEXT_COMMAND field to indicate it is compounded. However, it isn't clear to me that the spec allows for this sort of behaviour, or whether a compound response is ONLY to allow two separate requests to be answered in one response.
My real goal is to implement minimal functionality compatible with vanilla Windows Explorer, to list, read, and write files without too many bells and whistles.
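For what it's worth, option 1 has a simple shape. Below is an illustrative sketch in Java (not SMB2 wire code; entryEncodedLength is a hypothetical stand-in for whatever the requested FileInformationClass actually requires) showing the two-pass pattern: one pass to compute the total payload length, a second to emit the entries.

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class TwoPassListing {

        // Hypothetical stand-in for the encoded size of one directory entry;
        // the real value depends on the FileInformationClass requested.
        static int entryEncodedLength(Path p) {
            return 64 + p.getFileName().toString().length() * 2; // fixed part + UTF-16 name
        }

        public static void main(String[] args) throws IOException {
            Path dir = Paths.get(".");

            // Pass 1: iterate once, only to compute the total response length.
            int total = 0;
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
                for (Path p : entries) {
                    total += entryEncodedLength(p);
                }
            }
            System.out.println("total payload length: " + total); // header can now be sent

            // Pass 2: iterate again and stream each entry.
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
                for (Path p : entries) {
                    System.out.println("streaming entry: " + p.getFileName());
                }
            }
        }
    }

One caveat with the two-pass approach: the directory can change between passes, so the streaming pass needs to tolerate entries appearing or disappearing (or re-check lengths as it goes).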

Quarkus - Code to implement different actions based on different timeout values

I have a requirement to call a REST API and implement different actions based on response times. For example, if the response arrives in less than 30 seconds, do process A; if between 31 and 60 seconds, do process B; and time out after 60 seconds. Is there any sample code to implement this in Quarkus/Mutiny? Any help is appreciated.
It is hard to provide code since your question does not include many details; there are various libraries and solutions you could use.
In general, I find it helpful to use a simple time-diff method. This applies regardless of whether the code is implemented synchronously or asynchronously. The overall process will be something like this:
Create a variable that stores a timestamp just before sending the HTTP request.
Send the HTTP request.
Retrieve the response and create a new timestamp, then compare it with the previous one.
The difference between the two timestamps is the elapsed time between initiating the request and receiving its response.
By the way, if you have some base code that you want to extend to achieve this functionality, please add it to the question and I might be able to edit it and show how the code could generally look.
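In the meantime, here is a minimal sketch of the time-diff approach in plain Java (the endpoint URL and the processA/processB methods are placeholders; the built-in java.net.http client is used just for illustration):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class TimedCall {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/api")) // placeholder endpoint
                    .timeout(Duration.ofSeconds(60))            // fail the call after 60 s
                    .build();

            long start = System.nanoTime(); // timestamp just before sending
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedSeconds = (System.nanoTime() - start) / 1_000_000_000L;

            if (elapsedSeconds < 30) {
                processA(response.body()); // fast path: under 30 seconds
            } else {
                processB(response.body()); // slow path: 30-60 seconds
            }
            // If the 60 s timeout is exceeded, client.send throws
            // java.net.http.HttpTimeoutException instead of returning.
        }

        static void processA(String body) { /* placeholder for process A */ }
        static void processB(String body) { /* placeholder for process B */ }
    }

If you want the same deadline expressed reactively in Mutiny, a Uni can be failed after a timeout with ifNoItem().after(Duration.ofSeconds(60)).fail(), with the elapsed time measured around the subscription in the same way.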

How to efficiently log metrics from API Gateway when using cache?

My scenario is this:
An API Gateway with a single endpoint serves roughly 250 million requests per month, backed by a Lambda function.
Caching is enabled, and 99% of the requests hit the cache.
The request contains query parameters which we want to derive statistics from.
Since cache is used, most requests never hit the Lambda function. We have currently enabled full request/response logging in API Gateway to capture the query parameters in CloudWatch. Once a week, we run a script to parse the logs and compile the statistics we are interested in.
Challenges with this setup:
Our script takes ~5 hours to run, and only gives a snapshot of the last week. We would ideally like to track the statistics continuously over time, say every 5 minutes or every hour.
Using full request/response logging produces HUGE amounts of logs, most of which do not contain anything we are interested in.
Ideally we would like to turn off full request/response logging but still get the statistics we are interested in. I have considered logging to CloudWatch from Lambda@Edge to capture the query parameters before the request hits the cache, and then using a metric filter, or perhaps Kinesis, to get the statistics we want.
Would this be a viable solution, or can you propose another setup that could solve our problems more efficiently without costing too much?
You can configure access logging on your API (https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html), which lets you select just the portions of the request and response you care about and publish more structured logs to CloudWatch.
You can then use CloudWatch filter patterns (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) to generate metrics, or feed the logs to your analytics engine (or run a script as you do now).
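For illustration, an access-log format along these lines (a sketch using $context variables of the kind documented for REST API access logging; check the linked docs for which variables cover your query parameters) produces one small, structured entry per request instead of a full request/response dump:

    {
      "requestId": "$context.requestId",
      "ip": "$context.identity.sourceIp",
      "requestTime": "$context.requestTime",
      "httpMethod": "$context.httpMethod",
      "resourcePath": "$context.resourcePath",
      "status": "$context.status",
      "responseLength": "$context.responseLength"
    }

Since access logs are emitted by the gateway itself rather than the integration, they should also cover requests served from the cache, which sidesteps the problem of most requests never reaching the Lambda function.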

How to implement a recursive call inside jmeter?

I need to simulate a test scenario where my application sends a request with hundreds of queries. On the back-end, this request is broken down into requests containing a single query each, so a request from JMeter with 100 queries will become 100 requests on the back-end. Now, the response from the back-end can either contain the requested data for each of those queries OR contain a unique queryID. Sending back a queryID is the server's way of saying that the query is still running. For example, if JMeter sends a request with 100 queries, it might get back data for 80 and 20 unique queryIDs. So my application under test makes a callback request with those 20 queryIDs every 15 seconds until it gets back the requested data or times out.
Here is what I have implemented so far.
-main_request_with_100_queries
--XPath_extractor_to_extract_any_queryIDs_found
-if_controller_to_check_if_queryID_MatchNr_is_greater_than_0
--15_second_pause
--beanshell_preprocessor_to_create_the_request_body_with_all_queryIDs
--callback_request_with_queryIDs
What I want to implement is another XPath extractor for my callback_request, so that if any queryIDs are found, execution goes back to the if_controller.
I'm trying to make this work using a module_controller, but so far no luck. Has anyone ever implemented something like this? Can anyone suggest some ideas?
You can use a While Controller to keep making the request as long as the response still contains a queryID.
While Controller [ "${queryid.present}" == "true" ]
  HTTP Request
    PreProcessor [ to create the request body with all queryIDs ]
    PostProcessor [ to check for queryIDs; if none are found, set queryid.present to false ]
If possible, use a Regular Expression Extractor instead; XPath is very slow and might hurt the performance of your script. Check here for more details:
Creating modular test script in JMeter.
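As a rough sketch of that PostProcessor (JSR223, Java-style syntax; it assumes a Regular Expression Extractor with reference name queryID and Match No. set to -1, so JMeter populates queryID_matchNr with the match count — adjust the names to your plan):

    // JSR223 PostProcessor sketch. Assumes a Regular Expression Extractor
    // (reference name "queryID", Match No. -1) ran on the same sampler,
    // so "queryID_matchNr" holds the number of matches found.
    String matchNr = vars.get("queryID_matchNr");
    int matchCount = (matchNr == null) ? 0 : Integer.parseInt(matchNr);

    if (matchCount > 0) {
        StringBuilder body = new StringBuilder();
        for (int i = 1; i <= matchCount; i++) {
            body.append(vars.get("queryID_" + i)); // queryID_1, queryID_2, ...
            if (i < matchCount) {
                body.append(",");
            }
        }
        vars.put("callback_body", body.toString()); // consumed by the callback request
        vars.put("queryid.present", "true");        // keep the While Controller looping
    } else {
        vars.put("queryid.present", "false");       // no pending queries: exit the loop
    }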

Fiddler filter to hide recurring requests

Is there any way to tell Fiddler not to log requests that have already been sent/logged previously?
Or even to filter them after you stop the capture, so as to get a smaller list to process?
Having a huge list of multiple identical requests is really difficult to debug...
It seemed simple, but after many tries I couldn't find anything.
Thanks in advance!
EDIT
To clarify things:
I am trying to debug a sort of monitoring system in which the requests and responses change over time, but it could be hours and thousands of queries before an event changes the system state, and hence the request/response data. So I would like to skip logging identical request/response sets.
The easiest way to do this would be to write a bit of FiddlerScript (Rules > Customize Rules).
However, how exactly do you define "identical"? The same URL? The same request headers? The same response body? etc.
The definition you choose obviously has a significant impact on what the necessary FiddlerScript will look like.
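For example, if "identical" means same URL plus same response body, a minimal FiddlerScript sketch (JScript.NET, merged into the existing OnBeforeResponse handler in CustomRules.js; the seenRequests field name is just an illustration) could hide repeats using the ui-hide session flag:

    // Add near the top of the Handlers class in CustomRules.js:
    static var seenRequests = {}; // crude in-memory set of request signatures

    // Add inside the existing OnBeforeResponse(oSession: Session) handler:
    var key = oSession.fullUrl + "|" + oSession.GetResponseBodyAsString().GetHashCode();
    if (seenRequests[key]) {
        oSession["ui-hide"] = "true"; // seen before: hide from the session list
    } else {
        seenRequests[key] = true;     // first occurrence: keep it visible
    }

Note that the set grows unbounded over a long capture, and hashing large response bodies on every session has a cost, so you may want to bound or periodically clear seenRequests.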
