Sumologic query to show data by date - dashboard

I have created a Sumologic dashboard to show some errors in the application. What I want is to show the errors per date. It shows the errors, but it doesn't aggregate identical error messages, because the messages contain GUIDs.
This is the sample part of the query:
_sourceCategory="playGames/web-app-us"
and ERR
| timeslice 1d
| count _timeslice, message

I believe you need to format the message and remove the GUID, so that all messages containing a GUID are aggregated into a single message.
You can use a regex to format the messages and strip the GUID. A sample query looks like this; adapt it as needed.
A sample error message looks like this:
Error occurred. Exception: System.Exception: my custom error message: 1121fd05-065b-499f-b174-2a13efdaf8b5
And the Sumologic query
_sourceCategory="dev/test-app"
and "[Error]"
and "Error occurred"
// | timeslice 1d
| formatDate(_receiptTime, "yyyy-MM-dd") as date
| replace(_raw,/my custom error message: ([0-9A-Fa-f\-]{36})/,"my custom error message") as finalMessage
| count date, finalMessage
| transpose row date column finalMessage
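The GUID-stripping idea can be sanity-checked outside Sumologic. This is a minimal Python sketch (not Sumologic syntax) showing that once GUIDs are replaced, identical errors collapse into one bucket, which is exactly what the replace() in the query above achieves:

```python
import re
from collections import Counter

# Standard GUID shape: 8-4-4-4-12 hex digits
GUID_RE = re.compile(r"[0-9A-Fa-f]{8}-(?:[0-9A-Fa-f]{4}-){3}[0-9A-Fa-f]{12}")

def normalize(message: str) -> str:
    # Replace any GUID so identical errors aggregate under one key
    return GUID_RE.sub("<guid>", message)

logs = [
    "my custom error message: 1121fd05-065b-499f-b174-2a13efdaf8b5",
    "my custom error message: 9acde111-1234-499f-b174-2a13efdaf8b5",
]
counts = Counter(normalize(m) for m in logs)
# Both messages land under "my custom error message: <guid>"
```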
This video shows step by step guidance. https://youtu.be/Nxzp7G-rUh8

Related

Sending a `create` RPC message to SurrealDB returns a "There was a problem with the database: The table does not exist" error

I'm debugging some tests for the .NET SurrealDB library. I can open connections to the database just fine but when I send a create RPC message to the db (docker container) it returns an error that reads "There was a problem with the database: The table does not exist"
TRACE tungstenite::protocol Received message {"id":"02B70C1AFE5D","async":true,"method":"create","params":["users",{"username":"john","password":"test123"}]}
...
surreal_1 | [2022-09-16 13:46:45] DEBUG surrealdb::dbs Executing: CREATE $what CONTENT $data RETURN AFTER
surreal_1 | [2022-09-16 13:46:45] TRACE surrealdb::dbs Iterating: CREATE $what CONTENT $data RETURN AFTER
code: -32000, message: "There was a problem with the database: The table does not exist"
Any idea why that would happen? The table, of course, doesn't exist since I'm trying to create it. Would there be another reason in the Surreal code that such an error would be returned?
The error message was a red herring. The actual issue was that the client had an error that didn't allow it to sign in correctly so it wasn't authorized to make changes to the database.
Offending code:
// The table doesn't exist
Err(Error::TbNotFound) => match opt.auth.check(Level::Db) {
    // We can create the table automatically
    true => {
        run.add_and_cache_ns(opt.ns(), opt.strict).await?;
        run.add_and_cache_db(opt.ns(), opt.db(), opt.strict).await?;
        run.add_and_cache_tb(opt.ns(), opt.db(), &rid.tb, opt.strict).await
    }
    // We can't create the table so error
    false => Err(Error::TbNotFound), // Wrong error message
},
This has since been fixed and should now return a query permission error if the client is unauthorized.
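The shape of the fix can be sketched in Python pseudocode (this is an illustration of the logic, not the actual SurrealDB source): the error returned should depend on the authorization check, so an unauthorized client sees a permission error rather than the misleading "table does not exist".

```python
class QueryNotAllowed(Exception):
    """Raised when the client lacks permission to auto-create the table."""

def create_in_table(table_exists: bool, authorized: bool) -> str:
    # Sketch of the corrected logic: only authorized clients may
    # auto-create a missing table; everyone else gets a permission error.
    if table_exists:
        return "record created"
    if authorized:
        return "table auto-created, record created"
    raise QueryNotAllowed("client is not authorized to create tables")
```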

How to let fluent-bit skip a field that cannot be parsed?

I am trying to send data from fluent-bit to Elasticsearch.
Here is my fluent-bit parser:
[PARSER]
    Name escape_utf8_log
    Format json
    # Command      | Decoder      | Field | Optional Action
    # =============|==============|=======|================
    Decode_Field_As escaped_utf8 log
    Decode_Field json log

[PARSER]
    Name escape_message
    Format json
    # Command      | Decoder      | Field | Optional Action
    # =============|==============|=======|================
    Decode_Field_As escaped_utf8 message
    Decode_Field json message
Here is my fluent-bit config:
[FILTER]
Name parser
Match docker_logs
Key_Name message
Parser escape_message
Reserve_Data True
In some cases, other people put log data into fluent-bit in the wrong format, so we get a "mapper_parsing_exception" (for example: failed to parse field [id] of type long in document).
I want fluent-bit to skip parsing a log and send it to ES anyway when it cannot parse it, so that we don't get the parser error even if someone sends data in the wrong format. Is it possible to do that?
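The desired fallback behavior can be illustrated in plain Python (this is not fluent-bit configuration; whether fluent-bit supports it natively depends on your version and plugins): attempt the parse, and when it fails, forward the raw record unchanged so downstream still receives the event.

```python
import json

def parse_or_forward_raw(record: str) -> dict:
    # Try to parse the record as JSON; on failure, forward it unparsed
    # under a "log" key so Elasticsearch still receives the event.
    try:
        parsed = json.loads(record)
        if isinstance(parsed, dict):
            return parsed
    except json.JSONDecodeError:
        pass
    return {"log": record}
```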

Parse message in CloudWatch Logs Insights

Here are two example messages of the lambda:
WARNING:
Field Value
@ingestionTime 1653987507053
@log XXXXXXX:/aws/lambda/lambda-name
@logStream 2022/05/31/[$LATEST]059106a15343448486b43f8b1168ec64
@message 2022-05-31T08:58:18.293Z b1266ad9-95aa-4c4e-9416-e86409f6455e WARN error catched and errorHandler configured, handling the error: Error: Error while executing handler: TypeError: Cannot read property 'replace' of undefined
@requestId b1266ad9-95aa-4c4e-9416-e86409f6455e
@timestamp 1653987498296
ERROR:
Field Value
@ingestionTime 1653917638480
@log XXXXXXXX:/aws/lambda/lambda-name
@logStream 2022/05/30/[$LATEST]bf8ba722ecd442dbafeaeeb3e7251024
@message 2022-05-30T13:33:57.406Z 8b5ec77c-fb30-4eb3-bd38-04a10abae403 ERROR Invoke Error {"errorType":"Error","errorMessage":"Error while executing configured error handler: Error: No body found in handler event","stack":["Error: Error while executing configured error handler: Error: No body found in handler event"," at Runtime.<anonymous> (/var/task/index.js:3180:15)"]}
@requestId 8b5ec77c-fb30-4eb3-bd38-04a10abae403
@timestamp 1653917637407
errorMessage Error while executing configured error handler: Error: No body found in handler event
errorType Error
stack.0 Error: Error while executing configured error handler: Error: No body found in handler event
stack.1 at Runtime.<anonymous> (/var/task/index.js:3180:15)
Can you help me understand how to set up the query so that I get a table with the following columns and their values:
from @message, extract the timestamp, requestId, type (WARN or ERROR), and errorMessage, and if feasible also the name of the lambda from @log, plus the @logStream.
If we look at the AWS documentation for the Logs Insights parse command, we can use asterisks * to capture details, which for you would be:
fields @timestamp, @message, @log, @logStream, @requestId
| parse @message "* * * *" as timestamp, requestId, type, body
| display @timestamp, @requestId, @log, @logStream, body
If you'd like to also capture the error message, now parse the body as well:
fields @timestamp, @message, @log, @logStream, @requestId
| parse @message "* * * *" as timestamp, requestId, type, body
| parse body "*,\"errorMessage\":\"*\"*" as startBody, errorMessage, endBody
| display @timestamp, @requestId, @log, @logStream, body, errorMessage
This should work, but feel free to look up additional details in the AWS documentation; it is very thorough.
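To see what the glob-style parse patterns do, here is a Python sketch (the field names mirror the query above) that mimics "* * * *" and the errorMessage capture on the sample ERROR message:

```python
import re

message = ('2022-05-30T13:33:57.406Z 8b5ec77c-fb30-4eb3-bd38-04a10abae403 '
           'ERROR Invoke Error {"errorType":"Error",'
           '"errorMessage":"No body found in handler event"}')

# Equivalent of: parse @message "* * * *" as timestamp, requestId, type, body
# (each * is a space-delimited capture; the last one takes the remainder)
timestamp, request_id, level, body = message.split(" ", 3)

# Equivalent of: parse body "*,\"errorMessage\":\"*\"*" as ..., errorMessage, ...
match = re.search(r'"errorMessage":"([^"]*)"', body)
error_message = match.group(1) if match else None
```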

CloudWatch Insights: get logs of errored lambdas

A lambda can have a result that is either a success or an error.
I want to see the logs of lambda that errored. I am trying to do that via a CloudWatch Insights query.
How can I do this?
If someone comes here looking for a solution, here's what I use:
filter @message like /(?i)(Exception|error|fail)/
| fields @timestamp, @message
| sort @timestamp desc
| limit 20
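Note that (?i) makes the match case-insensitive, but the filter still only catches messages containing one of the three words. A quick Python check on hypothetical sample messages shows that, for instance, a plain timeout slips through, which is what the exclusion-based query below addresses:

```python
import re

# Same pattern as the Insights filter above
ERROR_RE = re.compile(r"(?i)(Exception|error|fail)")

messages = [
    "Task timed out after 30.03 seconds",    # no error/exception/fail word
    "ERROR Invoke Error: something broke",
    "Unhandled exception in handler",
    "INFO request completed",
]
matched = [m for m in messages if ERROR_RE.search(m)]
# The timeout message is NOT matched, even though it is a failure
```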
I use the query below to catch errors that are not covered by the query in the answer above, when I can only see a failure in the monitoring dashboard.
fields @timestamp, @message
| sort @timestamp desc
| filter @message not like 'INFO'
| filter @message not like 'REPORT'
| filter @message not like 'END'
| filter @message not like 'START'
| limit 20
Here are some examples covered by this query.
Timeout:
@ingestionTime 1600997135683
@log 060558051165:/aws/lambda/prod-
@logStream 2020/09/25/[$LATEST]abc
@message 2020-09-25T01:25:35.623Z d0801056-abc-595a-b67d-47b14d3e9a20 Task timed out after 30.03 seconds
@requestId d0801056-abc-595a-b67d-47b14d3e9a20
@timestamp 1600997135623
Invocation error:
@ingestionTime 1600996797947
@log 060558051165:/aws/lambda/prod-****
@logStream 2020/09/25/[$LATEST]123
@message 2020-09-25T01:19:48.940Z 7af13cdc-74fb-5986-ad6b-6b3b33266425 ERROR Invoke Error {"errorType":"Error","errorMessage":"QueueProcessor 4 messages failed processing","stack":["Error:QueueProcessor 4 messages failed processing"," at Runtime.handler (/var/task/lambda/abc.js:25986:11)"," at process._tickCallback (internal/process/next_tick.js:68:7)"]}
@requestId 7af13cdc-74fb-5986-ad6b-6b3b33266425
@timestamp 1600996788940
errorMessage QueueProcessor 4 messages failed processing
errorType Error
stack.0 Error: QueueProcessor 4 messages failed processing
stack.1 at Runtime.handler (/var/task/lambda/abcBroadcast.js:25986:11)
stack.2 at process._tickCallback (internal/process/next_tick.js:68:7)
Another example, with the Node.js runtime:
@ingestionTime 1600996891752
@log 060558051165:/aws/lambda/prod-
@logStream 2020/09/24/[$LATEST]abc
@message 2020-09-25T01:21:31.213Z 32879c8c-abcd-5223-98f9-cb6b3a192f7c ERROR (node:6) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
@requestId 32879c8c-7242-5223-abcd-cb6b3a192f7c
@timestamp 1600996891214
If anyone is looking for how to search for an error or other text in AWS Logs Insights, this query can be used:
fields @timestamp, @message
| filter @message like /text to search/
| sort @timestamp desc
| limit 20
Actually, just selecting the log group(s) and adding a line like | filter @message like /text to search/ in the query editor is enough; the rest comes by default.
Also keep in mind to widen the time span of the search if you cannot find the relevant results; by default, it only searches the last 1h.
In your console, navigate to your lambda's configuration page. In the top left, click Monitoring, then View logs in CloudWatch on the right.
You can run the following query in CloudWatch Logs Insights:
filter @type = "REPORT"
| stats max(@memorySize / 1000 / 1000) as provisionedMemoryMB,
    min(@maxMemoryUsed / 1000 / 1000) as smallestMemoryRequestMB,
    avg(@maxMemoryUsed / 1000 / 1000) as avgMemoryUsedMB,
    max(@maxMemoryUsed / 1000 / 1000) as maxMemoryUsedMB,
    provisionedMemoryMB - maxMemoryUsedMB as overProvisionedMB
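The same statistics can be reproduced by hand from REPORT lines. This Python sketch runs over two made-up REPORT lines (the "Memory Size" / "Max Memory Used" fields mirror what the Lambda runtime prints; the values are hypothetical):

```python
import re

# Hypothetical REPORT lines as the Lambda runtime prints them
reports = [
    "REPORT RequestId: aaa Duration: 12.3 ms Memory Size: 512 MB Max Memory Used: 83 MB",
    "REPORT RequestId: bbb Duration: 45.0 ms Memory Size: 512 MB Max Memory Used: 130 MB",
]

sizes = [int(re.search(r"Memory Size: (\d+) MB", line).group(1)) for line in reports]
used = [int(re.search(r"Max Memory Used: (\d+) MB", line).group(1)) for line in reports]

provisioned_mb = max(sizes)          # like max(@memorySize)
max_used_mb = max(used)              # like max(@maxMemoryUsed)
avg_used_mb = sum(used) / len(used)  # like avg(@maxMemoryUsed)
over_provisioned_mb = provisioned_mb - max_used_mb
```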

SparkR: "Cannot resolve column name..." when adding a new column to Spark data frame

I am trying to add some computed columns to a SparkR data frame, as follows:
Orders <- withColumn(Orders, "Ready.minus.In.mins",
(unix_timestamp(Orders$ReadyTime) - unix_timestamp(Orders$InTime)) / 60)
Orders <- withColumn(Orders, "Out.minus.In.mins",
(unix_timestamp(Orders$OutTime) - unix_timestamp(Orders$InTime)) / 60)
The first command executes ok, and head(Orders) reveals the new column. The second command throws the error:
15/12/29 05:10:02 ERROR RBackendHandler: col on 359 failed
Error in select(x, x$"*", alias(col, colName)) :
error in evaluating the argument 'col' in selecting a method for function
'select': Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) :
org.apache.spark.sql.AnalysisException: Cannot resolve column name
"Ready.minus.In.mins" among (ASAP, AddressLine, BasketCount, CustomerEmail, CustomerID, CustomerName, CustomerPhone, DPOSCustomerID, DPOSOrderID, ImportedFromOldDb, InTime, IsOnlineOrder, LineItemTotal, NetTenderedAmount, OrderDate, OrderID, OutTime, Postcode, ReadyTime, SnapshotID, StoreID, Suburb, TakenBy, TenderType, TenderedAmount, TransactionStatus, TransactionType, hasLineItems, Ready.minus.In.mins);
at org.apache.spark.sql.DataFrame$$anonfun$resolve$1.apply(DataFrame.scala:159)
at org.apache.spark.sql.DataFrame$$anonfun$resolve$1.apply(DataFrame.scala:159)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.sql.DataFrame.resolve(DataFrame.scala:158)
at org.apache.spark.sql.DataFrame$$anonfun$col$1.apply(DataFrame.scala:650)
at org.apa
Do I need to do something to the data frame after adding the new column before it will accept another one?
From the link, just use backticks when accessing the column, e.g.:
Instead of using
df['Fields.fields1']
use:
df['`Fields.fields1`']
Found it here: spark-issues mailing list archives
SparkR isn't entirely happy with "." in a column name.
