On the Parse.com cloud-code console, I can see logs, but they only go back maybe 100-200 lines. Is there a way to see or download older logs?
I've searched their website & googled, and don't see anything.
Using the parse command-line tool, you can retrieve an arbitrary number of log lines:
Usage:
parse logs [flags]
Aliases:
logs, log
Flags:
-f, --follow=false: Emulates tail -f and streams new messages from the server
-l, --level="INFO": The log level to restrict to. Can be 'INFO' or 'ERROR'.
-n, --num=10: The number of the messages to display
Not sure if there is a limit, but I've been able to fetch 5000 lines of log with this command:
parse logs prod -n 5000
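The --level flag shown in the usage above can be combined with this to pull only error entries, for example:
parse logs prod -n 5000 -l ERROR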
To add on to Pascal Bourque's answer, you may also wish to filter the logs by a given range of dates. To achieve this, I used the following:
parse logs -n 5000 | sed -n '/2016-01-10/, /2016-01-15/p' > filteredLog.txt
This will get up to 5000 logs, use the sed command to keep all of the logs which are between 2016-01-10 and 2016-01-15, and store the results in filteredLog.txt.
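If the exact boundary dates never appear in the output (in which case the sed range would not open or close where you expect), a rough alternative is a lexicographic comparison in awk. This sketch assumes each line begins with an ISO-style date stamp; adjust the field if your log format differs. It keeps everything from 2016-01-10 up to the end of 2016-01-15:
parse logs prod -n 5000 | awk '$1 >= "2016-01-10" && $1 < "2016-01-16"' > filteredLog.txt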
I am selecting error log details from a Docker container and deciding, within a shell script, how and when to alert about the issue via Discord and/or email.
Because I am receiving the email alerts too often with the same information in the email body, I want to implement the two adjustments described below. My current setup:
Fatal error log selection:
FATS="$(docker logs --since 24h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"
An email is sent in case FATS has some content:
swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
How can I send the email only in the case that FATS has different content than in the previous run of the script? I have thought about a hash of its content, which is stored in a text file and read again on the next run. If the hash is the same as in the previous script run, the email will be skipped (see the sketch below).
Another option could be a local, temporary variable in the user's global bash profile, so that no file has to be stored on the file system (to avoid reads/writes).
How can I do that?
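A minimal sketch of what I have in mind, assuming the hash is kept in a small state file under /tmp (an arbitrary choice):
# Sketch: send the alert only when FATS differs from the previous run.
# /tmp/fats.sha256 is an arbitrary state-file location; adjust as needed.
STATE_FILE="/tmp/fats.sha256"
FATS="$(docker logs --since 24h "$NODENAME" 2>&1 | grep 'FATAL' | grep -v 'INFO')"
if [ -n "$FATS" ]; then
    NEW_HASH="$(printf '%s' "$FATS" | sha256sum | cut -d' ' -f1)"
    OLD_HASH="$(cat "$STATE_FILE" 2>/dev/null)"
    if [ "$NEW_HASH" != "$OLD_HASH" ]; then
        swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" \
              --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" \
              --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
        printf '%s\n' "$NEW_HASH" > "$STATE_FILE"
    fi
fi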
When you are writing a script for your monitoring, add functions for additional functionality, like:
logging all the alerts that have been sent
make sure you don't send more than one alert each hour (see the rate-limiting sketch below)
consider sending warnings only during working hours
escalate a message when it fails N times without intermediate success
possibly send an alert to different receivers (different email addresses, or also to SMS or Teams)
make an interface for an operator so they can look back to when something first went wrong.
When you have control over which messages you send, it is easy to filter duplicate messages (after changing --since).
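For the rate-limiting point above, one possible sketch, using an arbitrary marker file under /tmp and a hypothetical send_alert function wrapping the swaks call:
# Skip the alert if one was already sent within the last 60 minutes.
MARKER="/tmp/last_alert"   # arbitrary marker-file location
if [ -z "$(find "$MARKER" -mmin -60 2>/dev/null)" ]; then
    send_alert "$FATS"     # hypothetical wrapper around the swaks call
    touch "$MARKER"
fi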
I've chosen the proposal of @ralf-dreager and reduced the selection to 1d and 1h. Consequently, I've changed my monitoring script to either go through the results of 1d or just 1h, without the need to select again and again each time. Huge performance improvement, and no need to store anything else in a variable or on the file system.
FATS="$(docker logs --since 1h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"
I am getting a lot of 'del' calls on GCP Memorystore, at a rate of approximately 6k/sec, but I am unable to identify the source that is making these 'del' calls.
I have tried accessing the logs of the particular Memorystore server but didn't find anything related to these calls.
I need to figure out who is making these 'del' calls on my memorystore.
Any suggestions?
Thanks
You may use the MONITOR command to list every command processed by the Redis server. You need to use it with grep to filter DEL commands out of the whole stream. By default grep is case-sensitive; -i is added to match both DEL and del.
redis-cli -h your.host.name monitor | grep -i del
It will print in the following format. You may use the IP address to identify who is deleting.
1588013292.976045 [0 127.0.0.1:44098] "del" "foo"
1588013294.875606 [0 127.0.0.1:44098] "DEL" "foo"
1588013298.285791 [0 127.0.0.1:44098] "dEl" "foo"
Using MONITOR is not going to be free; please check the benchmark numbers.
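If many clients are connected, one way to see which source addresses issue the most DEL calls is to sample the stream for a short while and aggregate the client field. Field positions assume the output format shown above, and the GNU timeout utility is assumed to be available:
timeout 30 redis-cli -h your.host.name monitor \
  | grep -i '"del"' \
  | awk '{gsub(/\]/, "", $3); print $3}' \
  | sort | uniq -c | sort -rn
After the 30-second sample ends, this prints a count per client ip:port. Quoting "del" in the grep pattern also avoids counting commands like HDEL.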
I write data to the Pushgateway for the first time:
cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/pushgetway/instance/test_instance
http_s_attack_type{hostname="test1",scheme="http",src_ip="192.168.33.86",dst_ip="192.168.33.85",port="15555"} 44
http_s_attack_type{hostname="other",scheme="tcp",src_ip="1.2.3.4",dst_ip="192.168.33.85",port="15557"} 123
EOF
Change the data and write again:
cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/pushgetway/instance/test_instance
http_s_attack_type{hostname="test2",scheme="http",src_ip="192.168.33.86",dst_ip="192.168.33.85",port="15555"} 55
http_s_attack_type{hostname="other3",scheme="tcp",src_ip="1.2.3.4",dst_ip="192.168.33.85",port="15557"} 14
EOF
When viewing the data on localhost:9091, only the data from the last write is shown; the data written the first time has been overwritten.
Is there a problem with my operation? Please tell me how to continuously push new data without it being overwritten or replaced.
This is working exactly as designed. The pushgateway is meant to hold the results of batch jobs when they exit, so on the next run the results will replace the previous run.
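For example, metrics are grouped by the labels encoded in the URL path (job and instance here), and a push replaces whatever was previously pushed under the same grouping key. Pushing under a different grouping key (another_instance below is just a hypothetical value) is stored as a separate group instead of overwriting the first one:
cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/pushgetway/instance/another_instance
http_s_attack_type{hostname="test2",scheme="http",src_ip="192.168.33.86",dst_ip="192.168.33.85",port="15555"} 55
EOF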
It sounds like you're trying to do event logging. Prometheus is not a suitable tool for that use case, you might want to consider something like the ELK stack instead.
I'd like to monitor syslog events every hour. I use dategrep to get the last hour, but after log rotation the last hour may span into the previous syslog file.
Is there an expansion that lists the two most recent syslog files in ascending order?
$(ls -tr syslog* | tail -n 2)
The output should be
syslog.1 syslog # when syslog.1 exists
or
syslog # when it doesn't
I've tried syslog{.1,} but it always outputs syslog.1.
Thank you!
Is there a way to make sure that AB gets proper responses from the server? For example:
To force it to output the response of a single request to STDOUT OR
To ask it to check that some text fragment is included into the response body
I want to make sure that authentication worked properly and that I am measuring the response time of the target page, not the login form.
Currently I just replace ab -n 100 -c 1 -C "$MY_COOKIE" $MY_REQUEST with curl -b "$MY_COOKIE" $MY_REQUEST | lynx -stdin .
If it's not possible, is there an alternative more comprehensive tool that can do that?
You can use the -v option as listed in the man doc:
-v verbosity
Set verbosity level - 4 and above prints information on headers, 3 and above prints response codes (404, 200, etc.), 2 and above prints warnings and info.
https://httpd.apache.org/docs/2.4/programs/ab.html
So it would be:
ab -n 100 -c 1 -C "$MY_COOKIE" -v 4 $MY_REQUEST
This will spit out the response headers and HTML content. A value of 3 will be enough to check for a redirect header.
I didn't try piping it to Lynx but grep worked fine.
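One way to use this for the "text fragment" check is to run a single verbose request first and only start the real benchmark when the fragment shows up in that output ("Logged in as" is just a hypothetical fragment; use text that only appears after a successful login):
# Sanity check a single request before running the full benchmark.
if ab -n 1 -c 1 -C "$MY_COOKIE" -v 4 "$MY_REQUEST" | grep -q "Logged in as"; then
    ab -n 100 -c 1 -C "$MY_COOKIE" "$MY_REQUEST"
else
    echo "Authentication check failed; skipping benchmark." >&2
fi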
Apache Benchmark is good for a cursory glance at your system but is not very sophisticated. I am currently attempting to tune a web service and am finding that AB does not measure the complete response time when considering the transfer of the body. Also, as you mention, you cannot verify what is returned.
My current recommendation is Apache JMeter. http://jmeter.apache.org/
I am having much better success with it. You may find the Response Assertion useful for your situation. http://jmeter.apache.org/usermanual/component_reference.html#Response_Assertion