tailing aws lambda/cloudwatch logs - aws-lambda

I found out how to access Lambda logs from another answer.
Is it possible to tail them? (Manually pressing refresh is cumbersome.)

Since you mentioned tailing, I'm assuming you're comfortable working in the terminal with CLI tools.
You can install awslogs locally and use it to tail CloudWatch.
e.g.
$ awslogs get /aws/lambda/my-api-lambda ALL --watch --profile production
Aside from not needing to refresh anything anymore (that's what tail is for), I also like that you don't have to worry about jumping between different LogGroups (unlike in the CloudWatch console).

Aside: we've noticed that tailing logs gets really slow after an AWS Lambda function has had a lot of invocations. Even looking at logs through the AWS Console is incredibly slow. This is because "tail"-type utilities need to connect to each log stream, and while log events expire according to the retention policy you set on the log group itself, the log streams themselves never get cleaned up. I made a few small utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony waiting for those logs.
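If you just want something quick from the command line, the same idea can be sketched with the AWS CLI and jq (both assumed installed; the log group name is the example from above and the 14-day cutoff is arbitrary):
# Delete log streams whose last event is older than 14 days (or that never got any events).
GROUP=/aws/lambda/my-api-lambda
CUTOFF=$(( ($(date +%s) - 14*24*3600) * 1000 ))   # CloudWatch timestamps are in milliseconds

aws logs describe-log-streams --log-group-name "$GROUP" \
  --query 'logStreams[].[logStreamName,lastEventTimestamp]' --output json |
jq -r --argjson cutoff "$CUTOFF" '.[] | select(.[1] == null or .[1] < $cutoff) | .[0]' |
while read -r stream; do
  aws logs delete-log-stream --log-group-name "$GROUP" --log-stream-name "$stream"
done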

Actually, there is a better way with Insights (in the same CloudWatch).
Run a query like this on a log group and you will get what you want:
fields @timestamp, @message
| sort @timestamp desc
| limit 20
You can also add it to a dashboard so it's always "nearby".
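If you prefer to stay in the terminal, roughly the same query can be run via the CLI as well (a sketch; the log group name is just the example from above, and in real use you'd poll until the query status is Complete):
# Start an Insights query over the last hour, then fetch its results.
QUERY_ID=$(aws logs start-query \
  --log-group-name /aws/lambda/my-api-lambda \
  --start-time $(( $(date +%s) - 3600 )) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, @message | sort @timestamp desc | limit 20' \
  --output text --query queryId)

sleep 5   # the query runs asynchronously
aws logs get-query-results --query-id "$QUERY_ID"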

If you are using the awscli, from the command line you can also do:
aws logs tail <your_log_group_name> --follow
The --follow flag will poll for new logs continuously.
You can also use --since to set from what time to begin displaying logs.
It supports:
s - seconds
m - minutes
h - hours
d - days
w - weeks
e.g.
aws logs tail <your_log_group_name> --since 20m

Related

Heroku get logs of particular date

I am using Heroku and I need to see old logs by date. I googled but didn't find a solution. Does anyone know how to get logs for a particular date?
heroku logs --app myproject -n 200000
I also tried the tail command:
heroku logs --source app --tail
But the above command only returns a maximum of 1500 lines; even if I increase the number, I only get around 2,000 lines.
I even read the documentation:
https://devcenter.heroku.com/articles/logging
The Heroku Logplex only stores the last 1500 log lines from your application dynos and add-ons. To store and search logs long-term, you'll need to stream your logs to an external service provider. Heroku provides many logging add-ons: https://elements.heroku.com/addons, or you can use your own service by setting up a log drain: https://devcenter.heroku.com/articles/log-drains.
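For example (a sketch; Papertrail is just one of the available add-ons, and the drain URL is a placeholder for your own syslog endpoint):
# Attach a logging add-on that keeps history for you...
heroku addons:create papertrail --app myproject
# ...or forward everything to your own syslog server via a log drain.
heroku drains:add syslog+tls://logs.example.com:6514 --app myproject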

Stop Logstash agent on inactivity?

The central log server I'm working on uses two Logstash agents, each running in its own screen:
a shipper to collect logs from front servers
an indexer to send the logs into Elasticsearch
Sometimes, it can be useful to re-import some logs (on failure, to re-format the logs etc...). For this purpose, I execute a third agent called importer whose job is to re-import old logs.
The problem I'm facing is that I have to monitor the re-import process until it's completely done and hence becomes killable.
So, I would like to know if there's some kind of option that can stop an agent when it goes idle.
You might be able to do something with the exec input (http://logstash.net/docs/1.4.2/inputs/exec). I'm thinking something like cat /some/reload/file; sleep 30; kill $LS_PID, though I'm not quite sure how you'd get $LS_PID assigned.
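One way around the $LS_PID question is to record the PID yourself when you launch the importer, e.g. with a small wrapper (a sketch; the paths and the "done" condition are assumptions you'd adapt):
# Start the importer agent in the background and note its PID.
/opt/logstash/bin/logstash -f importer.conf &
echo $! > /var/run/logstash-importer.pid

# Later (from cron, an exec input, or by hand), once the re-import looks finished,
# kill the importer using the recorded PID.
kill "$(cat /var/run/logstash-importer.pid)"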

In rsyslog, how can I trigger an hourly email aggregating entries in one of our logs?

One of our apps has been configured to log certain errors to a log on a remote server using rsyslog. I've been asked to provide an hourly email that lists the errors logged within the last hour. I've looked at ommail, but it doesn't seem to do exactly this. Any suggestions on how best to do this?
I would go low tech on this:
put error messages in a separate file like
*.error /var/log/error.log
then rotate it hourly via logrotate
From logrotate, you can run a script in the prerotate or postrotate part, where you can take the contents of the file and send them via email.
ommail is more for sending logs matching a certain filter, so it would be hacky to make it send such "digests".
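A sketch of what that could look like (assumes logrotate is triggered hourly from cron, a mail command is available, and the recipient address is a placeholder):
# /etc/logrotate.d/error-digest
/var/log/error.log {
    hourly
    rotate 24
    missingok
    notifempty
    prerotate
        # mail the last hour's worth of errors before the file is rotated away
        mail -s "Hourly error digest" ops@example.com < /var/log/error.log
    endscript
}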

Getting Heroku logs for past few weeks

I'm trying to get the production logs for the past few weeks off of Heroku, but when I run heroku logs, it just returns a few lines showing today's production log.
Any way to get heroku logs for the past few weeks?
Thanks.
Any number up to 500 lines can now be retrieved using the -n flag.
heroku logs -n 420
As well you can also run:
heroku logs -t
And let that run for a while.
EDIT: You can also use third-party tools like Papertrail.
(Correcting my own old response.) Previously, Heroku only provided access to the last 100 lines; this limit now appears to have been raised.
There's also this pretty cool sounding logentries addon, with generous free offerings.
Not sure about going back given their limitations - but going forward you can forward your logs to an external syslog server.
Syslog drains - Premium Add-on

Reading syslog output on a Mac

I have a program that was written for linux and I am trying to build and run it on my MacOS 10.5 machine. The program builds and runs without problem, however it makes many calls to syslog. I know that syslogd is running on my mac, however I can't seem to find where my syslog calls are output to.
The syslog calls are of the form
syslog (LOG_WARNING, "Log message");
Any idea where I might find my log output?
/var/log/system.log
You can monitor it easily using tail -f /var/log/system.log
See also the "logger" (man logger) and "syslog" (man syslog).
You should probably use Console.app to view log files. It's purdy.
Select your device on the left and filter messages on the right.
Maybe interesting to note: Apple was using a real syslogd in the past but meanwhile all of this has switched to ASL (Apple System Log). The syslog command is still available, but it will only access this one log. If you want to access all log messages of ASL across all log files configured, use the log command.
E.g. the following shows all log messages produced by Safari within the last two days (be patient, this can take a while):
log show --predicate 'process == "Safari"' --last 2d
See man log for all the actions you can perform, all the parameters it knows and what attributes you can filter for.
When in doubt, there's always man syslog.
You can find your messages in /var/log/syslog; my machine is set up out of the box to only include high-level messages, so you may need to adjust your settings.
You can also read the messages through syslog(1), or create a test message with a command like
$ syslog -s -l INFO "Hello, world."
Use a severity of P ("panic") and you'll get an exciting message on your console immediately.
Mac OS X implements a superset of syslog's functionality. All of syslog is there, but as part of ASL.
Console, mentioned by Matthew Schinckel in his answer, is the GUI on ASL. It'll show you any messages that exist in the database, as fetched by queries listed in the sidebar. There are two queries by default; one only shows messages sent with the Console facility (as used by NSLog, among other things), whereas the other shows all log messages. Check the all-messages query; you'll probably find your message there.
That “all” does come with an asterisk. If you look in /etc/asl.conf, you'll see this line:
# save everything from emergency to notice
? [<= Level notice] store
Fortunately, in your case, the message will pass this check, since warning outranks (is a lesser number than) notice.
If you need complex syslog analysis (navigating hour by hour in the terminal, regexps, comparing in real time with other files, or even running SQL over syslog), lnav provides it seamlessly.
Installation:
brew install lnav
Usage:
lnav /var/log/system.log
Building on Charlie's answer, take a look at the syslog.conf(5) man page and at the file /etc/syslog.conf, which is where the syslog configuration is defined by default (and, as far as I can see, on OS X 10.5.x as well).
Check for a call to openlog somewhere in the program. openlog sets the ident and facility used by subsequent syslog calls, and the facility (together with your syslog configuration) determines which log file the messages end up in rather than the default location.
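A minimal sketch of what that looks like (the ident string and the LOG_LOCAL0 facility are just examples; how local0 messages are routed then depends on your syslog/ASL configuration):
/* log_example.c - build with: cc log_example.c -o log_example */
#include <syslog.h>

int main(void) {
    /* ident, options, facility - the facility decides where syslog.conf routes the message */
    openlog("myprogram", LOG_PID, LOG_LOCAL0);
    syslog(LOG_WARNING, "Log message");
    closelog();
    return 0;
}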
Big Sur
Unfortunately, none of the stated answers worked for me.
What worked for me:
The system mail, accessed using the mail program from the terminal, had all the /usr/sbin/cron logs as emails.
