"Missing application logs in drains Heroku Status #1372" - heroku

What does this mean? It's just a warning on my Heroku metrics dashboard and doesn't seem to be affecting me in a negative way, but I'm worried it's secretly breaking something.

The Heroku Status Twitter account has three tweets about this incident, starting yesterday. The incident began two days ago (January 10th as of this writing), as the bottom of the status page says:
Issue
From 18:48 UTC (1:48 PM ET) we have been experiencing losses in log drains.
Application log drains (add-ons and custom drains to external services) may have missing log lines.
During this incident, heroku logs --tail can be used to monitor incoming logs and should not have missing log lines. This output can be redirected to a file for later use if desired: heroku logs --tail > /tmp/.log
POSTED 2 DAYS AGO, JAN 10, 2018 20:22 UTC
It might affect you in a negative way if you rely on application logs for further analysis and record keeping.
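If you want to check whether your app is exposed, you can list the drains configured for it with the Heroku CLI (the app name below is a placeholder):
heroku drains --app your-app
If no add-ons or custom drains are listed, this incident has nothing to drop for your app.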

Related

Heroku Connect: Extend Error Log Duration, and UNABLE_TO_LOCK_ROW error

I'm not too familiar with Heroku Connect, so excuse me if these are newbie questions.
Is there a way to view Salesforce error logs more than a day old? I get the email notification for errors writing to Salesforce, but if I don't check them within a day, I can't access them anymore.
If the problem is "UNABLE_TO_LOCK_ROW", should we configure the retry described here: https://devcenter.heroku.com/articles/heroku-connect-faq#can-i-retry-records-that-failed-to-write-to-salesforce ?
Thanks
For writes toward Salesforce, all changes are kept in Postgres: in the _trigger_log table when they occur, then moved to _trigger_log_archive once processed. On the demo plan these records are preserved for up to 7 days; on paid plans, up to 30 days. You can then look at the archive table for more insight into the failure and manually resubmit the change, as explained in the link you mentioned.
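As a sketch of that lookup, you can query the archive table directly through heroku pg:psql. The salesforce schema name and the state/sf_message columns below match the usual Heroku Connect layout, but verify them against your own database:
heroku pg:psql --app your-app -c "SELECT table_name, record_id, sf_message, created_at FROM salesforce._trigger_log_archive WHERE state = 'FAILED' ORDER BY created_at DESC LIMIT 20;"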

Heroku get logs of particular date

I am using Heroku and I need to see old logs by date. I googled but didn't find a solution. Does anyone know how to get the logs for a particular date?
heroku logs --app myproject -n 200000
I also tried the tail command:
heroku logs --source app --tail
But the above command returns at most 1,500 lines; even when I increase the -n value I only get around 2,000 lines.
I have also read the documentation:
https://devcenter.heroku.com/articles/logging
The Heroku Logplex only stores the last 1,500 log lines from your application dynos and add-ons. To store and search logs long-term, you'll need to stream your logs to an external service provider. Heroku provides many logging add-ons: https://elements.heroku.com/addons, or you can use your own service by setting up a log drain: https://devcenter.heroku.com/articles/log-drains.
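For example, attaching your own syslog drain is a one-liner with the Heroku CLI; the endpoint below is a placeholder for whatever collector you run:
heroku drains:add syslog+tls://logs.example.com:6514 --app myproject
Once the drain is attached, Logplex streams every new log line to that endpoint, so the 1,500-line limit only applies to what heroku logs itself can show.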

tailing aws lambda/cloudwatch logs

I found out how to access Lambda logs from another answer.
Is it possible to tail them? (manually pressing refresh is cumbersome)
Since you mentioned tailing, I expect you're comfortable working in the terminal with CLI tools.
You can install awslogs locally and use it to tail Cloudwatch.
e.g.
$ awslogs get /aws/lambda/my-api-lambda ALL --watch --profile production
Aside from not needing to refresh anything anymore (that's what tail is for), I also like that you don't have to worry about jumping between different LogGroups (unlike in the CloudWatch console).
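In case it helps, installing it and discovering group names is straightforward (this assumes the same production profile as above):
$ pip install awslogs
$ awslogs groups --profile production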
Aside: We've noticed that tailing logs gets really slow after an AWS Lambda Function has had a lot of invocations. Even looking at logs through the AWS Console is incredibly slow. This is because "tail" type utilities need to connect to each log stream. Log events get expired due to the policy you set on the Log Group itself, but the Log Streams never get cleaned up. I made a few little utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony waiting for those logs.
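For reference, the kind of AWS CLI calls such cleanup scripts presumably wrap look like this; the group and stream names are placeholders:
$ aws logs describe-log-streams --log-group-name /aws/lambda/my-api-lambda --order-by LastEventTime --query 'logStreams[].logStreamName'
$ aws logs delete-log-stream --log-group-name /aws/lambda/my-api-lambda --log-stream-name '2018/01/10/[$LATEST]0123456789abcdef'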
Actually, there is a better way with Insights (in the same CloudWatch console).
Run a query like this on a log group and you will get what you want:
fields @timestamp, @message
| sort @timestamp desc
| limit 20
You can also add the query to a dashboard to always have it "nearby".
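If you'd rather stay in the terminal, the same Insights query can be run through the AWS CLI; this sketch assumes GNU date for the epoch timestamps and a placeholder log group:
$ aws logs start-query --log-group-name /aws/lambda/my-api-lambda --start-time $(date -d '1 hour ago' +%s) --end-time $(date +%s) --query-string 'fields @timestamp, @message | sort @timestamp desc | limit 20'
$ aws logs get-query-results --query-id <query-id-from-above>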
If you are using the awscli, from the command line you can also do:
aws logs tail <your_log_group_name> --follow
The --follow flag polls for new logs continuously.
You can also use --since to set how far back to begin displaying logs.
It supports:
s - seconds
m - minutes
h - hours
d - days
w - weeks
e.g.
aws logs tail <your_log_group_name> --since 20m
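Flags combine as you'd expect, so tailing the last hour and then following new events looks like:
aws logs tail <your_log_group_name> --since 1h --follow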

LaunchServices logs XPC_ERROR_CONNECTION_INTERRUPTED in mac os x console

1. Service only ran for 0 seconds. Pushing respawn out by 10 seconds
2. LaunchServices: received XPC_ERROR_CONNECTION_INTERRUPTED trying to map database
launchservices: database mapping failed with result -10822, retrying
I found these two logs related to my application in Console; they are generated every 10 seconds.
I searched but couldn't find a proper explanation:
https://discussions.apple.com/thread/7263229?tstart=0
https://forums.developer.apple.com/thread/16788
Any idea about these logs? Any help would be appreciated.
For the first warning, check all launchd service commands and find out which one is causing these logs. Please provide more logs related to the first warning.
(Restarting your machine may stop this warning from being logged.)
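As a starting point for that check, launchctl can show which jobs are exiting abnormally and being respawned; this is a generic sketch, not specific to the -10822 error:
# Skip the header row; jobs with a non-zero status exited with an error and may be respawning
launchctl list | awk 'NR > 1 && $2 != 0'
On macOS 10.12 or later you can also watch for the second message live:
log stream --predicate 'eventMessage CONTAINS "XPC_ERROR_CONNECTION_INTERRUPTED"'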

Getting Heroku logs for past few weeks

I'm trying to get the production logs for the past few weeks off of Heroku, but when I do heroku logs, it just returns a few lines showing today's production log.
Is there any way to get Heroku logs for the past few weeks?
Thanks.
Any number up to 500 lines can now be retrieved using the -n flag.
heroku logs -n 420
You can also run:
heroku logs -t
And let that run for a while.
EDIT: You can also use third-party tools like Papertrail (available as a Heroku add-on).
(Correcting my own old response.) Previously, Heroku only provided access to the last 100 lines. This limit now appears to have been raised.
There's also the pretty cool-sounding Logentries add-on, with generous free offerings.
Not sure about going back given those limitations, but going forward you can forward your logs to an external syslog server.
Syslog drains - Premium Add-on
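For instance, provisioning Papertrail (mentioned above) attaches a syslog drain for you; choklad was its free plan name at the time of writing and may have changed:
heroku addons:create papertrail:choklad --app myproject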
