I have an ECS Fargate application whose logs are saved in CloudWatch using the awslogs driver.
Logging works very well; the only annoying thing is that each container creates a separate log stream, whose name, as explained in the documentation here, is:
prefix-name/container-name/ecs-task-id
I make extensive use of autoscaling, creating a lot of tasks, which in turn produce a lot of log streams.
I was wondering if it's possible to have all the logs go into the same log stream; that would help me a lot, but it doesn't look possible off the shelf. How could I achieve this?
This is not a solution to your use case, but a workaround you could use is to just search on your log group instead of going into your task ID. You can also use range queries on a log group, so this ends up providing pretty much the same thing as going into the specific log stream of each task ID. Each line of the log in the log group also has a link to the task-specific log stream.
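For example, here is a rough sketch with boto3 of searching the whole log group with a range query (the log group name and filter pattern are made up):

    import time
    import boto3

    logs = boto3.client("logs")

    # Search every stream in the log group at once, instead of opening
    # each prefix-name/container-name/ecs-task-id stream individually.
    paginator = logs.get_paginator("filter_log_events")
    now_ms = int(time.time() * 1000)

    for page in paginator.paginate(
        logGroupName="/ecs/my-service",        # hypothetical log group name
        filterPattern="ERROR",                 # hypothetical filter
        startTime=now_ms - 60 * 60 * 1000,     # range query: last hour
        endTime=now_ms,
    ):
        for event in page["events"]:
            # logStreamName still tells you which task produced the line
            print(event["logStreamName"], event["message"])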
Another thing you could try is using Elasticsearch to maintain your logs. Querying Elasticsearch is extremely easy (it pairs with Kibana, which is a pretty powerful off-the-shelf filtering tool).
Related
Assuming I have many Python processes running on an automation server such as Jenkins, let's say I want to use Python's native logging module and, besides writing to the Jenkins console or to a log file, store and centralize the logs somewhere.
I thought of using ELK for that, but then I realized that I could just as well create a dedicated log table in an existing database (I'm using Redshift), use something like Grafana for log dashboards/visualization, and save myself the trouble of deploying a new system (most of the people on my team are familiar with Redshift but not with Elasticsearch).
Although it sounds straightforward, I feel like I'm not looking at the big picture and that I would be missing some powerful capabilities that components like Logstash were written for in the first place. What would these capabilities be, and how would it be advantageous to use ELK instead of my solution?
Thank you!
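For concreteness, the kind of thing I have in mind is a thin logging handler writing into the table; a minimal sketch, assuming a psycopg2 connection to Redshift and a hypothetical app_logs(ts, level, logger, message) table:

    import logging
    import psycopg2  # Redshift speaks the PostgreSQL wire protocol

    class DatabaseLogHandler(logging.Handler):
        """Hypothetical handler that appends each record to a log table."""

        def __init__(self, dsn):
            super().__init__()
            self.conn = psycopg2.connect(dsn)
            self.conn.autocommit = True

        def emit(self, record):
            try:
                with self.conn.cursor() as cur:
                    cur.execute(
                        "INSERT INTO app_logs (ts, level, logger, message) "
                        "VALUES (to_timestamp(%s), %s, %s, %s)",
                        (record.created, record.levelname,
                         record.name, self.format(record)),
                    )
            except Exception:
                self.handleError(record)  # never let logging crash the job

    logger = logging.getLogger("jenkins_job")
    logger.addHandler(DatabaseLogHandler(
        "dbname=analytics user=etl host=redshift.example.com port=5439"))
    logger.setLevel(logging.INFO)
    logger.info("job started")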
I have implemented a full ELK stack in my company in the past year.
The project was huge and took a lot of time to implement properly. The advantages of using ELK rather than implementing our own centralized logging solution are:
Not needing to reinvent the wheel: there is already a product that does just that (and the installation part is extremely easy).
It is battle-tested and can withstand huge amounts of logs in a short time.
As your business and product grow and shift, you will need to parse more logs with different structures, which would mean DB changes in a self-built system. Logstash gives you endless possibilities for filtering and parsing those newly formatted logs.
It has clustering and HA capabilities, and you can scale your logging system vertically and horizontally.
Very easy to maintain and change over time.
It can send the needed output to a variety of products, including Zabbix, Grafana, Elasticsearch, and many more.
Kibana gives you the ability to view the logs, build graphs and dashboards, set up alerts, and more...
The options with ELK are really endless, and the more I work with it, the more I find new ways it can help me: not just for viewing logs on distributed remote server systems, but also for security alerts, SLA graphs, and many other insights.
We want to set up a common logging interface across all the product teams in our company. We chose ELK for this, and I want some advice regarding the setup:
One way is to have a centralized ELK setup, with all teams using some sort of log forwarder, e.g. Filebeat, to send logs to a common Logstash. The issue I see with this is: if teams want to use filters on the logs for analyzing log messages, they would need access to the common ELK machine to add filters, as Beats doesn't support grokking or any other filtering.
The second way is to have separate Logstash servers per team, all pointing to a common Elasticsearch server. This way teams are free to modify/add grok filters.
Please enlighten me if I am missing something or my understanding is wrong. Other ideas are welcome.
Have you considered using Fluentd instead? It is lightweight, similar to Filebeat, and allows grokking and parsing.
Of course, your other alternative is to use a centralized Logstash instance and have different configuration files for each entity.
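For illustration, a minimal sketch of that centralized option, using a per-team field set by Filebeat to select grok filters (the field name and patterns are just assumptions; each team's block could live in its own file under conf.d):

    input {
      beats { port => 5044 }
    }

    filter {
      # Filebeat can attach a custom field (e.g. fields.team) on each shipper
      if [fields][team] == "payments" {
        grok {
          match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
        }
      } else if [fields][team] == "search" {
        grok {
          match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
      }
    }

    output {
      elasticsearch { hosts => ["localhost:9200"] }
    }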
I'm still a bit confused after reading the documentation provided by Logstash. I'm planning on writing an Android app, and I want to log the app's activity. Logs will be sent over the network. Is Logstash not the right solution, since it needs an "agent" installed on the systems that produce the logs?
I want a system that can store logs from the app's activity, but it also needs to be able to export the collected logs to a plain-text file. I know Logstash can output to Elasticsearch, but I'm not sure if it can export to a plain-text file at the same time, or whether that is a task Elasticsearch should handle.
Thanks a ton for any input you can provide.
Logstash forwarder isn't currently available for Android/iOS, unfortunately, nor could I find any existing solution for it from the community. (I asked the same question here but it was voted off-topic because it was deemed to be asking for tool/library suggestions.)
Your best bet, unfortunately, is either to write one yourself (which isn't trivial: you'll need to factor in offline connectivity, batching, scheduling, compression, file tracking, and so on), or to use another (usually commercial) logging service such as LogEntries.
By the way, the Android/iOS clients for LogEntries are open source. I'm not clear on their OSS licensing, but if you were to write an agent for Logstash yourself, you could perhaps start by looking at LogEntries' Android agent implementation, which already solves all the technical problems mentioned above: https://github.com/logentries/le_android.
And to answer your other question: yes, Logstash should receive your logs (from the mobile device), usually via the lumberjack input (aka logstash-forwarder). Logstash can then persist and index these logs to Elasticsearch, provided it's configured that way.
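If you just want to prototype the network path without the lumberjack protocol, here is a rough sketch (in Python rather than Android code, purely for illustration) that ships JSON lines to a hypothetical Logstash tcp input configured with the json_lines codec:

    import json
    import logging
    import socket
    import time

    class JsonTcpHandler(logging.Handler):
        """Hypothetical handler: one JSON object per line over TCP,
        matching a Logstash `tcp { codec => json_lines }` input."""

        def __init__(self, host, port):
            super().__init__()
            self.sock = socket.create_connection((host, port))

        def emit(self, record):
            try:
                doc = {
                    "@timestamp": time.strftime(
                        "%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
                    "level": record.levelname,
                    "message": self.format(record),
                }
                self.sock.sendall((json.dumps(doc) + "\n").encode("utf-8"))
            except Exception:
                self.handleError(record)

    log = logging.getLogger("app")
    log.addHandler(JsonTcpHandler("logstash.example.com", 5000))
    log.warning("hello from the device")

A real mobile agent would also need the offline batching and retry logic mentioned above; this only covers the wire format.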
Context:
I'm seeing a null pointer in one of the integration tests, which runs in a locally spawned Storm cluster. I increased the log level and still could not figure out what is really happening. Any help would be appreciated.
Your question doesn't quite match your title. If you're looking for better access to logs for scalable apps (whether on Hadoop or Storm), then check out tools that collect and aggregate logs from multiple nodes and systems. I'm familiar with Papertrail and Graylog, but I'm sure there are others. These tools, in conjunction with judicious use of log levels, can help you quickly find errors in your scalable apps.
If you're looking to get a better idea of how your system is performing (this is what I think of when I hear "visualization"), then check out distributed monitoring tools. We've had very good success with both the visualization of Storm bolt/spout performance and alert processing with CopperEgg, for example.
Since Umbraco v6 decided to implement logging to a text file by default, I would like to ask you guys what kind of logging you use.
Do you log to a text file on a production website, or do you log to a database table? Or do you implement any other kind of logging?
And what are the performance implications of this?
I do both types of logging, file as well as DB, in the production environment, as I need to audit logs and therefore have to keep everything current and saved.
I use NLog.
http://nlog-project.org/
It's robust, fast, and good; I have been using it in a production environment since last year. It gives you logging at various levels.
I would recommend using NLog.
At one point I investigated the question of the best logging frameworks and settled on NLog. I have already used it on different projects, and it has always shown good results.
With NLog you can send your logs to different targets:
file, database, event log, console, email, NLog Viewer, and so forth.
You can set up all configuration in config files, which is very cool and useful: you can easily set up how and where you want to write your logs.
Wrapper targets are also at your disposal (see details in the documentation). In my opinion, the most useful one is AsyncWrapper, which provides asynchronous, buffered execution of target writes; it will give you good performance.
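For example, a minimal NLog.config sketch wrapping a file target in AsyncWrapper (the file name and layout are just placeholders):

    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <targets>
        <!-- AsyncWrapper buffers log writes and flushes them on a background thread -->
        <target xsi:type="AsyncWrapper" name="asyncFile">
          <target xsi:type="File" name="file"
                  fileName="logs/app.log"
                  layout="${longdate} ${level:uppercase=true} ${logger} ${message}" />
        </target>
      </targets>
      <rules>
        <logger name="*" minlevel="Info" writeTo="asyncFile" />
      </rules>
    </nlog>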
There are also a lot of other cool features.