Reading syslog output on a Mac

I have a program that was written for Linux and I am trying to build and run it on my Mac OS X 10.5 machine. The program builds and runs without problems, but it makes many calls to syslog. I know that syslogd is running on my Mac, but I can't seem to find where my syslog calls are output.
The syslog calls are of the form
syslog(LOG_WARNING, "Log message");
Any idea where I might find my log output?

/var/log/system.log
You can monitor it easily using tail -f /var/log/system.log
See also the logger (man logger) and syslog (man syslog) commands.
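For a quick sanity check, you can send a test message yourself with logger and watch it arrive (a minimal sketch; the -p argument is a facility.level pair):
logger -p user.warning "Test warning message"
tail -f /var/log/system.log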

You should probably use the Console.app to view logfiles. It's purdy.
Select your device on the left and filter messages on the right.

Maybe interesting to note: Apple used a real syslogd in the past, but this has since been replaced by ASL (Apple System Log). The syslog command is still available, but it only accesses that one log. If you want to access all ASL log messages across all configured log files, use the log command.
For example, the following shows all log messages produced by Safari within the last two days (be patient, it can take a while):
log show --predicate 'process == "Safari"' --last 2d
See man log for all the actions you can perform, all the parameters it knows and what attributes you can filter for.
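If you want to follow new messages live rather than query history, log also has a stream subcommand; for example, with the same predicate as above:
log stream --predicate 'process == "Safari"'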

When in doubt, there's always man syslog.
You can find your messages in /var/log/syslog; my machine is set up out of the box to only include high-level messages, so you may need to adjust your settings.
You can also read the messages through syslog(1), or create a test message with a command like
$ syslog -s -l INFO "Hello, world."
Use a severity of P ("panic") and you'll get an exciting message on your console immediately.
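For example, following the same pattern as the INFO message above:
$ syslog -s -l P "Hello, world."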

Mac OS X implements a superset of syslog's functionality. All of syslog is there, but as part of ASL.
Console, mentioned by Matthew Schinckel in his answer, is the GUI on ASL. It'll show you any messages that exist in the database, as fetched by the queries listed in the sidebar. There are two queries by default: one shows only messages sent with the Console facility (as used by NSLog, among other things), while the other shows all log messages. Check the all-messages query; you'll probably find your message there.
That “all” does come with an asterisk. If you look in /etc/asl.conf, you'll see this line:
# save everything from emergency to notice
? [<= Level notice] store
Fortunately, in your case, the message will pass this check, since warning outranks (is a lesser number than) notice.
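If you'd rather query the ASL store from the command line, syslog(1) also accepts key/value filters; an illustrative invocation (myprogram is a placeholder for whatever ident your program logs under; see man syslog for the full set of -k operators):
$ syslog -k Sender myprogram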

If you need complex syslog analysis (hour-by-hour navigation in the terminal, regexes, real-time comparison with other files, or even running SQL over syslog), lnav provides it seamlessly.
Installation:
brew install lnav
Usage:
lnav /var/log/system.log

Building on Charlie's answer, I would like to add that you should take a look at the syslog.conf(5) manpage and also take a peek at /etc/syslog.conf, which is where the syslog configuration is defined by default (including, as far as I can tell, on OS X 10.5.x).
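For orientation, each line in syslog.conf pairs a facility.level selector with an action; a couple of illustrative (not verbatim) entries:
*.notice    /var/log/system.log
mail.*      /var/log/mail.log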

Check for a call to openlog somewhere in the program. A call to openlog sets the log identity and facility for subsequent syslog calls, which can route the output to a different log file than the default.
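A minimal sketch of what that looks like in C (the ident string and facility here are hypothetical; see man 3 syslog):
#include <syslog.h>

int main(void)
{
    /* "myprog" is a hypothetical ident; LOG_PID adds the process id,
       and LOG_LOCAL0 routes messages to whatever action syslog.conf
       defines for the local0 facility. */
    openlog("myprog", LOG_PID, LOG_LOCAL0);
    syslog(LOG_WARNING, "Log message");
    closelog();
    return 0;
}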

Big Sur
Unfortunately, none of the stated answers worked for me.
What worked for me:
The system mail, accessed using the mail program from the terminal, had all the /usr/sbin/cron logs as emails.

Related

Is there a way in Laravel to send an output to both console and log?

I am writing a command which has a lot of service information that I need to see while the command is running.
I am outputting this info by simply running echo "some text", and that way I can see what happens when running the command. When the same command is run with the scheduler, I have to log all this info, so I have to duplicate all the same messages with Log::info("some text").
To avoid duplication, I could create a helper class that does both and include it in all the service classes related to this command, but that still doesn't feel like an ideal solution. Is there a built-in way in Laravel to send output to both the console and the log at the same time?
You could add ->appendOutputTo('path') when running the task that executes your command, to store the output messages in your log file. Although I'm not sure if this will log all console I/O (it would be good to know, in case you test it).
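As a sketch of that approach (the command name and log path are placeholders; appendOutputTo and storage_path are standard Laravel scheduler/helper calls):
$schedule->command('my:command')
         ->hourly()
         ->appendOutputTo(storage_path('logs/my-command.log'));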

Tailing AWS Lambda/CloudWatch logs

I found out how to access Lambda logs from another answer.
Is it possible to tail them? (Manually pressing refresh is cumbersome.)
Since you mentioned tailing, I expect you are comfortable working in the terminal with CLI tools.
You can install awslogs locally and use it to tail CloudWatch.
e.g.
$ awslogs get /aws/lambda/my-api-lambda ALL --watch --profile production
Aside from not needing to refresh anything anymore (that's what tail is for), I also like that you don't have to worry about jumping between different LogGroups (unlike in the CloudWatch console).
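For reference, awslogs is distributed as a Python package, so installation (assuming you have pip available) is typically:
$ pip install awslogs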
Aside: We've noticed that tailing logs gets really slow after an AWS Lambda Function has had a lot of invocations. Even looking at logs through the AWS Console is incredibly slow. This is because "tail" type utilities need to connect to each log stream. Log events get expired due to the policy you set on the Log Group itself, but the Log Streams never get cleaned up. I made a few little utility scripts to help with that:
https://github.com/four43/aws-cloudwatch-log-clean
Hopefully that saves you some agony over waiting for those logs.
Actually, there is a better way with Insights (in the same CloudWatch).
Run a query like this on a log group and you will get what you want:
fields #timestamp, #message
| sort #timestamp desc
| limit 20
You can also add it to a Dashboard so it's always "nearby".
If you are using the AWS CLI, from the command line you can also do:
aws logs tail <your_log_group_name> --follow
The --follow flag will poll for new logs continuously.
You can also use --since to set the time from which to begin displaying logs.
It supports:
s - seconds
m - minutes
h - hours
d - days
w - weeks
e.g.
aws logs tail <your_log_group_name> --since 20m
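The flags combine; for example, to follow new logs starting from twenty minutes ago:
aws logs tail <your_log_group_name> --follow --since 20m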

Clarion-based application on Windows Server 2016, frozen. Suggestions needed on tools we could use to gather more information

I have a Clarion application running on Windows Server 2016, talking to a Sybase DB. Over the past few weeks we find that the application freezes for different users at a given time; however, the user can leave that session as is and start a new one, and that works fine. The users are known to run multiple instances of the same application on one remote server or on multiple servers. I wanted to get more information on the freeze-up, so I looked through the application event logs on the system, where I see explorer.exe crashes; these correlate with the time of the issue sometimes, but not always. I also checked the DB transaction logs from Sybase and do not find any crashes, errors, or stuck connections. Since I have exhausted all the options I know of, I am reaching out to ask if there are any other places I can look to gather more information.
I would love to know of any applications or tools we could use to gather logs from a frozen Clarion application on Windows. It would also be good to know if anyone has faced such a situation, and where and how you looked into the issue.
Thanks in advance for your help.
Clarion's runtime library and database drivers expect a persistent connection. Disconnects that are normal with remote ODBC can cause problems (including app hangs) unless you test for them at the ABC file manager level and reconnect, or use similar steps to test and recover.
If you're looking for specifics about what's going on between the driver and the SQL backend, I suggest using Clarion's database driver trace facilities. From the help topic: "Logging Driver I/O for debugging":
To view the trace details in debugview, name the target trace file "DEBUG:"
Logging opens the named logfile for exclusive access. If the file exists, the new log data is appended to the file.
On Demand Logging
For on-demand logging you can use property syntax within your program to conditionally turn various levels of logging on and off. The logging is effective for the target table and any view for which the target table is the primary table.
file{PROP:Profile}=Pathname !Turns Clarion I/O logging on
file{PROP:Profile}="DEBUG:" !Turns Clarion I/O logging on and
!sends output via OutputDebugString()
!(viewable via debugview, etc)
file{PROP:Profile}='' !Turns Clarion I/O logging off
PathName = file{PROP:Profile} !Queries the name of the log file
file{PROP:Log}=string !Writes the string to the log file
file{PROP:Log}="DEBUG:" !Writes the string to the log file
file{PROP:Details}=1 !Turns Record Buffer logging on
file{PROP:Details}=0 !Turns Record Buffer logging off
where Pathname is the full pathname or the filename of the log file to create. If you do not specify a path, the driver writes the log file to the current directory.
You can also accomplish on demand logging with a SEND() command and the LOGFILE driver string. See LOGFILE for more information.
An example I use frequently, based on the help above:
SYSTEM{PROP:DriverTracing} = '1'
CRMNotes{PROP:TraceFile} = 'DEBUG:'
CRMNotes{PROP:Details}=1
CRMNotes{PROP:Profile}= 'DEBUG:'
CRMNotes{PROP:LogSQL} = 1

Logstash close file descriptors?

BACKGROUND:
We have rsyslog creating log file directories like: /var/log/rsyslog/SERVER-NAME/LOG-DATE/LOG-FILE-NAME
So multiple servers are spilling out their logs of different dates to a central location.
Now, to read these logs and store them in Elasticsearch for analysis, I have my Logstash config file looking something like this:
input {
  file {
    path => "/var/log/rsyslog/**/*.log"
  }
}
ISSUE:
Now, as the number of log files in the directory increases, Logstash opens file descriptors (FDs) for new files and does not release the FDs of log files it has already read.
Since log files are generated per date, once a file has been read it is of no further use, as it will not be updated after that date.
I have increased the file-open limit to 65K in /etc/security/limits.conf.
Can we make Logstash close the handle after some time, so that the number of open file handles does not grow too large?
I think you may have hit this bug: http://github.com/elastic/logstash/issues/1604. Do you have the same symptoms? Exceptions in the logs after some time? If you run sudo lsof | grep java | wc -l, do you see the descriptors steadily increasing over time? (Some of them might close, but some will stay open, and their number will increase.)
I've been tracking this issue for some time, and I don't know that it's properly solved.
We were in a similar boat, perhaps a bigger one: Logstash couldn't open handles for hundreds of thousands of log files on a box, even though very few of them were being actively written to. LOGSTASH-271 captured this issue, and there were some attempts to patch Logstash, including PR #1260.
It seems a fix may have made its way into Logstash 1.5 with PR #1545, but I've never tested this personally. We ended up forking the underlying library Logstash uses to implement the file input, called FileWatch, into FFileWatch, which adds an "eviction mechanism".
The basic idea behind this approach is to only keep files open while they're being written. Normally, Logstash opens a handle on each file and keeps it open forever, but FFileWatch adds an option to close the handle if the file has not changed recently (eviction_interval). I then created a custom build of Logstash using the forked gem.
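For illustration only, wiring that option into a file input might look something like this (the option name comes from the description above; check the FFileWatch fork for the actual syntax it supports):
input {
  file {
    path => "/var/log/rsyslog/**/*.log"
    # hypothetical: evict handles for files idle longer than an hour
    eviction_interval => 3600
  }
}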
Obviously this is less than ideal, but it worked for us. Eventually we dropped Logstash entirely for picking up log files, although we still use it further down the log processing pipeline. We implemented our own lightweight log shipper (Franz), which does not suffer from this issue.

What is good/free software for monitoring IIS in Windows Vista?

I always forget to check what's going on in IIS on our webservers, and am wondering: is there some stupid applet or something that always runs locally that I can click on to check event logs and IIS logs on a remote machine?
Mark
You can set up Samurize to follow the log output on the local and remote machines, but it requires some setup.
You can use a remote shell utility such as OpenSSH to connect to remote machines securely.
One at a time. Compmgmt.msc -> connect to another computer.
But one at a time is boring. Monitoring dozens of machines? I've been using logparser from MS for my log-monitoring needs. I run a query that dumps errors and warnings to a CSV file a few times a day.
So far, I've only used it to aggregate event logs across the dozen servers in our QA environment, but it appears to accept many kinds of log input, including IIS. A pseudo log-file query (I don't have samples with me):
logparser "Select [eventtype], [message] into output.csv FROM \\server1\system, \\server2\system" -i EVT
This shows: you can aggregate multiple servers, you tell it the input format (it supports a dozen log types), you can dump the results into a CSV file, and it looks sort of like SQL. This article on SecurityFocus has an IIS log sample.
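For IIS specifically, a query along these lines (illustrative; the field names follow the IIS W3C log format) would pull the ten most-requested URLs:
logparser "SELECT TOP 10 cs-uri-stem, COUNT(*) AS hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY hits DESC" -i:IISW3C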
I'm not an applet type of guy, so I haven't thought much about desktop widgets to do this.
