stderr is a file descriptor opened automatically (along with stdin and stdout) for every new process on a Unix system. It is intended as a simple way for developers to provide feedback to a user without assuming any fixed format.
stderr is always exposed to a process as file descriptor 2.
If a process is launched from a shell attached to a tty, stderr usually appears inline with the rest of the program's output.
In the shell, stderr may be redirected in order to differentiate it from stdout or to send it to a file.
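For example, errors from a long `find` run can be appended to a file while normal results continue to stdout (a minimal sketch; the search pattern and paths are illustrative):

```sh
# Search the whole filesystem; matches go to stdout, while errors
# such as "Permission denied" (file descriptor 2) are appended to find.log.
find / -name 'core' 2>> find.log
```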
It is important to note that we used the append operation `>>` here, rather than a single `>`, which would overwrite `find.log` every time the command was run.
This sort of redirection is a shell feature that is commonly used in init scripts and cron tasks to capture the output of both short-lived commands and long-running daemons.
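For instance, a crontab entry might capture both streams of a nightly job in a single file (the script path and schedule here are hypothetical):

```
# Run backup.sh at 02:30 every night; >> appends stdout to the log,
# and 2>&1 merges stderr into stdout so both streams are captured.
30 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
```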
The `tail` command's `--follow` (`-f`) flag is an extremely useful tool for viewing new lines as they are appended to a file in real time.
For example, if you were running a web server like nginx or Apache that was configured to send its access log to a file, you could use `tail` to see new requests. With the `-f` option, `tail` won't exit on its own; it will continue to wait for new lines to be written to the log file and print them to the terminal until it receives a signal or encounters an error.
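A sketch, assuming nginx is writing its access log to the conventional path below:

```sh
# Print the last lines of the access log, then keep printing new
# requests as they arrive; press Ctrl-C to send SIGINT and stop.
tail -f /var/log/nginx/access.log
```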
It is important to understand that once a file is opened for writing, the writing process refers to that file only by its file handle, a number assigned by the kernel.
If that file is renamed with `mv` or deleted with `rm`, writes to that file handle will still succeed.
This can sometimes lead to counter-intuitive situations where log messages are being written to a file that's been renamed for archival or to an inode that no longer has a filename associated with it.
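You can demonstrate this in any shell (the file names are arbitrary):

```sh
exec 3>> demo.log         # open demo.log for appending as descriptor 3
echo "before rename" >&3
mv demo.log demo.log.old  # rename the file out from under the writer
echo "after rename" >&3   # still succeeds; the line lands in demo.log.old
exec 3>&-                 # close the descriptor
```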
Some daemons provide mechanisms for closing and reopening their log files upon receiving a signal like SIGHUP, but quite a few don't.
Logging directly to files can add a lot of complexity to an application as close attention has to be paid to the use of file handles, log directory permissions, timezones, etc.
Syslog was created to provide a simple logging interface to application developers while offloading the tasks of sorting and storing logs to a separate daemon.
All modern Linux distributions ship with a syslog daemon and most of them are pre-configured to write messages to various files in `/var/log/`, depending on their facility or priority.
While the exact names of these files are not consistent across different Linux distributions, a few common ones like `/var/log/auth.log` and `/var/log/kern.log` almost always exist.
If you haven't already, take a look at the files in `/var/log/` on your system to get a sense of the types of log messages available in these files.
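An easy way to experiment is the `logger` utility, which submits a message to syslog from the command line; a small sketch, assuming your daemon routes the `user` facility to `/var/log/syslog` (the exact file name varies by distribution):

```sh
# Send a test message with the user facility at notice priority,
# then look for it in the aggregated log file.
logger -p user.notice "hello from logger"
grep "hello from logger" /var/log/syslog
```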
One of the advantages of using a syslog daemon is that the format of log lines can be configured in a single place and standardized for all services using syslog on a single host.
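A typical line might look like the following (the hostname, PID, and message are invented for illustration):

```
Feb  3 21:45:01 web01 sshd[4310]: Accepted publickey for deploy from 10.0.0.5 port 52413 ssh2
```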
In this example, every line starts with a timestamp, the server's hostname, the name of the program and a PID.
While the name of the program is set when the connection to syslog is first opened, the rest of these fields are generated by the syslog daemon itself and added to every line.
Many different syslog implementations exist with a variety of configuration mechanisms and design philosophies.
Most current Linux distributions ship with a syslog daemon that implements some superset of the original Unix syslogd's functionality.
The following examples will use rsyslogd, which is currently included in Ubuntu Linux and, according to its manpage, "is derived from the sysklogd package which in turn is derived from the stock BSD sources."
As soon as you start to manage more than a couple of servers, you start to think about ways to aggregate all of their logs in a single place so that you don't have to log in to each one individually to find an issue.
Remote log aggregation is also often used to provide an audit trail for security events or a source of data that can be fed into a metrics system like Graphite or Ganglia.
There is a standard protocol for sending syslog events across a network to another host using UDP port 514.
As UDP is connectionless and makes no delivery guarantees, syslog messages sent to a remote host using this standard protocol can be dropped, delayed, or intercepted without any real indication to the user.
For these reasons, many syslog daemons implement different extensions and mechanisms for transporting this stream reliably.
The simplest option is to replace UDP with TCP to provide a reliable transport layer.
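In rsyslogd, for example, forwarding everything to a remote host can be expressed in one line of its legacy configuration syntax (the destination hostname below is a placeholder, and you would pick one transport or the other):

```
# Hypothetical /etc/rsyslog.d/forward.conf
# A single @ forwards all messages over UDP:
*.* @loghost.example.com:514
# A double @@ forwards over TCP instead:
*.* @@loghost.example.com:514
```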
When configuring syslog aggregation, attention and care should be paid to security, as syslog messages are often used as an audit trail and need to be protected against eavesdropping and manipulation.
Read your syslog daemon's documentation to understand what options are supported.
No matter which logging option you choose, writing directly to files or using syslog, log files grow large and unwieldy over time, making tasks like finding a specific event increasingly difficult. This is the problem the logrotate utility solves: it allows automatic rotation, compression, removal, and mailing of log files.
Log files may be handled at regular intervals (daily, weekly, or monthly) or when they grow too large, and logrotate itself is usually scheduled to run daily from cron.
The log files logrotate handles, and the actions carried out on them, are all defined in its configuration files.
The main configuration file is `/etc/logrotate.conf`. Applications can also drop their own configuration files into the `/etc/logrotate.d` directory; logrotate automatically includes every file it finds there.
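For example, an application could ship a file like `/etc/logrotate.d/myapp` (the path, glob, and values below are invented for illustration):

```
/var/log/myapp/*.log {
    # rotate once a day, keeping seven compressed archives
    daily
    rotate 7
    compress
    # tolerate a missing log file, and skip rotation when it is empty
    missingok
    notifempty
}
```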