Logfiles may be generated either by the default Linux installation or by installing utilities that create additional logging. These logfiles assist process accounting by retaining details that can aid in troubleshooting and in detecting attacks. Normally, you can find these files in the `/var/log` directory, broken up into groups relating to networks, users, and processes. The default Red Hat installation provides mechanisms for gathering information on network connections. Two network logs are created automatically: one tracking FTP connections, `/var/log/xferlog`, and the other listing all failed remote connection attempts, `/var/log/secure`.
Both of these files can be viewed using the `less` or `tail` command; `tail` in particular will show you the latest additions.
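As a sketch of surfacing the newest entries, the commands below use a throwaway file in place of `/var/log/secure`, since reading the real file normally requires root privileges:

```shell
# Sketch: viewing the newest entries in a logfile. A throwaway
# file stands in for /var/log/secure, which normally requires
# root privileges to read.
log=$(mktemp)
printf 'old entry\nnewer entry\nnewest entry\n' > "$log"

tail -n 2 "$log"   # prints the last two lines:
                   #   newer entry
                   #   newest entry

rm -f "$log"
```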
Logfiles are also automatically created to keep track of what users are doing. The file `/var/run/utmp` provides a listing of all currently connected users, which you can view by issuing the `who` command. A history of all user logins is kept in the `/var/log/wtmp` file, which you can view by issuing the `last` command. Viewing these files may assist you in finding people who are abusing your system.
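A minimal sketch of checking for suspicious logins (the output depends entirely on this system's login records, so it will differ from machine to machine):

```shell
# Sketch: inspecting current and historical logins. Output depends
# entirely on this system's login records.
who                 # currently logged-in users, from /var/run/utmp
last -n 5 || true   # five most recent logins, from /var/log/wtmp
                    # (|| true: tolerate systems without a wtmp file)
```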
The logfiles that keep track of process executions are not created automatically, and process accounting must be enabled on your system (on Red Hat, by installing the psacct package and turning accounting on). Once installed and enabled, the file `/var/log/pacct` will contain a record of process executions, which you can view by issuing the `lastcomm` command. This information may be valuable in troubleshooting and in investigating security problems.
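A hedged sketch of reading the accounting records follows; it assumes the psacct package supplies `lastcomm`, and `lastcomm` only produces output after accounting has been enabled (for example with `accton /var/log/pacct`, run as root):

```shell
# Sketch: viewing process accounting records. Assumes the psacct
# package provides lastcomm, and that accounting has already been
# enabled (e.g. "accton /var/log/pacct", run as root).
if command -v lastcomm >/dev/null 2>&1; then
    lastcomm | head -n 5    # most recent process executions
else
    echo "lastcomm not found; install the psacct package"
fi
```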
The following series of images shows what these logfiles look like.
In Red Hat Linux, a common command for viewing log files is `tail`. The `tail` command displays the last few lines of a log file, which is useful for monitoring system activity and troubleshooting issues. To view the last 10 lines of a log file, use:
tail -n 10 /var/log/<logfile>
Replace `<logfile>` with the name of the log file you want to view. For example, to view the last 10 lines of the system log file, use:
tail -n 10 /var/log/messages
If you want to continuously monitor the log file and view new entries as they are added, use `tail` with the `-f` option. For example:
tail -f /var/log/messages
This command displays the last few lines of the log file and then continues to monitor it, showing new entries in real time as they are appended. To exit `tail -f`, press Ctrl+C.
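The behavior of `tail -f` can be demonstrated end to end with a throwaway file and a background writer; here `timeout` stops the watch where you would normally press Ctrl+C:

```shell
# Sketch: demonstrating tail -f on a throwaway file. A background
# subshell appends a line after the watch starts; timeout stops
# the watch after one second (interactively you would press Ctrl+C).
log=$(mktemp)
echo "first entry" > "$log"
( sleep 0.3; echo "entry added later" >> "$log" ) &

timeout 1 tail -f "$log" || true   # timeout's nonzero exit only
                                   # means the watch was stopped
# prints "first entry", then "entry added later" once it is appended

wait
rm -f "$log"
```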
The following screen illustrates the results when viewing logfiles.
All of the following commands can be used in Red Hat Linux to view and interact with log files.
Here's a brief overview of how each command can be used:
- `less` command: Opens a file for viewing one page at a time. You can scroll up and down through the log file. Useful for large log files. Example: `less /var/log/messages`
- `more` command: Similar to `less`, but you can only scroll forward. It is an older command and not as flexible as `less`. Example: `more /var/log/messages`
- `cat` command: Displays the entire content of a file at once. If the log file is large, it may be difficult to read. Example: `cat /var/log/messages`
- `grep` command: Searches for specific patterns or strings in a file. Useful for filtering log entries. Example: `grep "error" /var/log/messages`
- `tail` command: Displays the last few lines of a file. Useful for monitoring log files in real time with the `-f` option. Example: `tail -f /var/log/messages`
- `zcat` command: Views the contents of compressed log files (`.gz` format) without decompressing them. Example: `zcat /var/log/messages.gz`
- `zgrep` command: Similar to `grep`, but works on compressed files (`.gz` format). Example: `zgrep "error" /var/log/messages.gz`
- `zmore` command: Similar to `more`, but for compressed files. It allows viewing a compressed file page by page. Example: `zmore /var/log/messages.gz`
These commands are all commonly used in Red Hat Linux for inspecting log files, and each has its own advantages depending on your needs, such as viewing large logs, searching for specific entries, or working with compressed log files.
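The compressed-file commands can be tried end to end with a throwaway file standing in for a rotated log such as a compressed copy of `/var/log/messages` (the contents here are invented for illustration):

```shell
# Sketch: working with a compressed logfile. A throwaway file
# stands in for a rotated, gzip-compressed system log.
log=$(mktemp)
printf 'ok start\nerror: disk full\nok end\n' > "$log"
gzip "$log"                 # produces "$log.gz" and removes the original

zcat "$log.gz"              # prints all three lines without decompressing
zgrep "error" "$log.gz"     # prints only: error: disk full

rm -f "$log.gz"
```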
The next lesson concludes this module.