The more you know about how Linux works, the better you’ll be at troubleshooting when you run into a problem. In this post, we’re going to dive into a problem that a contact of mine, Chris Husted, recently ran into, and look at what he did to determine what was happening on his system, stop the problem in its tracks, and make sure it was never going to happen again.
Disaster strikes
It all started when Chris’ laptop reported that it was running out of disk space: only 1GB remained available on his 1TB drive. He hadn’t seen this coming. He also found himself unable to save files, a very challenging situation since the laptop is the only system at his disposal and he needs it to get his work done.
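A quick way to confirm how tight things are is df, which reports per-filesystem usage. A minimal check might look like this (the device name and numbers below are invented for illustration, not taken from Chris’ machine):

$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p2  916G  915G  1.0G 100% /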
When he was prompted by the system to “Examine or Ignore” the problem, he chose to examine it. Looking around, he noticed that his /var/log directory had become extremely large. Examining the directory more closely, he saw that his syslog file had grown to 365GB. Imagine being Chris and looking at something like this:
$ ls -lh /var/log/syslog
-rw-r----- 1 syslog adm 365G Jun 5 12:11 /var/log/syslog
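To rank everything under /var/log by size and spot an offender like this one, du and sort do the job; this is a general technique, not necessarily the exact command Chris ran:

$ sudo du -ah /var/log | sort -rh | head -5

The -a flag makes du report individual files as well as directories, and sort -rh orders the human-readable sizes largest first.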
Searching for help
Hunting around on the web, Chris found a post on Stack Overflow that suggested capping the size of the syslog file.
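On Ubuntu-style systems, the standard tool for capping log size is logrotate, which already manages /var/log/syslog through /etc/logrotate.d/rsyslog. A minimal sketch of such a config follows; the maxsize value is an illustrative choice, and the postrotate helper path is the Debian/Ubuntu default:

# rotate daily, keep a week of old logs
/var/log/syslog
{
        daily
        rotate 7
        # rotate early if the file exceeds 100M (illustrative threshold)
        maxsize 100M
        missingok
        notifempty
        compress
        delaycompress
        postrotate
                /usr/lib/rsyslog/rsyslog-rotate
        endscript
}

With maxsize in place, the file is rotated as soon as it crosses the threshold rather than only on the daily schedule.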
The first thing he did was run these three commands:
$ sudo su -
# > /var/log/syslog
# systemctl restart syslog
The first command gave him a root shell, the second emptied the syslog file, and the third restarted the syslog daemon so it would continue collecting information about what was happening on the system. He still needed to track down the culprit.
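Incidentally, the emptying step can also be done in one shot without opening a root shell; truncate is a common alternative, though not what the post Chris followed used:

$ sudo truncate -s 0 /var/log/syslog

Truncating in place (rather than deleting the file) matters because rsyslog keeps the file open; removing it would leave the daemon writing to an unlinked inode, and the disk space wouldn’t be reclaimed until the daemon was restarted.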
Credits: NetworkWorld