Posts

Showing posts from 2010

Checking Memory usage using free command

Linux uses its memory in a very efficient way. Here I am not going to write about memory management in Linux, but will try to understand how to view memory usage on a Linux box. Linux will always try to use free RAM for caching, so "free" will almost always report very low free memory. We have the free command as a reporting tool for memory usage. Let us now look at the output of the free command.

-bash-3.00$ free -m
             total       used       free     shared    buffers     cached
Mem:          2007       1902        105          0        150        761
-/+ buffers/cache:        990       1016
Swap:         1963          0       1963
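The relationship between the two lines can be checked with a little shell arithmetic, using the sample figures above (free rounds to whole megabytes, so the derived numbers can be off by one from what free prints):

```shell
# Deriving the "-/+ buffers/cache" line from the Mem: row above (values in MB).
# Buffers and cache are reclaimable, so the kernel counts them as "used" on
# the Mem: line, but as effectively available on the second line.
used=1902; free=105; buffers=150; cached=761
echo "used  minus buffers/cache: $((used - buffers - cached))"
echo "free  plus  buffers/cache: $((free + buffers + cached))"
```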


Initially I was under the impression that the free column in the above output gives the free memory in my box, and that my box was really low on memory. But this is not exactly the case. We will see how.

Interpreting the output of the free command:

Let's now understand the headers from the output of the above comman…

Using up all the free inodes

Today my mind was stuck on a question: how will a Linux system behave when all the free inodes in a file system have been used up and there are none left?
To answer this, I first tried to identify the number of free inodes on my system. One option is to use the tune2fs command to view the superblock information, which has an entry for the number of free inodes. This is a snippet of tune2fs -l from my system.
$ sudo tune2fs -l /dev/sda6
tune2fs 1.41.10 (10-Feb-2009)
Filesystem volume name:  
Last mounted on:          /home
Filesystem UUID:          f7abadee-5b6a-4c30-b117-ba097e4c6123
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Err…
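A quicker way to check inode usage, and a sketch of how inode exhaustion can be reproduced in a scratch directory (the path /tmp/inode-test is a hypothetical example):

```shell
# Report inode usage per file system (look at the IUsed and IFree columns)
df -i /home

# Sketch: each empty file consumes one inode, so a loop like this
# eats into IFree even though the files take no data blocks.
mkdir -p /tmp/inode-test
for i in $(seq 1 1000); do
    touch /tmp/inode-test/file-$i
done
df -i /tmp              # IFree should have dropped by about 1000
rm -rf /tmp/inode-test  # give the inodes back
```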

Using tee to echo to a system file with sudo privileges

It is always good practice not to execute privileged commands by logging in as root. We can avoid
that by executing them with sudo. Many times we need to change a kernel parameter
to change the behaviour of a Linux system. For example, recently I needed to
change the CPU governor from 'userspace' to 'ondemand' on a Linux system.

I had to use the following command to complete the task.

$ echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

We have to execute the command as a privileged user, so I tried doing it with sudo. But it failed with the following error. It succeeds if I run it after logging in with uid=0, but I did not want to su - to root.

$ sudo echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
-bash: /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: Permission denied

This was because the above command has two parts, and sudo applies only to the first part (sudo echo ondemand), which itself d…
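For completeness, the fix the title refers to is to let a privileged process open the file instead of the unprivileged shell. A sketch of the two common workarounds:

```shell
# The shell performs the ">" redirection before sudo ever runs, so the
# open() on the sysfs file happens without privileges and fails.
# Piping into "sudo tee" makes the privileged tee process open the file:
echo ondemand | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Alternatively, run the whole command line, redirection included,
# inside a root shell:
sudo sh -c 'echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor'
```

The tee variant has the nice side effect of also echoing the value to the terminal, so you can see what was written.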

Preventing SSH timeouts: A probable solution

Recently, we saw some dropped SSH connections causing processes that depend on SSH to fail. I did some analysis on both hosts involved in the SSH connection, and below I present some findings on what might have triggered the failure.

Analyzing SSH connection drops due to network inactivity:

The connection tracking procedures implemented in proxies and firewalls keep track of all connections that pass through them. Because of the physical limits of these machines, they can only keep a finite number of connections in memory. The most common and logical policy is to keep the newest connections and to discard old and inactive connections first. This can be one of the reasons for connection drops, but it does not look to be the reason in our case, as our hosts are not behind NAT. For scenarios where hosts are behind NAT and are seeing dropped SSH connections, we may want to set the keep-alive time (/proc/sys/net/ipv4/tcp_keepalive_time) to a value less than the N…
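On the client side, application-level keep-alives can also prevent idle connections from being dropped. A sketch of the relevant ~/.ssh/config options (the interval values here are assumptions; they just need to stay below the middlebox's idle timeout):

```
# ~/.ssh/config
Host *
    ServerAliveInterval 60   # send an application-level keep-alive every 60s
    ServerAliveCountMax 3    # disconnect after 3 unanswered keep-alives
```

Unlike TCP keep-alives, these travel inside the encrypted channel, so they also keep the connection "active" from the point of view of any proxy in between.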

Configuring Nagios for monitoring of basic services

The Nagios documentation says:

"Nagios is quite powerful and flexible, but it can take a lot of work to get it configured just the way you'd like. Once you become familiar with how it works and what it can do for you, you'll never want to be without it."
The saying was true for me, as I struggled a lot initially while configuring Nagios. After configuring it once to monitor the localhost, I tried again and it was straightforward. I was able to configure it without much trouble and thought of documenting the process. First we will see how to configure Nagios to monitor services on the localhost, and later we will see how to monitor services on different hosts.

The following steps need to be followed to configure Nagios to monitor services on the localhost.

1. Download nagios and nagios-plugin from the Nagios repository:
         http://www.nagios.org/download/
   If you want to get it through the command line, you can download it using

wget http://sour…
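Once installed, monitoring a basic service on the localhost comes down to an object definition like the one below. This is a sketch based on the stock localhost.cfg that ships with Nagios; the path and template name are typical defaults and vary with version:

```
# /usr/local/nagios/etc/objects/localhost.cfg (typical default path)
define service {
    use                   local-service      ; template from the sample config
    host_name             localhost
    service_description   PING
    check_command         check_ping!100.0,20%!500.0,60%
}
```

Here check_ping raises a warning at 100 ms round-trip time or 20% packet loss, and goes critical at 500 ms or 60% loss.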

S.M.A.R.T : Using smartctl

Production systems running critical applications have a high requirement to be up all the time. But there are times when a system suddenly crashes, losing critical data. If it is a disk failure, we have to reinstall all applications and do the necessary configuration to get the system running again. We may also lose some amount of critical data, even if a proper backup is in place. It is always good if we get some kind of alert that a disk is going to fail in the near future. In that case we can have a scheduled downtime, intimating consumers about it in advance. That will help us come out of this catastrophe with minimal impact. SMART monitoring tools are all about this. Let's now have a brief overview of SMART and of how to use the SMART tools.


What is S.M.A.R.T

S.M.A.R.T stands for Self-Monitoring, Analysis and Reporting Technology. It is the industry-standard reliability prediction indicator for both IDE/ATA and SCSI hard …
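The smartctl front end from the smartmontools package covers the common tasks. A sketch of typical invocations, assuming the disk of interest is /dev/sda:

```shell
sudo smartctl -i /dev/sda        # identity info and whether SMART is supported
sudo smartctl -s on /dev/sda     # enable SMART on the drive if it is off
sudo smartctl -H /dev/sda        # overall health self-assessment (PASSED/FAILED)
sudo smartctl -A /dev/sda        # vendor attributes such as Reallocated_Sector_Ct
sudo smartctl -t short /dev/sda  # kick off a short offline self-test
```

A rising Reallocated_Sector_Ct is one of the classic early warnings that a drive is heading toward failure.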

Using screen utility in Linux

We had a requirement of building the file system on around 30 hosts. These hosts have around 2TB of space, and it takes around 2 to 3 days to build the file system. The file system building I am referring to is for our proprietary application file system, which is written on top of the Unix file system. It looks like the program is not well written, since it takes around 2 to 3 days to build the file system; I never looked into the code, so I don't know much about it. Anyway, my concern was that I could not leave my terminal open for 2 to 3 days for the file system creation to finish. My colleague told me about the screen utility, which can detach from the terminal leaving the program running. I tried using screen and phew! it helped. I invoked the file system creation program under a screen session and detached from the terminal after invoking the command. I logged off and went home. The next day, after logging back in to my system, I tried reattaching to the screen session. I connected and co…
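The workflow described above boils down to a few screen commands. The session name mkfs-job below is a hypothetical example:

```shell
screen -S mkfs-job       # start a new named session; run the long job inside it
# ... inside the session, press Ctrl-a d to detach, then log off ...

screen -ls               # next day: list the detached sessions
screen -r mkfs-job       # reattach to the named session and check on the job
```

Naming the session with -S is optional, but it makes reattaching much easier when several detached sessions are lying around.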