
Sunday, April 16, 2017

Linux Common Interview Questions and Answers ?

Q.0 What do the last two fields in the fstab file define?

Ans: The 5th field is the dump flag, which tells whether the filesystem has to be backed up by the dump utility. If it is "0" the filesystem will be ignored.
The 6th field tells the order in which the fsck command checks the filesystems at boot. If it is "0" then fsck won't check the filesystem.
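For illustration, a typical /etc/fstab layout looks like this (the devices and mount points are just examples); the dump flag and fsck order are the last two fields:
# <device>    <mount point>  <type>  <options>  <dump>  <pass>
/dev/sda1     /              ext4    defaults   1       1
/dev/sda2     /home          ext4    defaults   1       2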

Q:1 How To check the uptime of a Linux Server ?

Ans: Using the uptime command we can determine how long a Linux box has been running; the uptime is also shown by the top and w commands.

Q:2 How to check which Redhat version is installed on Server ?

Ans: Use the command cat /etc/redhat-release; the output of this command will tell you the Red Hat version.

Q:3 How to install rpm packages in Redhat & CentOS linux ?

Ans: The rpm and yum commands are used to install packages in Red Hat Linux and CentOS.

Q:4 How to check the ip address of LAN Card ?

Ans: Using the ‘ifconfig’ or ‘ip address’ command we can determine the IP address of a LAN card.

Q:5 How to determine the hostname of a linux box ?

Ans: Typing the hostname command in a terminal shows the hostname of a Linux server.

Q:6 How To check the default gateway ?

Ans: Using the ‘route -n’ command we can determine the default gateway in Linux.

Q:7 Which Command is used to check the kernel Version ?

Ans: ‘uname -r’

Q:8 How to check the current runlevel of a linux box ?

Ans : ‘who -r’ and ‘runlevel’; both of these commands are used to find the current run level.

Q:9 What is Initrd ?

Ans: Initrd stands for initial RAM disk; it contains a temporary root filesystem and the necessary kernel modules that help mount the real root filesystem in read-only mode.

Q:10 What is Bootloader ?

Ans: A bootloader is a program that boots the operating system and decides which kernel the OS will boot from.

Q:11 How to list hidden files from the command line ?

Ans: ‘ls -a <Folder_Name>’

Q:12 What is soft link ?

Ans: A soft link is a method to create shortcuts in Linux. It is similar to the Windows shortcut feature.

Q:13 How to create a blank file in linux from command line ?

Ans: Using the command ‘touch <file-name>’

Q:14 What is run level 2 ?

Ans: Run level 2 is the multi-user mode without networking.

Q:15 Why linux is called OpenSource ?

Ans: Because one can customize the existing code and redistribute it.

Q:16 How to check all the installed Kernel modules ?

Ans: Using the command ‘lsmod’ we can see the currently loaded kernel modules.

Q:17 What is the default uid & gid of root user ?

Ans: Default uid & gid of root user is 0.

Q:18 How To change the password of user from the Command Line ?

Ans: ‘passwd <User-Name>’

Q:19 What is a Process ?

Ans: Any program in execution is called a process.

Q:20 What is name of first process in linux ?

Ans: ‘init’ is the first process in Linux, which is started by the kernel and whose PID is 1.


Thursday, June 23, 2016

linux interview questions and answers for experienced ?

Q. What are the differences between a regular file and a directory?
A. A directory is marked with a different file type in its i-node entry and it is a file with a special organization. Specifically, it is a table consisting of file names and i-node numbers.

Q. Where are the file names stored on a file system?
A. The actual file names are stored in the directory file.

Q. What is an i-node?
A. An i-node (short for index node) is a data structure that contains the following information describing a file on the filesystem:
* File type (e.g., regular file, directory, symbolic link, character device).
* Owner (also referred to as the user ID or UID) for the file.
* Group (also referred to as the group ID or GID) for the file.
* Access permissions for three categories of user: owner (sometimes referred to as user), group, and other (the rest of the world).
* Three timestamps: time of last access to the file (shown by ls -lu), time of last modification of the file (the default time shown by ls -l), and time of last status change (last change to i-node information, shown by ls -lc). As on other UNIX implementations, most Linux file systems notably don't record the creation time of a file.
* Number of hard links to the file.
* Size of the file in bytes.
* Number of blocks actually allocated to the file, measured in units of 512-byte blocks. There may not be a simple correspondence between this number and the size of the file in bytes, since a file can contain holes, and thus require fewer allocated blocks than would be expected according to its nominal size in bytes.
* Pointers to the data blocks of the file.
I-nodes are identified numerically by their sequential location in the i-node table.
The i-node doesn't contain a file name; it is only the mapping within a directory list that defines the name of a file.
I-node 1 is used to record bad blocks in the file system. The root directory (/) of a file system is always stored in i-node entry 2.
I-node numbers are unique only within a file system.
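For illustration (the file name is only an example), the i-node number and most of the fields listed above can be inspected from the command line:
ls -i /etc/hosts     # prints the i-node number in front of the file name
stat /etc/hosts      # shows size, blocks, link count, uid/gid, permissions and the three timestamps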

Q. What are hard and soft links?
A. The mapping within a directory list that defines the name of a file and its i-node number is called a link, or a hard link. One can create multiple names — in the same or in different directories — each of which refers to the same i-node.
Hard links have two limitations, both of which can be circumvented by the use of symbolic links:
* Because directory entries (hard links) refer to files using just an i-node number, and i-node numbers are unique only within a file system, a hard link must reside on the same file system as the file to which it refers.
* A hard link can’t be made to a directory. This prevents the creation of circular links, which would confuse many system programs.
A symbolic link is just a file containing the name of another file. Because a symbolic link refers to a file name, rather than an i-node number, it can be used to link to a file in a different file system.
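A minimal illustration of the difference (file names are examples only):
touch file1                 # create a file
ln file1 file1.hard         # hard link: same i-node, link count becomes 2
ln -s file1 file1.soft      # soft link: a separate small file that stores the path "file1"
ls -li file1 file1.hard file1.soft    # compare i-node numbers and link counts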

Q. What is a Signal in Linux, and what signal is invoked when you use the kill command? What is the difference between kill and kill -9?
A. A signal is a limited form of inter-process communication used in Unix, Unix-like, and other POSIX-compliant operating systems. It is an asynchronous notification sent to a process or to a specific thread within the same process in order to notify it of an event that occurred. When a signal is sent, the operating system interrupts the target process's normal flow of execution.
The difference between invoking kill with no signal specified (which uses SIGTERM, number 15) and kill -9 is that the latter sends SIGKILL, which cannot be caught or ignored, so the process is terminated immediately without a chance to clean up open files and resources in use.
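As a quick illustration (PID 1234 is hypothetical):
kill 1234       # sends SIGTERM (15); the process may catch it and clean up before exiting
kill -9 1234    # sends SIGKILL; it cannot be caught or ignored, the kernel terminates the process at once
kill -l         # lists all available signals and their numbers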

Q. Describe what happens when you run the rm command.
A. The rm command removes a filename from a directory list, decrements the link count of the corresponding i-node by 1, and, if the link count thereby falls to 0, deallocates the i-node and the data blocks to which it refers.

Q. What is a process?
A. A process is an instance of an executing program. When a program is executed, the kernel loads the code of the program into virtual memory, allocates space for program variables, and sets up kernel bookkeeping data structures to record various information (such as process ID, termination status, user IDs, and group IDs) about the process. From a kernel point of view, processes are the entities among which the kernel must share the various resources of the computer.

Q. What are the logically divided parts of a process?
A. A process is logically divided into the following parts, known as segments:
* Text: the read-only machine-language instructions of the program run by the process.
* Data: initialized/uninitialized global and static variables used by the program;
* Heap: an area from which memory (for variables) can be dynamically allocated at run time. The top end of the heap is called the program break;
* Stack: a piece of memory that grows and shrinks as functions are called and return and that is used to allocate storage for local variables and function call linkage information;
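To get a rough feel for these segments (the binary path is just an example, and the size command requires binutils):
size /bin/ls            # prints the sizes of the text, data and bss segments of the executable
cat /proc/self/maps     # shows the live memory map (text, heap, stack, mapped libraries) of the reading process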

Q. What are the process states in Linux?
A. The main process states are:
* Running: the process is either running or ready to run.
* Interruptible: a blocked state; the process is waiting for an event or a signal from another process.
* Uninterruptible: a blocked state; the process waits for a hardware condition and cannot handle any signal.
* Stopped: the process is stopped or halted and can be restarted by some other process.
* Zombie: the process has terminated, but its information is still there in the process table.
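These states can be observed in the STAT column of ps, for example:
ps -eo pid,stat,comm | head    # R = running, S = interruptible sleep, D = uninterruptible sleep, T = stopped, Z = zombie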

Q. How are threads different from processes?
A. Like processes, threads are a mechanism that permits an application to perform multiple tasks concurrently. A single process can contain multiple threads. All threads are independently executing the same program, and they all share the same global memory, including the initialized data, uninitialized data, and heap segments.
Sharing information between threads is easy and fast. It is just a matter of copying data into shared (global or heap) variables. However, in order to avoid the problems that can occur when multiple threads try to update the same information, we must employ some synchronization techniques.
Thread creation is faster than process creation—typically, ten times faster or better. On Linux, threads are implemented using the clone() system call.
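To see the individual threads of a process from the command line (PID 1234 is hypothetical):
ps -L -p 1234 -o pid,lwp,nlwp,stat,comm    # one line per thread (LWP); nlwp = total number of threads
top -H -p 1234                             # top with a per-thread view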

Q. What is a Socket?
A. A socket is a form of interprocess communication and synchronization that can be used to transfer data from one process to another, either on the same host computer or on different hosts connected by a network. Network sockets are identified by the source IP address and port and the destination IP address and port.

Q. How do you debug a running process or a library that is being called?
A. strace -p PID     # trace the system calls of a running process
ltrace -p PID        # trace the library calls it makes (ltrace <command> traces a new process)

Q. How to see a memory map of a process, along with how much memory a process uses?
A. pmap -x PID

Q. You run chmod -x /bin/chmod, how do you make chmod executable again without copying it or restoring from backup?
A. On Linux, when you execute an ELF executable, the kernel does some mapping and then hands the rest of process setup off to ld.so(1), which is treated somewhat like a (hardware backed) interpreter for ELF files, much like /bin/sh interprets shell scripts, perl interprets perl scripts, etc. And just like you can invoke a shell script without the executable bit via ’/bin/sh your_script’, you can do:
/lib64/ld-linux-x86-64.so.2 /bin/chmod +x /bin/chmod

Q. Explain the TIME_WAIT state in a TCP connection, as displayed by netstat or ss.
A. A TCP connection is specified by the tuple (source IP, source port, destination IP, destination port). The reason there is a TIME_WAIT state following session shutdown is that there may still be live packets out in the network on their way to you. If you were to re-create that same tuple and one of those packets showed up, it would be treated as a valid packet for your connection (and probably cause an error due to sequencing). So the TIME_WAIT time is generally set to double the packet's maximum age. This value is the maximum age your packets will be allowed to reach before the network discards them. That guarantees that, before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead. That generally dictates the minimum value you should use. The maximum packet age is dictated by network properties; for example, satellite lifetimes are higher than LAN lifetimes since the packets have much further to go.
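Connections sitting in TIME_WAIT can be inspected with ss, for example:
ss -tn state time-wait | head    # list TCP sockets currently in TIME_WAIT
ss -s                            # summary counters, including the number of TIME_WAIT sockets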

Q. What is Huge Pages in Linux and what use is there for them?
A. Huge pages are a mechanism that allows the Linux kernel to utilize the multiple page size capabilities of modern hardware architectures. Linux uses pages as the basic unit of memory: physical memory is partitioned and accessed using the basic page unit. The default page size is 4096 bytes on the x86 architecture. Huge pages allow large amounts of memory to be utilized with reduced overhead.
To check: cat /proc/sys/vm/nr_hugepages.
To set: echo 5 > /proc/sys/vm/nr_hugepages
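Current huge page usage can also be seen in /proc/meminfo:
grep Huge /proc/meminfo    # shows HugePages_Total, HugePages_Free, HugePages_Rsvd and Hugepagesize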

Q. What is a Master boot Record and how do you back it up and restore it?
A. The MBR is a 512-byte segment on the very first sector of your hard drive composed of three parts: 1) the boot code, which is 446 bytes long, 2) the partition table, which is 64 bytes long, and 3) the boot signature, which is 2 bytes long.
To backup: dd if=/dev/sda of=/tmp/mbr.img bs=512 count=1
To restore: dd if=/tmp/mbr.img of=/dev/sda bs=512 count=1

Q. You are using iSCSI or a virtual machine with attached block device. Due to high IO or network latencies the FS goes in read only mode from time to time. What can you do to increase the write time out on the block device?
A. To increase the write time out on a block device in real time, use sysfs:
echo 60 > /sys/block/sdk/device/timeout

Q. Your server is using a lot of cached memory. How do you free it up short of rebooting?
A. Kernels 2.6.16 and newer provide a mechanism to have the kernel drop the page cache and/or inode and dentry caches on command, which can help free up a lot of memory.
To free page cache, dentries and inodes (flush dirty data to disk first with sync): sync; echo 3 > /proc/sys/vm/drop_caches


Q. How do you pin a process to a specific CPU?
A. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. The scheduler attempts to keep processes on the same CPU as long as practical for  performance  reasons. To pin a new process to the first CPU run:
taskset -c 0 top
To pin an existing process to the second CPU run:
taskset -c 1 -p $(pgrep top)
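To inspect or change the affinity of an already running process (the PID is an example):
taskset -p 1234          # print the current CPU affinity mask of PID 1234
taskset -cp 0,1 1234     # restrict PID 1234 to CPUs 0 and 1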

Q. How do you track new concurrent connections?
A. Concurrent connections are the number of authenticated "handshakes" between a client and server at any given time before all communications have been disconnected, whether by force or by refusal. You can run:
modprobe ip_conntrack
conntrack -E -e NEW

Q. What is SYN flood and how can you detect it and mitigate it?
A. A SYN flood is a form of denial-of-service attack in which an attacker sends a succession of SYN requests to a target's system in an attempt to consume enough server resources to make the system unresponsive to legitimate traffic. Detection can be done with netstat or ss by filtering for connections in the SYN-RECV state. Mitigation can be done by null-routing the offending IP and by enabling SYN cookies in the kernel, which allow the server to send back the appropriate SYN+ACK response to the client while discarding the SYN queue entry.
ss -a | grep SYN-RECV | awk '{print $4}' | awk -F":" '{print $1}' | sort | uniq -c | sort -n
netstat -antp | grep SYN_RECV | awk '{print $4}' | sort | uniq -c | sort -n

Q. You have a file with 2000 IPs. How do you ping them all using bash in parallel?
A.  echo $(cat iplistfile) | xargs -n 1 -P0 ping -w 1 -c 1

Q. What is Memory Overcommit in Linux?
A. By default, Linux will allow processes to allocate more virtual memory than the system actually has, assuming that they won't end up actually using it. When there's more overcommitted memory than the available physical and swap memory, the OOM-killer picks some process to kill in order to recover memory. One reason Linux manages memory this way by default is to optimize memory usage on fork()'ed processes; fork() creates a full copy of the process space, but in this instance, with overcommitted memory, only pages which have been written to actually need to be allocated by the kernel.
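The overcommit behaviour is controlled through sysctl; for example:
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic overcommit (default), 1 = always overcommit, 2 = don't overcommit
sysctl vm.overcommit_ratio           # percentage of RAM considered when overcommit_memory is 2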

Q. What is the system load average as displayed by uptime?
A. Load average is the sum of the number of processes waiting in the run-queue plus the number currently executing. If there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilizing its processors perfectly for the last 60 seconds.
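For example, to compare the load average with the number of CPUs:
uptime    # the last three numbers are the 1-, 5- and 15-minute load averages
nproc     # number of available CPUs; a load average persistently above this value indicates saturation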

Q. How do you list all kernel modules that are compiled in or enabled?
A. You can execute:
cat /boot/config-$(uname -r)
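To separate built-in features from loadable modules, the config file can be filtered, for example:
grep '=y' /boot/config-$(uname -r)    # features compiled directly into the kernel
grep '=m' /boot/config-$(uname -r)    # features built as loadable modules
lsmod                                 # modules currently loaded into the running kernel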

Q. Kernel space Vs. User space - pros and cons.
A. The role of the operating system, in practice, is to provide programs with a consistent view of the computer's hardware. In addition, the operating system must account for independent operation of programs and protection against unauthorized access to resources. This nontrivial task is possible only if the CPU enforces protection of system software from the applications.
Every modern processor is able to enforce this behavior. The chosen approach is to implement different operating modalities (or levels) in the CPU itself. The levels have different roles, and some operations are disallowed at the lower levels; program code can switch from one level to another only through a limited number of gates. Unix systems are designed to take advantage of this hardware feature, using two such levels. All current processors have at least two protection levels, and some, like the x86 family, have more levels; when several levels exist, the highest and lowest levels are used. Under Unix, the kernel executes in the highest level (also called supervisor mode), where everything is allowed, whereas applications execute in the lowest level (the so-called user mode), where the processor regulates direct access to hardware and unauthorized access to memory.
We usually refer to the execution modes as kernel space and user space. These terms encompass not only the different privilege levels inherent in the two modes, but also the fact that each mode can have its own memory mapping—its own address space—as well.
Unix transfers execution from user space to kernel space whenever an application issues a system call or is suspended by a hardware interrupt. Kernel code executing a system call is working in the context of a process—it operates on behalf of the calling process and is able to access data in the process's address space. Code that handles interrupts, on the other hand, is asynchronous with respect to processes and is not related to any particular process.

Q. What is the difference between Active and Passive FTP sessions?
A.
Active FTP :
command channel: client port above 1023 connects to server port 21
data channel: client port above 1023 is connected from server port 20

Passive FTP :
command channel: client port above 1023 connects to server port 21
data channel: client port above 1023 connects to server port above 1023


Saturday, March 5, 2016

YUM Interview Questions and Answers in linux ?


1.What is yum ?
Answer: yum ("Yellowdog Updater, Modified") is a front-end tool for package management. All rpm command activity can be carried out with the yum command in an automated way.
Yum automatically resolves dependencies, unlike the rpm command.

2.How to install packages using yum ?
Answer  : yum install package_name

3.How to update the package using yum ?
Answer: yum update package_name

4.How to search the package in yum ?
Answer: yum search package_name

5.How to remove the package  using yum ?
Answer:  yum remove package_name

6.How to check the updates for yum repository ?
Answer: yum check-update

7.How to update the yum repo ?
Answer: yum update

8.How to get the package information using yum ?
Answer: yum info package_name

9.How to list the installed packages on Redhat linux using yum command ?
Answer: yum list installed

10.How to know which package a particular file belongs to ?
Answer:  yum provides file_path
Ex:
[root@linuxforfreshers ~]# yum provides /etc/yum.conf
Loaded plugins: refreshpackagekit, rhnplugin
This system is not registered with RHN.
RHN support will be disabled.
yum-3.2.27-14.el6.noarch : RPM installer/updater
Repo        : localinstallation
Matched from:
Filename    : /etc/yum.conf

yum-3.2.27-14.el6.noarch : RPM installer/updater
Repo        : installed
Matched from:
Other       : Provides-match: /etc/yum.conf

11.How to list the enabled repositories ?
Answer: yum repolist








Wednesday, February 10, 2016

Nginx Server Interview Questions and Answers in linux?


What is Nginx Server?
Nginx (pronounced "engine x") is an open source web server and a reverse proxy server for HTTP, SMTP, POP3, and IMAP protocols, with a strong focus on high concurrency, performance and low memory usage.

What is the Best Usage of Nginx Server?
Nginx can deploy dynamic HTTP content on a network using FastCGI, SCGI handlers for scripts, WSGI application servers or Phusion Passenger module, and it can serve as a software load balancer.

What’s the Difference between Apache Web Server and Nginx?
Nginx uses an asynchronous event-driven approach to handling requests, instead of the Apache HTTP Server model that defaults to a threaded or process-oriented approach. Nginx's event-driven approach can provide more predictable performance under high loads.

Describe Some Best Features of Nginx?
Simultaneous Connections with low memory, Auto Indexing, Load Balancing, Reverse Proxy with Caching, Fault Tolerance

What is the Configuration File for Nginx and where can it be found on UNIX-like Systems?
The configuration file is named nginx.conf and is placed, depending on the distribution, in /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx.

What is the Master and Worker Processes in Nginx Server?
The main purpose of the master process is to read and evaluate configuration, and maintain worker processes. Worker processes do actual processing of requests.

How to define Worker Processes?
The number of worker processes is defined in the configuration file and may be fixed for a given configuration or automatically adjusted to the number of available CPU cores
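A minimal sketch of the relevant directives in nginx.conf (the values are examples only):
worker_processes auto;        # or a fixed number such as 4
events {
    worker_connections 1024;  # maximum connections each worker can handle
}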


Where is the Process ID for the Nginx Server written?
The process ID of the master process is written to the file /usr/local/nginx/logs/nginx.pid

What are the controls used in Nginx Server?
There are only a few controls associated with the Nginx server, and these are as below;
nginx -s [stop | quit | reopen | reload]

How to reload configuration file of Nginx Server?
You can reload Nginx configuration file by running this command: nginx -s reload

How to reopen the log files in Nginx Server?
You can use nginx -s reopen

What is the purpose of –s with Nginx Server?
The -s parameter is used to send a signal (stop, quit, reopen, reload) to the Nginx master process.

How to add Modules in Nginx Server?
Nginx modules must be selected during compile, run-time selection of modules is not currently supported.

Define some of Nginx Architecture?
In brief, Nginx uses a single master process that reads the configuration and manages a set of event-driven worker processes, which do the actual processing of requests.



what are the configuration files available for Nginx ?

1.      Default configuration directory: /etc/nginx/
2.      Default SSL and vhost config directory: /etc/nginx/conf.d/
3.      Default log file directory: /var/log/nginx/
4.      Default document root directory: /usr/share/nginx/html
5.      Default configuration file: /etc/nginx/nginx.conf
6.      Default server access log file: /var/log/nginx/access.log
7.      Default server error log file: /var/log/nginx/error.log

Monday, February 8, 2016

Nagios interview Questions and Answers in linux?


1.What is Nagios and how it Works ?
Ans:   Nagios is an open source system and network monitoring application. Nagios runs on a server, usually as a daemon or service. Nagios periodically runs plugins to monitor clients; if it finds anything in a warning or critical state it will send an alert via email or SMS as per the configuration.
The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if these results change.

2. What port numbers does Nagios use to monitor clients?
Ans: Port numbers are 5666, 5667 and 5668

3. Explain Main Configuration file and its location?
Ans:
Resource File : It is used to store sensitive information like usernames and passwords without making them available to the CGIs. Default path: /usr/local/nagios/etc/resource.cfg

Object Definition Files: This is the location where you define everything you want to monitor and how you want to monitor it. It is used to define hosts, services, hostgroups, contacts, contact groups, commands, etc. Default Path: /usr/local/nagios/etc/objects/

CGI Configuration File : The CGI configuration file contains a number of directives that affect the operation of the CGIs. It also contains a reference to the main configuration file, so the CGIs know how you’ve configured Nagios and where your object definitions are stored. Default Path: /usr/local/nagios/etc/cgi.cfg

4. A Nagios administrator is adding 100+ clients to monitoring, but he doesn’t want to add an entry for every .cfg file in nagios.cfg; he wants to enable a directory path instead. How can he configure a directory for all configuration files?
Ans: He can achieve the above scenario by adding the directory path in the nagios.cfg file; at line number 54 we have to add the line below.
54  cfg_dir=/usr/local/nagios/etc/objects/monitor

5. Explain Nagios files and its location?
Ans:
The main configuration file is usually named nagios.cfg and located in the /usr/local/nagios/etc/ directory default.

Object Configuration File : This directive is used to specify an object configuration file containing object definitions that Nagios should use for monitoring.
cfg_file=/usr/local/nagios/etc/hosts.cfg
cfg_file=/usr/local/nagios/etc/services.cfg
cfg_file=/usr/local/nagios/etc/commands.cfg

Object Configuration Directory :This directive is used to specify a directory which contains object configuration files that Nagios should use for monitoring.

cfg_dir=/usr/local/nagios/etc/commands
cfg_dir=/usr/local/nagios/etc/services
cfg_dir=/usr/local/nagios/etc/hosts

Object Cache File :This directive is used to specify a file in which a cached copy of object definitions should be stored.

line number 66 object_cache_file=/usr/local/nagios/var/objects.cache

Precached Object File: Line Number 82 precached_object_file=/usr/local/nagios/var/objects.precache Default

Resource File : resource_file=/usr/local/nagios/etc/resource.cfg
This is used to specify an optional resource file that can contain $USERn$ macro definitions. $USERn$ macros are useful for storing usernames, passwords, and items commonly used in command definitions.

Temp File : temp_path=/tmp
This is a directory that Nagios can use as scratch space for creating temporary files used during the monitoring process. You should run tmpwatch, or a similar utility, on this directory occasionally to delete files older than 24 hours.

Status File :  Line Number 105 status_file=/usr/local/nagios/var/status.dat
This is the file that Nagios uses to store the current status, comment, and downtime information. This file is used by the CGIs so that current monitoring status can be reported via a web interface. The CGIs must have read access to this file in order to function properly. This file is deleted every time Nagios stops and recreated when it starts.

Log Archive Path :  Line Number 245 log_archive_path=/usr/local/nagios/var/archives/
This is the directory where Nagios should place log files that have been rotated. This option is ignored if you choose to not use the log rotation functionality.

External Command File :  command_file=/usr/local/nagios/var/rw/nagios.cmd
This is the file that Nagios will check for external commands to process. The command CGI writes commands to this file. The external command file is implemented as a named pipe (FIFO), which is created when Nagios starts and removed when it shuts down. If the file exists when Nagios starts, the Nagios process will terminate with an error message. Restrict permissions on this file so that only authorized users can submit commands.

Lock File :  lock_file=/tmp/nagios.lock
This option specifies the location of the lock file that Nagios should create when it runs as a daemon (when started with the -d command line argument). This file contains the process id (PID) number of the running Nagios process.

State Retention File:  state_retention_file=/usr/local/nagios/var/retention.dat
This is the file that Nagios will use for storing status, downtime, and comment information before it shuts down. When Nagios is restarted it will use the information stored in this file for setting the initial states of services and hosts before it starts monitoring anything. In order to make Nagios retain state information between program restarts, you must enable the retain_state_information option.

Check Result Path :    check_result_path=/var/spool/nagios/checkresults
This option determines which directory Nagios will use to temporarily store host and service check results before they are processed.

Host Performance Data File :     host_perfdata_file=/usr/local/nagios/var/host-perfdata.dat
This option allows you to specify a file to which host performance data will be written after every host check. Data will be written to the performance file as specified by the host_perfdata_file_template option. Performance data is only written to this file if the process_performance_data option is enabled globally and if the process_perf_data directive in the host definition is enabled.

Service Performance Data File:   service_perfdata_file=/usr/local/nagios/var/service-perfdata.dat
This option allows you to specify a file to which service performance data will be written after every service check. Data will be written to the performance file as specified by the service_perfdata_file_template option. Performance data is only written to this file if the process_performance_data option is enabled globally and if the process_perf_data directive in the service definition is enabled

Debug File :   debug_file=/usr/local/nagios/var/nagios.debug
This option determines where Nagios should write debugging information. What (if any) information is written is determined by the debug_level and debug_verbosity options. You can have Nagios automatically rotate the debug file when it reaches a certain size by using the max_debug_file_size option.

6. Explain Host and Service Check Execution Option?
Ans: This option determines whether or not Nagios will execute Host/service checks when it initially (re)starts. If this option is disabled, Nagios will not actively execute any service checks and will remain in a sort of “sleep” mode. This option is most often used when configuring backup monitoring servers or when setting up a distributed monitoring environment.
Note: If you have state retention enabled, Nagios will ignore this setting when it (re)starts and use the last known setting for this option (as stored in the state retention file), unless you disable the use_retained_program_state option. If you want to change this option when state retention is active (and the use_retained_program_state is enabled), you’ll have to use the appropriate external command or change it via the web interface.
Values are as follows:
0 = Don’t execute host/service checks
1 = Execute host/service checks (default)
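In nagios.cfg these values correspond to the following directives (a sketch; the exact line numbers vary by version):
execute_host_checks=1
execute_service_checks=1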

7. Explain active and Passive check in Nagios?
Ans:    Nagios will monitor hosts and services in two ways: actively and passively. Active checks are the most common method for monitoring hosts and services; they are initiated by the Nagios process.
A. Active checks:
1.Active checks are run on a regularly scheduled basis
2.Active checks are initiated by the check logic in the Nagios daemon.
When Nagios needs to check the status of a host or service it will execute a plugin and pass it information about what needs to be checked. The plugin will then check the operational state of the host or service and report the results back to the Nagios daemon. Nagios will process the results of the host or service check and take appropriate action as necessary (e.g. send notifications, run event handlers, etc).
Active checks are executed at regular intervals, as defined by the check_interval and retry_interval options in your host and service definitions, and on-demand as needed.
Regularly scheduled checks occur at intervals equaling either the check_interval or the retry_interval in your host or service definitions, depending on what type of state the host or service is in. If a host or service is in a HARD state, it will be actively checked at intervals equal to the check_interval option. If it is in a SOFT state, it will be checked at intervals equal to the retry_interval option.
On-demand checks are performed whenever Nagios sees a need to obtain the latest status information about a particular host or service. For example, when Nagios is determining the reachability of a host, it will often perform on-demand checks of parent and child hosts to accurately determine the status of a particular network segment. On-demand checks also occur in the predictive dependency check logic in order to ensure Nagios has the most accurate status information.
b.Passive checks:
The key features of passive checks are as follows:
1.Passive checks are initiated and performed by external applications/processes
2.Passive check results are submitted to Nagios for processing
The major difference between active and passive checks is that active checks are initiated and performed by Nagios, while passive checks are performed by external applications.
Passive checks are useful for monitoring services that are:
Asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis
Located behind a firewall and cannot be checked actively from the monitoring host
Examples of asynchronous services that lend themselves to being monitored passively include SNMP traps and security alerts. You never know how many (if any) traps or alerts you’ll receive in a given time frame, so it’s not feasible to just monitor their status every few minutes.Passive checks are also used when configuring distributed or redundant monitoring installations.
Here’s how passive checks work in more detail…
An external application checks the status of a host or service.
The external application writes the results of the check to the external command file.
The next time Nagios reads the external command file it will place the results of all passive checks into a queue for later processing. The same queue that is used for storing results from active checks is also used to store the results from passive checks.
Nagios will periodically execute a check result reaper event and scan the check result queue. Each service check result that is found in the queue is processed in the same manner – regardless of whether the check was active or passive. Nagios may send out notifications, log alerts, etc. depending on the check result information.

8. How to verify Nagios configuration ..?
Ans:  In order to verify your configuration, run Nagios with the -v command line option like so:
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
If you’ve forgotten to enter some critical data or misconfigured things, Nagios will spit out a warning or error message that should point you to the location of the problem. Error messages generally print out the line in the configuration file that seems to be the source of the problem. On errors, Nagios will often exit the pre-flight check and return to the command prompt after printing only the first error that it has encountered.

9. What Are Objects?
Ans:    Objects are all the elements that are involved in the monitoring and notification logic.
Types of objects include:

Services are one of the central objects in the monitoring logic. Services are associated with hosts and represent attributes of a host (CPU load, disk usage, uptime, etc.).

Service Groups are groups of one or more services. Service groups can make it easier to (1) view the status of related services in the Nagios web interface and (2) simplify your configuration through the use of object tricks.

Hosts are one of the central objects in the monitoring logic. Hosts are usually physical devices on your network (servers, workstations, routers, switches, printers, etc).
Host Groups are groups of one or more hosts. Host groups can make it easier to (1) view the status of related hosts in the Nagios web interface and (2) simplify your configuration through the use of object tricks.

Contacts are the people involved in the notification process.
Contact Groups are groups of one or more contacts. Contact groups can make it easier to define all the people who get notified when certain host or service problems occur.
Commands are used to tell Nagios what programs, scripts, etc. it should execute to perform host and service checks, send notifications, and so on.
Time Periods are used to control when hosts and services can be monitored.
Notification Escalations are used for escalating notifications.

10. What Are Plugins?
Ans:    Plugins are compiled executables or scripts (Perl scripts, shell scripts, etc.) that can be run from a command line to check the status of a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network.
Nagios will execute a plugin whenever there is a need to check the status of a service or host. The plugin does something (notice the very general term) to perform the check and then simply returns the results to Nagios. Nagios will process the results that it receives from the plugin and take any necessary actions (running event handlers, sending out notifications, etc).

11. How Do I Use Plugin X?
Ans:    We have to download the plugins from nagios exchange https://exchange.nagios.org/. Then check the nagios plugin by running manually.
Most plugins will display basic usage information when you execute them using ‘-h’ or ‘--help’ on the command line.

12. How to generate Performance graphs..?
Ans: In Nagios Core there is no built-in option to generate performance graphs; we have to install pnp4nagios and add host and service URLs in the definition files.

13. What is the difference between NagiosXI and Nagios Core ..?
Ans:  Nagios XI is a paid version and Nagios Core is a free version.
Nagios XI includes a lot of features which we can modify using the web interface. Nagios Core by default does not include all these features; we have to implement them by installing plugins.

14. When Does Nagios Check For External Commands?
Ans:     At regular intervals specified by the command_check_interval option in the main configuration file
Immediately after event handlers are executed. This is in addition to the regular cycle of external command checks and is done to provide immediate action if an event handler submits commands to Nagios.
External commands that are written to the command file have the following format
[time] command_id;command_arguments
where time is the time (in time_t format) that the external application submitted the external command to the command file. The values for the command_id and command_arguments arguments will depend on what command is being submitted to Nagios.
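As a sketch (the host name, service description and plugin output are hypothetical), a passive service check result can be submitted to the external command file like this:
now=$(date +%s)
printf "[%s] PROCESS_SERVICE_CHECK_RESULT;host1;Disk Usage;0;DISK OK\n" "$now" > /usr/local/nagios/var/rw/nagios.cmd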

15. Explain Nagios State Types?
Ans:   The current state of monitored services and hosts is determined by two components:
The status of the service or host (i.e. OK, WARNING, UP, DOWN, etc.)
The type of state the service or host is in
There are two state types in Nagios – SOFT states and HARD states. These state types are a crucial part of the monitoring logic, as they are used to determine when event handlers are executed and when notifications are initially sent out.

A.Soft States:
When a service or host check results in a non-OK or non-UP state and the service check has not yet been (re)checked the number of times specified by the max_check_attempts directive in the service or host definition. This is called a soft error.
When a service or host recovers from a soft error. This is considered a soft recovery.
The following things occur when hosts or services experience SOFT state changes:
The SOFT state is logged. Event handlers are executed to handle the SOFT state. SOFT states are only logged if you enabled the log_service_retries or log_host_retries options in your main configuration file.
The only important thing that really happens during a soft state is the execution of event handlers. Using event handlers can be particularly useful if you want to try and proactively fix a problem before it turns into a HARD state. The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of “SOFT” when event handlers are executed, which allows your event handler scripts to know when they should take corrective action.

B.Hard states :
occur for hosts and services in the following situations:
When a host or service check results in a non-UP or non-OK state and it has been (re)checked the number of times specified by the max_check_attempts
option in the host or service definition. This is a hard error state.
When a host or service transitions from one hard error state to another error state (e.g. WARNING to CRITICAL).
When a service check results in a non-OK state and its corresponding host is either DOWN or UNREACHABLE.
When a host or service recovers from a hard error state. This is considered to be a hard recovery.
When a passive host check is received. Passive host checks are treated as HARD unless the passive_host_checks_are_soft option is enabled.
The following things occur when hosts or services experience HARD state changes:
The HARD state is logged.
Event handlers are executed to handle the HARD state.
Contacts are notified of the host or service problem or recovery.
The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of “HARD” when event handlers are executed, which allows your event handler scripts to know when they should take corrective action.

16. What is State Stalking?
Ans:     Stalking is purely for logging purposes. When stalking is enabled for a particular host or service, Nagios will watch that host or service very carefully and log any changes it sees in the output of check results. As you’ll see, it can be very helpful to you in later analysis of the log files. Under normal circumstances, the result of a host or service check is only logged if the host or service has changed state since it was last checked. There are a few exceptions to this, but for the most part, that’s the rule.

If you enable stalking for one or more states of a particular host or service, Nagios will log the results of the host or service check if the output from the check differs from the output from the previous check.

17. Explain how  Flap Detection works in Nagios?
Ans:  Nagios supports optional detection of hosts and services that are “flapping”. Flapping occurs when a service or host changes state too frequently, resulting in a storm of problem and recovery notifications. Flapping can be indicative of configuration problems (i.e. thresholds set too low), troublesome services, or real network problems.

Whenever Nagios checks the status of a host or service, it will check to see if it has started or stopped flapping. It does this by:
Storing the results of the last 21 checks of the host or service
Analyzing the historical check results and determine where state changes/transitions occur
Using the state transitions to determine a percent state change value (a measure of change) for the host or service

Comparing the percent state change value against low and high flapping thresholds
A host or service is determined to have started flapping when its percent state change first exceeds a high flapping threshold.

A host or service is determined to have stopped flapping when its percent state change goes below a low flapping threshold (assuming that it was previously flapping).

The historical service check results are examined to determine where state changes/transitions occur. State changes occur when an archived state is different from the archived state that immediately precedes it chronologically. Since the results of the last 21 service checks are kept, there is a possibility of having at most 20 state changes.

The flap detection logic uses the state changes to determine an overall percent state change for the service. This is a measure of volatility/change for the service. Services that never change state will have a 0% state change value, while services that change state each time they’re checked will have 100% state change. Most services will have a percent state change somewhere in between.

18. Explain Distributed Monitoring ?
Ans:   Nagios can be configured to support distributed monitoring of network services and resources.
When setting up a distributed monitoring environment with Nagios, there are differences in the way the central and distributed servers are configured.

The function of a distributed server is to actively perform checks of all the services you define for a "cluster" of hosts; a cluster here basically just means an arbitrary group of hosts on your network. Depending on your network layout, you may have several clusters at one physical location, or each cluster may be separated by a WAN, its own firewall, etc. There is one distributed server that runs Nagios and monitors the services on the hosts in each cluster. A distributed server is usually a bare-bones installation of Nagios. It doesn’t have to have the web interface installed, send out notifications, run event handler scripts, or do anything other than execute service checks if you don’t want it to.
The purpose of the central server is to simply listen for service check results from one or more distributed servers. Even though services are occasionally actively checked from the central server, the active checks are only performed in dire circumstances.

19. What is NRPE?
Ans:  The Nagios Remote Plugin Executor addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor “local” resources (like CPU load, memory usage, etc.) on remote machines. Since these public resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines.

The NRPE addon consists of two pieces:
– The check_nrpe plugin, which resides on the local monitoring machine
– The NRPE daemon, which runs on the remote Linux/Unix machine
When Nagios needs to monitor a resource or service on a remote Linux/Unix machine:
– Nagios will execute the check_nrpe plugin and tell it what service needs to be checked
– The check_nrpe plugin contacts the NRPE daemon on the remote host over an (optionally) SSL-protected connection
– The NRPE daemon runs the appropriate Nagios plugin to check the service or resource
– The results from the service check are passed from the NRPE daemon back to the check_nrpe plugin, which
then returns the check results to the Nagios process.
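A hedged example of the two pieces in action (the host address and thresholds are placeholders):
/usr/local/nagios/libexec/check_nrpe -H 192.168.1.10 -c check_load      # run on the Nagios monitoring server
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20   # defined in nrpe.cfg on the remote host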

20.What is NDOUTILS ?
Ans:  The NDOUTILS addon is designed to store all configuration and event data from Nagios in a database. Storing information from Nagios in a database will allow for quicker retrieval and processing of that data and will help serve as a foundation for the development of a new PHP-based web interface in Nagios 4.1.

MySQL databases are currently supported by the addon and PostgreSQL support is in development.
The NDOUTILS addon was designed to work for users who have:
– Single Nagios installations
– Multiple standalone or “vanilla” Nagios installations
– Multiple Nagios installations in distributed, redundant, and/or fail over environments.

Each Nagios process, whether it is a standalone monitoring server or part of a distributed, redundant, or failover monitoring setup, is referred to as an “instance”. In order to maintain the integrity of stored data, each Nagios instance must be labeled with a unique identifier or name.

21. What are the components that make up the NDO utilities ?
Ans:
There are four main components that make up the NDO utilities:
NDOMOD Event Broker Module : The NDO utilities include a Nagios event broker module (NDOMOD.O) that exports data from the Nagios daemon. Once the module has been loaded by the Nagios daemon, it can access all of the data and logic present in the running Nagios process. The NDOMOD module has been designed to export configuration data, as well as information about various run-time events that occur in the monitoring process, from the Nagios daemon. The module can send this data to a standard file, a Unix domain socket, or a TCP socket.
LOG2NDO Utility : The LOG2NDO utility has been designed to allow you to import historical Nagios and NetSaint log files into a database via the NDO2DB daemon (described later). The utility works by sending historical log file data to a standard file, a Unix domain socket, or a TCP socket in a format the NDO2DB daemon understands. The NDO2DB daemon can then be used to process that output and store the historical log file  information in a database.
FILE2SOCK Utility :  The FILE2SOCK utility is quite simple. It reads input from a standard file (or STDIN) and writes all of that data to either a Unix domain socket or a TCP socket. The data that is read is not processed in any way before it is sent to the socket.
NDO2DB Daemon:   The NDO2DB utility is designed to take the data output from the NDOMOD and LOG2NDO components and store it in a MySQL or PostgreSQL database. When it starts, the NDO2DB daemon creates either a TCP or Unix domain socket and waits for clients to connect. NDO2DB can run either as a standalone, multi-process daemon or under INETD (if using a TCP socket). Multiple clients can connect to the NDO2DB daemon’s socket and transmit data simultaneously. A separate NDO2DB process is spawned to handle each new client that connects. Data is read from each client and stored in a user-specified database for later retrieval and processing.

22. What are the Operating Systems we can monitor using Nagios..?
Ans:  Any operating system can be monitored using Nagios, as long as the OS supports installing a Nagios client (agent) or SNMP.

23. What is database is used by Nagios to store collected status data..?

Ans: Nagios Core does not use a relational database by default; it stores current status data in flat files (status.dat and retention.dat), while performance data for graphs is commonly stored in RRD format (for example via pnp4nagios).