Wednesday, February 10, 2016

Nginx Server Interview Questions and Answers in Linux


What is Nginx Server?
Nginx (pronounced "engine x") is an open source web server and a reverse proxy server for HTTP, SMTP, POP3, and IMAP protocols, with a strong focus on high concurrency, performance and low memory usage.

What is the Best Usage of Nginx Server?
Nginx can serve dynamic HTTP content on a network using FastCGI or SCGI handlers for scripts, WSGI application servers, or the Phusion Passenger module, and it can serve as a software load balancer.

What’s the Difference between Apache Web Server and Nginx?
Nginx uses an asynchronous event-driven approach to handling requests, instead of the Apache HTTP Server model that defaults to a threaded or process-oriented approach. Nginx's event-driven approach can provide more predictable performance under high loads.

Describe Some Best Features of Nginx?
Handling many simultaneous connections with low memory usage, auto indexing, load balancing, reverse proxying with caching, and fault tolerance.

What is the Configuration File for Nginx and where is it located on UNIX-like Systems?
The configuration file is named nginx.conf and, depending on the distribution, is placed in one of these directories:
/usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx
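For reference, a minimal nginx.conf might look like the sketch below. The server block, hostname, and paths are illustrative only; real distribution defaults vary:

```nginx
worker_processes  1;

events {
    worker_connections  1024;   # max simultaneous connections per worker
}

http {
    server {
        listen       80;
        server_name  example.com;        # hypothetical hostname

        location / {
            root   /usr/share/nginx/html;
            index  index.html;
        }
    }
}
```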

What is the Master and Worker Processes in Nginx Server?
The main purpose of the master process is to read and evaluate configuration, and maintain worker processes. Worker processes do actual processing of requests.

How to define Worker Processes?
The number of worker processes is defined in the configuration file and may be fixed for a given configuration or automatically adjusted to the number of available CPU cores.
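For example, in the main (top-level) context of nginx.conf; the auto value, available in modern nginx versions, matches the worker count to the number of CPU cores:

```nginx
worker_processes  auto;   # or a fixed number, e.g. worker_processes 4;
```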


Where is the Process ID for the Nginx Server written?
The process ID of the master process is written to the file /usr/local/nginx/logs/nginx.pid

What are the controls used in Nginx Server?
There are only a few control signals associated with the Nginx server, and these are as below:
nginx -s [stop | quit | reopen | reload]

How to reload configuration file of Nginx Server?
You can reload Nginx configuration file by running this command: nginx -s reload

How to reopen the log files in Nginx Server?
You can use: nginx -s reopen

What is the purpose of -s with Nginx Server?
The -s parameter is used to send a signal (stop, quit, reopen, or reload) to the nginx master process.

How to add Modules in Nginx Server?
Nginx modules must be selected at compile time; run-time selection of modules is not currently supported.

Describe some of Nginx's Architecture?
In brief, nginx's architecture consists of a single master process and multiple worker processes: the master reads the configuration and maintains the workers, while each worker handles many connections using non-blocking, event-driven I/O.



what are the configuration files available for Nginx ?

1. Default configuration directory: /etc/nginx/
2. Default SSL and vhost config directory: /etc/nginx/conf.d/
3. Default log file directory: /var/log/nginx/
4. Default document root directory: /usr/share/nginx/html
5. Default configuration file: /etc/nginx/nginx.conf
6. Default server access log file: /var/log/nginx/access.log
7. Default server error log file: /var/log/nginx/error.log

Tuesday, February 9, 2016

How do you check whether a Linux machine is Physical or Virtual ?

There is no hard and fast rule to check whether a machine is physical or virtual, but we do have some commands which can be used for this purpose.

NOTE: There might be other commands as well, but these are the few I know of.

The command used to view hardware-related information on any Linux machine is
# dmidecode

But the output would be very long and it is hard to find the specific details you are looking for. So, let's narrow it down.

Physical Servers
# dmidecode   -s  system-product-name
System x3550 M2 -[7234AC1]-

As you can see, on one of my servers the above output identifies the machine as a System x3550 M2.

Now to get more details about the system
# dmidecode | less (And search for "System Information")
System Information
        Manufacturer: DELL
        Product Name: System x3550 M2 -[7284AC1]-
        Version: 00
        Wake-up Type: Other
        SKU Number: XxXxXxX
        Family: System x

Some more examples from different products
# dmidecode  -s system-product-name
ProLiant BL460c G6

# dmidecode | less
System Information
        Manufacturer: HP
        Product Name: ProLiant BL460c G6
        Version: Not Specified
        Wake-up Type: Power Switch
        Family: ProLiant

For a DELL blade chassis
# dmidecode -s system-product-name
BladeCenter HS33 -[5940PRX]-

# dmidecode | less
System Information
        Manufacturer: DELL
        Product Name: BladeCenter HS33 -[5940PRX]-
        Version: 06
        Wake-up Type: Other
        SKU Number: XxXxXxX
        Family: System x

Virtual Servers
# dmidecode -s system-product-name
VMware Virtual Platform

# dmidecode | less
System Information
        Manufacturer: VMware, Inc.
        Product Name: VMware Virtual Platform
        Version: None
        Wake-up Type: Power Switch
        SKU Number: Not Specified
        Family: Not Specified

On a virtual server running VMware, you can also run the command below to verify:
# lspci | grep -i vmware

00:0f.0 VGA compatible controller: VMware SVGA II Adapter
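If you save dmidecode output to a file (for example, when collecting inventories from many hosts), a small grep sketch can pull out just the System Information block without paging through the full output. The sample data below is illustrative, not from a real host:

```shell
# Hypothetical sample of dmidecode output from a VMware guest, saved to a file.
cat > dmi-sample.txt <<'EOF'
Handle 0x0001, DMI type 1, 27 bytes
System Information
        Manufacturer: VMware, Inc.
        Product Name: VMware Virtual Platform
        Wake-up Type: Power Switch
Handle 0x0002, DMI type 2, 15 bytes
EOF

# Print the "System Information" line plus the three lines that follow it.
grep -A3 '^System Information' dmi-sample.txt
```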

Monday, February 8, 2016

Nagios Interview Questions and Answers in Linux


1. What is Nagios and how does it work?
Ans:   Nagios is an open source system and network monitoring application. Nagios runs on a server, usually as a daemon or service, and periodically runs plugins to monitor clients; if it finds anything in a WARNING or CRITICAL state, it sends alerts via email or SMS as per the configuration.
The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if those results change.

2. What port numbers does Nagios use to monitor clients?
Ans: Port numbers are 5666, 5667 and 5668

3. Explain Main Configuration file and its location?
Ans:
Resource File : It is used to store sensitive information like usernames and passwords without making them available to the CGIs. Default path: /usr/local/nagios/etc/resource.cfg

Object Definition Files: This is where you define what you want to monitor and how you want to monitor it. It is used to define hosts, services, hostgroups, contacts, contact groups, commands, etc. Default path: /usr/local/nagios/etc/objects/

CGI Configuration File : The CGI configuration file contains a number of directives that affect the operation of the CGIs. It also contains a reference to the main configuration file, so the CGIs know how you’ve configured Nagios and where your object definitions are stored. Default path: /usr/local/nagios/etc/cgi.cfg

4. A Nagios administrator is adding 100+ clients to monitoring, but he doesn’t want to add a .cfg file entry for each one in nagios.cfg; instead he wants to enable a directory path. How can he configure a directory for all configuration files?
Ans: He can achieve the above scenario by adding a cfg_dir line for the directory path to the nagios.cfg file, for example:
cfg_dir=/usr/local/nagios/etc/objects/monitor

5. Explain Nagios files and its location?
Ans:
The main configuration file is usually named nagios.cfg and located in the /usr/local/nagios/etc/ directory by default.

Object Configuration File : This directive is used to specify an object configuration file containing object definitions that Nagios should use for monitoring.
cfg_file=/usr/local/nagios/etc/hosts.cfg
cfg_file=/usr/local/nagios/etc/services.cfg
cfg_file=/usr/local/nagios/etc/commands.cfg

Object Configuration Directory :This directive is used to specify a directory which contains object configuration files that Nagios should use for monitoring.

cfg_dir=/usr/local/nagios/etc/commands
cfg_dir=/usr/local/nagios/etc/services
cfg_dir=/usr/local/nagios/etc/hosts

Object Cache File :This directive is used to specify a file in which a cached copy of object definitions should be stored.

object_cache_file=/usr/local/nagios/var/objects.cache

Precached Object File: precached_object_file=/usr/local/nagios/var/objects.precache

Resource File: resource_file=/usr/local/nagios/etc/resource.cfg
This is used to specify an optional resource file that can contain $USERn$ macro definitions. $USERn$ macros are useful for storing usernames, passwords, and items commonly used in command definitions.

Temp File : temp_path=/tmp
This is a directory that Nagios can use as scratch space for creating temporary files used during the monitoring process. You should run tmpwatch, or a similar utility, on this directory occasionally to delete files older than 24 hours.

Status File :  status_file=/usr/local/nagios/var/status.dat
This is the file that Nagios uses to store the current status, comment, and downtime information. This file is used by the CGIs so that current monitoring status can be reported via a web interface. The CGIs must have read access to this file in order to function properly. This file is deleted every time Nagios stops and recreated when it starts.

Log Archive Path :  log_archive_path=/usr/local/nagios/var/archives/
This is the directory where Nagios should place log files that have been rotated. This option is ignored if you choose to not use the log rotation functionality.

External Command File :  command_file=/usr/local/nagios/var/rw/nagios.cmd
This is the file that Nagios will check for external commands to process. The command CGI writes commands to this file. The external command file is implemented as a named pipe (FIFO), which is created when Nagios starts and removed when it shuts down. If the file exists when Nagios starts, the Nagios process will terminate with an error message. Always restrict permissions on this file so that only authorized users can submit commands.

Lock File :  lock_file=/tmp/nagios.lock
This option specifies the location of the lock file that Nagios should create when it runs as a daemon (when started with the -d command line argument). This file contains the process id (PID) number of the running Nagios process.

State Retention File:  state_retention_file=/usr/local/nagios/var/retention.dat
This is the file that Nagios will use for storing status, downtime, and comment information before it shuts down. When Nagios is restarted it will use the information stored in this file for setting the initial states of services and hosts before it starts monitoring anything. In order to make Nagios retain state information between program restarts, you must enable the retain_state_information option.

Check Result Path :    check_result_path=/var/spool/nagios/checkresults
This option determines which directory Nagios will use to temporarily store host and service check results before they are processed.

Host Performance Data File :     host_perfdata_file=/usr/local/nagios/var/host-perfdata.da
This option allows you to specify a file to which host performance data will be written after every host check. Data will be written to the performance file as specified by the host_perfdata_file_template option. Performance data is only written to this file if the process_performance_data option is enabled globally and if the process_perf_data directive in the host definition is enabled.

Service Performance Data File:   service_perfdata_file=/usr/local/nagios/var/service-perfdata.dat
This option allows you to specify a file to which service performance data will be written after every service check. Data will be written to the performance file as specified by the service_perfdata_file_template option. Performance data is only written to this file if the process_performance_data option is enabled globally and if the process_perf_data directive in the service definition is enabled.

Debug File :   debug_file=/usr/local/nagios/var/nagios.debug
This option determines where Nagios should write debugging information. What (if any) information is written is determined by the debug_level and debug_verbosity options. You can have Nagios automatically rotate the debug file when it reaches a certain size by using the max_debug_file_size option.

6. Explain Host and Service Check Execution Option?
Ans: This option determines whether or not Nagios will execute Host/service checks when it initially (re)starts. If this option is disabled, Nagios will not actively execute any service checks and will remain in a sort of “sleep” mode. This option is most often used when configuring backup monitoring servers or when setting up a distributed monitoring environment.
Note: If you have state retention enabled, Nagios will ignore this setting when it (re)starts and use the last known setting for this option (as stored in the state retention file), unless you disable the use_retained_program_state option. If you want to change this option when state retention is active (and the use_retained_program_state is enabled), you’ll have to use the appropriate external command or change it via the web interface.
Values are as follows:
0 = Don’t execute host/service checks
1 = Execute host/service checks (default)

7. Explain active and Passive check in Nagios?
Ans:    Nagios monitors hosts and services in two ways: actively and passively. Active checks are the most common method for monitoring hosts and services; they are initiated by the Nagios process. The main features of active checks are as follows:
A. Active checks:
1.Active checks are run on a regularly scheduled basis
2.Active checks are initiated by the check logic in the Nagios daemon.
When Nagios needs to check the status of a host or service it will execute a plugin and pass it information about what needs to be checked. The plugin will then check the operational state of the host or service and report the results back to the Nagios daemon. Nagios will process the results of the host or service check and take appropriate action as necessary (e.g. send notifications, run event handlers, etc).
Active checks are executed at regular intervals, as defined by the check_interval and retry_interval options in your host and service definitions, and on-demand as needed.
Regularly scheduled checks occur at intervals equaling either the check_interval or the retry_interval in your host or service definitions, depending on what type of state the host or service is in. If a host or service is in a HARD state, it will be actively checked at intervals equal to the check_interval option. If it is in a SOFT state, it will be checked at intervals equal to the retry_interval option.
On-demand checks are performed whenever Nagios sees a need to obtain the latest status information about a particular host or service. For example, when Nagios is determining the reachability of a host, it will often perform on-demand checks of parent and child hosts to accurately determine the status of a particular network segment. On-demand checks also occur in the predictive dependency check logic in order to ensure Nagios has the most accurate status information.
B. Passive checks:
The key features of passive checks are as follows:
1.Passive checks are initiated and performed by external applications/processes
2.Passive check results are submitted to Nagios for processing
The major difference between active and passive checks is that active checks are initiated and performed by Nagios, while passive checks are performed by external applications.
Passive checks are useful for monitoring services that are:
Asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis
Located behind a firewall and cannot be checked actively from the monitoring host
Examples of asynchronous services that lend themselves to being monitored passively include SNMP traps and security alerts. You never know how many (if any) traps or alerts you’ll receive in a given time frame, so it’s not feasible to just monitor their status every few minutes. Passive checks are also used when configuring distributed or redundant monitoring installations.
Here’s how passive checks work in more detail…
An external application checks the status of a host or service.
The external application writes the results of the check to the external command file.
The next time Nagios reads the external command file it will place the results of all passive checks into a queue for later processing. The same queue that is used for storing results from active checks is also used to store the results from passive checks.
Nagios will periodically execute a check result reaper event and scan the check result queue. Each service check result that is found in the queue is processed in the same manner – regardless of whether the check was active or passive. Nagios may send out notifications, log alerts, etc. depending on the check result information.

8. How to verify Nagios configuration ..?
Ans:  In order to verify your configuration, run Nagios with the -v command line option like so:
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
If you’ve forgotten to enter some critical data or misconfigured things, Nagios will spit out a warning or error message that should point you to the location of the problem. Error messages generally print out the line in the configuration file that seems to be the source of the problem. On errors, Nagios will often exit the pre-flight check and return to the command prompt after printing only the first error that it has encountered.

9. What Are Objects?
Ans:    Objects are all the elements that are involved in the monitoring and notification logic.
Types of objects include:

Services are one of the central objects in the monitoring logic. Services are associated with hosts and represent attributes of a host (CPU load, disk usage, uptime, etc.)

Service Groups are groups of one or more services. Service groups can make it easier to (1) view the status of related services in the Nagios web interface and (2) simplify your configuration through the use of object tricks.

Hosts are one of the central objects in the monitoring logic. Hosts are usually physical devices on your network (servers, workstations, routers, switches, printers, etc).
Host Groups   are groups of one or more hosts. Host groups can make it easier to (1) view the status of related hosts in the Nagios web interface and (2) simplify your configuration through the use of object tricks

Contacts are the contact information of the people involved in the notification process.
Contact Groups are groups of one or more contacts. Contact groups can make it easier to define all the people who get notified when certain host or service problems occur.
Commands are used to tell Nagios what programs, scripts, etc. it should execute to perform host and service checks, send notifications, and so on.
Time Periods are used to control when hosts and services can be monitored.
Notification Escalations are used for escalating notifications.
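To make the host and service object types concrete, here is a minimal, illustrative pair of definitions (the host name, address, and templates are hypothetical; adjust them to your own setup):

```cfg
define host {
    use                  linux-server     ; template name (hypothetical)
    host_name            web01
    alias                Web Server 01
    address              192.168.1.10
    max_check_attempts   3
}

define service {
    use                  generic-service  ; template name (hypothetical)
    host_name            web01
    service_description  HTTP
    check_command        check_http
}
```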

10. What Are Plugins?
Ans:    Plugins are compiled executables or scripts (Perl scripts, shell scripts, etc.) that can be run from a command line to check the status of a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network.
Nagios will execute a plugin whenever there is a need to check the status of a service or host. The plugin does something (notice the very general term) to perform the check and then simply returns the results to Nagios. Nagios will process the results that it receives from the plugin and take any necessary actions (running event handlers, sending out notifications, etc).

11. How Do I Use Plugin X?
Ans:    Download the plugin from the Nagios Exchange (https://exchange.nagios.org/), then test it by running it manually.
Almost all plugins will display basic usage information when you execute them with ‘-h’ or ‘–help’ on the command line.

12. How to generate Performance graphs..?
Ans: Nagios Core has no built-in option to generate performance graphs. We have to install pnp4nagios and add the host and service URLs in the definition files.

13. What is the difference between NagiosXI and Nagios Core ..?
Ans:  NagiosXI is the paid version and Nagios Core is the free version.
NagiosXI includes a lot of features that can be modified using the web interface. Nagios Core does not include all of these features by default; we have to implement them by installing plugins.

14. When Does Nagios Check For External Commands?
Ans:     At regular intervals specified by the command_check_interval option in the main configuration file, and immediately after event handlers are executed. The latter is in addition to the regular cycle of external command checks and is done to provide immediate action if an event handler submits commands to Nagios.
External commands that are written to the command file have the following format:
[time] command_id;command_arguments
where time is the time (in time_t format) that the external application submitted the external command to the command file. The values for the command_id and command_arguments arguments will depend on what command is being submitted to Nagios.
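As a hedged sketch of that format, the snippet below builds a passive check result submission (PROCESS_SERVICE_CHECK_RESULT is a real external command; the host and service names are hypothetical). It writes to a scratch file here rather than the live command file:

```shell
# Build an external command in the "[time] command_id;command_arguments" format.
# web01/HTTP are made-up names; on a live system you would write to the external
# command file (e.g. /usr/local/nagios/var/rw/nagios.cmd) instead of a demo file.
now=$(date +%s)
printf '[%s] PROCESS_SERVICE_CHECK_RESULT;web01;HTTP;0;HTTP OK\n' "$now" > external-cmd-demo.txt
cat external-cmd-demo.txt
```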

15. Explain Nagios State Types?
Ans:   The current state of monitored services and hosts is determined by two components:
The status of the service or host (i.e. OK, WARNING, UP, DOWN, etc.)
The type of state the service or host is in
There are two state types in Nagios – SOFT states and HARD states. These state types are a crucial part of the monitoring logic, as they are used to determine when event handlers are executed and when notifications are initially sent out.

A.Soft States:
A soft error occurs when a service or host check results in a non-OK or non-UP state and the check has not yet been (re)checked the number of times specified by the max_check_attempts directive in the service or host definition.
A soft recovery occurs when a service or host recovers from a soft error.
The following things occur when hosts or services experience SOFT state changes:
The SOFT state is logged (SOFT states are only logged if you enable the log_service_retries or log_host_retries options in your main configuration file), and event handlers are executed to handle the SOFT state.
The only important thing that really happens during a soft state is the execution of event handlers. Using event handlers can be particularly useful if you want to try and proactively fix a problem before it turns into a HARD state. The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of “SOFT” when event handlers are executed, which allows your event handler scripts to know when they should take corrective action.

B.Hard states :
occur for hosts and services in the following situations:
When a host or service check results in a non-UP or non-OK state and it has been (re)checked the number of times specified by the max_check_attempts option in the host or service definition. This is a hard error state.
When a host or service transitions from one hard error state to another error state (e.g. WARNING to CRITICAL).
When a service check results in a non-OK state and its corresponding host is either DOWN or UNREACHABLE.
When a host or service recovers from a hard error state. This is considered to be a hard recovery.
When a passive host check is received. Passive host checks are treated as HARD unless the passive_host_checks_are_soft option is enabled.
The following things occur when hosts or services experience HARD state changes:
The HARD state is logged.
Event handlers are executed to handle the HARD state.
Contacts are notified of the host or service problem or recovery.
The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of “HARD” when event handlers are executed, which allows your event handler scripts to know when they should take corrective action.

16. What is State Stalking?
Ans:     Stalking is purely for logging purposes. When stalking is enabled for a particular host or service, Nagios will watch that host or service very carefully and log any changes it sees in the output of check results. As you’ll see, it can be very helpful to you in later analysis of the log files. Under normal circumstances, the result of a host or service check is only logged if the host or service has changed state since it was last checked. There are a few exceptions to this, but for the most part, that’s the rule.

If you enable stalking for one or more states of a particular host or service, Nagios will log the results of the host or service check if the output from the check differs from the output from the previous check.

17. Explain how  Flap Detection works in Nagios?
Ans:  Nagios supports optional detection of hosts and services that are “flapping”. Flapping occurs when a service or host changes state too frequently, resulting in a storm of problem and recovery notifications. Flapping can be indicative of configuration problems (i.e. thresholds set too low), troublesome services, or real network problems.

Whenever Nagios checks the status of a host or service, it will check to see if it has started or stopped flapping. It does this by:
Storing the results of the last 21 checks of the host or service
Analyzing the historical check results to determine where state changes/transitions occur
Using the state transitions to determine a percent state change value (a measure of change) for the host or service

Comparing the percent state change value against low and high flapping thresholds
A host or service is determined to have started flapping when its percent state change first exceeds a high flapping threshold.

A host or service is determined to have stopped flapping when its percent state change goes below a low flapping threshold (assuming that it was previously flapping).

The historical service check results are examined to determine where state changes/transitions occur. State changes occur when an archived state is different from the archived state that immediately precedes it chronologically. Since the results of the last 21 service checks are kept, there is a possibility of having at most 20 state changes. In the example used here there are 7 state changes.

The flap detection logic uses the state changes to determine an overall percent state change for the service. This is a measure of volatility/change for the service. Services that never change state will have a 0% state change value, while services that change state each time they’re checked will have 100% state change. Most services will have a percent state change somewhere in between.
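The percent state change calculation above can be sketched with a little awk. This is a simplified, unweighted version (Nagios itself also weights more recent checks more heavily), using a made-up series of 21 check results containing 7 state changes:

```shell
# 21 most recent check results, oldest first (0=OK, 1=WARNING, 2=CRITICAL).
# Count transitions between adjacent results and express them as a percentage
# of the 20 possible transitions. (Real Nagios weights newer results more.)
results="0 1 1 0 0 2 2 2 0 0 0 1 1 0 0 0 0 0 0 0 2"
echo "$results" | awk '{
    changes = 0
    for (i = 2; i <= NF; i++) if ($i != $(i-1)) changes++
    printf "state changes: %d, percent state change: %.1f%%\n", changes, changes / (NF-1) * 100
}'
```

With the sample series above this prints 7 state changes and a 35.0% state change value, which Nagios would then compare against the low and high flapping thresholds.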

18. Explain Distributed Monitoring ?
Ans:   Nagios can be configured to support distributed monitoring of network services and resources.
When setting up a distributed monitoring environment with Nagios, there are differences in the way the central and distributed servers are configured.

The function of a distributed server is to actively perform checks of all the services you define for a “cluster” of hosts; a cluster here basically just means an arbitrary group of hosts on your network. Depending on your network layout, you may have several clusters at one physical location, or each cluster may be separated by a WAN, its own firewall, etc. There is one distributed server that runs Nagios and monitors the services on the hosts in each cluster. A distributed server is usually a bare-bones installation of Nagios. It doesn’t have to have the web interface installed, send out notifications, run event handler scripts, or do anything other than execute service checks if you don’t want it to.
The purpose of the central server is to simply listen for service check results from one or more distributed servers. Even though services are occasionally actively checked from the central server, the active checks are only performed in dire circumstances.

19. What is NRPE?
Ans:  The Nagios Remote Plugin Executor addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor “local” resources (like CPU load, memory usage, etc.) on remote machines. Since these resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines.

The NRPE addon consists of two pieces:
– The check_nrpe plugin, which resides on the local monitoring machine
– The NRPE daemon, which runs on the remote Linux/Unix machine
When Nagios needs to monitor a resource or service from a remote Linux/Unix machine:
– Nagios will execute the check_nrpe plugin and tell it what service needs to be checked
– The check_nrpe plugin contacts the NRPE daemon on the remote host over an (optionally) SSL-protected connection
– The NRPE daemon runs the appropriate Nagios plugin to check the service or resource
– The results from the service check are passed from the NRPE daemon back to the check_nrpe plugin, which then returns the check results to the Nagios process.
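As a hedged sketch, the two sides of an NRPE check look roughly like this; the check name, thresholds, and paths are common defaults and may differ on your system:

```cfg
# On the remote host, in nrpe.cfg -- a command the NRPE daemon is allowed to run:
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20

# On the Nagios server -- a command definition that invokes it via check_nrpe:
define command {
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
```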

20.What is NDOUTILS ?
Ans:  The NDOUTILS addon is designed to store all configuration and event data from Nagios in a database. Storing information from Nagios in a database will allow for quicker retrieval and processing of that data and will help serve as a foundation for the development of a new PHP-based web interface in Nagios 4.1.

MySQL databases are currently supported by the addon and PostgreSQL support is in development.
The NDOUTILS addon was designed to work for users who have:
– Single Nagios installations
– Multiple standalone or “vanilla” Nagios installations
– Multiple Nagios installations in distributed, redundant, and/or fail over environments.

Each Nagios process, whether it is a standalone monitoring server or part of a distributed, redundant, or failover monitoring setup, is referred to as an “instance”. In order to maintain the integrity of stored data, each Nagios instance must be labeled with a unique identifier or name.

21. What are the components that make up the NDO utilities ?
Ans:
There are four main components that make up the NDO utilities:
NDOMOD Event Broker Module : The NDO utilities include a Nagios event broker module (ndomod.o) that exports data from the Nagios daemon. Once the module has been loaded by the Nagios daemon, it can access all of the data and logic present in the running Nagios process. The NDOMOD module has been designed to export configuration data, as well as information about various runtime events that occur in the monitoring process, from the Nagios daemon. The module can send this data to a standard file, a Unix domain socket, or a TCP socket.
LOG2NDO Utility : The LOG2NDO utility has been designed to allow you to import historical Nagios and NetSaint log files into a database via the NDO2DB daemon (described later). The utility works by sending historical log file data to a standard file, a Unix domain socket, or a TCP socket in a format the NDO2DB daemon understands. The NDO2DB daemon can then be used to process that output and store the historical log file information in a database.
FILE2SOCK Utility :  The FILE2SOCK utility is quite simple. It reads input from a standard file (or STDIN) and writes all of that data to either a Unix domain socket or a TCP socket. The data that is read is not processed in any way before it is sent to the socket.
NDO2DB Daemon:   The NDO2DB utility is designed to take the data output from the NDOMOD and LOG2NDO components and store it in a MySQL or PostgreSQL database. When it starts, the NDO2DB daemon creates either a TCP or Unix domain socket and waits for clients to connect. NDO2DB can run either as a standalone, multi-process daemon or under INETD (if using a TCP socket). Multiple clients can connect to the NDO2DB daemon’s socket and transmit data simultaneously. A separate NDO2DB process is spawned to handle each new client that connects. Data is read from each client and stored in a user-specified database for later retrieval and processing.

22. What are the Operating Systems we can monitor using Nagios..?
Ans: Nagios can monitor virtually any operating system, as long as the OS can run a Nagios agent (such as NRPE) or supports SNMP.

23. Which database does Nagios use to store collected status data..?

Ans: By default, Nagios Core does not use a conventional database; it stores status and retention data in flat files (status.dat, retention.dat). RRD (round-robin database) files are used for performance-data graphing by add-ons such as PNP4Nagios.

File compression and archiving in Linux?

INTRODUCTION TO FILE COMPRESSION AND ARCHIVING

20 useful tar and zip commands

It is useful to store a group of files in one file for easy backup, for transfer to another directory, or for transfer to another computer. It is also useful to compress large files; compressed files take up less disk space and download faster via the Internet.

It is important to understand the distinction between an archive file and a compressed file. An archive file is a collection of files and directories stored in one file. The archive file is not compressed — it uses the same amount of disk space as all the individual files and directories combined. A compressed file is a collection of files and directories that are stored in one file and stored in a way that uses less disk space than all the individual files and directories combined. If disk space is a concern, compress rarely-used files, or place all such files in a single archive file and compress it.
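This distinction is easy to verify directly: a plain tar archive is roughly the sum of its members, while compressing the same data (here, highly repetitive text) shrinks it dramatically. A minimal sketch; the /tmp path and file names are illustrative:

```shell
# scratch directory for the demo (path is illustrative)
mkdir -p /tmp/ar-demo && cd /tmp/ar-demo

# two 100 KB files of repetitive text
yes "sample line of text" | head -c 102400 > a.txt
yes "sample line of text" | head -c 102400 > b.txt

# archive only: size is roughly the sum of the members
tar -cf plain.tar a.txt b.txt

# archive + compress: repetitive text shrinks dramatically
tar -czf packed.tar.gz a.txt b.txt

ls -l plain.tar packed.tar.gz
```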

Note: a tar file is an archive, not a compressed file; a compressed tar file (for example '.tar.gz') is both archived and compressed.
Several compression formats can be used together with the tar command; this article covers a few of them. All of them compress files and directories, but their compression ratios differ, so the choice of extension depends on how much compression you need.

1. gzip

2. bzip

3. zip

Syntax: tar -cvf <File Name.tar> <directory / file path>

1. ARCHIVING FILES USING TAR COMMAND

Archiving is not compression; it simply groups files and directories together into a single file instead of many. After creating an archive, its size is roughly the same as the combined size of the original files.

Let's see an example below

[root@Linuxforfreshers.com tar]# du -h *.txt   <<-- file sizes before creating the archive
44K     d.txt
44K     g.txt
44K     kumar.txt
44K     raghu.txt
44K     linux.txt
44K     test1.txt
44K     test2.txt
44K     test3.txt
44K     test4.txt

[root@Linuxforfreshers.com tar]# tar -cvf raghu.tar *.txt   <<-- create the archive
d.txt
g.txt
kumar.txt
raghu.txt
linux.txt
test1.txt
test2.txt
test3.txt
test4.txt

[root@Linuxforfreshers.com tar]# du -h raghu.tar   <<-- archive size after creation
380K    raghu.tar
Explanation of the tar command options:

-c create an archive file

-v verbose (display the status of each file as it is archived)

-f specify the archive file name

2. EXTRACTING AN ARCHIVE FILE

To extract an archive file, use the '-x' option with the tar command.

[root@Linuxforfreshers.com tar]# tar -xvf raghu.tar
d.txt
g.txt
kumar.txt
raghu.txt
linux.txt
test1.txt
test2.txt
test3.txt
test4.txt
3. UPDATING AN ARCHIVE FILE WITH NEWLY CREATED FILES

Sometimes we need to update an existing archive by adding only newly created files; adding just the new files saves a lot of time compared to recreating the archive.

In the example below, the '-u' option updates the tar file with newly created files:

[root@Linuxforfreshers.com tar]# touch Linuxforfreshers.coms.txt
[root@Linuxforfreshers.com tar]# tar -uvf raghu.tar *.txt
Linuxforfreshers.coms.txt
4. LIST FILES FROM ARCHIVE WITHOUT EXTRACTING THEM

We do not always need to extract an archive just to see its contents; for a large archive, extraction takes a lot of time and requires enough free disk space to hold the extracted files.

Use the '-t' option to list all the files inside an archive file:

[root@Linuxforfreshers.com tar]# tar -tf raghu.tar
d.txt
g.txt
kumar.txt
raghu.txt
linux.txt
test1.txt
test2.txt
test3.txt
test4.txt
Linuxforfreshers.coms.txt
5. EXTRACT SINGLE FILE FROM ARCHIVE

This option is very handy when we have a large archive file and need only a single file restored from it. To restore a single file, pass its exact name to tar after the archive name:

[root@Linuxforfreshers.com tar]# rm -rf *.txt   <<-- deleted all the .txt files from the current location
[root@Linuxforfreshers.com tar]# ls   <<-- files remaining after deletion
3  arkit10.doc  arkit1.doc  arkit2.doc  arkit3.doc  arkit5.doc  arkit6.doc  arkit7.doc  arkit8.doc  arkit9.doc  raghu.tar

[root@Linuxforfreshers.com tar]# tar -xvf raghu.tar Linuxforfreshers.coms.txt   <<-- restore a single file from the archive
Linuxforfreshers.coms.txt

[root@Linuxforfreshers.com tar]# ls   <<-- files after restoration
3            arkit1.doc  arkit3.doc  arkit6.doc  arkit8.doc  raghu.tar
arkit10.doc  arkit2.doc  arkit5.doc  arkit7.doc  arkit9.doc  Linuxforfreshers.coms.txt

The above example shows how to restore a single file from an archive.

6. EXTRACT MULTIPLE FILES FROM ARCHIVE (NOT ALL FILES)

In step 5 we extracted a single file from the archive; in the same way we can extract multiple files (but not all of them).

Note: to extract specific files from an archive you must know their exact names; use '-t' to list all the files in the archive first.

[root@Linuxforfreshers.com tar]# rm -rf Linuxforfreshers.coms.txt   <<-- deleted the previously extracted file for clarity
[root@Linuxforfreshers.com tar]# tar -xvf raghu.tar "Linuxforfreshers.coms.txt" "test1.txt"
test1.txt
Linuxforfreshers.coms.txt

[root@Linuxforfreshers.com tar]# ls
3            arkit1.doc  arkit3.doc  arkit6.doc  arkit8.doc  raghu.tar                  test1.txt
arkit10.doc  arkit2.doc  arkit5.doc  arkit7.doc  arkit9.doc  Linuxforfreshers.coms.txt

[root@Linuxforfreshers.com tar]# rm -rf Linuxforfreshers.coms.txt test1.txt
[root@Linuxforfreshers.com tar]# tar -xvf raghu.tar --wildcards '*.txt'   <<-- quote the pattern so tar, not the shell, expands it
d.txt
g.txt
kumar.txt
raghu.txt
linux.txt
test1.txt
test2.txt
test3.txt
test4.txt
Linuxforfreshers.coms.txt
Note: the earlier files were deleted for demonstration purposes only; do NOT delete files in your environment.

You can name multiple files explicitly, or use the wildcard option, to restore several files as shown in the example above.

7. COMPRESSING FILES IN GZIP

So far we have seen how to archive files (grouping them into a single file). Archiving alone gives no space savings, because an archive does not compress its members and the total size does not decrease; compressing files is what saves disk space. To create a 'gzip' file with the '.gz' extension, use the '-z' option along with the 'tar' command.

Let's see an example

[root@Linuxforfreshers.com tar]# tar -czvf linux.tar.gz *.txt
d.txt
g.txt
kumar.txt
raghu.txt
Linuxforfreshers.coms.txt
linux.txt
test1.txt
test2.txt
test3.txt
test4.txt

[root@Linuxforfreshers.com tar]# ls
3            arkit2.doc  arkit6.doc  arkit9.doc  kumar.txt  linux.tar.gz               test1.txt  test4.txt
arkit10.doc  arkit3.doc  arkit7.doc  d.txt       raghu.tar  Linuxforfreshers.coms.txt  test2.txt
arkit1.doc   arkit5.doc  arkit8.doc  g.txt       raghu.txt  linux.txt                  test3.txt

[root@Linuxforfreshers.com tar]# du -h linux.tar.gz
4.0K    linux.tar.gz

[root@Linuxforfreshers.com tar]# du -h *.txt
44K     d.txt
44K     g.txt
44K     kumar.txt
44K     raghu.txt
0       Linuxforfreshers.coms.txt
44K     linux.txt
44K     test1.txt
44K     test2.txt
44K     test3.txt
44K     test4.txt

As shown above, compressing the text files with '-z' produced a 4 KB file, while the combined size of the original files is about 380 KB.

8. COMPRESSING FILES USING BZIP

It works much like 'gzip', but the compression ratio of '.bz2' is generally higher than that of '.gz'. We will compress the same files used in the example above and check the resulting file size; for 'bzip2', use the '-j' option.

[root@Linuxforfreshers.com tar]# tar -cjvf 1linux.tar.bz2 *.txt
d.txt
g.txt
kumar.txt
raghu.txt
Linuxforfreshers.coms.txt
linux.txt
test1.txt
test2.txt
test3.txt
test4.txt

[root@Linuxforfreshers.com tar]# du -h 1linux.tar.bz2
4.0K    1linux.tar.bz2

A practical comparison of the '.gz' and '.bz2' compression methods follows below.

9. COMPRESSION RATIO OF .GZ (GZIP) AND .BZ2 (BZIP)

After compressing 34 MB of files with '.gz', the output file size is 8.6 MB.

Compressing the same files with '.bz2' produces a 7.2 MB output file; comparatively, the '.bz2' compression ratio is higher than '.gz'.

[root@Linuxforfreshers.com tar]# du -h tarr.tar.gz
8.6M    tarr.tar.gz
[root@Linuxforfreshers.com tar]# du -h tarr.tar.bz2
7.2M    tarr.tar.bz2
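The exact numbers depend on the data being compressed, but the comparison itself is easy to reproduce. A small sketch, assuming both gzip and bzip2 are installed; the /tmp path and input data are illustrative:

```shell
# scratch directory for the comparison (path is illustrative)
mkdir -p /tmp/ratio-demo && cd /tmp/ratio-demo

# ~1 MB of compressible text
yes "the quick brown fox jumps over the lazy dog" | head -c 1048576 > sample.txt

tar -czf sample.tar.gz  sample.txt   # gzip
tar -cjf sample.tar.bz2 sample.txt   # bzip2

# compare the compressed sizes in bytes
wc -c sample.tar.gz sample.tar.bz2
```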
10. EXTRACTING COMPRESSED FILES FROM 'GZIP' AND 'BZIP'

To extract 'gzip' and 'bzip2' files, use the '-x' option together with their respective options: '-z' for gzip and '-j' for bzip2.

Below is the example for extracting the 'bzip' file

[root@Linuxforfreshers.com tar]# tar -xjvf 1linux.tar.bz2
d.txt
g.txt
kumar.txt
raghu.txt
Linuxforfreshers.coms.txt
linux.txt
test1.txt
test2.txt
test3.txt
test4.txt
Below is the practical example for extracting the 'gzip' file

[root@Linuxforfreshers.com tar]# tar -xzvf linux.tar.gz
d.txt
g.txt
kumar.txt
raghu.txt
Linuxforfreshers.coms.txt
linux.txt
test1.txt
test2.txt
test3.txt
test4.txt
[root@Linuxforfreshers.com tar]#
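As a side note, modern GNU tar auto-detects the compression format on extraction, so a plain '-xf' usually works for both '.gz' and '.bz2' archives; the explicit '-z'/'-j' options matter mostly when creating archives or on older, non-GNU tar. A quick sketch (the /tmp path is illustrative):

```shell
# scratch directory (path is illustrative)
mkdir -p /tmp/autodetect-demo && cd /tmp/autodetect-demo
echo "hello" > note.txt
tar -czf note.tar.gz note.txt
rm note.txt

# no -z needed: GNU tar detects the gzip compression itself
tar -xf note.tar.gz
cat note.txt
```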
11. ZIPPING THE FILES USING ZIP COMMAND

The zip command compresses files into an archive with the '.zip' extension; zip is available on many platforms, such as Unix, Linux, Windows, and macOS.

Syntax:  zip <Destination File Path and Name>.zip  <source files to compress>

Below is an example of compressing files using the 'zip' command:

[root@Linuxforfreshers.com tar]# zip docfiles.zip *.txt
  adding: d.txt (deflated 100%)
  adding: g.txt (deflated 100%)
  adding: kumar.txt (deflated 100%)
  adding: raghu.txt (deflated 100%)
  adding: Linuxforfreshers.coms.txt (stored 0%)
  adding: linux.txt (deflated 100%)
  adding: test1.txt (deflated 100%)
  adding: test2.txt (deflated 100%)
  adding: test3.txt (deflated 100%)
  adding: test4.txt (deflated 100%)
[root@Linuxforfreshers.com tar]#
12. ZIPPING FILES AND DIRECTORIES ALONG WITH SUB DIRECTORIES AND ITS FILES

By default, compressing a directory with the 'zip' command does not include its sub-directories and their contents; to compress all sub-directories and their files as well, use the '-r' option with the zip command.

[root@Linuxforfreshers.com tar]# zip -r subdir.zip raghu/
  adding: raghu/ (stored 0%)
  adding: raghu/kumar/ (stored 0%)
  adding: raghu/kumar/linux/ (stored 0%)
  adding: raghu/kumar/linux/d.txt (deflated 100%)
  adding: raghu/kumar/linux/g.txt (deflated 100%)
  adding: raghu/kumar/linux/kumar.txt (deflated 100%)
  adding: raghu/kumar/linux/raghu.txt (deflated 100%)
13. COMPRESSING WITH HIGH COMPRESSION RATIO

A useful feature of the zip command is the compression level option, from 1 to 9, where 9 gives the highest compression.

[root@Linuxforfreshers.com tar]# zip -9 -r deepcompress.zip raghu/
  adding: raghu/ (stored 0%)
  adding: raghu/kumar/ (stored 0%)
  adding: raghu/kumar/linux/ (stored 0%)
  adding: raghu/kumar/linux/d.txt (deflated 100%)
  adding: raghu/kumar/linux/g.txt (deflated 100%)
  adding: raghu/kumar/linux/kumar.txt (deflated 100%)
  adding: raghu/kumar/linux/raghu.txt (deflated 100%)
  adding: raghu/kumar/linux/Linuxforfreshers.coms.txt (stored 0%)
  adding: raghu/kumar/linux/linux.txt (deflated 100%)
  adding: raghu/kumar/linux/test1.txt (deflated 100%)
  adding: raghu/kumar/linux/test2.txt (deflated 100%)
  adding: raghu/kumar/linux/test3.txt (deflated 100%)
  adding: raghu/kumar/linux/test4.txt (deflated 100%)
14. EXCLUDING PARTICULAR FILE / DIRECTORY FROM COMPRESSION

We can also exclude a particular file from compression; to do that, use the '-x' option.

[root@Linuxforfreshers.com tar]# zip -r compress1.zip raghu/ -x raghu/g.txt
  adding: raghu/ (stored 0%)
  adding: raghu/d.txt (deflated 100%)
  adding: raghu/kumar.txt (deflated 100%)
  adding: raghu/raghu.txt (deflated 100%)
  adding: raghu/Linuxforfreshers.coms.txt (stored 0%)
  adding: raghu/linux.txt (deflated 100%)
  adding: raghu/test1.txt (deflated 100%)
  adding: raghu/test2.txt (deflated 100%)
  adding: raghu/test3.txt (deflated 100%)
  adding: raghu/test4.txt (deflated 100%)
[root@Linuxforfreshers.com tar]# ls raghu/
d.txt  g.txt  kumar.txt  raghu.txt  Linuxforfreshers.coms.txt  linux.txt  test1.txt  test2.txt  test3.txt  test4.txt
15. DELETE PARTICULAR FILE FROM ZIP

We can also delete a file from a compressed archive using the '-d' option with the zip command.

[root@Linuxforfreshers.com tar]# zip -d compress1.zip raghu/linux.txt
deleting: raghu/linux.txt
16. UPDATE NEWLY CREATED FILES TO ZIP

We can update a zip file using the '-u' option, which adds newly created files to the zip file.

[root@Linuxforfreshers.com tar]# touch Update2.txt
[root@Linuxforfreshers.com tar]# zip -u compress1.zip *.txt
  adding: Update2.txt (stored 0%)
[root@Linuxforfreshers.com tar]#
17. UPDATE ZIP WITH NEWLY MODIFIED FILES

To update only modified files in a zip archive (freshen), use the '-f' option; '-fr' additionally recurses into directories.

[root@Linuxforfreshers.com tar]# zip -fr compress1.zip *.txt
freshening: Update2.txt (stored 0%)
[root@Linuxforfreshers.com tar]#
18. LIST ALL FILES FROM ZIP WITHOUT EXTRACTING THEM

To list all files in a zip archive without extracting them, use 'unzip -l':

# unzip -l compress.zip

(On systems where 'lesspipe' is configured, 'less compress.zip' shows a similar listing.)
19. CHECK ZIP FILE CONTENT WITHOUT EXTRACTING

Without extracting, compressed text files can be viewed using the 'zmore' and 'zless' commands; note that these operate on gzip-compressed files ('.gz'), not on zip archives. To view a single member of a zip archive, pipe it out with 'unzip -p'.

# zless file.txt.gz
# unzip -p compress.zip d.txt | less
20. DE-COMPRESS ZIP FILE

To extract a zip file, use the 'unzip' command. If the files already exist, it asks for confirmation before overwriting them.


[root@Linuxforfreshers.com tar]# unzip compress1.zip
Archive:  compress1.zip
replace d.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
  inflating: d.txt
replace g.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
  inflating: g.txt
replace kumar.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: kumar.txt