Saturday, May 11, 2019

How do I list all IRQs currently used under Linux?



An interrupt request (IRQ) is a hardware signal sent to the processor instructing it to suspend its
current activity and handle some external event, such as a keyboard input or a mouse movement. In
x86-based computer systems, IRQs are numbered from 0 to 15. Newer computers, including x86-64
systems, provide more than these 16 interrupts (usually 24). Some interrupts are reserved for
specific purposes, such as the keyboard and the real-time clock; others have common uses but may
be reassigned; and some are left available for extra devices that may be added to the system.
Here is a list of the IRQs and their common purposes in the x86 system:

IRQ 0 - system timer
IRQ 1 - keyboard controller
IRQ 2 - cascade for IRQs 8-15 (second interrupt controller)
IRQ 3 - serial port COM2
IRQ 4 - serial port COM1
IRQ 5 - parallel port LPT2 or sound card (often free)
IRQ 6 - floppy disk controller
IRQ 7 - parallel port LPT1
IRQ 8 - real-time clock (RTC)
IRQ 9 - ACPI / redirected IRQ 2 (often free)
IRQ 10 - free for add-in peripherals
IRQ 11 - free for add-in peripherals
IRQ 12 - PS/2 mouse
IRQ 13 - math coprocessor (FPU)
IRQ 14 - primary IDE/ATA channel
IRQ 15 - secondary IDE/ATA channel
There is a file called /proc/interrupts. The proc file system is a pseudo file system which is used as an
interface to kernel data structures. It is commonly mounted at /proc.

This file records the number of interrupts per IRQ, for each CPU, on (at least) the i386
architecture. The formatting is plain ASCII and very easy to read.

Display /proc/interrupts

Use the cat or less command:

$ cat /proc/interrupts

Output:
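
The exact rows depend on your hardware; the numbers and device names below are made-up, trimmed values shown only to illustrate the format (IRQ number, one interrupt count per CPU, the interrupt controller, and the device using the IRQ):

           CPU0       CPU1
  0:         46          0   IO-APIC    2-edge      timer
  1:       9166          0   IO-APIC    1-edge      i8042
  8:          1          0   IO-APIC    8-edge      rtc0
  9:          0          0   IO-APIC    9-fasteoi   acpi
 12:        153          0   IO-APIC   12-edge      i8042
 16:      20342        511   IO-APIC   16-fasteoi   ehci_hcd:usb1
NMI:          0          0   Non-maskable interrupts
LOC:     843529     912746   Local timer interrupts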


Sunday, May 5, 2019

How to Install GDB?


There are two ways you can install GDB on your Linux machine.

1. Install pre-built gdb binaries from verified distribution resources
You can install gdb on a Debian-based Linux distro (e.g. Ubuntu, Mint, etc.) with the following commands.
$ sudo apt-get update
$ sudo apt-get install gdb

2. Download the source code of GDB, compile and install it.
Follow the steps below to compile GDB from source and install it.
Step-1: Download source code.
You can download the source code of all releases from http://ftp.gnu.org/gnu/gdb/
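
For example, to fetch the 8.0 release used in the steps below (any other release tarball from the same directory works the same way):
$ wget http://ftp.gnu.org/gnu/gdb/gdb-8.0.tar.gz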

Step-2: Extract it
$ tar -xvzf gdb-8.0.tar.gz

Step-3: Configure and Compile it.
$ cd gdb-8.0
gdb-8.0$ ./configure
gdb-8.0$ make
This step will take a bit of time, so you can sit back and have a beer for a while.
Once it is completed, you can find the gdb binary at gdb-8.0/gdb/gdb.
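
Optionally, if you would rather install somewhere other than the default /usr/local, you can pass --prefix when configuring (the $HOME/opt/gdb path below is just an example); make install in Step-4 will then use that location:
gdb-8.0$ ./configure --prefix=$HOME/opt/gdb
gdb-8.0$ make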

Step-4: Install GDB.
gdb-8.0$ sudo make install
By default this will install the gdb binaries in /usr/local/bin and the libraries in /usr/local/lib.
Congratulations, you have successfully compiled and installed GDB.

Once you have installed GDB, you can print the GDB version to test whether it is installed correctly.

$ gdb --version
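
As a quick smoke test of the freshly installed debugger, you can compile a tiny program with debug symbols and step into it (hello.c is just a throwaway example created for this check):

$ echo 'int main(void) { return 0; }' > hello.c
$ gcc -g hello.c -o hello
$ gdb ./hello
(gdb) break main
(gdb) run
(gdb) quit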

How do I kill all the processes in MySQL show processlist?

How do I kill a running query in MySQL?



We can kill the processes with the help of the ‘kill’ command. However, you need to
kill those processes one by one, since MySQL does not have a built-in mass-kill command.


To check how many processes exist, use ‘show processlist’


The following is the output.


mysql> show processlist;
+------+--------+-----------------------+----------+---------+------+-------+------------------+
| Id   | User   | Host                  | db       | Command | Time | State | Info             |
+------+--------+-----------------------+----------+---------+------+-------+------------------+
|   41 | zabbix | localhost             | zabbixdb | Sleep   |    2 |       | NULL             |
|   48 | zabbix | localhost             | zabbixdb | Sleep   |    0 |       | NULL             |
|   50 | zabbix | localhost             | zabbixdb | Sleep   |   29 |       | NULL             |
|   55 | zabbix | localhost             | zabbixdb | Sleep   |    0 |       | NULL             |
|   58 | zabbix | localhost             | zabbixdb | Sleep   |    1 |       | NULL             |
|   62 | zabbix | localhost             | zabbixdb | Sleep   |   57 |       | NULL             |
| 5280 | root   | 192.168.109.202:58608 | zabbixdb | Query   |    0 | NULL  | show processlist |
+------+--------+-----------------------+----------+---------+------+-------+------------------+
7 rows in set (0.00 sec)


To kill a process which has been active for more than 10 seconds, the following query generates the ‘kill’ statement. Here, we are targeting the process with Id “41”:


mysql> select concat('kill ',41,';')
  -> from information_schema.processlist
  -> where TIME > 10;


The following is the output.


mysql> select concat('kill ',41,';')  from information_schema.processlist where TIME > 10;
+------------------------+
| concat('kill ',41,';') |
+------------------------+
| kill 41;               |
| kill 41;               |
| kill 41;               |
| kill 41;               |
+------------------------+
4 rows in set (0.00 sec)
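
A more general variant (a sketch of the same idea, not taken from the output above) substitutes the id column for the hard-coded 41, so that one KILL statement is generated for every process that has been running longer than 10 seconds; you can then paste the generated ‘kill <Id>;’ lines back into the mysql prompt and run them:

mysql> select concat('kill ', id, ';')
    -> from information_schema.processlist
    -> where TIME > 10;
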
Method 2: kill a thread directly by its Id.


kill <thread_id>;


mysql> show processlist;
+------+--------+-----------------------+----------+---------+------+-------+------------------+
| Id   | User   | Host                  | db       | Command | Time | State | Info             |
+------+--------+-----------------------+----------+---------+------+-------+------------------+
|   48 | zabbix | localhost             | zabbixdb | Sleep   |    3 |       | NULL             |
|   50 | zabbix | localhost             | zabbixdb | Sleep   |   23 |       | NULL             |
|   55 | zabbix | localhost             | zabbixdb | Sleep   |    4 |       | NULL             |
|   58 | zabbix | localhost             | zabbixdb | Sleep   |   55 |       | NULL             |
|   62 | zabbix | localhost             | zabbixdb | Sleep   |   51 |       | NULL             |
| 5280 | root   | 192.168.109.202:58608 | zabbixdb | Query   |    0 | NULL  | show processlist |
| 5355 | zabbix | localhost             | zabbixdb | Sleep   |   26 |       | NULL             |
+------+--------+-----------------------+----------+---------+------+-------+------------------+
7 rows in set (0.00 sec)


mysql> kill 5355;


Query OK, 0 rows affected (0.00 sec)



you can still generate the KILL statements for all the processes with the following shell one-liner:


mysql -e "show full processlist;" -ss | awk '{print "KILL "$1";"}'
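
The command above only prints the statements. To actually execute them, one common approach (a sketch; add your -u/-p options to both mysql invocations as needed, and note that it will also try to kill your own connection, which produces a harmless error) is to pipe the output back into a second mysql invocation:

mysql -e "show full processlist;" -ss | awk '{print "KILL "$1";"}' | mysql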



Wednesday, May 1, 2019

How to copy files with rsync over SSH?



Install rsync


If the command is not installed by default on the server, we can easily add it
using the default package manager:


CentOS or Red Hat:

sudo yum install rsync


Debian/Ubuntu:

sudo apt-get install rsync


Copy a file from local server to remote one:

rsync -v -e ssh /home/localuser/testfile.txt remoteuser@X.X.X.X:/home/remoteuser/transfer


In the above example we will copy a file called testfile.txt from the current server to the remote
one and will place it inside the folder /home/remoteuser/transfer.


The output should be similar to the following one:

sent X bytes  received X bytes  X.X bytes/sec
total size is X  speedup is X.X

If the remote server is configured to use a non-default SSH port (other than 22), we can specify it inside the -e option:

rsync -v -e "ssh -p2222" /home/localuser/testfile.txt remoteuser@X.X.X.X:~/transfer
Again the testfile.txt will be copied inside the /home/remoteuser/transfer folder situated on the remote server.


Copy a file from remote server into a local folder:

rsync -v -e ssh remoteuser@X.X.X.X:/home/remoteuser/transfer/testfile.txt /home/localuser/


In the above example we will copy a file called testfile.txt from the remote server inside a local folder called /home/localuser/.


Synchronize a local folder to the remote server:


rsync -r -a -v -e ssh --delete /home/localuser/testfolder    remoteuser@X.X.X.X:/home/remoteuser/testfolder


Synchronize a folder from the remote server to the local server:

rsync -r -a -v -e ssh --delete remoteuser@X.X.X.X:/home/remoteuser/testfolder /home/localuser/testfolder

Here is a list of some of the most common rsync options:

--delete - delete files on the receiving side that don't exist on the sender
-v - verbose (-vv will provide more detailed information)
-e "ssh options" - specify the ssh as remote shell
-a - archive mode - it preserves permissions (owners, groups), times, symbolic links, and devices
-r - recurse into directories
-z - compress file data during transfer
--exclude 'foldername' - excludes the corresponding folder from the transfer
-P - show progress during transfer
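
Putting several of these options together, here is a sketch of a typical invocation (the port, the excluded folder name 'cache', and the paths are placeholders reused from the examples above):

rsync -a -v -z -P -e "ssh -p2222" --exclude 'cache' --delete /home/localuser/testfolder remoteuser@X.X.X.X:/home/remoteuser/testfolder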

Friday, April 5, 2019

How to use wget to download a file via proxy in Linux?

The wget program allows you to download files from URLs. Although it can do a lot, the simplest
form of the command is: wget [some URL]. Assuming no errors, it will place that file in the current
directory. If you do not specify a filename, by default it will attempt to get the index.html file.

This document describes how to set up wget (the non-interactive network downloader) to download files
via a proxy.

wget Configuration files
Below are the configuration sources wget can take proxy settings from, listed by their priorities:


  1. ~/.wgetrc: User startup file.
  2. /etc/wgetrc: Default location of the global startup file.
  3. Set proxy variables in shell for current pseudo-terminal.
  4. ~/.bash_profile: User specific environment.
  5. /etc/profile: System wide environment.

Note: If a higher-priority configuration is not set, then the next-priority configuration takes
effect. For instance, if ~/.wgetrc is not configured with proxy settings but /etc/wgetrc is,
then the proxy settings in /etc/wgetrc are the ones wget uses.
Configuring wget proxy


1. Add below line(s) in file ~/.wgetrc or /etc/wgetrc:

http_proxy = http://[Proxy_Server]:[port]
https_proxy = http://[Proxy_Server]:[port]

2. Set proxy variable(s) in a shell manually:

$ export http_proxy=http://[Proxy_Server]:[port]
$ export https_proxy=$http_proxy
$ export ftp_proxy=$http_proxy

Verify the variable values using the “env” command.

$ env | grep proxy
http_proxy=http://[Proxy_Server]:[port]
https_proxy=http://[Proxy_Server]:[port]
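
To confirm that wget is really going through the proxy, you can do a quick test download (http://example.com is just a placeholder URL); when a proxy is in use, the output shows wget connecting to the proxy host and sending a proxy request:

$ wget -O /dev/null http://example.com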

3. Add the below line(s) to ~/.bash_profile or /etc/profile:

export http_proxy=http://[Proxy_Server]:[port]
export https_proxy=http://[Proxy_Server]:[port]
export ftp_proxy=http://[Proxy_Server]:[port]

4. Direct command line:
$ wget -e use_proxy=yes -e http_proxy=http://[Proxy_Server]:[port] URL

How to Use pip behind a Proxy?

On Linux

Set up the proxy through environment variables:

export http_proxy=http://[username:password@]proxyserver:port
export https_proxy=https://[username:password@]proxyserver:port

and then install the package:

pip install somepackage

or specify proxy in pip command:

sudo pip install --proxy=https://[username:password@]proxyserver:port somepackage
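
If you want the proxy to be persistent for pip, you can also put it in pip's configuration file (typically ~/.config/pip/pip.conf, or ~/.pip/pip.conf on older setups; the proxy value below is a placeholder):

[global]
proxy = http://[username:password@]proxyserver:port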



On Windows

You can manually set up the proxy environment variables by right-clicking on This PC (Windows 10)
or Computer (Windows 7) -> Properties -> Advanced system settings -> Environment Variables,
then add the environment variables:



You can also set up the proxy through the command line:

set http_proxy=http://[username:password@]proxyserver:port
set https_proxy=https://[username:password@]proxyserver:port
After setting up the proxies, you can install packages by running:

pip install somepackage
Alternatively, you can also specify the proxy settings in the pip command:

pip install --proxy=https://[username:password@]proxyserver:port somepackage

Wednesday, March 27, 2019

How do I increase the MySQL connections for my server?

The following reasons can cause MySQL to run out of connections.

1). Slow Queries

2). Data Storage Techniques

3). Bad MySQL configuration





If you have encountered the error “Too many connections” while trying to connect to a MySQL
Server, it means the server has reached the maximum number of connections: all permitted connections
are in use by other clients, and your connection attempt gets rejected.

That number of connections is defined via the max_connections system variable. To allow more
connections, you can set a higher value for max_connections.

To see the current value of max_connections, run this command:

SHOW VARIABLES LIKE "max_connections";

Sample output:
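
With a default configuration the output will look roughly like this (your value may differ):

+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 151   |
+-----------------+-------+
1 row in set (0.00 sec)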


By default, it’s set to 151. But MySQL actually allows up to max_connections + 1, which is 151 + 1 for
the default setting. The extra connection can be used by the user with SUPER privilege only.

To increase the max_connections value to, say, 500, run this command:

SET GLOBAL max_connections = 500;


The command takes effect right after you execute it, but it is not persistent: the new value lasts
only until the MySQL server is restarted. If you want it to be permanent until you re-adjust it the
next time, you have to edit the configuration file my.cnf (normally it's stored at /etc/my.cnf).

Under the [mysqld] section add the following line:

max_connections = 500
Then restart the MySQL server for the change to take effect.

One thing to keep in mind is that there is no hard limit on the max_connections value, but every
additional connection requires more RAM. The maximum number of connections permitted has
to be calculated based on how much RAM you have and how much is used per connection. In many
cases, if you run out of usable disk space on your server partition or drive, MySQL might also return
this error.

The maximum number can be estimated by the formula:

max_connections = (available RAM - global buffers) / per-thread buffers
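
As a rough worked example (every number below is an illustrative assumption, not a measurement): with 8 GB of RAM, about 1 GB set aside for global buffers, and roughly 12 MB of per-thread buffers per connection, the estimate comes out to:

max_connections = (8192 MB - 1024 MB) / 12 MB ≈ 597
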
So increase it with caution.