Sunday, November 2, 2014

How to figure out how your Linux application installations and deployments are configured

There are several major distributions in the Linux community, and each of them configures software package management and compiler prefixes slightly differently. Hence, it is crucial to learn how packages are deployed on your system:
1. Default repository manager: such as yum, apt-get, or Homebrew. You have to learn the mainstream repository and deployment management system on your distribution.
2. Default package manager: such as rpm or dpkg. You need to learn the shell commands for software installation. These commands usually maintain a database that records the dependency relationships between packages. You might have many customized software packages provided by vendors, for which there is no open-source equivalent on a public repository for you to yum or apt-get.
3. The useful shell command `locate`: we usually use this command to inspect installations made by `./configure` and `make install`. It is really useful when you have some cutting-edge software that you compiled and installed manually. We also use it to check for missing libraries or misconfigured package installations.
4. Check folders like /etc/init.d on CentOS, together with the `service` (or `/sbin/service`) and `chkconfig` commands. These are the commands for looking up which applications will be started at boot.
5. If a file exists but you still have trouble linking against it, you can use the `ldd` command to parse the ELF binary and check the location of every dependency, for example:
[user@server ~]$ ldd /usr/lib64/libtdsodbc.so.0
        linux-vdso.so.1 =>  (0x00007fffe3ea0000)
        libodbcinst.so.2 => not found
        libgnutls.so.26 => /usr/lib64/libgnutls.so.26 (0x00007f65b1a6a000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f65b1862000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f65b1645000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f65b12b0000)
        libtasn1.so.3 => /usr/lib64/libtasn1.so.3 (0x00007f65b10a0000)
        libz.so.1 => /lib64/libz.so.1 (0x00007f65b0e8a000)
        libgcrypt.so.11 => /lib64/libgcrypt.so.11 (0x00007f65b0c14000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f65b1f71000)
        libgpg-error.so.0 => /lib64/libgpg-error.so.0 (0x00007f65b0a10000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f65b080c000)
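Building on the ldd output above, a small helper can flag only the unresolved dependencies; `locate` or the package database can then tell you where the missing file should come from. This is a sketch, not an exact recipe:

```shell
# Print only the dependencies ldd could not resolve (lines marked "not found").
missing_deps() {
  ldd "$1" 2>/dev/null | awk '/not found/ {print $1}'
}

# Example: a healthy system binary should report nothing.
missing_deps /bin/sh

# For an unresolved library such as libodbcinst.so.2 above, you could then run:
#   locate libodbcinst.so.2
#   rpm -qf /path/to/libodbcinst.so.2   # which package owns it, if any
```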

Thursday, October 2, 2014

The CentOS Workstation

Recently, I wanted to use Eclipse to develop a Python client for Hadoop services such as HDFS and HBase. However, I found there is no way to build a "pydoop" client on a Windows machine, so we have to use CentOS as our regular workstation. Although I have experience with a Mac for daily usage (email, browser, documentation), the concerns of a workstation are totally different from server management and office work: here the distribution is configured for software development.
For a smoother transition, I had to leave some work on the Windows workstation. Therefore, the interaction between the new CentOS workstation and the original Windows machine matters:
1. Install an RDP client on CentOS:
[root@new]# yum install xfreerdp
Then you can use the command below to connect to your original Windows workstation:
[root@new]# xfreerdp --plugin cliprdr -d [domain] -u [username] -g [w]x[h] 192.x.x.x
2. Install xrdp as an RDP server for Windows clients. This part requires the extra software repository EPEL for yum-ing the xrdp package:
[root@new]# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@new]# rpm -ivh epel-release-6-8.noarch.rpm
Then you should refresh your yum repository list:
[root@new]# yum repolist
================
epel             Extra Packages for Enterprise Linux 6 - x86_64           11,105
================
Now you can install xrdp and a VNC server:
[root@new]# yum install xrdp tigervnc-server
[root@new]# service xrdp start
Finally, you should make those services start automatically after reboot:
[root@new]# chkconfig xrdp on

3. With EPEL enabled, you should be able to install ntfs-3g for NTFS disk access:
[root@new]# yum install ntfs-3g
[root@new]# vim /etc/fstab
/dev/sda2               /mnt/win_d      ntfs-3g rw,umask=0000,defaults 0 0

4. Install Samba to make file transfer easier for your documents.
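As a sketch for this step: after `yum install samba`, a minimal share for document transfer might look like the following in /etc/samba/smb.conf (the share name, path, and user are illustrative):

```ini
; Minimal illustrative share in /etc/samba/smb.conf
[docs]
   path = /home/user/docs
   browseable = yes
   writable = yes
   valid users = user
```

Then start the daemon with `service smb start` and enable it with `chkconfig smb on`, the same pattern as the xrdp steps above.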

5. Change the mouse scroll direction to match the Mac's natural scrolling. This is more convenient for daily work:
[root@new]# xmodmap -e "pointer = 1 2 3 5 4 7 6 8 9 10"

Wednesday, September 10, 2014

An XRDP Bug after Restarting the Service

I have installed XRDP on CentOS for Hadoop Java development, since Eclipse on CentOS requires an RDP connection to a GNOME desktop for its GUI. However, I need every connection to attach to the same session so I can continue from my latest progress, so I added a static port to the session config like below:

[xrdp2]
name=sesman-Xvnc-5910
lib=libvnc.so
username=ask
password=ask
ip=127.0.0.1
port=5910

However, after I restarted the machine, the sesman session showed a connection error. After several tries, I found that XRDP requires an initial session before you can identify a session by port. That means when there is no session on port 5910 in the XRDP service, the connection above will fail. We have to leave a config like this in place:

[xrdp1]
name=sesman-Xvnc-New
lib=libvnc.so
username=ask
password=ask
ip=127.0.0.1
port=-1

This config will start the first session at port 5910 (the default). Then you can successfully identify session port 5910 as [xrdp2] assigns it, and get back into the session you left after the service restarts.

There is also a way to get your local desktop session via localhost:
[xrdp0]
name=sesman-Xvnc-Local
lib=libvnc.so
username=ask
password=ask
ip=127.0.0.1
port=5900
Before restarting xrdp, you should go to [System] -> [Preferences] -> [Remote Desktop] and enable [Allow other users to control your desktop]. The xrdp0 entry will then show the local desktop you are directly working on.
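When you are unsure whether the port-5910 session already exists, checking which ports in the VNC display range are listening can save a failed connection. A small sketch (assumes `netstat` is available, as on a stock CentOS 6):

```shell
# Extract listening ports in the VNC display range (5900-5999) from
# `netstat -tln` output; an empty result means no session exists yet.
vnc_ports() {
  awk '$4 ~ /:59[0-9][0-9]$/ {sub(/.*:/, "", $4); print $4}'
}

netstat -tln 2>/dev/null | vnc_ports
```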

Tuesday, June 10, 2014

The Fundamental Design for Service Application

Here, the so-called service application refers to a background daemon such as a request-handling server or a flow processor. In the cloud era, every service application should consider these three features in its fundamental design:
1. High availability: HA has three design levels based on complexity and cost. The lowest level is none. The basic level is active-standby. The highest level is capacity impact; reaching it means your system has some kind of scale-out ability. However, we have seen some terrible designs that do not control database transactions and lock logic well. That makes the application incapable of scaling out and limits your options to an active-standby design. Otherwise, all well-trained developers should be able to implement an application that scales out.

2. Application resilience: any application can suffer a process crash or an unexpected machine reboot. There should be a design that lets the service restart resiliently, or terminate gracefully and be restarted by another application. This is not only an exception-handling issue but also a service-continuity one. Fortunately, most modern operating systems provide basic tools to keep this feature on your application.
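On a pre-systemd CentOS, that "basic tool" can be as simple as an /etc/inittab respawn entry or a wrapper script. A minimal supervisor-loop sketch (the daemon path and retry policy are hypothetical):

```shell
# Restart the daemon whenever it exits abnormally, up to a retry limit,
# so a broken binary does not flap forever.
DAEMON=/usr/local/bin/mydaemon   # hypothetical service binary
MAX_RETRIES=3
retries=0
while [ "$retries" -lt "$MAX_RETRIES" ]; do
  "$DAEMON" 2>/dev/null && break # clean exit: stop supervising
  retries=$((retries + 1))
  sleep 1                        # brief backoff before restarting
done
```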

3. Log notification: this is a must-have, but each team has its own implementation.

Monday, May 12, 2014

QT Creator 5 and Boost Framework on Windows

C++ is a language that brings you the benefits of cross-platform support and better performance. When we need to compile C++ on different platforms, we usually have to choose different IDEs for Windows and Linux. However, since QT supports the Microsoft Visual C++ (msvc) tool chain, we found QT might be a neat solution when we need one IDE that supports our project on different platforms.
For Windows, we use Boost as the framework for our C++ development. (msvc 11 supports the C++11 standard, which is quite handy because we don't have to build gcc 4.8 on the MinGW platform. For developers who already have code running on the MinGW framework, I believe it is just as handy to integrate QT with the MinGW tool chain.) So the first thing is to download boost_1_55_0 into C:\.
It is really easy to get Boost installed. Go to C:\boost_1_55_0 and run the bootstrap.bat batch file; you will get a b2.exe file as your build tool. Then run b2.exe (you can use b2 --help to find the command arguments for specifying a customized installation config). The freshly built DLLs will be under "C:\boost_1_55_0\stage\lib" and the header files under C:\boost_1_55_0\boost. The whole build process should run inside the command shell of Visual Studio 2012, or of another compiler that supports the C++11 standard. You can get this shell by opening the "VS2012 x64 Native Tools Command Prompt".
Hence, add the following instructions to the QT .pro file of the project in which you would like to include Boost:
 
INCLUDEPATH+=C:\boost_1_55_0
LIBS+=-LC:\boost_1_55_0\stage\lib\

Then you can use #include <boost\asio.hpp> to see if the compiler works.
This covers the framework include. If you want to add an external library such as a logger or a database driver, you can right-click the project icon in the left tree view and add an external library (this requires a specific DLL).



For the debugger in QT, you have to install cdb.exe from the WDK:
http://msdn.microsoft.com/en-us/windows/hardware/hh852365.aspx

If you use the Visual Studio 2012 tool chain, you should install WDK 8.0.

Monday, May 5, 2014

mRemoteNG with External Tool Set up

Many IT departments only support maintenance for Windows workstations. So, if your work is mainly on Linux systems, here is a great free multi-shell client tool on Windows called "mRemoteNG". Although it supports plenty of client shells with tabs, we still need an SCP tool to upload files from the Windows workstation to the Linux server. "mRemoteNG" provides a way to invoke other tools like plug-ins, which is a really neat idea. First, you can put your WinSCP under the root folder of "mRemoteNG".
In the External Tool Properties you can specify the SCP tool's location and the arguments you want to pass into the external tool from mRemoteNG, like below:

scp://%Username%:%Password%@%Hostname%/


You can use the same method to integrate mRemoteNG and FileZilla.

The Folder Convention for CentOS

Each Linux distribution has its own system folder conventions, which let users quickly figure out the purpose and layout of deployed programs. For example, MySQL's database files are deployed to "/var/lib/mysql" after you execute the mysql_install_db command without specifying a customized storage path for the instance.
Speaking of CentOS, since it derives from the Red Hat distribution, I think it is quite easy to grasp its folder conventions from an application-server development view. In daily operation, the "/var" folder is the most important place for data storage. "/var" contains all the data files, the "variable data of the system's persistence layer", and I believe this convention is kept from server to server.
The second folder is "/opt". This folder contains application servers, each of which is like a small OS inside Linux. Some server packages assign /opt as the parent of their server root folder, and the application server's script tools treat "/opt" as the shell path for included scripts. However, "/opt"'s purpose varies the most from distribution to distribution. I prefer to use "/opt" as a small, self-contained system package: under "/opt", each server package has its own "/usr", "/bin" and even "/sbin". It is as if, once you get into "/opt/server", you enter a world of its own, with a dialect for communicating with another system. But sometimes it is pretty hard to configure a program under "/opt" to work well, because there can be plenty of binary dependencies between the "/opt" package and "/usr".
If you have built up a server with auto-run daemons or programs, the "/etc" folder might be the folder you access most each day. "/etc" not only contains important configuration files for everything from the OS to server containers (such as "/etc/my.cnf"); "/etc/init.d" also holds many scripts for daemon and server start-up. So you can treat "/etc" like a launch center for configuring and regulating the behavior of your whole application server, from the OS to user applications, right from the start.
"/usr" is most crucial for all developer. The "/usr" folder would show the character of each Linux Dist. Even for the same distribution, the "/usr" folder might be much different from server edition to desktop edition. All the important compiler, productivity tool and open source software are scattered in the sub-folders of "/usr". If "/bin" and "sbin" are the shell that communicate to your machine, the "/usr" would give you the ability to enhance all your work to the machine. Most open source software are targeted to be deployed into "/usr" and scattered into "/usr/local", "/usr/bin", "/usr/include", "/usr/lib" and "/usr/etc". The famous Linux Package Manager and Deployment software are managing the "/usr" folder with their deliberated protocol and mechanism. And those deployment tools are really helpful for environment set up such as Java or X windows. However, some fresh new package might require you twist the file under "/usr" folder. That means the new package has not been adopted by those deployment tools and you have to dig into "/usr" folder's set up for running the new program normally. Neither, there is no easy way to unplug all those manually installed software from "/usr" with fresh clean up.

Saturday, April 5, 2014

CentOS or Windows Python Environment Setup

We should download the Python package "distribute" first, which will install the easy_install script. easy_install is the installation tool for "pip", and after pip is installed, we can use pip to set up all the packages we need, such as Django, VirtualEnv or HappyBase.
We use Windows as the example:
First, install the Python package on Windows and add "C:\Python27" to the environment Path. Second, use distribute_setup.py to give Python the ability to extend its features with packages such as happybase or django. You can download the package from:

https://pypi.python.org/pypi/distribute/0.6.49

Un-archive the zip file and you will get a folder "dist"; copy it to your Python installation root folder. Then execute this command to run the setup script: "C:\Python27> .\python.exe .\dist\distribute-0.6.49\setup.py install". (You can apply this process to install Django too.)
Then you will see a folder "C:\Python27\Scripts", which contains the Python package manager programs for Windows. Execute "C:\Python27\Scripts> .\easy_install.exe pip" to install pip, which is the official program for downloading packages such as happybase.
Finally, you can use "pip list" to check which packages and extensions are in your Python environment.

Sunday, February 23, 2014

MySQL-Related Setup Steps such as XtraBackup

On CentOS 5, we need libaio for MySQL server installation.
$ sudo yum install libaio

$ sudo rpm -ivh MySQL-server-5.5.31-2.rhel5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:MySQL-server           ########################################### [100%]
$ sudo rpm -ivh MySQL-shared-compat-5.5.31-2.rhel5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:MySQL-shared-compat    ########################################### [100%]
$ sudo rpm -ivh MySQL-client-5.5.31-2.rhel5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:MySQL-client           ########################################### [100%]
We have to let MySQL use "file per table" mode. For example, the [mysqld] section of my.cnf looks like:
[mysqld]
......
......
innodb_data_file_path = ibdata1:10M:autoextend
innodb_file_per_table = 1
datadir = /var/lib/mysql/datadir
innodb_data_home_dir = /var/lib/mysql/ibdatadir
innodb_log_group_home_dir = /var/lib/mysql/log
tmpdir = /var/lib/mysql/tmp
Then we should use shell commands to make the folders ready:
$ cd /var/lib/mysql
$ sudo mkdir datadir
$ sudo mkdir ibdatadir
$ sudo mkdir log
$ sudo mkdir tmp
$ sudo chown -R mysql:mysql /var/lib/mysql
A fresh new MySQL instance can be created by the following command.
$ sudo mysql_install_db
Alternatively you can run:
$/usr/bin/mysql_secure_installation
which will also give you the option of removing the test databases and the anonymous user created by default. This is strongly recommended for production servers.

XtraBackup has different builds depending on the Linux kernel and glibc version: percona-xtrabackup-2.1.5-680.rhel6.x86_64.rpm is suitable for CentOS 6, while percona-xtrabackup-2.1.5-680.rhel5.x86_64.rpm is suitable for CentOS 5. Don't try to upgrade glibc to satisfy a mismatched build; changing glibc can make the system unstable. You can use the Linux commands below to check the OS release edition and MySQL's version:
rpm -qa | grep MySQL
rpm -qa | grep percona
cat /etc/*release
uname -a
MySQL 5.1 is only supported by XtraBackup 2.0.8. Therefore, for older MySQL we need to use a previous version of XtraBackup; for MySQL 5.5+, XtraBackup 2.1.5 is fine. Here is the installation process:
$ rpm -ivh percona-xtrabackup-2.1.5-680.rhel6.x86_64.rpm
warning: percona-xtrabackup-2.1.5-680.rhel6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID cd2efd2a: NOKEY
error: Failed dependencies:
        perl(DBD::mysql) is needed by percona-xtrabackup-2.1.5-680.rhel6.x86_64
        perl(Time::HiRes) is needed by percona-xtrabackup-2.1.5-680.rhel6.x86_64
You can see that the innobackupex.pm Perl script requires some packages to run. Hence we should install the Perl packages first:
sudo yum install perl-DBD-MySQL
sudo yum install perl-Time-HiRes
Then the installation of XtraBackup will succeed.
$ sudo rpm -ivh percona-xtrabackup-2.1.5-680.rhel6.x86_64.rpm
warning: percona-xtrabackup-2.1.5-680.rhel6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID cd2efd2a: NOKEY
Preparing...                ########################################### [100%]
   1:percona-xtrabackup     ########################################### [100%]
Just in case, here is the error message listing for CentOS 5.
#CentOS 5 is too old and require more dependency for XtraBackup
$ rpm -ivh percona-xtrabackup-2.1.5-680.rhel6.x86_64.rpm
warning: percona-xtrabackup-2.1.5-680.rhel6.x86_64.rpm: Header V4 DSA signature: NOKEY, key ID cd2efd2a
error: Failed dependencies:
        libc.so.6(GLIBC_2.7)(64bit) is needed by percona-xtrabackup-2.1.5-680.rhel6.x86_64
        libc.so.6(GLIBC_2.8)(64bit) is needed by percona-xtrabackup-2.1.5-680.rhel6.x86_64
        perl(DBD::mysql) is needed by percona-xtrabackup-2.1.5-680.rhel6.x86_64
        rpmlib(FileDigests) <= 4.6.0-1 is needed by percona-xtrabackup-2.1.5-680.rhel6.x86_64
        rpmlib(PayloadIsXz) <= 5.2-1 is needed by percona-xtrabackup-2.1.5-680.rhel6.x86_64
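Once XtraBackup installs cleanly, a full backup cycle is two innobackupex invocations: take the backup, then apply the log so it is consistent. This is a sketch of the standard XtraBackup 2.1 workflow, not commands from this installation; the credentials and paths are placeholders:

```shell
# Take a full backup into $1, then "prepare" the newest backup directory
# with --apply-log so it can be restored.
backup_mysql() {
  dest=$1
  command -v innobackupex >/dev/null 2>&1 || {
    echo "innobackupex not installed" >&2
    return 1
  }
  innobackupex --user=root --password=secret "$dest" &&
  innobackupex --apply-log "$dest/$(ls -1t "$dest" | head -n 1)"
}
```

Usage: `backup_mysql /backup/mysql`. innobackupex writes each backup into a timestamped subdirectory under the destination, which is why the newest directory is passed to --apply-log.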

Linux Hardware Information Collection Command

Recently, most of my work has been related to Linux (CentOS) and MySQL with Perl and Python programming. When I install the MySQL server, there are some commands that help me get basic information from the remote server I am dealing with:
Show kernel version and system architecture
uname -a
Show name and version of distribution
head -n1 /etc/issue
Show all partitions registered on the system
cat /proc/partitions
Show RAM total seen by the system
grep MemTotal /proc/meminfo
Show CPU(s) info
grep "model name" /proc/cpuinfo
Show info about disk sda
hdparm -i /dev/sda

After gathering the hardware information, we use rpm commands to review the package list on this machine:
rpm -qa | grep MySQL
rpm -qa | grep percona

If some RPM package is missing from your wish list, you can use the yum command (with an internet connection), or download the RPM package (being careful to match the kernel version and OS distribution version) and use the rpm command to install those dependencies:
rpm -ivh [wanted package]
For example, we need the crontabs package for scheduling tasks on a specific server. However, crontabs requires cronie as a dependency, cronie requires sendmail, and the dependency chain goes further to procmail. Hence, below is a demonstration of installing procmail, then sendmail, and finally the crontabs-related packages all at once (installing them together is a must, because they are related by a cyclic dependency check and you cannot install any single one of them without the other two):

[root@ServerA ~]# rpm -ivh cronie-1.4.4-7.el6.x86_64.rpm
error: Failed dependencies:
        /usr/sbin/sendmail is needed by cronie-1.4.4-7.el6.x86_64
        dailyjobs is needed by cronie-1.4.4-7.el6.x86_64
[root@ServerA ~]# rpm -ivh sendmail-8.14.4-8.el6.x86_64.rpm
error: Failed dependencies:
        procmail is needed by sendmail-8.14.4-8.el6.x86_64

[root@ServerA ~]# rpm -ivh procmail-3.22-25.1.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:procmail               ########################################### [100%]
[root@ServerA ~]# rpm -ivh sendmail-8.14.4-8.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:sendmail               ########################################### [100%]
[root@ServerA ~]# rpm -ivh cronie-anacron-1.4.4-7.el6.x86_64.rpm crontabs-1.10-33.el6.noarch.rpm cronie-1.4.4-7.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:crontabs               ########################################### [ 33%]
   2:cronie                 ########################################### [ 67%]
   3:cronie-anacron         ########################################### [100%]