Category: Linux
How to do multi-threaded backups and restores for mysql
So there are probably a lot of people out there running the standard master-slave MySQL setup with a mix of InnoDB and MyISAM databases.
That is not usually a problem, but MySQL does not do multi-threaded dumps or restores, so if you have a high amount of traffic hitting InnoDB and replication breaks to the point where you need a full restore, this can be a real headache.
Now say your database was 15 gigs in size, consisting of InnoDB and MyISAM databases in a production environment. A restore would be brutal: you would need to lock the tables on the primary while restoring to the slave, and since mysqldump restores are single-threaded this could take 12 hours or more, depending on hardware. To help you gauge your own situation, here are the servers we had when we ran into this issue:
Xeon quad core, 1 TB SATA drives, 18 gigs of RAM (master and slave)
Fortunately, there is a solution 🙂
There is a free application called XtraBackup by Percona which does multi-threaded backups and restores of MyISAM and InnoDB combined. In this blog I will explain how to set it up, and what I did to minimize downtime for the business.
What you should consider doing
Drive I/O is a major factor on high-traffic database servers and can seriously impede performance. We built new servers with the same specs, but with SSD drives this time:
Xeon quad core, 1 TB SATA3, 120 GB SSD, 18 gigs of RAM
Now this is not necessary; however, if database traffic is high you should consider SSDs, or even Fibre Channel drives if your setup supports it.
XtraBackup is free; the MySQL Enterprise alternative is $5000/server to license. Honestly, in my opinion paying for MySQL Enterprise is pointless: it is exactly the same except you get support, the same support you could get online or on IRC in any forum, which is probably better. Why pay for something you don't need?
Install and setup
Note: This will need to be installed on both master and slave database servers, as this process will replace the mysqldump and restore method you use.
- rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
- yum install percona-xtrabackup.x86_64 (on both master and slave)
Create backup
Note: There are a number of ways you can do this. You can have it output to a /tmp directory during the backup process, or you can have it output to stdout and compress into a file. I will show you both ways.
- innobackupex, the tool bundled with XtraBackup that we are going to use, reads /etc/my.cnf for the MySQL data directory, so we do not have to define much on the command line. In this example MySQL has no password set; if yours does, simply add --user=<user> --password=<pass> to the command.
This process took 5 minutes on a 15 gig database (Xeon quad core, 1 TB SATA3, 120 GB SSD, 18 GB RAM).
2. innobackupex <outputdirectory>
e.g. innobackupex /test (this command will create a timestamped directory inside the output directory; it's a full backup of all databases, InnoDB and MyISAM, uncompressed.)
3. innobackupex --stream=tar ./ | gzip - > /test/test.tar.gz (this does the same as the above, except it streams to stdout and compresses the full backup into the tar file.)
Note: you also need to use the -i option to untar it, e.g. tar -ixvf test.tar.gz. Ensure MySQL is stopped on any slave before restoring, and don't forget to chown -R mysql:mysql the files after restoring them to the data directory using the innobackupex --copy-back command.
Note: I have experienced issues getting replication to start after a full backup and restore onto a slave with InnoDB and MyISAM when using the innobackupex stream compression to gzip; after untarring, for whatever reason, the InnoDB log files had some corruption, which stopped replication as soon as the slave connected to the master.
If the stream compression doesn't work, take an uncompressed backup as shown above and rsync the data from your master to the slave, over a gigabit switch if possible (i.e. rsync -rva --numeric-ids <source> <destination>:/)
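Since innobackupex names each full backup directory with a timestamp, lexical sort order matches chronological order. That means a tiny helper can always pick the newest backup to feed to --copy-back. A sketch (latest_backup is my own name, not an xtrabackup command):

```shell
# Pick the newest timestamped backup directory under a backup root.
# latest_backup is an illustrative helper, not part of xtrabackup.
latest_backup() {
    # Names like 2013-02-03_17-21-52 sort chronologically, so the
    # lexically last directory is the most recent backup.
    ls -1d "$1"/*/ 2>/dev/null | sort | tail -1
}

# Usage (assuming /test is where your backups land):
# innobackupex --copy-back "$(latest_backup /test)"
```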
Our 15 gig DB compressed to 3.4 gigs.
- Now copy the tar file (or the directory innobackupex created) to the slave server via scp:
scp * user@host: <- (edit accordingly)
Doing a Restore
Note: The beauty of this restore is that it is multi-threaded, utilizing multiple cores instead of just one. Since our server's data directory now sits on SSD, disk I/O will be almost nil, increasing performance significantly and reducing load.
- On your primary database server, log into MySQL and lock the tables:
mysql> FLUSH TABLES WITH READ LOCK;
- Now on your slave, doing a restore of all the databases is pretty easy:
- innobackupex --copy-back /test/2013-02-03_17-21-52/ (update the path to wherever the innobackupex files are.)
This took 3 minutes to restore a 15 gig DB with InnoDB and MyISAM for us on:
Xeon quad core, 1 TB SATA3, 120 GB SSD, 18 gigs of RAM
Setting up the backup crons
- If you were using mysqldump as part of your MySQL backup process, change it to use the following.
- Create a directory on the slave called mysqldumps
- Create a file called backups.sh and save it.
- Add the following to it.
#!/bin/bash
innobackupex --stream=tar ./ | gzip - > /mysqldumps/innobackup-$(date +%F)
Note that our backups are stored on the SATA3 drive while the data directory resides on the SSD.
- Now add this to your crontab as root; adjust the schedule to run however often you need:
- 0 11,23 * * * /fullpath/backups.sh
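As a sanity check, the dated filename the cron job writes can be factored into a small function; backup_name and BACKUP_DIR are my own names, just a sketch of the same date +%F pattern used in backups.sh:

```shell
# Build the dated output filename used by the backup cron job.
BACKUP_DIR=/mysqldumps   # assumption: same directory as in backups.sh

backup_name() {
    # date +%F expands to YYYY-MM-DD, giving one backup file per day
    echo "$BACKUP_DIR/innobackup-$(date +%F)"
}

# backups.sh then becomes:
# innobackupex --stream=tar ./ | gzip - > "$(backup_name)"
```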
Setting up diskspacewatch for the SSD drive.
- Since the SSD drive is 120 GB, we need to set up an alert to watch the free-space threshold. If you do not have the resources to implement a dedicated disk-space monitoring tool, you can write a script that watches disk space and sends an email alert when the threshold is reached.
- Run df -h on your server and find the partition you want to watch; edit the df /disk2 line in the script to whichever partition that is. The threshold is defined by: if [ $usep -ge 80 ]; then
- Create a file called diskspacewatch, add the below, and save it:
#!/bin/sh
df /disk2 | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
do
  echo $output
  usep=$(echo $output | awk '{ print $1 }' | cut -d'%' -f1 )
  partition=$(echo $output | awk '{ print $2 }' )
  if [ $usep -ge 80 ]; then
    echo "SSD disk on slave database server running out of space!! \"$partition ($usep%)\" on $(hostname) as of $(date)" |
    mail -s "Alert: Almost out of disk space $usep%" nick@nicktailor.com
  fi
done
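The percentage parsing in the script can be exercised on its own. Here is the same idea wrapped in a function you can test against a sample df line (usep_of is my own illustrative name; it reads field 5, the Use% column, straight from a raw df line):

```shell
# Extract the used-percentage from one line of `df` output
# (field 5 is the "Use%" column, e.g. "82%").
usep_of() {
    echo "$1" | awk '{ print $5 }' | cut -d'%' -f1
}

line="/dev/sda1 118G 97G 15G 82% /disk2"
usep_of "$line"   # prints 82
if [ "$(usep_of "$line")" -ge 80 ]; then
    echo "would alert"
fi
```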
- Now set up a cron that runs this script every hour, or however often you want:
- 0 * * * * /path/diskspacewatch
That's my tutorial on a MySQL multi-threaded backup and restore setup. If you have questions, email nick@nicktailor.com
How to add a remote management ip to a bridged openbsd firewall
Adding a Management IP to an OpenBSD Bridged Firewall
I am writing this because sometimes people set things up without a remote management IP on their servers and decide to add one later, only to find that the firewall is now running in a production environment and has become more critical than it was originally supposed to be.
1. Ensure that the IP you have chosen is configured for the correct VLAN.
2. Edit /etc/hostname.rl0
Note: On a bridged firewall there will usually be two interfaces, one internal and one external. If you cat /etc/pf.conf you should see which interface is defined as external; the hostname file for that interface (here /etc/hostname.rl0) is the one you edit to add the remote management IP.
less /etc/hostname.rl0
up
inet 192.168.1.35 255.255.255.0
or
inet 192.168.1.35/24 (this one seems to work better in my experience)
up
3. Edit /etc/mygate (This is where you configure the gateway the management ip will be using.)
less /etc/mygate
192.168.1.1
4. Edit /etc/rc.conf
less /etc/rc.conf (sshd_flags should look as illustrated below)
…
sshd_flags=”” # for normal use: “”
5. Edit /etc/ssh/sshd_config
less /etc/ssh/sshd_config (Ensure that you allow root login or keys if you are using keys)
…
PermitRootLogin yes
6. You will also need to ensure that the firewall rules in pf.conf allow the traffic in on the interface, on port 22 for SSH, for both TCP and UDP.
vi /etc/pf.conf
add something like the example below.
Example
pass in log quick on $external_interface proto tcp from $allowed_hosts to 192.168.1.35 port 22 keep state
pass in log quick on $external_interface proto udp from any to 192.168.1.35
7. Reboot the server.
In a production environment you probably want to avoid rebooting the firewall; you can follow the steps below to achieve this.
Adding Management IP without Rebooting server
1. Check to see which interface is the external_interface in /etc/pf.conf.
In this case we will assume it is rl0:
2. Run these from the command line. This will set the IP and route on the fly, not requiring a reboot:
ifconfig rl0 inet <ip address> <netmask>
route add default <gateway>
(For reference, the Linux equivalents would be:)
route add default gw 192.168.1.254 eth0
or
ip route add default via <gateway>
Note: if you make a mistake and add the wrong gateway, bringing everything down, you can delete the gateway on the fly as well, using something similar to the example below.
————————————————————————————
How to delete the gateway on the fly if you make an error
Example
ip route delete default
————————————————————————————–
3. Add this to /etc/hostname.rl0
vi /etc/hostname.rl0 add line: inet <ip address> netmask <netmask>
4. Add your gateway.
vi /etc/mygate add line: <gateway>
5. Modify the SSH configuration.
vi /etc/ssh/sshd_config Set to allow root and password logins
6. Run SSH.
/usr/sbin/sshd
7. Do not forget to update the firewall rules in /etc/pf.conf to allow traffic in on the external interface on port 22.
Example
pass in log quick on $external_interface proto tcp from $allowed_hosts to 192.168.1.35 port 22 keep state
pass in log quick on $external_interface proto udp from any to 192.168.1.35
8. You should now be able to test the connection with a telnet command from outside and see if you can connect to SSH remotely:
telnet 192.168.1.35 22
Cheers
Hope this has helped you; email nick@nicktailor.com if you have questions.
How to setup your own cloud SAN storage at home using FreeNAS and a VM
How to setup your own Cloud SAN storage at home using FREENAS and a VM
So, I am writing this blog post for others who want to understand how SAN and NAS storage systems work. I had a colleague of mine suggest that I should setup my own NAS at home. I decided that it was a great idea. This tutorial will teach you how to setup a NAS using virtual machines for the purposes of testing and learning.
It will help you understand the fundamentals of what is involved in setting up a NAS and how they work. There are commercial applications such as NETAPP that provide far more complexity in terms of functionality, however the principles are pretty much the same, and relatively easy to pick up on variations should you need to learn them on the fly in a job.
- One of the key benefits of setting up a NAS in a VM is that it allows you to set up a sort of home-based storage cloud solution.
- You may add multiple USB storage devices on which you can create volumes of various filesystem types, essentially giving you the cloud-type solution.
- This will also allow the various devices you have to mount the storage volumes.
- At the moment VirtualBox allows up to 2 terabytes; I'm sure this will increase as future revisions come to pass.
Phase 1
Create a new virtualbox VM for freeNAS
- Download freeNAS
http://www.nicktailor.com/files/FreeNAS-8.3.0-RELEASE-x86.iso
- Create a new VM in virtual box with the following configurations
- Ensure that you have over 10 GB of HD space and at least 512 MB of RAM for your VM (otherwise your volume creations will fail)
- Network Configuration
i. Nic 1 (NAT)
ii. Nic 2 (Bridged Adapter, use wireless if its supported)
- On the Settings tab you should enable the USB controller. You will need to install an additional package for VirtualBox to enable this feature, as it doesn't come out of the box: download and install the Oracle VM VirtualBox Extension Pack that matches your version of VirtualBox. The advantage is that it allows you to add multiple USB devices and use them as storage.
- Next, click the Storage tab. Under the IDE controller there should be an "Empty" entry with a CD icon; on the far right of that, under Attributes, "CD/DVD Drive" has another CD icon. Click it, find the FreeNAS ISO you downloaded, and click OK.
- Now click OK, go back, and start the VM.
Phase 2 Configuring FreeNAS
- Once FreeNAS is installed you will get a configuration screen. Leave the primary interface alone, as it is most likely DHCP'd already.
- You should be able to ping externally from the FreeNAS VM; if not, look at your network configuration.
- Typically the primary interface will use a non-routable address such as 10.0.2.10, which is NAT'd externally.
- If your home network is like most people's, you're probably using a DHCP pool of 192.168.1.1-100 addresses. Configure the second NIC with an address and netmask from your home network range (e.g. 192.168.1.138 / 255.255.255.0).
- You will not need to set a default gateway, since the primary interface is already using one.
- Once you have done this, go to the command prompt on your host machine (no longer in the VM, but the physical machine that runs VirtualBox) and see if you can ping the address you just assigned in FreeNAS. If you can't ping the FreeNAS IP, try restarting the VM.
- Next, ensure that your router or firewall allows port 80 on the IP address you assigned to FreeNAS.
Phase 3 setting up ISCSI SAN with freeNAS
Note: any client machine you want to connect to the iSCSI SAN/NAS must have an iSCSI initiator; if the client is a VM, ensure that you have added the iSCSI controller under the settings of that VM.
- Open a browser and log in to the FreeNAS web GUI.
- Navigate to Storage > Active Volumes:
- Click Volume Manager.
- Enter a Volume Name, select disk(s), select Filesystem type (ZFS has some neat features), then click Add Volume:
- Click the Create ZFS Volume button:
- Enter a ZFS Volume Name, specify the volume size, then click Add ZFS Volume:
- Navigate to Services > Core, and turn on iSCSI:
- Click the wrench icon next to iSCSI.
- Navigate to iSCSI > Portals, click Add Portal. Select 0.0.0.0 as the IP Address (this means it will listen on all IPs). Click OK:
- Navigate to iSCSI > Initiators, click Add Initiator. Leave ALL in both fields to allow all client connections from any network:
- Navigate to iSCSI > Authorized Access, click Add iSCSI Authorized Access. Enter a User and Secret:
Note: If you don't want to create a user for your home SAN (it makes life easier), skip the Authorized Access step and, under Target Global Configuration, just select:
Discovery Auth Method: Auto
Discovery Auth Group: None
- Navigate to iSCSI > Target, click Add Target. Enter a Target Name and Alias. Select the Portal and Initiator Group IDs, and the Authentication Group number:
- Navigate to iSCSI > Device Extents, click Add Extent. Enter an Extent Name and select a Volume:
- Navigate to iSCSI > Associated Targets, click Add Extent to Target. Select a Target and Extent to map together:
- Navigate to iSCSI > Target Global Configuration. Customise your Base Name, select CHAP for Discovery Auth Method and select your Discovery Auth Group. Leave other settings unless you know what you're doing. Scroll to the bottom of the page to Save:
- All done. Now you need to connect to the iSCSI SAN using an iSCSI Initiator.
- If you get any connection problems, try restarting the iSCSI service here:
- If you don't trust the GUI and want to confirm the service has definitely started, you can run service -e at the shell prompt and look for istgt:
Cheers! Hope this helped you understand how it all works; email nick@nicktailor.com if you have questions.
How to setup grub on a USB stick
How to setup grub manually
- When you have a GRUB error, you can simply reinstall GRUB.
- In this tutorial I will explain how to achieve this.
- You will learn how to set up a GRUB disk and install GRUB.
Before we start: How Grub Calls Stuff
This is the only odd thing in GRUB: it doesn't name disks the way we are used to. But don't worry, it's not as weird as devfs (/dev/boo/lun/foo/bar/../../disk/stuff/…/…/and/so/on).
It’s only a bit different:
- Grub uses brackets to declare a device
- The /dev/ part is not used
- device numbers and partitions are defined with numbers starting from 0
This example will show you how it works (it's easier to understand than it is to explain):
Linux standard GRUB
———————————–
/dev/hda1 (hd0,0)
/dev/hda2 (hd0,1)
/dev/hdd1 (hd3,0)
Even easier: a=0, b=1, c=2, d=3, and the partition number is N-1.
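The mapping above can be sketched as a small shell function; to_grub is my own illustrative name, not a real GRUB command, and it only handles /dev/hdXN and /dev/sdXN style names under the a=0, N-1 rule:

```shell
# Convert a Linux device name like /dev/hda2 into GRUB's (hd0,1) notation.
# to_grub is an illustrative helper, not part of GRUB itself.
to_grub() {
    d=${1#/dev/}
    d=${d#hd}; d=${d#sd}          # strip the hd/sd prefix
    letter=${d%%[0-9]*}           # drive letter: a, b, c, ...
    part=${d#"$letter"}           # partition number
    idx=$(( $(printf '%d' "'$letter") - 97 ))   # 'a' is ASCII 97, so a=0
    echo "(hd$idx,$(( part - 1 )))"             # partitions count from 0
}

to_grub /dev/hda1   # (hd0,0)
to_grub /dev/hdd1   # (hd3,0)
```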
NEXT STEP
- Put your USB stick into a working Linux computer (don't mount it). On my computer it is typically recognized as /dev/sda. If your USB stick is recognized differently (and/or your hard disk is /dev/sda), then you must replace every instance of /dev/sda below with whatever your USB stick is actually recognized as. This is, well, kinda important.
- Create a single partition. I typically do this with:
- fdisk /dev/sda
Feel free to create a FAT32 partition if you want to use this stick on different machines with different operating systems (including Windows).
- Mount the partition you just created:
- sudo mount /dev/sda1 /mnt
- Install GRUB:
- sudo grub-install --no-floppy --root-directory=/mnt /dev/sda
Note: make note of --root-directory=/mnt (this should match the directory where you mounted your USB stick).
Also, if you get an error saying you need to use blocklists, you can add --force to the grub-install line:
- sudo grub-install --no-floppy --force --root-directory=/mnt /dev/sda
That should have created a /mnt/boot/grub directory.
- Now comes the hard part. You need a grub configuration that will make sense for the server you are booting.
If you are using grub1…
Hopefully you have a copy of the menu.lst file from the server. In that case, simply copy it to /mnt/boot/grub/ and you are ready to go. Otherwise, you’ll need to craft one. Below is a sample.
# uncomment these lines if you want to send grub to a serial console
#serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
#terminal serial
default 0
timeout 5
color cyan/blue white/blue
# simple setup
title Debian GNU/Linux, kernel 2.6.26-2-686
root (hd0,5)
kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/hda6 ro
initrd /boot/initrd.img-2.6.26-2-686
# here’s a more complicated one
title Debian GNU/Linux, kernel 2.6.26-2-vserver-amd64
root (hd0,0)
kernel /vmlinuz-2.6.26-2-vserver-amd64 root=/dev/mapper/vg_pianeta0-root \
ro console=ttyS0,115200n8 cryptopts=target=md1_crypt,source=/dev/md1 \
cryptopts=target=md2_crypt,source=/dev/md2,lvm=vg_pianeta0-root
initrd /initrd.img-2.6.26-2-vserver-amd64
If you are using grub2…
grub-mkconfig -o /mnt/boot/grub/grub.cfg
Then edit. You might want something like:
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal_input serial
terminal_output serial
insmod raid
insmod mdraid
insmod part_gpt
set default=0
set timeout=5
menuentry "Debian GNU/Linux, with Linux 2.6.32-trunk-vserver-amd64" --class debian --class gnu-linux --class gnu --class os {
    set root='(hd0,1)'
    search --no-floppy --fs-uuid --set 7682a24c-b06f-456b-b3d4-bcb7294d81e2
    echo Loading Linux 2.6.32-trunk-vserver-amd64 ...
    linux /vmlinuz-2.6.32-trunk-vserver-amd64 root=/dev/mapper/vg_chicken0-root ro quiet
    echo Loading initial ramdisk ...
    initrd /initrd.img-2.6.32-trunk-vserver-amd64
}
- Put the USB stick into the target computer, configure it to boot from the USB stick via bios, and then you should see the GRUB menu come up.
- It’s possible that your computer will just boot with your menu.lst file. In that case – congrats! See the last step below to figure out how to ensure it can boot without your USB stick. On the other hand, if it fails, you’ll need to experimentally figure out which disk has which partitions and which kernels. Fortunately grub supports tab completion which makes this job easier:
- When the grub menu comes up, pick from the menu list the most likely candidate and press ‘e’ for edit.
- You should see the various lines from the stanzas for the list item you picked (i.e. a root, kernel, and initrd stanza). You may use the up/down arrows to select a line. If that doesn’t work, look for hints on the screen for how to get around. Going left and right on a given line may require Ctl-b and Ctl-f for back and forward. You also may need to use the delete key (not backspace) to delete characters
- Select a line, delete the characters from the end of the line, and then try tab completion with various options. For example, on the root line try typing simply:
root (
- And then tab. You should be presented with the available disks (numbered 0 and up). Try typing one of the disks and hitting tab and you should be presented with the available partitions. Continue this process until you find the one that seems right.
- When you are done and you have successfully booted, you can ensure that boots will work without the usb key by installing grub on all available disks:
- grub-install /dev/sda
grub-install /dev/sdb
If you have questions email nick@nicktailor.com
How to find which domain or vhost is spamming
How to find a spamming script hiding under a vhost if you're running qmail and Parallels Plesk
I am writing this because I'm sure everyone has run into the issue of mail queues running high for no apparent reason due to an excessive amount of spam in the queue.
This is written on the premise that you're running qmail as your MTA (mail server) and Parallels Plesk; however, the principle is the same with any mail server, and you can apply this approach to any setup.
This is also written on the basis that you're running Red Hat.
- We will also set up qmHandle, which will help you manage qmail more efficiently.
- What we are setting up is a sendmail wrapper that logs mail X-header information to a log file.
- This will give you the ability to see which domain or vhost is sending out a large amount of mail through scripts.
- Once you determine which domain is sending a high volume of mail through the mail server, you can go to the document root of that vhost or domain and start grepping through the files for anything suspicious.
wget http://www.nicktailor.com/files/qmhandle-1.3.2.tar.gz
tar -zxvf qmhandle-1.3.2.tar.gz
Available parameters:
-a : try to send queued messages now (qmail must be running)
-l : list message queues
-L : list local message queue
-R : list remote message queue
-s : show some statistics
-mN : display message number N
-dN : delete message number N
-fsender : delete message from sender
-f're' : delete message from senders matching regular expression re
-Stext : delete all messages that have/contain text as Subject
-h're' : delete all messages with headers matching regular expression re (case insensitive)
-b're' : delete all messages with body matching regular expression re (case insensitive)
-H're' : delete all messages with headers matching regular expression re (case sensitive)
-B're' : delete all messages with body matching regular expression re (case sensitive)
-t're' : flag messages with recipients in regular expression 're' for earlier retry (note: this lengthens the time a message can stay in the queue)
-D : delete all messages in the queue (local and remote)
-V : print program version
Additional (optional) parameters:
-c : display colored output
-N : list message numbers only
(to be used either with -l, -L or -R)
How I used it
./qmHandle -l (list all the queues)
./qmHandle -L (list the individual message IDs sitting in the queue)
./qmHandle -m<ID> (look at the actual contents of a message)
e.g. ./qmHandle -m12345
The reason you want to look at the contents: later in this tutorial you will attempt to locate the culprit spamming script, once you have localized the area it is coming from.
How to setup the Send mail wrapper to localize which vhost/domain the spam is emenating from
1) Create a /var/qmail/bin/sendmail-wrapper script with the following content:
#!/bin/sh
(echo X-Additional-Header: $PWD ;cat) | tee -a /var/tmp/mail.send | /var/qmail/bin/sendmail-qmail "$@"
Note, it should be two lines including ‘#!/bin/sh’.
2) Create a log file /var/tmp/mail.send and grant it “a+rw” rights; make the wrapper executable; rename old sendmail; and link it to the new wrapper:
~# touch /var/tmp/mail.send
~# chmod a+rw /var/tmp/mail.send
~# chmod a+x /var/qmail/bin/sendmail-wrapper
~# mv /var/qmail/bin/sendmail /var/qmail/bin/sendmail-qmail
~# ln -s /var/qmail/bin/sendmail-wrapper /var/qmail/bin/sendmail
3) Wait an hour or so, then examine the /var/tmp/mail.send file. There should be lines starting with "X-Additional-Header:" pointing to the domain folders where the scripts which sent the mail are located.
You can see all the folders from where mail PHP scripts were run with the following command:
~# grep X-Additional /var/tmp/mail.send | grep `cat /etc/psa/psa.conf | grep HTTPD_VHOSTS_D | sed -e 's/HTTPD_VHOSTS_D//' `
If you see no output from the above command, it means that no mail was sent using the PHP mail() function from the Plesk virtual hosts directory.
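To spot the busiest vhost quickly, the same log can be tallied per directory. A sketch (top_senders is my own name; it assumes the "X-Additional-Header: <directory>" line format the wrapper writes):

```shell
# Count sendmail-wrapper log entries per working directory, busiest first.
# top_senders is an illustrative helper; adjust the log path to yours.
top_senders() {
    grep '^X-Additional-Header:' "$1" \
        | awk '{ print $2 }' \
        | sort | uniq -c | sort -rn
}

# Usage against the wrapper's log:
# top_senders /var/tmp/mail.send
```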
4) Once you have located the domain that appears to be sending the heavy mail, you can change the mail server back to sendmail by running the following commands below.
~# rm -f /var/qmail/bin/sendmail
~# ln -s /var/qmail/bin/sendmail-qmail /var/qmail/bin/sendmail
Tracking down the culprit script
Now that you have localized the area the spam script is most likely emanating from, and you are inside the document root with tons of files and directories, you want to sift through them looking for clues. Earlier you viewed a message ID using qmHandle, which showed you the contents of the message.
Note: keep in mind that there is no exact science to tracking down the culprit; it takes practice and determination to find them sometimes. What I am outlining below is to get you started in the right direction, and is usually successful.
1. Type "./qmHandle -m12345" to show the contents of the spam message, then highlight a section of the first line, something that is most likely not going to be in the application's web files.
2. While inside the document root of the domain, grep for the spam string as indicated below. This sometimes takes a few goes, and you may need to go through several message IDs and strings before you locate the culprit.
grep -R 'Viagra blah blah blah' /var/www/vhosts/* (this searches recursively for the string in all directories under the specified parent).
That's the end of my tutorial. I hope this helps you; if you have questions, email nick@nicktailor.com
Cheers
How to compile PHP and run it as a CGI binary
- This blog post was written using Red Hat Enterprise Server; however, the principles apply to just about any Linux distro out there.
- Generally what I do is check the OS repository to see which version of PHP it will upgrade to when you finally decide to do a server-wide upgrade, and then download the source files for that version. In this case we are going to do PHP 5.3.17.
- Download from http://www.php.net/releases/
- Log into your server, create a directory, cd into it, then run the following:
wget http://www.php.net/get/php-5.3.17.tar.gz/from/a/mirror
- Next you want to untar the file: tar -zxvf <filename you downloaded>
- Next we need the configure flags PHP is currently using. The easiest way to get them is to find a domain that has PHP running and set up a phpinfo.php containing the following:
<?php phpinfo(); ?>
Save that file and then view it through your browser http://domain.com/phpinfo.php
You should see a PHP info page. If you do not see it, it probably means your owner permissions are incorrect.
Example
-rw-r--r-- 1 root root 19 Nov 7 14:32 phpinfo.php (incorrect)
-rw-r--r-- 1 tailor tailor 19 Nov 7 14:32 phpinfo.php (correct)
6. At the top of that phpinfo page you should see a section called "Configure Command", which looks like the example below.
Configure Command | './configure' '--disable-fileinfo' '--disable-pdo' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-gd-native-ttf' '--enable-libxml' '--enable-magic-quotes' '--enable-mbstring' '--enable-soap' '--enable-sockets' '--enable-zend-multibyte' '--prefix=/usr' '--with-bz2' '--with-curl=/opt/curlssl/' '--with-freetype-dir=/usr' '--with-gd' '--with-gettext' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-jpeg-dir=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libxml-dir=/opt/xml2' '--with-libxml-dir=/opt/xml2/' '--with-mcrypt=/opt/libmcrypt/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-mysqli=/usr/bin/mysql_config' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--with-zlib' '--with-zlib-dir=/usr' |
You want to copy everything from './configure' onward.
From above example
'./configure' '--disable-fileinfo' '--disable-pdo' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-gd-native-ttf' '--enable-libxml' '--enable-magic-quotes' '--enable-mbstring' '--enable-soap' '--enable-sockets' '--enable-zend-multibyte' '--prefix=/usr' '--with-bz2' '--with-curl=/opt/curlssl/' '--with-freetype-dir=/usr' '--with-gd' '--with-gettext' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-jpeg-dir=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libxml-dir=/opt/xml2' '--with-libxml-dir=/opt/xml2/' '--with-mcrypt=/opt/libmcrypt/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-mysqli=/usr/bin/mysql_config' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--with-zlib' '--with-zlib-dir=/usr'
7. Now you will need to make some modifications to this. I am running my PHP as a CGI already, but if you are running it as a module you may see mysql disabled in the configure flags even though it is enabled. This is because when PHP runs as a module, the various features are loaded as module extensions alongside PHP, and the configure flags on the phpinfo page will not reflect that.
So these will be the primary flags you need to ensure are working.
Note: the flags I am using are for a 64-bit OS; the flags are different for a 32-bit OS.
This is my Example of flag functions in php I wanted to work. I copied this into a text editor and made the changes I needed accordingly, outlined below.
'./configure' '-enable-yum' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--cache-file=../config.cache' '--with-config-file-path=/etc' '--without-config-file-scan-dir' '--enable-force-cgi-redirect' '--disable-debug' '--enable-pic' '--disable-rpath' '--enable-inline-optimization' '--with-bz2' '--with-curl' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-gd' '--enable-gd-native-ttf' '--with-gettext' '--with-ncurses=shared' '--with-gmp' '--with-iconv' '--with-jpeg-dir=/usr' '--with-png' '--with-xml' '--with-libxml-dir=/usr' '--with-expat-dir=/usr' '--with-dom=shared,/usr' '--with-dom-xslt=/usr' '--with-dom-exslt=/usr' '--with-xmlrpc=shared' '--with-pcre-regex=/usr/include' '--with-zlib' '--with-layout=GNU' '--enable-bcmath' '--enable-exif' '--enable-ftp' '--enable-magic-quotes' '--enable-sockets' '--enable-sysvsem' '--enable-sysvshm' '--enable-track-vars' '--enable-trans-sid' '--enable-yp' '--enable-wddx' '--with-pear=/usr/share/pear' '--with-imap=shared' '--with-imap-ssl' '--with-kerberos' '--with-mysql=/usr' '--with-unixODBC=shared,/usr' '--enable-memory-limit' '--enable-shmop' '--enable-calendar' '--enable-mbstring' '--enable-mbstr-enc-trans' '--enable-mbregex' '--with-mime-magic=/usr/share/file/magic.mime' '--enable-dba' '--enable-db4' '--enable-gdbm' '--enable-static' '--with-openssl'
- You will notice the following key thing
- You want to remove “–with-apxs2=/usr/sbin/apxs” – you can only compile one SAPI + cli at the same time. You can’t both compile CGI and the apache2 SAPI at the same time. http://bugs.php.net/bug.php?id=30682&edit=1
- You want to ensure that “–without-mysql” is changed to “–with-mysql”
- You want to ensure that you have ‘–enable-static’ (this will enable the static libraries)
- Enable any other flags you may want to use.
8. Create a file called config.sh on the server in the directory where you untarred the source php files
9. Copy the configure flags all on one line in that files and save it
10. Now run “sh config.sh” (This is will test the configure flags to see if the server can support them and it will fail on any that need dependancies installed. When you get an error on a package you generally just want to search for the development packages of the failed packages. Use Yum to install them and then run “sh config.sh” again and keep going until it finishes properly. I have listed a few common packages people run into below.
Note: sometimes you will run into path issues, like it cant find something in /usr/bin etc, what I do is do a locate of the file its looking for, usually they are .so files and simply create a simlink to where its attempting for the missing file so the configure test can complete.
yum install gcc4-c++
yum install gdbm-devel
yum install libjpeg-devel
yum install libpng-devel
yum install freetype-devel
yum install gmp-devel
yum install libc-client-devel
yum install openldap-devel
yum install mysql-devel
yum install ncurses-devel
yum install unixODBC-devel
yum install postgresql-devel
yum install net-snmp-devel
yum install bzip2-devel
yum install curl-devel
11. Once you have completed the test of the configure flags you, it will generate files at the end.
12. Now you want to run “Make” <–(DO NOT DO “MAKE INSTALL”) This will take some time to complete, if it completes successfully, you will have created a cgi binary file that will be located in “sapi/cgi/php-cgi”
Phase 2 Running Your new php cgi on your vhost
- Copy your new php-cgi binary to the cgi-bin directory or script-alias directory within your domain or vhost. This is usually “/home/username/www/cgi-bin”
Fix the permissions on the new cgi bin so that its running as the apache user or suexec user your are using for your vhost.
Ie. -rw-r–r– 1 apache:apache 19 Nov 7 14:32 php-cgi(correct)
-rw-r–r– 1 tailor:tailor 19 Nov 7 14:32 php-cgi(correct)
- Next ensure the file has executable permissions “chmod +x php-cgi“
- Now go back into the document root folder of the domain or vhost and create a .htaccess file with the following lines below and save the file.
AddHandler php-cgi .php .htm
Action php-cgi /cgi-bin/php-cgi
- As soon as you do the above the site will be using the new cgi php. If you reload the phpinfo.php page now you should see the server API read as:
Server API | CGI/FastCGI |
Note: if you want to disabled the cgi php simply comment out the lines in the .htaccess file. The cool thing about this is you don’t need to reload apache for php changes anymore, since its running as a cgi. You should check the phpinfo.php page and ensure that all the flags you wanted are listed on that page, if they are not, you either missed the flag in your configure or did not compile fully.
How to upgrade wordpress in a production environment
How to upgrade wordpress in a production environment
So I am writing this because I'm sure some people have run into, or will run into, this issue: you are running an older version of wordpress and have delayed the upgrade, and along with it you have delayed upgrading php past 5.1.6 because you're running redhat 5.
The problem with this type of upgrade is that it's not just a matter of upgrading wordpress; you will need to upgrade php to 5.2.x in order to use the latest version of wordpress. This can be a problem, since most people run php as a module and not as a cgi.
This means that you would have to upgrade php globally on the server, which could cause issues if you have not tested it, and you may have to roll back if it doesn't work. Please keep in mind you will need to do your own testing for your environments. However, if you want a solution that can give you virtually no downtime, then read on, as this is what I did when I was faced with the situation, and it worked flawlessly.
So what you can do is compile your own version of php and run it as a cgi just for that one vhost or domain that you want to test, do the upgrade inside wordpress or follow the upgrade instructions that wordpress gives you. I am going to outline how to do this in this blog post.
Phase 1 Compile your own php as CGI
- Generally what I would do is check the OS repository to see which version of php the next server-wide upgrade will bring, and then go and download the source files of that version. So in this case we are going to use php 5.3.17.
- Download from http://www.php.net/releases/
- Log into your server and created a directory and cd into it then run the wget below.
wget http://www.php.net/get/php-5.3.17.tar.gz/from/a/mirror
- Next you want to untar the file: tar -zxvf <filename you downloaded>
- Next we need to get the configure flags that php is currently using, the easiest way to get this is to find a domain that has php running and create a phpinfo.php file that contains the following
<?php phpinfo() ?>
Save that file and then view it through your browser http://domain.com/phpinfo.php
You should see a php info page. If you do not see it, it probably means the file's ownership permissions are incorrect.
Example
-rw-r--r-- 1 root root 19 Nov 7 14:32 phpinfo.php (incorrect)
-rw-r--r-- 1 tailor tailor 19 Nov 7 14:32 phpinfo.php (correct)
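A quick way to verify and correct this on the server (a minimal sketch; /tmp and the 'tailor' user are stand-ins for your own docroot and vhost user):

```shell
# A minimal sketch of checking and fixing permissions on phpinfo.php.
# /tmp and the 'tailor' user are stand-ins for your vhost's docroot and user.
cd /tmp
touch phpinfo.php
chmod 644 phpinfo.php                  # world-readable, owner-writable
stat -c '%U:%G %a %n' phpinfo.php      # show current owner, group and mode
# chown tailor:tailor phpinfo.php      # the actual ownership fix; needs root, so left commented
```

If the mode shows 644 and the owner matches your vhost user, the page should render.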
6. So at the top of that phpinfo page you should see a section called "Configure Command". It looks like what is below here.
Configure Command | './configure' '--disable-fileinfo' '--disable-pdo' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-gd-native-ttf' '--enable-libxml' '--enable-magic-quotes' '--enable-mbstring' '--enable-soap' '--enable-sockets' '--enable-zend-multibyte' '--prefix=/usr' '--with-bz2' '--with-curl=/opt/curlssl/' '--with-freetype-dir=/usr' '--with-gd' '--with-gettext' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-jpeg-dir=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libxml-dir=/opt/xml2' '--with-libxml-dir=/opt/xml2/' '--with-mcrypt=/opt/libmcrypt/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-mysqli=/usr/bin/mysql_config' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--with-zlib' '--with-zlib-dir=/usr' |
You want to copy everything inside that row, starting from './configure'.
From the above example:
'./configure' '--disable-fileinfo' '--disable-pdo' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-gd-native-ttf' '--enable-libxml' '--enable-magic-quotes' '--enable-mbstring' '--enable-soap' '--enable-sockets' '--enable-zend-multibyte' '--prefix=/usr' '--with-bz2' '--with-curl=/opt/curlssl/' '--with-freetype-dir=/usr' '--with-gd' '--with-gettext' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-jpeg-dir=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libxml-dir=/opt/xml2' '--with-libxml-dir=/opt/xml2/' '--with-mcrypt=/opt/libmcrypt/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-mysqli=/usr/bin/mysql_config' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--with-zlib' '--with-zlib-dir=/usr'
7. Now you will need to make some modifications to this. I am running my php as a cgi already, but if you are running it as a module, you may see in the configure flags that mysql is disabled even though it works. The reason is that when php runs as a module, the various features are loaded as shared extensions alongside php, and the configure flags on the phpinfo page will not reflect that.
So these will be the primary flags you need to ensure are working.
Note: the flags I am using are for a 64-bit OS; the flags are different for a 32-bit OS.
This is my example of the flags I wanted working in php. I copied this into a text editor and made the changes I needed, outlined below.
'./configure' '--enable-yum' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--cache-file=../config.cache' '--with-config-file-path=/etc' '--without-config-file-scan-dir' '--enable-force-cgi-redirect' '--disable-debug' '--enable-pic' '--disable-rpath' '--enable-inline-optimization' '--with-bz2' '--with-curl' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-gd' '--enable-gd-native-ttf' '--with-gettext' '--with-ncurses=shared' '--with-gmp' '--with-iconv' '--with-jpeg-dir=/usr' '--with-png' '--with-xml' '--with-libxml-dir=/usr' '--with-expat-dir=/usr' '--with-dom=shared,/usr' '--with-dom-xslt=/usr' '--with-dom-exslt=/usr' '--with-xmlrpc=shared' '--with-pcre-regex=/usr/include' '--with-zlib' '--with-layout=GNU' '--enable-bcmath' '--enable-exif' '--enable-ftp' '--enable-magic-quotes' '--enable-sockets' '--enable-sysvsem' '--enable-sysvshm' '--enable-track-vars' '--enable-trans-sid' '--enable-yp' '--enable-wddx' '--with-pear=/usr/share/pear' '--with-imap=shared' '--with-imap-ssl' '--with-kerberos' '--with-mysql=/usr' '--with-unixODBC=shared,/usr' '--enable-memory-limit' '--enable-shmop' '--enable-calendar' '--enable-mbstring' '--enable-mbstr-enc-trans' '--enable-mbregex' '--with-mime-magic=/usr/share/file/magic.mime' '--enable-dba' '--enable-db4' '--enable-gdbm' '--enable-static' '--with-openssl'
- You will notice the following key things:
- You want to remove "--with-apxs2=/usr/sbin/apxs" because you can only compile one SAPI (plus the cli) at a time; you can't compile both the CGI and the apache2 SAPI together. http://bugs.php.net/bug.php?id=30682&edit=1
- You want to ensure that "--without-mysql" is changed to "--with-mysql"
- You want to ensure that you have '--enable-static' (this will enable the static libraries)
- Enable any other flags you may want to use.
8. Create a file called config.sh on the server, in the directory where you untarred the php source files.
9. Copy the configure flags, all on one line, into that file and save it.
10. Now run "sh config.sh". This will test the configure flags to see if the server can support them, and it will fail on any that need dependencies installed. When you get an error on a package, you generally just want to search for the development package of the failed package; use yum to install it, then run "sh config.sh" again, and keep going until it finishes properly. I have listed a few common packages people run into below.
Note: sometimes you will run into path issues, e.g. configure can't find something in /usr/bin. What I do is locate the file it's looking for (usually they are .so files) and simply create a symlink at the path configure is checking, so the configure test can complete.
yum install gcc4-c++
yum install gdbm-devel
yum install libjpeg-devel
yum install libpng-devel
yum install freetype-devel
yum install gmp-devel
yum install libc-client-devel
yum install openldap-devel
yum install mysql-devel
yum install ncurses-devel
yum install unixODBC-devel
yum install postgresql-devel
yum install net-snmp-devel
yum install bzip2-devel
yum install curl-devel
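The symlink workaround from the note above looks roughly like this (a sketch; the library name and the /tmp paths are made up for illustration, substitute whatever file configure actually complains about):

```shell
# configure is checking for a library under one path, but locate shows the
# real file lives elsewhere; link it into the expected place.
mkdir -p /tmp/symdemo/real /tmp/symdemo/expected
touch /tmp/symdemo/real/libexpat.so                       # where the .so actually is
ln -sf /tmp/symdemo/real/libexpat.so /tmp/symdemo/expected/libexpat.so
ls -l /tmp/symdemo/expected/libexpat.so                   # now resolves for configure
```

In practice you would run `locate <missing file>` first and point the symlink at the real path it reports.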
11. Once the test of the configure flags completes, it will generate the build files at the end.
12. Now you want to run "make" (DO NOT run "make install"). This will take some time to complete; if it finishes successfully, you will have a cgi binary located at "sapi/cgi/php-cgi".
Phase 2 Running Your new php cgi on your vhost
- Copy your new php-cgi binary to the cgi-bin directory or script-alias directory within your domain or vhost. This is usually "/home/username/www/cgi-bin"
Fix the ownership on the new cgi binary so that it runs as the apache user or the suexec user you are using for your vhost.
Ie. -rw-r--r-- 1 apache apache 19 Nov 7 14:32 php-cgi (correct)
-rw-r--r-- 1 tailor tailor 19 Nov 7 14:32 php-cgi (correct for a suexec user)
- Next ensure the file has executable permissions: "chmod +x php-cgi"
- Now go back into the document root folder of the domain or vhost and create a .htaccess file with the following lines below and save the file.
AddHandler php-cgi .php .htm
Action php-cgi /cgi-bin/php-cgi
- As soon as you do the above, the site will be using the new cgi php. If you reload the phpinfo.php page now you should see the Server API read as:
Server API | CGI/FastCGI |
Note: if you want to disable the cgi php, simply comment out the lines in the .htaccess file. The cool thing about this is that you don't need to reload apache for php changes anymore, since it's running as a cgi. You should check the phpinfo.php page and ensure that all the flags you wanted are listed on that page; if they are not, you either missed the flag in your configure or did not compile fully.
Phase 3. Upgrading your wordpress
- Now that your php is running under the vhost for just this wordpress site, you can log into wordpress and see if it loads properly; it should load without issues. You can either do an update from inside wordpress, or do it the safer way (in my humble opinion) on the server as indicated below.
- Download the latest wordpress files to a directory inside the document root of the vhost.
- Backup your database. Backup ALL your WordPress files in your WordPress directory. Don't forget your .htaccess file.
- Go into the directory where you untarred the new files and delete the files and directories that you don't want to overwrite in the document root (ie wp-content, wp-images, wp-includes/languages etc.)
- Then run "cp -r * ../" from inside the new wordpress directory. This will copy everything from the new wordpress directory into the live document root directory.
- You should now be able to log into wordpress and see that it's upgraded and functioning correctly.
- Verify the backups you created are there and usable. This is essential.
- Deactivate ALL your Plugins.
- Ensure the first four steps are completed. Do not attempt the upgrade unless you have completed them.
- Delete the old WordPress files on your site, but DO NOT DELETE:
- wp-config.php file;
- wp-content folder; Special Exception: the wp-content/cache and the wp-content/plugins/widgets folders should be deleted.
- wp-images folder;
- wp-includes/languages/ folder (if you are using a language file, do not delete that folder);
- .htaccess file (if you have added custom rules to your .htaccess, do not delete it);
- robots.txt file (if your blog lives in the root of your site, ie. the blog is the site, and you have created such a file, do not delete it).
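The delete-then-copy steps above can be rehearsed on a throwaway tree before touching the live docroot (a sketch under /tmp; the directory names mirror a wordpress layout and are stand-ins only):

```shell
# Stand-in for the live document root, with content we must keep
mkdir -p /tmp/wpdemo/docroot/wp-content
echo "live data" > /tmp/wpdemo/docroot/wp-content/keep.txt
# Stand-in for the freshly untarred new wordpress files inside the docroot
mkdir -p /tmp/wpdemo/docroot/new-wp/wp-admin /tmp/wpdemo/docroot/new-wp/wp-content
echo "new code" > /tmp/wpdemo/docroot/new-wp/wp-admin/index.php
# Delete the dirs we must NOT overwrite from the new tree...
rm -rf /tmp/wpdemo/docroot/new-wp/wp-content
# ...then copy the rest over the live docroot, exactly like "cp -r * ../"
cd /tmp/wpdemo/docroot/new-wp && cp -r * ../
```

Afterwards the new wp-admin files sit in the docroot while the live wp-content is untouched, which is the whole point of pruning the new tree first.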
So now you can leave wordpress running on the cgi php until you pick a date to do a global php server update, at which time you can comment out the .htaccess lines and it will revert to using the global php version as a module.
Hope this helped you if you have questions email nick@nicktailor.com
How to install Gnome 3 on Ubuntu 12.04 LTS
How to install Gnome 3 on Ubuntu 12.04 LTS
If you are reading this article, chances are that you have tried the Unity interface on Ubuntu. Although Canonical has done a great job with the development of Unity, some of us still prefer to use Gnome as a default GUI. In addition, the Gnome team has also done an excellent job improving Gnome and released this as Gnome 3. Since Gnome 3 comes with both the classic (similar to Gnome 2) and the new Gnome 3 interface, I decided to focus on installing Gnome 3 in this article.
Installing Gnome 3
Before we continue, it is worth mentioning that there is a gnome package for Gnome in the default Ubuntu repository; however, from what I understood from several articles, this version is outdated and does not include all the beauty that is included in the latest Gnome 3 release. So you may want to skip installing the default package from the repository.
The good news is that installing the latest Gnome 3 on Ubuntu 12.04 is extremely easy. Just copy-paste the following lines for the latest release from the Gnome team into a terminal (press Ctrl-Alt-T to open a terminal window):
sudo add-apt-repository ppa:gnome3-team/gnome3
sudo apt-get update
sudo apt-get install gnome-shell
Now be sure to reboot your computer and when you are prompted with your login screen you have the following additional options (click on the little Ubuntu icon next to your login name):
I recommend using the first option, Gnome. However if you are interested in going back to a familiar environment, feel free to choose one of the two Gnome Classic options. You can log in and log out to try the different versions.
Gnome 3 Shell Extensions
One of the great new features of Gnome 3 is the possibility to add “shell extensions”. These are small user interface elements which can improve the overall user experience.
To install a shell extension, visit the Gnome Extensions website with your browser (the default Firefox works fine for this) and install extensions by switching the "ON/OFF" button to "ON" (you can find these buttons on the individual extension pages, in the upper left corner).
You may also want to consider installing the Gnome Tweak Tool which will give you greater control over your shell extensions and several other Gnome settings. You can install this tool directly from the Ubuntu Software Repository, or by copy-pasting the following lines into a terminal:
sudo apt-get install gnome-tweak-tool
You can now find this tweak tool by searching for “Advanced Settings” in your applications or in System Tools menu.
Recommended Shell Extensions
Experiment and try out some shell extensions. Personally, I recommend at least trying out and activating/installing the following shell extensions:
Alternatively, if you prefer to install a small collection of popular shell extensions in one go (including most of the listed above) you can copy-paste the following lines in a terminal:
sudo add-apt-repository ppa:ricotz/testing
sudo apt-get update
sudo apt-get install gnome-shell-extensions-common
And once you have finished installing extensions, visit the Installed Extensions page on the Gnome Extensions website or the "Shell Extensions" option in the Gnome Tweak Tool. There you will be able to see, enable/disable and customize settings of the individual extensions from the collection.
An important note about using Gnome shell extensions: Unfortunately any installed shell extension will not automatically be updated when newer versions are released. You will need to manually remove and reinstall any shell extension which conflicts with future Gnome 3 or Ubuntu updates. This is something the Gnome team is aware of and (I hope) is working on fixing.
Getting Around In Gnome 3
As mentioned earlier in this article, there are a lot of exciting new features in Gnome 3. I decided to highlight the two features that have the most impact on my daily usage of Gnome.
Multiple Workspaces
One of the first things I noticed when I logged in was that there were only two workspaces in Gnome 3 (use the keyboard shortcut Ctrl-Alt-Up/Down arrows to navigate the workspaces). So my first impulse was to browse through a lot of different settings windows in the system settings and try to increase this number (I like working with four or more workspaces). However, I could not find where to change this anywhere. Only after watching this video did I understand that this is not needed anymore, as the number of active workspaces adapts dynamically to what you are actually using. Watch the video below to understand what I mean.
Searching For Apps / Switching Windows
Quickly accessing popular apps and opened windows is similar to how Unity does this; however, the approach from the Gnome team allows you to have more screen space for the apps and windows you have open. In the video below, Jason of the Gnome team explains what I mean.
How to setup a mysql replication check on your slave
How to setup replication check on your mysql slave with email alerting
I decided to write this because there are probably lots of people who have mysql replication set up master-slave, and the only way they can tell that replication is broken is by logging in to the slave and checking. I found this to be a pain and inefficient.
What this entails
- This is a perl script which will run every hour via cron
- It will send an email alert notifying you that replication is broken on the slave
- The script is smart enough to know if mysql is simply stopped on the master
- The script also checks whether mysql is running or not
- Open the file with nano -w /usr/sbin/replicationcheck.pl (either copy and paste the script below, or download it from the link below and edit as needed; this goes on your slave mysql server)
http://www.nicktailor.com/files/replicationcheck
- You need to ensure the file has execute permissions: chmod +x /usr/sbin/replicationcheck.pl
- Create the following file: 'touch /root/repl_check/show_slave_status.txt' (this file is used to pipe information to)
- Create the log file: 'touch /var/log/mysqlstopped.txt' (this will be used to log the results)
- Finally, you will need to set up a cron job to run this script. I ran mine every hour; run 'crontab -e' if you're adding this to root:
0 * * * * /usr/sbin/replicationcheck.pl (this runs at the top of every hour)
- Lastly, you can set up a bash script on the master db which sshes to your slave and outputs the results on the master, so you don't need to log into the slave to use the script.
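The one-time setup steps above, condensed into a runnable sketch (stand-in paths under /tmp are used so it can be tried without root; swap in /usr/sbin, /root and /var/log for real use):

```shell
# Stand-in base dir; on a real slave these live in /usr/sbin, /root and /var/log
BASE=/tmp/replcheck-demo
mkdir -p "$BASE/repl_check" "$BASE/log"
# the helper file the script pipes "show slave status" output into
touch "$BASE/repl_check/show_slave_status.txt"
# the log file the alert email reads from
touch "$BASE/log/mysqlstopped.txt"
# the script itself, which must be executable
touch "$BASE/replicationcheck.pl"
chmod +x "$BASE/replicationcheck.pl"
# the real cron entry would be: 0 * * * * /usr/sbin/replicationcheck.pl
ls -l "$BASE/replicationcheck.pl"
```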
Explaining the script
#!/usr/bin/perl
use Sys::Hostname;
use POSIX;
$timestamp = strftime "%b%e %Y %H:%M:%S", localtime;
$host = hostname;
$email_lock = "/root/email.lck";
$mysql_socket = "/var/lib/mysql/mysql.sock";
$show_slave_status = "/root/repl_check/show_slave_status.txt";
$pword = ""; # you will need to add this to the mysql lines below if you have a password
# This checks to see if the mysql socket exists. If it exists, it means mysql is running; if mysql's not running, we don't need to run the slave status check.
sub check_mysql_socket
{
# Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'
if (-e $mysql_socket)
{
print "MySQL running, will proceed\n";
return 1;
}
else
{
print "MySQL not running, will do nothing\n";
return 0;
}
}
# This is so the server doesn't repeatedly keep sending email alerts while replication is broken. It does this via an email lock file: if an alert has been sent, the lock file is created and no more mail goes out; if the lock file is absent, the alert will be sent. You can change this how you see fit; this is the way I learned, so I stuck with it. It's a sub, so we can use it as a function later down the script.
sub check_email_lock
{
if (-e $email_lock)
{
print "email file exists\n";
return 1;
}
else
{
print "no email file exists\n";
return 0;
}
}
#So this section basically continues from above by using the check_email_lock function: it sends the email if the lock file doesn't exist and then creates the lock file. You also define the email address you want to send to here. It also logs the results to a file called "mysqlstopped.txt" if there is a problem.
sub stop_mysql
{
print "**Show Slave Status**\n";
if (check_email_lock)
{
print "email lock exists, keep email lock, no email will be sent ";
}
else
{
system ("mail -s 'mysql stopped because replication is broken $host' nick\@nicktailor.com < /var/log/mysqlstopped.txt");
system ("touch $email_lock");
print "email sent, email lock created\n";
}
}
print $timestamp . "\n";
# if MySQL is running, then it moves on to the next phase, where it mines the information we need from mysql:
# - last io error
# - last sql errno
# - slave io running
# - slave sql running
if (check_mysql_socket)
{
system ("/usr/bin/mysql -Bse 'show slave status\\G' > $show_slave_status");
$last_io_errno = `grep Last_IO_Errno $show_slave_status | /usr/bin/awk '{print \$2}'`;
$last_sql_errno = `grep Last_SQL_Errno $show_slave_status | /usr/bin/awk '{print \$2}'`;
$slave_io_running = `grep Slave_IO_Running $show_slave_status | /usr/bin/awk '{print \$2}'`;
$slave_sql_running = `grep Slave_SQL_Running $show_slave_status | /usr/bin/awk '{print \$2}'`;
# trim newline character
chomp($last_io_errno);
chomp($last_sql_errno);
chomp($slave_io_running);
chomp($slave_sql_running);
print "last io error is " . $last_io_errno . "\n";
print "last sql errno is " . $last_sql_errno . "\n";
print "slave io running is " . $slave_io_running . "\n";
print "slave sql running is " . $slave_sql_running . "\n";
#So this piece is here because if you stop mysql on the master, the result on the slave from "show slave status" is a very specific one. You will need to test yours to see if the results match the code here, and edit it accordingly.
# Basically it's saying: if last_io_errno is greater than 0 and does not equal 2013, there is a problem; if last_sql_errno is greater than 0, there is also a problem. You get the idea; you can add as many conditions as you need. I found this to be the best combo, covering pretty much most scenarios.
if (($last_io_errno > 0) && ($last_io_errno != 2013))
{
&stop_mysql;
}
elsif ($last_sql_errno > 0)
{
&stop_mysql;
}
# if slave not running = Slave_IO_Running and Slave_SQL_Running are set to No
elsif (($slave_io_running eq "No") && ($slave_sql_running eq "No"))
{
&stop_mysql;
}
else
{
if (check_email_lock)
{
system ("rm $email_lock");
}
print "replication fine or master's just down, mysql can keep going, removed lock file\n";
}
}
else
{
print "#2 MySQL not running, will do nothing\n";
}
print "\n#########################\n";
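The grep/awk extraction the script's backticks perform can be tried in isolation against a saved sample of "show slave status\G" output (the values below are made up for illustration):

```shell
# Write a fake show_slave_status.txt like the one the script generates
cat > /tmp/show_slave_status.sample <<'EOF'
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                Last_IO_Errno: 0
               Last_SQL_Errno: 0
EOF
# Same pipeline as the perl backticks: grab the second whitespace field
slave_io=$(grep Slave_IO_Running /tmp/show_slave_status.sample | awk '{print $2}')
last_io_errno=$(grep Last_IO_Errno /tmp/show_slave_status.sample | awk '{print $2}')
echo "io=$slave_io errno=$last_io_errno"    # prints io=Yes errno=0
```

Note that grepping for Last_IO_Errno does not also match the Last_IO_Error line, which is why the field names are used in full.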
If the script works and replication is fine, you should see the output below, and no email will be sent to you:
#########################
Oct 3 2012 00:13:12
MySQL running, will proceed
last io error is 0
last sql errno is 0
slave io running is Yes
slave sql running is Yes
no email file exists
replication fine or master’s just down, mysql can keep going, removed lock file
#########################
If there is a problem it will look something like:
##########################
Oct 2 2012 02:02:54
MySQL running, will proceed
last io error is 0
last sql errno is 0
slave io running is No
slave sql running is No
**Show Slave Status**
no email file exists
Null message body; hope that’s ok
email sent, email lock created
Hope this helped you 🙂
Cheers
Nick Tailor
How to setup Arpwatch across multiple vlans
How to setup Arpwatch across multiple vlans
- Arpwatch is primarily used to avoid IP conflicts on your network
- This helps avoid accidental outages caused by a device arping for an IP that is already configured on another device (a duplicate IP configuration)
- This will also help track down gateway theft, e.g. if your gateway's IP is accidentally taken over by a compromised machine on your network
- Arpwatch keeps track of ethernet/IP address pairings. It syslogs activity and reports certain changes via email. Arpwatch uses pcap(3) to listen for arp packets on a local ethernet interface.
Installing ArpWatch on Debian
Note: You will need to ensure that your vlans are trunked, and you might need to tag them depending on your setup, so that arp request packets from arpwatch are not dropped if they go to another switch.
- Now you could download the source and compile it yourself; however, the debian repositories already have it, so this is pretty easy to install: "apt-get install arpwatch"
- Create an empty file for storing host information: "touch /var/lib/arpwatch/arp.dat". If this file already exists, move to the next step.
- You want to open up /etc/arpwatch.conf and configure the interfaces to listen on whichever subnets you want it to check.
Note: eth0 on the arpwatch server is the primary interface, so I used the second nic, plugged into a tagged vlan, so that my arpwatch server could send packets.
Add these lines for email alerts
eth1 -a -m admin@nicktailor.com
eth1.1 -a -m admin@nicktailor.com
eth1.2 -a -m admin@nicktailor.com
4. You can also exclude a specific subnet if you need to. I had to do this because we had multiple physical servers with unconfigured DRAC cards that all had the same IP address configured, so when we implemented arpwatch on our public-facing vlans we got a lot of alerts because of the DRACs. To get around it, we added the following lines in /etc/arpwatch.conf:
eth1 -a -z 192.168.0.0/255.255.0.0 -m admin@nicktailor.com
eth1.1 -a -z 192.168.0.0/255.255.0.0 -m admin@nicktailor.com
eth1.2 -a -z 192.168.0.0/255.255.0.0 -m admin@nicktailor.com
Note: Another way to do this is updating the startup script /etc/init.d/arpwatch, edit the line below as follows:
Additional Configuring
IFACE_OPTS="-i ${IFACE} -f ${IFACE}.dat $2 -z 192.168.0.0/255.255.0.0"
- If you want to make the config cleaner for the emails, for instance if you want alerts sent to multiple addresses, open up /etc/aliases
Add the line:
arp-alert: nick@nicktailor.com, admin@nicktailor.com
2. Next go back into /etc/arpwatch.conf and edit the lines from step 3 as indicated below. This way you don't have to keep updating the conf; if you want to add more email addresses in the future, just update your aliases file.
eth1 -a -z 192.168.0.0/255.255.0.0 -m arp-alert
eth1.1 -a -z 192.168.0.0/255.255.0.0 -m arp-alert
eth1.2 -a -z 192.168.0.0/255.255.0.0 -m arp-alert
How to Check your logs
So everything is logged in /var/log/syslog; if you want to filter out the arpwatch entries, this is a possible way to go about it. Mind you, you will need to adjust this grep based on whatever you are mining the log file for. Hope this was helpful.
cat syslog | grep -i arpwatch | grep -i reuse | cut -d” ” -f11 | sort | uniq
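Against a saved sample, the same mining looks like this; awk is used here instead of cut so the double space syslog puts in single-digit dates doesn't shift the field count (the log lines and the field number are illustrative, check them against your own log format):

```shell
# Two made-up arpwatch "reused old ethernet address" syslog entries
cat > /tmp/syslog.sample <<'EOF'
Oct  3 00:13:12 host1 arpwatch: reused old ethernet address 10.0.0.5 0:16:3e:aa:bb:cc (0:16:3e:dd:ee:ff)
Oct  3 00:14:02 host1 arpwatch: reused old ethernet address 10.0.0.5 0:16:3e:aa:bb:cc (0:16:3e:dd:ee:ff)
EOF
# field 10 is the IP address in this sample layout
grep -i arpwatch /tmp/syslog.sample | grep -i reuse | awk '{print $10}' | sort | uniq
```

The output collapses the repeated alerts down to the one offending IP, which is usually what you want when hunting a duplicate.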