jps and jstat for tomcat on jdk-1.6.0_24

After a recent upgrade to java version "1.6.0_24", jps and jstat appeared broken: they could no longer get monitoring information from the running Tomcat process.

By default java.io.tmpdir is /tmp; however, Tomcat usually uses its own temp directory, and that is where jps/jstat look for the hsperfdata_* directories. If java.io.tmpdir is not set, they look in /tmp, and if jps can't find the hsperfdata directory, it won't report anything.

jps/jstat do, however, let you specify java.io.tmpdir in case the target JVM places those directories in a different location.

So to get it to work:

$JAVA_HOME/bin/jps -J-Djava.io.tmpdir=/path/to/tomcat/temp -l
$JAVA_HOME/bin/jstat -J-Djava.io.tmpdir=/path/to/tomcat/temp -gc $PID
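
To confirm the performance data is where jps will look, check the hsperfdata directory named after the user Tomcat runs as (assuming here it runs as "tomcat"; substitute the actual user). It should contain a file named after the Tomcat PID:

ls -l /path/to/tomcat/temp/hsperfdata_tomcat/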

Django HTTPS Redirects

This works for both HTTP and HTTPS: a front-end web server such as nginx, which handles the actual request, sets a header when the request comes in via HTTPS. In the Apache configuration you then use mod_setenvif to set the HTTPS variable, which Django picks up and uses for redirection.

On the front-end nginx server that handles SSL, set the "X-Forwarded-Proto: https" header via:

  proxy_set_header X-Forwarded-Proto https;

On Apache, add the directive:

  SetEnvIf X-Forwarded-Proto https HTTPS=1

mod_wsgi treats the HTTPS variable as special and fixes up wsgi.url_scheme in the WSGI environment, which Django then uses for redirection.

This way you don't need to customize the Django stack.
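
A quick sanity check from the shell, assuming the SetEnvIf directive above is in place (example.com stands in for your Apache host): sending the header by hand should make Django treat the request as secure, so a page that normally answers with an HTTPS redirect should instead be served directly.

curl -sI -H "X-Forwarded-Proto: https" http://example.com/some/page/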

Check number of file descriptors opened by process

To check the file descriptors opened by a process:

lsof -p {PID} | awk '$4 ~ /^[0-9]/ {print $4}' | wc -l

Alternatively:

ls -1 /proc/{PID}/fd | wc -l
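
To watch the count over time, e.g. when hunting a descriptor leak, wrap either command in watch (5-second interval here is arbitrary):

watch -n 5 'ls -1 /proc/{PID}/fd | wc -l'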

To check all file descriptors opened by a user:

lsof -u {user} | awk '$4 ~ /^[0-9]/ {print $4}' | wc -l

By default the hard and soft limits for a single process are set to 1024 on Linux systems. To check the limits:

ulimit -Hn
ulimit -Sn

To increase them, edit the "/etc/security/limits.conf" file for the corresponding user:

{user} soft nofile 4096
{user} hard nofile 4096

To check that the new limits have been applied, run `ulimit -n` from a new shell.
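
You can also check a given user's limit without logging in interactively (assumes the user has a login shell):

su - {user} -c 'ulimit -n'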

Logfile '/var/log/kav/5.5/kav4mailservers/avstats.log' does not exist

To resolve the kluser cron error about a missing log file:

Logfile '/var/log/kav/5.5/kav4mailservers/avstats.log' does not exist

Modify the kluser crontab via `crontab -e -u kluser`, prepending "/bin/touch /var/log/kav/5.5/kav4mailservers/avstats.log;" to the existing command:

/bin/touch /var/log/kav/5.5/kav4mailservers/avstats.log; /opt/kav/5.5/kav4mailservers/bin/parse_avstat.pl -d -sd=/opt/kav/5.5/kav4mailservers/proc_avstat /var/log/kav/5.5/kav4mailservers/avstats.log
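
To confirm the change took, list the kluser crontab:

crontab -l -u kluser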

Plesk and SPF records

By default, the Plesk DNS zone template's TXT record does not include the server's hostname. If the host IP differs from that of the domains being hosted, you may want to update the default template as below:

<domain>. TXT v=spf1 +a +mx a:host.domain.tld -all
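
After updating the template and re-applying it to a domain, the published record can be verified with dig (domain.tld is a placeholder):

dig +short TXT domain.tld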

Backup and restore lvm data with dd

Recently I had to back up and restore data from a failing drive with LVM over RAID.

Luckily I had access to a backup of the current metadata configuration, located in "/etc/lvm/backup/".

Below is what the volume group looked like:

vg0 {
        id = "xvni1W-24Xu-dVoR-PlXh-gQvQ-62fL-QX64O3"
        seqno = 9
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 65536             # 32 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "9gbyhX-Owvj-u4Q4-wR1E-IEf2-gyUA-CJBCJK"
                        device = "/dev/md3"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1928892288   # 919.768 Gigabytes
                        pe_start = 384
                        pe_count = 29432        # 919.75 Gigabytes
                }
        }

        logical_volumes {

                lv0_sites {
                        id = "Sg1fYr-NTzr-8AA2-v29K-tcz5-rUMj-uRoXY1"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 1280     # 40 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }

                lv0_m {
                        id = "scNeN4-4bmg-Y6kq-zKuO-n8B8-s8mw-FTUYqk"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 12800    # 400 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 1280
                                ]
                        }
                }
        }
}

Now, to extract the data with dd, use the formula below, where `stripes` is the starting extent offset taken from the stripes array, and all values are in 512-byte sectors (this will only work for a linear stripe):

skip=$[extent_size*stripes+pe_start] count=$[extent_size*(extent_count-1)]

So to get the lv0_m data off of the volume:

dd if=/dev/sdb4 of=/opt/bak/lv0_m.iso bs=512 skip=$[65536*1280+384] count=$[65536*(12800-1)] conv=sync,noerror
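
Following the same formula, the lv0_sites volume (starting extent offset 0 and extent_count 1280, per the metadata above) could be pulled off the same source device:

dd if=/dev/sdb4 of=/opt/bak/lv0_sites.iso bs=512 skip=$[65536*0+384] count=$[65536*(1280-1)] conv=sync,noerror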

Once the image is created, it can be loop-mounted via:

mount -o loop -t ext3 /opt/bak/lv0_m.iso /mnt/lv0_m

You should then be able to see all the files in the mount point which can then be used for data restoration.

Tracking slow running mysql queries

First, enable logging of slow-running queries in the "mysqld" section of "my.cnf":

[mysqld]
log_slow_queries=/var/log/mysqld.slow.log
long_query_time=2

Once queries are being logged, you can pull the top 10 queries sorted by number of occurrences via:

mysqldumpslow -s c -t 10 /var/log/mysqld.slow.log
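
mysqldumpslow can also sort by total query time instead of occurrence count, which surfaces slow-but-rare queries:

mysqldumpslow -s t -t 10 /var/log/mysqld.slow.log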

Dry run update with svn and cvs

To test what files would get changed or conflict when running an update:

With svn:

svn merge --dry-run -r BASE:HEAD .

With cvs:

cvs -nq update -d
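
With svn there is also a lighter-weight alternative that simply lists files with incoming changes (marked with an asterisk) without simulating the merge:

svn status --show-updates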

Dive Into Python

(via diveintopython.net)

Dive Into Python is a Python book for experienced programmers. You can buy a printed copy, read it online, or download it in a variety of formats. It is also available in multiple languages...

Remote backups with tar over ssh

Below is an example of backing up a user's home directory to a remote host, piping the archive over ssh:

tar -cvzf - -C /home {username} | ssh {remotehost} 'cat >/path/to/bak/{username}.tgz'
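
To restore, reverse the pipe: stream the archive back over ssh and untar it locally:

ssh {remotehost} 'cat /path/to/bak/{username}.tgz' | tar -xvzf - -C /home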
