
Configure passive port range for ProFTPd

Usually, if a client is behind a firewall, they can only transfer files via a passive FTP connection.

Edit /etc/proftpd.conf and specify the passive port range. Place it in the 'Global' container:

<Global>
...
...
# Use the IANA registered ephemeral port range
PassivePorts 49152 65534
</Global>

Reference: proftpd.org

Load the ip_conntrack_ftp module and add an iptables rule so that the passive ports are automatically opened for connected clients:

# /sbin/modprobe ip_conntrack_ftp
#  lsmod | grep conntrack_ftp
ip_conntrack_ftp       41489  0
ip_conntrack           91237  4 xt_state,xt_conntrack,ip_conntrack_ftp,ip_conntrack_irc

Add the following iptables rule:

iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

If the server is behind NAT, the ip_nat_ftp module should also be loaded:

# /sbin/modprobe ip_nat_ftp
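
To have both modules loaded automatically at boot on CentOS, one option (assuming the stock iptables init script is in use) is to list them in /etc/sysconfig/iptables-config:

IPTABLES_MODULES="ip_conntrack_ftp ip_nat_ftp"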

Defunct processes

When a process exits (normally or abnormally), it enters a state known as “zombie”, which in top appears as "Z". Its process ID stays in the process table until its parent waits on or "reaps" it. Under normal circumstances, when the parent process fully expects its child processes to exit, it sets up a signal handler for SIGCHLD so that, when the signal is sent (upon a child process's exit), the parent process then reaps it at its convenience.

As long as the parent hasn't called wait(), the system needs to keep the dead child in the global process list, because that's the only place where the process ID is stored. The purpose of the "zombies" is really just for the system to remember the process ID, so that it can inform the parent process about it on request.

If the parent "forgets" to collect on its children, the zombies will stay undead forever. If the parent itself dies, then "init" (the system process with the ID 1) takes over fostership of its children and catches up on the neglected parental duties. If the init process is stalled, then you have a much bigger problem than child processes not being reaped; in fact, a crashed init process will usually cause a kernel panic.
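
To spot zombies and the parent that is failing to reap them, a quick one-liner (standard procps options; the awk filter matches processes whose state field starts with "Z"):

# ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'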

mysql is using too much CPU on my VPS.

Can anybody help me?
Basically my app is for email marketing, so using a while statement it reads from a db (MyISAM) and makes updates.

Thanks in advance!!

This is the info about my VPS:
Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
Mem: 1206272 kB
OS: CentOS release 5.3 (Final)
Mysql: 5.0.45-log

This is my my.cnf:
[client]
port = 3306
socket = /var/lib/mysql/mysql.sock

###The MySQL server
[mysqld]
set-variable=local-infile=0
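
The pasted my.cnf is essentially unconfigured. As a first diagnostic step (a sketch, not a tuning recommendation; the log path and values below are just examples), enabling the slow query log under [mysqld] would show which statements in the while loop are burning CPU, and a key buffer helps MyISAM index work:

[mysqld]
# MySQL 5.0 syntax: log statements taking longer than 2 seconds
log-slow-queries = /var/log/mysql-slow.log
long_query_time = 2
# example key buffer size for MyISAM indexes on a 1 GB box
key_buffer_size = 128M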

sendmail use of clientmqueue and mqueue folders

When submitting mail by using sendmail as a mail submission program, sendmail copies all messages to "/var/spool/clientmqueue" first. Sendmail is a setgid smmsp program and thus gives any user the permission to do so (/var/spool/clientmqueue belongs to user and group smmsp). Later, another sendmail process, the sendmail mail transfer agent (MTA), copies the messages from /var/spool/clientmqueue to /var/spool/mqueue and sends them to their destination.

/var/spool/clientmqueue is thus the holding area used by the MSP (Mail Submission Program) sendmail instance before it injects the messages into the main MTA (Mail Transfer Agent) sendmail instance.

Sendmail will save the message in /var/spool/clientmqueue for safe keeping before trying to connect to the MTA to get the message delivered. Normally there would be a 'queue runner' MSP sendmail instance which every half hour would retry sending any message that couldn't be sent immediately. Each message will generate a 'qf' (message headers and routing info) and a 'df' (message body) file. You can list all of the messages and their status with:

# mailq -v -Ac

When files accumulate in /var/spool/clientmqueue, it is usually because the local sendmail MTA is not running, and thus the mails don't get sent.
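
Once the local MTA is running again, the accumulated client queue can be flushed manually (standard sendmail flags: -Ac selects the client queue, -q runs it once, -v is verbose):

# sendmail -Ac -q -v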

Monitor outgoing emails in cPanel exim

In cPanel WHM, Main > Service Configuration > Exim Configuration Editor:

Under Filters, check "System Filter File" location, usually at "/etc/cpanel_exim_system_filter".

Edit the file:

Just below (this should already exist):

if not first_delivery
then
  finish
endif

Add the filter:

# Monitor outgoing emails from domain.tld
if first_delivery
   and ("$h_from:" contains "@domain.tld")
   and ("$h_from:" does not contain "youremail@")
then
   unseen deliver "monitor@domain.tld"
endif

Save changes and restart exim:

# /etc/init.d/exim restart
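
The filter file can also be syntax-checked before restarting by running it against a sample message with exim's system-filter test mode (/tmp/testmsg here is just an example message file):

# exim -bF /etc/cpanel_exim_system_filter < /tmp/testmsg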

rpmdb: unable to lock mutex: Invalid argument

I was caught by this when upgrading CentOS servers from 5.2 to 5.3. Should make it a habit to at least read the release notes!!

When upgrading from an earlier version of Red Hat Enterprise Linux to 5.3, you may encounter the following error:

Updating : mypackage ################### [472/1655]
rpmdb: unable to lock mutex: Invalid argument

The cause of the locking issue is that the shared futex locking in glibc was enhanced with per-process futexes between 5.2 and 5.3. As a result, programs running against the 5.2 glibc can not properly perform shared futex locking against programs running with the 5.3 glibc.

This particular error message is a side effect of a package calling rpm as part of its install scripts. The rpm instance performing the upgrade is using the prior glibc throughout the upgrade, but the rpm instance launched from within the script is using the new glibc.

To avoid this error, upgrade glibc first in a separate run, i.e.:

# yum update glibc
# yum update

You will also see this error if you downgrade glibc to an earlier version on an installed 5.3 system.

Disable auto fsck

I have a huge external USB drive set to be auto-mounted during backup processes and did not want fsck to run on every mount. The setting below completely turns off the automatic fsck checks:

# tune2fs -c 0 -i 0 /dev/<partition>

However, I would highly suggest at least doing a regular 6-month check if you are not doing manual fscks:

# tune2fs -c 0 -i 6m /dev/<partition>
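
To verify the current mount counters and check interval on a partition (tune2fs -l dumps the superblock settings):

# tune2fs -l /dev/<partition> | grep -i 'mount count\|check'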

PostgreSQL 8.3 install on CentOS

# wget http://yum.pgsqlrpms.org/reporpms/8.3/pgdg-centos-8.3-6.noarch.rpm
# rpm -Uvh pgdg-centos-8.3-6.noarch.rpm
# yum install postgresql-server
# service postgresql initdb
# service postgresql start
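
To have the service start at boot as well, standard chkconfig usage on CentOS:

# chkconfig postgresql on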

vzdump LVM snapshots kernel errors

On running daily LVM snapshot backups via vzdump on OpenVZ servers, I noticed the kernel errors below in logwatch reports.


WARNING:  Kernel Errors Present
    Buffer I/O error on device dm-4,  ...:  22 Time(s)
    EXT3-fs error (device dm-4): e ...:  60 Time(s)
    lost page write due to I/O error on dm-4 ...:  22 Time(s)

This would show up on busy servers only, probably caused by the LVM snapshot running out of space.

I edited "/usr/bin/vzdump" and increased the snapshot size from 500m to 1000m, which seems to have resolved the issue for now.


run_command (\*LOG, "$lvcreate --size 1000m --snapshot --name vzsnap /dev/$lvmvg/$lvmlv");
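
While a backup is running, the Snap% column of lvs output (Data% on newer LVM releases) shows how full the snapshot is; if it reaches 100% the snapshot is invalidated, which produces exactly these I/O errors:

# lvs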

svn checkout via shell script

Very often development servers only have a self-signed certificate for SSL connections. I recently had to create a script that checks out from an https svn repository, which would fail with "Server certificate verification failed: issuer is not trusted...". Below is a workaround used to temporarily trust the certificate.

# the 't' fed in below answers svn's certificate prompt with "accept (t)emporarily"
svn --username $SVN_USER --password $SVN_PASS --no-auth-cache checkout ${REPO_URL}/${REPO_PATH} $REPO_PATH <<EOF 2>/dev/null
t
EOF
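
On Subversion 1.6 and later, the prompt can be skipped entirely instead of answered (these flags require svn 1.6+):

svn --username $SVN_USER --password $SVN_PASS --no-auth-cache \
    --non-interactive --trust-server-cert \
    checkout ${REPO_URL}/${REPO_PATH} $REPO_PATH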
