Developers and drug use

I’ve never linked developers with drug use. In my mind, using drugs is anything but productive for software development, and popping pills while coding would be counterproductive.

I changed my mind after reading this article:

A lot of things make sense now. I remember when I started to learn iOS development a few years ago. After a few weeks of intense learning and coding, I couldn’t go to sleep because my brain was still “swiping” after I closed my eyes. Sometimes I needed a few drinks to relax.

Apply the same situation to young developers with disposable income and crazy deadlines to meet, and it’s no surprise that drug use/abuse has become increasingly normal among software developers.

If you’ve been to a hackathon, you’ll find an enormous variety of energy drinks available. Did they work for me? Yes, for a short while. But it’s certainly not something I want to depend on to stay focused. I’m scared of what will happen when they are not available.

Maybe I should be glad I grew up without caffeinated drinks, Adderall or 5-hour energy shots. And I know that when I have the “drug talk” with my kids, it will be a much bigger subject than just the illegal stuff.

Posted in my 2 cents | Leave a comment

iOS static library

Using Xcode, it’s pretty easy to create a static library. However, when resources are involved, things can get a bit complicated. Here is a good tutorial on how to create a static library with Core Data support in an iPhone app.

Posted in my 2 cents | Comments Off

Adding SSL to Apache – connection error

Here is the background of the problem:

Recently I added an SSL certificate (purchased from a CA) to my Apache server. After making the configuration changes, opening the port and restarting the server, the https connection still wouldn’t work. From Chrome I got “Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error.”; from Firefox, simply “connection interrupted”. Neither error message turned up much help in searches. However, one tip that really helped was to try plain http against port 443 directly.

That worked. So in my case, it was almost as if Apache did not recognize the SSL request at all. A new round of checking configuration and certificate files ensued.

I did find the fix in the end. In my Apache virtual host files, I have virtual host entries defined like this:

<VirtualHost SERVER_IP>

I had to change it to <VirtualHost SERVER_IP:80>

It’s probably because Apache matched the SSL request to a non-SSL virtual host and tried to serve it as plain http. So if you are adding SSL to the mix, remember to check the previously defined virtual hosts and bind them explicitly to port 80.
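For illustration, the virtual host definitions end up looking something like this (the server name, the SERVER_IP placeholder and the certificate paths are all placeholders, not my actual configuration):

```apache
# existing non-SSL site, now bound explicitly to port 80
<VirtualHost SERVER_IP:80>
    ServerName

# the SSL site gets its own definition on port 443
<VirtualHost SERVER_IP:443>
    SSLEngine on
    SSLCertificateFile /etc/httpd/conf/certs/example.crt
    SSLCertificateKeyFile /etc/httpd/conf/certs/example.key
```

With the port spelled out on each VirtualHost line, Apache no longer funnels port-443 traffic into the port-80 site.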

Posted in server setup | Comments Off

An unlikely place to look for an Amazon S3 issue

This drove me nuts for a few hours, so I have to share it in case someone else runs into a similar issue.

Basically I have a PHP script that accesses Amazon S3 using the AWS SDK for PHP, and it runs from a server that I recently built (CentOS). The problem was, the script wouldn’t work. It didn’t throw an error; it just wouldn’t return results. For example, if I tried to get a list of files with the same prefix, it returned an empty array. The script worked everywhere else, and I knew for sure the files it was looking for were there.

I looked everywhere and couldn’t figure out why, until finally I noticed that the system time was off by a few hours. Fixing that was a whole new story (this article should explain all you need to know), but in the end, after the system time was corrected, the script worked again.

So if you have a script that consumes a web service, make sure the host’s system time is set correctly. It might save you a few hours of a Sunday afternoon.
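The underlying cause, as I later understood it: S3 requests are signed with a timestamp, and the service rejects requests whose clock is too far off (the commonly cited window is about 15 minutes; treat that number as an assumption). A rough sketch of that check:

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=15)  # roughly the tolerance S3 is said to allow

def request_within_skew(request_time, server_time):
    """True if a signed request's timestamp would be accepted by the service."""
    return abs(server_time - request_time) <= MAX_SKEW

server_now = datetime(2012, 6, 1, 12, 0, tzinfo=timezone.utc)
print(request_within_skew(server_now - timedelta(hours=3), server_now))    # clock hours off: False
print(request_within_skew(server_now - timedelta(minutes=2), server_now))  # small drift: True
```

A clock that is hours off, like mine was, falls well outside that window, which is why every signed request quietly failed.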

Posted in PHP development | 1 Comment

A PHP class to read Microsoft Access Database

A friend of mine recently asked me to help build a data warehouse based on some Access database files. Here is a PHP class that I created to read records from an Access database.

class DataMdb {
	private $conn;

	function __construct($mdbFile) {
		// Set up the ADODB COM connection (requires PHP on Windows with the Access ODBC driver)
		if (!$this->conn = new COM("ADODB.Connection")) {
			exit("Unable to create an ADODB connection");
		}
		$strConn = "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=" . realpath($mdbFile);
		try {
			$this->conn->open($strConn);
		} catch (Exception $e) {
			exit("Caught exception: " . $e->getMessage());
		}
	}

	// Run a SELECT query and return all rows as associative arrays
	function getAll($sql) {
		$result = array();
		try {
			$rs = $this->conn->execute($sql);

			if (!$rs->EOF) {
				$fieldCnt = $rs->Fields->Count();

				while (!$rs->EOF) {
					$row = array();
					for ($i = 0; $i < $fieldCnt; $i++) {
						$row[$rs->Fields($i)->name] = $rs->Fields($i)->value;
					}
					$result[] = $row;
					$rs->MoveNext();
				}
			}
		} catch (Exception $e) {
			exit("Caught exception: " . $e->getTraceAsString());
		}

		return $result;
	}

	function disconnect() {
		$this->conn->close();
	}
}

Posted in PHP development | Comments Off

Using Sphinx as a site search engine

A website, especially a content-oriented one, needs good search functionality. This can be implemented locally, or outsourced to a search engine like Google. The former obviously requires a lot of work in database design and coding; the latter relies on Google to guess what you have and what your users are looking for. And when site information sits behind a “walled garden”, it becomes impossible (and insecure) to let Google crawl and index the protected content.

Sphinx is a search server that I believe provides a better approach to these issues. It handles indexing by reading from a database (or files), and provides full-text search capability via standard APIs.

Overall, Sphinx is pretty easy to set up. Installation on a Linux server requires downloading the source code and going through the usual “make install” process; if you have installed from source before, this should be easy. After installing the software, you also need to create a Sphinx configuration file to get Sphinx working in your environment. This is where I ran into some issues, and I’ll share that experience in the rest of the post.

Basically Sphinx adds two processes to your server: indexer and searchd. The indexer is a process that should be kicked off periodically, at whatever frequency you wish, to index the data (mostly) from a database; searchd, as the name suggests, is a daemon which listens on a port and handles search requests. The indexer can be driven by crontab, and searchd should run as a service, configured so that it starts automatically after a server reboot.
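For the crontab side, an entry like this would do (the 30-minute schedule is arbitrary, and the paths match the sample setup in this post):

```crontab
# myuser's crontab: rebuild all indexes every 30 minutes,
# then signal the running searchd to rotate in the new files
*/30 * * * * /usr/local/bin/indexer --config /home/myuser/sphinx/sphinx.conf --all --rotate
```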

Adding a new service startup script in a Linux environment requires creating a shell script and putting it somewhere under the /etc directory (typically /etc/init.d). Here is a sample script that I have for a CentOS server (it was quickly put together, so it’s rough around the edges):

#!/bin/sh
# chkconfig: 345 55 35
# description: Sphinx search daemon

case "$1" in
start)
	echo -n "Starting Sphinx searchd:"
	sudo -u myuser /usr/local/bin/searchd --config /home/myuser/sphinx/sphinx.conf >> /dev/null 2>&1
	;;
stop)
	echo -n "Stopping Sphinx searchd:"
	/usr/local/bin/searchd --config /home/myuser/sphinx/sphinx.conf --stop >> /dev/null 2>&1
	;;
restart)
	$0 stop
	$0 start
	;;
*)
	echo "usage: $0 [start|stop|restart]"
	;;
esac
exit 0

Notice that I added the sudo command so searchd runs as “myuser”. This is because having indexer and searchd run under different users can pose some issues.

At this point I want to go over my setup a little. The server where I have Sphinx installed hosts several websites. A couple of users were created, with different sites deployed under their respective home directories. Since I plan to use Sphinx on only one of the sites, I want the site owner “myuser” to own the indexer process, and to keep the Sphinx data and log files locally, somewhere under myuser’s home directory. In this particular setup, if the searchd service runs as root, I run into permission issues.

First, searchd creates a *.spl file, which myuser doesn’t have read permission on. The indexer produces the following error even though the --rotate option IS present:

indexing index 'mydatabase_search'...
FATAL: failed to open /home/myuser/sphinx/data/mydatabase.spl: Permission denied, will not index. Try --rotate option.

Another issue is ownership of the searchd pid file; the indexer complains again if it can’t read it:

WARNING: failed to open pid_file ‘…/’.
WARNING: indices NOT rotated.

If you wonder why the indexer needs access to these files, it is because whenever the indexer runs, it notifies searchd by sending it a SIGHUP signal.

Now, these issues could be bypassed by changing the permissions of these files in the searchd startup script, but in the end I think the sudo command is a cleaner solution. Ultimately, all Sphinx-related files, including configuration and the process pid, are stored locally and can be accessed easily by the “site owner”. The only drawback I can see in this approach is that when Sphinx is added to another site owned by a different user, there needs to be a separate searchd process for each site owner.

Since I’m still experimenting, this particular setup may not be the best solution. But hopefully this post can shed some light on issues other people might run into.

Posted in tools | 1 Comment

Sending email using Google App and PHP Swift Mailer

Not very long ago I converted one of my sites to use Google Apps’ email service. Using a third-party email service reduces the load on your own server and eliminates the responsibility of configuring and maintaining a mail server. Since it’s essentially Gmail, SMTP is supported. I paired it with Swift Mailer, the free PHP mail library, and the solution has been quite stable and satisfactory.

Until, out of the blue, I checked the mailbox of the default sending account:

The mailbox was filled with undeliverable emails (which is normal) and, surprisingly, quite a few user emails. Here is the scenario: my website has an online form that lets one user send a message to another user’s email address. So if uses the form to send a message to, a mail is delivered to through Gmail, using the account. To give a better user experience, when the Swift Mailer message is constructed I also set the “from” address to, so that when receives the email, the message appears to come from directly. The idea is that when xyz replies, the message goes back to abc.

The problem is in the replying part. As you might have guessed, all the replies went back to So basically all responses were lost, since no one reads the noreply mailbox.

I looked more carefully at the email header and found the problem: the from address actually looks like this: &lt;&gt;.

As you can see, is only treated as the “display name”; the actual email address is still So even though the from address appears correct in an email client, replying to the message sends the response into a black hole.

After identifying the problem, I started tweaking the Swift Mailer message, but no matter what I tried, Gmail would always put the noreply address in the from header. This left me in a desperate mood. The other options weren’t good: I could write a program to automatically check the mailbox and forward those emails, or modify the message with a warning telling the receiver not to reply directly, or just switch to a new email service. All of these are either too complicated (without a good cause) or would negatively affect the user experience.

After some more poking around, I found the “Reply-To” email header. If it does what its name indicates, my problem can be solved by adding this header to mail messages. You might be laughing at me right now for not knowing this, but I had never really studied email headers before, and I learned a lot by trying to solve this.

Adding a header field is pretty easy in Swift Mailer, and it worked. With the “Reply-To” header, the mail clients (and webmails) that I tested correctly filled in the right address when replying.
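In Swift Mailer this is a one-line setter on the message; for reference, here is the same idea illustrated with Python’s standard email library, reusing the example addresses from the scenario above:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = ""     # the authenticated account; Gmail enforces this anyway
msg["To"] = ""
msg["Reply-To"] = ""     # where replies should actually go
msg["Subject"] = "Message from the contact form"
msg.set_content("Hello!")
```

Mail clients that honor Reply-To (essentially all of them) will address the reply to, not the noreply account.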

It’s always fun to be able to solve a problem in a simple way, before going too far down the other paths. And hopefully this post can help someone else in the same boat.

Posted in my 2 cents | Comments Off

Build your own Linux VPS

5 years ago, if you had asked me to build my own Linux VPS for my websites, I would have shaken my head and said it was too much for a non-sysadmin like me. Now I’m pretty comfortable doing it. I want to share my thoughts in this post, and hopefully they can be useful to other web builders.

When I first started, I put my websites on shared hosting. While it was cheap and easy to set up, you often don’t get the best performance for your dollar. This is especially true when your site gets more traffic and you are as anal about downtime as I am. Then I discovered VPS hosting. A VPS is a great step up from shared hosting: you have full control over a virtual host, so you can install and configure things the way you like; and because a VPS is a “rented” slice of a physical server, someone else takes care of the racking, networking and hardware maintenance.

There are generally two types of VPS: “managed” and, obviously, “unmanaged”. “Managed” means the service provider will help you install apps, troubleshoot, and in some cases walk you step by step through resolving an issue, as long as you ask. Often you also get a full web-based control panel, like cPanel, installed for you. With the latter, you don’t get this kind of service: you are given a barebones server and you are on your own. As you can imagine, a managed service costs more.

I started with a managed VPS. But as my Linux skills got better, the “managed” part of the service became less and less necessary. cPanel is a great tool, but it is also a big resource consumer itself. Sometimes you might find most of your system resources consumed by add-ons, not by the main apps like the web server or the database.

To grow out of a managed service, the key is to try and learn. If you choose to stay on cPanel forever (not that there is anything wrong with it :)), you’ll stay on it forever. To take the leap you need to be ready for it. It took me some time to get to the comfort level I’m at now, and along the way I found a number of guidelines that I now follow.

Keep a good and updated server diary

I think this is the first thing to do when you start handling your own server. A good server diary not only helps you troubleshoot; it’s also a good reference when you need to re-install, upgrade, or, less frequently, move to a new service provider.

Like any other service you buy, a hosting provider’s quality can decline too, and you don’t have many choices except voting with your feet. A good server log can make changing hosts a lot easier. Recently, when I switched VPS providers, it took me only a few hours to stand up full services on a brand new VPS host. The process also forced me to refresh and update my server diary.

Utilize external services

To host a website on a VPS, we have to install pretty much everything ourselves: at least the Apache web server, PHP and MySQL. However, besides the basic LAMP stack, we also need to take care of services like DNS and email. To keep your admin work as simple as possible, I strongly recommend outsourcing DNS and email to an external service provider. For DNS there are lots of options; I use a hosted DNS provider. For only a few dollars a month, you are completely shielded from managing your own DNS server. You still need to understand what a “CNAME” is and how to change a DNS record to point your website at the correct IP, but the learning curve is much, much smaller.

Email is another service that can get quite complex. I found that using Google Apps’ email service makes a lot of sense. Since it is based on Gmail, IMAP, an effective spam filter and web access are all included naturally. Without a mail server and SpamAssassin taking up resources, your server is also better optimized. Google Apps has both free and paid versions. Another benefit of using a reputable external email service is the trust your emails gain. A lot of my users on Yahoo Mail couldn’t receive messages because they were marked as spam, but since I started using the Gmail service it has been a lot better.

If your website has a lot of user-generated content like photos, you may also consider using cloud storage like Amazon AWS. I’m generally against building your site to depend completely on the cloud, but that’s another subject.

Install from source

I know this is quite a debatable subject. Installing applications from source doesn’t always give you the type of control that packaging tools provide. However, there are several benefits that can’t be overlooked.

First, you have full control over the binary and can build exactly what you want. For example, when building your own PHP binary, you can enable only the features you need to keep a small footprint. The same applies to the Apache httpd server. This directly impacts the memory usage of your web server.

Secondly, if you are accustomed to source installation, you won’t need to hunt around for the latest RPM or whatever installation package someone else built; you can stay on the latest version of the software. And since the same procedure applies universally across Linux distros, you are less likely to be affected by the different packaging tools each distro offers.

And lastly, it’s really not that hard to do.

Some basic steps

With a brand new VPS, there are some basic setup steps that have to be done to ensure security and basic usability. Your VPS provider will configure the VPS to a certain degree before handing it over, so you might want to review the system configuration, like partitioning, before proceeding to the steps below.

Update system information

When your new server is up and running, you’ll need to update the hostname:

echo "" > /etc/hostname
hostname -F /etc/hostname

And don’t forget to update the HOSTNAME variable in the /etc/sysconfig/network file.

Also update the system time zone:

ln -sf /usr/share/zoneinfo/US/Eastern /etc/localtime

Create users

Adding users is the second must.

groupadd johndoe

useradd -d /home/johndoe -g johndoe -p johndoespassword johndoe

To create user "apache" for your web server, you’ll need the following commands:

groupadd apache
useradd apache -c "Apache Server" -d /dev/null -g apache -s /sbin/nologin

Turn off the unnecessary services

By default you’ll have some services up and running that you don’t need, and you’ll want to turn them off.

This command shows what is on:

chkconfig --list | grep 3:on

This command turns one off:

chkconfig <service name> off

Also, double-check what is listening on which ports. On my servers, I only leave ports open for sshd, httpd, mysqld and a few others.

netstat -an

Secure SSH

You want to turn off root access. In the /etc/ssh/sshd_config file, set this:

PermitRootLogin no

Also you want to set up public/private key authentication.

I would also recommend changing the port from 22 to something else.
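Put together, the relevant /etc/ssh/sshd_config lines might look like this (the port number is just an example); keep an existing session open while you test, so a mistake doesn’t lock you out:

```
# move sshd off the default port to cut down on bot noise
Port 2222
PermitRootLogin no
# enable this only after public-key login is confirmed working
PasswordAuthentication no
```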

Set up a firewall using iptables

If you are reading this article, you probably know what iptables is; the tricky part is configuring it. A few years ago I went to great lengths to learn what tables and chains are and how to write a shell script to configure an iptables firewall. The problem was, I soon forgot what I had learned, since configuring iptables is not something a developer like me does on a daily basis. And guess what: I locked myself out on my first try on a new server.

Luckily there are tools today which wrap around iptables and expose an easy-to-use configuration interface. This makes life a lot easier for me. APF is what I use, and the project page can be found here:

Install some utilities, compiler and libraries

I only use CentOS/Red Hat systems as the example, and yum is my packaging tool of choice. Again, these are just basic tools and libraries needed to install Apache and PHP, so others might be needed as well. The key, again, is to keep a good log of what has been installed so you have a reference when you build your next server.

yum install man
yum install vixie-cron
yum install wget
yum install rsync
yum groupinstall 'Development Tools'
yum install mailx
yum install zlib-devel
yum install openssl-devel
yum install libxml2-devel
yum install curl
yum install curl-devel
yum install libjpeg-devel
yum install libpng-devel
yum install mysql-devel
yum install libxslt-devel
yum install libmcrypt
yum install libmcrypt-devel
yum install libevent
yum install libevent-devel

Install applications

Now comes the time to install your beloved apps. One thing to remember, if you install from source, is to create a script in /etc/init.d and add the service entry. For example, after installing the Apache httpd server, you need to add its startup/shutdown script to /etc/init.d and add it to your service list:

chkconfig --add httpd
chkconfig --levels 235 httpd on

Install an MTA

It’s likely that your website needs to send emails, so you probably need an MTA to talk to your external email server through SMTP for message delivery. If the system comes with Sendmail installed, I’ll go ahead and use it; here is a post I wrote on getting Sendmail to work with Gmail. There are other options, like Postfix, Exim and qmail, that you can consider, and here is a good article on MTA comparison. Although there are lots of pros and cons you can munch on, I think the most important thing to consider is which one is easiest for you. With the latest developments they are all very capable products, so any one of them can deliver what you need.

This is quite a long-winded post. I don’t mean it as a tutorial; I just want to cover some basics of building a Linux VPS (or a dedicated server, for that matter). A few years ago it was almost unthinkable to me that all of this could be done by one person, but as the tools, technology and information become more available and easier to find, it is quite feasible now. I hope this post can be a good start for web builders who are interested in setting up their own server. And please do leave a comment if you have any thoughts, tips or suggestions.
Posted in server setup | 1 Comment

Configure sendmail to work with Gmail smtp relay

Ok, this one was a real thinker. I spent at least 5 hours getting this to work, but finally I was able to use Sendmail to relay through my Gmail account.

A little background:

I have a Linux VPS with CentOS installed. The only MTA is the default Sendmail; everything else is pretty much a standard CentOS 4 installation. I don’t intend to use this box as a mail server or any other type of email processor. What I wanted was to add some basic capability to send email from the box using my existing email accounts hosted on Gmail, and I didn’t want to install any additional software such as Postfix for this.

That said, let me take you down the path I went through, without the stumbling blocks.

My approach was basically: problem -> Google for solutions -> troubleshoot -> Google again. So I found a lot of useful content on the web during the process.

1. Check sendmail

Since Gmail uses TLS, you will need to make sure your sendmail is compiled with TLS (for encryption) and SASL (for authentication) support. This is the command to check:

/usr/sbin/sendmail -d0.1 -bv root

In my case, sendmail had the necessary compilation flags, so I was good. If yours doesn’t, you’ll need to re-compile sendmail and replace the binary used to start the sendmail service, which is not covered here.

2. Upgrade Cyrus SASL

If your SASL installation doesn’t have the “plain” and “login” plugins, you will have authentication problems with Gmail. You can see why when you get to the sendmail configuration in the later steps. The common error in /var/log/maillog is this:

AUTH=client, available mechanisms do not fulfill requirements

It was a vague error, and at one point I was so frustrated with it that I was ready to give up. However, an article about setting up Postfix with Gmail cast some light on it and helped me figure out the cause.

The problem is that SASL doesn’t have all the necessary plugins: “login” and “plain” are required to talk to the Gmail SMTP server. So I had to upgrade SASL to fix the problem. Here is what I did:

$ wget
$ tar -xzf cyrus-sasl-2.1.21.tar.gz
$ cd cyrus-sasl-2.1.21
$ ./configure
$ make
$ make install

$ mv /usr/lib/sasl2 /usr/lib/sasl2.orig
$ ln -s /usr/local/lib/sasl2 /usr/lib/sasl2

Note: if you have an issue compiling Cyrus SASL around digestmd5.c, it’s because your compiler is too new. Read here to find out how to patch it.

Since I just switched out the old sasl2 lib without recompiling sendmail, I was concerned sendmail would choke at runtime. Luckily that didn’t happen. Dynamic libs rock!

3. Generate SSL certificate

I made a directory called certs under /etc/mail. Here are the commands I used to generate the SSL certificates:

openssl req -new -x509 -keyout cakey.pem -out cacert.pem -days 3650
openssl req -nodes -new -x509 -keyout sendmail.pem -out sendmail.pem -days 3650

Notice I made the certificates good for almost 10 years. Strictly speaking, I didn’t need the cacert.pem.

I also copied /usr/share/ssl/ca-bundle.crt to /etc/mail/certs and included it in the sendmail configuration file. Otherwise you’ll see an error like this:

unable to get local issuer certificate

The reason is that the CA bundle file contains Gmail’s certificate issuer. I did read somewhere that email still goes out despite this error, but there’s no need to see it if we can fix it.

4. Configure sendmail

With the preparations above, we are ready to configure sendmail. I found this tutorial very useful for getting the correct sendmail configuration.

In summary, my /etc/mail/auth/client-info looks like this (one AuthInfo line per relay host; replace the "I:" account and the password with your own):

AuthInfo:smtp.gmail.com "U:root" "I:username@gmail.com" "P:password" "M:PLAIN"
AuthInfo:smtp.gmail.com:587 "U:root" "I:username@gmail.com" "P:password" "M:PLAIN"

If you use Gmail hosted email with your own domain name, you will have username@hostname.tld in there.

Make sure you run:

$ makemap -r hash client-info.db < client-info

and chmod 600 on the client-info files.

Essential lines in my

FEATURE(`authinfo', `hash /etc/mail/auth/client-info.db')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')
define(`ESMTP_MAILER_ARGS', `TCP $h 587')

define(`CERT_DIR', `/etc/mail/certs')
define(`confCACERT_PATH', `CERT_DIR')
define(`confCACERT', `CERT_DIR/ca-bundle.crt')
define(`confCRL', `CERT_DIR/ca-bundle.crt')
define(`confSERVER_CERT', `CERT_DIR/sendmail.pem')
define(`confSERVER_KEY', `CERT_DIR/sendmail.pem')
define(`confCLIENT_CERT', `CERT_DIR/sendmail.pem')
define(`confCLIENT_KEY', `CERT_DIR/sendmail.pem')


NOTE: Be aware that smart quotes (as produced by some web pages and word processors) will not be recognised if pasted into your files! Make sure you replace smart quotes with regular ASCII quotes (see comments below for further detail). Thanks Johnny for the suggestion.

The certificate files are the ones generated/copied in the previous step. I’m no sendmail expert, so these configuration lines may not be perfect, but they work. Let me know if you have better settings.

One tip I found very useful is the debugging feature. You can set a high log level in to see at which step sendmail choked, and for what reason.

Also, make sure you run "make" or m4 every time you touch

m4 >

So that’s pretty much it. I restarted the sendmail service and out went my email.


Update: recently I installed Sendmail on a brand new VPS and had a hard time getting authentication working. It turned out saslauthd was not running. So, a note for new systems: saslauthd has to be up and running (better yet, use chkconfig to make sure it starts at run level 3) for Sendmail authentication to work. This may resolve some of the issues in the comments.

Posted in server setup | 56 Comments

A follow up on using Amazon S3

Last week Amazon S3 was down for 4 hours, which made a lot of webmasters unhappy. It further proves that it is quite risky to design your site to rely solely on S3 for essential functionality, at least for now.

One way to reduce the risk is to keep a copy of the files (for example, images) on your own server, and to add a flag in your code that pulls the files from your server when S3 downtime is detected. The flag can be controlled by a parameter in a configuration file so it can be switched easily.

One might argue this defeats the purpose of using AWS storage, since the local copies take up space. But I believe the storage cost is worth it in the event of S3 hiccups: by using S3 while it’s up and running, you still save on bandwidth when serving those files, which is a lot more expensive than storage.

Posted in my 2 cents | 2 Comments