Did a small startup and quit again – what I learned

A little over a year ago I quit my day job as a full-stack web developer at a small company in Copenhagen, in order to work full-time on a startup with two friends of mine.

The startup company (referred to here simply as “the startup”) does live events – video recording and streaming – and hosts the video/audio clips afterwards, for shareholder-owned companies in Scandinavia.

The project was already running live when I started full-time on it, as I had begun coding the system about six months earlier in my free time.

At first we were two, plus a consultant; then the last buddy joined and we were three. We worked from our homes, and spent a lot of time working, brainstorming and coming up with new ways to do the same things – only faster, easier and simpler.

We were three different people, with different trades and backgrounds:
* One sales person
* One technician
* One developer (me)

We have since grown (by a lot) and are now six full-time employees and two student workers. We also lost one of the original partners a few months ago, so a lot has changed in the company since we started.
I am still the only developer in the company, and this is where the chain broke for me…

As many people know, working from 8 to 16 is not a problem. I had recently become a dad, so that life suited me perfectly. But in a startup, you are no longer expected to work only from 8 to 16. Time is consumed heavily, and if something crashes, you are expected to fix it right away.

What I learned

I have spent a lot of time doing Linux server setups – more time than coding, I’d say…
I had some experience with Linux servers, but only in simple environments with low traffic.

It’s odd that when first starting out, you are “hired” to do coding, but end up fending off hacker attacks and handling server stability and load issues instead.

Before the startup, I had mostly worked with PHP and Apache.
I had some experience with Ruby on Rails, which I liked, so I decided to choose that stack for the startup.

Rails helped speed up the coding – a lot. Of course, Rails is a large framework with a lot of constraints and rules you have to know and follow – or you will die.
I chose Rails for all the gems out there, the community and the documentation. I also loved what Ruby in general looked like.

That is also what I generally hear about Rails: “lots of gems, just pick a bunch of those, and you are done!”

But let’s face it, that’s not the whole story. A year into the startup, I’m using only 10 or 15 different gems (besides the Rails ones), and 5 of them deal with deployment (Capistrano)… So I’m really using 5–10 gems. Not a whole bunch.

Rails has also bitten me in the arse a lot of times. I know that if I had been properly schooled in Rails, I might not have done a lot of things the way I did. But that’s just me: “learn as you go”.
The problem is that Rails is so large (and complex) that you cannot just “learn and go”.

I have spent a lot of time refactoring code, learning new things and handling odd issues, only to later find a gem that handles the exact same thing. And why did the Rails core team deprecate the ActiveResource lib? It was so easy to use… (I’m using the gem now, though.)

Scaling Rails was easy enough, though.
I had a load balancer in front of my servers, which made horizontal scaling straightforward.
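For reference, a minimal nginx load-balancer configuration for this kind of setup could look like the following sketch. The upstream name and server addresses are purely illustrative – this is not our actual config:

```nginx
# Round-robin load balancing across the app servers (illustrative addresses).
upstream app_servers {
    server 10.0.0.11;
    server 10.0.0.12;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;           # hand the request to a backend
        proxy_set_header Host $host;             # preserve the original host
        proxy_set_header X-Real-IP $remote_addr; # and the client IP
    }
}
```

Adding another web server is then just one more `server` line in the upstream block.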

However, this wasn’t always enough…

Our system had to handle at least 1,000 requests per second, for an hour or more at a time (during live events, the player polls for changes).
That might not sound like much, but if a server handles a given request in 100 ms, a single thread can only serve 10 requests per second.
At one point we had six servers serving the same content, and they still couldn’t keep up with the incoming requests.
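That back-of-envelope math, spelled out as a trivial Ruby sketch (the numbers are the ones above):

```ruby
# Back-of-envelope capacity check, using the numbers from the post.
request_time_ms = 100                         # one request takes 100 ms
rps_per_thread  = 1000 / request_time_ms      # 10 requests per second per thread

target_rps     = 1000                         # load during a live event
threads_needed = target_rps / rps_per_thread  # 100 busy threads, all the time

puts "#{rps_per_thread} req/s per thread => #{threads_needed} threads needed"
```

A hundred permanently busy threads is far more than a handful of app servers comfortably run, which is why the requests had to be answered before they ever reached the app servers.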

I then started using Redis as a page-caching mechanism, and we can now (in theory) handle around 14,000 requests per second, using only:
1 load balancer
2 Redis servers
2.5 web servers
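The page-caching idea can be sketched roughly like this. The class and key names are illustrative, not our actual code, and the store is duck-typed so the sketch runs without a live Redis; in production you would pass in a client from the `redis` gem, whose `get`/`setex` calls have the same shape:

```ruby
# Hedged sketch of Redis-backed page caching; names are illustrative.
class PageCache
  def initialize(store, ttl: 5)
    @store = store   # e.g. Redis.new(host: "localhost") in production
    @ttl   = ttl     # seconds; even a short TTL absorbs most live-event hits
  end

  # Returns the cached body for `key`, rendering (via the block) on a miss.
  def fetch(key)
    cached = @store.get(key)
    return cached if cached       # cache hit: no app server work at all

    body = yield                  # cache miss: render the page for real
    @store.setex(key, @ttl, body) # store it with an expiry, Redis-style
    body
  end
end

# Minimal in-memory stand-in with the same get/setex interface as Redis:
class MemoryStore
  def initialize; @data = {}; end
  def get(key); @data[key]; end
  def setex(key, _ttl, value); @data[key] = value; end
end

cache  = PageCache.new(MemoryStore.new)
first  = cache.fetch("player/42") { "rendered page" }  # miss: block runs
second = cache.fetch("player/42") { "SHOULD NOT RUN" } # hit: cached copy
```

The point of the design is that during a live event, thousands of identical polling requests per second collapse into one render per TTL window.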

Would I choose Rails again for a startup? Probably not… The reason being that I really don’t use most of what Rails offers, and it’s just so hard to do things differently.

PHP, on the other hand, lets you do a lot more your own way. And with Composer (and Packagist) being adopted more and more, more and more code is being shared.

I’m not saying that PHP is perfect – far from it! But it suits me a lot better.

What I’m taking with me from Rails:
* Automated deployments
* (and a lot more…)

It should be said that I’m not a big fan of frameworks, though. This goes for both Ruby and PHP. I cannot conform, and tend to use bits and pieces from different frameworks to handle my stuff.

I just quit, now what?

I have just said goodbye to the startup, leaving roughly 18 months of thoughts and code behind.

I am returning to the PHP world after having left it behind for a long time. In fact, I am returning to the same job I had before starting my startup.
This (new? old?) job has other developers and designers who know what they are doing, which will be nice to get back to.

I feel a lot older and wiser, though, having done all the development, server handling and project planning myself. I feel like I have a lot more to offer now.

I also think every developer should try doing a startup. I had often thought about it, but never dared venture into it. It seemed way too risky.

And it is…

You cannot afford to make mistakes, especially if you have something to lose – a house, a car… or, more importantly, a family to take care of.

But you somehow grow with the experience… and become better. Perhaps not at coding, but at taking responsibility and thinking decisions through.

Technical stuff

For those interested, this is what I used to develop and design, along with the server software.

Software used (on the Mac):
* Firefox Developer Edition
* Google Chrome Canary
* Pixelmator (Graphics app – it’s dirt cheap, and works very well)
* Sequel Pro (SQL app)
* Sublime Text (2+3) (Lots of modules)
* Terminal

Server setup:
* Ubuntu (very easy to use, lots of updated packages, etc.)
* Nginx with Passenger (I also played around with Puma + Nginx towards the end – just not in production)
* Redis (as a general cache and page cache)
* RVM (very easy when upgrading servers to new Ruby versions)
* Nginx also served as the load balancer, and it works very well.
* The VPS servers are from digitalocean.com (referral link)

Please note: this is not meant as a PHP vs. Ruby blog post – it is simply my experience with both.

IE + iframe + cookies

After spending many hours building a system for a client using Ruby on Rails, everything was deployed and worked perfectly… until IE came along, that is.

The customer had iframed the project into their existing website, and since users had to sign in to the new project from inside that iframe, IE started throwing all kinds of errors.
The direct link worked like a charm, but IE wanted more!

I stumbled upon an answer on Stack Overflow – where else? (link to the Stack Overflow answer)
It simply states that if you want to use cookies in iframes under IE, you need to add a P3P header.
And it worked!

Rails howto

To do this in Rails, simply open up application_controller.rb and add a new filter:

before_filter :set_pthreep

The code for the filter goes in the private section of the application_controller.rb file:

def set_pthreep
  response.headers['P3P'] = 'CP="Potato"'
end

(The compact policy value barely matters – IE mostly just checks that a P3P header is present.)

And that’s all there is to it.

Hackers came along…

There I was, minding my own business… coding… drinking coffee… and, you know… working :)

Then some hackers came along and ruined all the fun. DigitalOcean were quick to act: they shut down the server’s network interfaces and contacted me.

Fortunately, ours is a redundant setup, so closing down one server didn’t cause any issues for our customers.
It did, however, leave me with a great deal of work to fix the server.

I never reuse a server that has been hacked. Delete it, create a new one, and rethink the solution to make it better.

The server only exposes SSH and a web server to the world, so one of those must have been the target.

From what I could see in the various logs, the hackers had simply tried a million logins over SSH.
If brute-forcing their way in is the best they can come up with, we might as well ban their IPs.

I found a nice little open-source project called fail2ban, which installs nicely on all major Linux distros using the package managers (which means no compiling, dependency management or manual script updates for me!).
It simply parses /var/log/auth.log (on Ubuntu/Debian; /var/log/secure on other distros) and checks whether a given IP has tried to sign in too many times without success (the default SSH behaviour, at least).

However, on Ubuntu I ran into a problem with the auth.log file, since rsyslog groups repeated messages.
If fail2ban tests for multiple matching lines, and Ubuntu groups similar lines together, it won’t work… and it didn’t.

It was an easy fix, though.
Simply edit /etc/rsyslog.conf and find “$RepeatedMsgReduction on”. Set this to “off”, save the file, restart the service (sudo service rsyslog restart) and you are good to go.

fail2ban will now start to parse the log file (/var/log/auth.log) and little by little the attackers are banned in the firewall.

Simple and efficient.

The default settings ban an attacker for 10 minutes after 3–6 failed tries.
Now you might ask: “why not just ban them forever?” Well, because I sometimes type passwords wrong :)
I use randomly generated passwords and could easily hit three tries and get banned myself. So banning IPs for 10 minutes seems like a fair solution :)
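As a hedged sketch, an override like the following in /etc/fail2ban/jail.local captures those settings. The jail name and exact defaults vary between fail2ban versions, so treat the values as illustrative rather than as my actual config:

```ini
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5      ; failed attempts before a ban
bantime  = 600    ; ban length in seconds (10 minutes)
findtime = 600    ; window in which the retries must occur
```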

And remember, kids: always update the software on your servers, at least once a week :)
Critical fixes land all the time, and if you installed your apps through the package manager, it doesn’t take more than a few minutes per server.

Life and whatnot!


A lot of time has passed since the last post was made.
What can I say?

I became a dad, bought a house and left my job to start a company with two friends.

Time is still sparse, but thoughts are flying around in my head, so I’d better get writing again :)

More posts will soon appear on the site, so stay tuned!

Using a Zend view helper inside a partial

When rendering a page using one or more partials, I often need to call a helper to do some extra stuff for me.

One caveat I found was that the view variables were not available in my helper.

After some research, it seems the partial acts as a new view.
So in order to access the view’s variables, you need to pass them to the partial explicitly:

<?php echo $this->partial('partial-path/partial', array('variable1' => $this->variable1, ...)); ?>

Problems connecting to unix:///var/mysql/mysql.sock

In a previous post I talked about MySQL 5.5 and Mac OSX.

In this post I’ll go through fixing the problem of PHP connecting to your local MySQL install using “localhost”.

The problems began a while back, with lots of errors in my apache error log saying:

[error] [client ::1] PHP Warning:  mysql_connect(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock) in ...

In the rush I was in, I quickly changed my MySQL connection to use, which is the IP of localhost – so basically the same thing.

Today I’m doing some freelance work for a customer who has had problems with his server since the PHP version was upgraded. I decided to fetch all his PHP files to my local Mac, run through his webshop and fix any errors showing up in the log.
When starting my Apache and running the website, I quickly hit the MySQL connect error again.

Since this project was meant to be a “search and fix” mission, I didn’t have time to change all the mysql_connect() statements in the code (yes, yes – not my code, so I didn’t create the mess…). Instead, I wanted to fix my local PHP→MySQL connection.

The fix was relatively easy and takes only one or two steps:

Step 1 (if needed)

If you haven’t activated php.ini on your local install, open a Terminal and write the following command:

sudo cp /etc/php.ini.default /etc/php.ini

This copies the default php settings to the php.ini file, which the apache server uses.

Then restart your apache server. (Using System Preferences->Sharing->Web sharing)

Your PHP is now using the php.ini file.

Step 2

Open /etc/php.ini file using your favorite text editor.

Go to line 1216 (or search for “mysql.default_socket =” without the quotes) and change /var/mysql/mysql.sock to /tmp/mysql.sock.

Restart your apache server and you should now be able to connect to localhost again.

Still have problems?

If you still have problems, then try the following:

Open Terminal and write:

mysqladmin version

It should print something like this:

mysqladmin  Ver 8.42 Distrib 5.1.53, for apple-darwin10.3.0 on i386
Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL license

Server version		5.1.53
Protocol version	10
Connection		Localhost via UNIX socket
UNIX socket		/tmp/mysql.sock
Uptime:			2 hours 52 min 6 sec

Threads: 3  Questions: 58  Slow queries: 0  Opens: 16  Flush tables: 1  Open tables: 9  Queries per second avg: 0.5

The path shown for “UNIX socket” is the “localhost” connection point. So go back to Step 2 and use that path instead.

Using Rails and respond_to to include nested data

I wanted to display data from a nested table in my XML output using respond_to. A quick Google search suggested :include was the way to go; however, I had some problems getting it to work properly.

I have two models (Customer and User) that are linked together. When I fetch the data for a single user, I want the XML output to include the customer data as well.

The Customer model:

class Customer < ActiveRecord::Base
  has_many :users
end

The User model:

class User < ActiveRecord::Base
  belongs_to :customer
end

In my UsersController I use the respond_to method to serve both HTML and XML.
The show action is as follows:

# Shows the selected user
def show
  @user = User.find(params[:id])
  respond_to do |format|
    format.xml { render :xml => @user }
  end
end

Calling the show action on the users controller as XML returns the user’s attributes – but the customer data is not included.
This puzzled me a bit.

A bit of googling got me the following answer:

# Shows the selected user
def show
  @user = User.find(params[:id])
  respond_to do |format|
    format.xml { render :xml => @user.to_xml(:include => @user.customer) }
  end
end

source: http://rubydoc.info/docs/rails/3.0.0/ActionController/MimeResponds

(I also tried fetching the customer separately; that didn’t work either.)

This gave me the following error:

undefined method `macro' for nil:NilClass

The solution

After a lot more searching, I found out that you shouldn’t pass an object but a symbol. The symbol is named after the association you want to include, so in my case it was :customer.

The code to fix it was:

# Shows the selected user
def show
  @user = User.find(params[:id])
  respond_to do |format|
    format.xml { render :xml => @user.to_xml(:include => :customer) }
  end
end

The customer data is now returned as well.

Need more data?

If you need data from several tables, just use an array of symbols instead.

# Shows the selected user
def show
  @user = User.find(params[:id])
  respond_to do |format|
    format.xml { render :xml => @user.to_xml(:include => [:customer, :table2, :table3]) }
  end
end

Hope you can use this tip.

Handle different environments with PHP

Being both a Rails and a PHP developer, I often miss a few things when switching from Rails to PHP.

One of the things I miss the most is Rails’ notion of environments, which makes separate testing, development, staging and production setups easy.
However, I found a way to do this in PHP as well, without using frameworks like Zend or CakePHP.

Getting started

There are basically two things you need to do to get this setup working properly.

Configure your server (apache)

First we need to configure the Apache server (if you are using nginx, consult its documentation instead).

The magic lies in Apache’s SetEnv directive.
It makes a variable available in PHP’s $_SERVER array, which is what we use to differentiate between the environments.

Virtual hosts

If you are using virtual hosts, simply add the SetEnv line within the <VirtualHost> section.

An example configuration with a “test” environment could be:

<VirtualHost *:80>
  ServerName my_site.test.dk
  DocumentRoot /var/www/my_site.test.dk

  SetEnv APPLICATION_ENV "test"

  <Directory /var/www/my_site.test.dk>
    Options Indexes FollowSymLinks -MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
  </Directory>

  ErrorLog /var/log/apache2/my_site.test.dk.error.log

  # Possible values include: debug, info, notice, warn, error, crit,
  # alert, emerg.
  LogLevel warn

  CustomLog /var/log/apache2/access.log combined
</VirtualHost>

Using the apache.conf / httpd.conf

On my Mac the file is called httpd.conf, but on the Ubuntu servers I manage it’s called apache2.conf.

Anyway, just head to the bottom of the file, located here:
Linux: /etc/apache2/apache2.conf
Mac: /etc/apache2/httpd.conf

And add the following line:

SetEnv APPLICATION_ENV "development"

Exchange “development” for the environment you are configuring (in my setup, production = no value set).

Configure your application

Now we need to read the variable from the $_SERVER array to see which environment we are “in”, and configure the application accordingly.

In most projects I have an environment.inc.php file, which has the following structure:


<?php
// initializing the configuration array (mostly to avoid null warnings)
$envConfiguration = array();

// the environment configuration for the development environment (local machine)
if (isset($_SERVER['APPLICATION_ENV']) && $_SERVER['APPLICATION_ENV'] == 'development') {
  $envConfiguration = array(
    'db_password' => '12345',
    'db_user' => 'root',
    'db_host' => '',
    'db_name' => 'my_dev_db'
  );
}
// the environment configuration for the unit testing environment (local machine)
elseif (isset($_SERVER['APPLICATION_ENV']) && $_SERVER['APPLICATION_ENV'] == 'unittest') {
  $envConfiguration = array(
    'db_password' => '12345',
    'db_user' => 'root',
    'db_host' => '',
    'db_name' => 'my_unittest_db'
  );
}
// add more environments here... e.g. staging, test etc.

// Not having the APPLICATION_ENV variable set forces the application to
// use PRODUCTION settings!
// The reason for this is that I don't always have control of the production
// servers, while I do have control over the staging and test servers.
// (You can of course have a production value set instead.)
else {
  // production environment settings here.
  $envConfiguration = array(
    'db_password' => 'some_strong_password',
    'db_user' => 'some_production_user',
    'db_host' => '',
    'db_name' => 'production_database'
  );
}

Pretty simple, eh?

All we are doing is checking whether the APPLICATION_ENV variable is set in the $_SERVER array, and if it is, testing what it contains.

The reason I check isset() first is that PHP emits warnings when the variable is not set (which is the case in production in my setup).

What about unit testing? (phpunit)

Well, I have an answer there as well.

Since the $_SERVER variable is not populated by Apache during unit tests, we simply create it ourselves and set APPLICATION_ENV to “unittest”.

Here is a sample unit test include file; it should be included at the very top of your unit tests.
Let’s call the file unitTestConfiguration.inc.php and put it in a folder called tests.


<?php
// includes the phpunit framework
require_once 'PHPUnit/Framework.php';

// constructs the SERVER variable to set the environment to unittest.
$_SERVER = array(
  'APPLICATION_ENV' => 'unittest',
  'SERVER_NAME' => 'localhost',
  'REQUEST_URI' => ''
);
// SERVER_NAME and REQUEST_URI are not needed, but nice to have

// includes our environment file (remember to add a unittest section there!)
require_once 'environment.inc.php';

// include your database file here – it reads the $envConfiguration variable
// (set in environment.inc.php) and connects to the database

// set the default timezone here (because strftime will throw a warning in PHP5+)


When creating a unit test, simply do the following:


<?php
// includes the unit test configuration (including the PHPUnit framework)
require_once 'tests/unitTestConfiguration.inc.php';

class EnvironmentTest extends PHPUnit_Framework_TestCase {
  /**
   * A small test to see if our environment is actually set.
   * (You don't need this test in your test files, this is
   * just for the scope of this post!)
   */
  function testEnvironment() {
    $this->assertTrue($_SERVER['APPLICATION_ENV'] == 'unittest');
  }
}
To run the unit test, open a terminal (or whatever you are using) and go to the project folder.
Then run the following command:

phpunit tests/environmentTest.php

On my machine that gives the following output:

$ phpunit tests/environmentTest.php
PHPUnit 3.4.14 by Sebastian Bergmann.

.

Time: 0 seconds, Memory: 5.75Mb

OK (1 test, 2 assertions)

Files for this post: 2011-01-08_php_testing_article