Blog

  • Rackspace and load test automation

    Well, last workday of this week turned out nice.

I’ve been working with LoadImpact.com for a few months, providing text material for their blog. Mostly hands-on technical stuff about load testing, how their tool can be used, and fun things you can find out with a good load testing tool at hand.

But this week, one of my posts actually got published on the Rackspace DevOps blog. I don’t have the numbers, but I suspect Rackspace has quite a decent amount of readers. So hooray! Not that Rackspace called me personally and begged for it; rather, Rackspace and LoadImpact are working together in other areas. But still, more readers, eh? Anyway, the space available for this first post on their blog was limited, so I almost immediately followed up with some nitty-gritty details and a working demo. And yes, there’s code in there.

In other news: Ferrari finished 9th and 10th in the Belgian Grand Prix qualification (that’s F1), and I’ve just decided to port an ongoing project from CodeIgniter to Laravel 4. The only thing that bugs me about that is that Laravel 4 seems to have more runtime overhead than CI. Expect more on the conversion process in the next few weeks.

Now, time to prepare Saturday dinner and get a glass of the red stuff.


  • WordPress file permissions

In order for WordPress to be able to install plugins or themes automatically, a number of conditions have to be met. If those conditions aren’t all met, one-click installations or upgrades won’t happen; instead, whenever you try to upgrade, WordPress will show you the FTP credentials input form. If you’re anything like me, you hate it.

I sometimes run into this problem. My first instinct is to check the obvious file permissions: does the web server have write access to all the important places? As long as we’re only talking about plugins and themes, the important place is the wp-content folder. Once I’m certain that the web server has write access, I typically try again and successfully upgrade my plugin.

Every once in a while, installing or upgrading still won’t work, even if I’m 100% certain that WordPress should be able to write everywhere it needs to. I end up searching for a solution for about 10 minutes, give up, and resort to manually uploading plugins via ssh to get on with my life. Today I decided to find the root cause of this problem and solve it. Writing this blog post about it serves as much as a ‘note to self’ as assistance to anyone else troubleshooting this without finding a solution.

    The rules

    So, the rules for WordPress to be able to install and upgrade plugins and themes:

1. The web server needs to have write access to the wp-content folder. For example, on a Debian-based system (e.g. Ubuntu) this is the user ‘www-data’; on RedHat/Fedora, it’s typically the user ‘apache’ (please correct me here if I’m wrong). WordPress tests this by writing a temporary file to wp-content and then removing it. There are plenty of blog posts, howtos and forum posts about this. They usually point back to this article: http://codex.wordpress.org/Changing_File_Permissions
2. The files in wp-admin need to be owned by the web server user. WordPress tests this by using the PHP function getmyuid() to check if the owner of the currently running PHP script is the same as the owner of the newly created temporary file. If it’s not, WordPress will select another method of installation or upgrade.

Rule #2 is what typically gets me. Whenever I move an existing WordPress installation to a new home, I’m sometimes (obviously) not careful with setting file permissions and ownership and end up in this situation. Rule #1 is extremely intuitive; checking for write permission is close to second nature. But Rule #2, checking file ownership in wp-admin… well, I’d even say it’s slightly unintuitive. If anything is worth protecting it should be the admin area, and being more restrictive with file ownership and permissions under wp-admin would even kind of make sense.
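Both rules can be checked from a shell. The sketch below simulates what WordPress does, using a throwaway directory as a stand-in for your real wp-content folder (WordPress itself does this in PHP with getmyuid() and fileowner()), so treat it as an illustration rather than the actual WordPress code:

```shell
# Stand-in for your real wp-content folder
WP_CONTENT=$(mktemp -d)

# Rule 1: write a temporary file, like WordPress does
touch "$WP_CONTENT/wptest.tmp" && echo "rule 1 OK: wp-content is writable"

# Rule 2: compare the owner of the running script with the owner of the
# temporary file (WordPress compares getmyuid() to fileowner())
script_owner=$(whoami)
file_owner=$(stat -c '%U' "$WP_CONTENT/wptest.tmp")   # GNU stat; on macOS use: stat -f '%Su'
if [ "$script_owner" = "$file_owner" ]; then
  echo "rule 2 OK: direct install/upgrade possible"
else
  echo "rule 2 failed: WordPress will show the FTP credentials form"
fi

rm -r "$WP_CONTENT"
```

Against a real installation, you’d point WP_CONTENT at your actual wp-content folder and run the check as the web server user, e.g. via sudo -u www-data.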

Anyway. Comments, questions or other feedback? Please post a comment below.


  • Gmail and Google Apps mail migration

I’ve been a long-time Google Apps user, and I think it’s a perfect solution for a smaller company like mine. In fact, it’s a perfect solution for bigger companies as well. Right now, I’m working with a client that wants to consolidate 10 individual Google Apps domains into one single account that handles all the domains. About half of the 15 users have accounts in all the other domains; the other half have accounts in 2-3 of them. Even if this is not the most typical kind of work I do, this assignment brings some welcome new challenges into my current work.

    Migration tool requirements

Anyway. One of the client requirements is naturally that all existing email is migrated into the new Google Apps account. Easy peasy, right? Well, it turns out that migrating email to and from Gmail (and therefore Google Apps) is quite a challenge for a number of reasons. Of course, there is plenty of advice on the Internet, but none of the proposed solutions would cover all my needs, which are:

    1. The tool must be able to handle XOAUTH on the source.
    2. The tool must be able to handle XOAUTH on the target.
    3. The tool must be able to handle Gmail’s rather special model of treating folders like labels.
4. The tool must be scriptable.
    5. The tool must be able to handle delta changes, running the script a second time should not create duplicates on the target email account.
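Requirement 5 boils down to keying each message on something stable, such as its Message-ID header, and only copying messages the target doesn’t already have. A toy sketch of that idea (the IDs are made up, and this is not imapsync’s actual code):

```shell
# Messages present on the source and target accounts (made-up Message-IDs)
source_ids="<a@x> <b@x> <c@x>"
target_ids="<a@x>"

# Delta sync: copy only messages whose Message-ID is missing on the target,
# so a second run with the same input produces no duplicates
for id in $source_ids; do
  case " $target_ids " in
    *" $id "*) echo "skip $id (already on target)" ;;
    *) echo "copy $id"; target_ids="$target_ids $id" ;;
  esac
done
```

Running it prints one “skip” line for the message already on the target and “copy” lines for the other two; running the same loop again against the updated target list would skip all three.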

I’ve looked at plenty of alternatives, but the only tools I found that could potentially do the work for me came at too high a cost. The cloud-based tools that exist typically charge per user account, which would have been fine if there was a 1-to-1 mapping between users and accounts. But in my scenario, each user has on average 7 accounts, and I had given my client a fixed price for the entire job. So, much as I would have loved to try them, I can’t afford to lose money on a job.

The two most problematic requirements were handling XOAUTH on both ends and handling the Gmail folder/label magic. The two official tools from Google, the migration API and the Google migration tool for Exchange, both failed. The API only gives you write access, so it’s not possible to get email OUT of a Gmail account using it. The Google Exchange migration tool assumes that the source server is something other than Gmail and requires you to know the username/password for all source accounts.

    A solution… almost

Enter imapsync. Imapsync used to be a free open source tool but is now under a commercial license. For a mere EUR 50, I bought access to the source code (in Perl). Imapsync is able to handle XOAUTH on both source and destination, it’s scriptable, and it’s able to use the Message-ID to keep a kind of state. Running imapsync twice with the same parameters will not duplicate the emails on the target server; more on that later.

The one problem I had with imapsync was the folder vs label management. The problem most people know of is that Gmail doesn’t really use folders, it uses labels. Even if they’re similar in a lot of cases, there are differences. What I learned is that there’s another issue regarding the concept of subfolders, or nested labels. An example:

    • Via IMAP, create a folder named foo => Gmail creates a root level label “foo”.
    • Via IMAP, create a folder named foo/bar => Gmail creates the label “bar” nested under the label “foo”.
    • Via IMAP, create a folder named “fuu/bar” => Gmail creates the root level label “fuu/bar”.

See the difference? In the last example, you’d perhaps have thought that Gmail would create a root level label “fuu” and then a nested label “bar” under it. But nope, Gmail will happily create a label containing the actual IMAP label separator character. Bummer. The end result is that if you transfer email with imapsync out of the box, you get a flat structure of really long label names. And that flat list can grow to be quite long if you’re actively using nested labels. You don’t want that.
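The way around this is to always create parent folders before their children, much like mkdir -p does for directories. A sketch of that ordering logic, using local directories as a stand-in for IMAP folders (the names are made up; this is an illustration, not the actual imapsync code):

```shell
create_folder() {
  parent=$(dirname "$1")
  # Recurse so the parent always exists before the child is created;
  # on Gmail, this is what prevents a flat literal "foo/bar" label
  if [ "$parent" != "." ] && [ ! -d "$parent" ]; then
    create_folder "$parent"
  fi
  mkdir "$1"
}

demo=$(mktemp -d)
cd "$demo"
create_folder "foo/bar/baz"
find . -mindepth 1 -type d | sort   # ./foo  ./foo/bar  ./foo/bar/baz
```

The Perl patch further down in this post applies the same recursion on the IMAP side before issuing each CREATE.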

I was pondering a whole lot of possible solutions to this problem. I actually got to the point where I tried migrating the source account to a local IMAP account on my own machine, manipulating the Maildir directly on disk to insert dummy emails in strategic places, and then migrating to the target account. It worked, but it also introduced a whole new set of moving parts.

    The final solution (thanks Dennis)


It took a long Sunday walk with the dog before I realized that the proper solution would be to work with the imapsync source to fix folder creation. As I described above, the cause of the folder/label problem is that Gmail treats things differently depending on the order of folder creation. So, after the initial shock of seeing 5000 lines of Perl code (I don’t consider Perl part of my standard toolbox), I got to work and built a patch. With the patch in place, folder creation now works as I’d expected it to in the first place. The one downside of this solution is that it won’t be able to tell the difference between a nested label and a label on the source Gmail account that actually contains a / (forward slash).

The other issue with this patch is that it doesn’t have a switch to tell imapsync whether you want the different folder creation behavior or not. I guess that’s needed before I submit it back to the maintainer.

Anyway, this patch assumes that you have imapsync 1.542, even if it’s likely to work with other versions as well. If you have another version of imapsync and want to work with Gmail migrations, consider upgrading anyway, since only 1.542 supports XOAUTH. On line 2312, replace the existing create_folder function with this modified version:

    sub create_folder {
        my( $imap2, $h2_fold, $h1_fold ) = @_ ;
        my( @parts, $parent ) ;

        print "Creating folder [$h2_fold] on host2\n" ;
        if ( ( 'INBOX' eq uc( $h2_fold ) )
         and ( $imap2->exists( $h2_fold ) ) ) {
            print "Folder [$h2_fold] already exists\n" ;
            return( 1 ) ;
        }

        # Make sure the parent folder exists before creating the child,
        # otherwise Gmail creates a flat label containing the separator
        @parts = split( $h2_sep, $h2_fold ) ;
        pop( @parts ) ;
        $parent = join( $h2_sep, @parts ) ;
        $parent =~ s/^\s+|\s+$//g ;
        if ( ( $parent ne "" ) and ! $imap2->exists( $parent ) ) {
            create_folder( $imap2, $parent, $h1_fold ) ;
        }

        if ( ! $dry ) {
            if ( ! $imap2->create( $h2_fold ) ) {
                print( "Couldn't create folder [$h2_fold] from [$h1_fold]: ",
                    $imap2->LastError(), "\n" ) ;
                $nb_errors++ ;
                # success if folder exists ("already exists" error)
                return( 1 ) if $imap2->exists( $h2_fold ) ;
                # failure since create failed
                return( 0 ) ;
            } else {
                # create succeeded
                return( 1 ) ;
            }
        } else {
            # dry mode, no folder created so subsequent imap calls may fail; assume failure
            return( 0 ) ;
        }
    }


  • Load testing tools vs monitoring tools

Seems all the writing I have time for these days is for others. Anyway, my latest post on the LoadImpact blog is published. Go read it.

  • Node.js scalability and tech writers

I’ve just published a text on the LoadImpact blog. This time I write a little bit about the findings I made when trying to put a little heavier load on a very simple Node.js application. Turns out that Node.js is not a silver bullet after all. Who would have guessed?

If you need content for your blog or magazine, I’m available as a guest writer. I write mostly about coding, web standards, open source technology and similar subjects. Don’t hesitate to contact me if you have questions or want a quote.


  • LoadImpact


I’ve just started blogging as a guest writer at LoadImpact.com. If you’re not already familiar with LoadImpact, go check them out. They provide the world-leading load testing as a cloud service solution, and it’s free to try out.

Today my first post was published, about the difference between Node.js and PHP as server side languages/environments.

    So, get out of here already and read about it.

  • Debugging your phpunit test cases in CodeIgniter

I don’t know why it feels ironic, but it does. Sometimes I need to debug my PHPUnit test cases, and it wasn’t very self-explanatory to figure out how to set it up. The solution, however, is quite easy.

I’ve previously written about how to enable the PHP debugger Xdebug from a command line script. The end result of reading that article should be that you have a php5d command on your system that triggers Xdebug to hook into your IDE (mine is Sublime Text 2, yours may differ). For the rest of this article, I’m going to assume you have the php5d command available.

The next step is to prepare your test classes for execution via the php command rather than via the PHPUnit framework (don’t worry, it will still be included). The solution was found at Stack Overflow (naturally; read it, it’s good), but I found that two simplifications can be made when it’s used in CodeIgniter with CI_Unit.

The foundation of the Stack Overflow trick is to make sure that a couple of PHPUnit classes are loaded, either via explicit statements as the sample suggests, or via the PHPUnit/Autoload functionality. It turns out that when bootstrapping PHPUnit via CI_Unit, it already brings in PHPUnit/Autoload.php. That makes it one less require_once statement to forget about.

The second simplification is to minimize the room for copy/paste related errors. As long as you are a bit obsessive with class names and make sure that your file name is always classname.php (so class FooTest is implemented in the file FooTest.php), you can avoid typing the class name in one additional place. That’s exactly the kind of thing I need to do to avoid hard-to-catch errors.

    My test cases are now modeled after this template:

    <?php
    /**
     * Tests for the Foo class
     */
    require_once('../application/third_party/CIUnit/bootstrap_phpunit.php');

    class FooClassTest extends CIUnit_TestCase
    {
        static function main()
        {
            $suite = new PHPUnit_Framework_TestSuite(__CLASS__);
            PHPUnit_TextUI_TestRunner::run($suite);
        }

        public function setUp()
        {
        }

        public function testSomething()
        {
            $this->assertTrue(FALSE);
        }
    }

    if (!defined('PHPUnit_MAIN_METHOD')) {
        $class = str_replace('.php', '', basename(__FILE__));
        $class::main();
    }

And to actually debug it (assuming that the above test class is stored in tests/libs and is, naturally, named FooClassTest.php), I type:

    $ cd tests
    $ php5d libs/FooClassTest.php

And to run it under PHPUnit, it works just as you’re used to already.

    /E

  • Using CodeIgniter migrations with PHPUnit

Even some of the CodeIgniter developers are not especially happy about how migrations are implemented in the current version. Nevertheless, if you have a CodeIgniter 2.x code base that you want to write unit tests for, you may want to use them.

In a PHPUnit test class, you can use the setUp() and tearDown() methods to prepare, run fixtures, create mock objects or whatever else you need to do. Since testing should always be targeted at a test database, one of the things I do is run migrations. In the setUp() method, I execute migrations to create tables and insert data; in the tearDown() method, I do the opposite. Something like this:

    public function __construct($name = NULL, array $data = array(), $dataName = '')
    {
    	parent::__construct($name, $data, $dataName);
    	$this->CI->load->library('migration');
    }
    
    public function setUp()
    {
    	$this->CI->migration->version(3);
    }
    
    public function tearDown()
    { 
    	$this->CI->migration->version(0);
    }

Please note: Calling migrations from the setUp() or tearDown() methods may or may not be a very bright idea. It depends a lot on how you organize your tests. setUp() and tearDown() are called before/after every individual test function in your test class, so the static methods setUpBeforeClass() and tearDownAfterClass() may be a lot better:

    class InflowLibTest extends CIUnit_TestCase
    {
    	static public function setUpBeforeClass()
    	{
    		$CI =& get_instance(); 
    		$CI->load->library('migration');
    		// Also, make sure that the test db is up to the correct level:
    		$CI->migration->version(0);
    		$CI->migration->version(1);
    	}
    
    	static public function tearDownAfterClass()
    	{
    		$CI =& get_instance();
    		$CI->load->library('migration');
    		// Tear it down.
    		$CI->migration->version(0);
    	}
    }

Anyway, I discovered that the current version of CodeIgniter Migrations doesn’t handle this very well. The problem is in system/libraries/Migration.php: line 160 in my file (version 2.1.3), but line 227 in the current Github version.

    // Cannot repeat a migration at different steps
    if (in_array($match[1], $migrations))
    {
    	$this->_error_string = sprintf($this->lang->line('migration_multiple_version'), $match[1]);
    	return FALSE;
    }

    include $f[0];
    $class = 'Migration_' . ucfirst($match[1]);

    if ( ! class_exists($class))
    {
    	$this->_error_string = sprintf($this->lang->line('migration_class_doesnt_exist'), $class);
    	return FALSE;
    }

$f[0] is the variable holding the name of the migration file to load and execute next. The problem is that this line assumes that the file in question isn’t already loaded. That’s probably an OK assumption in most cases, but when running in the context of PHPUnit the way I do it, both setUp() and tearDown() run in the same PHP session. So the second time around, when tearDown() executes, all the migration modules are already included in the PHP process. Simply changing include into require_once fixes the problem. The resulting code should look like this:

    // Cannot repeat a migration at different steps
    if (in_array($match[1], $migrations))
    {
    	$this->_error_string = sprintf($this->lang->line('migration_multiple_version'), $match[1]);
    	return FALSE;
    }

    require_once $f[0];
    $class = 'Migration_' . ucfirst($match[1]);

    if ( ! class_exists($class))
    {
    	$this->_error_string = sprintf($this->lang->line('migration_class_doesnt_exist'), $class);
    	return FALSE;
    }


    Enjoy,


    /E


  • Debug PHP-cli scripts with Xdebug and Sublime Text 2

In my previous post, I explained how I’ve set up debugging of PHP scripts with Xdebug and Sublime Text 2 in a web based environment. In this part, I’ll outline how I debug PHP command line scripts.

If you want to follow this guide, make sure you have everything set up as explained in the previous post.

    Triggering Xdebug

When using Xdebug from a web browser, I use the Chrome extension Xdebug Helper to send a valid XDEBUG_CONFIG parameter string to the PHP process. The magic part is to set the idekey parameter to sublime.xdebug (sent via a cookie). To do the same thing when running a script from the command line, the trick is to use an environment variable. This is explained in the Xdebug manual, like this:

    export XDEBUG_CONFIG="idekey=session_name"
    php myscript.php

Oh, how cumbersome to type. Let’s do that in a script instead. Create a file named php5d like this:

    #!/bin/sh
    export XDEBUG_CONFIG="idekey=sublime.xdebug"
    php5 "$@"

    Make the file executable:

    chmod +x php5d

Then give it a try. Let’s debug the test script we created in the previous post, test.php:

    1. Open test.php in Sublime Text 2
    2. Start a debugging session by hitting Shift+F8
    3. Set a breakpoint on a suitable line (must be a non-blank line)
    4. Run the script: php5d test.php

    You should see something like this:

    [screenshot: Sublime Text 2 paused at the breakpoint]

    One last thing

With the php5d script in place, it’s easy and straightforward to debug command line scripts without doing too much damage to the normal environment. The last thing I did in my environment was to put the script in my /home/erik/bin folder to make it callable from everywhere on my machine.

First, make sure you’ve got a central place for your personal scripts. At the end of your ~/.bashrc, make sure you have a line like this (you may already have something similar in place; I’m using the subfolder bin, you may use something else):

    ## Additional personal scripts etc.
    PATH="$PATH:$HOME/bin"

Then, move the php5d script to the intended folder:

    mv php5d ~/bin

And reload .bashrc (or just start a new terminal window):

    . ~/.bashrc

There you go: a globally available php5d command that triggers debugging in Sublime Text 2 for whatever PHP script you launch. Enjoy!

    /E


  • CodeIgniter for PHP CodeSniffer gets better

Just a quick note: my PHP CodeSniffer standard for CodeIgniter is gradually improving. Since I originally mentioned it a month ago, the following improvements have been made to the repo:

    1. Improved indentation checks in switch statements; they now work with tabs.
    2. Correct file and class naming when creating libraries (CodeIgniter expects capitalized file names for libraries).
    3. Support for accepting a range of short (less than 4 characters) variable names: common and meaningful names such as $sql, $id and $ret.
    4. (Kind of a hack) Accept the public function name _init(), only because it’s a requirement when using GasOrm.

    So, go update your installations, all 3 of you 🙂

    /Erik