searchzen - combating complexity so you do not have to

Web development and the order of things

Las Meninas, Picasso

I got the chance to visit the Picasso Museum in Barcelona in April 2014. There I saw the collection of Picasso's interpretations of Velázquez's "Las Meninas" that he completed during the later stages of his career.

During my visit to the museum I saw some Dutch high school students who had been given the task of sketching their own version of Picasso's version of Velázquez's work. It was fascinating to watch - some of them approached the task with a high level of energy, while others seemed quite indifferent to it.

Sitting there, watching students create an interpretation of an interpretation, I realized that I lack ways to describe how software development ideas are formed.

Software development, like art, is underpinned by tradition. The software development community has developed a tradition of capturing and sharing ideas via "software patterns" - some of the most influential being collected by the "Gang of Four" in "Design Patterns: Elements of Reusable Object-Oriented Software" and by Martin Fowler in "Patterns of Enterprise Application Architecture".

But I have come to think about the limitations of design patterns. The most striking way I can describe those limitations is to describe how hard it is to express how the visual vocabulary of Velázquez's "Las Meninas" carries over to the Picasso version. These are two completely different works of art, but it is obvious to the onlooker that the scene is the same and the ideas are the same - they are just expressed in different ways. It seems impossible to form an exhaustive list of patterns describing Velázquez's "Las Meninas" that could also describe the Picasso version. But when you stand there and watch, it seems obvious.

Another way to describe the limitations of design patterns is to point out how hard it is to describe the success of Ruby on Rails and how it has influenced modern web development. Rails encompasses most of the popular enterprise design patterns, Active Record in particular. Rails stands as a cornerstone in the formation of ideas for modern web development frameworks - I would consider it the "Las Meninas" of web development. I see the inspiration from Rails in most of the other web frameworks I use today. Granted, some of the ideas expressed in Rails can only be expressed by a combination of tools - e.g. I would choose a combination of Express and Yeoman as the "Picasso version". Today, Rails itself and the frameworks inspired by it are hardly comparable - but when you sit there and watch, the feeling is the same.

Every now and then we talk of a "paradigm shift" in software development. I do not think the word "paradigm" is adequate to describe the plethora of tools we have available today that are inspired by Rails. A better word might be "episteme", as used by Foucault in "The Order of Things". In my understanding, it better expresses the unconscious choices we make due to our cultural setting and influences (our "epoch").

I am just starting out reading the works of Michel Foucault. The introduction to "The Order of Things", where he describes "Las Meninas" in great detail, fascinates me. I look forward to the journey. I hope to find new ways to describe how software ideas are formed.

Using QUnit from Grunt

When poking around in the jQuery and jQuery UI codebases I noticed extensive use of QUnit from Grunt. The QUnit part makes sense - the jQuery team also created QUnit. But why Grunt? I remember the earlier versions just used Makefiles.

Grunt and npm replace make, autoconf and apt for JavaScript projects

I remember Makefiles as simple - as long as you got the use of tabs right and kept them small. It was "the other tools" that made the experience bad for me. Remember automake and m4 scripting? Not to mention cmake. Tools like make or ant are simple in themselves - they get complicated through the tradition of associated tools that has grown up around them: Makefiles tend to assume that dependencies are handled using automake and m4 scripts, and ant buildfiles retrieve dependencies with tools like ivy. Compared to the complexity I know from existing build toolchains on Linux, Grunt starts to look simple. The simplicity of Grunt comes from how easy it is to combine Grunt plugins, and with Grunt it is now possible to build using only JavaScript-based tools.

Installing Grunt and npm

To be able to use Grunt you will need Node.js and npm. You can find a Node.js installer for most platforms at nodejs.org - it includes npm. When you install, make sure that the "node" and "npm" commands are available on your command line via your "PATH" environment variable. (For my less command-line-savvy friends there are some detailed instructions for Windows 7 here.)
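You can verify that both commands are picked up correctly:

node --version
npm --version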

You can install the Grunt command-line interface globally like this:

npm install -g grunt-cli
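Note that grunt-cli only provides the "grunt" command on your PATH - with Grunt 0.4 the Grunt library itself is installed locally in each project, typically like this from the project folder:

npm install grunt --save-dev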

To take your newly installed Grunt for a spin, you could try building jQuery UI. To check out and build jQuery UI you could do this:

git clone http://github.com/jquery/jquery-ui
cd jquery-ui
npm install
grunt --force

This should take you through an example of using Grunt on an existing project. If all goes well, all the tests should pass and a new jQuery UI build should be available to you inside the "dist" folder.

Scaffolding a Grunt project that supports QUnit

The major strength of Grunt is its strong tradition for plugins - but when starting out that can also be a major drawback: you need to set up some plugins before you can start working. It does not help that the grunt-init command has been separated out into a plugin in version 0.4 (most existing blog entries just refer to it as being inside Grunt). See the "Project scaffolding" section in the docs for more information.

To be able to run grunt-init you need to install the grunt-init package and grab a working template:

npm install -g grunt-init
cd c:\users\jacob\.grunt-init
git clone https://github.com/gruntjs/grunt-init-gruntfile.git gruntfile

Note that my username is "jacob" and that I am on Windows here. You will probably have to use another directory.

Now I could run "grunt-init gruntfile" and answer a couple of questions:

D:\Sites\2>grunt-init gruntfile --force
Running "init:gruntfile" (init) task
This task will create one or more files in the current directory, based on the
environment and the answers to a few questions. Note that answering "?" to any
question will show question-specific help and answering "none" to most questions
will leave its value blank.

Warning: Existing files may be overwritten! Used --force, continuing.

"gruntfile" template notes:
This template tries to guess file and directory paths, but you will most likely
need to edit the generated Gruntfile.js file before running grunt. If you run
grunt after generating the Gruntfile, and it exits with errors, edit the file!

Please answer the following:
[?] Is the DOM involved in ANY way? (Y/n)
[?] Will files be concatenated or minified? (Y/n)
[?] Will you have a package.json file? (Y/n)

After hitting Enter three times I got this file "Gruntfile.js":

/*global module:false*/
module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
    // Metadata.
    pkg: grunt.file.readJSON('package.json'),
    banner: '/*! <%= pkg.title || pkg.name %> - v<%= pkg.version %> - ' +
      '<%= grunt.template.today("yyyy-mm-dd") %>\n' +
      '<%= pkg.homepage ? "* " + pkg.homepage + "\\n" : "" %>' +
      '* Copyright (c) <%= grunt.template.today("yyyy") %> <%= pkg.author.name %>;' +
      ' Licensed <%= _.pluck(pkg.licenses, "type").join(", ") %> */\n',
    // Task configuration.
    concat: {
      options: {
        banner: '<%= banner %>',
        stripBanners: true
      },
      dist: {
        src: ['lib/<%= pkg.name %>.js'],
        dest: 'dist/<%= pkg.name %>.js'
      }
    },
    uglify: {
      options: {
        banner: '<%= banner %>'
      },
      dist: {
        src: '<%= concat.dist.dest %>',
        dest: 'dist/<%= pkg.name %>.min.js'
      }
    },
    jshint: {
      options: {
        curly: true,
        eqeqeq: true,
        immed: true,
        latedef: true,
        newcap: true,
        noarg: true,
        sub: true,
        undef: true,
        unused: true,
        boss: true,
        eqnull: true,
        browser: true,
        globals: {}
      },
      gruntfile: {
        src: 'Gruntfile.js'
      },
      lib_test: {
        src: ['lib/**/*.js', 'test/**/*.js']
      }
    },
    qunit: {
      files: ['test/**/*.html']
    },
    watch: {
      gruntfile: {
        files: '<%= jshint.gruntfile.src %>',
        tasks: ['jshint:gruntfile']
      },
      lib_test: {
        files: '<%= jshint.lib_test.src %>',
        tasks: ['jshint:lib_test', 'qunit']
      }
    }
  });

  // These plugins provide necessary tasks.
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');

  // Default task.
  grunt.registerTask('default', ['jshint', 'qunit', 'concat', 'uglify']);

};

To be able to run this Gruntfile you need to install the necessary plugins. If you append "--save-dev" to the installation commands, the installation info will be inserted into package.json:

npm install grunt-contrib-jshint --save-dev
npm install grunt-contrib-qunit --save-dev
npm install grunt-contrib-watch --save-dev
npm install grunt-contrib-concat --save-dev
npm install grunt-contrib-uglify --save-dev

This will retrieve the plugins, place them inside the node_modules folder and add them to the devDependencies section of package.json. As you might have noticed, Gruntfiles are a bit verbose, but they support easy composition of plugins - e.g. if you would like to add another target using another plugin, you can do so pretty easily, as sketched below. Gruntfiles are written in JavaScript, so if you wish to insert custom logic in your build files it should be pretty easy to do so (without worrying about tabs and spaces).
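For example, here is a sketch of what adding a CSS minification target with the grunt-contrib-cssmin plugin could look like (after an "npm install grunt-contrib-cssmin --save-dev"). The file paths are made up for illustration:

// Inside grunt.initConfig({ ... }) in Gruntfile.js:
cssmin: {
  dist: {
    src: 'css/styles.css',       // hypothetical input file
    dest: 'dist/styles.min.css'  // hypothetical output file
  }
},

// ...and next to the other loadNpmTasks calls:
grunt.loadNpmTasks('grunt-contrib-cssmin');

Running "grunt cssmin" would then execute the new target on its own.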

Note that once you have created a package.json describing your dependencies, you can simply run "npm install" to install them. There is no need to store the "node_modules" folder in your version control system.
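For reference, after the commands above the devDependencies section of package.json should look something like this - the version numbers are illustrative and depend on when you run the install:

"devDependencies": {
  "grunt": "~0.4.1",
  "grunt-contrib-concat": "~0.3.0",
  "grunt-contrib-jshint": "~0.6.0",
  "grunt-contrib-qunit": "~0.2.0",
  "grunt-contrib-uglify": "~0.2.0",
  "grunt-contrib-watch": "~0.5.0"
}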

QUnit replaces JUnit and PHPUnit on the frontend

After installing grunt-contrib-qunit and enabling it in your Gruntfile you have the option of writing automated QUnit tests that can be run directly from Grunt. grunt-contrib-qunit uses PhantomJS behind the scenes, enabling you to run your tests directly from Grunt (without opening a browser). This should make it easier to automate your tests.
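The qunit task in the generated Gruntfile looks for HTML test pages under "test". A minimal test page could look something like the sketch below - the qunit.js/qunit.css locations and the lib and test file names are assumptions you will need to adapt to your project:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>My library tests</title>
  <link rel="stylesheet" href="qunit/qunit.css">
</head>
<body>
  <!-- QUnit renders its test results into these two elements -->
  <div id="qunit"></div>
  <div id="qunit-fixture"></div>
  <script src="qunit/qunit.js"></script>
  <script src="../lib/mylib.js"></script>   <!-- hypothetical code under test -->
  <script src="mylib_test.js"></script>     <!-- hypothetical test file -->
</body>
</html>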

I think the best way to learn QUnit is to look at existing tests. The Button core test in jQuery UI is a good place to start.

The essential functions in QUnit are:

  • ok( truthy [, message] )
  • equal( actual, expected [, message] )
  • expect( number of assertions )

Combining these lets you write a test like the check for #7534 in jQuery UI:

test( "#7534 - Button label selector works for ids with \":\"", function() {
  expect( 1 );
  var group = $( "<span><input type='checkbox' id='check:7534'> <label for='check:7534'>Label</label></span>" );
  group.find( "input" ).button();
  ok( group.find( "label" ).is( ".ui-button" ), "Found an id with a :" );
});

Here we expect exactly one assertion to be run - if the ok() assertion is never reached, expect() makes the test fail.

There are more advanced options in QUnit that you can explore - feel free to take a look at the documentation on qunitjs.com, or be inspired by the existing tests in jquery-ui.

Building postgres on Windows

So. Lately I have been advocating a switch from various BigCo databases to postgres. I have been implying that postgres is "just better". But basically I don't have a clue. I am not a database expert - and I did not take the advanced database classes in school. So - why am I doing this?

Beating myself on the head with a wooden stick

I believe in having the source available for all my tools. This is just a personal preference of mine ... I like to poke around to learn new stuff ... discover new approaches and combine projects in new ways ... but mostly I like to learn from the insights of others and see implementation tidbits buried deep down. My list of projects to poke around in is long. Postgres is just one of them. So - now I am going to poke around in postgres. Let's see if we can compile it on Windows 7. I'll just write down notes as I go along. Are you ready? Bring forward your wooden stick!

Checking out postgres on Windows

So - the plan is to compile C/C++ code on Windows. To do this you need a C/C++ compiler. I'll just take the easy route here and download Visual Studio Express 2012 for Windows Desktop. Note that you need to register to do this. Downloading and installing Visual Studio Express can take a while.

Then you need git. If you have been living under a rock for the last decade: git is a distributed version control system, popularized by the Linux kernel and github.com. You can grab a Windows installer at git-scm.com. Go ahead and install it if you do not have it yet. You'll be glad you did.

After installing git you can clone the postgres code base like this:

mkdir build
cd build
git clone git://git.postgresql.org/git/postgresql.git

After a while you should have the source code available. If you are like me, you'll probably hurry into \build\postgresql\src and notice win32.mak. Maybe this will work?

nmake /f win32.mak

No luck! It fails with:

        link.exe -lib @C:\Users\Jacob\AppData\Local\Temp\nm3F8F.tmp
NMAKE : fatal error U1073: don't know how to make 'libpq-dist.rc'
Stop.
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\BIN\nmake.EXE"' : return code '0x2'
Stop.
D:\Build\postgresql\src>

Luckily there is \build\postgresql\src\tools\msvc\build.pl.

Huh? What are *.pl files? That's Perl. If you don't know what Perl is, you are in for a treat. Grab ActiveState Perl and install it if you don't have it yet, so you will be able to run the script.

Now. After installing Perl, let's cross our fingers and type

perl build.pl

No?? msbuild throws up with:

D:\Build\postgresql\src\tools\msvc>perl build.pl
Detected hardware platform: Win32
Microsoft (R) Build Engine version 4.0.30319.17929
[Microsoft .NET Framework, version 4.0.30319.18052]
Copyright (C) Microsoft Corporation. All rights reserved.

Building the projects in this solution one at a time. To enable parallel build,
please add the "/m" switch.
Build started 06-10-2013 17:58:55.
Project "D:\Build\postgresql\pgsql.sln" on node 1 (default targets).
Building with tools version "2.0".
Target "ValidateSolutionConfiguration" in file "D:\Build\postgresql\pgsql.sln.m
etaproj" from project "D:\Build\postgresql\pgsql.sln" (entry point):
Using "Error" task from assembly "Microsoft.Build.Tasks.v4.0, Version=4.0.0.0,
Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".
Task "Error"
D:\Build\postgresql\pgsql.sln.metaproj : error MSB4126: The specified solution
configuration "Release|MCD" is invalid. Please specify a valid solution configu
ration using the Configuration and Platform properties (e.g. MSBuild.exe Soluti
on.sln /p:Configuration=Debug /p:Platform="Any CPU") or leave those properties
blank to use the default solution configuration. [D:\Build\postgresql\pgsql.sln
]
Done executing task "Error" -- FAILED.
Done building target "ValidateSolutionConfiguration" in project "pgsql.sln" --
FAILED.
Done Building Project "D:\Build\postgresql\pgsql.sln" (default targets) -- FAIL
ED.


Build FAILED.

Oh. You probably spotted it too: "Detected hardware platform: Win32". I ran this using the "Developer Command Prompt for VS2012" - maybe this targets Win32 by default? If I select "Microsoft Visual Studio 2012" | "Visual Studio Tools" | "VS2012 x64 Cross Tools Command Prompt" and execute "build" again - then it works!

After the compilation finished I typed:

mkdir c:\postgres
install c:\postgres

Now I can use postgres from c:\postgres!
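As a quick smoke test of the freshly installed binaries - this part is a sketch, not from the transcript above - you could initialize a data directory, start the server and connect to it:

rem Create a database cluster, start the server and connect with psql
c:\postgres\bin\initdb -D c:\postgres\data -U postgres -E UTF8
c:\postgres\bin\pg_ctl -D c:\postgres\data -l postgres.log start
c:\postgres\bin\psql -U postgres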

And now for something completely different

After finishing what I did above I threw out my custom compile and started using the postgres zip archive again. I kept the code locally, though. Right now I am poking around in the source code using "Run Source code analysis on solution". This gives me the lowdown on what Microsoft thinks could be improved in the code. Let's see an example:

C6001: Using uninitialized memory 'replace_val' (libpgtypes, timestamp.c, line 845):

    'replace_val' is not initialized (line 388)
    Enter this loop, (assume '*p') (line 394)
    Enter this branch, (assume '*p==37') (line 396)
    Assume switch ( '*p' ) resolves to case 99: (line 401)
    'replace_val' is an In/Out argument to 'pgtypes_fmt_replace' (declared at d:\build\postgresql\src\interfaces\ecpg\pgtypeslib\extern.h:37) (line 845)
    'replace_val' is used, but may not have been initialized (line 845)

Note that this is a random example. Right now I have a limited understanding of the postgresql codebase, so following the hardening guidelines from OWASP seems like a good idea.

But wait! There's more

I like what I see. It looks like there is an active community for developers here. And - oh - here is the official "Installation From Source on Windows" section in the documentation. It looks solid. I'll go check that out now :P

Installing GeoServer on Debian

I just installed GeoServer on Debian using Apache 2.2. Here's what I did:

First of all I installed Jetty using "sudo aptitude install jetty", then I grabbed the GeoServer source from http://svn.codehaus.org/geoserver/trunk/ and compiled it using OpenJDK 6 and Maven 2.2 (the build seems to fail using the standard Maven in Debian, so I grabbed a version of Maven from ftp://mirrors.sunsite.dk).
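For reference, the checkout and build boil down to something like this - the module layout may have changed since, so treat it as a sketch:

# Check out the source and build it with Maven (skipping the test suite)
svn checkout http://svn.codehaus.org/geoserver/trunk/ geoserver
cd geoserver/src
mvn clean install -DskipTests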

After compiling, I copied geoserver/src/web/app/target/geoserver.war to /usr/share/jetty/webapps/ and restarted Jetty using /etc/init.d/jetty restart.

I can only access port 80 on my webhost, and I need Apache 2 for other purposes, so I had to configure mod_proxy. I set up a virtual host in /etc/apache2/sites-available/geo.searchzen.org and symlinked it to /etc/apache2/sites-enabled/geo.searchzen.org. To enable mod_proxy I created symlinks for /etc/apache2/mods-available/proxy.load and /etc/apache2/mods-available/proxy_http.load in /etc/apache2/mods-enabled. (mod_proxy fails without the symlink to proxy_http.load.)
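In commands, that amounts to something like the following (the a2enmod and a2ensite helpers would achieve the same thing):

# Enable mod_proxy, mod_proxy_http and the new virtual host, then restart Apache
sudo ln -s /etc/apache2/mods-available/proxy.load /etc/apache2/mods-enabled/
sudo ln -s /etc/apache2/mods-available/proxy_http.load /etc/apache2/mods-enabled/
sudo ln -s /etc/apache2/sites-available/geo.searchzen.org /etc/apache2/sites-enabled/
sudo /etc/init.d/apache2 restart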

The relevant parts of the mod_proxy configuration in /etc/apache2/sites-enabled/geo.searchzen.org boil down to a virtual host like the sketch below, assuming Jetty listens on its default port 8080:
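<VirtualHost *:80>
    ServerName geo.searchzen.org

    # Forward requests for /geoserver to the webapp running in Jetty
    ProxyPass /geoserver http://localhost:8080/geoserver
    ProxyPassReverse /geoserver http://localhost:8080/geoserver
</VirtualHost>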

Mindful software

I have spent a considerable amount of time thinking about the concept of information and how to convey it in software. Some years ago I came to the conclusion that I want to present information in context - e.g. not present crude extracts from databases, but adapt them to the presentation context, with the user in mind.

Some useful contexts could be "location" or "social networks" - the context should vary depending on the type of information - e.g. information about abstract concepts has no use for location information. The use of context should increase the likelihood of the information being conveyed to the user in an understandable manner.

When I observe users, I rarely see one user who uses only one tool to achieve her goal. Information gathering is usually done using a variety of sources - so a good system design principle could be to know where the system should stop, and how to present information in such a manner that supplemental or related information can be retrieved from another system.

So, to me, presenting information in context is also about having the least amount of "friction" in the system. Here I consider any obstacle hindering information flow into and out of the system as "friction". By minimizing friction we make it easier to present information in context by connecting data between systems.

This has led me to think that good system design should focus on how information is shared between users via connected systems. By turning our attention to how information is shared between users via connected systems, we gain an understanding of what the data actually is (since we need to be able to share it).