
MVC considered harmful

Having spent considerable amounts of time looking at large javascript and HTML codebases, I have come to one conclusion: most attempts at structuring frontend code, or anything UI related, in a team setting will fail if they do not consider the long traditions of software development - and understand the mental models that have been formed so far. Or as George Santayana put it: "Those who cannot remember the past are condemned to repeat it".

To better understand the historical context, I think all web developers should be at least familiar with a basic description of the MVC pattern and with the existing frameworks that are already available.

A detour around Xerox PARC

Prior to the internet, programming mostly involved a workstation and, in some cases, a central server with a central datastore. Programming was mainly about data entry and computation, and about how to share the results in a meaningful manner. Great care and effort was put into meeting the immediate expectations of the users, and programming was typically done using a structured plan - something we know today as the waterfall model. Software modelling was (and still is) done by using metaphors to map user needs into software and carefully designing the software using object oriented constructs. One prevalent approach was to apply the MVC pattern, invented by Trygve Reenskaug in 1978 while visiting Xerox PARC.

At the core of the MVC paradigm are:

  • Model - Encapsulating the data and the domain logic
  • View - Presenting data to the user
  • Controller - Responsible for controlling the flow in the application and sending data back and forth between the model and the view.

The MVC pattern has been the foundation for many of the modern user interfaces and frameworks we know today. Examples include Java Swing, Ruby on Rails and ASP.NET MVC.

Several implementations of javascript MVC frameworks already exist, and you can compare them via TodoMVC. As you might have noticed, there are already a lot of implementations readily available, so there is no need to go about implementing your own.

Some of the currently most publicly visible projects are:

It should be noted that most of the listed frameworks have now implemented MVVM, an adaptation of MVC that Microsoft introduced to address the costs of exposing the entire model to the view. MVVM was originally available via Silverlight and WPF, and to some extent Adobe Flex, but has since been widely implemented by other frameworks.

Using MV* javascript frameworks is not a quick fix

With the advent of javascript MV* frameworks we should now have a shared vocabulary for programming, but for many practical use cases regarding web applications it is typically disregarded. Why is that?

One might argue that the bravado of your average-day web hacker resembles that of a hired gun in the wild west. It is very easy to reach small implementation goals fast using HTML, CSS and javascript. Javascript was introduced during the early days of the web to enable more advanced ways of interaction in a web application - not as a means of structuring applications per se. The early version of javascript was implemented by Brendan Eich in 10 days and contains shortcomings that have led to the use of linters like jslint. Popular javascript libraries like prototypejs (which was popular during the rise of ruby on rails) and jquery tried to address some of the shortcomings and the disparities in DOM manipulation across the various browsers. Somewhat deservingly, javascript gained the reputation of a toy language among professional programmers in its early days - a situation that has changed with the introduction of tools like jslint and the ability to focus on the good parts of javascript. And with the advent of javascript on the serverside it is possible to build end-to-end solutions for clean room MVC - but in most practical scenarios, where the server implementation will be in something other than javascript, there will be some kind of duplication of logic on the server and the client. Or as Xzibit could have said:

Yo Dawg I hurd you like MVC so I put a MVC in your MVC

Jokes aside ... It is important to understand that taking the MVC paradigm and applying it to web application programming in the same manner as you would have applied it to application programming in the PC era could be considered harmful. The web has an entirely different delivery mechanism than your local PC had.

Understanding the web as a delivery platform

The main difference between a local PC application and a web application today is typically that the web application involves data available from another machine. This should not pose a noticeable problem when rendering logic is expressed on the server - e.g. when the whole view is rendered via one call to the server. But if the view is constructed on the client side and requires several calls to fetch data, e.g. for a compound view, then this could pose a problem performance-wise. It should be immediately obvious that a solution where all data comes prerendered will be faster than a solution that requires data to be assembled via several calls. When you also consider the added complexity of tackling layout changes caused by unpredictable responses, you could start looking for ways to improve performance. One obvious choice would be to minimize the number of calls needed to the server.
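
To make this concrete, here is a minimal sketch in javascript/jquery. The endpoints and render functions are made up for illustration - the point is just that the second version pays for one round trip instead of three:

// Compound view assembled from three separate calls (hypothetical endpoints):
$.getJSON( "/api/user", renderUser );
$.getJSON( "/api/orders", renderOrders );
$.getJSON( "/api/messages", renderMessages );

// The same view fed by one combined endpoint - a single round trip:
$.getJSON( "/api/dashboard", function( data ) {
  renderUser( data.user );
  renderOrders( data.orders );
  renderMessages( data.messages );
});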

Where to go from here?

First of all - you should think very hard about what you are doing if you are planning to roll your own MVC framework. If you are going to do it anyway, I hope you at least browse through the already available frameworks, familiarize yourself with the basics of MV* and understand the performance penalties of clean room MVC.

One way to minimize the number of calls to the server would be to set up a fullstack solution, where javascript is used both on the server and on the client side. This could be done e.g. with react.js, which has introduced the use of a virtual DOM so that you can reason about the entire page before starting rendering on the client side (here is an example to get started: react server example). react.js could be used for the V in MV*. For the M you could use Backbone.js or vanilla javascript constructs.
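
As a minimal sketch of the idea - assuming express, a version of react where renderToString is available via react-dom/server, and a made-up Hello component - server side rendering could look like this:

// server.js - render the same component markup on the server.
var express = require('express');
var React = require('react');
var ReactDOMServer = require('react-dom/server');

// A made-up component for illustration.
var Hello = function(props) {
  return React.createElement('h1', null, 'Hello ' + props.name);
};

var app = express();
app.get('/', function(req, res) {
  // Reason about the entire page on the server, then ship plain HTML.
  var html = ReactDOMServer.renderToString(React.createElement(Hello, { name: 'world' }));
  res.send('<!DOCTYPE html><html><body><div id="app">' + html + '</div></body></html>');
});
app.listen(3000);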

If you are thinking about refactoring an existing codebase to support better separation of concerns, then I suggest that you set a temporary goal before switching your entire codebase to e.g. angular or ember.js. If you are in a situation where views are constructed via javascript or some custom constructs on the serverside, then you could consider whether you could express the same logic via an existing templating engine. E.g. if you are considering a switch to ember.js, it could be worthwhile to try to port some of your view logic to handlebars first, as sketched below. If your existing codebase was constructed sometime after 2000, chances are that the data retrieval/manipulation parts are already expressed in a manner that is free of rendering logic - if not, you could refactor them to be so.
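
A minimal sketch of such a port, with a made-up renderUser helper as the "before" state:

// Before: view logic buried in string concatenation (made-up example).
function renderUser(user) {
  return '<div class="user"><h2>' + user.name + '</h2>' +
         (user.admin ? '<span class="badge">admin</span>' : '') + '</div>';
}

// After: the same view expressed as a handlebars template.
var source = '<div class="user"><h2>{{name}}</h2>' +
             '{{#if admin}}<span class="badge">admin</span>{{/if}}</div>';
var template = Handlebars.compile(source);
var html = template({ name: 'Jacob', admin: true });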

Web development and the order of things

Las Meninas, Picasso

I got the chance to visit the Picasso museum in Barcelona in April 2014. Here I saw the collection of Picasso's interpretations of Velázquez' "Las Meninas" that he completed during the later stages of his career.

During my visit to the museum I saw some Dutch high school students who had been given the task of sketching their own version of Picasso's version of Velázquez' work. It was fascinating to watch - it looked like some of them were approaching the task with a high level of energy, while some of them seemed quite indifferent to it.

Sitting there, watching students create an interpretation of an interpretation, I realized that I lack ways to describe how software development ideas are formed.

Software development, like art, is underpinned by tradition. The software development community has developed a tradition of capturing and sharing ideas via "Software patterns" - some of the most influential being captured by "The gang of four" in "Design Patterns: Elements of Reusable Object-Oriented Software" and "Patterns of Enterprise Application Architecture" by Martin Fowler.

But I have come to think of the limitations of design patterns. The most striking way I can describe the limitations is to describe how hard it is to express how the visual vocabulary of Velázquez' "Las Meninas" is carried over to the Picasso version. These are two completely different works of art, but it is obvious to the onlooker that the scene is the same and the ideas are the same - they are just expressed in different ways. It seems impossible to form an exhaustive list of patterns describing Velázquez' "Las Meninas" that could also be used to describe the Picasso version. But when you stand there and watch, it seems obvious.

Another way to describe the limitations of design patterns would be to point out how hard it is to describe the success of Ruby on Rails and how it has influenced modern web development. Rails encompasses most of the popular enterprise design patterns, Active Record in particular. Rails stands as a cornerstone in the formation of ideas for modern web development frameworks today - I would consider it the "Las Meninas" of web development. I see the inspiration from Rails in most of the other web frameworks I use today. Granted, some of the ideas expressed in Rails can only be expressed by a combination of tools - e.g. I would choose a combination of express and yeoman as the "Picasso version". Today, Rails itself and the frameworks inspired by it are hardly comparable - but when you sit there and watch, the feeling is the same.

Every now and then we talk of a "paradigm shift" in software development. I do not think that the word "paradigm" is adequate to describe the plethora of tools we have available today that are inspired by Rails. A word that could be used instead is "episteme", as used by Foucault in "The order of things". In my understanding of the word, it better expresses the unconscious choices we make due to our cultural settings and influences (our "epoch").

I am just starting out reading the works of Michel Foucault. The introduction to "The order of things", where he describes "Las Meninas" in great detail, fascinates me. I look forward to the journey. I hope to find new ways to describe how software ideas are formed.

Using qunit from grunt

When poking around in the jquery and jquery ui codebases I noticed an extensive use of qunit from grunt. I get that the jquery guys also did qunit - so that part makes sense. But why grunt? I remember that earlier versions just used makefiles.

Grunt and npm replace make, autoconf and apt for javascript projects

I remember Makefiles as simple - just as long as you got the use of tabs right and kept them simple. It was just "the other tools" that made the experience bad for me. Remember automake and m4 scripting? Not to mention cmake. Tools like make or ant are simple in themselves - they start getting complicated when a tradition of associated tools arises: Makefiles tend to assume that dependencies are handled using automake and m4 scripts, and ant buildfiles retrieve dependencies with tools like ivy. When I consider the complexity I know from existing build toolchains on linux, grunt starts to look simple. The simplicity of grunt comes from how easy it is to combine grunt plugins, compared to how complex the build situation has become with other tools. Using grunt it is now possible to build using only javascript based tools.

Installing grunt and npm

To be able to use grunt you will need nodejs and npm. You can find a nodejs installer for most platforms at nodejs.org - it will include npm. When you install, you should make sure that the "node" and "npm" commands are available on your commandline via your "PATH" environment variable. (For my less commandline-savvy friends there are some detailed instructions for windows 7 here.)

You can install grunt globally from the commandline like this:

npm install -g grunt-cli

To take your newly installed grunt for a spin, you could try it out by building jquery-ui. To check out jquery-ui you could do this:

git clone http://github.com/jquery/jquery-ui
cd jquery-ui
npm install
grunt --force

This should take you through an example of using grunt on an existing project. If all goes well, all the tests should pass and a new jquery-ui build should be available to you inside the "dist" folder.

Scaffolding a grunt project that supports qunit

The major strength of grunt is its strong tradition for plugins - but when starting up it can also be a major drawback: you need to set up some plugins to be able to start working. It does not help that the grunt-init command has been separated out into its own package in version 0.4 (most of the existing blog entries just refer to it as being part of grunt). See the Project scaffolding section in the docs for more information.

To be able to run grunt-init you need to install the grunt-init package and grab a working template to start from:

npm install -g grunt-init
cd c:\users\jacob\.grunt-init
git clone https://github.com/gruntjs/grunt-init-gruntfile.git gruntfile

Note that my username is "jacob" and that I am on windows here. You will probably have to substitute another directory.

Now I could run "grunt-init gruntfile" and answer a couple of questions:

D:\Sites\2>grunt-init gruntfile --force
Running "init:gruntfile" (init) task
This task will create one or more files in the current directory, based on the
environment and the answers to a few questions. Note that answering "?" to any
question will show question-specific help and answering "none" to most questions
will leave its value blank.

Warning: Existing files may be overwritten! Used --force, continuing.

"gruntfile" template notes:
This template tries to guess file and directory paths, but you will most likely
need to edit the generated Gruntfile.js file before running grunt. If you run
grunt after generating the Gruntfile, and it exits with errors, edit the file!

Please answer the following:
[?] Is the DOM involved in ANY way? (Y/n)
[?] Will files be concatenated or minified? (Y/n)
[?] Will you have a package.json file? (Y/n)

After hitting Enter three times I got this file "Gruntfile.js":

/*global module:false*/
module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
    // Metadata.
    pkg: grunt.file.readJSON('package.json'),
    banner: '/*! <%= pkg.title || pkg.name %> - v<%= pkg.version %> - ' +
      '<%= grunt.template.today("yyyy-mm-dd") %>\n' +
      '<%= pkg.homepage ? "* " + pkg.homepage + "\\n" : "" %>' +
      '* Copyright (c) <%= grunt.template.today("yyyy") %> <%= pkg.author.name %>;' +
      ' Licensed <%= _.pluck(pkg.licenses, "type").join(", ") %> */\n',
    // Task configuration.
    concat: {
      options: {
        banner: '<%= banner %>',
        stripBanners: true
      },
      dist: {
        src: ['lib/<%= pkg.name %>.js'],
        dest: 'dist/<%= pkg.name %>.js'
      }
    },
    uglify: {
      options: {
        banner: '<%= banner %>'
      },
      dist: {
        src: '<%= concat.dist.dest %>',
        dest: 'dist/<%= pkg.name %>.min.js'
      }
    },
    jshint: {
      options: {
        curly: true,
        eqeqeq: true,
        immed: true,
        latedef: true,
        newcap: true,
        noarg: true,
        sub: true,
        undef: true,
        unused: true,
        boss: true,
        eqnull: true,
        browser: true,
        globals: {}
      },
      gruntfile: {
        src: 'Gruntfile.js'
      },
      lib_test: {
        src: ['lib/**/*.js', 'test/**/*.js']
      }
    },
    qunit: {
      files: ['test/**/*.html']
    },
    watch: {
      gruntfile: {
        files: '<%= jshint.gruntfile.src %>',
        tasks: ['jshint:gruntfile']
      },
      lib_test: {
        files: '<%= jshint.lib_test.src %>',
        tasks: ['jshint:lib_test', 'qunit']
      }
    }
  });

  // These plugins provide necessary tasks.
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');

  // Default task.
  grunt.registerTask('default', ['jshint', 'qunit', 'concat', 'uglify']);

};

To be able to run this Gruntfile you need to install the necessary plugins. If you append "--save-dev" to the installation commands, the installation info will be inserted into package.json:

npm install grunt-contrib-jshint --save-dev
npm install grunt-contrib-qunit --save-dev
npm install grunt-contrib-watch --save-dev
npm install grunt-contrib-concat --save-dev
npm install grunt-contrib-uglify --save-dev

This will retrieve the plugins, place them inside node_modules and add them to the devDependencies section of package.json. As you might have noticed, Gruntfiles are a bit verbose, but they support easy composition of plugins - e.g. if you would like to add another target using another plugin, you could do so pretty easily, as sketched below. Gruntfiles are written in javascript, so if you wish to insert custom logic in the build files, it should be pretty easy to do so (without worrying about tabs and spaces).
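
For example, minifying stylesheets could be a matter of one more target and one more plugin - a sketch assuming the grunt-contrib-cssmin plugin and a hypothetical stylesheet:

// Inside grunt.initConfig({ ... }) - one more target:
cssmin: {
  dist: {
    src: 'lib/styles.css',        // hypothetical stylesheet
    dest: 'dist/styles.min.css'
  }
},

// ...and at the bottom of the Gruntfile:
grunt.loadNpmTasks('grunt-contrib-cssmin');
grunt.registerTask('default', ['jshint', 'qunit', 'concat', 'uglify', 'cssmin']);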

Note that once you have created a package.json describing your dependencies, you can simply run "npm install" to install them. There is no need to store the "node_modules" folder in your version control system.

Qunit replaces junit and phpunit on the frontend

After installing grunt-contrib-qunit and enabling it in your gruntfile, you now have the option of writing automated qunit tests that can be run directly from grunt. grunt-contrib-qunit uses phantomjs behind the scenes to enable you to run your tests directly from grunt (without opening a browser). This should make it easier to automate your tests.

I think the best way to learn qunit is to look at existing tests. The button core test in jquery ui is a good place to start.

The essential functionality in qunit is:

  • ok( truthy [, message] )
  • equal( actual, expected [, message] )
  • expect( number of assertions )

Combining these lets you write a test like the check for #7534 in jquery ui:

test( "#7534 - Button label selector works for ids with \":\"", function() {
  expect( 1 );
  var group = $( "<span><input type='checkbox' id='check:7534'> <label for='check:7534'>Label</label></span>" );
  group.find( "input" ).button();
  ok( group.find( "label" ).is( ".ui-button" ), "Found an id with a :" );
});

Here we expect 1 assertion to be run.

There are more advanced options in qunit that you can explore - feel free to take a look at the documentation on qunitjs.com, or be inspired by the existing tests in jquery-ui.
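
One example is testing asynchronous code: qunit 1.x has asyncTest, which pauses the test runner until start() is called. A minimal sketch with a made-up element:

asyncTest( "button fades out", function() {
  expect( 1 );
  var button = $( "#mybutton" );            // hypothetical element
  button.fadeOut( 100, function() {
    ok( button.is( ":hidden" ), "Button is hidden after fadeOut" );
    start();                                // resume the test runner
  });
});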

Building postgres on windows

So. Lately I have been advocating a switch to postgres from various BigCo databases. I have been implying that "postgres" is "just better". But basically I don't have a clue. I am not a database expert - and I did not take the advanced database classes in school. So - why am I doing this?

Beating myself on the head with a wooden stick

I believe in having the source available for all my tools. This is just a personal preference of mine ... I like to poke around to learn new stuff .. discover new ways and combine projects in new ways .. but mostly I like to learn from the insights of others. See implementation tidbits buried deep down. My list of projects to poke around in is long. Postgres is just one of the projects I'd like to poke around in. So - now I am going to poke around in postgres. Let's see if we can compile it on windows 7. I'll just write down notes as I go along. Are you ready? Bring forward your wooden stick!

Checking out postgres on windows

So - the plan is to compile C/C++ code on windows. To do this you need a C/C++ compiler. I'll just take the easy route here on windows and download Visual Studio Express 2012 for Windows Desktop. Note that you need to register to do this. Downloading and installing Visual Studio Express can take a while.

Then you need git. If you have been living under a rock for the last decade: git is a distributed version control system, popularized by the linux kernel and github.com. You can grab a windows installer on git-scm.com. Go ahead and install it if you do not have it yet. You'll be glad you did.

After installing git you can clone the postgres code base like this:

mkdir build
cd build
git clone git://git.postgresql.org/git/postgresql.git

After a while you should have the source code available. If you are like me, then you'll probably hurry into \build\postgresql\src and notice win32.mak. Maybe this will work?

nmake /f win32.mak

No luck! It fails with:

link.exe -lib @C:\Users\Jacob\AppData\Local\Temp\nm3F8F.tmp
NMAKE : fatal error U1073: don't know how to make 'libpq-dist.rc'
Stop.
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\BIN\nmake.EXE"' : return code '0x2'
Stop.
D:\Build\postgresql\src>

Luckily there is \build\postgresql\src\tools\msvc\build.pl.

Huh? What are *.pl files? That's perl. If you don't know what perl is, then you are in for a treat. Grab ActiveState Perl and install it if you don't have it yet, so you will be able to process the file.

Now, after installing perl, let's cross our fingers and type:

perl build.pl

No?? msbuild throws up with:

D:\Build\postgresql\src\tools\msvc>perl build.pl
Detected hardware platform: Win32
Microsoft (R) Build Engine version 4.0.30319.17929
[Microsoft .NET Framework, version 4.0.30319.18052]
Copyright (C) Microsoft Corporation. All rights reserved.

Building the projects in this solution one at a time. To enable parallel build,
please add the "/m" switch.
Build started 06-10-2013 17:58:55.
Project "D:\Build\postgresql\pgsql.sln" on node 1 (default targets).
Building with tools version "2.0".
Target "ValidateSolutionConfiguration" in file "D:\Build\postgresql\pgsql.sln.m
etaproj" from project "D:\Build\postgresql\pgsql.sln" (entry point):
Using "Error" task from assembly "Microsoft.Build.Tasks.v4.0, Version=4.0.0.0,
Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".
Task "Error"
D:\Build\postgresql\pgsql.sln.metaproj : error MSB4126: The specified solution
configuration "Release|MCD" is invalid. Please specify a valid solution configu
ration using the Configuration and Platform properties (e.g. MSBuild.exe Soluti
on.sln /p:Configuration=Debug /p:Platform="Any CPU") or leave those properties
blank to use the default solution configuration. [D:\Build\postgresql\pgsql.sln
]
Done executing task "Error" -- FAILED.
Done building target "ValidateSolutionConfiguration" in project "pgsql.sln" --
FAILED.
Done Building Project "D:\Build\postgresql\pgsql.sln" (default targets) -- FAIL
ED.


Build FAILED.

Oh, you probably spotted it too: "Detected hardware platform: Win32". I ran this using the "Developer Command Prompt for VS2012" - maybe this targets win32 by default? If I select "Microsoft Visual Studio 2012" | "Visual Studio Tools" | "VS2012 x64 Cross Tools Command Prompt" and execute "build" again - then it works!

After the compilation finished I typed:

mkdir c:\postgres
install c:\postgres

Now I can use postgres from c:\postgres !

And now for something completely different

After finishing what I did above I threw out my custom build and started using the postgres zip archive again. I kept the code locally though. Right now I am poking around in the source code using "Run Code Analysis on Solution". This gives me the lowdown on what Microsoft thinks could be improved in the code. Let's see an example:

C6001  Using uninitialized memory  Using uninitialized memory 'replace_val'.   libpgtypes  timestamp.c 845
        'replace_val' is not initialized            388
        Enter this loop, (assume '*p')            394
        Enter this branch, (assume '*p==37')          396
        Assume switch ( '*p' ) resolves to case 99:            401
        'replace_val' is an In/Out argument to 'pgtypes_fmt_replace' (declared at d:\build\postgresql\src\interfaces\ecpg\pgtypeslib\extern.h:37)         845
        'replace_val' is used, but may not have been initialized            845

Note that this is a random example. Right now I have a limited understanding of the postgresql codebase, so following the hardening guidelines from OWASP seems like a good idea.

But wait! There's more

I like what I see. Looks like there is an active community for developers here. And. Oh. Here is the official "Installation From Source on Windows" section in the documentation. It looks solid. I'll go check that out now :P

Installing geoserver on debian

I just installed geoserver on debian using apache 2.2 . Here's what I did:

First of all I installed jetty using "sudo aptitude install jetty", then I grabbed the geoserver source from http://svn.codehaus.org/geoserver/trunk/ and compiled it using openjdk-6 and maven 2.2 (it looks like the build fails using the standard maven in debian, so I grabbed a version of maven from ftp://mirrors.sunsite.dk).

After compiling, I copied geoserver/src/web/app/target/geoserver.war to /usr/share/jetty/webapps/ and restarted jetty using /etc/init.d/jetty restart.

I can only access port 80 on my webhost, and I need apache 2 for other purposes, so I had to configure mod_proxy. I set up a virtual host in /etc/apache2/sites-available/geo.searchzen.org and symlinked it to /etc/apache2/sites-enabled/geo.searchzen.org. To enable mod_proxy, I created symlinks for /etc/apache2/mods-available/proxy.load and /etc/apache2/mods-available/proxy_http.load in /etc/apache2/mods-enabled. (mod_proxy fails without the symlink to proxy_http.load)
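
On debian, the a2enmod and a2ensite helpers create the same symlinks, so the manual linking above could also have been done like this:

sudo a2enmod proxy proxy_http
sudo a2ensite geo.searchzen.org
sudo /etc/init.d/apache2 reload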

Here are the relevant parts of my mod_proxy configuration in /etc/apache2/sites-enabled/geo.searchzen.org - something along these lines, assuming jetty's default port 8080:
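
<VirtualHost *:80>
    ServerName geo.searchzen.org
    # Forward everything to the geoserver webapp running in jetty
    # (assuming jetty's default port 8080).
    ProxyPass        / http://localhost:8080/geoserver/
    ProxyPassReverse / http://localhost:8080/geoserver/
</VirtualHost>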