Wednesday, 21 September 2011

The Cost of Woo

Last Friday I wrote about The Woo of Software Development Methods. On Monday I came across this article in the Guardian about the failure of the last government's attempt to update the fire service's IT systems.

An attempted reorganization of the fire service by the last government cost nearly £500m and was one of the worst cases of project failure MPs have ever seen, according to a highly critical report published on Tuesday by the all-party Commons public accounts committee (PAC). It warns that finally getting the system working properly is likely to cost an additional £85m.

You can bet your bottom dollar that this project had more 'method' than you can shake a stick at.

The project's development was heavily reliant on advice from PA Consulting, whose services alone cost £42m.

That's a whole lot of expensive advice on how to run a big project, and yet...

The scheme was terminated last December with no objectives achieved and at least £469m wasted, the MPs say.

What happened?


Judging by the rest of the article, the people who commissioned the system suffered a £500m knee-jerk reaction in the post-9/11 gestalt, and the people who built it followed their process, which resulted in a massive overspend with no delivery at the end.  In other words, the commissioners reacted instead of thinking, and the implementors executed their methods rather than thinking.


Most processes have a feasibility study phase, but the report says the project was pushed ahead without undertaking any feasibility checks.  What is really frightening is that no-one questioned its feasibility during the subsequent years, even as the budget escalated.  That is the downside of methods and processes.  Once they're engaged, rational thought outside the box becomes difficult and anyone who says, 'hey, wait a minute...' is going to be swiftly side-lined or dropped from the project.  Active thought is actively discouraged.


Industry gurus react to problems like this by instituting more method and more standardization that constrains critical thinking, rather than arguing for more up-front thinking before anyone goes anywhere near a method.  The result is a growing fixation with style over substance, and a £500m failure of Woo! to deliver.

Friday, 16 September 2011

The Woo of Software Development Methods


There's a common misconception in the software industry: that people are rational thinkers.

When you work in IT, you work with systems that are logical.  If you put X in then you get X' out.  There's a cold logic to this that is self-evident given the nature of silicon.  It only seems reasonable, therefore, that software should be developed in the same way, and so the industry has put its faith in methods that define the processes and products of the software development activity.

Back in the day, big process definitions like SSADM and PRINCE were conjured up to turn the creative activity of making software into a domesticated beast that would deliver quality results in predictable timescales.  The people behind the methods saw the chaotic, personalized working methods of the time and figured that what they needed to do was turn software engineering into a production line.  Every step of the process could be defined as inputs, transformations and outputs.  The idea, it seemed, was that you input a client's requirements at one end of the factory, and the finished software would fall out of the other.  All the steps in between would be semi-automated, with manual workers operating machine stations along the way, each of which would perform some transformation on the product as it whizzed along the conveyor belt towards the door.

When the factory approach didn't work, the methodologists sought to break the manufacturing line down into ever smaller and more controlled steps.  Different methodologists broke the process down into different micro-steps.  Grady Booch, Jim Rumbaugh and Ivar Jacobson were three prominent personalities at the time when object-oriented methods were undergoing their Cambrian explosion.  The reason they're remembered, when the others are forgotten, is that they had the wits to team up and present a unified front in the method wars.  The result of their alliance was the Unified Modeling Language (UML), and it was the worst thing that ever happened to software engineering.

From the perspective of Booch, Rumbaugh and Jacobson, the UML was a roaring success.  It sold like hot cakes.  Instead of having to choose between methods, users could simply adopt UML because it incorporated all methods.  With UML you could slice and dice a system pretty much any way you wanted to.  The trouble with UML was that... you could slice and dice a system any way you wanted to.  Instead of having to choose a specific method, developers suffered the paralysis of too much choice.  UML was all about unifying all the diagramming types out there into a single standard.  Objects would now be drawn thus, end of problem.  Unfortunately, the UML had nothing useful to say about what an object was or what constituted a good way of partitioning concepts into objects.  During the late 90s and early 00s, I sat through innumerable presentations of models that were simply disasters in the making.  A couple are quite memorable.  One had about 100 subtypes hanging off one supertype; when printed out it covered four conference tables, yet the print was so small that I had to squint to read it.  Another had a mere four objects in it, but one of those objects had over three hundred states behind it, and the developers wondered why the tool took so long to load it.  Credit where credit is due: Booch, Rumbaugh and Jacobson realized that the UML was just a drawing standard and they addressed the lack of process guidance behind it by publishing the Rational Unified Process (RUP).  But the RUP, it turned out, had exactly the same fatal flaw built into it that the UML had: the process it presented was in fact the cross-product of all the processes out there in software engineering land.  The RUP allowed you to use any process you could think of by simply 'tailoring' its contents.  Users were, once again, faced with the paralysis of choice.

When you get right down to it, the UML/RUP won the war, but lost the peace.  Instead of helping software engineers to think about software development, it delivered a screaming Tower of Babble, and it was up to each engineer to design his own method using the syntax of the UML/RUP.

The UML/RUP were all syntax, and no semantics.  How did such a useless 'method' win the war?

The answer is simple: people like options.

Digital photography is a massive pastime these days.  Millions of people take billions of pictures every day.  For many of them, photography presents a challenge that they enjoy, and a huge industry has arisen around the problem of how to take better pictures.  Magazines catering to those with the shutter bug crowd the shelves of supermarkets and newsagents, and scores of books are written every year telling the amateur photographer how they can improve their photography.  A sizable proportion of them are dedicated to explaining how the latest camera with the X-megapixel sensor and 200-plus marketing features produces 'better' images than the outdated one you've got.  Go to a camera club meet-up and you'll find yourself surrounded by people talking about lens sizes, sensor sizes, metering options and post-processing software.  Look at their images, though, and you'll generally be underwhelmed.  These people are all kit, and no clue.  But, they think, if I buy the next camera with the bigger sensor, the faster processor and the facial recognition option then I'll take better pictures.  Meanwhile, some teenager out there is shooting stuff on a $30 Holga that has the one thing none of the kit-chasers is even thinking about: soul.

Method developers are like the digital camera manufacturers in that they are continually seeking to automate the creative process of engineering software.  If they can just manage to lock down the development process tightly enough then all the difficulties of engineering systems will go away.  Okay, that didn't happen with the last revision, but if we get a standards committee together and extend the scope of the UML that little bit more then everything will be fixed.  In other words, when it comes to methods, more is more.

Run-of-the-mill software developers and project managers like more-is-more.  More is going to take the risk out of their work.  More is going to deliver systems on time and to budget.  More is going to ensure the quality of their systems, too.  More method equals more software, right?

Run-of-the-mill software developers and managers are like the camera kit-chasers.  All kit, and no clue.  To see why, we need to take a look at the history of software development and consider the one thing that the methodologists are seeking to eliminate from the development process: the human mind.

Think about these milestones in the development of programming and software engineering: Dennis Ritchie's C language and Unix operating system, Bjarne Stroustrup's C++, Kristen Nygaard and Ole-Johan Dahl's Simula (the first object-oriented language), James Gosling's Java, Brad Cox's Objective-C, John W. Backus' Fortran, Bill Gates' MS-DOS, Andy Bechtolsheim's first Sun workstation, E.F. Codd's 3rd normal form for relational databases, Tim Berners-Lee's HTML, Craig McClanahan's Struts.

All these technical advances sprang from the personal, creative visions of very specific individuals.  In other words, none of them were designed by committee or developed using a method.  Amazing software is developed by amazing people, and yet the software engineering methodologists seek only to 'improve' software engineering by taking the people out of the equation.  They seem to think that if they can lock the process down tightly enough then even a monkey could deliver amazing software.

The trouble with the factory metaphor for software development is that factories are dumb.  Factories are set up *after* someone has designed a product, not *as* they are designing it.  All the difficult engineering and design decisions are made long before the shop floor is tooled up.  Factory-style software engineering is like delivering raw ingredients to the goods-in door and expecting the ladies and gentlemen on the shop floor to figure out how to deliver a finished product at the goods-out door.  Factories create, but they are not creative.

People are creative.  Some people are amazingly creative, while others are not, and the difference between the two is down to the way they think.

As I said at the beginning of this post, there is an assumption out there that people are logical, rational thinkers, especially in IT, but this assumption ignores the fundamental architecture of the human brain.

The human brain has a pattern matching, and pattern forming, architecture.  By default, it will find the best pattern it already has and fit the data to that: http://lesswrong.com/lw/7mx/your_inner_google/

Matching patterns is instinctive; forming them is hard work.  You *know* this to be true.  Learning to, say, play a guitar is very hard work to start with.  Initially, you have great difficulty getting your fingers to go where you want them to go.  The only way to make them comply is to consciously place them where you want them.  Through repetition, though, you gradually rewire your brain and eventually your fingers seem to find their places without effort.  Thinking rationally is no different.

Deep thought is a matter of seeing past the surface of things, and the misconceptions surrounding them, to the fundamental patterns that underlie them.  Johannes Kepler, presented with Tycho Brahe's raw data on the orbit of Mars, looked past the contemporary dogma that orbits had to be perfect circles to find that an elliptical orbit was the best model that fit the data.  He paid attention to the data, and he questioned existing theory, to come up with a new theory that still stands today.

Run-of-the-mill developers look to their existing patterns to match software engineering problems to their solutions.  This is not necessarily a bad thing.  Many software problems have good existing solutions.  Extant programming languages and operating systems, for example, solve many of the problems of how to instruct a computer and co-ordinate its activities.  Up at the application level, however, things aren't so easy.  Different businesses do similar things in different ways, so there is no one-size-fits-all, once-and-for-all solution to, say, the problem of how to build an accounting system.  What's more, as a professional software engineer, you are expected to be able to tackle a new application problem from scratch.  As input, you have a lot of vague English statements from potential users and maybe a handful of documents, screens and tables used by the current system.  It's awfully tempting to try and map these inputs to some diagrams in a method and expect the software to write itself, but the history of software development is littered with projects that took this approach and failed.  The highest failure score I've personally come across was 750 consultants working for three years before it was realized that the entire project had to be started again from scratch.  That's at least a £180m write-off.  It was a UML/RUP-based project.

I got an insight into what goes wrong when a method is adopted while working as a consultant on a 60-man project.  The team was using a rigorous method and had all been trained in its use, but things just weren't getting delivered.  To start with, I collected weekly metrics on the models that the team were building.  By going through all the revisions in the CASE tool's database I found that their object populations had started off small, built up over the weeks, and had roughly plateaued by the time I arrived.  This is just what you'd expect as they analyzed their problem domains and built up an understanding that was represented by the models.  But, despite the models' apparent stability, the engineers were always reporting that they were 90% complete.  What was going on?

Now, the CASE tool allocated unique identifiers to each object in a model so, instead of counting the classes in the models, I started tracking these identifiers.  It turned out that in any given week as many as 50% of the identifiers in the models could disappear, only to be replaced by a similar number of new identifiers.  The engineers were churning over the elements in their models.  Each week they'd change their minds about something, delete a load of model elements, and then create a new bunch to replace them.  The result - model churn - is what happens when you expect the method to solve the problem for you.  Imagine if Kepler had stuck to the theory that orbits are only ever circular.  Each time he tried to define the 'right' circle, the data would have contradicted him.  So he would try another circle, then another, and so on.  This churning is a sure and certain sign that you don't have an adequate theory of the system you are trying to understand.  Find the right theory and, hey presto, everything falls into place.
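If you want a feel for the kind of identifier-tracking I mean, here is a minimal sketch; the weekly snapshot sets and the churn measure are illustrative assumptions, since in practice the numbers came out of that particular CASE tool's repository.

# Compare two weekly snapshots of model-element identifiers.
def churn(previous_ids, current_ids):
    removed = previous_ids - current_ids     # elements deleted this week
    added = current_ids - previous_ids       # elements created this week
    surviving = previous_ids & current_ids   # elements that persisted
    return {
        "removed": len(removed),
        "added": len(added),
        "surviving": len(surviving),
        # Fraction of last week's model that was thrown away.
        "churn_rate": len(removed) / len(previous_ids) if previous_ids else 0.0,
    }

# The head-count stays flat at five elements, which looks like a stable
# model, yet 60% of the identifiers have been replaced.
week_1 = {"C12", "C13", "C14", "C15", "C16"}
week_2 = {"C14", "C15", "C21", "C22", "C23"}
print(churn(week_1, week_2))   # churn_rate = 0.6

A flat class count with a high churn rate is exactly the signature of a team that keeps tearing up its model and starting again.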

As a software engineer, your job is to construct a theory that explains the problem domain.  My advice is to never use the tools of a method during this phase.  Every problem has its own unique theory.  The method you are using is a theory about the structuring of software, and as such has nothing to say about the problem you are facing.  You need to find a way of thinking about the problem that suits the problem.  You need to find a notation that expresses the concepts in the problem domain in a way that is elegant and intuitive, and that allows you to reason about the problem.  If you are building a theory about a billing system with price changes, discounts and limited-time offers, then you might use a timeline to draw scenarios (there's a sketch of the idea after this paragraph).  If you are processing text then you might develop a grammar to represent the language you expect to encounter.  If you are designing the UI for a CASE tool then you might draw screenshots, label the components and annotate them with what the user can do to them.  If you are designing a signal processing chain then you might use a data flow diagram to show how the signal is passed from one stage to another.  The point is to build a theory of the problem using the right style of thinking and a notation that supports that thinking.  The notation of your method is seldom the right tool for thinking about the problem.  Going meta - thinking about your thinking - is a great way to solve the problem.
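To make the timeline idea concrete, here is a minimal sketch of a scenario-style theory for a hypothetical billing problem; the types, dates and discount rules are invented for illustration rather than taken from any real system.

from dataclasses import dataclass
from datetime import date

@dataclass
class PriceChange:
    effective_from: date
    unit_price: float

@dataclass
class Discount:
    starts: date
    ends: date
    fraction: float   # e.g. 0.25 means 25% off

def price_on(day, price_changes, discounts):
    # Walk the price-change timeline to find the price in force on 'day'...
    current = None
    for change in sorted(price_changes, key=lambda c: c.effective_from):
        if change.effective_from <= day:
            current = change.unit_price
    # ...then apply any limited-time offer that covers 'day'.
    for offer in discounts:
        if current is not None and offer.starts <= day <= offer.ends:
            current *= (1 - offer.fraction)
    return current

# Scenario drawn as a timeline: a price rise in March, a two-week sale in April.
changes = [PriceChange(date(2011, 1, 1), 10.0), PriceChange(date(2011, 3, 1), 12.0)]
offers = [Discount(date(2011, 4, 1), date(2011, 4, 14), 0.25)]
print(price_on(date(2011, 4, 10), changes, offers))   # -> 9.0

Notice that the 'notation' here is just a list of dated events and a walk along them; the shape of the code follows the shape of the problem, not the shape of anybody's method.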

Methods, when used as thinking tools, do more harm than good.  The rise of Agile and Extreme Programming approaches a few years after the UML won the method wars is testament to that.  Unfortunately, these lightweight methods are not that scalable.  Fortunately, theories are.

If you go back over the list of computing innovations I gave above, you see that each one of them was underpinned by a theory.  That theory was simple and elegant, and once it was formulated its implementation was a matter of turning the handle.  Furthermore, because the theory was simple and elegant it was easy to communicate to other people.  Get the theory right and people will just 'get it', and the model or code will write itself.  If you're lucky then you've had this experience before.  Remember a time when the code just flowed out of your fingers, when build after build just worked and the next step was obvious?  Think back to that and I'll bet it was because you had a solid theory behind everything you did.  Now remember a time when stuff just didn't work, when you had to backtrack and rewrite, when you finished every day in a foul mood, and I'll bet that was a time when you expected the doing to give you the understanding.  Methods have their place, and that place is after the thinking, not instead of it.

Wednesday, 14 September 2011

The Simpleminded Atheist

I have a bone to pick with our militant atheists. I think they're less effective than they could be because they focus on the negative side of religion and ignore its positive side. It's this positive side that keeps religions going, despite the horrors that are performed in the name of its darker side.

I am not religious. I think religion is an outdated philosophy based on superstition, fantasy, and the over-projection of various Parent metaphors onto the world 'out there'.

I have read The God Delusion, by Richard Dawkins, and God is Not Great, by Christopher Hitchens, and I pretty much agree with everything they have to say on the subject. Nevertheless, I regard them as about as useful as that idiot Southern Baptist preacher (who I shall not glorify by naming) who announced he was going to burn some copies of the Koran in 2010, which resulted in 14 innocent UN workers being murdered in Kabul.

Dawkins' and Hitchens' books are basically one-sided rants. 'Here's a list of all the things that are shit about religion...' They should know better, especially Dawkins.

Dawkins wrote The Selfish Gene, and then went on to give us the meme as a way of thinking about the transmission of, and competition between, ideas. To survive in either the gene pool or meme pool, a gene or meme should have some utility to its host, or at least do it no harm. Genes and memes that actively harm their holders should weed themselves out of circulation sooner or later. If you listen to Dawkins and Hitchens then it's hard to believe that the religion meme has any benefit to its host; in their writings it does nothing but harm. So why hasn't it been weeded out of the meme pool?

I want to explain why the religion meme stays in circulation by drawing an analogy with a harmful gene - the sickle cell gene.

The sickle cell gene produces an abnormal variant of haemoglobin. People with one copy of the gene produce a mixture of normal and abnormal haemoglobin. The sickle cell trait is so called because it results in abnormal sickle-shaped red blood cells. Holders of two copies of the gene suffer from full-blown sickle-cell anemia and can typically expect to live only into their forties.

Like religion as Dawkins and Hitchens describe it, the sickle cell trait would seem to do nothing but harm to those afflicted by it. So why hasn't it been weeded out of the gene pool?

The sickle cell trait has persisted because although it shortens the lifespan of its holders, it has an upside, too. It confers some degree of protection against the malaria parasite. In the right environment, i.e. one where malaria is prevalent, the sickle cell trait has a marginal benefit to its host, and so it persists. Those with the gene may live a shorter than normal life, but they are less likely to die very young from malaria.

Religion has thrived in the human meme pool for millennia, despite all the negative effects catalogued by Dawkins and Hitchens. This is because it has also conferred benefits on those afflicted by it.

The benefits of religion are primarily social and psychological. Religion can result in a sense of community on a small scale. Humans are first-and-foremost social creatures, and functional relationships with other humans are essential to our well-being. Religious practices can define and maintain a framework for those relationships. Religion, being a meme, can also evolve to suit its environment. Thus, the Angry Bastard god that was suitable for a small tribe of Israelites surrounded by numerous other hostile tribes was able to evolve into the Submissive Christian god when that tribe had to survive under the heel of the Roman Empire. They wouldn't have got very far if they'd adhered to their Angry Bastard god's command to kill everyone else and take their women for slaves. Rome would have smitten them. Instead, they adapted their religion to allow them to survive under their oppressors, and that's why Christianity is such a good religion for slaves and those affected by both natural and man-made disasters. Christianity provides its believers with the psychological and psychotherapeutic tools for coping with poverty, oppression and disasters by binding its practitioners together in mutually supportive communities, telling them to submit to their fate, and promising them that it will all be better once they get to heaven.

If Dawkins and Hitchens could somehow (miraculously?) remove all the religion memes from everyone's head while they slept tonight, what kind of world would we wake up to?

I don't think things would be much different in the first world. The effect of the Enlightenment on the West's meme pool was to introduce a meme that conferred resistance to religion, so long as people live in relatively well-off circumstances. Stress a westerner and the religious meme is likely to flare up again, like shingles, hence the outbreaks of religious warfare in Northern Ireland and the former Yugoslavia. The downside of the Post-Enlightenment Western model of civilization, though, is the weakness of community and the absence of religion's psychological benefits. Nowadays, community is based around jobs, X-Factor and Facebook, and these are no substitute for face-to-face encounters with our fellow human beings. The metaphor for life has now changed from 'be good, and you'll get into heaven' to 'he who dies with the most stuff wins'. Success in this world is not measured by friendships, but by how big a house/car/HDTV you can afford. As for psychological coping strategies, it's Prozac or Oprah, take your pick.

Meanwhile, in the third world, the effect of taking away people's religion overnight would be outright slaughter. Think Pol Pot, Idi Amin, Robert Mugabe, North Korea and the Democratic Republic of the Congo, except EVERYWHERE. It doesn't bear thinking about.

The West is full of quietly religious people who do no harm. They aren't out there strapping suicide bombs to themselves. They aren't holding barbecues fueled by someone else's religious text. They're just getting along. If you ask them if they believe in god they'll say they believe in 'something, a higher power'. They get some of the benefits of religion without all the nasty side-effects. Dawkins and Hitchens would take their delusion away from them, and in the process they'd take away its benefits, too, and give them nothing to replace them.

The sickle cell trait survives because malaria survives. If we could eradicate malaria, then the sickle cell trait would weed itself out of the gene pool.

Religion survives because people live in adverse circumstances. If we could eradicate those adversities then religion would weed itself out of the meme pool.

In The Moral Landscape, Sam Harris makes a brave attempt to sketch a world in which morality is grounded in maximizing human well-being, rather than obeying the tenets of religious texts designed for a very different world. Well-being is rather a vague concept, but there are governments out there that are starting to ask how they can measure it. It's a start, and I hate to be a party-pooper here, but it's not enough.

If we focus on human happiness then there's a danger that we're all going to end up with bigger houses, cars, and HDTVs, because at the moment we equate happiness with success in the consumer lifestyle game. Meanwhile, the system that keeps us alive - the Earth's biosphere - is getting trashed. I tell you now, if we don't get the relationship between our happiness and the Earth's resources sorted out then in 50-100 years' time you are going to see a resurgence in religion, and religious warfare that'll throw us back into the stone age.

The shrill, simpleminded atheism of Dawkins and Hitchens is no basis for sustaining a world with 9+ billion people on it. Their breed of atheism is, however, a great way to get that population down below 5 billion.

Harris' anthropocentric, well-being model is no answer either, because we could so easily shop ourselves to death.

What we need is an attitude towards life that is grounded in both the nature of ourselves and the world that we live in.

And that's what I'm going to write about in future posts.

Thursday, 21 July 2011

Woo Alert! The Brain Science Podcast, Episode 44: Meditation and the Brain

Ginger's guest in this episode is Daniel Siegel, MD, author of The Mindful Brain. He is looking at how the experiences of meditation map onto the hardware of the brain, and what we can learn from such a mapping.

This is another great podcast from Ginger, but right at the end Siegel lapses into a poetic fit of 'WOO'.

The brain is an infinite space, he tells us. It has one hundred billion neurons, each with an average of 10,000 connections. The potential combinations of neurons and connections are so huge that if we started trying each combination out from birth we couldn't live long enough to experience them all; therefore, the brain is effectively infinite, and aren't we awesome!

I've heard this 'WOO' before. Someone on a philosophy forum quoted these numbers at me and claimed there were more connections in a brain than there were particles in the Universe and, therefore, 'Woo, the brain is vast enough to hold the entire Universe in itself!'

Neurons in the brain: 100,000,000,000 = 10^11
Connections per neuron: 10,000 = 10^4
Total connections: 10^15

That looks like a hell of a big number, but it's easily beaten by, say, two brains.

Okay, that's an easy way to beat that number, but the real WOO comes in when the WOO-vulnerable contemplate the number of permutations possible with those 10^15 connections, i.e. (10^15)! That's (10^15)x(10^15 - 1)x(10^15 - 2)... etc. Now that's a seriously big number. I mean, it's bigger than the number of particles in the universe, isn't it?
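Just to put a number on it, here's a quick back-of-the-envelope sketch (nothing beyond Python's standard math module, and the particle count is the usual ~10^80 estimate):

import math

connections = 10 ** 15   # 10^11 neurons x 10^4 connections each
# Count the digits of (10^15)! via the log-gamma function, so we never
# have to build the monstrous number itself.
digits = math.lgamma(connections + 1) / math.log(10)
print(f"(10^15)! has about {digits:.3g} digits")   # roughly 1.5e16 digits

So yes, the factorial doesn't just beat the ~10^80 particles in the observable universe; even the count of its digits does.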

Hmm, the trouble is that brains don't work like that. To see why, consider this little thought experiment.

Professor Woo has developed a demorphologizer. This is a cunning device for teasing apart all the individual cells in an organism to obtain a cellular broth. The broth is aerated to keep the cells alive while Professor Woo transfers them from the demorphologizer to his patented remorphologizer. This second device re-binds all the cells together to create a new permutation of the organism.

(Those of you who want to try this out at home can replicate Professor Woo's experiment with a large frog, a blender, a packet of gelatin, and a frog-shaped jello mold.)

'Oh', you say, 'I think I can see what's coming now.'

Yes, 'oh'.

Suppose Professor Woo is using one of those really big African bull frogs that's about the size of a human brain, and let's say it has about 10^11 cells in its body and each cell, on average, sits at the center of a 3x3x3 cube of cells. Each cell will therefore have an average of 26 neighbors. This means that there are 2.6x10^12 actual intercellular connections in our big ole frog. But that number is nothing compared to the number of possible permutations, i.e. (2.6x10^12)! That's another one of those awesome bigger-than-the-Universe numbers, eh!

But you see the problem with Professor Woo's logic, don't you?

In that vast space of frog-cell permutations, only a tiny number of arrangements will constitute a viable frog. The rest will just be frog-flavored jello. (Which, by the way, doesn't taste like chicken.) Professor Woo could spend the lifetime of the Universe de- and re-morphologizing frogs without ever seeing a functional frog hop out of the frog mold.

The brain is a hideously complicated network, but it is also highly constrained in the arrangement of neural connections. Even if you only swapped a few connections between neurons, instead of blending a whole brain, you could find that your heart only beats when you say the word 'sausage'. Swap a few different connections and you might find yourself instinctively leaping in front of passing cars. The scope for disaster in randomly rewiring the brain is rather sobering. The portion of 'brain space' that contains properly functioning brains is no more than a tiny, fuzzy point in the space of possible brains. Evolution 'beat' the odds that flummox Professor Woo's efforts by i) starting really small, and ii) always building on something that already worked. That's the reason we only took 4.5 billion years to emerge from the primordial frog soup.

Sunday, 10 July 2011

What is philosophy?

Philosophy is what we do when we can't do a proper job.

Time after time, the topics that philosophers chew over are appropriated by scientists, once the scientists get the right tools for studying the field in question objectively. At the moment, consciousness seems to be the topic that is slipping from the philosopher's grasp as more and more scientists start to work in the field, thanks to advances in the tools of neuroscience. The result is philosophers getting defensive about their value to the intellectual community.

If philosophical theories are doomed to extinction when science starts to get some traction in a subject matter, then what is the point of philosophy?

It seems to me that philosophy should be focused on looking just over the horizon of our objective knowledge. Given that we know A, B and C, then if D and E are true we should expect X, Y and Z. In other words, philosophy can generate hypotheses and predictions based on current knowledge.

On the other hand, what philosophy often seems to do is sit in an armchair gazing out the window at the horizon and drawing fantasy maps of the far side of the planet. The worst offenders are not those who forecast 'There Be Dragons', but those who conclude 'Here Be God'.

And yes, I am currently disillusioned by what I've seen pass for philosophy recently.

Monday, 20 June 2011

A Test for Desirism

Alf and Betty return home after a hard day's work gathering and scattering stones. Alf grabs a beer from the fridge and starts to unwind. Sitting in his armchair, he suddenly realizes that the house is too hot for comfort. He goes to the thermostat on the wall and sees that it is set at 22.5ºC. Just then, Betty comes up to the thermostat as well and declares that the house is too cool for comfort.

Now, Alf is the warm blooded sort and he 'desires' the house to be at 20ºC.

But, Betty is the cold blooded sort and she 'desires' the house to be at 25ºC.

If they leave the thermostat at 22.5ºC then both have their desires thwarted.

If they set it to either 20ºC or 25ºC then one of them is going to have their desire satisfied while the other's desire is thwarted.


How does Desirism solve their dilemma?

Tuesday, 24 May 2011

Desirism vs. Morality

Alonzo Fyfe has this to say about the criticisms of Desirism that he's received:

Many people reading my postings on desirism do so under the assumption that it must start with a set of fundamental moral commandments. With this in mind, they then search for an interpretation that is consistent with this assumption. They find their commandment, then set about to criticize it.

However, their fundamental assumption is wrong, which means that their interpretation is incorrect. Consequently, the theory they criticize is not the theory that I wrote.


This posting came a few days after I had a go at Desirism because it appears to validate any moral code, from the fanaticism of suicide bombers to the saintliness of Mother Teresa, and I can't help feeling that it was aimed at me, seeing as I was the only one saying anything critical at the time. The trouble with the above is that it entirely misses my critique of Desirism: my problem with it is that it has absolutely no moral 'commandments' in it.

Now, I’m not a stupid person, and neither is Alonzo. We’re both very smart individuals, and yet we have come to loggerheads over the basic nature of Desirism. How did this happen? I think the root cause is that we have very different conceptions of what Desirism stands for.

My impression, upon finding Alonzo’s podcast and blog, was that he was building up to presenting a moral code. His blog is called Atheist Ethicist: A view of right and wrong, good and evil, in a universe without gods. The subtitle was right up my street, and I expected him to say something about right and wrong, and good and evil. In other words, about morality. Unfortunately, over the months Desirism has turned out to be about something else: the mechanisms of human negotiations.

To me, morality as a subject should be discussing what is right and what is wrong, what is good and what is evil. In other words, morality is not about how people do behave, but about how people should behave. What Alonzo is doing has nothing to do with right and wrong, or good and evil, so to me it has nothing to do with morality. It’s about basic human cognitive and communicative mechanisms whereby people determine what they want and get their way in the real world. It’s about peer pressure.

Desirism is to Morality what the car is to driving.

Desirism is about desires, beliefs, and actions, i.e. the mechanisms by which human behavior is implemented. Morality is about right and wrong, good and evil. It's about rules for people getting by with each other: unjustified killing is evil, keeping promises is good, lying is bad, etc.

Cars are about wheels, engines, clutches and brakes, i.e. the mechanisms that implement a car. Driving is about how to govern road traffic so that accidents are avoided. Driving is about right and wrong, and good and bad. Some traffic rules are arbitrary - thou shalt drive on the left side of the road. Others are based on possible bad consequences, e.g. collisions at junctions if there isn’t a system for prioritizing traffic.

The car/driving analogy explains why I am disappointed with Desirism. It’s as if I had turned up for a course on advanced driving skills, only to receive a lecture on automobile engineering.

Alonzo doesn’t get what my problem is, and sadly, the vast majority of his commenters don’t get the distinction, either. His descriptions of how moral agents think and interact are all logically coherent, and Desirism is defined in a ‘predicate’ style that is bound to impress those who haven’t actually done any predicate calculus. I suspect that the real reason they don't understand my problem with it is that they’re all tacitly assuming that Desirism entails the standard, American Christian moral code.

I’ve driven in the US, the UK and all over Europe. The cars were pretty much the same apart from the arbitrary left- vs. right-hand drive thing. But India… OMG, that place is insane. The cars are just the same, but the rules for driving are bewildering. Coincidentally, I saw a phenomenal number of people with lower limb deformities while I was there. My hypothesis: Bangalore's anarchic driving code results in a lot of pedestrians and scooterists being hit in the legs by cars. But, is it cause or correlation? I’m not sure, but one thing is certain - I wouldn’t attempt to drive myself over there. Making a car go and making it though the Bangalore traffic are two completely different subject matters.

And so are Desirism and Morality.