There's a common misconception in the software industry: that people are rational thinkers.
When you work in IT, you work with systems that are logical. If you put X in, then you get X' out. There's a cold logic to this that is self-evident given the nature of silicon. It only seems reasonable, therefore, that software should be developed in the same way, and so the industry has put its faith in methods that define the processes and products of the software development activity.
Back in the day, big process definitions like SSADM and PRINCE were conjured up to turn the creative activity of making software into a domesticated beast that would deliver quality results in predictable timescales. The people behind the methods saw the chaotic, personalized working methods of the time and figured that what they needed to do was turn software engineering into a production line. Every step of the process could be defined as inputs, transformations and outputs. The idea, it seemed, was that you input a client's requirements at one end of the factory, and the finished software would fall out of the other. All the steps in between would be semi-automated, with manual workers operating machine stations along the way, each of which would perform some transformation on the product as it whizzed along the conveyor belt towards the door.
When the factory approach didn't work, the methodologists sought to break the manufacturing line down into ever smaller and more controlled steps. Different methodologists broke the process down into different micro-steps. Grady Booch, Jim Rumbaugh and Ivar Jacobson were three prominent personalities at the time when object-oriented methods were undergoing their Cambrian explosion. The reason they're remembered, when the others are forgotten, is that they had the wits to team up and present a unified front in the method wars. The result of their alliance was the Unified Modeling Language (UML), and it was the worst thing that ever happened to software engineering.
From the perspective of Booch, Rumbaugh and Jacobson, the UML was a roaring success. It sold like hot cakes. Instead of having to choose between methods, users could simply adopt UML because it incorporated all methods. With UML you could slice and dice a system pretty much any way you wanted to. The trouble with UML was that... you could slice and dice a system any way you wanted to. Instead of having to choose a specific method, developers suffered the paralysis of too much choice. UML was all about unifying all the diagramming types out there into a single standard. Objects would now be drawn thus, end of problem. Unfortunately, the UML had nothing useful to say about what an object was or what constituted a good way of partitioning concepts into objects.

During the late 90s and early 00s, I sat through innumerable presentations of models that were simply disasters in the making. A couple are quite memorable. One had about 100 subtypes hanging off one supertype that, when printed out, covered four conference tables, yet the print was so small that I had to squint to read it. Another had a mere four objects in it, but one of those objects had over three hundred states behind it, and the developers wondered why the tool took so long to load it.

Credit where credit is due: Booch, Rumbaugh and Jacobson realized that the UML was just a drawing standard, and they addressed the lack of process guidance behind it by publishing the Rational Unified Process (RUP). But the RUP, it turned out, had exactly the same fatal flaw built into it that the UML had: the process it presented was in fact the cross-product of all the processes out there in software engineering land. The RUP allowed you to use any process you could think of by simply 'tailoring' its contents. Users were, once again, faced with the paralysis of choice.
When you get right down to it, the UML/RUP won the war, but lost the peace. Instead of helping software engineers to think about software development, it delivered a screaming Tower of Babble, and it was up to each engineer to design his own method using the syntax of the UML/RUP.
The UML/RUP were all syntax, and no semantics. How did such a useless 'method' win the war?
The answer is simple: people like options.
Digital photography is a massive pastime these days. Millions of people take billions of pictures every day. For many of them, photography presents a challenge that they enjoy, and a huge industry has arisen around the problem of how to take better pictures. Magazines catering to those with the shutter bug crowd the shelves of supermarkets and newsagents, and scores of books are written every year telling the amateur photographer how they can improve their photography. A sizable proportion of them are dedicated to explaining how the latest camera with the X-megapixel sensor and 200-plus marketing features produces 'better' images than the outdated one you've got. Go to a camera club meet-up and you'll find yourself surrounded by people talking about lens sizes, sensor sizes, metering options and post-processing software. Look at their images, though, and you'll generally be underwhelmed. These people are all kit, and no clue. But, they think, if I buy the next camera with the bigger sensor, the faster processor and the facial recognition option then I'll take better pictures. Meanwhile, some teenager out there is shooting stuff on a $30 Holga that has the one thing none of the kit-chasers is even thinking about: soul.
Method developers are like the digital camera manufacturers in that they are continually seeking to automate the creative process of engineering software. If they can just manage to lock down the development process tightly enough then all the difficulties of engineering systems will go away. Okay, that didn't happen with the last revision, but if we get a standards committee together and extend the scope of the UML that little bit more then everything will be fixed. In other words, when it comes to methods, more is more.
Run-of-the-mill software developers and project managers like more-is-more. More is going to take the risk out of their work. More is going to deliver systems on time and to budget. More is going to ensure the quality of their systems, too. More method equals more software, right?
Run-of-the-mill software developers and managers are like the camera kit-chasers. All kit, and no clue. To see why, we need to take a look at the history of software development and consider the one thing that the methodologists are seeking to eliminate from the development process: the human mind.
Think about these milestones in the development of programming and software engineering: Dennis Ritchie's C language and Unix operating system, Bjarne Stroustrup's C++, Kristen Nygaard and Ole-Johan Dahl's Simula (the first object-oriented language), James Gosling's Java, Brad Cox's Objective-C, John W. Backus' Fortran, Bill Gates' MS-DOS, Andy Bechtolsheim's first Sun workstation, E.F. Codd's 3rd normal form for relational databases, Tim Berners-Lee's HTML, Craig McClanahan's Struts.
All these technical advances sprang from the personal, creative visions of very specific individuals. In other words, none of them were designed by committee or developed using a method. Amazing software is developed by amazing people, and yet the software engineering methodologists seek only to 'improve' software engineering by taking the people out of the equation. They seem to think that if they can lock the process down tightly enough then even a monkey could deliver amazing software.
The trouble with the factory metaphor for software development is that factories are dumb. Factories are set up *after* someone has designed a product, not *as* they are designing it. All the difficult engineering and design decisions are made long before the shop floor is tooled up. Factory-style software engineering is like delivering raw ingredients to the goods-in door and expecting the ladies and gentlemen on the shop floor to figure out how to deliver a finished product at the goods-out door. Factories create, but they are not creative.
People are creative. Some people, though, are amazingly creative while others are not, and the difference between the two is down to the way they think.
As I said at the beginning of this post, there is an assumption out there that people are logical, rational thinkers, especially in IT, but this assumption ignores the fundamental architecture of the human brain.
The human brain has a pattern-matching, and pattern-forming, architecture. By default, it will find the best pattern it already has and fit the data to that: http://lesswrong.com/lw/7mx/your_inner_google/
Matching patterns is instinctive; forming them is hard work. You *know* this to be true. Learning to, say, play a guitar is very hard work to start with. Initially, you have great difficulty getting your fingers to go where you want them to go. The only way to make them comply is to consciously place them where you want them. Through repetition, though, you gradually rewire your brain and eventually your fingers seem to find their places without effort. Thinking rationally is no different.
Deep thought is a matter of seeing past the surface of things, and the misconceptions surrounding them, to the fundamental patterns that underlie them. Johannes Kepler, presented with Tycho Brahe's raw data on the orbit of Mars, looked past the contemporary dogma that orbits had to be perfect circles to find that an elliptical orbit was the best model that fit the data. He paid attention to the data, and he questioned existing theory, to come up with a new theory that still stands today.
Run-of-the-mill developers look to their existing patterns to match software engineering problems to their solutions. This is not necessarily a bad thing. Many software problems have good existing solutions. Extant programming languages and operating systems, for example, solve many of the problems of how to instruct a computer and co-ordinate its activities. Up at the application level, however, things aren't so easy. Different businesses do similar things in different ways, so there is no one-size-fits-all, once-and-for-all solution to, say, the problem of how to build an accounting system. What's more, as a professional software engineer, you are expected to be able to tackle a new application problem from scratch. As input, you have a lot of vague English statements from potential users and maybe a handful of documents, screens and tables used by the current system. It's awfully tempting to try and map these inputs to some diagrams in a method and expect the software to write itself, but the history of software development is littered with projects that took this approach and failed. The highest failure score I've personally come across was 750 consultants working for three years before it was realized that the entire project had to be started again from scratch. That's at least a £180m write-off. It was a UML/RUP-based project.
I got an insight into what goes wrong when a method is adopted when I came in as a consultant on a 60-man project. The team was using a rigorous method and had all been trained in its use, but things just weren't getting delivered. To start with, I collected weekly metrics on the models that the team were building. By going through all the revisions in the CASE tool's database I found that their object populations had started off small, built up over the weeks, and had roughly plateaued by the time I arrived. This is just what you'd expect as they analyzed their problem domains and built up an understanding that was represented by the models. But, despite the models' apparent stability, the engineers were always reporting that they were 90% complete. What was going on?
Now, the CASE tool allocated unique identifiers to each object in a model so, instead of counting the classes in the models, I started tracking these identifiers. It turned out that in any given week as many as 50% of the identifiers in the models could disappear, only to be replaced by a similar number of new identifiers. The engineers were churning over the elements in their models. Each week they'd change their minds about something, delete a load of model elements, and then create a new bunch to replace them. The result - model churn - is what happens when you expect the method to solve the problem for you. Imagine if Kepler had stuck to the theory that orbits are only ever circular. Each time he tried to define the 'right' circle, the data would have contradicted him. So he would try another circle, then another, and so on. This churning is a sure and certain sign that you don't have an adequate theory of the system you are trying to understand. Find the right theory and, hey presto, everything falls into place.
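The measurement itself is trivial. Here's a rough sketch of the kind of week-over-week diff involved (the identifiers and function are hypothetical, not the actual queries I ran against the CASE tool's database). Notice how the class count can look perfectly stable while most of the model has, in fact, been thrown away and redone.

```python
# Hypothetical sketch: measure week-over-week churn of model element identifiers.
# Each snapshot is the set of unique IDs the CASE tool assigned to model elements
# at the end of that week.

def churn(previous: set[str], current: set[str]) -> dict:
    """Compare two weekly snapshots of element identifiers."""
    deleted = previous - current    # elements that vanished since last week
    added = current - previous      # brand-new elements that replaced them
    surviving = previous & current  # elements that carried over
    return {
        "deleted": len(deleted),
        "added": len(added),
        "surviving": len(surviving),
        # Fraction of last week's model that was thrown away and redone.
        "churn_rate": len(deleted) / len(previous) if previous else 0.0,
    }

# Example: the element count looks stable (5 -> 5), yet 60% of the model was replaced.
week_1 = {"OBJ-101", "OBJ-102", "OBJ-103", "OBJ-104", "OBJ-105"}
week_2 = {"OBJ-101", "OBJ-102", "OBJ-201", "OBJ-202", "OBJ-203"}
print(churn(week_1, week_2))
# {'deleted': 3, 'added': 3, 'surviving': 2, 'churn_rate': 0.6}
```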
As a software engineer, your job is to construct a theory that explains the problem domain. My advice is to never use the tools of a method during this phase. Every problem has its own unique theory. The method you are using is a theory about the structuring of software, and as such has nothing to say about the problem you are facing. You need to find a way of thinking about the problem that suits the problem. You need to find a notation that expresses the concepts in the problem domain in a way that is elegant and intuitive, and that allows you to reason about the problem. If you are building a theory about a billing system with price changes, discounts and limited-time offers, then you might use a timeline to draw scenarios. If you are processing text then you might develop a grammar to represent the language you expect to encounter. If you are designing the UI for a CASE tool then you might draw screenshots, label the components and annotate them with what the user can do to them. If you are designing a signal processing chain then you might use a data flow diagram to show how the signal is passed from one stage to another. The point is to build a theory of the problem using the right style of thinking and a notation that supports that thinking. The notation of your method is seldom the right tool for thinking about the problem. Going meta - thinking about your thinking - is a great way to solve the problem.
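To make the billing example a little more concrete, here's a minimal sketch of what a problem-specific 'notation' might look like once the theory is in place. Everything in it is hypothetical - the names, prices and dates are invented for illustration - but the point stands: the theory is just a timeline of dated price changes and limited-time discounts, and reasoning about a scenario means walking that timeline, with no method diagrams in sight.

```python
# Hypothetical sketch: a timeline-based "notation" for the billing example.
# The theory: a customer's charge on any given day is the latest effective
# price, reduced by whatever limited-time discounts are active that day.

from dataclasses import dataclass
from datetime import date

@dataclass
class PriceChange:
    effective: date        # price applies from this date onwards
    monthly_price: float

@dataclass
class Discount:
    start: date            # inclusive
    end: date              # inclusive
    fraction: float        # e.g. 0.2 for 20% off

def price_on(day: date, changes: list[PriceChange], discounts: list[Discount]) -> float:
    """Price for a given day: latest effective price, minus any active discounts."""
    base = max((c for c in changes if c.effective <= day),
               key=lambda c: c.effective).monthly_price
    for d in discounts:
        if d.start <= day <= d.end:
            base *= (1 - d.fraction)
    return round(base, 2)

# A scenario drawn straight off the timeline: a price rise in June,
# overlapped by a one-month 20% offer.
changes = [PriceChange(date(2024, 1, 1), 30.0), PriceChange(date(2024, 6, 1), 35.0)]
discounts = [Discount(date(2024, 5, 15), date(2024, 6, 15), 0.2)]

print(price_on(date(2024, 5, 1), changes, discounts))   # 30.0  (base price, no offer yet)
print(price_on(date(2024, 6, 10), changes, discounts))  # 28.0  (new price with 20% off)
```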
Methods, when used as thinking tools, do more harm than good. The rise of Agile and Extreme Programming approaches a few years after the UML won the method wars is testament to that. Unfortunately, these lite methods are not that scalable. Fortunately, theories are.
If you go back over the list of computing innovations I gave above, you see that each one of them was underpinned by a theory. That theory was simple and elegant, and once it was formulated its implementation was a matter of turning the handle. Furthermore, because the theory was simple and elegant it was easy to communicate it to other people. Get the theory right and people will just 'get it', and the model or code will write itself. If you're lucky then you've had this experience before. Remember a time when the code just flowed out of your fingers, when build after build just worked and the next step was obvious? Think back to that and I'll bet it was because you had a solid theory behind everything you did. Now remember a time when stuff just didn't work, when you had to backtrack and rewrite, when you finished every day in a foul mood, and I'll bet that was a time when you expected the doing to give you the understanding. Methods have their place, and that place is after the thinking, not instead of it.