Some Observations on the
Professional Development Process

By Peter A. Bromberg, Ph.D.
Natasha: "Boris, you got plan?"
Boris: "Behehe... Plan? Of course I got plan. Dey don't ever work, but I got one!"

The above exchange is likely to be familiar to anyone who was born before, say, 1960 and grew up with a television in the living room. Funny, because it speaks volumes about the professional software development process! Not funny, because so many IT departments in companies both large and small really don't have a plan, and even when they are enlightened enough to realize they should have one, the excuses for not implementing one seem to abound: "We don't have the money", "We're not big enough", "We're understaffed", "It'll take too much time", ad nauseam.

The following are some of the observations and steps I've been exposed to during my short happy life as a software developer and manager at several different organizations. This is by no means complete, but hopefully it will help spark some additional ideas for readers, and possibly some comments on our forums as well. Take Boris's advice - have a plan. Just make sure it's a plan that will work, too:

1) Most successful projects have a functional specification, a technical specification, and a project plan. While these can and often do change, they should be there at the very beginning and they should be in writing. That's not to say you can't have a successful project without these; just think of them as your "map in the glove compartment". It's both comforting and productive to be able to refer to your roadmap (and even change it when necessary) to ensure that you are really going to get where you want to be.

2) During the coding process, developers are typically asked to "switch off" and work with other people's code to do code review and/or bug fixes, trying to find things that the first developer may have missed. This also gives developers a chance to become familiar with parts of the application that they are not specifically coding.

Many organizations make this type of code review mandatory before even allowing an application to go to QA/Test. The reasoning is not only to facilitate finding errors in the coding process, but also to provide better redundancy when somebody is out sick or on vacation. That guy in the corner who's been there for 10 years and thinks he's "slicker 'n snot on a doorknob" needs to have somebody new take over his code for a week! And if he won't grow up and become a team player, make it happen to him twice a month!

3) Communication: While developer lead on a large financial project, I suggested to management that we hold a 15-minute, non-mandatory developer meeting every single morning at 8:45. The proviso was that we would go around the table (usually 12 to 16 people) and each person would take about a minute to talk about what they were doing and any problems they were having. If you went over about 2 minutes, the moderator was required to ask you to take your discussion offline so the rest of the participants could finish. The main requirement was that the meeting would not be allowed to go on beyond the 15 minutes allocated. The results were astounding. People gained insights into what others were doing and how it might affect their work, and inter-group communication was vastly enhanced. The project came in ahead of schedule and under budget, partly because of the enhanced communication that was taking place. If you want to know more about what I think of meetings, I've covered that in an older article.

4) Additionally, documentation should be required. Advanced programming environments such as Visual Studio.NET have code comment facilities that generate XML from classes, which can be converted into full-featured CHM help files with free tools like NDoc and HTML Help Workshop. But there should also be documentation about the business process -- the logic flow through the application, what each method does, what databases and stored procedures are called, what they do, and so on. A new developer coming onto the team should be able to be handed a copy of this documentation to study, and come back the next day with a pretty good idea of what the application does and exactly how it does it. It doesn't take long to produce this kind of documentation, but you'd be surprised at how few organizations ever take the time to do it. This kind of documentation effort, once it's organized, normally takes less than 5 percent of the total development time. It needs to be right there in the project plan with the proper number of hours of budgeted resource allocations, just like everything else. Documentation is mandatory. If your company is doing an ISO 9000 project, documentation is a contractual requirement for certification.
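The .NET facility described above uses XML "///" comments; the analogous mechanism in Python is the docstring, which tools like pydoc can render into browsable help. A minimal sketch of the kind of method-level business documentation I mean (the function, database, and stored procedure names here are invented for illustration):

```python
def post_payment(account_id, amount):
    """Apply a payment to a customer account.

    Business process: validates the amount, then calls the
    (hypothetical) usp_PostPayment stored procedure against the
    Accounts database, and returns the amount applied.
    """
    if amount <= 0:
        raise ValueError("payment amount must be positive")
    # ... the actual database call would go here ...
    return amount  # placeholder return for this sketch

# Documentation tools (pydoc here, NDoc for .NET XML comments)
# can harvest these comments into full help files.
print(post_payment.__doc__.splitlines()[0])
```

The point is not the tool but the habit: every method carries a plain statement of what it does and what it touches, so the generated documentation answers a new developer's questions without a meeting.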

5) A typical software development/QA scenario is that developers and managers sit down with QA/testing people -- while the software is still being developed, not afterward -- and together write up a detailed test regimen. This QA/test regimen is a requirement of the software development process, and is included as part of the specifications for the project. Time is budgeted in the project plan for these steps. Does your organization have this? If not, why not? Is your software so great that it doesn't even need to be tested before deployment?

In smaller organizations (2 to 10 developers) this is often done in an Excel spreadsheet, with tests grouped by major section and each test specified and described in a row or rows within that section. Each section tests a specific "area" of the application, and the regimen is designed to be progressive: if an early section of the testing fails, there is no reason to go further until those tests have been fixed.
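That spreadsheet layout maps onto a very simple data structure. Here's a hedged Python sketch of the progressive idea -- sections run in order, and a failing section stops the run (the section and test names are invented for illustration):

```python
# A regimen is an ordered list of (section name, tests) pairs;
# each test is a (description, callable-returning-bool) pair.
def run_regimen(regimen):
    results = []
    for section, tests in regimen:
        section_passed = True
        for description, test in tests:
            passed = bool(test())
            results.append((section, description, passed))
            section_passed = section_passed and passed
        if not section_passed:
            break  # no reason to go further until this section is fixed
    return results

# Hypothetical example: login must pass before order entry is tested.
regimen = [
    ("Login", [("User can log in", lambda: True)]),
    ("Order Entry", [("Order saves to database", lambda: False)]),
    ("Reports", [("Monthly report renders", lambda: True)]),
]
for section, desc, ok in run_regimen(regimen):
    print(f"{section}: {desc} -> {'PASSED' if ok else 'FAILED'}")
```

In this run, the Reports section never executes because Order Entry failed -- exactly the behavior the spreadsheet convention is meant to enforce by hand.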

Some organizations use commercial software designed to track the build/test/bug-fix process; some write up their own. There can also be a way for end users to report errors and attach screen captures to their reports so that developers can better understand what it is they need to fix.

6) Each time a test build is finished, it is deployed to the TEST environment. By definition, the TEST environment is maintained as a complete and exact copy of the PRODUCTION environment, but it never touches production data, code, databases, or machines in any way. You should be able to run stuff in TEST all day long without worrying about affecting the regular business process. The TEST environment should never be allowed to be commandeered -- for example, by the Sales department for doing demos to potential customers.

Testers are required to go through the test regimen line by line, perform each test, record whether it succeeded or failed, and make appropriate notes in the bug tracking software.

For any tests that fail, the tester creates a bug ticket in the bug tracking software. This is assigned back to the developers by the managers based on who wrote the code or other considerations such as their familiarity with the code or how busy they may be at that particular time.

7) Each bug ticket that has been corrected by a developer is then reassigned back to TEST for the QA person or tester to revalidate the test(s).
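Steps 6 and 7 together describe a small state machine for a bug ticket: it moves from open, to assigned, to fixed, back to QA for retest, and either closes or reopens. A sketch (the state names are invented; real bug trackers each have their own):

```python
# Allowed bug-ticket transitions in the test/fix/retest loop.
TRANSITIONS = {
    "OPEN":     {"ASSIGNED"},            # manager assigns to a developer
    "ASSIGNED": {"FIXED"},               # developer corrects the code
    "FIXED":    {"RETEST"},              # ticket goes back to QA
    "RETEST":   {"CLOSED", "ASSIGNED"},  # passes, or reopens to a developer
}

def advance(state, new_state):
    """Move a ticket to new_state, refusing illegal jumps."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot go from {state} to {new_state}")
    return new_state

# A ticket that fails its first retest and passes the second:
state = "OPEN"
for step in ["ASSIGNED", "FIXED", "RETEST",
             "ASSIGNED", "FIXED", "RETEST", "CLOSED"]:
    state = advance(state, step)
print(state)  # CLOSED
```

Whatever tool you use, the discipline is the same: a ticket can never jump straight from "fixed" to "closed" without QA revalidating it.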

8) Some organizations also write specific NUnit tests, or Mercury LoadRunner or Homer (or App Center Test) tests, in addition to the above process, in order to measure the scalability and behavior of an application under load. Database tests are also conducted, perhaps using SQL Profiler and index tuning wizards. These tests are also conducted by the QA department, sometimes in collaboration with developers. In organizations too small to have a real "QA department", there is usually one person whose job it is to become familiar with the application under development and do all the testing. But this person is normally not one of the developers on the project (unless, of course, your organization is like, "really" small).
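NUnit tests are written in C#; the same style of automated unit test in Python's standard unittest module looks like the sketch below (the business function under test is invented for illustration):

```python
import unittest

def calculate_interest(principal, rate):
    # Hypothetical business function under test.
    if principal < 0:
        raise ValueError("principal cannot be negative")
    return principal * rate

class InterestTests(unittest.TestCase):
    def test_simple_interest(self):
        self.assertAlmostEqual(calculate_interest(1000.0, 0.05), 50.0)

    def test_negative_principal_rejected(self):
        with self.assertRaises(ValueError):
            calculate_interest(-1.0, 0.05)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(InterestTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Because tests like these are code, they can run on every build -- which is what makes them useful for the build/test/bug-fix cycle described above, where the manual regimen only runs when a tester sits down with it.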

9) When the entire series of tests, front to back, is marked as "PASSED", the project manager then reviews and gives his OK to deploy to production. A labeled, segregated build is set up in source control, and the build person does the entire deployment from this build to production. By the way, many applications also include a deployment project (installer) whose sole purpose in life is to make it easy for end users to install the product. This portion also goes through the development cycle, and is tested as well.

10) A production test or tests are run and if passed, the build is marked as complete and everybody goes out for beers.

In complex applications, it is advisable to keep the build/release cycle short -- usually 2 to 3 weeks. This helps avoid complex problems that create a lot of unnecessary stress and work for both developers and managers.

I'm sure you can think of things I've left out. Feel free to add your comments and suggestions at our forums.

Peter Bromberg is a C# MVP, MCP, and .NET consultant who has worked in the banking and financial industry for 20 years. He has architected and developed web-based corporate distributed application solutions since 1995, and focuses exclusively on the .NET Platform.