
What is in a code review? Here is how Calcey Technologies does it.

Code reviews are an important recurrent gatepost in agile software development, and a good engineering practice we follow at Calcey. As most software development teams know, frequent code reviews help contain poor code quality, such as inefficiencies in unit-level design and lapses in adherence to coding standards. Historically, the practice of code reviews existed in methodologies like RUP as both informal code walkthroughs and the more formal Fagan inspection. At the onset of the agile revolution, code reviews were re-branded as peer reviews (which actually meant peer code reviews), a necessary ingredient for building stable software in an evolving fashion. The bottom-line justification for the time spent on code reviews is that they are essential if we are to end up with a scalable and extensible piece of software, as opposed to a hack job that is both unstable (difficult to scale) and impossible to extend later on for emerging market needs.

I’d like to outline our approach to code reviews, and how we conduct them. We have a rule of thumb which developers and Scrum masters use to initiate code reviews – any new release to a test environment must be preceded by one. This simple rule gives Scrum masters the flexibility to plan the review, but binds them to conducting it within a given development sprint. Our review setting is that of an informal workshop, where the developer concerned projects the code on screen and walks through sections of it, prompted by the reviewers. The review team consists of an architect and at least one other senior developer from outside the project under review, with competency in the programming language and frameworks concerned where possible. Other members of the project team are welcome to listen in and give their feedback. The Scrum master records the code defects in the task backlog and assigns them to the developer(s) concerned. The duration of a code review session varies from 30 to 90 minutes, depending on the scope of work accomplished during a given sprint. We take our time, as faster is not better when it comes to an effective review; we inspect at most 300 lines of uncommented code per hour.
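The 300-lines-per-hour ceiling doubles as a quick planning aid for scheduling a session. A minimal sketch (the function name and the clamping to our 30–90 minute window are our own illustration, not a standard tool):

```python
def review_minutes(loc, max_loc_per_hour=300, lo=30, hi=90):
    """Estimate a review session's length in minutes from the lines of
    (uncommented) code under review, clamped to the 30-90 minute window."""
    minutes = loc / max_loc_per_hour * 60
    return max(lo, min(hi, minutes))

# A 300-line sprint increment fits the one-hour pace exactly.
print(review_minutes(300))
```

Anything that would blow past the upper clamp is a hint to split the work across more than one session, rather than rush a single review.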

The reviewers keep an eye out for all the typical code vulnerabilities during the review. We begin with readability, style and conventions – there should be no code that an experienced outsider cannot understand after a brief explanation by the developer concerned. If there is, the code is likely to be either poorly structured (design defects) or poorly presented (style defects), or both. Calcey generally follows the industry-accepted coding style conventions for the major programming languages, such as the C# coding conventions from Microsoft. Unit tests are often a good place to assess the stability of newly implemented functionality, and the obvious presence of stringent unit tests can reduce the subsequent line-by-line review effort. We then move on to trapping major issues in earnest, checking for algorithmic inaccuracy, resource leakage, exception propagation, race conditions, magic numbers and suchlike. There are several online sources that closely portray the Calcey code reviewer’s mindset, such as this checklist from projectpatterns.org.
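To make two of these defect categories concrete, here is a hypothetical before-and-after sketch (in Python for brevity; the function names, tax rate and log file are invented for illustration) of the kind of fix a review typically produces for a magic number and a resource leak:

```python
# Before review: a magic number and a potential resource leak.
def tax_before(amount):
    f = open("audit.log", "a")       # leak: never closed if write() raises
    f.write("taxing %s\n" % amount)
    return amount * 0.15             # magic number: what does 0.15 mean?

# After review: the constant is named and the handle is managed safely.
SALES_TAX_RATE = 0.15  # invented rate, for illustration only

def tax_after(amount):
    with open("audit.log", "a") as f:   # closed even if write() raises
        f.write("taxing %s\n" % amount)
    return amount * SALES_TAX_RATE
```

The reviewed version costs nothing at runtime, but tells the next reader what the constant represents and guarantees the file handle is released on every path.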

One of the biggest benefits of a workshop-style code review is that the authors of the code themselves spot defects and improvements, as a direct result of trying to explain how the code works to reviewers who may not be fully acquainted with the design. In situations where pair programming is not feasible, the code review mitigates the risk of “coding in silos” to a great extent.

Having said this, we also do our best to automate humdrum quality checks. Our .NET based app development projects are integrated with StyleCop (downloadable from CodePlex), to check for style issues like custom naming conventions or compulsory XML documentation comments. We also advocate enabling Code Analysis in Microsoft Visual Studio to warn us of potential code defects at compile time, from the viewpoint of the Microsoft .NET Framework Design Guidelines. Apple iOS development comes with its own set of code analysis tools – we use Instruments to profile our code at runtime and identify memory leaks, a common pitfall when programming with Objective-C.

Code review metrics such as code coverage and defect count are gathered from the individual reviews by the Scrum masters, and submitted to our principal architect for statistical analysis, strictly to improve the effectiveness of the review process (and not for finger-pointing). Junior developers can hope to learn a lot from well-conducted code reviews, not only about the specific technologies and design principles involved, but also about working together as a team to engineer a quality product. After all, our aim is to practise what Jerry Weinberg described nearly half a century ago as “egoless programming”.

“The objective is for everyone to find defects, including the author, not to prove the work product has no defects. People exchange work products to review, with the expectation that as authors, they will produce errors, and as reviewers, they will find errors. Everyone ends up learning from their own mistakes and other people’s mistakes.” – Jerry Weinberg, “The Psychology of Computer Programming”, 1971


Haven’t yet been able to adapt Scrum to match the ground-realities of your business? Find out how we did it

Project management is a crucial weapon in the arsenal of any software development outfit. It’s probably the most-discussed competency in software engineering, judging by the sheer volume of scholarly papers, conceptual models, blog articles and entire schools of thought that have been churned out on this subject over the past two decades. We’ve seen process frameworks like Waterfall, SSADM and RUP come and go, and a shift from centralized delivery responsibility resting on the service provider towards distributed ownership across an extended team inclusive of the client. We live in a world of “Agile” software development today, a zeitgeist of management thinking based on keeping processes to the bare essentials, building products incrementally and eliminating humbug within teams. We have even seen the formal “role” of the project manager (stereotyped as the big, bad bogeyman of the team) disappear within the modern agile paradigm.

Call it what you like – a person, or the collective reasoning within a team – we find that effective project management remains an essential ingredient of “getting the job done”. Moreover, project management success in software development engagements often remains elusive. I’d like to summarize our own successful methodology at Calcey, and go on to explain a few of the deeper lessons we learned through our collective management experience, for the benefit of our future clients.

We follow a project management methodology that is a derivative of Scrum, one that has benefited from long years of practical experience in delivering projects of varying sizes and technical complexities. Our conceptual framework is fairly simple. At the early stage of pre-sales negotiation, we agree with our clients to form a single team with joint responsibility for the project. Whilst in theory we are not supposed to estimate the end-to-end scope of work in Scrum, in practice we have found it impossible to find a client who would agree to an entirely open budget and no indicative calendar timeline for building a product. So an initial ballpark estimate is made. This is purely for budgeting purposes: to provide the client with a broad feel for the costs involved, and to determine the resource bandwidth to be deployed in order to meet a very approximate calendar schedule. This sort of budget is made against the broad set of features that the product comprises, as understood at the inception of the project. Once a project is contracted, we move forward in earnest to apply our Scrum model.

A Calcey Scrum Master’s life revolves around their project backlog. They manage both the product’s roadmap of features and the specific tasks (or bugs) for the current sprint via an enterprise backlog app such as JIRA, TeamworkPM or Basecamp. JIRA offers the greatest flexibility in managing the complete life-cycle of a development project, but both TeamworkPM and Basecamp have proved to be interesting alternatives for managing smaller-scale engagements. In any case, it is not the choice of tool itself that we found important, but rather the diligent use of the backlog as a concept for task management. Handwritten backlogs diligently maintained in the corner of a whiteboard tagged with the words “don’t erase” seemed to work better in some situations!

We plan development for a time-boxed sprint, whose duration is usually a fortnight for technologies we are well experienced in, and a month for greenfield technologies or projects of high engineering complexity. The duration is decided at the initial sprint planning, where we estimate what could be achieved in the first sprint within the budgeted engineering bandwidth. Once decided, we stick to this time-box throughout the lifetime of the project. The outcome of any given sprint is of course a release of working software – working, but not bug-free or complete in functionality. As the sprints progress, the software “emerges” as a viable product for launch. A lot has been said in the industry about the generic form of the Scrum methodology, so I’d like to move on to a few specific lessons we learned at Calcey through our experience. A snapshot of the recurring activities we practice is shown below.

The first and biggest lesson learned for those of us who were new to Scrum was that, unlike any other methodology, Scrum is an explicit activity like coding or testing. We scan the client horizon as well as our own engineering backyard each morning via the daily stand-up meeting, update our project backlog, and get into action to follow up on the individual tasks that need facilitation. We found that if we have a “living” task backlog that gets updated without fail each day (with dates, milestones, etc.), we can use it as the vehicle to drive our work: to psyche up the team, provide expert external assistance or reset client expectations. So the Scrum Masters don’t “go to sleep” when not at sprint planning or the daily stand-up meeting – on the contrary, they work hard each day to facilitate the resolution of issues arising from the stand-up.
The effort required for effective sprint planning is not trivial, as we learned through experience. In theory, the estimate given at sprint planning (“I’ll finish task X within the next two weeks”) is considered sacrosanct. This ought to be so, because cascading task “spillovers” into subsequent sprints could buckle the whole paradigm of time-boxed incremental achievement, and sprint velocities could take a nosedive. So we found it worthwhile to invest an entire day in sprint planning. This day is not counted into any given sprint, and its principal goal is to freeze a list of tasks to be completed during the upcoming sprint. A full day provides enough time for the team to mull over the complexities of the tasks and break them down into smaller goals if necessary. The sprint planning meeting itself assumes a sort of “workshop” format where folks can pop outside for some quick R&D and return with better knowledge of the complexities of the work involved. Ultimately, everyone walks away to implement what they consider their own sprint plan, approved by the product owner.
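The spillover mechanics described above can be made visible with a toy backlog model (the task names and story-point estimates below are invented; this is a sketch of the concept, not our planning tool): the sprint is filled in priority order up to the velocity budget, and whatever does not fit is carried over explicitly rather than silently squeezed in.

```python
def plan_sprint(backlog, velocity):
    """Greedily fill a sprint up to the velocity budget (in story points).
    backlog is a list of (task, points) pairs in priority order; tasks that
    do not fit the remaining budget spill over to the next sprint's plan."""
    planned, spillover, budget = [], [], velocity
    for task, points in backlog:
        if points <= budget:
            planned.append(task)
            budget -= points
        else:
            spillover.append(task)
    return planned, spillover

# Hypothetical backlog for a two-week sprint with a velocity of 10 points.
backlog = [("login API", 5), ("password reset", 3),
           ("audit trail", 4), ("dark mode", 2)]
planned, spillover = plan_sprint(backlog, velocity=10)
```

Making the spillover list a first-class output is the point: a growing list from sprint to sprint is the early warning that velocity has been overestimated and the time-box is buckling.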
Another significant lesson was overcoming the common problem of trivializing testing. Updating automated unit tests and manual test plans, smoke testing, regression testing and bug fixing all take up a considerable percentage of the time needed to implement a given piece of functionality. Moreover, contrary to the idealistic belief amongst agile gurus that all competent software engineers are also competent testers (or ought to be), we find in practice that the eyes of a person with a strong end-user perspective are essential to ensuring a healthy demo at the end of the sprint. So we found it useful to divide a given sprint timeline conceptually into a “new dev” period and a “testing and bug fixing” period, at sprint planning itself. This helped us reduce the otherwise frightening tendency of “bug pileup” that so often happens in Scrum projects – where new development forges ahead of bug fixing, causing instability in the releases as time goes by.
The use of lifecycle automation tools was an immense help to us, and we consider them part and parcel of our agile methodology. Anything useful – build automation tools like CruiseControl, source repositories like Git, test scripting frameworks like Selenium and code analyzers like FxCop – was absorbed into our development framework.
The single hardest challenge, though, was making sure that the client representatives became an integral part of the team, and that they felt inherently responsible for the incremental development in a hands-on fashion. Success or failure of a given sprint is declared by the product owner immediately after the sprint demo; this is one important reason why the product owner (or his or her competent representative) is part of the team. This helps us prevent a situation where a client decides the product under development has veered radically off course after (say) a dozen sprints. If such a situation does arise, it basically tells us that the fundamental paradigm of dealing with a complex problem in small increments has not been adopted by the client. There are many ways to convey the message of joint ownership and incremental assessment, and in practice we have found that the most effective is to discuss this very problem upfront, prior to undertaking a new project. We usually stress that the success or failure of each sprint must be determined at the next sprint planning, and adjustments must be made locally, at the scope of each sprint. These adjustments include “management decisions” like filling skill gaps or increasing the engineering bandwidth.
Let’s face it, software development is not comparable to dam building – a common misconception amongst management types. Although there is definite commonality in the broader values required of the team, like honesty, dedication and professional competency, the fundamental drivers of the work are not exactly the same. Intellectual effort taxes both our left and right brains equally, with plenty of logical reasoning bootstrapped by flights of inspiration and lateral thinking. This “mind game” of software project management requires a management methodology that fosters creativity, whilst compensating for common human failings like poor memory, to drive it forward. Scrum is just such a methodology, and has proved highly effective for us at Calcey.