We often hear about the benefits that synergy brings to human endeavours in fields such as science, politics and sports. In the corporate world, we hear of teams working together as cohesive groups, brainstorming and combining the critical views of several heads to produce a greater outcome than the same people could have produced working in isolation.
Although it's hard to explain exactly how synergy works, we could say that it involves constant communication between team members, resulting in a clash of ideas that causes a natural selection of the better ideas over time, akin to biological evolution. The better ideas get translated into good practices as they are absorbed back into the minds of the individual participants in the given synergistic exercise. In other words, the team learns together as a group to do things smarter, and the good practices learned become intellectual infrastructure one can reuse.
We at Calcey have, over the years, explored various methods of “working together”, and have incorporated two notable practices into our engineering process that clearly facilitate group learning, namely:
- The Group Code Review and
- The Sprint Review (UX Review)
Whilst these two practices might look wildly different on the surface, the underlying social phenomenon is quite similar: we gather in front of a draft solution presented on-screen, be it code or UX, and brainstorm about it critically. The benefits are:
- The presenter of the idea sharpens his or her communication skills. An audience does not understand a poorly presented concept, and will respond accordingly
- The owner of the draft solution, who is also the presenter, is forced to critically evaluate the solution, its limitations and its consequences. No one likes to look silly in front of their colleagues
- Critical feedback from the audience comes in thick and fast because multiple minds are focused on the draft solution presented. A healthy clash of ideas is guaranteed. There are many positive reasons for encouraging this pseudo-conflict, ranging from the owner of the draft solution being too close to it (not seeing the wood for the trees) to the diversity of competency levels in the audience
- Newcomers to the team learn about the frameworks, patterns and practices used within a given product or codebase
- Juniors learn about design, development and usability best practices, and about being more self-critical of their own work
- Developers who didn’t work on the immediate solution under review learn about new extensions to the product and codebase
- Accountability for the given solution extends to a group of people, and is therefore stronger
- There is less opportunity for personality clashes to happen behind closed doors, where one person is “victimized” by his or her peer, unknown to the rest of the team
We found through trial and error that the optimal size for a review group is around five to eight people. This modest size helps maximize participation in the conversation. We also found it important to include at least two competent people from outside the immediate team that developed the solution in question, to eliminate groupthink and encourage out-of-the-box thinking.
The final outcome is that our teams have become increasingly better at producing quality code, and at translating ill-defined functionality into wonderful, attractive user experiences. As such, we strongly advocate “group review” as a good practice in software development.