“It is easier to prevent bad habits than to break them.” – Benjamin Franklin
The elegant software we love to build doesn’t arise magically. We hire talented developers, but people are only one part of our recipe for creating quality code. Equally important are the processes our team uses.
We have a set of best practices to ensure that: development moves swiftly, nothing slips through the cracks, the work we do can be easily understood by future developers joining the project, and we are using the best tools for the job.
Linting is the term of art for programs that identify style discrepancies and structural errors in code. Teams define the styles they want their project to adhere to (for example, no more than 100 characters per line, 2 spaces per indentation, etc.), and then the linter will either highlight or automatically fix any discrepancies.
This consistency makes moving between files and codebases more seamless and improves the overall readability of the code. What’s more, you can even have rules that identify common inefficient, dangerous, and broken code snippets, helping you write more performant and bug-free code. Fancy!
- ESLint – The linting tool (highlights issues instead of saving fixes)
- eslint-config-airbnb – Airbnb's widely used style configuration for ESLint
- eslint-config-prettier – A configuration that turns off ESLint rules that would conflict with Prettier
- Prettier – The tool for automatically fixing the code when issues are found
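Wiring those pieces together usually takes only a few lines of configuration. As a minimal sketch (the exact file name and rule values are illustrative, not prescriptive), an `.eslintrc.json` might look like:

```json
{
  "extends": ["airbnb", "prettier"],
  "rules": {
    "max-len": ["error", { "code": 100 }]
  }
}
```

Listing `"prettier"` last ensures its conflict-disabling overrides win, so ESLint handles code-quality rules while Prettier owns formatting.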
And in our Android (Java and Kotlin) projects:
- detekt – Static analysis for Kotlin
- ktlint – A linter and formatter for Kotlin
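detekt is driven by a YAML rule set. A minimal sketch of a `detekt.yml` might look like the following; the specific rules and thresholds here are illustrative choices, not our canonical configuration:

```yaml
# Illustrative detekt configuration: flag overly long lines and methods.
style:
  MaxLineLength:
    active: true
    maxLineLength: 100

complexity:
  LongMethod:
    active: true
    threshold: 60
```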
Hot reloading, which automatically rebuilds and reloads an application the moment its code changes, has had a huge impact on the velocity of our teams, allowing us to iterate more quickly and see the up-to-the-second impact of our changes.
Some tools we use that either support or provide hot reloading are:
- Nodemon – A tool for adding hot reloading functionality to Node processes
- Webpack – A tool for building and hot-reloading web apps
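Adopting Nodemon is often as simple as swapping it in for `node` in a package script. A minimal sketch, assuming a hypothetical `server.js` entry point:

```json
{
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon server.js"
  }
}
```

Running `npm run dev` then restarts the process automatically whenever a watched file changes.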
One of the most important parts of our code creation process is code review. We never leave a developer on their own, writing and pushing code that is unreviewed. On every project, we always have at least one developer working directly in the codebase, with another developer to review the code.
During this process, the reviewer may find bugs or more optimal solutions. However, even if no issues are found, something equally important happens: the transfer of codebase knowledge to a second developer. This ensures that no matter what, more than one person knows about the code being written, which in turn improves accountability, builds camaraderie amongst our team, and guarantees that a project never lives and dies by a lone developer.
The review process typically occurs in GitHub’s reviewing interface, as well as on the reviewer’s computer as they pull the code and run it themselves. A common checklist we use when reviewing code is:
- Are there any typos?
- Are there new tests for the code? (more on that below)
- Do the continuous integration checks pass? (more on that below)
- Is the code easy to read and understand? Would a simple comment add clarity?
- Is the same code copied in multiple places? If so, DRY it up (Don’t Repeat Yourself).
- Would it be difficult to modify or extend the code if new requirements arise?
- Are all of the acceptance criteria met? (more on that below)
- Are any of the algorithms or data structures inefficient? Consider Big O notation.
- Are the SOLID principles followed?
Check out our blog post about code review for more detail about our process.
Testing is the complement to code review, as it provides a mechanism for verifying that the code actually behaves the way it appears to. Over time, as more tests are written, the suite also serves as a form of regression testing, guaranteeing that new changes to code do not break old features.
A common approach to writing software is called test-driven development, whereby you first write a test to check the behavior of a piece of code, and then write code to be tested. This guarantees that every incremental step of your code works, and that your code is thoroughly tested. What’s more, that flow encourages problem deconstruction and more focused, compartmentalized solutions as you write individually testable pieces of code. It really is a win-win.
We do not have a one-size-fits-all approach to testing, as each project’s resource and timeline considerations can vary, but always prefer to have at least baseline tests to verify basic functionality of core features.
A common checklist we use when writing tests is:
- Do I test that the various new pieces of code work individually (unit tests) and together (integration tests)?
- Do my tests verify that errors are properly handled?
- Do my tests individually handle setting up and tearing down any external requirements?
- Was it difficult to set up my tests? If it was, could it be because something in my code was poorly written, or relies too heavily on non-deterministic behavior?
Some common testing tools we use are:
Continuous Integration Pipelines
A robust CI pipeline is the glue that binds code review and testing together. It listens for pull requests against repositories, and then runs a set of scripts – such as the linting and testing scripts – against those pull requests to quickly and automatically verify that the changes they propose are acceptable.
This tooling can require a little bit of up-front configuration but is ultimately worth the investment, as it not only guarantees code quality but also speeds up the code review process. This is possible because the results of the CI analysis can be integrated directly into GitHub or other repositories, allowing reviewers to see whether the code they are reviewing passes the required tests without having to run those tests themselves.
What’s more, CI pipelines can be seamlessly integrated with Continuous Deployment tools to automatically deploy the now-verified code once the pull requests are merged into specific branches.
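As one example of what that configuration can look like, here is a minimal GitHub Actions workflow sketch; the script names assume a Node project with `lint` and `test` scripts defined in its `package.json`:

```yaml
# .github/workflows/ci.yml – run lint and tests on every pull request.
name: ci
on: pull_request

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm test
```

The pass/fail status of each step then appears directly on the pull request for reviewers to see.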
We frequently use the following CI tools:
Documentation comes in many forms including research, meeting notes, tickets, and readmes. Depending on the project, we will work with our clients on each of those forms to varying degrees.
No application gets created without thorough research and planning. Some clients come to us with a full set of technical specifications and all of the research already done, while others seek a collaborative process where we work with them to gather requirements, create user stories, and plan out the architecture for a project. We love digging into the details, and so are always happy to help with or take ownership of this process. When that happens, we will create a clear paper trail of the facts and conclusions, ensuring that all stakeholders are on the same page. We like to use Confluence for this process, as it provides a deep set of features for creating, linking, and organizing the mountain of information we produce.
Regardless of whether we are conducting research, there are always various meetings over the course of a project. In the words of management author Patrick Lencioni: “The majority of meetings should be discussions that lead to decisions.” If nobody is writing those decisions down, they’re bound to be forgotten or misremembered, which can slow down the whole team or, worse yet, lead to incorrect courses of action. We build successful long-term relationships by taking an active role in our partnerships, and that means doing whatever we can to make sure those sorts of mistakes don’t happen. So when clients prefer it, we will keep notes on the major takeaways from our meetings and send them out to the team afterwards.
Tickets define the tasks to be completed, and are an essential part of keeping a development team moving swiftly through the countless discrete and interconnected steps of building an application. The most widely-used tool for creating and managing these tickets is Jira. Some of our clients prefer to create these tickets themselves so that we can just focus on writing great code, while others seek our input during every phase of the ticket research, writing, and organizing process. We’re happy either way, as long as quality work gets done.
Last but certainly not least is the humble readme. Even the most clearly-written application requires a readme, so as we create and maintain projects we strive to produce readmes that describe the application’s purpose, explain how to get it running, and document relevant APIs and architectures. This in turn reduces onboarding time and improves developers’ ability to leverage the application, while providing a single source of truth about expected application behaviors.
In the aggregate, this documentation improves the efficacy of our teams and gives our partners a sense of true ownership over the technology we create, as it ensures all stakeholders are aware of the process every step of the way and empowers any future developers to onboard onto the project as quickly as possible.
Our tools and processes help us consistently produce high quality code as quickly as possible while ensuring full buy-in from our clients.
If you’re looking for a partner to help make your project a reality, or want to join a team of passionate devs that take pride in their work, then we’d love to chat. You can contact us through our site, or at email@example.com.