by James McQuillan, Development Team Lead
Software quality assurance (QA) is a large and critically important part of the development process at Remote Learner. Everyone loves new and exciting products, but providing well-tested, reliable products is just as important to us. Software QA is a huge topic, and we use a wide variety of processes, tools, and methods to catch as many problems as possible before they reach you. There is much more involved than I can explain here, but I’d like to share a bit of how we make sure you get the most reliable products possible.
Education is not just something we believe in for our clients; it is an integral part of making sure our team does the best job it can. For development, it’s vitally important that all developers are as knowledgeable as possible about the products and systems they work on — this helps us eliminate software defects before they occur.
To help developers learn, we periodically run sessions on different development topics, each led by a member of the development team. These can cover widely applicable development philosophies and methods, specific topics like UI development or automated testing, or focused development strategies like dependency injection or class autoloading. Not only does this help educate the whole team, but the developer running the session often learns more about the topic as well. We run traditional presentations and Q&A sessions, as well as interactive projects, workshops, and self-directed learning. We try to keep things interesting.
We also try to document as much as possible in our in-house wiki so team members can learn from others’ experiences and contribute their own. We encourage team members to branch out into work they’re less familiar with — with oversight from expert team members — to make sure everyone has experience with all aspects of the company’s products and systems.
Education for software quality assurance can be harder to quantify than some of the other checks I describe below, but I mention it first because it’s the first line of defense against bugs and defects. The more informed and skilled our developers, the better products we create.
Code review is an essential part of software quality at Remote Learner. To start, each piece of code a developer is working on is isolated during development. Once complete, the code moves into a “Peer review” stage, where another team member reviews the proposed changes. Code is evaluated against a wide variety of standards and requirements before it is integrated into the code base, including (but not limited to) security, testing, syntax, and a “Sanity check”. These checks not only promote software quality by requiring adherence to a set of standards, but also serve as an educational tool, since new developers quickly learn what to do and what not to do. If a problem is found, the reviewer sends the code back to the original developer for changes before it is integrated.
Having a second set of eyes on code changes not only helps uncover bugs the original developer may have missed, but also often prompts higher-level discussion about the way the problem is being solved. This “Sanity check” is the most important part of the peer review stage, and requires another member of the team to manually review the code change. The peer reviewer takes a wider look at how the problem was solved, to help identify architectural defects or fundamental problems with the approach. This helps us make sure that problems are being solved in the best way possible, and that our code is clean, sensible, and accurate.
As we have evolved our processes, many of the simple checks for code syntax and style have been automated, but the higher level “Sanity Check” must always be done by a human.
As we have progressed and the number and complexity of our products have increased, automating as much as is reasonable has become invaluable. Much of this automation is contained in what we affectionately refer to as “CiBot”, or “Continuous Integration Robot”. CiBot is our development automation robot, and handles much of what was previously done manually. CiBot runs all code against thousands of automated checks to help us ensure software quality.
CiBot is custom software running on a Jenkins server. Jenkins provides a solid platform for general-purpose automation, and CiBot runs on top to carry out our custom QA process. CiBot handles installation, parse error checks, code style checks, and in-depth automated unit and integration tests.
CiBot also helps us test different server software stacks. We support a number of different versions of Moodle in production, and some of these versions have unique server requirements. To make sure we’re testing on production-like environments, CiBot runs on a cluster of Jenkins runners. Each runner is a separate virtual machine dedicated to testing a particular software stack, which in turn is tied to a specific version of Moodle. When a code change applies to multiple versions, each of these runners runs the set of tests and checks for its version and stack, and the results are assembled into a single report. This gives us more accurate results for each version, and helps ensure our testing and production environments are as close to each other as possible.
Developers can easily request CiBot reports through JIRA, our issue tracker, and get results throughout the development process, ensuring their code stays on track. CiBot posts a summary of each run as a comment on the JIRA issue, with a link to the full report.
Developers doing a peer review on an issue can use CiBot reports to get results for almost all of the checks involved in the peer review process. Our goal is to automate the time-consuming, tedious, and simple (but important!) checks, allowing peer reviewers to focus on higher-level code review. If developers can focus on the wider question of how the problem is being solved, we can develop better, more innovative solutions.
Much of the time spent in our original peer review process focused on syntax and style checks – how well the code matched our coding standard. Rather than high-level architecture, code style deals with how each individual piece of code is written. While small and low-level, this ensures code readability, which helps future developers quickly read and understand code, reducing future bugs. Consider the following:
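As a hypothetical illustration (the function and field names are invented), here is the same small PHP function written twice: the first version follows a consistent coding style, while the second ignores it.

```php
<?php

// Style-compliant version: consistent indentation, spacing, and naming
// make the intent obvious at a glance.
function get_active_users(array $users): array {
    $active = [];
    foreach ($users as $user) {
        if ($user['active'] === true) {
            $active[] = $user;
        }
    }
    return $active;
}

// The same logic with the coding standard ignored. It runs, but it is
// far harder to read, review, and maintain.
function getACTIVEusers($u){$a=[];foreach($u as $x){if($x['active']===true){$a[]=$x;}}return $a;}
```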
Whether or not you can read PHP code, it’s pretty easy to see that the first example is easier to read and understand. Checking code style manually, and typing up the results, is extremely tedious, time-consuming, and error-prone. CiBot completely automates this process: it uses PHP_CodeSniffer to check code against a predefined coding standard and print a report of any violations. Reports are limited to the actual code change made, so we can focus on any problems that this particular piece of code introduced. Since developers can request checks during development, far fewer violations make it to the peer review stage, and any that do are quickly identified and fixed. This has drastically reduced both the time it takes to do a peer review and the number of re-reviews needed when a problem is found.
While code style ensures readability, it doesn’t ensure the code actually works. Two of the most important checks CiBot runs to verify that our code works are unit and integration tests. We use PHPUnit for unit testing and Behat for integration testing. For those not familiar with software testing, unit tests are small, fast tests that exercise a self-contained piece of code, in isolation, to ensure that “unit” behaves as intended. They are numerous and quick to run, and confirm that each individual piece of the code works on its own.
Integration tests, on the other hand, are intended to match real-world use as closely as possible. To understand why this matters, picture a badly built dresser in which every drawer opens smoothly on its own, but the drawers collide when you try to use them together. Each part works in isolation (that’s what unit tests verify), yet the assembled system does not. Integration tests exercise the system put together.
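To make the distinction concrete, here is a minimal sketch of a PHPUnit unit test; the GradeCalculator class and its percentage() method are invented for illustration.

```php
<?php

use PHPUnit\Framework\TestCase;

// Hypothetical unit under test: a small, self-contained piece of code.
class GradeCalculator {
    public function percentage(float $earned, float $total): float {
        return $total > 0 ? ($earned / $total) * 100 : 0.0;
    }
}

// The unit test exercises the class in isolation: no database, no
// browser, no other components involved.
class GradeCalculatorTest extends TestCase {
    public function test_percentage_is_calculated(): void {
        $calc = new GradeCalculator();
        $this->assertSame(75.0, $calc->percentage(15.0, 20.0));
    }

    public function test_zero_total_returns_zero(): void {
        $calc = new GradeCalculator();
        $this->assertSame(0.0, $calc->percentage(5.0, 0.0));
    }
}
```

Tests like these run in milliseconds, which is what makes it practical for CiBot to run the whole suite on every change.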
We do integration testing with Behat, which lets us describe each test as a human-readable scenario.
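Here is a hypothetical example of such a scenario (the feature, course name, and steps are invented for illustration):

```gherkin
# Hypothetical Behat scenario; names and steps are invented.
Feature: Course self-enrolment
  In order to access course content
  As a student
  I need to be able to enrol myself in a course

  Scenario: Student self-enrols in an open course
    Given I log in as "student1"
    And I am on the "Biology 101" course page
    When I press "Enrol me"
    Then I should see "You are enrolled in this course"
```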
These Behat scenarios are written in Gherkin, a plain-English-like language, and each step is tied to code that drives a real browser on a real site, clicking through the test just as a user would. This lets us see how the code behaves in reality. CiBot runs all of our unit and integration tests for each issue, so we can see when a code change introduces a problem, and every test and check runs before the code is integrated into the released product. Comprehensive unit and integration test suites, together with the automation that ensures they are put to good use, have proven to be among our best tools for ensuring product quality.
Of course, highlighting errors and warnings during development can be a little discouraging at times, so we tried to make CiBot as fun and personable as possible. Each result comes with an easter-egg quote at the top of the JIRA comment, helping to brighten everyone’s day through continuous, automated QA!
CiBot is an ongoing project and we have plans to automate much more of our development process. I hope to share more with you as it develops!
Automation, peer review, and developer education are just some of the methods we use to ensure software quality at Remote Learner. Quality is something we integrate into everything we do and are continuously improving and evolving our processes to make sure we give you the best quality product possible. I hope this post has given you a little more insight into how we ensure quality. On behalf of the development team, I hope you enjoy what we build!