As with any change to your software development ecosystem (in this case, the acquisition of a software application for testing your home-grown software), there are many aspects to consider, and their importance (or weight in the decision) may vary depending on the particular context of your project and your organization.
Traits like organization-wide policies, culture, tech-stack, software architectural decisions, type of SDLC, team composition and skills may have a significant impact on the successful implementation of a Test Automation Tool.
But what is a Test Automation Tool?
In the simplest terms, a Test Automation Tool is a software application whose purpose is to automate the testing of other software.
These tools can be used to automate tests at different levels (e.g., unit, integration, and system tests) for targeted systems (e.g., Citrix, SAP, desktop, mobile) on multiple platforms (e.g., web and mobile) across different operating systems (for instance, MS Windows, macOS, or Linux).
Test automation tools typically offer features to kickstart and accelerate test automation implementation, improving the efficiency of the testing process, saving time during test execution, and allowing test engineers to expand the test scope or focus on more complex scenarios. These tools also aim to enable collaboration between team members and across teams, and they provide abstractions that reduce the complexity of the automation implementation while offering out-of-the-box features for specific testing needs. This, in turn, reduces the skills QA Engineers need to implement test automation.
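To make the idea concrete, here is a minimal sketch of what "automating the testing of other software" looks like at the unit level, using Python's built-in unittest module. The function under test (`apply_discount`) is invented purely for this illustration; real tools layer recording, reporting, and orchestration on top of checks like these.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # An automated check: runs the same way every cycle, no human needed.
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()
```

Once written, this check can run on every build, which is precisely the time-saving and scope-expanding effect described above.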
What is the difference between a Test Automation Tool and a Test Automation Framework?
In contrast to Test Automation Tools, Test Automation Frameworks are sets of guidelines or rules for creating and designing test cases, usually implemented as custom software projects that solve a specific set of testing needs. Test automation frameworks are often “opinionated” in the sense that they impose a particular structure for organizing and managing automated test cases, and they are generally constrained in scope and tightly coupled to a set of technologies, in the form of dependency libraries, file formats, conventions, and file/folder scaffolding for specific platforms.
While their use can greatly improve a team's test automation implementation (with respect to raw automation code), test engineers usually still need software programming skills (often even when the framework is data-driven, keyword-driven, or oriented to record-and-play), especially for debugging and troubleshooting the framework itself.
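A tiny sketch of the data-driven idea helps show why programming skills still matter: the framework separates test data from test logic, so testers extend a table rather than code, until something in the harness breaks and needs debugging. The login function and cases below are invented stand-ins for a system under test.

```python
import unittest

# Hypothetical data-driven cases: (username, password, expected_valid).
# Testers add rows here; the test logic below stays untouched.
LOGIN_CASES = [
    ("alice", "correct-horse", True),
    ("alice", "wrong-password", False),
    ("", "correct-horse", False),
]

def validate_login(username: str, password: str) -> bool:
    """Stand-in for the real system under test."""
    return username == "alice" and password == "correct-horse"

class DataDrivenLoginTest(unittest.TestCase):
    def test_login_cases(self):
        # The harness iterates the data table; each row reports separately.
        for username, password, expected in LOGIN_CASES:
            with self.subTest(username=username):
                self.assertEqual(validate_login(username, password), expected)

if __name__ == "__main__":
    unittest.main()
```

When the harness itself misbehaves (bad data parsing, environment drift), diagnosing it requires reading and debugging this kind of code, not just editing the table.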
Finally, these frameworks usually have to be customized, fine-tuned, or extended to integrate with specific SDLC tools, to provide particular features a project needs (for example, evidence recording, reporting capabilities, or metric gathering), or to be used outside the context for which they were originally created. As with any modification to a software project, these changes may represent a non-trivial effort, depending on their extent and scope, the architecture/design decisions, the available documentation, in-house skills, etc.
How do I know when my project is ready for Test Automation?
As the Quality Assurance practice becomes more widely adopted, partly because of the current trend toward hyper-personalized applications with ultra-fast delivery times, and partly because of more mature software development processes (which take a quality-centric approach to software engineering), it is very common for certain management roles to push teams to adopt Test Automation as a means to reduce time to market and to find defects proactively before they reach production environments.
However, implementing test automation for a software project must not be trivialized. Aspects that go beyond the mere implementation of a tool play a significant role in the who, how, when, where, and with what of test automation in a project (not to say across a whole organization).
To keep this post short, let’s focus on things you must already have (or be on your way to getting) within a software project with a small set of teams:
- Application and Environment Stability: Ever-changing functionality or unstable environments (including versions of libraries, APIs, or database schemas) tend to make any type of automation more difficult, less efficient, and more costly.
- Quality Assurance Policies and Guidelines: Software development teams must already have well-defined and adopted Quality Assurance processes, tollgates, and metrics (e.g., comparable to TMMi level 3), without which test automation would only become a burden (in the best of cases), if not a bottleneck.
- Testing Plan: Teams must have a clear path toward their own definition of a “High-Quality Product” and must have defined (and at least partially implemented) a roadmap for reaching each milestone for each feature, including which levels and types of tests should be applied to which parts of the software, on which environments, at what times, and by which roles, in order to achieve product quality goals and keep metrics and defects under reasonable, pre-established control.
- Test Automation Strategy: Teams must be clear on which levels and types of testing should be automated and which kinds of tests are better candidates for automation (a.k.a. automation criteria). These definitions must be aligned with the test plan so teams know when (within the SDLC), where (in their SDLC ecosystem and environments), and with what priority those tests should be created and executed. The strategy also includes automation goals tied to metrics, some of which may require more advanced calculations, such as ROI.
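As a taste of the ROI calculations such a strategy may require, here is a deliberately simplified model in Python. All the figures are illustrative assumptions; real models often add license and tooling costs, defect-escape costs, and discounting over time.

```python
def automation_roi(build_hours: float,
                   maintain_hours_per_cycle: float,
                   manual_hours_per_cycle: float,
                   automated_hours_per_cycle: float,
                   cycles: int) -> float:
    """Simplified ROI in effort-hours: (savings - investment) / investment.

    Illustrative only: omits license fees, defect-escape costs, etc.
    """
    investment = build_hours + maintain_hours_per_cycle * cycles
    savings = (manual_hours_per_cycle - automated_hours_per_cycle) * cycles
    return (savings - investment) / investment

# Hypothetical scenario: 40 h to build the suite, 2 h upkeep per cycle,
# a manual regression run of 8 h vs. 1 h automated, over 20 cycles.
print(automation_roi(40, 2, 8, 1, 20))  # 0.75, i.e. a 75% return
```

Note that with few cycles the same formula goes negative, which is exactly why automation criteria matter: not every test pays back its build and maintenance cost.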
How to choose a Test Automation Tool?
There are many aspects to consider, whose importance may vary depending on the particular context of your project or organization and which may have a huge impact on the decision for a particular Test Automation Tool and on its successful implementation. Here are six examples of non-traditional traits that could be subject to consideration:
- Organization-wide policies: for example, what is the posture on the use of Open-Source software, industry related regulation restrictions or security concerns.
- Culture: what are the de-facto or unwritten community agreements that may affect the adoption of a tool (e.g., whether it supports a particular kind of shortcut key mapping, plugin set, development modality, or other preferences such as dark mode).
- Tech stack: related to your current hardware (servers, computers and other types of devices) and software infrastructure, ecosystem and SDLC tooling, for example version control system, encryption method, third-party integration, programming language, database support, file format support, etc.
- Current software architectural decisions: evaluate whether the tool should support, for example, microservices, API gateways, message queues, certain types of protocols, or cloud infrastructure, and even upcoming migrations or refactoring projects, upgrades to major versions (e.g., with breaking changes), etc.
- SDLC Type: teams may need to change from Kanban to Scrum, or different teams might have customized (if not conflicting) SDLC processes whose requirements and deliverables must be satisfied by the tool.
- Team composition: the roles, disciplines and seniorities of the teams can impact how fast and well a tool can be adopted and used.
Along with the aspects above, consider the following “more typical” items:
- Required Infrastructure: what is needed from your infrastructure to set up and configure the tool for your users (and possibly CI/CD pipelines) to take most advantage of its capabilities and features.
- Environment and Configuration: how long the setup takes and how well the tool aligns with your current tech stack (e.g., it might need customizations or third-party plugins for integrations).
- Inputs: what the tool needs from the application under test and what your test engineers need to do to start using the tool for a testing cycle.
- Scripting & Automation: which of the tool's features, approaches, and technologies can be used and how they enable your test engineers to kickstart or accelerate the test automation process.
- Execution & Performance: how well the tool supports your test execution process (including evidence recording and metrics) and how well it performs in your environment.
- Test Closure: the level of compliance the tool gives you in meeting your defined QA deliverables, such as execution reports, evidence, metrics, and statistics from historical data for analytics and forecasting.
- Scalability and Extensibility: how well the tool can adjust to changes in application scope, infrastructure/tech stack, processes, and team/organization size.
- Support: how your team gets help with implementing, troubleshooting, and getting the most out of the tool.
- Cost: consider aspects that may impact your budget, such as the licensing model (e.g., per seat or per user, add-ons, monthly or yearly payments), the Total Cost of Ownership (infrastructure maintenance, implementation, support, or consulting fees), and whether your testware would survive contract termination with the vendor.
- Fit to purpose: evaluate how well the tool serves a particular platform, system, or specific need of your testing plan (e.g., functional and non-functional tests, SAP or ServiceNow support, etc.).
There is no one-size-fits-all answer to which test automation tool is right for your organization. The best way to choose one is to carefully consider your specific needs and requirements. The list above is not meant to be exhaustive, but each item can serve as a category from which a further refined set of questions can be derived and answered for each tool under evaluation. This can be a starting point for a more formal, standardized evaluation process for a Test Automation Tool.
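One lightweight way to turn those categories into a standardized evaluation is a weighted scoring matrix. The Python sketch below uses invented weights and 1-to-5 scores for two hypothetical tools, purely to show the mechanics; your own weights should come from the context traits discussed earlier.

```python
# Hypothetical weights per category (higher = more important to this project).
WEIGHTS = {
    "infrastructure": 2,
    "environment_and_configuration": 2,
    "scripting_and_automation": 3,
    "execution_and_performance": 3,
    "support": 1,
    "cost": 3,
    "fit_to_purpose": 4,
}

def weighted_score(scores: dict) -> float:
    """Weighted average, staying on the same 1-5 scale as the raw scores."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS) / total_weight

# Illustrative 1-5 scores gathered while answering each category's questions.
tool_a = {"infrastructure": 4, "environment_and_configuration": 3,
          "scripting_and_automation": 5, "execution_and_performance": 4,
          "support": 2, "cost": 3, "fit_to_purpose": 5}
tool_b = {"infrastructure": 5, "environment_and_configuration": 4,
          "scripting_and_automation": 3, "execution_and_performance": 3,
          "support": 4, "cost": 4, "fit_to_purpose": 3}

ranking = sorted([("Tool A", weighted_score(tool_a)),
                  ("Tool B", weighted_score(tool_b))],
                 key=lambda pair: pair[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

The numbers themselves matter less than the process: agreeing on the weights forces the team to make the context-dependent priorities explicit before any vendor demo.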