Version: 3.0
Valid until: 2025-04-10
Classification: Low
3.0 | Edward Robinson, Sayali Shitole | Additions/changes as part of the annual review. Added Production Deployment Approval section. |
In the interest of all the stakeholders, the top management of anDREa B.V. (hereafter called anDREa) is actively committed to demonstrably maintaining and continually improving an information security management system in accordance with the requirements of ISO 27001:2017.
The purpose of this document is to describe the system security testing and system acceptance testing of anDREa and the associated controls, checks and administrations.
This document will be reviewed at least annually and whenever significant changes occur.
The objectives of this control are:
To ensure that information security is designed and implemented within the development lifecycle of information systems (A.14.2).
The scope of this document corresponds to Clause 4 Context of the organisation.
This document is:
required reading for:
all employees and contractors of anDREa.
available for all interested parties as appropriate.
Requirement analysis
During this phase, the test team goes through the acceptance criteria, security criteria and/or requirements/user stories mentioned on the sprint board to identify test scenarios. If the acceptance criteria are not clear, the tester may contact system architects or business analysts to understand the requirements.
Test planning
This phase involves effort estimation and the creation of a Test plan. Each Test plan is reviewed by the involved developer(s), anDREa management and in some cases involved system architects or business analysts.
Regression test plan
This phase involves the creation of a Regression Test plan in which we have identified tests which need to be executed after every deployment. This usually happens at the end of a sprint.
Effort estimation of a functional PBI (Product Backlog Item)
A task is created in the new feature PBI and an estimate in hours is added for the amount of testing required.
Test design
During this phase, the test team creates test cases for the PBI which will be developed in the current sprint. Test cases are stored in Azure DevOps.
Test environment
This environment is controlled by the development team. It is deployed with new builds, bug fixes or any change requests. Once the environment is ready, it is handed over to the test team to test the deployed changes.
Test execution
During this phase, the testers will carry out the testing based on the Test plans and the test cases prepared. Bugs will be reported back to the development team.
Test execution is done using Azure DevOps. Tests can be executed as below:
Test reporting
This involves providing a test summary to the entire team after completion of all tests. The final test results can be found in Azure DevOps, and the Regression Test report can be found in the Regression Test plan. Once testing is completed, the test team updates the Product Backlog Item and moves it to the state Ready for Acceptance, after which the PBI is deployed to the acceptance environment for testing.
Types of testing
Sanity testing
This is done to verify that the basic functionality of the application is still working. It is performed after a new build is received on the test environment. If there are any failures in the build, the test team immediately informs the development team for further investigation.
The tests below are conducted as part of sanity testing:
Log in and log out.
Verify if Workspaces are loading.
Verify if the Data Request page is loading.
Verify if the Owner can add/remove members from the Workspace.
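Purely as an illustration, the sanity checks above could also be scripted as an automated smoke suite. The sketch below uses Playwright (the tool used for UI automation, described later in this document); the base URL, environment variables and selectors are assumptions, not the actual myDRE implementation.

```ts
import { test, expect } from '@playwright/test';

// Illustrative smoke suite only: URL, credentials handling and selectors are assumptions.
const BASE_URL = process.env.MYDRE_URL ?? 'https://mydre.example.org'; // hypothetical

test('user can log in and workspaces load', async ({ page }) => {
  await page.goto(BASE_URL);
  // Placeholder selectors standing in for the real login flow.
  await page.getByLabel('Email').fill(process.env.TEST_USER ?? 'tester@example.org');
  await page.getByLabel('Password').fill(process.env.TEST_PASSWORD ?? '');
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(page.getByRole('heading', { name: 'Workspaces' })).toBeVisible();
});

test('Data Request page loads', async ({ page }) => {
  await page.goto(`${BASE_URL}/data-requests`); // hypothetical route
  await expect(page.getByRole('heading', { name: 'Data Requests' })).toBeVisible();
});
```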
Functional testing
In this testing phase, the system is tested against functional requirements/specifications. The purpose of functional testing is to test features by feeding them input and examining output. Functional tests are written on the PBI level and are tracked in the PBI.
Please refer to the screenshot below for reference.
Regression testing
Regression testing is a full or partial selection of already executed test cases which are re-executed to ensure existing functionality still works. Regression testing is performed based on the Regression Test plan.
Positive testing
This testing checks whether an application behaves as expected when given valid (positive) inputs.
Examples of positive testing done in Shared Tenant:
Verify that only owners can add/remove members from a workspace
Verify that an email is sent to all owners of a workspace when a member is added to the workspace
Negative testing
Negative testing is a method of testing an application that ensures the application behaves according to the requirements and can handle unwanted input and user behaviour.
Examples of negative testing done on myDRE:
Verify that members (non-owners) of a workspace cannot add/remove other members in a workspace
Verify that members cannot approve data transfer requests
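As an illustration only, a negative authorisation check such as the first example could be automated at the API level. The sketch below uses Playwright's request fixture; the endpoint, payload and expected status code are assumptions about the Shared Tenant API rather than its actual contract.

```ts
import { test, expect } from '@playwright/test';

// Illustrative negative test: endpoint, payload and status code are assumptions.
test('a non-owner cannot add members to a workspace', async ({ request }) => {
  // Assumes baseURL is set in playwright.config.ts and the request context
  // is authenticated as a regular member (non-owner).
  const response = await request.post('/api/workspaces/demo-workspace/members', {
    data: { email: 'new.member@example.org', role: 'Member' },
  });
  // A non-owner is expected to be rejected; 403 is an assumption about the API.
  expect(response.status()).toBe(403);
});
```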
Acceptance testing
In this testing, a feature or a bug fix is tested against functional requirements/specifications before it is introduced to the production environment. The purpose of acceptance testing is to verify that a feature works in a production-like environment.
Findings of acceptance testing are tracked in a PBI with a separate task.
Below is the cycle we follow in the Shared Tenant. The only difference is that we do not create bugs for the test environment; we only create bugs for issues found in Production. Bugs found in the test environment are tracked using tasks, but the cycle remains the same.
API testing
As part of this testing, we need to validate that any changes made to the Shared Tenant APIs do not break any dependencies of that API, either on the API surface (contracts) or in the expected results (behaviour). The team has agreed to create a regression test bench for the Shared Tenant APIs using Postman, together with the release pipelines.
We are writing API tests for the following APIs to start with:
Workspace API
Compute API
We are also planning to integrate these tests with the release pipelines. This will be owned by the developers.
To view the API tests: log in to Postman → go to the Team workspace → Collections.
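As an illustration, a test script attached to a request in the Workspace API collection could look like the sketch below. The expected status code and response shape are assumptions about the API; the pm object is injected by the Postman sandbox at runtime.

```ts
// Illustrative Postman test script. In Postman itself this is plain JavaScript and
// `pm` is a global injected by the sandbox; the declaration below only keeps this
// TypeScript sketch self-contained.
declare const pm: any;

pm.test('status code is 200', () => {
  pm.response.to.have.status(200);
});

pm.test('response contains a list of workspaces', () => {
  const body = pm.response.json();
  pm.expect(body).to.be.an('array'); // assumed response shape
});
```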
UI automation testing
As part of this implementation, we have automated UI-based scenarios. This is done using the Playwright automation tool; the reasons we started with Playwright are:
Automates web application scenarios
The framework supports cross-browser testing
Auto-wait, smart assertions that retry until an element is found
It's available as a VS Code extension to run tests in a single click and comes with features for step-by-step debugging, exploring selectors, and recording new tests.
Testing cross-language, including JavaScript, TypeScript, Python, Java, and .NET – choose the environment that suits you while still covering all areas and formats
Easy to integrate with Azure Pipelines
Generate an HTML report to view test execution results in the browser
The Playwright framework looks something like this:
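As a minimal sketch of what such a setup typically contains: a playwright.config.ts selects the browsers and the HTML reporter, and test specs live in a tests folder. The settings below are illustrative assumptions, not a description of our actual repository.

```ts
// playwright.config.ts - illustrative configuration only; all values are assumptions.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',                        // e.g. tests/workspace.spec.ts, tests/data-request.spec.ts
  reporter: [['html', { open: 'never' }]],   // HTML report viewable in the browser
  use: {
    baseURL: process.env.MYDRE_URL,          // hypothetical environment variable
    trace: 'on-first-retry',                 // keep traces for failed test runs
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});
```

Tests are then typically run with npx playwright test, and the generated HTML report can be opened with npx playwright show-report.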