OOP is short for Object-Oriented Programming;
OOPS is a short form for Object-Oriented Programming System.
It is a programming paradigm whose core idea is to program entirely in terms of objects, which are analogous to real-world objects. In other words, everything (both the thinking and the programming) revolves around objects.
Below are the fundamental principles of OOP:
- Encapsulation
- Abstraction
- Inheritance
- Polymorphism
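As a minimal illustration, the four classic principles (encapsulation, abstraction, inheritance, polymorphism) can be sketched in Java; the `Shape` example here is hypothetical, not from the text:

```java
// Minimal sketch of the four OOP principles, using a made-up Shape hierarchy.
abstract class Shape {                       // abstraction: only the contract is exposed
    private final String name;               // encapsulation: state is private
    Shape(String name) { this.name = name; }
    String getName() { return name; }        // access only through methods
    abstract double area();                  // each subclass supplies its own behavior
}

class Circle extends Shape {                 // inheritance: Circle reuses Shape
    private final double r;
    Circle(double r) { super("circle"); this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { super("square"); this.side = side; }
    @Override double area() { return side * side; }
}

public class OopDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {             // polymorphism: same call, different behavior
            System.out.printf("%s area = %.2f%n", s.getName(), s.area());
        }
    }
}
```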
Advantages of using OOP
Typically, a QA team representative (a director, manager, or lead) prepares answers to the following questions while tracking the status summary.
- How many bugs are open, and which ones are they?
- How many blocking (P1) bugs are there?
- How many tests are failing (count and %)? Example: 30% failing.
- How much test development is pending? Example: 40% of tests.
- When is the test automation going to be completed?
- What are the bug inflow (new bugs since the last report) and outflow (bugs fixed since the last report)? (Example: weekly)
- What are the total numbers of bugs fixed and bugs opened? What is the bug trend?
- How many bugs are yet to be verified?
- Note that bug verification and closing are usually done by the bug filer.
Capture the answers to the above questions in a wiki or document, update them in the email for the weekly status report, and send it to the stakeholders.
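Some of these status numbers are simple ratios. As a toy sketch (the method names and sample figures here are illustrative, not from the text):

```java
public class StatusMetrics {
    // Percentage of executed tests that are failing.
    static double failingPercent(int failed, int executed) {
        return executed == 0 ? 0.0 : 100.0 * failed / executed;
    }

    // Percentage of planned tests whose development is still pending.
    static double pendingDevPercent(int developed, int planned) {
        return planned == 0 ? 0.0 : 100.0 * (planned - developed) / planned;
    }

    public static void main(String[] args) {
        // Matches the examples above: "30% failing", "40% tests pending".
        System.out.println(failingPercent(30, 100));
        System.out.println(pendingDevPercent(120, 200));
    }
}
```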
Quality is tracked by convergence toward the release criteria, such as the number of tests executed/passed, open P1/P2/P3 bugs, stress/longevity criteria, performance targets, code coverage, etc.
To track quality, below are some of the measurements taken on each build, each sprint, or a weekly basis.
- Test execution metrics: track the dev code changes and testing progress since the last build or tracking period.
- Bug metrics/defect tracking: the number of bugs opened and closed since the last build or tracking period.
- Code coverage: % class, % method, and % line coverage since the last build or tracking period.
- Performance numbers: throughput numbers since the last build or tracking period.
- Bug metrics covering incoming and fixed bugs are plotted as graphs and tracked on a weekly basis. A bell curve is what is expected over the product testing cycle: initially the incoming (new) bug count is low, then it rises to a peak, and then it falls again as bugs get fixed. A deviation from this bell curve may indicate low quality, since bugs keep coming in and convergence toward the release criteria is not being met.
- Quality is assessed and projected at discrete levels rather than in absolute numbers. These are:
- High quality
- Medium quality
- Low quality
A high-quality product should be the target of any good product team, and of course it costs more compared to a low-quality product.
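The inflow/outflow trend described above can be computed from per-week bug counts. A toy sketch (the weekly figures here are invented to show a roughly bell-shaped cycle):

```java
import java.util.List;

public class BugTrend {
    // Net open-bug delta per week: inflow (new bugs) minus outflow (fixed bugs).
    static int[] netPerWeek(List<Integer> inflow, List<Integer> outflow) {
        int[] net = new int[inflow.size()];
        for (int i = 0; i < net.length; i++) {
            net[i] = inflow.get(i) - outflow.get(i);
        }
        return net;
    }

    public static void main(String[] args) {
        // Hypothetical six-week test cycle: inflow rises, peaks, then falls.
        List<Integer> inflow  = List.of(2, 8, 15, 14, 6, 2);
        List<Integer> outflow = List.of(1, 4, 10, 15, 9, 5);
        for (int n : netPerWeek(inflow, outflow)) {
            System.out.print(n + " ");  // positive = backlog growing, negative = converging
        }
    }
}
```

A sustained positive net (backlog still growing) late in the cycle is the deviation-from-bell-curve signal mentioned above.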
Severity (S) is an indication of a bug's effect and impact on customers. For any new bug, this is the first thing to determine while filing the bug.
Typically, severity is divided into five levels: S1 (highest), S2, S3, S4, S5 (lowest).
- S1: the feature can't be tested, or further testing is blocked
- S2: the majority of features are not working, and many tests are failing
- S3: some cases are not working
- S4, S5: very rare corner cases that do not impact major functionality
The bug filer determines the severity and sets it while filing the bug.
No. Typically, once the severity is set on a bug, no changes can be made; bug tools will usually not allow changing the severity.
Bug priority is nothing but an indication of how important or urgent it is to get the bug fixed.
Typically divided into five levels: P1 (highest), P2, P3, P4, P5 (lowest).
- P1: needs an immediate fix (tests are blocked), within 24 hours
- P2: needed by the next build (major functionality is not working)
- P3: it is ok to wait, but fix it before release
- P4/P5: not mandatory; it is ok to fix them or not in the current release
The bug filer determines the priority and sets it while filing the bug. The rule of thumb is to set the priority to match the severity when filing bugs. Example: P1/S1, P2/S2, P3/S3, P4/S4, P5/S5.
Note that anyone can change the bug priority, and bug tools allow that. It is typically changed after conversations with stakeholders such as release managers and product managers. Example: QA files a bug, but the dev manager or product manager can change its priority.
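The thumb rule above (priority defaults to the matching severity level) can be sketched with a pair of hypothetical enums:

```java
public class BugLevels {
    enum Severity { S1, S2, S3, S4, S5 }  // S1 highest, S5 lowest
    enum Priority { P1, P2, P3, P4, P5 }  // P1 highest, P5 lowest

    // Default priority mirrors the severity level: P1/S1, P2/S2, and so on.
    static Priority defaultPriority(Severity s) {
        return Priority.values()[s.ordinal()];
    }

    public static void main(String[] args) {
        System.out.println(defaultPriority(Severity.S2));  // prints P2
    }
}
```

In a real bug tracker the priority field would then stay editable (per the note above), while severity would not.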
The bug life cycle is the process of managing the events from the creation of a new bug until it is closed.
Below are the typical bug life cycle states; they might have different names depending on the bug tool:
- New → set when a bug is filed by QA, or when a bug is re-opened.
- Assigned → set when the bug is assigned to a developer by QA/QA manager/dev manager/developer.
- Evaluating/In progress → set by the developer during evaluation.
- Fixed → set by the developer when the code change for the bug fix is made.
- Duplicate → set by the developer if the bug already exists.
- Not a bug → set by the developer if it is not a bug.
- Not reproducible → set by the developer if the bug cannot be reproduced.
- Verified → set by QA after verifying the fix is OK.
- Re-open to New → set by the developer.
- Re-open to New → set by QA after clarifying how to reproduce the bug again.
- Closed → set by QA/the filer after verification.
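Such a life cycle is essentially a state machine. A minimal sketch, with state names and transitions simplified from the list above (real tools add Duplicate, Not a bug, etc.):

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

public class BugLifeCycle {
    enum State { NEW, ASSIGNED, IN_PROGRESS, FIXED, VERIFIED, REOPENED, CLOSED }

    // Allowed transitions between states (simplified).
    static final Map<State, EnumSet<State>> NEXT = new EnumMap<>(State.class);
    static {
        NEXT.put(State.NEW,         EnumSet.of(State.ASSIGNED));
        NEXT.put(State.ASSIGNED,    EnumSet.of(State.IN_PROGRESS));
        NEXT.put(State.IN_PROGRESS, EnumSet.of(State.FIXED));
        NEXT.put(State.FIXED,       EnumSet.of(State.VERIFIED, State.REOPENED));
        NEXT.put(State.VERIFIED,    EnumSet.of(State.CLOSED));
        NEXT.put(State.REOPENED,    EnumSet.of(State.ASSIGNED));
        NEXT.put(State.CLOSED,      EnumSet.noneOf(State.class));  // terminal
    }

    static boolean canMove(State from, State to) {
        return NEXT.get(from).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove(State.FIXED, State.VERIFIED));  // true
        System.out.println(canMove(State.CLOSED, State.NEW));      // false
    }
}
```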
A test scenario is nothing but a sequence of steps to be performed with the goal of verifying a user story, use case, or requirement. These scenarios become the test cases (tests) and are grouped into test suites.
Test scenarios are created during the test design process and are documented in the test specification.
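For example, a scenario such as "adding items to a cart yields the correct total" becomes a test. A minimal sketch without any framework (the `Cart` class here is hypothetical, standing in for real product code):

```java
public class CartTest {
    // Hypothetical class under test.
    static class Cart {
        private double total;
        void add(double price) { total += price; }
        double total() { return total; }
    }

    // Scenario: new cart -> add two items -> total equals the sum of prices.
    static boolean scenarioAddItems() {
        Cart cart = new Cart();
        cart.add(10.0);
        cart.add(2.5);
        return cart.total() == 12.5;
    }

    public static void main(String[] args) {
        System.out.println(scenarioAddItems() ? "PASS" : "FAIL");
    }
}
```

In practice the same scenario would be written as a TestNG or JUnit test so it can be grouped into suites and reported on automatically.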
Release criteria are the minimum checklist, or list of goals, to be achieved before releasing the product. This effort is driven by the RE team, and all the project stakeholders contribute to making the release happen.
Below is a sample release criteria checklist, which can be taken as a template if none is available.
- 100% feature coverage and test development
- 100% test execution
- 95-100% pass rate
- >90% automation for regressions
Performance
- Minimum throughput numbers relevant to the product
- Comparison against competitive products
Code coverage
- >90% class level
- >50% method level
- >40% instruction/statement level
Internationalization (I18N) & Localization (L10N)
- Internationalization support
- Specific languages to translate
- NOTE: I18N is the short form because there are 18 characters between the I and the N in "Internationalization". Similarly, L10N is the short form because there are 10 characters between the L and the N in "Localization".
- Release notes
- User Manuals
- Installation Guide, Admin Guide, Developer Guide, Troubleshooting guide, etc.
- License and license text
- Support/Maintenance plan
The test development process is the part of the SDLC covering the strategy, methodology, frameworks, tools, etc. to be applied in testing the product.
Below are the key steps involved in the process.
First, understand the expected behavior of the product's component/module or integration functionality. This means going through the use cases/user stories/features and discussing any clarifications with the developers. For this, use the functional specification, or JIRA tickets for user stories or tasks.
Second, think through and brainstorm with the team all the test scenarios for a particular use case. That means creating the exact sequence of steps to be performed for each use case, as if testing it manually. Put those scenarios incrementally into the Test Spec (TS), based on the agile or waterfall SDLC process.
Third, execute the tests manually and file bugs for any deviation from the expected behavior. In summary, for all test scenarios (based on priority) from the TS:
- Run each scenario manually if possible (example: walking through web page navigation/flows)
- Write the test code/test script
- Typically, it is written in the same language the product is developed in; for example, Java-based test code for a Java project.
Fourth, create a test development framework, which is a set of utilities/tools to help with the following tasks:
- An execution engine/driver to trigger test execution at runtime; otherwise, one has to run tests manually, say using a simple java command line.
- Supplying input data as properties, CSV, or other data files; otherwise, one has to supply arguments manually.
- Generating a result/report document (say HTML or XML); otherwise, one has to look at the console for results.
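These three framework responsibilities can be sketched together in a few lines. Everything below (class and method names, the sample input rows) is illustrative, not a real tool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Toy test "framework": an execution driver plus an HTML report generator.
public class MiniFramework {
    record Result(String name, boolean passed) {}

    // Execution engine: runs one test and records the outcome.
    static Result run(String name, BooleanSupplier test) {
        boolean ok;
        try { ok = test.getAsBoolean(); } catch (Exception e) { ok = false; }
        return new Result(name, ok);
    }

    // Report generator: renders results as minimal HTML instead of console text.
    static String htmlReport(List<Result> results) {
        StringBuilder sb = new StringBuilder("<table>\n");
        for (Result r : results) {
            sb.append("<tr><td>").append(r.name()).append("</td><td>")
              .append(r.passed() ? "PASS" : "FAIL").append("</td></tr>\n");
        }
        return sb.append("</table>").toString();
    }

    public static void main(String[] args) {
        // Input data that would normally come from a CSV or properties file:
        // each row is {a, b, expected sum}.
        int[][] additions = { {2, 3, 5}, {10, -4, 6} };
        List<Result> results = new ArrayList<>();
        for (int[] row : additions) {
            results.add(run("add_" + row[0] + "_" + row[1],
                            () -> row[0] + row[1] == row[2]));
        }
        System.out.println(htmlReport(results));
    }
}
```

Real frameworks such as TestNG provide these same pieces (runner, data providers, HTML reports) out of the box.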
Fifth, select or create a custom test tool/driver:
- Identify and select the right open-source/free/commercial tool to help with testing or test automation. For example, Selenium for web-site test automation, or the SoapUI tool for web-services testing.
Sixth, design and implement a reporting mechanism:
- Design the final reports, such as TestNG or custom HTML reports, with basic system-under-test details, the number of tests executed, tests passed, tests failed, tests skipped, etc.
Seventh, set up the SUT (System Under Test).
Eighth, run the test scenarios manually and file bugs, if any.
Ninth, automate the tests for regression testing.
- Create the test scripts/test code along with a base build framework (the same one dev uses for their builds), such as the Ant, Maven, or Gradle build tools.
- Review the test code with the team (peers/dev).
- Check the scripts/code into the SCM repository (say git, alongside the product code).
Finally, perform regression testing:
- Execute the automated tests build by build of the product.
- Schedule automated execution jobs using CI tools such as Jenkins/Hudson.
- Send the daily test result reports.
- Analyze the failures
- File regression bugs
- Verify and close bugs
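One way to schedule such nightly regression runs is a Jenkins declarative pipeline. This is a hedged sketch: the stage name, the Gradle task, and the report path are assumptions, to be adapted to the project's actual build tool:

```groovy
pipeline {
    agent any
    triggers {
        // Run the regression suite nightly; 'H' spreads the start time around 2 AM.
        cron('H 2 * * *')
    }
    stages {
        stage('Regression tests') {
            steps {
                sh './gradlew test'   // or 'mvn test', per the project's build tool
            }
        }
    }
    post {
        always {
            // Publish the test result reports for the daily status email.
            junit '**/build/test-results/test/*.xml'
        }
    }
}
```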