Building Your Own Software Part 6: Testing


Welcome back! If you’ve been using this guide to build your own software, you’ve now got some software to start looking at! Your coders are saying, “It’s done!” By this point in the process, they should have performed many types of tests. You’ll hear terms like “white box,” “black box,” “unit,” “integration,” “functional,” and “end-to-end” as names of the tests they’ve run. If your coders can’t tell you what tests they ran, the results they found, and what they did to fix the problems (there are always problems, or “bugs”), then you need to send them back to school.

Basically, it’s your coders’ responsibility to make sure they are producing code that is efficient, performs well, adds value to the software and doesn’t break anything else. What about the use cases and test plans that the team produced during Part 4? The developers should be reading, understanding and using them to test their code. When they discover additional testing scenarios (and they will) that aren’t covered, they should talk to their business analyst to get the test plan updated.
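To make that concrete, here’s a minimal sketch of the kind of unit test a developer might write against a use case. Everything in it is hypothetical – the function name, the business rule, and the dates are invented for illustration, and the test would be run with a tool like pytest:

```python
from datetime import date, timedelta

def calculate_review_due_date(received: date, turnaround_days: int) -> date:
    """Hypothetical business rule: a review is due N days after it is received."""
    return received + timedelta(days=turnaround_days)

def test_due_date_matches_use_case():
    # Scenario lifted from a (hypothetical) use case in the test plan:
    # a request received June 1 with a 2-day turnaround is due June 3.
    assert calculate_review_due_date(date(2016, 6, 1), 2) == date(2016, 6, 3)
```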

A good test plan is essential to making sure you’ve produced software that can withstand exposure to other people. People love to poke holes in things. They love to try to find new ways to do things that will make their jobs or lives easier. Hackers take that a step further, finding great joy in discovering and exploiting others’ mistakes. For these reasons, use cases are not enough. They are only part of your test plan (the “test to succeed” part). The test plan has to contain scenarios that deliberately try to make the software fail.
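A “test to fail” scenario looks very different from a use case. Here’s a minimal sketch – the parse_member_id helper and its nine-digit rule are invented for illustration – that deliberately feeds the software bad input and checks that it gets rejected rather than quietly accepted:

```python
import pytest

def parse_member_id(raw: str) -> str:
    """Hypothetical validator: member IDs are exactly nine digits."""
    cleaned = raw.strip()
    if not (cleaned.isdigit() and len(cleaned) == 9):
        raise ValueError(f"invalid member ID: {raw!r}")
    return cleaned

def test_rejects_malformed_member_id():
    # The kind of hole-poking scenario the use cases alone would not cover.
    with pytest.raises(ValueError):
        parse_member_id("12345-ABC")
```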

Once the coders are confident in the software, the quality control (QC) team gets involved. QC is the process of identifying defects and ensuring that the software meets consumer expectations. QC people work closely with the business analyst during design and development to make sure the test plan is as complete as possible. They walk through the software design, looking for possible failure points to test. Once they get hands-on with the software, they repeat some of the same tests the coders have done (“functional,” “system” and “end-to-end”) and then begin their own tests, performing “sanity” and “regression” tests, followed by “acceptance,” “stress” and “usability” testing.

As the QC team works, they create defect reports in their tracking system, detailing what they were trying to do, what the expected result was, and the actual result. If the coding team has done their work well (including their part of the testing), then the “bug list” can be small, but never expect zero bugs, especially in complex software systems like a utilization review management system. There are often unexpected relationships between parts of the software to be explored by both the coding and the quality teams.
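The exact fields vary by tracking system, but a defect report usually boils down to something like this sketch (the structure and field names here are illustrative, not any particular tracker’s schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DefectReport:
    title: str
    steps_to_reproduce: List[str]
    expected_result: str
    actual_result: str

# A hypothetical defect, written the way QC would record it.
example = DefectReport(
    title="Due date lands on a weekend",
    steps_to_reproduce=[
        "Open a new review",
        "Set the received date to a Friday with a 2-day turnaround",
    ],
    expected_result="The due date skips the weekend",
    actual_result="The due date falls on a Sunday",
)
```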

At this point, we return to Part 5 – the coders have more work to do! Bug lists are usually grouped by functionality and then categorized by severity (a small triage sketch follows the list):

  • Showstopper – stop everything and fix it right now
  • High – must fix before release
  • Med – should be fixed when time allows
  • Low – fix only after everything else (may not ever be worked)
  • Cosmetic – no functional effect on usage, but not fixing it may reflect poorly if users see it
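As a small illustration of how that categorization gets used, here is a hypothetical triage sketch: giving each severity a rank lets the bug list be sorted so the showstoppers surface first. The bug titles are invented.

```python
from enum import IntEnum

class Severity(IntEnum):
    SHOWSTOPPER = 1
    HIGH = 2
    MED = 3
    LOW = 4
    COSMETIC = 5

bug_list = [
    ("Print button misaligned", Severity.COSMETIC),
    ("Login fails for all users", Severity.SHOWSTOPPER),
    ("Report totals off by one", Severity.HIGH),
]

# Work the most severe defects first.
for title, severity in sorted(bug_list, key=lambda bug: bug[1]):
    print(f"{severity.name}: {title}")
```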

From the bug list, coders begin working on fixing the defects – and you start a mini-cycle of development again. Coders will typically use a tool called a debugger that allows them to step through the software and see what is happening as the code executes. By following the exact steps that the QC team documented, they can reproduce the error and determine how to fix it.
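In practice, the first step is usually to turn QC’s documented steps into something repeatable. The sketch below is hypothetical (the function and the bug are invented), but it shows the idea: replay the exact scenario from the defect report, then drop into Python’s built-in debugger (pdb) to step through the code as it executes.

```python
from datetime import date, timedelta

def calculate_review_due_date(received: date, turnaround_days: int) -> date:
    # Hypothetical buggy rule from the defect report: it never skips weekends.
    return received + timedelta(days=turnaround_days)

def reproduce_defect():
    received = date(2016, 6, 3)   # the Friday from QC's documented steps
    breakpoint()                  # opens pdb; "n" steps a line, "p <name>" prints a value
    due = calculate_review_due_date(received, 2)
    assert due.weekday() < 5, f"due date {due} lands on a weekend"

if __name__ == "__main__":
    reproduce_defect()
```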

In this article, I’ve only touched on a few of the possible testing types. For a comprehensive list of the different types of testing, with a description of each, refer to the excellent article at http://www.softwaretestinghelp.com/types-of-software-testing/, which covers the types summarized in the bullet points below. A brief code sketch contrasting the first two follows the list.

  • Black box testing – Tests are based on requirements and functionality, ignoring any knowledge of the code’s inner workings.
  • White box testing – This testing is based on knowledge of the internal logic of an application’s code and is also known as glass box testing. Tests are based on coverage of code statements, branches, paths and conditions.
  • Unit testing – The testing of individual software components or modules. Unit testing is typically performed by the programmer and not by testers because it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.
  • Incremental integration testing – This is a bottom-up approach to testing, or ‘continuous’ testing of an application as new functionality is added. Application functionality and modules should be sufficiently independent to test separately. Programmers and testers perform this type of testing.
  • Integration testing – This process tests integrated modules to verify combined functionality after integration. Modules typically include code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
  • Functional testing – This type of testing ignores the internal parts and focuses on whether the output occurs per requirements. It is like black box testing geared to functional requirements of an application.
  • System testing – The entire system is tested against the requirements. It is black box type testing that is based on overall requirements and specifications and covers all combined parts of a system.
  • End-to-end testing – Similar to system testing, this involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications or interacting with other hardware, applications or systems.
  • Sanity testing – This test is done to determine whether a new software version is performing well enough to accept it for major testing. If the application is crashing during initial use, then it is not stable enough for further testing.
  • Regression testing – This is re-testing the application as a whole after a change to any module or functionality, to make sure nothing that previously worked has broken. It is difficult to cover the entire system manually in regression testing, so automation tools are often used.
  • Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer performs this testing to determine whether to accept the application.
  • Load testing – This tests system performance behavior under heavy load. For example, it is important to test a web site under a range of loads to determine at what point the system’s response time degrades or fails.
  • Stress testing – The system is stressed beyond its specifications to check how and when it fails. This testing applies loads well beyond normal limits, such as pushing storage past capacity, running complex database queries, or feeding continuous input to the system or database.
  • Performance testing – This term is often used interchangeably with ‘stress’ and ‘load’ testing. Performance testing checks whether or not the system meets performance requirements and uses different performance and load tools to do this.
  • Usability testing – This user-friendliness check tests application flow and navigation. Can new users understand the application easily? Is proper help documented if the user is stuck at any point?
  • Install/uninstall testing – This includes testing full, partial or upgraded install/uninstall processes on different operating systems under different hardware and software environments.
  • Recovery testing – How well does the system recover from crashes, hardware failures, or other catastrophic problems?
  • Security testing – Can the system be penetrated by hackers? Security testing measures how well the system protects against unauthorized internal or external access and checks whether system and database are safe from external attacks.
  • Compatibility testing – This testing explores how well the software performs in particular hardware/software/operating system/network environments.
  • Comparison testing – This evaluates product strengths and weaknesses against previous versions or other products.
  • Alpha testing – At the end of development, the organization can set up in-house virtual user environments for this type of testing. Minor design changes may still be made as a result of alpha testing.
  • Beta testing – This is typically done by end-users or other designated users and is the final testing before releasing the application.
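To make the first two items a bit more concrete, here is a brief, hypothetical sketch contrasting them: the black-box test knows only the stated requirement, while the white-box test targets a specific branch the tester knows exists in the code. The parse_member_id helper is the same invented example used earlier, repeated so the sketch stands alone.

```python
def parse_member_id(raw: str) -> str:
    """Hypothetical validator: member IDs are exactly nine digits."""
    cleaned = raw.strip()
    if not (cleaned.isdigit() and len(cleaned) == 9):
        raise ValueError(f"invalid member ID: {raw!r}")
    return cleaned

def test_black_box_valid_id_accepted():
    # Black box: based only on the requirement that a nine-digit ID is accepted.
    assert parse_member_id("123456789") == "123456789"

def test_white_box_whitespace_is_trimmed():
    # White box: based on knowing the code has a strip() step for padded input.
    assert parse_member_id(" 123456789 ") == "123456789"
```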

Todd Davis

Todd Davis, Vice President of IT – ReviewStat Services for UniMed Direct, is responsible for the continued development of the industry-leading ReviewStat system. Leading a team of like-minded professionals, Todd works to review and improve ReviewStat’s full-featured and robust system to make it even more efficient and easy to use.