
Evolving from Quality Assurance to Quality Engineering


If it has not crossed your desk or inbox yet, you can catch up on the finer details over your next lunch break with your project managers or CTO. For now, just glance at the World Quality Report 2018-19, which surveyed 1,700 CIOs and other senior technology professionals across 10 sectors and 32 countries.

The report unravels how ‘user satisfaction’ is rising to the top of the stack in most Quality Assurance (QA) and testing strategies. The advent of customer-centered innovation, digital transformation, agile approaches, DevOps, the Internet of Things (IoT), the Cloud, and more is fuelling this change. But above everything else, what stands out is a list of recommendations that makes you reconsider the very role of QA.

The report urges professionals and managers to:
– Amplify the level of smart test automation
– Transform the QA and test function to support agile development and DevOps teams
– Invest in smart QA and test platforms, and
– Define a test platform strategy and QA strategy at an enterprise level

But these are no surprises. When the report notes that 99 percent of respondents use DevOps in at least one of their projects, and that automation is emerging as the biggest bottleneck holding back QA and testing today, is it not just reiterating what you see in your office and scrum huddles every now and then?

The Big Burst – Choices, Challenges, and Complexities

We are in a time when devices are proliferating at an unprecedented speed, when 97 percent of respondents (in the report) show some kind of IoT presence in their products, and when Cloud, DevOps, and Agile have ceased to be mere PowerPoint enhancers. They are now stark realities that software and application teams encounter and leverage every day. No wonder there is a pressing need to bring Software Development/Design Engineers in Test (SDETs) into the team, and to inject skills in security, non-functional testing, test environments, and data management among testers. The challenges and context of this modern software/application world have created a seismic shift. It is a post-Uber planet where only the best will survive. And relying on QA alone is not going to make the cut here.

Why does QA need an intervention?

Quality Assurance (QA) only entails activities for ensuring quality: spotting flaws early enough, code review, analysis, and refactoring. But users need a stronger and broader variant of this approach, one that goes beyond testing cycles and percolates into the culture and the very way developers and designers think about software. Yes, get set for the arrival of Quality Engineering (QE).

Quality Engineering transcends quality control, quality assurance, and testing. It is proactive, strategic, forward-looking, intuitive, and far bigger in scope than QA. It is not limited to processes and procedures: it expands into how those processes come to be, right at the nucleus of ideation and user empathy. It straddles all areas of QA and testing and lifts quality to an altogether new level.

You will notice that QA has tended to be confined to certain stages of software development, with a post-code-writing role in which QA teams checked what developers had written. But QE is not the tail end of a software cycle. It is a radical way of working that starts well before the code begins, and it permeates the entire development flow. QE helps organizations and developer-tester teams come together against the onslaught of diversity and the exponential rise of devices, platforms, applications, and content needs.

QA alone will not suffice to match the speed, persistence, and thoroughness that today's Agile and Shift-Left world demands. Quality Engineering ensures that quality is embraced early on and enhanced at every step and at every desk, not just at the exit door of the software cycle. It takes end-to-end, architectural approaches to comprehensive software quality.

But – how do you evolve from QA to QE?

Organizations have to embrace this new culture and mindset to embark on this massive shift.

This is where a continuous integration model between developers and testers comes into play. Testing becomes consistent, embedded into code design, and easy to integrate into the entire chain. Development becomes iterative, collaborative, and adaptive. It also entails localizing problems and fixing individual parts so that all red flags are addressed before the pieces come together into the whole. Organizations will have to usher in a new way of looking at and designing the software development lifecycle, fortified with sustainable automation frameworks and methodologies as well as Continuous Integration efforts. Resources and bandwidth will have to be furnished so that a test infrastructure can flourish and integrate without scalability or latency hiccups. Test environments and production environments will also need to be conjoined so that quality becomes a precursor rather than an afterthought. Automation of tests might be called for, and QA folks might be asked to think and code like a user.
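To make this concrete, here is a minimal sketch of a check that could live alongside the feature code and run on every commit in a continuous integration pipeline. It assumes a Python codebase and a pytest-based suite; the function, values, and test names are invented purely for illustration:

import pytest

# Illustrative only: a small check that sits next to the feature code and runs
# on every commit in the CI pipeline, so feedback arrives while the change is
# still fresh rather than at the exit door of the release.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_happy_path():
    assert apply_discount(200.0, 10) == 180.0

def test_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)

Run by the pipeline on every push (for example, by invoking pytest in the CI job), such checks localize failures to individual parts long before the pieces are assembled into the whole.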

This is where culture and habits face a makeover. Helping and empowering others – beyond organizational silos, dependencies, and software hierarchies – will be the new norm for Quality Engineering to get into action. Even the delivery aspects will change: get set for a scenario of multiple releases and whole-system checks. QE teams become involved and impactful in software design at a new degree and depth, so that core-functionality tests can be planned with a proactive edge.

In short, the move to QE reflects, but is not limited to, the switch-over from the waterfall era to the Shift-Left era of software. With QE in place, quality travels right up to the northernmost point of any software. QE is about thinking of quality all the time, at every level, and by everyone around.

QE – More than a Vowel Change

Businesses in the current era of impatient customers cannot afford even a small disruption in their business uptime. That gives quality a different gravity, and that is where QE shines. And a QE professional is much more than a coder or a tester.

Project managers and CEOs are gearing up to tap this new face and fuel of quality. It’s time for a new regimen – one that does more than simply measure.

posted May 9 by Arunkumaarts


Related Articles
+4 votes

“The old order changeth, yielding place to new...” – this saying is a universal truism in the world of software application development, and testing is no exception.

The world of testing is today turning topsy-turvy: the rules of the game are changing, and testing professionals need to adapt to this new normal. The testing discipline, which was very rule-based (entry/exit criteria, well-documented requirements, and so on), is evolving very fast and undergoing a massive transformation. The reality I see fast emerging on the ground is one where testers are no longer limited to testing in the testing phase of the SDLC but are equally accountable for the quality of applications that go into production.

Based on years of experience handling testing programs of varying sizes and complexity across industries and geographies, I see a perceptible shift in expectations. No longer can testing professionals take shelter in citing dependencies that are external and not in their control when defects get reported in production. The message in the enterprise is loud and clear: testers need to answer for application failures; it is perhaps only the degree of ownership that can be debated. Also, the “structured” testing approach, while still required, now needs to be flexible enough to cater to a workplace that is growing increasingly chaotic – or creatively chaotic, as some would like us to accept.

In this post, I draw on experience handling testing in an environment that does not support what a testing professional would normally deem the necessary factors for working and succeeding. How does one suddenly cope with a work environment where the rules of testing are breached more often than followed? For testing to be successful and for testers to survive, a new paradigm is required, one based on an #Assurance mindset and not just a testing mindset.

In this article, I will focus on the softer skills that testing professionals need to imbibe in this new age and time. I will get into other aspects later.

Testers today should inculcate a very high sense of ownership for quality. The character traits that aid this are “extreme” proactiveness – finding ways of getting to know the requirements even when you are not formally invited to requirements-elicitation workshops; “extreme” flexibility in adapting to changing priorities and crunched timelines; “extreme” doggedness in tracing defects to closure; and an “extreme” sense of end-user centricity, which means the ability to see beyond the written specifications (if they exist at all) to the experience and usability of the application. In other words, it is an age of “extremes” in which testing needs to happen.

While the above might appear as a compromise to testing purists, let me clarify that I am an ardent supporter of structured and disciplined ways to test, but realistic enough to understand the need to adapt testers and testing approaches to an increasingly agile (some feel it borders on unstructured in reality) and fast-paced world of software application development.

Signing off for now... I would love to hear your comments and your experiences of assuring the quality of large-scale enterprise applications in a world where application development and project lifecycles do not follow the rule book in most cases.

 

0 votes

This was on my old site as an HTML file for a long time. I’ve re-edited and corrected it for modern times.

A classic question asked about test strategy is “How much testing is enough?” If you’re testing strictly from pre-scripted procedures or automation, the answer may seem obvious: You’ve done enough testing when you’ve run all of that. But that answer is not worthy of a thoughtful tester. A thoughtful tester answers the question in a way that addresses the mission of testing, not merely the buttons that get pushed along the way. All the test procedures that currently exist might not be enough to satisfy the mission… or they may be more than needed.

Our mission is not to perform a certain set of actions. For most of us, our mission is to learn enough important information about the product so that our clients (developers and managers, mainly) can make informed decisions about it.

Testing as Storytelling

When you test, you are doing something much like composing an investigative news story. It’s a story about what you know about the product, why you think you know it, and what implications that knowledge has for the project. Everything you do in testing either comprises the story or helps you discover the story. You’ve done enough testing when you can tell a compelling story about your product that sufficiently addresses the things that matter to your clients. Since your compelling story amounts to a prediction about how the product will be valued by its users, another way of saying this is that your testing is finished when you believe you can make a test report that will hold true over time—so try to write a classic.

For instance, I once tested the install process of a complex product. My mission was to assess and catalog all the changes that the product made to the systems on which it was installed. So my first step was to analyze the install process. Then I diagrammed it, decided how to test the important parts of that process, and found ways to do that under reasonably controlled conditions. I came to a conclusion about the product that flowed logically from the testing, and then I checked the conclusion to be sure that each aspect of it was indeed corroborated and supported by the tests I performed. I needed this to be a good, compelling story, so I tried to anticipate how it could be criticized by my audience. Where is my story weak? How might my story turn out to be false? I ran additional tests to rule out alternative hypotheses. I ran tests multiple times to improve my confidence that the results I was seeing were related to the processes and variables I was controlling and not coincidental events.

When I exhausted the concerns of my internal critic (and external critics I asked to review my work), I decided it was good enough.

A Short Story Can Be Just as Complete as a Novel

Perfect testing is potentially an infinite process. If complete testing means you have to run all possible tests, you will never finish. But you can say you’re done when you have a testing story with all the major plot points, and you can make the case that additional tests will probably not significantly change your story. Here’s the thing: Although you never know for sure if you have reached that point of diminishing returns, you don’t need to know for sure! All that’s required, all that anyone can expect of you, is that you have a compelling story for why a thoughtful and responsible tester like you might come to the judgment that you know enough about the product under test. In some situations, that will be months of testing; in other situations, only hours.

And maybe you don’t yet know how much testing that could be, because you are still in the middle of all that learning. You may have to walk the rest of the Yellow Brick Road, Dorothy before you get to click your heels and go home.

Plot Points for a Testing Story

A complete testing story answers the questions: What is the status of the product (bug, etc.)? How do you know (test strategy, including information about test coverage and oracles)? How good is that testing?

The testing usually unfolds in a complicated way. There are false starts. I report bugs that turn out not to be bugs. I investigate automation that might help. I try to secure the test environments I need, and often am only partly successful. I develop rich test data. Real testing is a complicated story, so I need to find ways to simplify. One way is by using Session-Based or Thread-Based test management. Another way is to simply not tell the whole story. I have to take care, though, because when I hide details of the testing, other people on the project may think that there isn’t much to testing.

One way that a lot of testers simplify the testing story is to hide it all behind test cases. Their story becomes “I wrote test cases. I ran test cases. The test cases passed.” There is no content to that story. It is generic and, I believe, vapid and irresponsible. You can do better! Talk about what those test cases mean. What do they cover? What risks do you investigate using them?

The concept of the testing story is not only about reporting, but it also helps you manage yourself. It helps you decide when enough is enough. For this reason, in the Rapid Software Testing Framework, the testing story has a central place.

+1 vote

This article highlights the essence and craft of finding bugs: the art a tester should inculcate while finding a bug, the various artifacts that go into reporting it, and the advocacy needed for bugs once they have been reported. A basic duty of a tester is to fight for a bug until it is fixed.

Introduction:

As testers, we all agree that the basic aim of the tester is to decipher bugs. Whenever a build arrives for testing, the primary objective is to find as many bugs as possible from every corner of the application. To accomplish this task to perfection, we test from various perspectives. We strain the application through various kinds of strainers: boundary value analysis, validation checks, verification checks, GUI checks, interoperability, integration tests, functional business-concept checks, backend testing (such as running SQL queries against the database, or injection attempts), security tests, and many more. This makes us drill deep into the application as well as the business.
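To make one of these strainers concrete, here is a minimal sketch of boundary value analysis written as a parameterized pytest test. The age limits, function, and names are assumptions made purely for illustration, not taken from any real application:

import pytest

# Hypothetical rule for illustration: a registration form accepts ages 18-60
# inclusive. Boundary value analysis exercises each edge and its neighbours.

def is_valid_age(age: int) -> bool:
    """Illustrative validation under test (not from the article)."""
    return 18 <= age <= 60

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected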

We would all agree that bug awareness is of no use until it is well documented. Here comes the role of BUG REPORTS. Bug reports are our primary work product; they are what people outside the testing group notice. These reports play an important role in various phases of the Software Development Life Cycle, as they are referenced by testers, developers, managers, senior leadership and, not to forget, the clients, who these days demand test reports. So, bug reports are remembered the most.

Once bugs are reported by testers and submitted to developers to work on, we often see confrontations: there are humiliations that testers sometimes face, there are cold wars, and discussions can take the shape of mini quarrels. At times testers and developers are actually saying the same thing, or both are correct, but the way each depicts their understanding differs, and that makes all the difference. In such situations, we arrive at a simple conclusion: the best tester is not the one who finds the most bugs or the one who embarrasses the most programmers, but the one who gets the most bugs fixed.

Bug Reporting – An Art:

The first aim of a bug report is to let the programmer see the failure. The bug report gives detailed descriptions so that the programmers can make the bug fail for them. If the bug report does not accomplish this mission, there can be pushback from the development team: not a bug, cannot reproduce, and many other responses.

Hence it is important that the BUG REPORT be prepared by testers with the utmost proficiency and specificity. It should describe the famous three What's (a worked example follows the lists below):

What we did:

  • Module, Page/Window – names that we navigate to
  • Test data entered and selected
  • Buttons and the order of clicking

What we saw:

  • GUI Flaws
  • Missing or No Validations
  • Error messages
  • Incorrect Navigations

What we expected to see:

  • GUI flaws – give screenshots with the flaw highlighted
  • Incorrect messages – give the correct language and message
  • Validations – give the correct validations
  • Error messages – justify with screenshots
  • Navigations – mention the actual pages
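Putting the three What's together, a bug report might be captured as structured data along these lines. This is a hypothetical Python sketch; the module names, test data, and field names are invented for illustration and are not prescribed by any particular bug-tracking tool:

# Hypothetical bug report following the three What's; every value is invented
# for illustration and would come from the actual test session.
bug_report = {
    "summary": "Server error while saving a new record in the Add Contact window",
    "what_we_did": {
        "module": "Contacts > Add Contact",
        "test_data": {"name": "Jane Doe", "phone": "012aaa@$%.-"},
        "steps": [
            "Open the Add Contact window",
            "Enter the test data above",
            "Click Save",
        ],
    },
    "what_we_saw": "Server error page; the contact is not saved",
    "what_we_expected": "Validation message for the invalid phone number, no server error",
    "attachments": ["screenshot_server_error.png"],
}

However the report is stored, the point is that each of the three What's is answered explicitly rather than left for the developer to reconstruct.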

Pointers to effective reporting can be derived from the three What's above. These are:

1. The BUG DESCRIPTION should be clearly identifiable – a bug description is a short statement that briefly describes what exactly the problem is. A problem might require 5-6 steps to reproduce, but this statement should still pin down what exactly the problem is. The problem might be a server error, but the description should be specific: Server error occurs while saving a new record in the Add Contact window.

2. The bug should be reported after building a proper context – the PRE-CONDITIONS for reproducing the bug should be defined so the reader can reach the exact point where the bug can be reproduced. For example: if a server error appears while editing a record in the contacts list, then the pre-condition should state that a new contact is first created and saved successfully; then double-click this created contact in the contacts list to open the contact details, make changes, and hit the Save button.

3. STEPS should be clear, with short and meaningful sentences – nobody wishes to study an entire paragraph of long, complex words and sentences. Make your report step-wise by numbering the steps 1, 2, 3, and so on. Make each sentence small and clear. Only write those findings or observations that are necessary for this particular bug. Writing facts that are already known, or things that do not help in reproducing the bug, makes the report unnecessarily complex and lengthy.

4. Cite examples wherever necessary – a combination of values or test data: most of the time, a bug can be reproduced only with a specific set of data or values. Hence, instead of writing an ambiguous statement like “enter an invalid phone number and hit save”, mention the data/value entered, for example: enter the phone number as 012aaa@$%.- and save.

5. Give references to specifications – if a bug contradicts the SRS or any functional document of the project, it is always proactive to mention the relevant section.

6. Report without passing any kind of judgment – the bug description should not be judgmental in any case, as this leads to controversy and gives an impression of bossiness. Remember, a tester should always be polite, so as to keep the bug alive and meaningful. Being judgmental makes developers feel as though testers claim to know more than they do, and as a result gives birth to psychological adversity. To avoid this, we can use the word “suggestion” and discuss it with the developers or the team lead. We can also refer to another application, or to a module or page in the same application, to strengthen our point.

7. Assign severity and priority – SEVERITY is the state or quality of being severe; severe implies adherence to rigorous standards or high principles. Severity tells us HOW BAD the bug is and defines its importance from a FUNCTIONALITY point of view. Severity levels can be defined as follows:

Show Stopper: a system crash, or an error message forcing the window to close; the system stops working totally or partially. A major area of the user's system is affected by the incident, and it is significant to business processes.

Medium/Workaround: a problem with something required in the specs, but the tester can go on with testing. It affects a more isolated piece of functionality, occurs only at one or two customers, or is intermittent.

Low: failures that are unlikely to occur in normal use; problems that do not impact the use of the product in any substantive way and have no or very low impact on business processes. In all cases, state the exact error messages in the report.

PRIORITY means something deserves prior attention. It represents the importance of a bug from the customer's point of view: it voices precedence established by urgency and is associated with scheduling the fix. Priority levels can be defined as follows:

High: This has a major impact on the customer. This must be fixed immediately.

Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development or a patch must be issued if possible.

Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release. (As a classic illustration of the difference, a typo in the company name on the home page may be low in severity yet high in priority, while a crash in a rarely used report may be high in severity yet low in priority.)

8. Provide screenshots – this is the best approach. Any error we can see – an object reference error, a server error, a GUI issue, a message prompt, or anything else – should always be saved as a screenshot and attached to the bug as proof. It helps the developers understand the issue more specifically.

...