Increasing volumes of new, digital technologies, more connectivity, and escalating levels of data in applications demand a smarter approach to QA and Testing operations. This includes intelligent test automation and smart analytics that enable informed decision making, fast validation and automatic adaptation of test suites. Manual testing can be highly subjective and, as such, is prone to error. On average, 30% of an existing test set is typically irrelevant and doesn't tell you anything you didn't already know from other test cases.
By applying an intelligent approach you can enable QA and Testing teams to deliver quality with speed in a complex connected world at optimized cost. We call this Cognitive QA.
Whether you are just starting your journey to Cognitive QA or are already underway, this 20-minute webcast by two of our Digital Assurance & Testing experts, Mark Buenen and Andrew Fullen, will provide useful insight into the key steps and ingredients you need to consider to ensure you deliver an optimized approach.
A clear understanding of all your information sources is key. What kind of data do you have available and is it accessible? How good is the quality of the data? What intelligence can you derive from that data and what does it enable you to deliver? 20 minutes could make all the difference.
Analyzing Software Repositories to Improve Testing – New Oil or Snake Oil?
At Sogeti we’re urging our clients to apply analytics to the software repositories being used on a day-to-day basis. Why? Quite simply, to improve the way they test. But I’m occasionally asked if such software analytics really is helpful, or is it simply the new hype – a case of new, beneficial oil versus snake oil. My answer is unequivocal: analyzing software repositories really does improve testing quality.
These repositories hold a wealth of information about how people collaborate to build software. It’s information that can be mined to better understand prior experience and dominant patterns, and from which historical data can be extracted to use as predictors.
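As a minimal sketch of what "historical data as predictors" can mean in practice, the snippet below ranks files by the share of their past changes that were bug fixes. The commit records, file names, and the `fix_density` helper are illustrative assumptions, not part of any Sogeti tooling.

```python
from collections import Counter

# Hypothetical commit log mined from a version-control repository:
# each record lists the files touched and whether the commit was a bug fix.
commits = [
    {"files": ["auth.py", "db.py"], "bug_fix": True},
    {"files": ["auth.py"], "bug_fix": True},
    {"files": ["ui.py"], "bug_fix": False},
    {"files": ["auth.py", "ui.py"], "bug_fix": False},
    {"files": ["db.py"], "bug_fix": True},
]

def fix_density(commits):
    """Rank files by the fraction of their changes that were bug fixes --
    a simple historical predictor of where new defects may appear."""
    touched, fixed = Counter(), Counter()
    for c in commits:
        for f in c["files"]:
            touched[f] += 1
            if c["bug_fix"]:
                fixed[f] += 1
    return sorted(
        ((f, fixed[f] / touched[f]) for f in touched),
        key=lambda pair: pair[1],
        reverse=True,
    )

ranking = fix_density(commits)
print(ranking)  # db.py (2/2) first, auth.py (2/3) next, ui.py (0/2) last
```

Real mining would pull these records from the repository itself (e.g. the commit log and issue tracker), but the ranking idea is the same.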
But, of course, in a collaborative product development environment, this data is spread across different repositories and represented in different forms. In the development and test (engineering) cycle, for example, there are a number of entities/artifacts, such as Test, Code, Requirements and Defect, and there are multiple repositories that also contain information providing customer insights.
Learning from diverse data categories
For each of these entities, data falls into different categories:
The volume and disparate nature of this data raises a number of challenges, notably around data availability, data quality, and data accessibility. So how does analytics help? We can learn from the data we analyze. This is nothing new. Decades of research in analytics, visualizations, statistics, artificial intelligence (AI), etc. has generated a large number of powerful methods for learning from data.
Making AI part of your analytics toolkit
And AI is increasingly part of the analytics toolkit. Based on recent progress specifically in the field of Deep Machine Learning, there is a growing conviction that AI has the potential to significantly improve the way we leverage IT in pretty much any business domain. Analysts project substantial double digit growth rates for AI-empowered business in the coming years. IDC expects spending on AI technologies by companies to grow to $47 billion in 2020 from a projected $8 billion in 2016 (The Wall Street Journal, January 11, 2017).

But, let’s take a step back and consider the huge value of analytics and AI in testing – explaining why it is true oil, not snake oil. I firmly believe that the testing function can improve and optimize the test strategy through better analytics of software repositories. In particular, there are four applications for analytics in this area that yield highly beneficial outcomes:
Right data, right quality, right access
To get to these outcomes, however, the testing function must first identify the right project or program with the right data sources on which to apply analytics activity; and then assess whether the data sources are analytics ready by carrying out a pre-assessment to check for data quality, data availability and data accessibility. This is an elaborate process whereby numerous characteristics can establish a project as a candidate for applying analytics to improve the way testing is undertaken. These characteristics include:
There are other challenges that must be overcome on this journey, such as a lack of integration between software repositories, poor data quality due to process violations, and an inability to validate recommendations due to a lack of SME bandwidth and the required skillset (data scientists and testing experience). But the huge value of software repository analytics makes the effort worthwhile.
From optimized test coverage, the auto generation of test scripts, and test sets aligned with real application usage, to the ability to predict and advise on the marginal value of additional testing to ensure release readiness, and better visualization of quality (e.g. cost of a test bug, test efficiency, etc), analytics is changing the game for software testers.
Taking an intelligent approach to QA
At Sogeti, we recommend analytics as one of the key steps towards Cognitive QA, where AI, robotics, analytics and automation come together with human insight and reasoning to transform QA and testing into a true business enabler.
With our intelligent Cognitive QA approach, we enable smart quality decision making based on factual project data, actual usage patterns and user feedback. It’s how we ensure our clients’ QA and Testing operations deliver quality with speed in a complex connected world at optimized cost.
Senior Director - Technology, Product Engineering Services V&V
Cognitive QA: Towards Wise Quality Assurance
The use of predictive techniques and artificial intelligence in Software Quality Assurance is not a futuristic approach. It is the present, and Sogeti is pushing it through innovative Cognitive QA solutions. The aim is twofold: on the one hand, prediction to enhance anticipation; on the other, using those predictions to automate QA activities through Artificial Intelligence, making testing and quality assurance wiser. The reason is clear: “The aim of the wise is not to secure pleasure, but to avoid pain” (Aristotle).
Prediction is feasible if we capture experience (through structured data accumulation) and combine it with predictive models able to use that experience in an “intelligent” way (a wise vision). This combination produces Cognitive Quality Assurance environments that can enhance quality activities and their optimization. Experience is an essential ingredient for driving future decisions, but how that experience is managed and processed determines whether those decisions are better or worse. These are general statements about life, but they apply equally to quality management.
Nowadays, in most of the projects we are able to accumulate huge amounts of data about development, testing, customer experience, defects, user rates, organizational data… When such data is consolidated into unified data models to become information to be analyzed through Business Intelligence techniques, we are then able to formalize analysis through Key Performance Indicators (KPIs) and visualize them through dashboards aimed at taking data-driven decisions without a blindfold.
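To make the KPI idea concrete, here is a minimal sketch of deriving two dashboard indicators from consolidated execution data. The record structure, field names, and the `kpis` helper are illustrative assumptions, not a real unified data model.

```python
# Illustrative consolidated test-execution records; field names are assumed.
runs = [
    {"test": "TC-01", "passed": True,  "defects_found": 0},
    {"test": "TC-02", "passed": False, "defects_found": 2},
    {"test": "TC-03", "passed": True,  "defects_found": 0},
    {"test": "TC-04", "passed": False, "defects_found": 1},
]

def kpis(runs):
    """Derive two simple dashboard KPIs from consolidated execution data."""
    total = len(runs)
    pass_rate = sum(r["passed"] for r in runs) / total
    defects_per_run = sum(r["defects_found"] for r in runs) / total
    return {"pass_rate": pass_rate, "defects_per_run": defects_per_run}

print(kpis(runs))  # {'pass_rate': 0.5, 'defects_per_run': 0.75}
```

In a real Business Intelligence setup these figures would feed a dashboard rather than a print statement, but the data-to-KPI step looks much the same.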
But what if we could process this information using statistical models that predict tendencies, risks and expected efforts in the near future? In fact, although prediction is an innovative challenge in the software quality field, it is already a commonly used technique in other fields (medicine, weather forecasting, etc.). And what if we could benefit from those predictions to take automated and assisted actions (selecting test cases to be automated, optimizing resource assignment for test cases to be executed in an iteration, filtering test cases based on expected risk, and so on)? And what if we could feed observed information into new reasoning procedures to push machine learning in these environments?
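One of the assisted actions mentioned above, risk-based filtering of test cases, can be sketched with a simple weighted score. The inputs (historical failure rate, recent code churn) and the weights are illustrative assumptions; a production model would be fitted from data rather than hand-tuned.

```python
def risk_score(test, w_fail=0.6, w_change=0.4):
    """Blend a test's historical failure rate with recent code churn in the
    area it covers; a higher score means run it earlier. Weights are
    illustrative, not fitted."""
    return w_fail * test["failure_rate"] + w_change * test["churn"]

# Hypothetical regression tests with historical metrics on a 0-1 scale.
tests = [
    {"name": "login",    "failure_rate": 0.30, "churn": 0.8},
    {"name": "checkout", "failure_rate": 0.10, "churn": 0.2},
    {"name": "search",   "failure_rate": 0.05, "churn": 0.9},
]

prioritized = sorted(tests, key=risk_score, reverse=True)
print([t["name"] for t in prioritized])  # ['login', 'search', 'checkout']
```

A statistical model (e.g. logistic regression over past iterations) would learn the weights instead, but the output is the same kind of ranked, risk-filtered list.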
These are not simply future questions. This is our present at Sogeti.
Research & Innovation Solutions Lead. Digital Assurance and Testing
The Rise of the Modern Robot in Cognitive QA
My professional hobby is “Robotesting”. It’s a rapidly-changing world and I’m constantly thinking about what we’re currently able to do with robots, what’s on the near horizon, and what robotics might enable further into the future. What’s clear, is that robots are increasingly part of our lives, both in the world of software testing and in our own homes.
Of course, robotics is a very broad term. It might be artificial intelligence (AI) in a machine, or chatbots, or physical robots. In digital assurance and testing, within Sogeti’s Cognitive QA offering for example, AI makes it possible to generate test reports that support a test manager – typically, something that takes a lot of work without robots.
Cognitive QA is all about making use of AI and robotics to enable Quality Assurance and testing activities to work smarter and faster. Alongside the example above, Cognitive QA also supports the selection of the optimal test set out of an existing regression test suite, since regression tests often extend over time and are never re-evaluated or condensed.
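Selecting an optimal subset from a bloated regression suite can be sketched as a greedy set-cover problem. The coverage map, test names, and `reduce_suite` helper below are illustrative assumptions, not Sogeti's actual algorithm.

```python
# Hypothetical coverage map: which requirements each regression test exercises.
coverage = {
    "T1": {"R1", "R2"},
    "T2": {"R2", "R3"},
    "T3": {"R1", "R2", "R3"},
    "T4": {"R4"},
}

def reduce_suite(coverage):
    """Greedy set-cover: repeatedly pick the test covering the most
    still-uncovered requirements. Near-minimal, not guaranteed optimal."""
    remaining = set().union(*coverage.values())
    chosen = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break  # leftover requirements no test covers
        chosen.append(best)
        remaining -= coverage[best]
    return chosen

print(reduce_suite(coverage))  # ['T3', 'T4'] covers R1-R4 with two tests
```

Here two tests replace four with no loss of requirement coverage, which is exactly the kind of condensing a never-re-evaluated regression suite tends to need.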
My testing vision is that we will have AI systems supporting testers in creating test cases, releasing testers from the onerous task of doing this themselves. I use the word ‘vision’ here, but, I’ve already been talking to a tool vendor who’s working on going down this route, so I view this as a near-horizon prospect.
From software to physical products
I also see Cognitive QA being useful in the testing of physical products, not just software. As an example, my engineering colleagues in Toulouse, France have already developed a robotic arm that physically tests equipment in an aircraft’s cockpit. Importantly, apart from taking over tedious input jobs, the robot doesn’t make mistakes when it comes to pushing the right button. And it is able to register output data from the screen in the cockpit.
This brings me to data within the application lifecycle and, more specifically, the impact of Cognitive QA on test data. Artificial intelligence can be used to generate test data and I am currently looking at this in more detail because it’s clear that many companies struggle with the privacy aspect of using personal data in test environments. And with the General Data Protection Regulation (GDPR) on the horizon, I’m investigating how AI might analyze production data and then generate artificial test data – based on the actual characteristics of the real data – to define the correct test cases without violating privacy rules.
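A minimal sketch of the idea of generating artificial test data from the characteristics of real data follows. The production records, field names, and the `synthesize` helper are illustrative assumptions; a serious implementation would also guard against re-identification, which this toy fitting of per-field statistics does not.

```python
import random
import statistics

# Illustrative production-like records (no real personal data).
production = [
    {"age": 34, "country": "NL"},
    {"age": 41, "country": "FR"},
    {"age": 29, "country": "NL"},
    {"age": 52, "country": "DE"},
]

def synthesize(records, n, seed=0):
    """Fit simple per-field statistics, then sample artificial records that
    match the distribution shape without copying any real individual."""
    rng = random.Random(seed)
    ages = [r["age"] for r in records]
    mu, sigma = statistics.mean(ages), statistics.stdev(ages)
    countries = [r["country"] for r in records]
    return [
        {"age": max(0, round(rng.gauss(mu, sigma))),
         "country": rng.choice(countries)}  # frequencies mirror the source
        for _ in range(n)
    ]

fake = synthesize(production, 3)
print(fake)  # three artificial records drawn from the fitted distributions
```

The test environment then works with data that behaves like production data, shape- and distribution-wise, while containing no actual personal records.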
People are still required
But let’s not get too carried away and assume a world in which it’s all robots and no human intervention – well, not yet at least! There remains the challenge of analyzing all the different reports and anomalies from executing test cases generated by AI. This still requires the reasoning and insight of the human tester. Yes, real people will still be needed for investigating defects. Having said that, moving forward, AI will become smarter and better able to support testers in this aspect of the application lifecycle too.
Finally, I find the speed at which the use of robots is becoming mainstream fascinating. I often run workshops and presentations, and just two years ago the response to my question ‘do you have robots at home?’ was typically blank stares. Now, without fail, in every group someone is using a robot – such as a robotic vacuum cleaner or a robot lawn mower. While the AI in these is not huge, it’s already there and will become increasingly smarter.

Thus, IT people and testers must buy into the future (and current) potential of robotics. Sogeti’s Cognitive QA is ahead of the game in this respect and, for my part, it’s a great place to be.
Management Consultant Quality & Testing
Traditional Digital Assurance and Testing is facing serious challenges – can Cognitive QA help?
Digital has changed the game in the world of QA and Testing. The rapidly escalating uptake of social media, mobility, business analytics, Internet of Things and cloud solutions is seeing an increasing proportion of QA and Testing budgets being consumed by digital. Assuring the efficacy of all things digital (Digital Assurance) raises several challenges, which Sogeti believes can be resolved, in part, by greater use of Cognitive QA.
So, let’s take a closer look at some of the key challenges Digital Assurance is facing today.
Expertise availability: Digital Assurance is traditionally a people-centric domain. The craftsmanship and advice of experts is a key cornerstone. And since cloning of experts is not a viable option, the availability of expert resources is a familiar bottleneck. But even the best expert has his or her limitations: the speed of innovation and variety of technological challenges make it difficult to stay up-to-date. Additionally, sharing of best practices and leveraging lessons learnt across the company is a constant area for improvement.
Siloed data: Digital Assurance professionals typically use data from Development and Testing repositories. While Operations repositories also contain valuable information, e.g. in application logs and usage data, it is not common practice to leverage them. On a positive note, these data silos are being dismantled with the advent of DevOps removing the boundaries between Dev, Test, and Ops departments. In this context, we are seeing empowered teams approaching Digital Assurance far more comprehensively. Some argue, though, that these teams could become the new ‘silos’.
Internal focus: Initially, Digital Assurance focused primarily on the correct functioning of IT systems. This focus has since expanded with the advent of the Internet and mobile apps to embrace more end user related aspects, such as end-to-end performance and usability, both of which have gained increasing importance. Nonetheless, there is still room for improvement when it comes to this user experience (customer centricity) within Digital Assurance practice. External data sources, such as social media, contain precious information on user experience that is seldom leveraged.
Increasing speed and complexity: Digital disruption is forcing companies to compete as if in a high-speed race – bringing new services to the market in ever decreasing intervals. Since most innovations are IT-defined, this implies that the delivery cycle time is constantly being reduced. At the same time, the complexity of IT solutions is increasing at a similar pace. App architectures with loose coupling and microservices will only increase the number of moving parts to manage. Coping with this increasing complexity in ever shorter delivery cycles requires solutions that go beyond established automation approaches.
So, back to my headline question of whether Cognitive QA can help to resolve these challenges. I believe it can. How? By applying analytics and artificial intelligence (AI) in the Digital Assurance domain. At Sogeti, we have developed an approach to guide clients on their journey to future-proof, data-driven QA utilizing Cognitive QA. We’ve already seen significant improvements in the first client projects using this approach. It recognizes that Digital Assurance and Testing needs not just AI and automation, but the human creative, questioning, and reasoning elements of testing as well. Our objective is to bring them all together in a Digital Assurance model that is fit for today’s increasingly digital enterprise.
Stay tuned to learn more about Cognitive QA!
VP Digital Assurance and Testing Services at Sogeti