Maintaining quality in an outsourced product development model takes considerable effort. Be sure you are well prepared, well before you even embark on this journey. Here are some tips.

Your technology partner is responsible for ensuring that the commitments made to you, their customer, are delivered as expected and that clients are satisfied at all times. Repeat business from clients and payments received as scheduled are a couple of important indicators of customer satisfaction. A Quality Assurance framework woven around the key processes is an important ingredient for ensuring consistent quality of work, client satisfaction and value for money.

Overview

The Quality Assurance paradigm consists of a process framework to govern the areas that impact quality. This process framework consists of the following elements:

  • Lead to Sales Ready Lead conversion
  • Project proposals & briefing sessions
  • Requirements identification and management
  • Project scheduling, risk & tasks management
  • Design Validation
  • Development Assurance
  • Testing & Validations

Lead to Sales Ready Lead

The process of lead sourcing, identification, qualification and conversion is governed by the Quality Assurance framework. Leads are qualified based on their budget, the contact's authority to take decisions, the intensity of their needs and how quickly they can be converted. Your technology partner therefore focuses (or should focus) on the qualification process, so that it takes on only those customers whose needs it can satisfy.

Project Proposals and Briefing Sessions

Commitments are key to determining quality. The quality process kicks off when a sales team member commits to deliverables with the customer. The technical commitments in a proposal are based on commitments by the delivery team at your vendor or technology partner.

The project manager’s feedback is taken on timelines, risks, dependencies, assumptions and constraints, so the customer knows what to expect. A briefing session is then proposed to explain the proposal to the client and obtain feedback. This helps set the right expectations, which in turn helps deliver a solution to the client’s satisfaction.

Requirements Identification & Management

The Business Analyst, along with the Project Manager, ensures that the requirements are clearly understood from the customer and shared internally within the team. A process of daily communication, documenting the requirements and obtaining customer confirmations and sign-offs, is designed so that the customer's requirements are not misunderstood.

Moreover, the requirements are configured in a Requirements Management Tool, or a centralized repository visible to every member of the team. Detailed requirement management meetings are arranged to confirm that the requirements are well understood by the team.

Project Scheduling & Task Management

The Project Manager governs the project and is responsible for estimation, scheduling, task management and monitoring project execution. The Project Manager informs clients about the risks involved in the project and updates the status of these risks throughout the project.

In addition, collaboration tools and automatic task schedulers ensure that planned activities are executed on schedule, thereby adhering to the commitments made to the customer.

Design Validation

A Technical Architect is responsible for the design and the architecture of the system. The design is not only prepared with inputs from the Business Analyst and the client, but is also approved by the customer before it is considered for implementation.

Development Assurance

Quality during development is ensured by:

  • Baselining coding standards, code structures, rules and any other policies to be followed.
  • Code walkthroughs to review whether the code meets coding conventions, the rules set and any organizational policies.
  • Code certification by the Technical Architect.
  • Unit testing by each software engineer (a minimal sketch follows this list).
  • Version controlling the code using a configuration management tool and labelling the releases.
  • Collaboration using automated tools.
  • Daily meetings to review progress and issues.
  • Client validation during and at the end of coding of each module.
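
To illustrate the unit-testing item above, here is a minimal sketch using Python's built-in unittest module. The discount_price function and its rules are hypothetical and stand in for whatever unit a software engineer would cover before the code walkthrough.

```python
# Minimal unit-testing sketch (hypothetical example): each engineer covers
# their own units with tests like these before the code walkthrough.
import unittest


def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount; hypothetical unit under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class DiscountPriceTests(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```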

Testing & Validations

Quality Assurance activities during test management involve a series of tests and validations. The primary activities in testing are:

  • Creation of a test plan & test cases – A test plan elaborates the procedure by which the features of each module will be tested. Test cases, on the other hand, represent the testing specifications, which are executed to observe the software's outputs for a set of inputs. The QA Analyst starts documenting the test plan and test cases early in the project lifecycle, so that they are fully reviewed and accepted before testing activities begin.
  • Automated Regression Suites – By automating tests that are run frequently, regression test suites save a lot of manual effort and are run often to catch regression errors or carry out routine testing (see the sketch after this list). Automation suites free up testing resources from mundane testing effort and bring a predictability to testing that manual testing can lack.
  • Creation of test data – Test data creation is an important aspect of software testing. Many bugs are reproduced only with adequate data and do not show up with inadequate data. Test data of considerable size is created by the QA Analyst and team for advanced functional testing and performance testing.
  • Black Box Testing – After the Lead Software Engineer approves the module based on integration testing, the QA Analyst performs black-box testing of the features and logs issues in an issue log. The issue log is addressed by the development team based on the priority of the bugs.
  • Client Validation – As soon as the module is approved by the QA Analyst based on black-box testing, it is shared with the client for validation and confirmation. Issues reported by the client are logged in the issue log, fixed by the development team, validated by QA and released in subsequent releases.
  • System Testing – The QA Analyst is responsible for testing the system at the end of the project. Issues and bugs are logged and tracked until they are closed; an issue management tool helps in this process. The Business Analyst also tests the software to verify that the requirements are met as desired.
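
As a concrete illustration of the automated regression suite and test data items above, here is a small pytest-style sketch. The validate_email function and the sample data are hypothetical; the point is that frequently run checks are automated and driven by prepared data rather than ad-hoc inputs.

```python
# Hypothetical regression-suite sketch: frequently run checks are automated
# (pytest style) and driven by prepared test data rather than ad-hoc inputs.
import re

import pytest


def validate_email(address: str) -> bool:
    """Hypothetical function under regression; True for a syntactically valid address."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None


# Test data prepared up front, including edge cases that exposed past bugs.
REGRESSION_DATA = [
    ("user@example.com", True),
    ("user.name+tag@example.co.uk", True),
    ("missing-at-sign.example.com", False),
    ("user@no-tld", False),
    ("", False),
]


@pytest.mark.parametrize("address,expected", REGRESSION_DATA)
def test_validate_email_regression(address, expected):
    assert validate_email(address) is expected
```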

System Testing focuses on the following areas to ensure that all aspects of testing are covered:

  1. Functionality Testing – This involves testing the features of the product, i.e. what the product does.
  2. Usability Testing – This includes capturing and stating requirements based on user interface issues, e.g. issues such as accessibility, interface aesthetics, consistency, browser compatibility, etc.
  3. Performance – Performance covers issues such as information-processing throughput, response time, start-up time and recovery time (a hypothetical response-time check follows this list).
  4. Supportability – This group of requirements address supporting the software, such as adaptability, maintainability, compatibility, configurability, scalability, localization and installation requirements.
  5. Beta Testing – This testing is done at the client’s end using a number of users or mock users. Because it closely represents usage by actual users, beta testing helps streamline the product for release to actual users.
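
As one concrete example of the performance item above, a response-time check might look like the sketch below. The fetch_dashboard function and the 500 ms budget are assumptions for illustration; a real test would call the system under test rather than simulate it.

```python
# Hypothetical response-time check for system-level performance testing:
# the operation must stay within an agreed time budget (assumed 500 ms here).
import time


def fetch_dashboard() -> dict:
    """Stand-in for the operation under test; a real test would call the system."""
    time.sleep(0.05)  # simulate work
    return {"widgets": 12}


def test_dashboard_response_time():
    budget_seconds = 0.5  # assumed service-level target
    start = time.perf_counter()
    fetch_dashboard()
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, f"dashboard took {elapsed:.3f}s (budget {budget_seconds}s)"
```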

Maintaining Quality in an outsourced product development model

With teams across continents, it is important to focus on quality and ensure that the right processes are established throughout product development and the project lifecycle, in order to maintain quality uniformly across teams.

We have also witnessed the need for quicker, better releases over time, and this has led us to carve out a framework for outsourced development. The framework is based on the widely accepted Agile methodology and is therefore applicable globally, both to existing users of Agile methodologies and to companies that want to transition their operations and adopt it.

Quality is a key attribute in this framework and is rooted in the Agile Manifesto, which values “working software over comprehensive documentation”. This is in tune with J. M. Juran, one of the TQM gurus, who coined the phrase “fitness for use” to define product quality. We believe that by consistently developing products using this framework, one can achieve higher levels of product owner satisfaction.

While a traditional outsourced software development project usually has a detailed Quality Plan against which project quality is monitored, such a plan may not be required in an Agile framework. Quality assurance and control, which are significant efforts in traditional projects, are built into the framework and ensured by the Agile process. Here are a few practices in our framework that are effective for maintaining quality norms:

Product Owner Integral to the Team

Since the Product Owner is the person responsible for the requirements, and the team follows these requirements, the first step to ensuring that the product is developed as per requirements is to have the Product Owner as part of the team (yes, you can still reap the benefits of outsourced software development). The Product Owner therefore participates in the daily scrum meetings and all other meetings, which keeps development well aligned to the requirements.

Software for Release Every Iteration

Though meeting the functional requirements of the Product Owner may be the priority, it is not the only criterion met during each release. As multiple releases are planned across iterations, each release is carried out with all the basic requirements for a release in place, such as:

  • meeting the Product Owner’s expectations for the specific Time box,
  • having the best design for the features implemented already (by refactoring wherever there is a need),
  • meeting the coding standards set (by code review and corrections),
  • testing satisfactorily by the relevant stakeholders and the team, and
  • arranging for easy maintenance of each release.

Product Reviews

There are two kinds of product reviews:

  • Scheduled Product Reviews: The product is formally reviewed towards the end of each iteration. If required, more scheduled reviews can be organized. The observations from the reviews are documented and assigned to the team; progress is discussed during the daily review meetings and any required changes are incorporated into the plan.
  • Unscheduled Product Reviews: Since the Product Owner is part of the team, there are opportunities to review the product informally from time to time as well. The review comments are noted, and the action items from these meetings are shared with the team for further action.

Moreover, the product also gets reviewed during the daily scrum meetings to determine whether the right approach is being followed.

Testing Framework – a Test Driven Development Framework

The outsourced software development testing framework is a Test Driven Development framework. The tests written represent the requirements or user stories. Needless to say, the testing team is at the forefront of understanding the requirements from the Product Owner and writing the test specifications.

The different types of testing that the testing team is involved in are:

  • Acceptance Tests: The requirements in agile development, also called user stories, represent a high-level description of the business rules, or the external behaviours of the system. Each user story corresponds to at least one acceptance test case. Acceptance test cases are usually reviewed by the Product Owner to ensure that they capture the Product Owner’s intent correctly (a minimal sketch follows this list).
  • Unit Tests: Both automated and manual unit tests are written to ensure that issues at the unit level, in particular, are addressed. Automated unit tests also help in regression testing.
  • Regression Tests: Where possible, acceptance tests and unit tests are automated so that they can act as a regression suite. This suite validates that changes in the code do not affect basic functionality or reintroduce issues fixed in previous iterations. As writing regression test suites involves some effort, they are carefully selected, and only those test cases that are run repeatedly are converted into the regression suite.
  • Exploratory Testing: This testing, as the name suggests, is exploratory in nature and can be used for root-cause analysis and for identifying show-stoppers such as system crashes or unhandled errors, which are usually fixed immediately, as well as less serious problems, which might be deferred. This testing can lead to new user stories that get added to the backlog.
  • Specialist Testing: Specialist testing covers extra testing activities, such as performance testing, that require the help of specialists, or any other critical area where the regular testing team does not have the capability.
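
To make the acceptance-test item above concrete, here is a minimal sketch. The user story, the Cart class and the free-shipping rule are hypothetical; the point is that the test mirrors the externally visible business rule the Product Owner described.

```python
# Hypothetical acceptance-test sketch: the test mirrors a user story such as
# "As a shopper, I get free shipping when my cart total is 50 or more."
import unittest


class Cart:
    """Stand-in for the system under test."""

    def __init__(self):
        self.items = []

    def add(self, price: float) -> None:
        self.items.append(price)

    def total(self) -> float:
        return sum(self.items)

    def shipping_cost(self) -> float:
        return 0.0 if self.total() >= 50 else 5.0


class FreeShippingAcceptanceTest(unittest.TestCase):
    def test_cart_of_fifty_or_more_ships_free(self):
        cart = Cart()
        cart.add(30.0)
        cart.add(25.0)
        self.assertEqual(cart.shipping_cost(), 0.0)

    def test_small_cart_pays_shipping(self):
        cart = Cart()
        cart.add(10.0)
        self.assertEqual(cart.shipping_cost(), 5.0)


if __name__ == "__main__":
    unittest.main()
```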

Development / Code Review

Code review is done either by:

  1. Traditional code reviewer method: A code reviewer is assigned to review all the developers’ code.
  2. Two pairs of eyes approach: In this approach, the code is developed by two developers. While one is coding, the other is reviewing, ensuring that the agreed coding standards are followed, design guidelines are met, and the code is easily understood by a developer other than its author.

Metrics

A set of metrics is used to better govern the parameters of performance and improve the overall quality of work:

  • Drag Factor – The effort in hours that does not contribute to iteration / sprint goals. It can be reduced by reducing the number of shared resources and other non-contributing work. Estimates are adjusted based on the Drag Factor:

New Estimate = Old Estimate + Drag Factor

  • Velocity – This indicates the amount of backlog that can be converted into shippable functionality within a sprint. It reflects the capacity of the team to complete requirements within a sprint and can be used as an input for estimations (see the sketch after this list).
  • Number of unit tests or acceptance tests added
  • Time taken to complete daily build
  • Bugs detected per iteration
  • Production defect leakage
  • Test coverage
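
A small sketch of how the Drag Factor adjustment and Velocity described above might be tracked. The sample figures and function names are hypothetical and purely illustrative.

```python
# Hypothetical sketch of the two planning metrics described above.

def adjusted_estimate(old_estimate_hours: float, drag_factor_hours: float) -> float:
    """New Estimate = Old Estimate + Drag Factor (both in hours)."""
    return old_estimate_hours + drag_factor_hours


def velocity(completed_story_points: list[int]) -> float:
    """Average story points turned into shippable functionality per sprint."""
    return sum(completed_story_points) / len(completed_story_points)


if __name__ == "__main__":
    # Assumed sample figures for illustration only.
    print(adjusted_estimate(80, 12))   # 92.0 hours planned for the next iteration
    print(velocity([21, 18, 24]))      # 21.0 points per sprint, used as an estimation input
```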

Continuous Integration

Continuous Integration requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, usually run at least daily, allowing teams to detect problems early.

This can be extended by automatically running a regression test every time a build is deployed. Running an automated regression test frequently means defects are highlighted soon after they are introduced (i.e. the build goes Red, or fails). The team’s top priority is then to get the build Green again.
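
As a rough illustration of this Red/Green gate (not any specific CI product), the sketch below runs a build step and the regression suite on each integration and reports the result. The commands and directory layout are assumptions; a real team would wire this into its CI server.

```python
# Rough continuous-integration gate sketch: run the build and the automated
# regression suite on every integration and report Red/Green. The commands
# and the src/tests layout below are assumptions for illustration.
import subprocess
import sys


def run(step_name: str, command: list[str]) -> bool:
    print(f"--- {step_name} ---")
    result = subprocess.run(command)
    return result.returncode == 0


def main() -> int:
    steps = [
        ("Build", ["python", "-m", "compileall", "src"]),            # assumed layout
        ("Regression suite", ["python", "-m", "pytest", "tests"]),   # assumed layout
    ]
    for name, command in steps:
        if not run(name, command):
            print(f"BUILD RED: {name} failed - fixing this is the team's top priority")
            return 1
    print("BUILD GREEN: safe to integrate further changes")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```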

Informative Workspace

Finally, project-related dashboards and infographics are useful for providing a management overview of the project. Ideally, this workspace shows the burn-down charts, the sprint / iteration plan, the current build status and additional metrics. This gives higher management a better view, so they can step in when there is a need.
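
As an example of one such dashboard element, here is a small sketch that prints a text burn-down for a sprint against an ideal straight-line trend. The sprint figures are made up for illustration.

```python
# Hypothetical text burn-down for an informative workspace dashboard:
# remaining work per day against an ideal straight-line trend.
def print_burn_down(total_points: int, remaining_by_day: list[int]) -> None:
    days = len(remaining_by_day)
    for day, remaining in enumerate(remaining_by_day, start=1):
        ideal = round(total_points * (days - day) / days)
        bar = "#" * remaining
        print(f"Day {day:2d} | remaining {remaining:3d} (ideal {ideal:3d}) {bar}")


if __name__ == "__main__":
    # Assumed sprint figures for illustration only.
    print_burn_down(total_points=40, remaining_by_day=[36, 33, 28, 25, 18, 12, 7, 0])
```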

Retrospective Meetings

Retrospective meetings are planned to reflect on the activities performed, find out how the team performed during the last sprint, and assess the good practices and the missteps. Planning for the next sprint takes into account the learnings from this meeting. In the long run, these meetings help considerably in improving the quality of development (even more so in outsourced software development).

