Thursday, June 26, 2008

10+ priorities for testing critical systems

10+ priorities for testing critical systems: "

Testing is essential to make sure that systems function as expected, but the process can be complex and overwhelming. Rick Vanover looks at some testing strategies that will help you focus on what’s important and make your installations and upgrades go smoothly.





IT folks grumble at arduous inventories of test plans and scenarios, but the fact is that testing should be made a priority for critical systems. So what can we do to make testing effective and thorough? Here are 10 things to stress in your test environments to avoid surprises and present credible test results.




#1: Make your test environment represent the live environment


Having a test environment that’s quite different from the live environment is not effective. A good example is a Windows Active Directory domain: a separate, empty domain with no real configuration is not representative of a production domain with highly customized Group Policy configurations, complex DNS configurations, multiple domain trusts, many group memberships, and a large number of account objects. Virtualization is a good option here: You can promote a domain controller on a virtual machine, move it to an isolated network for the testing, and then remove it from the live domain.


#2: Have multiple disciplines of the test achieve the same result


In outlining the steps of a test, identify components that can be tested two different ways to obtain the same result. For example, if you are considering moving to a new version of Windows Active Directory, perform both an Active Directory authoritative restore and a full system backup restore within the test environment to ensure that each brings the system back to a workable state. This can be beneficial if, in the real world, one mechanism fails. Another strategy is to have one person prepare the test plan and another person implement it, to ensure that the plan is clear and that nothing is taken for granted or assumed in the testing.
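As a rough illustration, here is a minimal Python sketch of running the same validation checks after each restore method and comparing the outcomes. The check names and pass/fail values are hypothetical placeholders for whatever your own environment actually requires.

```python
# A minimal sketch, not a real test harness: run the same validation checks
# after each restore method and compare the outcomes. The check names and
# values are hypothetical placeholders for your own environment.

def validate_system():
    """Run the same post-restore checks regardless of which restore method was used."""
    return {
        "directory_service_running": True,       # e.g., confirm the service started
        "object_count_matches_baseline": True,   # e.g., compare counts to a known baseline
        "scripted_logon_succeeds": True,         # e.g., a scripted test logon completed
    }

def compare_methods(authoritative, backup):
    """Print any check where the two restore paths disagree."""
    for check in authoritative:
        if authoritative[check] != backup[check]:
            print(f"MISMATCH {check}: authoritative={authoritative[check]} backup={backup[check]}")
        else:
            print(f"OK {check}")

# Record results after the authoritative restore...
authoritative_results = validate_system()
# ...then reset the test environment, perform the full system backup restore, and record again.
backup_results = validate_system()

compare_methods(authoritative_results, backup_results)
```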


#3: Test the rollback!


For test plans that revolve around an upgrade or enhancement to an existing system, you should also test the reversion process. You can test this multiple ways, depending on the context of the upgrade. Some strategies include pulling one drive from a RAID 1 mirror before the upgrade (the removed drive remains unchanged), performing a full restore from backup, using the upgrade's uninstall functionality, restoring database backups, or simply installing on new equipment only, with the current system turned off during the upgrade.
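However you roll back, verify that the system really did return to its pre-upgrade state. As one hedged example, the Python sketch below snapshots file checksums before the upgrade and compares them after the rollback; the path is hypothetical and would need to point at whatever directories matter in your environment.

```python
# A minimal sketch, assuming the rollback should leave these files exactly as
# they were: snapshot checksums before the upgrade, compare after the rollback.
import hashlib
from pathlib import Path

def snapshot(root):
    """Return {relative path: sha256 digest} for every file under root."""
    digests = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

before = snapshot(r"C:\App\Config")   # capture before the upgrade
# ... perform the upgrade, then the rollback, in the test environment ...
after = snapshot(r"C:\App\Config")    # capture after the rollback

missing = set(before) - set(after)
changed = {p for p in before if p in after and after[p] != before[p]}
print(f"After rollback: {len(changed)} files changed, {len(missing)} files missing")
```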


#4: Don’t proceed without the testing


If situations arise that cut into the test phase, take a stance that the testing is an important part of the overall project. Depending on the situation, this may be a difficult case to make or it may have political consequences. If it boils down to someone else deciding you can’t do the testing but you’ll have to take the blame if it does not work, raise the red flag!


#5: Remember the goal of testing: No surprises during a go-live


Surprises are the last thing you want during a go-live. Thorough (representative) testing helps prevent “learning experiences” once the new system is in use. Of course, testing can’t be 100 percent like the actual environment, so there is always a risk of something new arising. For example, if you test a new version of a software product under a simplified security model, with every user granted more permissions than required, the security model will need adjustments to meet operational requirements when you go live. This can cost valuable time and introduce risk. Thorough testing would include a documented procedure for the security configuration, or scripts to run that configure the live system exactly as it was configured in the test environment.
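As a hedged sketch of what such a script might look like, the following Python compares the permissions documented during testing against an export from the live system and reports the discrepancies. The file names and the {user: [permissions]} structure are assumptions, since every product exports its security configuration differently.

```python
# A minimal sketch, assuming both the documented and live configurations can be
# exported as JSON in a {user: [permissions]} shape; treat the file names and
# structure as assumptions and substitute whatever your product can export.
import json

with open("documented_permissions.json") as f:  # produced during testing
    documented = json.load(f)
with open("live_permissions.json") as f:        # exported from the live system
    live = json.load(f)

for user, expected in documented.items():
    actual = set(live.get(user, []))
    extra = actual - set(expected)
    missing = set(expected) - actual
    if extra or missing:
        print(f"{user}: extra={sorted(extra)} missing={sorted(missing)}")
```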


#6: Use pre-existing resources and testing standards


We may not all be certified testers, but we can leverage existing resources to deliver a credible test for our IT environments. Some good starting points include the Standard Performance Evaluation Corporation and a quick Internet search for sample test plans. If you do not have rigid requirements for testing, you will have some freedom in developing your test plan. Be sure to give the plan careful thought and make it comprehensive. The Sara Ford blog on MSDN gives a good perspective on how to develop a test specification, which is slightly different from a test plan.


#7: Assume nothing


Sure, your testing will provide an exercise in the rudimentary tasks associated with your environment, but small pieces of functionality may also be affected by an upgrade. Depending on your project, this can include extra options, permissions changes, and log file changes. This can come into play if you have built monitoring around a system’s log file behavior: if there is a slight change in the way the log is written after an upgrade, the monitoring system may need a review. By going through the steps, even for the elementary tasks, you reduce the risk of little things getting in the way of the project as a whole.
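One way to catch this, sketched below in Python, is a quick regression check that new log lines still match the pattern the monitoring system parses. The pattern and file name are hypothetical stand-ins for whatever format your monitoring actually depends on.

```python
# A minimal sketch, assuming the monitoring parses timestamped lines like
# "2008-06-26 10:15:00 ERROR service message". The pattern and file name are
# hypothetical; substitute the format your monitoring actually depends on.
import re

EXPECTED = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} (INFO|WARN|ERROR) \S+ .+$")

unmatched = []
with open("application.log", encoding="utf-8") as log:
    for number, line in enumerate(log, start=1):
        line = line.strip()
        if line and not EXPECTED.match(line):
            unmatched.append((number, line))

if unmatched:
    print(f"{len(unmatched)} lines no longer match the expected format; review the monitoring rules.")
    for number, line in unmatched[:10]:
        print(f"  line {number}: {line}")
else:
    print("Log format still matches what the monitoring expects.")
```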


#8: Use project management to coordinate testing


Having project management and a management sponsor will give credibility to your testing. It allows other areas of the organization to understand that the testing is essential, and it gives your management a better idea of the testing steps. Simply saying that you’re testing the new version of XYZ is not as effective as reviewing the project plan with management, sharing the status of the test plan, and collaborating on the testing with multiple parties. Ensure that the test plan document is available to the project manager or management sponsor for an ongoing view into the progress; this will give them a good idea of the work and challenges related to the testing you have laid out.


#9: Ensure that test failures are repeatable


Almost every test plan will incur some part of a test that results in a failure. With test systems, many administrators may be testing at once or changing configuration, which may affect the testing. Should a failure occur within the testing, note it and attempt to repeat it. Further, ask other testers to perform the test to see if it fails for them as well. If the failure or issue is critical to the overall success of the project, engage the product’s support resources to identify the issue if possible. Depending on the scope of the failure, the overall project may not need to be stopped, and this identification process can bring expectations in line with the end state.
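As a loose illustration, the Python sketch below reruns a failing check several times and records each outcome, which helps distinguish a repeatable defect from interference by other testers; run_test() is just a placeholder for whatever scripted check failed.

```python
# A minimal sketch: rerun a failing check several times and record each outcome
# before escalating. run_test() is a placeholder for whatever scripted check failed.
import datetime

def run_test():
    """Placeholder: return True if the check passes, False if it fails."""
    return False

ATTEMPTS = 5
outcomes = []
for attempt in range(1, ATTEMPTS + 1):
    passed = run_test()
    outcomes.append(passed)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    print(f"{stamp} attempt {attempt}: {'PASS' if passed else 'FAIL'}")

failures = outcomes.count(False)
if failures == ATTEMPTS:
    print("Failure is repeatable; document the steps and engage product support.")
elif failures:
    print("Failure is intermittent; check whether other testers changed the shared configuration.")
else:
    print("Could not reproduce; note the original failure and keep watching.")
```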


#10: Test with a different environment


If you’re making the effort to provide quality testing, think ahead to some of the challenges you may face. This may include lesser systems running more roles, doubling or halving your workload, integrating another company, or changing a core part of your IT environment. This may be perceived as scope creep in the test process, but if you engage project management and your direct management, you may be able to make the case to allocate time and resources to test other scenarios.


#11: Hold onto your test environment


If you’ve gone to all the effort of creating a full test environment, why not hang onto it for ongoing testing? This could be a test environment that is used to test version updates and core functionality changes or to provide a training environment. Just be aware that there may be licensing considerations with a test environment for continued use.


How do you test?


There are many ways to approach test environments, but incorporating these tips into your testing strategy will help equip you for successful installations and upgrades. Share your own testing priorities below.





"



