Five points you should know about software validation

Validation of calibration software (as required by ISO 17025, for instance) is a topic that people do not like to talk about. Often there is uncertainty about the following: Which software actually has to be validated? If so, who should take care of it? Which requirements must be satisfied by the validation? How can it be done efficiently and how is it documented? The following blog post explains the background and provides a recommendation for implementation in five steps.
In a calibration laboratory, software is used for many activities, ranging from supporting the evaluation process to fully automated calibration. Whatever the degree of automation of the software, validation always refers to the complete process into which the software is integrated. Behind validation, therefore, stands the fundamental question of whether the calibration process fulfils its purpose and achieves all its intended goals; in other words, does it provide the required functionality with sufficient accuracy?
To be able to design validation tests, you should first know two basics of software testing:
Full testing isn’t possible.
Testing is always dependent on the environment.
The former states that testing all possible inputs and configurations of a program cannot be done, due to the large number of possible combinations. Depending on the application, the user must always decide which functionality, which configurations and which quality features have to be prioritised, and which are not relevant for them.
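To see why full testing is impractical, consider a small, purely hypothetical configuration space; the parameter and setting counts below are illustrative assumptions, not figures from any real calibration software.

```python
# Why exhaustive testing is impossible: even a modest configuration
# space explodes combinatorially. The numbers are assumptions chosen
# only to illustrate the scale of the problem.
parameters = 10   # assumed number of configurable parameters
settings = 5      # assumed number of settings per parameter

# Every combination of settings is a distinct configuration to test.
combinations = settings ** parameters
print(combinations)
```

Even this toy example yields millions of configurations, which is why the user must prioritise the functionality and configurations that matter for their own laboratory.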
Which decision is made often depends on the second point: the operating environment of the software. Depending on the application, there are practically always different requirements and priorities of software use. In addition, there are customer-specific adjustments to the software, for example concerning the contents of the certificate. The ambient conditions in the laboratory environment, with a wide array of instruments, also generate variance. The wide range of requirement perspectives and the sheer endless complexity of software configurations within customer-specific application areas therefore make it impossible for a manufacturer to test for all the needs of a particular customer.
Correspondingly, considering the above points, validation falls to the user themselves. In order to make this process as efficient as possible, a procedure following these five points is recommended:
The data for typical calibration configurations should be defined as "test sets".
At regular intervals, typically once a year, but at least after every software update, these test sets should be entered into the software.
The resulting certificates can then be compared with those from the prior version.
In the case of a first validation, a cross-check, e.g. via MS Excel, may take place.
The validation evidence should be documented and archived.
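The comparison step above can be sketched in code. The following is a minimal illustration, not WIKA's actual procedure: the test-set values, tolerance, and archived results are invented, and in a first validation the reference values would come from an independent cross-check (e.g. in MS Excel) rather than from a prior software version.

```python
# Sketch of a test-set comparison run. All names, values, and the
# tolerance are hypothetical assumptions for illustration only.

TOLERANCE = 1e-6  # acceptable numerical deviation between versions

# A "test set": fixed inputs for one typical calibration configuration.
test_sets = {
    "pressure_0_10bar": {
        "reference": [0.0, 2.5, 5.0, 7.5, 10.0],      # applied pressure
        "indicated": [0.01, 2.52, 5.01, 7.49, 9.98],  # device reading
    },
}

def deviations(reference, indicated):
    """Deviation of the device under test at each calibration point."""
    return [i - r for r, i in zip(reference, indicated)]

def validate(current, previous, tol=TOLERANCE):
    """True if every result matches the archived result within tol."""
    return all(abs(c - p) <= tol for c, p in zip(current, previous))

# Results computed by the software version under validation ...
current_results = {name: deviations(ts["reference"], ts["indicated"])
                   for name, ts in test_sets.items()}
# ... compared against the archived results of the prior version
# (or, for a first validation, against an independent cross-check).
archived_results = {"pressure_0_10bar": [0.01, 0.02, 0.01, -0.01, -0.02]}

for name in test_sets:
    ok = validate(current_results[name], archived_results[name])
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
```

The printed PASS/FAIL lines, together with the certificates themselves, would form the validation evidence to be documented and archived.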
WIKA provides PDF documentation of the calculations performed in the software.
Note
For further information on our calibration software and calibration laboratories, visit the WIKA website.
