Why not just test everything?

I attended the STP 2010 conference in Las Vegas, where I was honored to meet Gerald M. Weinberg. I bought his book "Perfect Software: And Other Illusions About Software Testing" and got his autograph on the first page. I finally managed to read the book, and the next day at work I started applying his advice in my project.
My current project is to check an implementation of the Electronic Health Care Record (EHCR) that is based on the CEN ENV 13606:2000 and HL7v3 standards. My first focus is the functionality of the EHCR module that implements those standards.
The CEN standard defines CRIUD (Create, Retrieve, Insert, Update, Delete) functionality for the patient health record. Retrieve returns either the whole history of a patient record, a feature very similar to the version tree of modern version control systems, or only a portion of the patient history, based on a time constraint in the query.
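To illustrate the two Retrieve modes, here is a minimal sketch. The data model (a list of dated versions) and the date values are my own assumptions for illustration, not the CEN representation:

```python
from datetime import date

# A record's version history: each entry is (timestamp, version payload).
# This flat list stands in for the real CEN version tree.
history = [
    (date(2009, 1, 10), "v1: created"),
    (date(2009, 6, 2),  "v2: updated"),
    (date(2010, 3, 15), "v3: updated"),
]

def retrieve(history, since=None, until=None):
    """Return the whole history, or only the versions inside the query window."""
    return [payload for (ts, payload) in history
            if (since is None or ts >= since) and (until is None or ts <= until)]

print(retrieve(history))                          # full history, like a version tree dump
print(retrieve(history, since=date(2009, 6, 1)))  # only the portion after the constraint
```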
The complexity of CEN lies in the XML implementation of the CRIUD functionality. The basic units of CEN are archetypes and datatypes, and we support around two hundred of them (one example is patient demographic data). Every delete and update grows the version tree of the particular archetype. CRIUD actions on an archetype can be performed by several roles (e.g. physician) defined by permission rules. In the end, basic combinatorics gives us a practically infinite number of test cases (checks).
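A back-of-the-envelope calculation shows how fast this space explodes. The role count and maximum sequence length below are illustrative assumptions (only the archetype and operation counts come from the text):

```python
# Rough size of the check space for CRIUD on CEN archetypes.
ARCHETYPES = 200      # archetypes and datatypes supported (from the project)
OPERATIONS = 5        # Create, Retrieve, Insert, Update, Delete
ROLES = 10            # e.g. physician, nurse, admin -- an assumed count
MAX_SEQUENCE = 4      # length of an operation sequence on one archetype (assumed)

# Sequences matter because every update/delete grows the version tree,
# so each sequence of (archetype, operation, role) choices is a distinct check.
total = sum((ARCHETYPES * OPERATIONS * ROLES) ** n
            for n in range(1, MAX_SEQUENCE + 1))
print(total)  # already past 10**16 -- effectively infinite for a test budget
```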
With budget constraints (time and money) we had to decide what to test (actually, what to check). By talking with the developers I wanted to gain insight into how the EHCR module was designed. First, I found out that they had written very good subsystem documentation (features, architecture and data model). A project issue with documentation is that there is no central document management system, so no one on the test team knew that documentation existed. The EHCR module uses a number of CEN XML schemas that perform the first level of CEN XML message validation. Schema validation was coded using a third-party library (Xerces). Based on that information we created several test cases to check that functionality, and confirmed that errors in XML messages violating the defined schemas were properly detected. Wrong error detection was possible only if a schema did not match the requirements, but the requirements analyst had checked all the XML schemas (about 200 of them) using an advanced XML editor.
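The real first-level validation ran the CEN XSD schemas through Xerces; as a minimal stand-in, here is a sketch of the same idea using only Python's standard library. The element names are made up for illustration and are not the actual CEN schema vocabulary:

```python
import xml.etree.ElementTree as ET

# First-level message validation, sketched: reject messages that are not
# well-formed or that miss a mandatory element of the archetype.
MANDATORY = ("birth_date", "sex")   # mandatory items of the demographic archetype

def validate_demographic(xml_text):
    """Return a list of validation errors; an empty list means the message passed."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return ["not well-formed: %s" % exc]
    return ["missing mandatory element: %s" % name
            for name in MANDATORY if root.find(name) is None]

good = "<demographic><birth_date>1970-01-01</birth_date><sex>F</sex></demographic>"
bad = "<demographic><sex>F</sex></demographic>"
print(validate_demographic(good))   # []
print(validate_demographic(bad))    # ['missing mandatory element: birth_date']
```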
We put our focus on checking the requirements implementation coded by the developers (various XPath expressions and if..then..else logic). Checking coded values (codes from the database) was the first check candidate. We found errors in those checks because the developers had not understood the requirements (or the requirements had changed) and had not coded them in the business logic (missing XPath expressions and if..then..else logic).
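The shape of such a hand-coded check can be sketched as follows. The code table and element names are invented for illustration; ElementTree's limited XPath subset stands in for the XPath engine the developers actually used:

```python
import xml.etree.ElementTree as ET

# Sketch of a coded-value business-logic check: look up the value with an
# XPath-style expression and verify it against the code table.
ALLOWED_SEX_CODES = {"M", "F", "U"}   # assumed code table loaded from the database

def check_coded_value(message):
    node = ET.fromstring(message).find(".//sex")  # './/tag' is ElementTree's XPath subset
    if node is None:
        return "missing coded element"            # the kind of check that was absent
    if node.text not in ALLOWED_SEX_CODES:
        return "unknown code: %s" % node.text
    return "ok"

print(check_coded_value("<record><demographic><sex>F</sex></demographic></record>"))
print(check_coded_value("<record><demographic><sex>X</sex></demographic></record>"))
```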
The hardest part was the hidden requirements of the CEN protocol. You simply have to understand the CEN protocol, which is not so simple.
I will give one of the test cases. Demographic data contains three data items: mandatory date of birth and patient sex, and optional date of death. Demographic data also has its position in CEN (CEN coordinates). The patient's physician first creates demographic data with the patient's birthday and sex. After that he inserts the patient's death date. He realizes he made a mistake and deletes the death date. The patient's sex has changed, so the physician updates the patient's sex and birthday. Now we have several versions of the demographic data items. The physician wants to update the birthday again, but sends the identification id (cuid) of the sex data item instead of the birthday. The system reports an error code.
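The scenario above can be sketched as an executable check. The record model, the cuid format, and the error code are all my assumptions for illustration, not the CEN implementation:

```python
import itertools

class RecordStore:
    """Toy stand-in for the EHCR store: each data item has a cuid and a version list."""
    def __init__(self):
        self._next = itertools.count(1)
        self.items = {}                          # cuid -> (item name, [versions])

    def create(self, name, value):
        cuid = "cuid-%d" % next(self._next)
        self.items[cuid] = (name, [value])
        return cuid

    def delete(self, cuid):
        self.items[cuid][1].append(None)         # deletion is recorded as a version

    def update(self, cuid, expected_item, value):
        if cuid not in self.items or self.items[cuid][0] != expected_item:
            return "ERR-WRONG-CUID"              # assumed error code
        self.items[cuid][1].append(value)        # update grows the version tree
        return "OK"

store = RecordStore()
birth = store.create("birth_date", "1970-01-01")
sex = store.create("sex", "M")
death = store.create("death_date", "2010-05-01") # insert death date...
store.delete(death)                              # ...then delete the mistake
store.update(sex, "sex", "F")                    # sex change: a new version
print(store.update(sex, "birth_date", "1971-01-01"))  # wrong cuid for birthday
```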
Another test case is when the physician updates the whole demographic data, so all the previous demographic data becomes obsolete.
So at the system level we have a practically infinite number of such test cases. We discussed those test cases with the developers in order to implement them in the subsystem tests, because there are no unit tests. In that way the developers narrowed down the infinite number of test cases, because they know the subsystem code and how to implement those checks at a lower level.
We created automated Jython scripts as system tests, to exercise various complex CRIUD functionality for several randomly picked archetypes. A significant number of issues were reported. After the developers implemented the subsystem checks and did a code review based on our input, all the issues were resolved. The important information was that there were no such issues when we checked the other archetypes!
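The sampling part of those scripts looked roughly like this. The archetype names and the `exercise()` stub are placeholders; the real scripts drove the EHCR interface through Jython:

```python
import random

# Placeholder archetype catalogue: one named example plus generated names.
ARCHETYPES = ["demographic_data"] + ["archetype_%03d" % i for i in range(1, 200)]

def exercise(archetype):
    """Stub for the CRIUD sequence run against one archetype in the real scripts."""
    return "%s: PASS" % archetype

random.seed(42)                        # fixed seed so a regression run is reproducible
sample = random.sample(ARCHETYPES, 5)  # randomly pick a handful of archetypes
for archetype in sample:
    print(exercise(archetype))
```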
To summarize: the testers had the problem of an infinite number of checks for the CRIUD CEN functionality. First, the testers narrowed down the number of checks to the test cases that would be triggered in real usage of the CEN protocol. What real usage of the CEN protocol looks like, the testers concluded from real data gathered from production subsystems that already collect similar data. They explained those test cases to the developers, so the developers could implement the checks in the subsystem code.
