Tools for Soft Reliability

To address Soft Reliability problems, we have developed methods and techniques that primarily tackle an information deficit: products fail to satisfy customers, who may in turn abandon them or return them to the shop, yet vendors do not know why this dissatisfaction arose in the first place. The aim is to acquire meaningful information and feed it back to the stakeholders in the companies, who can use it to improve their products.

In the context of this project, we have developed different approaches and tools to obtain rich information about product usage. In the following, these approaches and tools are described in more detail.

Product Usage Monitoring with D'PUIS

In the past, attempts to cure the information deficit problem required enormous effort to collect, process, and analyze data that often still turned out to be neither relevant nor meaningful, which is why these techniques have seen little adoption so far. In contrast, we aim at a high degree of automation, which takes this burden off the stakeholders' shoulders and provides structured data in a timely manner.

Figure: Overview of the D'PUIS-based approach
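To make the idea of automated usage monitoring concrete, the following is a minimal sketch of an in-product event logger. All names here (`UsageLogger`, the event fields, the example actions) are hypothetical illustrations and do not reflect the actual D'PUIS interface; a real system would also upload the buffered events to a collection server.

```python
import json
import time

class UsageLogger:
    """Minimal in-memory buffer for structured product-usage events.

    A hypothetical sketch: a real monitoring system would periodically
    transmit this buffer to a collection server for automated analysis.
    """

    def __init__(self, product_id):
        self.product_id = product_id
        self.events = []

    def log(self, action, timestamp=None):
        # Record one usage event, e.g. "power_on" or "error_shown".
        ts = time.time() if timestamp is None else timestamp
        self.events.append(
            {"product": self.product_id, "action": action, "time": ts}
        )

    def export_json(self):
        # Serialize the buffered events for transmission or analysis.
        return json.dumps(self.events)

logger = UsageLogger("tv-123")
logger.log("power_on", timestamp=0.0)
logger.log("menu_open", timestamp=1.5)
payload = logger.export_json()
```

Buffering events as structured records (rather than free-form log lines) is what makes the later, automated analysis step cheap: the data arrive already in an analyzable form.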

Further information on this approach and related software can be found here.

Analyzing User Test Behavior with Process Mining Techniques

If no product instrumentation is feasible, one common way to obtain usage information is via user tests, which are recorded on video. Using video analysis software, these videos can then be annotated semi-automatically, and logs representing the actions of the test participants can be generated.
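The step from annotations to action logs can be sketched as follows. The annotation tuples and action names below are invented for illustration; the actual annotation format depends on the video analysis software used.

```python
from collections import defaultdict

# Hypothetical output of semi-automatic video annotation:
# (participant, action, start time in seconds within the session).
annotations = [
    ("P1", "open_menu", 3.2),
    ("P2", "open_menu", 2.1),
    ("P1", "select_channel", 7.9),
    ("P2", "press_help", 10.4),
    ("P2", "close_menu", 12.0),
]

def build_action_log(annotations):
    """Group annotations per participant and order them by time,
    yielding one action trace per user-test session."""
    traces = defaultdict(list)
    for participant, action, start in annotations:
        traces[participant].append((start, action))
    return {p: [a for _, a in sorted(evs)] for p, evs in traces.items()}

log = build_action_log(annotations)
# log["P1"] == ["open_menu", "select_channel"]
```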

Process mining can provide richer analysis results from these action logs than most usability measures, since it preserves the temporal ordering of the data. For example, the behavior of a group of user-test participants can be visualized as a process model.
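A basic building block of such process discovery is the directly-follows relation: how often one action immediately follows another across all traces. The sketch below computes these edge counts for some invented traces; real process-mining tools derive full process models from exactly this kind of relation.

```python
from collections import Counter

# Hypothetical action traces, one per user-test participant.
traces = [
    ["open_menu", "select_channel", "close_menu"],
    ["open_menu", "press_help", "close_menu"],
    ["open_menu", "select_channel", "close_menu"],
]

def directly_follows(traces):
    """Count how often action b directly follows action a across all
    traces -- the edge weights of a directly-follows graph, from which
    process-discovery algorithms construct a process model."""
    edges = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

graph = directly_follows(traces)
# graph[("open_menu", "select_channel")] == 2
```

Rendering the counted edges as a weighted graph already yields a readable picture of how the group of participants moved through the product, including the temporal ordering that aggregate usability measures discard.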

Further information on this approach and related software can be found here.

User Experience Evaluation Tools

Evaluation should be an integral part of any design activity. In innovative product development practice, however, evaluation is highly complicated: it often has to be applied to immature prototypes, while users' responses may vary greatly across individuals and situations. The resulting data are highly complex, and traditional statistical practices often fail to analyze them adequately. This section describes a number of tools for eliciting and analyzing rich subjective data about product experience. More information can be found here.