A Deep Dive into TENSOR’s Technological Innovations: Explainable AI
Author: SQUAREDEV
Artificial Intelligence (AI) has become a game changer for modern security and investigation systems. From identifying relevant evidence to supporting complex analysis tasks, AI has the potential to make security operations faster, more efficient, and more accurate. However, as AI models become increasingly powerful and sophisticated, they also become more difficult to understand. This “black box” problem, where even experts cannot easily see how a system reached a particular conclusion, creates challenges for accountability, reliability, and trust.
The TENSOR project directly addresses this issue through the application of Explainable AI (XAI) methods. The goal is simple: to make AI-driven biometric technologies more transparent, interpretable, and fair. Explainable AI provides TENSOR’s end-users with a view into the reasoning process behind the system’s results, ensuring that human operators can understand, question, and ultimately trust what the AI is doing.
When a biometric system analyses data, it produces an output such as a similarity score or a match result. Without XAI, users see only this final number, not the reasoning behind it. With the XAI module developed in TENSOR, users can now visualise why a particular result was produced. Rather than working blindly with a “black box,” they can see which aspects of the input data influenced the decision most strongly. In practical terms, this means investigators can better interpret the AI’s logic, evaluate its reliability, and make more confident, informed decisions.
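To illustrate how an explanation of this kind can be derived, the sketch below uses occlusion sensitivity, a common model-agnostic XAI technique: it masks one region of the input at a time and records how much the similarity score drops, so that larger drops mark the regions the model relied on most. This is a minimal illustrative example under assumed inputs, not TENSOR’s actual implementation; the `cosine_score` function and the random images are hypothetical stand-ins for a real biometric model and its data.

```python
import numpy as np

def occlusion_importance(image, reference, score_fn, patch=8):
    """Occlusion sensitivity: grey out each patch in turn and measure
    how much the similarity score drops. Bigger drop = more important."""
    h, w = image.shape
    baseline = score_fn(image, reference)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            drop = baseline - score_fn(occluded, reference)
            heatmap[i // patch, j // patch] = max(drop, 0.0)
    return heatmap

# Hypothetical similarity function: cosine similarity of flattened images.
def cosine_score(a, b):
    av, bv = a.ravel(), b.ravel()
    return float(av @ bv / (np.linalg.norm(av) * np.linalg.norm(bv) + 1e-9))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
probe = ref + 0.05 * rng.random((32, 32))  # a near-matching probe image
hm = occlusion_importance(probe, ref, cosine_score)
print(hm.shape)  # (4, 4)
```

The resulting coarse map can then be upsampled and rendered over the input image, which is essentially what the heatmaps shown to TENSOR’s end-users convey.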
This level of transparency matters for several reasons. First, it helps ensure that systems remain accountable. When decisions are explainable, it becomes possible to trace how the AI reached a conclusion, which supports both operational transparency and compliance with ethical standards.
Second, explainability is essential for fairness. To ensure fairness across different populations, the TENSOR project has carefully examined the AI models involved in its biometric analysis components with respect to several demographic and linguistic attributes, including age, gender, skin colour, and spoken language. This evaluation helps ensure that the predictions produced by the system are not inadvertently affected by biases in the data or model design. By taking this step, TENSOR reinforces the principle that responsible AI must perform consistently and equitably for all individuals, regardless of their background or characteristics.
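One simple way to carry out such a fairness check is to slice evaluation results by a demographic attribute and compare per-group performance. The sketch below is a hypothetical illustration (the groups, records, and the per-group accuracy metric are assumptions, not TENSOR’s actual evaluation protocol):

```python
from collections import defaultdict

def per_group_metrics(records):
    """Group verification results by a demographic attribute and compute
    each group's accuracy, so that performance gaps become visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted_match, is_true_match in records:
        total[group] += 1
        if predicted_match == is_true_match:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records: (group, predicted, ground truth)
records = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False),
    ("group_b", True, True), ("group_b", False, False),
]
rates = per_group_metrics(records)
print(rates)
```

A large gap between groups in such a table is a signal to investigate the training data or model design for bias before deployment.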
The XAI interface is designed to present information in an intuitive and user-friendly way, using visual elements like heatmaps and plots to make complex results easier to interpret. These visual explanations show, for instance, which regions or patterns were most important for the system’s decision, providing a clear understanding of what the model considered significant.
End-users can validate outcomes, explore the system’s confidence levels, and interpret the reasoning behind its assessments. The image below provides an example of a heatmap produced by the XAI module applied to one of TENSOR’s biometric recognition components. Warmer colours (red and yellow) indicate areas of higher importance in the model’s decision-making process, helping users see at a glance how the system interprets the analysed data.

Figure 1: Explainable AI techniques applied to the Gait recognition module
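A warm-colour overlay of this kind can be produced by upsampling a coarse importance map and mapping its normalised values onto a red-to-yellow palette. The sketch below is a minimal, hypothetical illustration of that rendering step, not TENSOR’s interface code:

```python
import numpy as np

def to_overlay(heatmap, size):
    """Upsample a coarse importance map to the image size and map it to
    warm colours: low importance -> dark red, high -> bright yellow."""
    h, w = heatmap.shape
    scale_y, scale_x = size[0] // h, size[1] // w
    big = np.kron(heatmap, np.ones((scale_y, scale_x)))  # nearest-neighbour upsample
    norm = (big - big.min()) / (np.ptp(big) + 1e-9)      # rescale to [0, 1]
    rgb = np.zeros((*big.shape, 3))
    rgb[..., 0] = 1.0   # red channel fully on (warm palette)
    rgb[..., 1] = norm  # green rises with importance, shifting red -> yellow
    return rgb

hm = np.array([[0.1, 0.9], [0.4, 0.0]])  # hypothetical coarse importance map
overlay = to_overlay(hm, (8, 8))
print(overlay.shape)  # (8, 8, 3)
```

In practice such an overlay is alpha-blended onto the original input so that the important regions are highlighted in context.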
Beyond visualisation, TENSOR’s explainability features contribute to building a broader culture of trust and accountability in AI-driven security technologies. When users can see how the system operates and understand its reasoning, they are more likely to rely on its outputs responsibly.
Ultimately, the work carried out in TENSOR demonstrates that explainability is not only a matter of ethics and regulation; it also makes AI systems more robust and operationally useful. Transparent systems can be monitored, refined, and improved in ways that opaque systems cannot. As a result, explainable AI enhances both the performance and trustworthiness of biometric technologies.