AI Principles
For the responsible use of AI.
Data-driven Banking through responsible AI
We shape the financial industry into a seamless and secure ecosystem to unlock its full potential. To do so, we are committed to leveraging the power of data and AI for the benefit of our customers. To ensure the ethical and responsible development and use of AI, we have established clear AI principles that guide our approach.
Our AI Principles
These principles serve as a foundation for building trust and transparency, allowing us to develop and implement responsible AI solutions across our data-driven banking offerings.
Contovista's AI Principles at a Glance.
Privacy by design
All models are based on non-CID data. Processing is GDPR-compliant and prepared for future regulations such as the EU AI Act.
Shared Learning Effect
Model quality improves through continuous training with anonymised data, resulting in more robust models with significantly less bias and overfitting.
Explainability & Control
Our AI is fully comprehensible. All models are documented, versioned and auditable. Banks manage the configuration and scope of application according to their own risk and compliance strategy.
AI for a better customer experience: precise, scalable, self-learning
At Contovista, AI is a production-grade capability, purpose-built to deliver superior customer experiences at scale. Our models combine deterministic accuracy with adaptive learning, transforming raw transaction data into reliable financial intelligence.
- Generative AI and LLMs for Transaction Enrichment
LLM-based pipelines standardise transaction data and translate it into clear, human-readable information. They generate meaningful labels such as Pretty Names, enrich transactions with logos, and assign accurate merchant and counterparty categories. The result is a continuously evolving enrichment database that forms the foundation for explainable analytics, high-quality insights, and downstream AI applications (a sketch of such an enrichment step follows this list).
- Generative AI and LLMs for Customer and User-Facing Solutions
LLMs enable knowledge-intensive and language-driven use cases across Contovista Finance Management and customer solutions.
On the bank side, agentic assistants support internal teams. They leverage proprietary transaction enrichment capabilities and insights while applying financial domain expertise, business logic, and contextual customer knowledge. On the end-user side, LLM-based workflows power the AI Finance Manager. Capabilities such as the AI Financial Analyst engage users in intelligent, contextual conversations about transactions, income, and expense patterns.
- Supervised Learning for Predictive Financial Intelligence
Supervised machine-learning models, including regression techniques, gradient boosting, and random forests, predict expenses, upcoming obligations, travel spend, and safety buffers with high reliability. These predictions drive Finance Manager insights for end users and provide structured analytics for banks across advisory, personalisation, and risk-related use cases (see the expense-forecast sketch after this list).
- Unsupervised Learning for Segmentation and Pattern Discovery
Clustering algorithms identify customer segments based on behavioural, life-event, and financial patterns. They support internal data-quality improvements and enable customer targeting and product optimisation. In parallel, unsupervised NLP pipelines using n-grams, embeddings, and related techniques form a core component of the enrichment engine, continuously refining categorisation and semantic understanding (see the clustering sketch after this list).
- Probabilistic and Stochastic Models for Robust Forecasting
Probabilistic and stochastic models capture uncertainty and behavioural variability in real-world financial data. They enable robust forecasts of liquidity, income, expenses, and safety buffers, and form the foundation for downstream intelligent algorithms used by financial institutions and end users (see the simulation sketch after this list). Shared learning effects also support cross-geographical profiling and benchmarking, yielding comparative insight with strategic value for banks and tangible relevance for end users.
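To make the enrichment idea concrete, here is a minimal sketch of how an LLM-based enrichment step could look. The `complete()` function, the prompt wording, the category list, and the JSON schema are illustrative assumptions made for this example, not Contovista's actual pipeline or schema.

```python
# Minimal sketch of LLM-based transaction enrichment (illustrative assumptions only).
import json

CATEGORIES = ["Groceries", "Transport", "Dining", "Subscriptions", "Other"]

def complete(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. an on-premise open-source model);
    # returns a canned answer here so the sketch runs end to end.
    return '{"pretty_name": "Migros Zurich", "category": "Groceries"}'

def enrich_transaction(raw_booking_text: str) -> dict:
    """Ask the model for a human-readable Pretty Name and a merchant category."""
    prompt = (
        "Normalise this bank transaction booking text.\n"
        f"Booking text: {raw_booking_text!r}\n"
        f"Allowed categories: {', '.join(CATEGORIES)}\n"
        'Answer as JSON: {"pretty_name": ..., "category": ...}'
    )
    enriched = json.loads(complete(prompt))
    # Guard against hallucinated categories before writing to the enrichment database.
    if enriched.get("category") not in CATEGORIES:
        enriched["category"] = "Other"
    return enriched

print(enrich_transaction("MIGROS M ZUERICH 8001"))
```

In practice, a validation step like the category guard above is what keeps generative output usable as structured, auditable enrichment data.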
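The expense-forecast sketch below shows the supervised pattern in its simplest form: a gradient-boosting regressor trained on a few per-customer aggregates to predict next-month spend. The features, synthetic data, and model configuration are assumptions for demonstration; the production features and models are not described in this document.

```python
# Illustrative only: gradient boosting on synthetic per-customer aggregates.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: average monthly spend, monthly income, transaction count.
X = np.column_stack([
    rng.normal(3000, 800, n),
    rng.normal(6500, 1500, n),
    rng.poisson(45, n),
])
# Synthetic target: next-month expenses with noise.
y = 0.9 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 150, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```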
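The clustering sketch below combines the two unsupervised ideas mentioned above: character n-gram vectors over raw booking texts, grouped with k-means so that variants of the same merchant land in the same cluster. The texts and the number of clusters are invented for the example and do not reflect the real enrichment engine.

```python
# Illustrative sketch: n-gram vectorisation plus k-means clustering of booking texts.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

booking_texts = [
    "MIGROS M ZUERICH 8001",
    "MIGROS SUPERMARKT BERN",
    "SBB EASYRIDE ZUERICH HB",
    "SBB MOBILE TICKET SHOP",
    "NETFLIX.COM AMSTERDAM",
    "NETFLIX INTERNATIONAL B.V.",
]

# Character n-grams are robust to the noisy, abbreviated formats of booking texts.
vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(booking_texts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(booking_texts, labels):
    print(label, text)
```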
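Finally, the simulation sketch below illustrates the probabilistic approach with a simple Monte Carlo liquidity forecast: income and expenses are drawn from assumed distributions and rolled forward to estimate the chance of a balance dropping below a safety buffer. All distributions and parameters are invented for the example.

```python
# Monte Carlo sketch of a liquidity forecast under assumed income/expense distributions.
import numpy as np

rng = np.random.default_rng(42)
n_paths, horizon_months = 10_000, 6
start_balance, safety_buffer = 4_000.0, 1_000.0

# Assumed monthly income (fairly stable) and expenses (more variable).
income = rng.normal(6_500, 300, size=(n_paths, horizon_months))
expenses = rng.lognormal(mean=np.log(5_800), sigma=0.15, size=(n_paths, horizon_months))

# Simulate balances forward and measure how often the buffer is breached.
balances = start_balance + np.cumsum(income - expenses, axis=1)
p_breach = np.mean((balances < safety_buffer).any(axis=1))
print(f"Probability of dipping below the safety buffer within 6 months: {p_breach:.1%}")
```

Reporting a probability rather than a single point estimate is what makes such forecasts robust to the behavioural variability described above.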
We use a broad spectrum of AI technologies, selected based on the specific use case.
Traditional machine learning as well as probabilistic and stochastic models are used for forecasting, classification, segmentation, and risk-related analytics.
LLMs and Generative AI are applied to language- and knowledge-intensive tasks. These include data-quality improvements, customer-facing solutions such as assistants for bank-internal risk functions, and end-user capabilities such as the AI Financial Analyst within the AI Finance Manager.
In general, the technology stack is chosen based on the problem to be solved, whether internal, customer-facing, or end-user-facing.
No, not by default.
Model selection depends on the specific use case. We primarily rely on open-source models and libraries, particularly where on-premise deployment, performance, cost control, and data governance are critical.
For highly language- or knowledge-intensive tasks, selected closed-source LLM services may be used where they provide a clear and measurable advantage.
We use all three, depending on the problem:
- Open-source models form the foundation of many AI capabilities
- Proprietary models are developed in-house for traditional machine learning and domain-specific tasks
- Closed-source LLMs are selectively used where they deliver clear benefits
Decisions are driven by accuracy, security, deployment constraints such as on-premise requirements, latency, and cost.
Yes. Models are trained, fine-tuned, or adapted depending on the task, either during development or continuously over time. This includes domain and data adaptation, model training and fine-tuning, as well as prompt and pipeline optimisation.
Both options are supported. AI capabilities can be provided as a SaaS offering or deployed via customer-side installations, depending on regulatory, security, and infrastructure requirements.
Depending on the customer setup, deployments run on Swiss cloud infrastructure or fully on-premise.
All solutions comply with GDPR, FINMA requirements, and the EU AI Act. We apply state-of-the-art cryptographic standards to ensure data protection and security at all times.