In recent months, Generative AI has made a significant impact worldwide. The emergence of ChatGPT fulfilled a long-standing desire for a conversational bot capable of adding genuine value to a conversation. The bot can offer opinions, carry out internet searches, and provide meaningful, concise responses rather than simply presenting search results. The icing on the cake is that it can also generate various forms of content, such as essays, code, and images.
Although the application of Generative AI may seem fun and entertaining, it raises the question of its practical value in the realm of FinTech. The Banking, Financial Services, and Insurance sectors hold real potential for leveraging Generative AI – but is there a price to pay?
In this article, we take a deep dive into the benefits, concerns, and ugly side of Generative AI to equip you (decision-makers in Banking, Financial Services, and Insurance) with the insights needed to make informed choices about adopting this technology.
AI will not take over the world (…yet)
Generative AI operates on learning models that incorporate human intervention: the data used to train the machine learning system is validated by humans. As a result, the machine is trained with what we refer to as human bias.
Think of the current state of Generative AI as a machine capable of learning from the vast array of resources available on the internet, such as books, articles, and posts, and providing answers to almost anything. Initially, these responses are validated by humans, but over time, the level of human intervention decreases as the machine becomes more adept at providing accurate answers. The generative part uses a neural network to predict the best answer in the context of the conversation. As time advances, the machine continues to learn and keeps getting better at this!
If we re-read the earlier paragraph carefully, it can be summarized in a few words – “improve learning to predict”. This means that AI, for the time being, lacks the capacity for genuine creativity. It cannot imagine new concepts; its creativity derives solely from past knowledge acquired through human experience. AI is incapable of understanding emotions, engaging in lateral thinking, or thinking outside the box; it relies solely on information and computational processes.
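The “learning to predict” idea can be illustrated with a toy sketch: a bigram model (a deliberately simplistic stand-in for a neural network) that can only predict words it has already seen and can never invent new ones. The corpus and function names here are made up purely for illustration.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it can only predict what it has
# already observed -- a crude illustration of "learning to predict"
# rather than genuine creativity.
corpus = "the bank approves the loan the bank flags the transaction".split()

# Count which word follows which.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    if word not in next_words:
        return "<unknown>"
    return next_words[word].most_common(1)[0][0]

print(predict("the"))    # the most frequent follower of "the" in the corpus
print(predict("zebra"))  # never seen, so the model has nothing to say
```

Notice that the model answers confidently within its training data and fails completely outside it – the same limitation, at a vastly larger scale, that keeps today’s Generative AI tethered to past human knowledge.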
The day AI gains the ability to generate its own algorithms and create new theorems that challenge existing theories (to illustrate, imagine an AI algorithm proving the existence of something faster than the speed of light), is the day we need to start worrying!
What is the good that AI brings to FinTech?
AI operates on patterns and learns from them. Machine learning algorithms can assist FinTech in various ways. One crucial aspect is personalization, where Chatbots have evolved from simple decision tree-based automation to providing tailored experiences. They not only understand natural language but can also generate context-driven content.
Chatbots have become so advanced that it is virtually impossible to tell whether we’re talking to a person or a bot. This positively impacts customer retention and enhances operational efficiency within banks, reducing the need for a large call center to handle customer support.
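To appreciate how far chatbots have come, here is a minimal sketch of the older decision-tree style mentioned above, where every user journey must be scripted in advance. All states, prompts, and amounts are hypothetical; anything off-script hits a dead end – exactly the rigidity that generative, context-driven bots have left behind.

```python
# A hypothetical decision-tree chatbot of the older style: every
# path is hand-scripted, and unrecognised input falls back to a
# canned apology instead of an understood, generated reply.
DECISION_TREE = {
    "start": {
        "prompt": "Do you want to check a balance or report a lost card?",
        "balance": "balance",
        "lost card": "lost_card",
    },
    "balance": {"prompt": "Your balance is $1,234.56."},
    "lost_card": {"prompt": "Your card has been blocked. A replacement is on its way."},
}

def reply(state: str, user_input: str) -> tuple[str, str]:
    """Follow the scripted tree; return the new state and the bot's line."""
    node = DECISION_TREE[state]
    next_state = node.get(user_input.lower().strip())
    if next_state is None:
        # Off-script input: stay put and re-prompt.
        return state, "Sorry, I didn't understand. " + DECISION_TREE["start"]["prompt"]
    return next_state, DECISION_TREE[next_state]["prompt"]

state, message = reply("start", "balance")
print(message)
```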
Banks use data from consumer patterns for targeted marketing too! Where this once required manual analysis, algorithms can now predict user spending habits and automatically serve customized offers and discounts based on each customer’s persona.
For instance, you visit a casino and swipe your credit card for a moderate transaction. You receive an instant message regarding the amount spent. You might even receive a call from the issuing bank, either to confirm the transaction or to inform you that they are temporarily blocking your card as a precautionary measure due to a suspicious location.
Until now, this has been considered a safety measure implemented by the bank. However, the bank’s algorithms can now go a step further, analyzing consumer spending patterns, socio-economic indicators, and geo-location to predict anomalies or identify potential non-performing assets (NPAs) in advance.
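A minimal sketch of that kind of anomaly detection, assuming a simple z-score rule over a customer’s spend amounts. Real bank systems combine many more signals (merchant category, geo-location, device, time of day); the threshold and sample history below are illustrative assumptions.

```python
import statistics

def flag_anomalies(amounts: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of transactions whose amount deviates from the
    customer's mean by more than `threshold` standard deviations.
    (With a small history, a single outlier also inflates the standard
    deviation, so the threshold is kept deliberately modest.)"""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mean) / stdev > threshold]

# Hypothetical spending history: routine purchases, then the casino swipe.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 5000.0]
print(flag_anomalies(history))  # only the last transaction stands out
```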
In the banking sector, Robotic Process Automation (RPA) plays a significant role in automating workflows and well-defined processes. It effectively handles numerous approvals, authorizations, and integrations with third-party systems, resulting in reliable, accurate, and rapid operations. Nevertheless, RPA should not be conflated with Generative AI: RPA follows fixed rules, while GenAI algorithms have predictive capabilities that rely on patterns derived from existing data.
Risk assessment and underwriting now use AI to predict potential risks based on various factors, including geography, socio-economic strata, the economic environment, travel patterns, personal goals, and occasionally even health metrics or driving behavior. All these parameters allow systems to analyze and evaluate the risk associated with loans, insurance premiums, and even claim settlements.
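To make the idea concrete, here is a hypothetical risk-scoring sketch: a logistic model over a few of the factors named above. The feature names, weights, and bias are illustrative assumptions for demonstration, not a real underwriting model.

```python
import math

# Illustrative weights: positive values push the score toward "risky",
# negative values toward "safe". All numbers are made up.
WEIGHTS = {
    "debt_to_income": 2.5,    # higher ratio -> riskier
    "missed_payments": 0.8,   # per missed payment
    "years_employed": -0.3,   # employment stability lowers risk
}
BIAS = -2.0

def default_probability(applicant: dict) -> float:
    """Logistic score in [0, 1]: estimated probability of default."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

low_risk = {"debt_to_income": 0.2, "missed_payments": 0, "years_employed": 8}
high_risk = {"debt_to_income": 0.9, "missed_payments": 4, "years_employed": 1}
print(round(default_probability(low_risk), 3))
print(round(default_probability(high_risk), 3))
```

In practice the weights would be learned from historical repayment data rather than set by hand – which is precisely where the human-bias concerns discussed later come in.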
JP Morgan Chase’s IndexGPT is being hailed as a tool to assist in selecting suitable stocks and financial securities. Although details are not yet clear, it is anticipated to replace knowledge bearers, though not intellectual strategists. People in investment and wealth management will need to upgrade their skills to go beyond plain data analytics and provide the kind of insight an algorithm cannot!
Concerns around AI in FinTech
It is expensive! Developing GenAI or any predictive model demands significant investment in infrastructure, human resources, and a vast amount of meticulously curated data. Given the associated return on investment (RoI), achieving feasibility in the short term is challenging. In addition, data quality is critical, as it can be compromised by errors or human biases. Most learning systems require initial validation through human intervention, which can itself introduce complications.
Furthermore, there are always deliberate attempts to confuse the learning algorithm. Allow me to digress with the story of an artist in Berlin who walked the streets with a cartload of mobile devices with Google navigation enabled, fabricating a fake traffic jam. Now twist this tale a bit and imagine a malicious process fudging data to disrupt banking processes. While regulations and moderation measures are typically in effect, the question arises: who would be held accountable for the incurred losses and business implications?
Data privacy emerges as a key concern. Banks and financial institutions hold vast amounts of information for processing, making it all too easy to exploit data, even unknowingly. It is essential to acknowledge that data fundamentally belongs to the consumer, and access to it should be meticulously governed. Regulations such as GDPR, CCPA, and comparable standards impose significant penalties for mishandling. But the bottom line remains: has our data already been compromised, churned, and re-churned?
The concepts of Ethical AI and Responsible AI are gaining momentum as a means to establish appropriate regulations while the situation remains somewhat manageable. Notably, Geoffrey Hinton, the godfather of AI, quit his job at Google so that he could speak openly about the potential hazards of AI in society. He believes that machines are rapidly becoming more intelligent, with machine learning algorithms even capable of setting their own subgoals, which can be immoral and dangerous.
Supporting this notion is a widely reported anecdote involving a Drone AI simulation, in which the drone’s objective was to identify and destroy Surface-to-Air Missile (SAM) sites after the operator confirmed the kill. Owing to its reward mechanism, the Drone AI ended up targeting and eliminating its own operator in order to achieve its goal. Change the “reward” mechanism of an algorithm to save the bank money, and it may end up having disastrous consequences.
The ugly face of GenAI in Fintech
If the above concerns weren’t unsettling enough, let’s push the limits. Improper implementation of online Know Your Customer (KYC) procedures without adequate security measures can lead to disastrous outcomes. An open-source tool known as DeepFaceLab can replace faces, de-age individuals, swap heads, manipulate lips, and even perform real-time face swaps during video streaming. This could spell disaster for online KYC processes, which have gained prominence within the banking sector.
Imagine a custom phishing bot with the ability to analyze the public content on a target’s social media, identify their vulnerabilities, create a situation to cause panic, and generate a voice call with the intention of extracting sensitive information. Consider receiving a call from an unknown individual claiming to be a concerned bystander, urgently requesting your mobile lock pin under the pretense that your only daughter has met with a non-existent accident!
Is it possible to unlock mobile phones using fabricated biometrics, or to bypass facial recognition with certain programs? Here is an encouraging example of how regulations can play a vital role. ISO/IEC 30107 specifies the framework for “biometric presentation attack detection,” which mandates liveness detection, depth perception, and other safeguards in devices that unlock via biometrics. This specification brings a glimmer of hope and suggests that solutions exist to address such concerns.
The FinTech Future – Endless Possibilities
One thing is certain – we must embrace AI as technology takes the lead. Machine learning algorithms will continue to advance in pattern recognition, enhancing their predictive accuracy and potentially offering personalized attention to consumers. This progression will undoubtedly boost operational efficiency within financial institutions, shifting the burden of knowledge-based tasks from humans to machines. While strategies and recommendations may become increasingly data-driven, the power of human intellect will persist in terms of lateral thinking and game-changing strategies.
Ensuring data access and privacy is of utmost importance. It is imperative that data always remains the property of the consumer, even when processes are granted authorization to use it. Web3 standards and decentralized identity management will play a crucial role in safeguarding these principles. Regulations ensuring ethical and responsible AI will stem most issues at the source, but the ever-present threat of fraud will remain.
I believe that AI will surpass human intelligence in the near future, yet I am confident that machines will never outsmart us.