Generative AI in Finance: FinGPT, BloombergGPT & Beyond

Generative AI refers to models that can generate new data samples similar to the data they were trained on. The success of ChatGPT opened up opportunities across industries, inspiring enterprises to build their own large language models. The finance sector, already driven by data, has become more data-intensive than ever.

I work as a data scientist at a France-based financial services company. Having been there for over a year, I have recently observed a significant increase in LLM use cases across all divisions, both for task automation and for building robust, secure AI systems.

Every financial services firm now aims to craft its own fine-tuned LLMs using open-source models like Llama 2 or Falcon, especially legacy banks that hold decades of financial data.

Until now, incorporating this vast amount of data into a single model was not feasible because of limited computing resources and the low capacity of earlier, smaller models. Today's open-source models, with billions of parameters, can be fine-tuned on large textual datasets. Data is fuel for these models: the more there is, the better the results.

Together, this data and these LLMs can save banks and other financial institutions millions by improving automation, efficiency, and accuracy.

Recent McKinsey estimates suggest that generative AI could add as much as $340 billion in annual value for the banking sector alone.

BloombergGPT & Economics of Generative AI 

In March 2023, Bloomberg showcased BloombergGPT, a 50-billion-parameter language model built from scratch and tailored specifically for financial data.

To save money, you sometimes need to spend money. Training models like BloombergGPT or Meta's Llama 2 isn't cheap.

Training Llama 2's 70-billion-parameter model required about 1,700,000 GPU hours. On commercial cloud services, an Nvidia A100 GPU (the chip used for Llama 2) costs roughly $1-$2 per GPU hour. Doing the math, training a 10-billion-parameter model could cost around $150,000, while a 100-billion-parameter model could cost as much as $1,500,000.
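
As a rough back-of-the-envelope check, the arithmetic is simply GPU hours multiplied by the hourly rate. The sketch below reuses the figures quoted above and is purely illustrative; real costs depend on hardware utilization, training throughput, and negotiated cloud pricing.

```python
# Back-of-the-envelope training cost: GPU hours x hourly rate.
# Figures reuse the numbers quoted in the text; actual costs vary with
# hardware efficiency, utilization, and cloud pricing.

def training_cost(gpu_hours: float, rate_per_hour: float) -> float:
    """Estimated cost in USD for a single training run."""
    return gpu_hours * rate_per_hour

llama2_70b_hours = 1_700_000  # reported GPU hours for Llama 2 70B

low = training_cost(llama2_70b_hours, rate_per_hour=1.0)
high = training_cost(llama2_70b_hours, rate_per_hour=2.0)
print(f"Llama 2 70B: ${low:,.0f} - ${high:,.0f}")  # $1,700,000 - $3,400,000
```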

If renting isn't an option, purchasing the GPUs outright is an alternative, but buying around 1,000 A100 GPUs to form a cluster can cost more than $10 million.

Bloomberg's investment of over a million dollars is particularly eye-opening when set against the pace of AI progress. Astonishingly, within just half a year, a model fine-tuned for roughly $100 managed to surpass BloombergGPT's performance. And while BloombergGPT's training incorporated proprietary data, the vast majority (99.30%) of its dataset was publicly accessible. Enter FinGPT.

FinGPT

FinGPT is a state-of-the-art, fine-tuned financial large language model (FinLLM). Developed by the AI4Finance-Foundation, it currently outperforms comparable models in both cost-effectiveness and overall accuracy.

It currently has three versions. The FinGPT v3 series comprises models improved with the LoRA method and trained on news and tweets for sentiment analysis, and they achieve top results on many financial sentiment benchmarks. FinGPT v3.1 is built on the ChatGLM2-6B model, while FinGPT v3.2 is based on the Llama-2-7B model.
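
To make concrete what "a LoRA-adapted model" means in practice, here is a minimal inference sketch using Hugging Face transformers and peft. The base model name is the publicly listed (gated) Llama-2-7B checkpoint, and the adapter repository name is a placeholder for illustration, not an official FinGPT artifact.

```python
# Minimal sketch: load a base model plus a LoRA adapter for inference.
# The adapter repo name below is a placeholder, not an official FinGPT release.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "meta-llama/Llama-2-7b-hf"       # base model (gated on Hugging Face)
adapter_name = "your-org/finance-sentiment-lora"   # hypothetical LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, device_map="auto")

# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_name)

prompt = "News: The company reported record quarterly earnings.\nSentiment:"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```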

 


FinGPT’s Operations:

  1. Data Sourcing and Engineering:
    • Data Acquisition: Drawing on reputable sources such as Yahoo, Reuters, and others, FinGPT aggregates a vast array of financial news, spanning US to Chinese stocks.
    • Data Processing: This raw data undergoes many stages of cleaning, tokenization, and prompt engineering to ensure its relevance and accuracy.
  2. Large Language Models (LLMs):
    • Training: Using the curated data, LLMs can be fine-tuned into lightweight models tailored to specific needs, and existing models or APIs can be adapted to support downstream applications.
    • Fine-Tuning Strategies:
      • Low-Rank Adaptation (LoRA): One of the key challenges in developing models like FinGPT is obtaining high-quality labeled data. Recognizing this, FinGPT takes an innovative approach: instead of relying solely on manual labeling, it uses market-driven stock price fluctuations as labels, translating news sentiment into tangible classes such as positive, negative, or neutral (see the labeling sketch after this list). This yields significant improvements in the model's predictive ability, particularly in discerning positive and negative sentiment. Through fine-tuning techniques like LoRA, FinGPT v3 optimizes performance while reducing computational overhead.
      • Reinforcement learning from human feedback (RLHF): FinGPT also employs RLHF, a feature absent in BloombergGPT. RLHF equips the model with the ability to learn individual preferences, such as a user's risk appetite, investment patterns, or personalized robo-advisor settings. This technique, a cornerstone of both ChatGPT and GPT-4, enables a more tailored and intuitive user experience.
  3. Applications and Innovations:
    • Robo-Advisor: Like a seasoned financial advisor, FinGPT can analyze news sentiment and predict market trends with impressive precision.
    • Quantitative Trading: By identifying sentiments from diverse sources, from news outlets to Twitter, FinGPT can formulate effective trading strategies. In fact, even when solely directed by Twitter sentiments, it showcases promising trading outcomes.
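
The market-driven labeling idea mentioned above can be sketched very simply: each news item is tagged with the stock's return over a short window after publication, and that return is bucketed into positive, negative, or neutral. The thresholds, return window, and NewsItem structure below are illustrative assumptions, not FinGPT's published configuration.

```python
# Illustrative sketch: derive sentiment labels from price moves rather than
# manual annotation. Thresholds and the return window are assumptions.
from dataclasses import dataclass

POS_THRESHOLD = 0.02   # > +2% return after the news -> "positive"
NEG_THRESHOLD = -0.02  # < -2% return after the news -> "negative"

@dataclass
class NewsItem:
    headline: str
    ticker: str
    return_after_news: float  # e.g. next-day close-to-close return

def market_label(item: NewsItem) -> str:
    """Map a post-news price move to a weak sentiment label."""
    if item.return_after_news > POS_THRESHOLD:
        return "positive"
    if item.return_after_news < NEG_THRESHOLD:
        return "negative"
    return "neutral"

news = [
    NewsItem("Company beats earnings expectations", "XYZ", 0.034),
    NewsItem("Regulator opens probe into lender", "ABC", -0.051),
    NewsItem("Quarterly results in line with forecasts", "DEF", 0.004),
]

# These (headline, label) pairs can then be used to fine-tune an LLM with LoRA.
dataset = [(n.headline, market_label(n)) for n in news]
print(dataset)
```
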
Figure: FinGPT compared with ChatGLM, Llama 2, GPT-4, and BloombergGPT

FinGPT’s Current Trajectory and Future: July 2023 marks an exciting milestone for FinGPT. The team unveiled a research paper titled “Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models.” Central to this paper is instruction tuning, a technique that enables FinGPT to carry out intricate financial sentiment analysis.
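
To make the idea of instruction tuning concrete, the snippet below shows what a single training example for financial sentiment analysis might look like: an instruction, an input headline, and the expected answer. The prompt template is an assumed format for illustration, not the exact one used in the Instruct-FinGPT paper.

```python
# Illustrative instruction-tuning sample for financial sentiment analysis.
# The template is an assumed format, not the one used in Instruct-FinGPT.
example = {
    "instruction": (
        "What is the sentiment of this financial news? "
        "Answer with one of: positive, negative, neutral."
    ),
    "input": "Shares jump after the bank raises its full-year guidance.",
    "output": "positive",
}

def format_prompt(sample: dict) -> str:
    """Render one sample into the text the model is trained to complete."""
    return (
        f"Instruction: {sample['instruction']}\n"
        f"Input: {sample['input']}\n"
        f"Answer: {sample['output']}"
    )

print(format_prompt(example))
```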

But FinGPT isn’t confined to sentiment analysis alone. In fact, 19 other diverse applications are available, each promising to leverage LLMs in novel ways. From prompt engineering to understanding complex financial contexts, FinGPT is establishing itself as a versatile GenAI model in the finance domain.

How Global Banks are Embracing Generative AI

While early 2023 saw major financial players such as Bank of America, Citigroup, and Goldman Sachs restrict their employees' use of OpenAI's ChatGPT, other players in the industry have opted for a decidedly more welcoming stance.

Morgan Stanley, for instance, has integrated OpenAI-powered chatbots as a tool for their financial advisors. By tapping into the firm’s extensive internal research and data, these chatbots serve as enriched knowledge resources, augmenting the efficiency and accuracy of financial advisory.

In March this year, hedge fund Citadel was reportedly negotiating an enterprise-wide ChatGPT license, with the prospective deployment aimed at bolstering areas like software development and complex information analysis.

JPMorgan Chase is also working on harnessing large language models for fraud detection, with a methodology that analyzes email patterns to identify potential compromises. Beyond that, the bank has set an ambitious target: delivering as much as $1.5 billion in value from AI by the end of the year.

As for Goldman Sachs, they’re not entirely resistant to the allure of AI. The bank is exploring the power of generative AI to fortify its software engineering domain. As Marco Argenti, Chief Information Officer of Goldman Sachs, puts it, such integration has the potential to transform their workforce into something “superhuman.”

Use cases of Generative AI in the Banking and Finance Industry


Generative AI is fundamentally transforming financial operations, decision-making, and customer interactions. Here’s a detailed exploration of its applications:

1. Fraud Prevention: Generative AI is at the forefront of developing cutting-edge fraud detection mechanisms. By analyzing vast data pools, it can discern intricate patterns and irregularities, offering a more proactive approach. Traditional systems, often overwhelmed by the sheer volume of data, might produce false positives. Generative AI, in contrast, continuously refines its understanding, reducing errors and ensuring more secure financial transactions.
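
One common way to operationalize this is to train a model of what normal transactions look like and flag the ones it reconstructs poorly. The minimal autoencoder sketch below (PyTorch, with invented feature dimensions, random stand-in data, and an arbitrary threshold) only illustrates that reconstruction-error idea, not a production fraud system.

```python
# Minimal sketch: flag anomalous transactions via autoencoder reconstruction error.
# Feature dimensions, threshold, and training details are illustrative assumptions.
import torch
import torch.nn as nn

class TransactionAutoencoder(nn.Module):
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(), nn.Linear(8, 4))
        self.decoder = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TransactionAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal_transactions = torch.randn(1024, 16)  # stand-in for scaled legitimate transactions

# Train the autoencoder to reconstruct "normal" behaviour only.
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(normal_transactions), normal_transactions)
    loss.backward()
    optimizer.step()

# At inference time, transactions the model reconstructs poorly are flagged.
with torch.no_grad():
    new_batch = torch.randn(8, 16)
    errors = ((model(new_batch) - new_batch) ** 2).mean(dim=1)
    flagged = errors > errors.mean() + 2 * errors.std()  # illustrative threshold
print(flagged)
```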

2. Credit Risk Assessment: The traditional methods of evaluating a borrower’s creditworthiness, while reliable, are becoming outdated. By weighing diverse parameters, from credit histories to subtle behavioral patterns, generative AI models can build a more comprehensive risk profile. This not only enables safer lending but also serves a broader clientele, including borrowers underserved by traditional metrics.

3. Augmenting Customer Interaction: The financial world is witnessing a revolution in customer service, thanks to generative AI-powered NLP models. These models are adept at comprehending and responding to varied customer queries, offering personalized solutions promptly. By automating routine tasks, financial institutions can reduce overheads, streamline operations, and most importantly, enhance client satisfaction.

4. Personalized Financial Planning: One-size-fits-all is a relic of the past. Today’s customers demand financial planning tailored to their unique needs and aspirations, and generative AI excels here. By analyzing data, from spending patterns to investment preferences, it crafts individualized financial roadmaps. This holistic approach leaves customers better informed and better equipped to navigate their financial futures.

5. Algorithmic Trading: Generative AI’s analytical prowess is proving invaluable in the volatile world of algorithmic trading. By dissecting data – from market trends to news sentiment – it provides incisive insights, enabling financial experts to optimize strategies, anticipate market shifts, and mitigate potential risks.
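
As a toy illustration of how sentiment feeds a strategy, the sketch below turns daily aggregated sentiment scores into long/flat/short signals. The scores, thresholds, and naive decision rule are assumptions for demonstration only, not a real trading strategy.

```python
# Toy sketch: convert daily aggregated sentiment scores into trading signals.
# Scores, thresholds, and the decision rule are illustrative assumptions only.
daily_sentiment = {
    "2023-07-03": 0.41,   # strongly positive news flow
    "2023-07-04": 0.05,   # roughly neutral
    "2023-07-05": -0.37,  # strongly negative news flow
}

LONG_THRESHOLD = 0.2
SHORT_THRESHOLD = -0.2

def signal(score: float) -> str:
    """Map an aggregated sentiment score in [-1, 1] to a position."""
    if score > LONG_THRESHOLD:
        return "long"
    if score < SHORT_THRESHOLD:
        return "short"
    return "flat"

for day, score in daily_sentiment.items():
    print(day, signal(score))
```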

6. Strengthening Compliance Frameworks: Anti-Money Laundering (AML) regulations are critical in maintaining the integrity of financial systems. Generative AI simplifies compliance by sifting through intricate transactional data to pinpoint suspicious activities. This not only ensures financial institutions adhere to global standards but also significantly reduces the chances of false positives, streamlining operations.

7. Cybersecurity: With cyber threats constantly evolving, the financial sector needs agile solutions, and generative AI offers exactly that. By implementing dynamic predictive models, it enables faster threat detection and fortifies financial infrastructure against potential breaches.

However, as is the case with any evolving technology, generative AI does come with its set of challenges in the finance industry.

The Challenges

  1. Bias Amplification: AI models, as sophisticated as they are, still rely on human-generated training data. This data, with its inherent biases—whether intentional or not—can lead to skewed results. For instance, if a particular demographic is underrepresented in the training set, the AI’s subsequent outputs could perpetuate this oversight. In a sector like finance, where equity and fairness are paramount, such biases could lead to grave consequences. Financial leaders need to be proactive in identifying these biases and ensuring their datasets are as comprehensive and representative as possible.
  2. Output Reliability & Decision Making: Generative AI can at times produce results that are wrong or misleading, often termed ‘hallucinations’. These missteps are somewhat expected as AI models refine and learn, but the repercussions in finance, where precision is non-negotiable, are severe. Relying solely on AI for critical decisions, such as loan approvals, is perilous. Instead, AI should be viewed as a sophisticated tool that assists financial experts, not one that replaces them. It should handle the computational weight, providing insights for human professionals to make the final, informed decisions.
  3. Data Privacy & Compliance: Protecting sensitive customer data remains a significant concern with generative AI applications. Ensuring the system adheres to global standards like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) is crucial. AI may not inherently know or respect these boundaries, so its use must be moderated with stringent data protection guidelines, particularly in the financial sector where confidentiality is paramount.
  4. Quality of Input Data: Generative AI is only as good as the data fed to it. Inaccurate or incomplete data can inadvertently lead to subpar financial advice or decisions.

Conclusion

From enhancing trading strategies to fortifying security, Generative AI applications are vast and transformative. However, as with any technology, it’s essential to approach its adoption with caution, considering the ethical and privacy implications.

Those institutions that successfully harness the prowess of generative AI, while simultaneously respecting its limitations and potential pitfalls, will undoubtedly shape the future trajectory of the global financial arena.
