
Generative AI is transforming how problems are analyzed and how solutions are designed and developed, opening a new era of innovation. We are seeing groundbreaking experiments in augmentation across many domains, both in new solutions and in enhancements to existing ones. Large Language Models (LLMs) are a powerful subset of generative AI that uses language as its primary medium: they excel at understanding, reasoning about, and responding in human language. LLMs are changing the way humans interact with AI, enabling more natural communication and collaboration. This area, known as Human-AI Interaction (HAX), aims to let humans and AI tackle difficult problems together.

Generative AI and LLMs

Strictly speaking, LLMs are one type of generative AI. Generative AI covers a broader canvas of machine creativity, stretching from language understanding and generation to multimodal content generation, including realistic images, artistic creations, and music composition. It does not merely replicate what it has seen during training; it also produces new forms of expressive content.

LLMs are artificial neural networks trained on massive amounts of linguistic data (text). Their large number of parameters lets them learn patterns across problems and situations, making them impressive at understanding and generating text, retrieving information, and summarizing. As a type of generative AI, LLMs complement other generative modalities with their strong capabilities in language understanding, processing, reasoning, and problem solving.

Generative AI is not only reshaping graphic design, music creation, and filmmaking but also making groundbreaking advances in fields like software design and development, business intelligence, financial planning, and asset and investment management. However, for generative AI to work with humans or software systems, there must be an effective communication mechanism between the generative AI model and those humans or systems. This is where LLMs come into play, contributing their language understanding and processing skills to the rest of the generative AI stack. The effectiveness of generative AI depends heavily on how it is instructed to perform a task. Crafting those instructions is called prompt engineering: a prompt is natural-language text used to direct or request a generative AI model to perform a task. As generative AI subsystems, LLMs provide the mechanisms to understand these prompts and respond accordingly.
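To make the idea of prompt engineering concrete, here is a minimal sketch of how a structured prompt might be composed before being sent to a model. The `build_prompt` helper and the role/task/data structure are illustrative assumptions, not a standard API; any real LLM provider's client library would consume the resulting string.

```python
# A minimal illustration of prompt engineering: composing a structured
# natural-language instruction for an LLM. The helper and its fields are
# hypothetical; real systems send the resulting text to a model API.

def build_prompt(role: str, task: str, data: str) -> str:
    """Compose a prompt from a role, a task description, and input data."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Input:\n{data}\n"
        f"Answer:"
    )

prompt = build_prompt(
    role="a financial analyst assistant",
    task="Summarize the key risks in three bullet points.",
    data="Q3 revenue fell 12% while debt service costs rose.",
)
print(prompt)
```

The same model can behave very differently depending on the role and task wording, which is why iterating on this text is a skill in its own right.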

LLM as Personal AI Assistant

A large language model can be your general-purpose chatbot companion, lifelong learning partner, translator, code generator, personal assistant, teaching assistant, content moderator, data analyzer, or planner: whatever you need it to be, provided you can communicate your needs effectively!

LLM Empowered Business Solutions

Beyond personal use, LLM-augmented business solutions may include, but are not limited to, enhanced customer service and marketing, improved market research and analysis, streamlined operations and productivity, and finance and investment management.

The total market value of all publicly traded companies is approximately 100 to 200 trillion US dollars (The $109 Trillion Global Stock Market in One Chart; Companies ranked by Market Cap). A 2023 report by Statista estimated that the total value of investments managed by firms such as mutual funds, hedge funds, and pension funds, together with other asset classes, is around 140 trillion US dollars (Assets managed by hedge funds globally 2023 | Statista). Given this massive market, potentially hundreds of trillions of USD, generative AI and LLM-augmented software solutions have the potential to transform financial planning and investment management by providing powerful predictive analytics, optimizing investment strategies, and enhancing decision-making processes in the financial industry.

Generative AI and LLMs make possible enhanced user experiences: conversational financial planning, customizable investment dashboards, AI-powered financial analyst assistants, advanced risk management through stress testing with generated scenarios, and optimized portfolios through algorithmic rebalancing. Moreover, these capabilities are not limited to new solutions like those above; they can also be integrated into existing systems to complement and enhance them. This helps individuals and businesses make quick, efficient, and smarter financial decisions for both short- and long-term goals.

Financial systems rely on the accuracy and quality of data when making decisions, managing risks, and ensuring compliance. Data quality issues are a persistent challenge and pose significant risks to financial stability and to consumer trust in financial systems. LLMs can help address these challenges, improving data quality for financial systems by automating data cleansing, enrichment, and augmentation.

LLM Knowledge Gaps: Introducing RAG

Even though LLMs are trained on massive amounts of data, they know nothing about your personal, organizational, or enterprise data. They also struggle with knowledge-intensive tasks where domain-specific knowledge is continuously updated. This is where a generative AI technique called Retrieval Augmented Generation (RAG) comes in.

Complementing LLMs with Missing Patterns: Synergizing RAG and LLMs

RAG lets an LLM draw on your targeted data without modifying or affecting the underlying model. This targeted information is not only more up to date than the LLM's training data but also specific to your organization, so the LLM becomes more context-aware and can give more appropriate and accurate answers. As an example, consider an AI-based financial analyst assistant that uses an LLM to find and suggest investments that best fit your goals. This assistant needs access not only to historical market data but also to current pricing, trading, company financials, risk, and other data that changes continuously. That data builds the context in which the LLM works on your specific investment goals. RAG-powered LLM systems can give investment professionals a more comprehensive understanding of companies, markets, and investment options, leading to smarter investment decisions and potentially improved returns.
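The retrieve-then-augment flow above can be sketched in a few lines. This toy version uses bag-of-words cosine similarity for retrieval; production RAG systems use dense vector embeddings and a vector database, so treat this purely as an illustration of the pipeline shape.

```python
# Toy RAG pipeline: retrieve the most relevant document, then splice it
# into the prompt as context. Bag-of-words similarity is a simplifying
# assumption; real systems use learned embeddings.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    return sorted(
        documents,
        key=lambda d: cosine(q, Counter(d.lower().split())),
        reverse=True,
    )[:k]

docs = [
    "ACME Corp Q3 report: revenue up 8%, new debt issued at 6.1%.",
    "Company picnic scheduled for Friday in the park.",
]
context = retrieve("ACME revenue and debt", docs)[0]
augmented_prompt = (
    f"Context:\n{context}\n\n"
    "Question: How did ACME's revenue change in Q3?"
)
print(augmented_prompt)
```

The augmented prompt is then sent to the LLM, which answers from the retrieved context rather than from its (possibly stale) training data.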

Designing LLM Powered Solutions

Making LLMs solve your problems is not straightforward. You need to develop the skill of communicating with LLMs (prompt engineering) so that the model understands you and vice versa. Augmenting LLMs for effective business solutions depends on how well business requirements are aligned with LLM capabilities, while accounting for challenges such as missing information, hallucination, bias, privacy, and transparency.

LLMs: Selecting the Right One for Your Needs

Looking for a suitable LLM for your solution? Good! But many LLMs are available (dozens, arguably hundreds), and each may be attractive for one reason or another. As noted earlier, making the right choice depends not only on understanding the model but also on how well it aligns with your business requirements. Selection criteria therefore include the functionality and capabilities the LLM offers, accessibility and integration options, performance, cost, and community support.

LLMs: General Purpose vs Customized

Large language models can be either general purpose or customized. General-purpose LLMs are versatile, handle a wide range of tasks, are readily available, and require minimal setup. Customized LLMs, on the other hand, are fine-tuned on domain-specific data and tailored to specific use cases within that domain. If you want broad functionality, ease of access, and cost effectiveness, a general-purpose LLM is the better choice. If you need accuracy within a specialized domain and cost is less of a concern, a customized LLM is for you. An alternative is a general-purpose LLM empowered with RAG over a tailored knowledge base.

Evaluating LLMs: Some Technical Considerations

The number of parameters is a good (though not the only) indicator of an LLM's size and capabilities. In non-technical terms, parameters encode the situations, problems, and experiences the model passes through during training, and they govern both learning and generation. The more parameters a model has, the more situations and data patterns it can learn. However, this comes at the cost of increased computational complexity.
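One practical consequence of parameter count is memory footprint. A rough rule of thumb, sketched below, is parameters times bytes per parameter (2 bytes for half-precision weights); the model sizes shown are generic examples, and the estimate ignores activation memory and other runtime overhead.

```python
# Back-of-the-envelope memory estimate for an LLM's weights alone.
# Assumes fp16 storage (2 bytes/parameter); actual deployments also
# need memory for activations, the KV cache, and framework overhead.

def weight_memory_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GiB needed just to hold the model weights."""
    return num_params * bytes_per_param / (1024 ** 3)

for name, params in [("7B", 7e9), ("70B", 70e9)]:
    print(f"{name} model: ~{weight_memory_gib(params):.0f} GiB in fp16")
```

This is one reason larger models are typically consumed via hosted APIs rather than run locally.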

From a software development perspective, proprietary general-purpose LLMs offer accessibility through API interfaces. Utilizing LLM API access is ideal for those seeking a fast, efficient solution with minimal setup time. Conversely, if you possess technical expertise and ample computational resources, opting for an open-source LLM grants you full control over its functionalities.

One important consideration when selecting an LLM is the size of its context window: the maximum number of tokens (amount of text) the LLM can consider while performing a task or generating a response. It determines how much history the LLM can refer to while processing the current input or prompt. Long context windows help capture complex relationships with long-distance dependencies in the data, which can improve performance. But the context window should be used carefully: most paid LLM APIs charge per token, with prompted and generated tokens often priced separately.
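Because input and output tokens are often billed at different rates, filling a long context window has a direct cost. The sketch below estimates a single request's cost; the per-1K-token prices are hypothetical placeholders, so substitute your provider's actual pricing.

```python
# Back-of-the-envelope cost estimate for one LLM API call.
# Prices per 1K tokens are hypothetical; check your provider's pricing.

def estimate_cost(prompt_tokens: int, output_tokens: int,
                  price_in_per_1k: float = 0.01,
                  price_out_per_1k: float = 0.03) -> float:
    """Input (prompted) and output (generated) tokens billed separately."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# e.g. a long retrieved context plus a short generated answer
print(f"${estimate_cost(prompt_tokens=6000, output_tokens=500):.4f}")
```

Trimming retrieved context to only what the task needs keeps both latency and per-request cost down.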

Conclusion

No doubt, generative AI and large language models (LLMs) are reshaping the landscape of computing and its applications. We are witnessing a shift from traditional code-centric development to conversational interface-based development. Generative AI is not only expanding the horizons of discovery and innovation but also challenging the boundaries of decidability and intractability in computing. AI-powered solutions are quick in automating basic and intermediate business processes, opening doors for smarter and more innovative solutions. However, leveraging generative AI and LLMs for such solutions requires a deeper understanding of computing problems than ever before. This is crucial to address key challenges like bias, explainability, transparency, and responsible development.

Dr. Mutee ur Rehman

PhD in Natural Language Processing | Chief AI Officer at Sarovar Inc | Professor of Computer Science | Bridging academia and industry through AI innovation
