What a year it has been! The launch of ChatGPT opened the AI demand floodgates, particularly for generative AI: large language models (LLMs) for text and the new generation of models for manipulating images and voice.
At Reaktor, we’ve met with dozens of our existing and potential new clients to discuss applying generative AI in a business context. Most of the demand has centered around using large language models to assist with or automate manual tasks involving textual information.
After a series of workshops, concepts, prototypes, deadlines, and some scar tissue, we are eager to share our insights. Here are five things we’ve learned while developing new GenAI applications:
1. Getting started is both easy and affordable
APIs for large language models, such as those offered by OpenAI, are user-friendly and provide a good developer experience. Developers can quickly grasp the basics of LLMs, prompt engineering, and related APIs, often getting started within hours, if not minutes. Excellent documentation, examples, tutorials, and interactive playgrounds make the initial steps for new AI developers an exhilarating experience.
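To give a sense of just how little code the first experiment takes, here is a minimal sketch in Python, assuming the OpenAI Python SDK (v1.x) and an `OPENAI_API_KEY` environment variable; the model name, prompt, and parameters are placeholders, not recommendations.

```python
# Minimal sketch of calling a hosted LLM, assuming the OpenAI Python SDK (v1.x)
# and an OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model your vendor offers
    messages=[
        {"role": "system", "content": "You summarize internal meeting notes in three bullet points."},
        {"role": "user", "content": "Notes: the team agreed to pilot the tool with support staff first."},
    ],
    temperature=0.2,  # lower temperature for more predictable, task-oriented output
)

print(response.choices[0].message.content)
```

A few lines like these, plus the vendor's playground for iterating on the prompt, are usually enough for a first working demo.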
Many models are readily accessible through model vendors and cloud providers via pay-as-you-go APIs. For individual developers and small-scale internal testing, API usage costs are low. Pricing may only become a consideration for broader deployments targeting a larger user base. The continuous development of hardware and algorithms required to run the deep neural networks that power GenAI models is driving prices down while improving performance – reflecting the same trends observed with Moore's Law throughout the history of IT.
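To make the cost argument concrete, a back-of-envelope calculation is enough. The per-token prices below are purely illustrative assumptions (check your vendor's current price list), but the shape of the math holds: cost scales linearly with requests and tokens, so prototyping is cheap and pricing only starts to matter at scale.

```python
# Back-of-envelope API cost estimate. The prices are ILLUSTRATIVE ASSUMPTIONS,
# not real vendor pricing; substitute current figures from your provider.
PRICE_PER_1K_INPUT_TOKENS = 0.0005   # assumed, in dollars
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # assumed, in dollars

def monthly_cost(requests_per_day, input_tokens, output_tokens, days=30):
    per_request = (
        input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return requests_per_day * days * per_request

# A single developer prototyping vs. a feature rolled out to a large user base.
print(f"Prototype:  ${monthly_cost(50, 1500, 500):.2f} per month")
print(f"Production: ${monthly_cost(50_000, 1500, 500):,.2f} per month")
```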
2. Designers are essential
Relying solely on technology and programmers is not sufficient. Whether we are building new tools for knowledge workers or integrating new features into digital services, the user experience is critical to getting them adopted. Moreover, product and service design plays a pivotal role in weaving the new capabilities of AI seamlessly into existing workflows.
When creating prototypes, it's crucial to validate not just the technical feasibility but also the viability of the use case. Designers must familiarize themselves with the new possibilities and the limitations presented by AI. Close collaboration between developers and designers is vital in exploring new ideas and experimenting with proofs of concept.
Yes, you do need designers.
3. Data, data, data
Data truly is the foundation of AI, playing a critical role both in the development of AI models and in their application to real-life use cases. Getting started is easy: prompt ChatGPT and you receive useful, actionable responses. LLMs possess surprisingly extensive knowledge, having digested a large share of the public internet. However, their understanding is general. LLMs have little to no knowledge of your business's internal processes and data. The more unique, high-quality data you can provide to an LLM, the more effectively AI can assist you, as the sketch below illustrates.
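One common pattern for putting that unique data to work is to retrieve the most relevant internal snippets and place them in the prompt alongside the user's question (often called retrieval-augmented generation). The sketch below is a deliberately naive, self-contained illustration of the idea; the keyword scoring and the `build_prompt` helper are hypothetical stand-ins for a real search index or vector store.

```python
# A deliberately naive sketch of grounding an LLM prompt in your own data.
# In practice you would use a proper search index or vector store; the scoring
# and prompt format here are illustrative assumptions.
internal_docs = [
    "Refund policy: customers can return products within 30 days with a receipt.",
    "Shipping: orders over 100 EUR ship free within the EU.",
    "Support hours: weekdays 9-17 EET, excluding public holidays.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Place retrieved company data in the prompt so the model answers from it."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return (
        "Answer using only the company information below. "
        "If the answer is not there, say you don't know.\n"
        f"Company information:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("Do you offer free shipping?", internal_docs))
```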
As ChatGPT becomes the new normal, your unique data becomes your differentiating competitive advantage. The challenge is how to wield your data effectively.
In the increasingly AI-driven business world, the significance of data platforms and the data maturity of organizations are becoming more critical than ever. Your existing processes and systems generate data around the clock. The real challenge lies in identifying, understanding, enriching, and contextualizing data for AI – accelerating the momentum of your data-AI-business flywheel. Look at how the digital FAANG giants do it!
4. Human oversight or other guardrails are needed
Even though LLMs are remarkably smart, they can sometimes generate false or fabricated information. A certain level of unpredictability appears to be an inherent characteristic of generative AI models – both a source of creativity and a potential for error. Therefore, validating LLM outputs and implementing guardrails are necessary before presenting results to the end user.
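What a guardrail looks like depends on the use case, but even a simple programmatic check before anything reaches the user goes a long way. The sketch below assumes the model was asked to reply with structured JSON and that anything failing validation should fall back to human review; the field names, banned-phrase list, and `needs_human_review` flag are illustrative assumptions.

```python
# A simple guardrail sketch: validate the model's output before showing it to a
# user, and fall back to human review when the checks fail. Field names and the
# banned-phrase list are illustrative assumptions, not a complete safety system.
import json

REQUIRED_FIELDS = {"answer", "source"}
BANNED_PHRASES = ["guaranteed", "legal advice"]  # example policy words to block

def check_llm_output(raw_output: str) -> dict:
    """Return the parsed answer, or flag it for human review if validation fails."""
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        return {"needs_human_review": True, "reason": "output was not valid JSON"}

    if not REQUIRED_FIELDS.issubset(parsed):
        return {"needs_human_review": True, "reason": "missing required fields"}

    if any(phrase in parsed["answer"].lower() for phrase in BANNED_PHRASES):
        return {"needs_human_review": True, "reason": "policy phrase detected"}

    return {"needs_human_review": False, "answer": parsed["answer"], "source": parsed["source"]}

print(check_llm_output('{"answer": "Returns are accepted within 30 days.", "source": "refund-policy"}'))
```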
It's advisable to first launch new GenAI-powered tools internally in your organization. By collecting internal feedback and iterating, you gain a better understanding of, and trust in, the capabilities and quality of AI-generated responses. This validation process helps you determine the necessary guardrails, moderation, or feedback mechanisms before launching new AI-powered features externally.
5. Embrace the iterative development process
As we leverage GenAI to develop new features for digital services, we find ourselves navigating uncharted territory. No one yet knows definitively what works and what doesn't, nor which features users genuinely value. The industry has yet to establish best practices for leveraging generative AI. To discover effective approaches, it's necessary to experiment broadly – the quicker the iteration, the better.
Once you've mastered the APIs, understood the basic concepts, and configured your development environment, prototyping can proceed remarkably quickly, often within days or even hours. This eliminates the need to dedicate weeks or months to developing and testing a proof of concept.
The ability to test rapidly, fail quickly, and iterate swiftly lets us explore this new frontier in a methodical yet expedient manner. Casting a wide net in experimentation is an effective and economical way to map the territory and uncover valuable use cases, so we can identify and invest further in the validated scenarios that promise the greatest return.
DATA, AI & LLM SOLUTIONS
How Reaktor can help you with AI
From generative AI to data strategy and custom models, we help you build transformative AI solutions with value-first approach.
Discover our AI offering