Many organizations struggle to find a tried-and-true path to consistent success in their data and AI efforts. In fact, according to Harvard Business Review, 80% of AI projects fail. In our experience, there are three typical reasons why: first, solution performance isn’t good enough and there is uncertainty around what can be built; second, the solution does not provide end-user value; or third, the solution does not provide business value.
So what can be done? The first piece of the puzzle is correctly identifying the problems we should try to solve with AI. From there, we can start figuring out how to solve them, and draw distinctions between what AI and what traditional software development can do. In this article, we look at some of the most common pitfalls in planning and executing AI projects. Drawing from extensive experience, we hope to bring some clarity to navigating this unknown beast and help you make informed decisions throughout your AI experiments.
Turning data into benefits for end-users and business
Transforming raw data into tangible benefits for end-users, and subsequently into business success, is much easier said than done. It’s not uncommon for people to get excited about a new technology and immediately start figuring out how to put it to use within their organization.
This approach is flawed if you are hoping these experiments will turn into short-term solutions for particular problems. Combining R&D with “off-the-shelf” capabilities rarely works, and conflating the two overlooks the reason businesses should take on data projects in the first place: to solve a specific, pressing business problem. Hammering technology into any issue without a clear business goal in mind leads to subpar results, if not outright failure. The temptation to use AI simply because it’s possible can lead organizations down a path of misallocated resources and unfavorable outcomes.
A common oversight is the assumption that existing data is readily adaptable for any new, innovative applications just because it supports current operations. This misunderstanding fails to account for the need to substantially transform data to make it suitable for different purposes. The nature of data as a universal asset is largely a myth. In most cases, its utility is highly dependent on the context and specific problem it is intended to solve.
Many companies find themselves grappling with fragmented data landscapes where acquiring and preparing valid data sets for specific problems is a formidable challenge. The work involved in streamlining data governance, ensuring compliance, and simply understanding what data is available is often underestimated and overshadowed by the more glamorous aspects of data projects. And this isn’t just a technical challenge; in-depth domain knowledge is often required to gain a semantic understanding of the data as well.
This is particularly true for “classical” machine learning and data science: before any meaningful results can be achieved, the basic infrastructure for managing and analyzing data has to be robust and coherent. GenAI has a lot to offer here as well. Thanks to its surprisingly universal ability to transform data, not only do PoCs become easier to get off the ground, but building transformations for different data sets also becomes significantly more feasible.
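To make this concrete, here is a minimal sketch of an LLM-style data transformation: free-text records are mapped onto a fixed schema through a prompt, with the model call injected as a callable. The prompt template, field names, and the deterministic stub standing in for a real model are all illustrative assumptions; in practice you would plug in your provider’s chat API and validate the returned JSON.

```python
import json

# Illustrative prompt; a real one would include examples and format constraints.
PROMPT_TEMPLATE = (
    "Extract the fields {fields} from the record below and "
    "return them as a JSON object.\n\nRecord: {record}"
)

def transform_record(record, fields, complete):
    """Map a free-text record onto a target schema.

    `complete` is any callable that takes a prompt string and returns JSON
    text; injecting it lets a real model client be swapped in later.
    """
    prompt = PROMPT_TEMPLATE.format(fields=fields, record=record)
    return json.loads(complete(prompt))

# Deterministic stub standing in for a real model call (assumption: the
# model returns well-formed JSON; production code would validate and retry).
def fake_model(prompt):
    return '{"name": "Acme Oy", "country": "FI", "revenue_eur": 1200000}'

row = transform_record(
    "Acme Oy, Helsinki-based, FY23 revenue EUR 1.2M",
    ["name", "country", "revenue_eur"],
    fake_model,
)
print(row)
```

The same prompt-and-schema pattern can be pointed at very different data sets by changing only the field list, which is precisely why transformation work that used to require bespoke parsers now gets off the ground faster.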
People and processes
In recent years, we’ve noticed a trend of placing unreasonably high expectations on data scientists. Many organizations have pinned their hopes on data scientists to single-handedly transform the business and unlock a new path forward. This perspective is not only unrealistic, but also unfair. These individuals often lack the mandate, resources, and sometimes even the necessary skills to meet these inflated expectations. The hype surrounding big data and its transformative power has contributed to this disconnect, and overshadowed the complex, foundational work required to make data truly valuable to your organization.
Despite the technical complexities, the success of data and AI initiatives is more often tied to people than to the technology itself, something we’ve already proven many times over with traditional software development. Bridging the gap between data capabilities and business objectives requires a conscious effort to involve both domain experts and data professionals. It’s crucial that the people who will be using the new tools understand, accept, and even welcome the change, and that happens by co-creating solutions together with the end-users. This collaborative approach not only brings more valuable insights to the table, but also ensures that the solutions developed are deeply rooted in real-world needs and practicalities.
Understanding that a successful implementation of data and AI solutions requires a varied set of skills and perspectives naturally expands the circle of involved stakeholders. From software developers to business analysts, and now increasingly designers and even end-users, the collaborative effort to harness the potential of AI is becoming more and more multidisciplinary. Integrating diverse points of view from the early stages of designing a project all the way through to execution will help position you for success.
As with best practices in traditional software development, taking an iterative approach usually leads to the best results. With short feedback cycles and quick iterations, initial hypotheses get turned into working prototypes.
Strategies and solutions
One successful strategy for navigating the uncertainties of new data and machine learning initiatives is to adopt what we call a portfolio approach. Instead of betting everything on a single, monolithic solution, diversifying efforts across a range of initiatives keeps you flexible and adaptable. This mindset encourages experimentation and learning, mitigating the risk associated with high-stakes projects. Since most data and AI experiments fail, failing fast helps you find the promising use case that actually takes off.
Another practical strategy is to identify use cases where even moderate performance provides value. For example, instead of trying to replace experts, give them a variety of simple but highly enthusiastic assistants. This approach sidesteps the need for near-perfect accuracy, mitigates risk, faces less resistance, and empowers your experts to fulfill their potential.
In many cases, actual machine learning might not even be necessary; basic analytics and statistics can get the job done. In other scenarios, the complexity of AI is overstated: the models are in reality fairly simple, with only a fraction embodying the sophistication the ‘AI hype’ suggests. LLMs like OpenAI’s GPT have also dramatically democratized access to AI technologies, lowering the technical barriers that once made it harder for developers to get involved.
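As an example of “statistics can get the job done”: flagging anomalous values with a plain z-score often covers what a stakeholder asked an “AI model” to do. The sketch below uses only Python’s standard library; the order counts and the threshold of two standard deviations are illustrative choices, not a recommendation.

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu = mean(values)
    sigma = stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Daily order counts with one obvious spike (illustrative data).
orders = [102, 98, 105, 97, 101, 99, 103, 350, 100, 96]
print(zscore_outliers(orders))  # the spike of 350 is flagged
```

A dozen lines like this can be shipped, explained, and audited in an afternoon, which is exactly why it is worth checking whether the simple approach is enough before reaching for a model.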
With easier access and more efficient tooling, developers are able to shorten development cycles – making it possible to effectively test uncertain projects and bridge the gap between technology and its practical applications. As a result, solutions can be conceptualized and delivered in record time, also providing more room for experimentation.
Creating mental models around AI
The path to effectively leveraging data and AI in any business is complex. Building appropriate mental models around AI: what it can do, what its limitations are, and where there is room to play, will help orient your teams and build trust in this new technology and its potential value.
AI challenges us to reconsider our motivations, expectations, and strategies for integrating these powerful tools into our operations. Developing AI tools requires users and stakeholders to be involved in all phases of the design and development process – constantly evaluating the value proposition and value actualization. By anchoring our efforts in clear business objectives, embracing the importance of foundational data management work, and encouraging collaboration across disciplines, we can navigate the pitfalls and unlock the true potential of data and AI.
How Reaktor can help you with AI
From generative AI to data strategy and custom models, we help you build transformative AI solutions with a value-first approach.