4 Overlooked Considerations That Can Cause Your AI Strategy to Fail
If you're feeling pain in scaling your AI strategy,
then this guide is for you.
AI is considered a driving force powering the next age of human progress and computing platforms. Early experience suggests, however, that achieving success with AI, machine learning, and deep learning is harder than expected: the transformative power of AI is not as simple as flipping a light switch.
AI is the second-most important initiative to enterprise leaders today, second only to using data-driven insights to improve products and services, according to Forrester Consulting.
The No. 1 goal for AI-based projects is increasing revenue growth (43%), followed closely by improving employee productivity, improving customer experience (CX), and increasing profitability. Not surprisingly, the top use cases mirror these goals, with over 70% of firms currently using or expanding their use of AI to support customer service interactions, operational efficiency, and business intelligence scenarios.
Every organisation actively advancing its AI strategy and capabilities is doing so not in isolation, but rather while dealing with the direct dependencies and impact AI places on the organisation's people, data, processes and technology.
AI Success is Driving the Next Generation of Market Leaders
With the rapidly evolving and transformative effects of the fourth platform, failure to participate is no longer a viable business option. Companies that wish to digitally transform must understand that embracing the status quo will leave them struggling to keep up with competitors that recognized the opportunity before them.
AI has the ability to create incredible value by decreasing costs, increasing productivity, and improving customer experiences.
Up until 2019/2020, enterprises focused on experimenting with AI within specific areas or functions. According to Forrester, enterprises that have achieved success with AI are seven times more likely than firms that have not scaled AI to be the fastest-growing organisations in their industries. Conversely, those that have not scaled AI are 1.4 times more likely to be merely average in revenue growth compared to competitors.
Why Organisations are Failing at AI
Data Quality: 90% of firms are severely challenged in scaling AI across the enterprise, with data as the driving force behind this difficulty.
Lack of AI Understanding: One of the most perplexing findings of the same Forrester study is that 52% of respondents simply don't know what their AI data needs are. If enterprises don't know what they need, they may blindly jump into AI initiatives that have little chance of success or, worse, may never try in the first place.
AI Skills Shortage: Without the right skills in place, teams will struggle with solutions and fail to carry out use cases successfully. The skills shortage is real, and many enterprises underestimate the time needed to ramp up to proficiency (hint: it's more than 12 months).
Not Thinking Beyond Compute: Simply put, there is no AI without information architecture (IA). Many organisations start by focusing on the compute side of AI, investing in GPUs. While GPUs are critical to AI success, this singular focus can, and sometimes does, lead to the disruption or outright failure of AI projects.
The IA that handles an AI pilot project may not function well when scaled across the enterprise. Organisations must review their entire information architecture for potential breakpoints (performance, cost, security) across compute, data storage and interconnectivity as they start to scale AI across the enterprise.
Data Quality is the Top Success Factor for AI. What are the others?
Without properly prepared and curated data, AI initiatives fail. While data quality and data standardisation are the top AI success factors, they are not the only ones:
Data Integration – The ability to connect AI platforms with analytics/business intelligence platforms, along with connecting multiple data sources.
People – Access to abundant data science and AI/ML engineering skills is critical. As noted above, these skills are in short supply in 2020, as demand surges across many industries.
Tech Infrastructure – GPUs are a must, no arguments there. But just as with GPUs for compute, not all storage is created equal for AI and data workloads. Many general-purpose platforms were not designed with AI in mind; purpose-built platforms like IBM Spectrum Scale and IBM Cloud Object Storage have been designed specifically to handle data and AI workloads.
In addition, the next generation of IA is being designed to scale up and out with minimal to no disruption to production operations. The current thinking behind multi-cloud and hybrid cloud architectures is to ensure this next-generation IA scales not only in performance but also in cost and security.
Data Management Processes – Managing data manually is a challenge, especially when training AI. Organisations that succeed at scaling AI think ahead here and use automation to manage data efficiently.
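As a toy illustration of what such automation can look like, the sketch below applies a simple age-based policy to a file catalog, flagging cold data for migration to cheaper capacity storage. The paths, thresholds, and tier names are all hypothetical; real lifecycle policies in enterprise storage platforms are far richer, but the idea is the same:

```python
import time

DAY = 86400
NOW = time.time()

# Hypothetical catalog of (path, last-access time in epoch seconds).
catalog = [
    ("datasets/train/images_2019.tar", NOW - 400 * DAY),
    ("datasets/train/images_2020.tar", NOW - 10 * DAY),
    ("models/checkpoint_latest.pt",    NOW - 1 * DAY),
]

def tiering_plan(files, cold_after_days=180):
    """Return (path, tier) pairs: files untouched for longer than the
    threshold go to capacity storage, recently used files stay on flash."""
    plan = []
    for path, last_access in files:
        age_days = (NOW - last_access) / DAY
        tier = "capacity" if age_days > cold_after_days else "flash"
        plan.append((path, tier))
    return plan

for path, tier in tiering_plan(catalog):
    print(f"{tier:8} {path}")
```

Running the policy by hand like this is exactly the manual work the automation is meant to replace; in practice the same rule would run on a schedule against the storage system's own metadata.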
Key IA Considerations within Each Stage of the AI Journey
Each AI journey or initiative contains four stages: a) collect the data, b) organise the data, c) analyse the data, and d) infuse insights into the organisation. AI is driven by data, and how your data is stored can significantly affect success. The specialists at IBM outline the impact of storage across the four stages:
Data Collection. The raw data for AI workloads can come from a variety of structured and unstructured sources, and you need a reliable place to store it. The storage medium could be a high-capacity data lake or a fast tier, like flash storage, especially for real-time analytics.
Data Organisation. Once stored, the data must be prepared since it is in a “raw” format. The data needs to be processed and formatted for consumption by the remaining phases. File I/O performance is a very important consideration at this stage since you now have a mix of random reads and writes. Take the time to figure out what the performance needs are for your AI pipeline. Once the data is formatted, it will be fed into the neural networks for training.
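As a rough illustration of the kind of preparation this stage involves, the sketch below (standard-library Python only, with hypothetical field names and an inline toy dataset) takes raw, mixed-quality records, drops the unusable ones, and scales the features into a numeric form a training pipeline could consume:

```python
import csv
import io

# Toy "raw" data: some rows have missing or non-numeric fields.
RAW = """age,income,label
34,52000,1
,61000,0
29,not_reported,1
41,78000,0
"""

def prepare(raw_text):
    """Parse raw CSV, discard incomplete rows, min-max scale features to [0, 1]."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_text)):
        try:
            rows.append((float(rec["age"]), float(rec["income"]), int(rec["label"])))
        except ValueError:
            continue  # drop rows with missing or non-numeric fields

    ages = [r[0] for r in rows]
    incomes = [r[1] for r in rows]

    def scale(v, lo, hi):
        return (v - lo) / (hi - lo) if hi > lo else 0.0

    # Each sample: ([scaled features], label), ready to feed a training loop.
    return [
        ([scale(a, min(ages), max(ages)), scale(i, min(incomes), max(incomes))], y)
        for a, i, y in rows
    ]

samples = prepare(RAW)
print(len(samples))  # 2 of the 4 raw rows survive cleaning
```

Even in this tiny example, half the raw rows are lost to quality problems, which is why the I/O pattern at this stage is a heavy mix of reads and rewrites rather than simple sequential reads.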
Data Analysis and Infusion. These stages are very compute intensive and generally require streaming data into the training models. Training and analysis are iterative, with repeated tuning and retraining runs used to create the models. Inferencing can be thought of as the sum of the data and training. The GPUs in the servers and your storage infrastructure become very important here because of the need for low latency, high throughput and quick response times. Your storage networks need to be designed to handle these requirements as well as the data ingestion and preparation. At scale, this stresses many storage systems, especially ones not prepared for AI workloads, so it's important to consider specifically whether your storage platform can handle the workload needs in line with your business objectives.
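To make the throughput concern concrete, here is a minimal sketch of the pattern training pipelines commonly use to keep accelerators busy: a background thread prefetches batches from storage into a bounded queue so the compute side rarely waits on I/O. The function names, batch contents, and timings are illustrative stand-ins, not any particular framework's API:

```python
import queue
import threading
import time

def read_batch(i):
    """Stand-in for a storage read; a real pipeline would pull from
    a data lake or flash tier over the storage network."""
    time.sleep(0.01)  # simulated I/O latency
    return [i] * 4    # a toy "batch" of samples

def prefetcher(num_batches, out_q):
    for i in range(num_batches):
        out_q.put(read_batch(i))  # blocks when the queue is full (backpressure)
    out_q.put(None)               # sentinel: no more data

# Bounded queue: storage reads run ahead of compute, but only by 8 batches,
# so memory stays capped while the "GPU" side is continuously fed.
batches = queue.Queue(maxsize=8)
threading.Thread(target=prefetcher, args=(20, batches), daemon=True).start()

processed = 0
while (batch := batches.get()) is not None:
    processed += 1  # here a training step would consume the batch on the GPU

print(processed)  # 20
```

If the simulated read latency were raised, the consumer loop would stall on `get()`, which is exactly the symptom an under-provisioned storage tier produces in a real training cluster.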
Moving Forward at Scale
To compete beyond 2020, organisations will need to keep developing and scaling their AI capabilities in order to remain, or become, the leader in their space. The next paradigm is well under way. The next step is up to you.
TES offers a free IA and Storage Assessment, providing a report outlining where your IA and storage are viable (and deficient) for scaling AI. Many of our clients use this report as a fresh set of eyes to validate their strategy and/or find the breakpoints that could emerge once the scaling effort starts. Request the free assessment here.