As organisations increasingly rely on artificial intelligence (AI) and data analytics to redefine business operations and explore new growth avenues, the significance of strategic planning, data quality, and infrastructure integration has come to the forefront.
According to a senior industry leader, organisations should not adopt AI out of fear of missing out (FOMO) or a herd mentality; adoption requires careful strategic planning, clear objectives, an understanding of data requirements, and the selection of technologies that align with business goals.
“The potential of AI is immense, but its deployment necessitates strategic planning and data initiatives,” said Koyelia Ghosh Roy, Senior AVP – Enterprise BI COE Lead at EXL Service. “Organisations must define clear objectives, understand their data requirements and select appropriate technologies that align with their goals.”
To effectively adopt AI, organisations must first identify the services that will seamlessly integrate with existing systems. “The first step is to identify the services that will connect with all your systems. Next, capture and clean your data meticulously, as this is not an easy task. Finally, ensure data privacy while extracting meaningful insights,” Roy said.
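The three steps Roy outlines can be sketched as a simple pipeline. This is a hedged illustration only: the source systems, field names, and records below are hypothetical stand-ins, not any system EXL uses.

```python
# Sketch of Roy's three steps: connect to source systems, clean the
# captured data, then extract insight without exposing sensitive fields.
# All systems, fields, and records here are hypothetical.

def capture():
    """Step 1: pull records from connected systems (stubbed with fixtures)."""
    return [
        {"order_id": 1, "email": "a@example.com", "total": 120.0},
        {"order_id": 1, "email": "a@example.com", "total": 120.0},  # duplicate
        {"order_id": 2, "email": None, "total": 40.0},
    ]

def clean(records):
    """Step 2: drop duplicate records, keeping the first occurrence."""
    seen, out = set(), []
    for rec in records:
        if rec["order_id"] not in seen:
            seen.add(rec["order_id"])
            out.append(rec)
    return out

def insight(records):
    """Step 3: aggregate for analysis without surfacing the email field."""
    return {"orders": len(records), "revenue": sum(r["total"] for r in records)}

print(insight(clean(capture())))  # {'orders': 2, 'revenue': 160.0}
```

In practice each stage would be a separate service or job; the point is the ordering Roy stresses, with cleaning before analysis and sensitive fields kept out of the output.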
Importance of Data Quality
For most chief information officers (CIOs), a central concern in AI development is the quality of the data used to train models, a painstaking task given the familiar risk of "garbage in, garbage out" (GIGO).
“The data we use to develop models is crucial,” Roy said. “The confidence in these models heavily depends on the quality and relevance of the training data. This is why hyperscalers like Google and AWS, known for their vast and clean data sets, are often relied upon.”
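The GIGO concern Roy raises is often handled with quality gates applied before any training run. Below is a minimal sketch of that idea; the field names, valid ranges, and threshold are all hypothetical.

```python
# Simple quality gate on training rows, illustrating "garbage in,
# garbage out": measure how much of the data is usable before a model
# ever sees it. Field names and thresholds are hypothetical.

def quality_report(rows):
    """Return the fraction of rows that are complete and in range."""
    valid = 0
    for row in rows:
        label, value = row.get("label"), row.get("value")
        if label in {"yes", "no"} and isinstance(value, (int, float)) and 0 <= value <= 100:
            valid += 1
    return valid / len(rows) if rows else 0.0

rows = [
    {"label": "yes", "value": 42},
    {"label": "maybe", "value": 10},   # bad label
    {"label": "no", "value": -5},      # out of range
    {"label": "no", "value": 88},
]
ratio = quality_report(rows)
print(f"{ratio:.0%} of rows usable")   # 50% of rows usable
assert ratio >= 0.5  # gate: refuse to train on mostly-garbage data
```

Real pipelines add many more checks (schema, drift, label balance), but the principle is the same: measure data quality first, and block training when it falls below a defined bar.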
The relevance and reliability of training data are essential for building accurate and trustworthy AI systems, which can significantly impact business outcomes.
The issue of explainability in AI models is also paramount. Organisations need to understand the parameters and data used to train their models to avoid legal and operational pitfalls.
“Understanding what data was used, its relevance, and the associated costs are essential to avoid getting wrapped up in complications,” she explained. According to her, this transparency helps build trust in AI systems, making them more acceptable within the organisation and to external stakeholders.
Integration and Transition Challenges
The integration of AI into business operations often involves complex workflows and multi-agent systems. These systems distribute tasks across multiple agents, enabling more efficient and accurate processing. Describing how AI can transform extensive documents into concise summaries through multi-agent workflows, she said, “We have multiple agents working together – one for research, another for citation, and another for review. This collaborative effort results in efficient and accurate outputs.”
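The workflow Roy describes can be sketched as a chain of agents. In this toy version the agents are plain functions standing in for LLM-backed components, and the document, source name, and selection logic are all illustrative assumptions.

```python
# Minimal sketch of a multi-agent summarisation workflow: one agent
# for research, one for citation, one for review, chained together.
# The agents here are plain functions standing in for LLM-backed ones.

def research_agent(document):
    """Pick the sentences that look most informative (here: the longest)."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return sorted(sentences, key=len, reverse=True)[:2]

def citation_agent(points, source="Annual Report 2024"):
    """Attach a source reference to each point (source name is hypothetical)."""
    return [f"{p} [{source}]" for p in points]

def review_agent(cited_points):
    """Final pass: enforce basic formatting before output."""
    return [p if p.endswith("]") else p + " [unreviewed]" for p in cited_points]

def summarise(document):
    return review_agent(citation_agent(research_agent(document)))

doc = ("Revenue grew twelve percent year on year. Costs were flat. "
       "The board approved a new AI programme across all units.")
for line in summarise(doc):
    print("-", line)
```

The design point is the division of labour: each agent has one narrow responsibility, so any stage can be improved or audited independently, which is what makes the collaborative output both efficient and checkable.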
Transitioning from legacy systems to modern, cloud-based infrastructures is a significant challenge many organisations face, especially large enterprises and the public sector, which have invested in legacy IT infrastructure over the years.
Stressing the importance of having a robust infrastructure that can integrate seamlessly with existing systems, Roy said, “Legacy systems often have duplicate data and versioning issues. Identifying reliable data and ensuring compatibility with new solutions is vital. Hyperscalers provide solutions that facilitate this integration, but careful planning is essential to avoid cost overruns and implementation issues.”
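The duplicate-data and versioning problem Roy mentions usually reduces to keeping one authoritative copy per record during migration. A minimal sketch, assuming a hypothetical record structure with an `id` and a `version` field:

```python
# Sketch of the legacy-migration step: the same record exists in
# several versions, and only the most recent copy should survive.
# The record structure here is hypothetical.

def latest_versions(records):
    """Keep one record per id: the one with the highest version number."""
    best = {}
    for rec in records:
        rid = rec["id"]
        if rid not in best or rec["version"] > best[rid]["version"]:
            best[rid] = rec
    return sorted(best.values(), key=lambda r: r["id"])

legacy = [
    {"id": "A1", "version": 1, "name": "Acme Ltd"},
    {"id": "A1", "version": 3, "name": "Acme Limited"},  # newest A1
    {"id": "A1", "version": 2, "name": "ACME"},
    {"id": "B7", "version": 1, "name": "Beta Corp"},
]
print(latest_versions(legacy))
```

Real migrations also reconcile conflicting field values and track lineage, but identifying the reliable copy, as Roy puts it, is the first step.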
Many industry CIOs, at different stages of their AI journey, said this transition is critical if organisations are to fully leverage AI and cloud capabilities while keeping their data systems both modern and efficient.
Data Privacy and Security
In the age of data-driven decision-making, privacy and security remain top priorities. With the Digital Personal Data Protection Act (DPDPA), 2023 bringing clarity on data privacy, Roy emphasised the need for a comprehensive data architecture that protects personal data while still enabling analysis.
“Maintaining data privacy involves segregating sensitive information and sharing only the essential data for analysis. For instance, in healthcare, you don't need personal details to track treatment progress. What's important is the clinical data, which can be anonymised to protect privacy while still providing valuable insights,” she explained, adding that this approach balances the need for data-driven insights with the imperative of protecting individual privacy.
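Roy's healthcare example can be sketched as a segregation step: personal identifiers are replaced with a pseudonymous token, and only clinical fields are shared for analysis. The field names and salt below are hypothetical, a salted hash is pseudonymisation rather than full anonymisation, and a production system would need to meet DPDPA requirements properly.

```python
# Sketch of segregating sensitive data: keep clinical fields, replace
# identity with a salted hash token. Field names are hypothetical, and
# this is pseudonymisation, not a DPDPA-compliant anonymisation scheme.
import hashlib

CLINICAL_FIELDS = {"diagnosis", "treatment", "visit_date"}

def pseudonymise(record, salt="demo-salt"):
    """Drop identifying fields; keep clinical data plus a stable token."""
    token = hashlib.sha256((salt + record["patient_name"]).encode()).hexdigest()[:10]
    out = {"patient_token": token}
    out.update({k: v for k, v in record.items() if k in CLINICAL_FIELDS})
    return out

record = {
    "patient_name": "A. Sharma",
    "phone": "98xxxxxx01",
    "diagnosis": "Type 2 diabetes",
    "treatment": "Metformin",
    "visit_date": "2024-03-11",
}
print(pseudonymise(record))
```

Because the token is stable for a given patient, treatment progress can still be tracked across visits, which is exactly the balance Roy describes: clinical insight without exposing personal details.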