Manish Garg, the co-founder and Chief Product Officer at Skan.ai, a computer-vision-based process intelligence platform, discusses the transformative potential of AI in making knowledge more accessible within enterprises. This technology can anticipate the information needs of digital workers without explicit requests, improving productivity by cutting the time spent searching for information in outdated systems. Implementing AI-driven systems, however, brings challenges around speed, accuracy, change management, cost, data hygiene, and security.
One of the primary challenges in adopting AI technology is change management: employee support for change initiatives has been declining amid fears of job loss and resistance to new technologies. To address this, an open-source approach to change, one that includes employees in the decision-making process, can improve buy-in and reduce burnout, leading to higher success rates in change initiatives. Embracing this strategy helps keep employees engaged and committed to new technologies.
The speed with which AI responds to digital workers' needs is crucial to its effectiveness in improving knowledge accessibility. Miniaturized models that run on workers' laptops and desktops can process data locally and reserve the cloud for complex tasks, minimizing delays in understanding and acting on information. This strategic distribution of computing enhances responsiveness and efficiency, allowing AI to provide assistance the moment it is needed.
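To make the local-versus-cloud split concrete, here is a minimal Python sketch of one way such a routing layer could look. The class names, the `estimate_complexity` heuristic, and the threshold are illustrative assumptions, not Skan.ai's actual design.

```python
import time

# Hypothetical sketch: serve requests from a small on-device model when
# possible, and fall back to a larger cloud model only for complex tasks.

COMPLEXITY_THRESHOLD = 0.6  # assumed tuning knob, not a real product value

class LocalModel:
    """Stand-in for a miniaturized model running on the worker's machine."""
    def predict(self, request: str) -> str:
        return f"[local] suggestion for: {request}"

class CloudModel:
    """Stand-in for a larger model behind a network call."""
    def predict(self, request: str) -> str:
        time.sleep(0.2)  # simulate network round-trip latency
        return f"[cloud] deep analysis for: {request}"

def estimate_complexity(request: str) -> float:
    """Toy heuristic: longer, multi-step requests score as more complex."""
    return min(1.0, len(request.split()) / 50)

def route(request: str, local: LocalModel, cloud: CloudModel) -> str:
    """Answer simple requests locally for speed; escalate complex ones."""
    if estimate_complexity(request) < COMPLEXITY_THRESHOLD:
        return local.predict(request)
    return cloud.predict(request)

if __name__ == "__main__":
    print(route("find the refund policy", LocalModel(), CloudModel()))
```

The design choice here is latency-driven: the common case never touches the network, so the assistant feels instantaneous, while the cloud is reserved for the minority of requests that justify its cost and delay.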
The high cost of AI adoption is a significant barrier for many companies, since expensive computing operations can make sophisticated AI systems financially infeasible. Miniaturized models and distributed computing strategies that optimize resource use can lower these costs and make AI more viable. Training AI for specific functions and transmitting only essential, non-sensitive data to the cloud further reduces operational and training expenses.
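As a rough illustration of "transmit only essential data," the Python sketch below whitelists a handful of fields before anything leaves the device, so cloud compute and bandwidth are spent only on what the model needs. The event schema and field names are invented for the example.

```python
# Hypothetical sketch: trim an event payload to a small whitelist of
# non-sensitive fields before it is sent to the cloud.

ESSENTIAL_FIELDS = {"app_name", "action_type", "duration_ms"}  # assumed schema

def to_cloud_payload(event: dict) -> dict:
    """Keep only whitelisted fields; everything else stays local."""
    return {k: v for k, v in event.items() if k in ESSENTIAL_FIELDS}

event = {
    "app_name": "CRM",
    "action_type": "copy_paste",
    "duration_ms": 1840,
    "screen_text": "customer SSN 123-45-6789",  # sensitive: never transmitted
    "employee_id": "E-0042",                    # sensitive: never transmitted
}

payload = to_cloud_payload(event)
assert "screen_text" not in payload and "employee_id" not in payload
print(payload)  # {'app_name': 'CRM', 'action_type': 'copy_paste', 'duration_ms': 1840}
```

Beyond privacy, the smaller payload directly cuts the per-request cloud bill, which is what makes this a cost lever as well as a security one.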
Data quality is crucial to the successful deployment of AI: up to 35% of projects face delays or failures due to inadequate or poor-quality data. Thorough data preparation, profiling, refinement, validation, and continuous quality monitoring are essential to guarantee the accuracy and effectiveness of AI models. By adhering to established rules and standards and addressing data issues systematically, companies can ensure reliable performance from their AI systems.
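One simple way to operationalize "rules and standards" is a rule-based validation gate in front of the training pipeline. The following Python sketch is hypothetical: the field names, rules, and 95% quality threshold are assumptions chosen for illustration.

```python
# Hypothetical sketch of rule-based data validation before training:
# profile incoming records, flag violations, and gate the pipeline on a
# minimum quality score.

MIN_QUALITY = 0.95  # assumed gate: reject batches below 95% clean records

def validate_record(rec: dict) -> list[str]:
    """Return a list of rule violations for one record."""
    issues = []
    if not rec.get("app_name"):
        issues.append("missing app_name")
    if rec.get("duration_ms", -1) < 0:
        issues.append("negative or missing duration_ms")
    if rec.get("timestamp") is None:
        issues.append("missing timestamp")
    return issues

def quality_report(records: list[dict]) -> float:
    """Profile a batch and return the fraction of clean records."""
    clean = sum(1 for r in records if not validate_record(r))
    score = clean / len(records) if records else 0.0
    print(f"{clean}/{len(records)} records clean ({score:.0%})")
    return score

batch = [
    {"app_name": "CRM", "duration_ms": 1840, "timestamp": 1700000000},
    {"app_name": "", "duration_ms": 95, "timestamp": 1700000050},
    {"app_name": "ERP", "duration_ms": -3, "timestamp": None},
]

if quality_report(batch) < MIN_QUALITY:
    print("Batch rejected: route to data-cleaning pipeline before training.")
```

Running the gate continuously, rather than once at project kickoff, is what turns data quality from a one-time cleanup into the ongoing monitoring the paragraph describes.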
Data privacy and security concerns are significant hurdles for AI adoption, particularly in industries where sensitive information must be protected. Incorporating privacy-by-design principles, adopting a dual-model approach, conducting regular privacy assessments, and implementing robust protection strategies can safeguard sensitive data and maintain security and compliance while still delivering the benefits of AI. This proactive approach keeps data privacy and security front and center in organizations using AI for knowledge access.
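As one possible example of privacy-by-design in a dual-model setup, a local pre-processing step might scrub identifiers so that only redacted text can ever reach a cloud model. The patterns below are deliberately simple and illustrative; a production system would need far more robust PII detection.

```python
import re

# Hypothetical sketch: redact identifiers on the worker's machine before
# any text is forwarded to a cloud model. Patterns are illustrative only.

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected identifier with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

raw = "Call Jane at 555-867-5309 or jane@example.com re: SSN 123-45-6789."
print(redact(raw))
# Call Jane at [PHONE] or [EMAIL] re: SSN [SSN].
```

Because the scrubbing happens before transmission, the cloud side of the dual-model pair never holds raw identifiers, which simplifies both the security posture and the compliance story.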