PCD / Converge blog

AI projects: From experimentation to engineering principles

Before you can reinvent the world with AI, you need an IT infrastructure to help you design it. Put artificial intelligence to work securely and at scale with a hybrid, future-proof infrastructure: servers for AI, storage for AI, and technologies designed for enterprise AI workloads. But first, let's look at the best practices our team has observed.


Original source of this article: IBM Expert Insights, Proven concepts for scaling AI

 

Deploying AI: Most businesses are still at an experimentation stage

Poor, misunderstood artificial intelligence (AI). It's alternately overhyped as a digital nirvana or vilified as a dystopian menace. Yet in the pragmatic here and now, it is neither. Primarily, AI is a way to augment human capabilities and performance, creating better outcomes for people (customers, employees, partners, and stakeholders) and better financial returns for businesses. Think human aid, not humanoid.

Some organizations view AI as a means to achieving incremental but tangible outcomes with intelligent workflows: more efficient business operations, more compelling customer experiences, and more insightful decision-making, so that human ingenuity and empathy can take centre stage.

The number of AI adopters has grown 65 percent in the past four years, a trend the pandemic's business disruption is accelerating relative to other technology priorities. Still, companies need to embrace AI holistically to address the scaling problem: rooting it in business strategy, innovation, and competitive differentiation, then deeply integrating it into evolving business operating models and workflows. Artificial intelligence and machine learning (ML) are only beginning to emerge from their formative stage, and from the peak of the hype cycle, into a period of more practical, efficient development and operations.

The difficulties in scaling AI projects

Successfully scaling AI projects from sandboxes to pilots and minimum viable products (MVPs) all the way to commercialization in the business has bedeviled many companies. Recent studies show that 90 percent of companies have difficulty scaling AI across their enterprises. So, it’s not surprising that about half of AI projects fail.

AI is a complex, multi-faceted business and technological innovation with layers of interconnected and moving parts. No one aspect can single-handedly ensure success. To advance, organizations first must treat AI as a discipline with robust engineering and ethical principles, rigorous operations and governance, and an adaptable approach that emphasizes pragmatism over theory. Organizations must also focus more on scientific innovation, with R&D-like capabilities that continuously explore the bleeding edge in order to differentiate.

 Of course, progress is rarely linear. There will be projects that succeed in early stages, but still fail to achieve human adoption. Beta tests need to be developed and launched as part of a commercialization engine explicitly designed and geared toward growth and scale. Otherwise, companies risk becoming mired in endless cycles of experimentation, ever dabbling but never doing.

Among our observations, too often, model development takes place on a data scientist's laptop, and orchestration is done manually or ad hoc, using custom code and scripts. The net effect is that data teams (scientists and engineers) are forced to work inefficiently, burdened with manual tasks. It slows the delivery of ML-enabled applications and reduces the business returns on AI investments. Another reason AI initiatives stall before going into production is that the projects are often siloed, with a wall or gap between developers and stakeholders.

Integrating ML and deep learning into business operations is valuable, but it often results in models and algorithms based solely on structured data. Most companies engaged in AI initiatives would agree that the best information is still locked in unstructured data sources.

Agile DevOps + Automated ITOps + MLOps = AIOps

Similar to how many companies use DevOps and other software engineering approaches, AI engineering and operations extend the proven benefits of shortened development cycles, improved collaboration, higher levels of operational efficiency, and more effective deployment. The approach creates an environment that brings a structured focus to ushering projects through development into production and, ultimately, delivering commercial results.
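The contrast with ad hoc scripts can be sketched as a single pipeline function that trains, evaluates, and registers a model as one auditable unit. This is a minimal illustration, not any specific product's API; `train_fn`, `evaluate_fn`, and the registry are hypothetical stand-ins for your own training code and model store.

```python
import hashlib
import json
from datetime import datetime, timezone

def run_pipeline(data, train_fn, evaluate_fn, registry):
    """Run train -> evaluate -> register as one auditable unit,
    instead of scattered manual steps on a laptop.

    train_fn, evaluate_fn, and registry are hypothetical stand-ins
    for real training code and a real model store.
    """
    model = train_fn(data)
    metrics = evaluate_fn(model, data)
    record = {
        # A content-derived ID makes each run traceable and comparable.
        "model_id": hashlib.sha1(
            json.dumps(metrics, sort_keys=True).encode()
        ).hexdigest()[:8],
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics,
    }
    registry.append(record)  # every run leaves an audit trail
    return record

# Toy stand-ins: the "model" is just the mean of the data.
data = [1.0, 2.0, 3.0]
train = lambda d: sum(d) / len(d)
evaluate = lambda m, d: {"mae": sum(abs(x - m) for x in d) / len(d)}
registry = []
record = run_pipeline(data, train, evaluate, registry)
```

The same function can run unchanged in a scheduler or CI job, which is what turns one-off experiments into a repeatable process.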

Here is an overview of the approach proposed by Converge and IBM to accelerate the deployment of AI initiatives using a reference architecture and proven technologies. Watch the video (approx. 3 min).


 

AI engineering and operations

Design

A human-AI experience designed for usability, as well as standard toolsets and methodologies to enhance speed-to-value and quality standards across AI projects.

Deployment

A framework that automates deployment to improve efficiency and auditability.

Monitoring

Technical and quality key performance indicators (KPIs) and processes to regularly measure and benchmark them.

Embedding

Methods to check for bias in models, tools that visualize decisions taken by AI models, and broader company ethical guidelines.
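A method to check for bias, as the last item above calls for, can be as simple as comparing positive-outcome rates across groups. The sketch below computes a demographic parity gap; it is one of many fairness metrics, and the decision data and group labels are illustrative, not from any real system.

```python
def demographic_parity_gap(outcomes, groups):
    """Difference in positive-outcome rates between groups.

    outcomes: list of 0/1 model decisions
    groups:   parallel list of group labels
    A gap near 0 suggests similar treatment across groups; a large
    gap warrants human review. (Illustrative check only.)
    """
    rates = {}
    for y, g in zip(outcomes, groups):
        hits, total = rates.get(g, (0, 0))
        rates[g] = (hits + y, total + 1)
    per_group = {g: hits / total for g, (hits, total) in rates.items()}
    return max(per_group.values()) - min(per_group.values()), per_group

# Hypothetical decisions for two groups, "a" and "b".
gap, per_group = demographic_parity_gap(
    outcomes=[1, 1, 0, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

Running such a check on every candidate model, and logging the result, is one concrete way to embed the ethical guidelines mentioned above into the deployment process.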


Making the case for building AI capabilities

We need to keep in mind that the humans who interact with AI are perhaps the most important part of any project team. Ultimately, they are responsible for realizing the actual, not just ideal, individual experiences, intelligent workflows, collaborative decisions, and tangible business value.

 

Here are some of the best practices we observed:

Start small but design for scale

Use an MVP to lay the foundation for something larger. Initial projects should be prioritized based on business impact, complexity, and risk. From there, you can build scale. Create and follow a roadmap based on impact and feasibility. If a pilot doesn’t succeed, accept it as a learning process and move on. Don’t expect every project to move into full production. A hybrid multicloud environment lends itself to scaling with data from various sources.

Adopt engineering principles

If you are already using DevOps or other software engineering approaches, establish a small team to transfer those skills and processes to AI projects. Adjust these policies and processes for the nuances of an AI environment.

 

Establish measurements for success

If it’s worth doing, it’s worth measuring. Metrics should be mapped against key success factors and significant risks. They also should be open and transparent, allowing the relevant internal teams to review progress. Feedback loops should provide input for new design and development. In AI, failure is an option, as long as companies learn from those constructive failures.

 

Appoint strong leadership

Confirm all AI projects support the strategic agenda and are designed with customers and other stakeholders in mind. AI should be regularly tested for bias and transparency to help ensure the output is ethical and fair. Leaders should also be responsible for building or acquiring the requisite AI skills and training.

 

Establish an AI playbook

The playbook should be a living document, with checklists and engineering principles, built upon successes, failures, and KPIs. Create an architecture and team structure that operate at the intersection of design and data science. Reinforce that deploying AI models is not the only goal or the end of a project. For AI to scale, you need to continuously evaluate and improve your models while they are in production. If it's not repeatable, it's not reliable, and documentation is critical for repeatability.

 

Monitor models

On an ongoing basis, monitor the explainability, fairness, and robustness of your AI models. Develop inspection algorithms (ethical "bots") that serve as virtual microscopes to search for unintended bias and other issues.
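One common ingredient of such ongoing monitoring is drift detection: comparing the distribution of live inputs or scores against the training-time reference. The sketch below computes a population stability index (PSI); the thresholds quoted in the comment are a commonly cited rule of thumb, not a universal standard, and the sample values are illustrative.

```python
import math

def population_stability_index(expected, actual, bins=4):
    """PSI between a reference sample (e.g. training scores) and
    live scores. Common rule of thumb: PSI < 0.1 stable, 0.1-0.25
    moderate shift, > 0.25 significant shift worth investigating.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def dist(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # avoid log(0)

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative scores: no drift vs. a heavily shifted live sample.
reference = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
psi_same = population_stability_index(reference, reference)
psi_drift = population_stability_index(reference, [0.7] * 8)
```

A scheduled job that alerts when PSI crosses a threshold gives the "virtual microscope" a concrete, automatable form.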

 

Innovate at scale

Adopt and integrate deep, robust NLP capabilities and other forward-looking elements of AI matched to distinct use cases that add clear business value. Integrate disparate internal and external data sources. Adopt the mindset of an AI startup. Consider assigning some resources to explore bleeding edge technologies.
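Even a deliberately minimal NLP step illustrates the shape of the work: turning unstructured text into structured, queryable signals. The sketch below surfaces frequent terms across raw support-ticket text; the stopword list and ticket texts are made up for illustration, and real deployments would use entity extraction, classification, or embeddings instead.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real systems use curated lists.
STOPWORDS = {"the", "a", "an", "is", "in", "of", "to", "and", "our", "with"}

def key_terms(documents, top_n=3):
    """Return the most frequent non-stopword terms across raw text.

    A minimal stand-in for real NLP capabilities: it shows how
    unstructured sources become structured signals a business
    process can act on.
    """
    counts = Counter()
    for doc in documents:
        counts.update(
            w for w in re.findall(r"[a-z']+", doc.lower())
            if w not in STOPWORDS
        )
    return [term for term, _ in counts.most_common(top_n)]

# Hypothetical support tickets.
tickets = [
    "Billing error on the latest invoice",
    "Invoice total does not match the contract",
    "Question about invoice billing cycle",
]
terms = key_terms(tickets)
```

Here the dominant terms ("invoice", "billing") immediately suggest where the process pain is, which is exactly the kind of locked-up signal the unstructured-data point above refers to.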

 

Engage ecosystem partners

Consider partnering with others to establish and/or influence relevant standards, drive transparency, and foster trust. Engage academics, think tanks, startups, and other trusted third parties.

 

 

Sponsor of this article

IBM


Modern IT infrastructure matters for trustworthy AI. AI and automation are two of the key reasons why businesses are modernizing their IT infrastructure. That’s why it’s vital to know that your choice of AI servers, storage and software can determine how well you are able to infuse AI throughout your organization, reduce the cost of data ingress and egress, and protect your data for a trustworthy AI framework.
IBM’s systems and storage technologies have been designed to:

  • Provide access to data anywhere, anytime with governance and at scale, so you can infuse AI across your hybrid environment
  • Securely build a trustworthy AI environment with superior security and a flexible system architecture
  • Quickly scale for the demands of AI with on-demand capacity, whether on premises or by connecting to the public cloud.

 

Need help?

Converge and our team in Canada can help you review your situation or complete your project. Feel free to contact us to discuss your objectives or ask questions.

 

Robb Sinclair

Vice-President Analytics
Converge Technology Solutions Corp.
rsinclair@convergetp.com

 
