What advanced AI needs to do to reshape the enterprise landscape

According to Infosys research, data and artificial intelligence (AI) can generate $467 billion in incremental profits worldwide and become the cornerstone of enterprises gaining a competitive edge.

But while the opportunities to use AI are very real – and the democratization of ChatGPT is driving generative AI test-and-learn faster than QR code adoption during the COVID pandemic – the utopia of substantial business wins through autonomous AI remains elusive. There is a path there, though: getting there requires process and operational transformation, new levels of data governance and accountability, business and IT collaboration, and customer and stakeholder trust.

The reality is that many organizations still struggle with the data and analytics foundation needed to move forward on an advanced AI path. Infosys research found that 63 percent of AI models operate only at a basic capability, are driven by humans, and often fall short on data validation, data practices, and data strategies. It’s not surprising that only one in four practitioners are highly satisfied with their data and AI tools so far.

This status quo can be partly explained by the fact that eight out of 10 organizations only started their AI journey in the past four years. Only 15 percent have achieved what is described as an ‘evolved’ AI state, where systems can discover causes, act on recommendations and refine their own performance without oversight.

Then there are considerations of trust and accuracy around the use of AI. Gartner predicted that through 2022, 85 percent of AI projects would deliver erroneous results due to bias in data, algorithms or the teams managing them. According to Infosys, one in three companies are using data processes that increase the risk of bias in AI right now.

That’s why the ethical use of AI is an increasingly important movement being led by governments, industry groups and thought leaders as this disruptive technology progresses. It is for these reasons that the Australian Government released its AI Ethics Principles framework, which followed a 2019 AI ethics trial involving National Australia Bank and brands such as Telstra.

Yet even with all these potential roadblocks, it is clear that the appetite for AI is growing and with it the spending.

So what can IT leaders and their teams do now to propel AI out of the realm of data science and into practical business applications and innovation pipelines? What data governance, operational and ethical considerations should be front of mind? And what human oversight is needed?

Technology and transformation leaders from the finance, education and retail sectors explored these questions during a panel session at the Infosys APAC Confluence event. Here’s what we found.

Operational efficiency is a no-brainer use case for AI

While the panelists agreed that the use cases for AI could be endless and socially positive, what’s getting the most favor right now is operational efficiency.

“We are looking very deeply at how AI drives across the organization: how we can revolutionize our business processes, how we run our organization, and how data and analytics can be used to improve customer outcomes. That perspective adds the secret sauce,” said Peter Barras, ANZ Bank CIO for Institutional Banking & Markets.

One example is meeting legislative requirements for monitoring trader communications originating in 23 countries, where AI is successfully used to analyze, interpret and monitor potentially fraudulent activity on a global scale. Document crunching and digitization, and chatbots, are other examples.

In the retail and logistics sectors, nearly three out of 10 retailers are actively adopting AI with strong business impact, said Andal Alwan, Infosys APAC regional head for consumer, retail and logistics. While personalization is often a key focus, AI is also driving operational efficiencies and a frictionless experience across the end-to-end supply chain.

Cyber security is another hot topic for AI in many sectors, once again driven by risk mitigation and governance imperatives.

AI cannot move forward without rethinking policy and process

But realizing advanced AI is not just a technical or data processing achievement. This requires change at a systemic, operational and cultural level.

Take the explosion of AI accessible to students from a learning perspective. With mass adoption increasing, education institutions such as Melbourne Archdiocese Catholic Schools (MACS) need to proactively create policies and positions around the use of AI. One consideration is how open access to such tools might affect students. Another is protecting academic integrity.

Then it’s making sure that leadership is very clear, from an education system perspective, on using AI in learning across MACS’s 300 schools. “We need to enable our teachers to think about how their students will use AI and how they can maximize learning for individual students,” said MACS chief technology and transformation officer Vicky Russell.

Enhancing and sharing data governance is critical

Simultaneously, there is a need for improvements in data governance and practices. Alwan outlined two dimensions of the data strategy debate: intra-organization and inter-organization.

“Intra-organization is about how I control the data: what data I collect, why I’m collecting it and how I’m protecting and using it,” she explained. “Then there is inter-organization, or between retailers, producers and logistics firms, for example. Collaboration and sharing of data is very important. Unless there is end-to-end visibility across the supply chain, a retailer will never know what is available and when it is going to arrive. All of this requires massive amounts of data, which means we will need AI for scaling and also for predicting trends.”

Another area of data collaboration is between retailers and consumers, in what Alwan refers to as an “autonomous supply chain.” “It is about understanding demand signals from the point of consumption, whether online or physical, then translating this into organization systems in real-time to achieve planning and greater security of the supply chain. This is another area of AI maturity that we’re seeing grow.”

The Infosys Knowledge Institute’s Data + AI Radar found that organizations seeking to realize business outcomes from advanced AI must develop data practices that encourage sharing and position data as currency.

But even as the financial sector works to share data through the Open Banking regime, Barras reflected on the need to protect customer information and privacy, and to be deliberate about the value data has to both the organization and the customer.

“In the world of data, you have to remember that you have many stakeholders,” he remarked. “The customer, the person who owns the data and to whom the data relates, is in fact the curator of that information, and must have the right to choose where and how it is shared. Corporates like banks have a responsibility to customers to enable this. It needs to be wrapped up in your data strategy.”

Internally, learning from and harnessing the wealth of data points MACS is capturing is a critical foundation for using AI successfully.

“Data and knowledge about a business is really important in that maturity curve before it enters the AI space,” Russell said. “Before jumping into your AI body of work or activities, it is important to have great data and know what you have to some degree. But I also think AI can help us make that leap. There’s enough information out there; you also have to be open to whatever you might discover during that journey.”

Building trust with customers around AI still requires human oversight

It is clear that organizations have a responsibility to structurally address issues of trust and bias, especially as they lean toward allowing AI to autonomously generate results for customers. There must be ethical use of data, and trust in what information is used and how. As a result, parallel human oversight of what the machine is doing, to ensure the results are accurate and ethical, remains important.

“Trust in the source of the information and really clear ownership of that information is really important, so there is clear accountability in the organizational structure as to who is responsible for maintaining a piece of information that drives customer decision outcomes,” Barras said. “Then over time, as it matures, we could potentially have two sets of AI tools looking at similar problem sets and validating each other’s results based on different data sets. So you at least get some verification of the information driving a set of decisions.”
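Barras’s idea of paired AI tools checking each other is, at heart, an agreement test between independently trained models, with disagreements escalated to a person. The sketch below is a generic illustration of that pattern only, not anything ANZ described: the scikit-learn models, the synthetic data and the cross_validated_decision helper are all assumptions made for the example.

```python
# Illustrative sketch only: two models trained on different data slices
# score the same cases, and any disagreement is flagged for human review.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data; in practice each model would learn from a different data source.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5, random_state=1)

model_a = RandomForestClassifier(random_state=0).fit(X_a, y_a)
model_b = LogisticRegression(max_iter=1000).fit(X_b, y_b)

def cross_validated_decision(x):
    """Return the agreed prediction, or None to escalate to a human reviewer."""
    pred_a = model_a.predict(x.reshape(1, -1))[0]
    pred_b = model_b.predict(x.reshape(1, -1))[0]
    if pred_a == pred_b:
        return pred_a   # both models agree: accept the automated result
    return None         # disagreement: route the case to a person

flagged = [i for i, x in enumerate(X[:100]) if cross_validated_decision(x) is None]
print(f"{len(flagged)} of 100 cases need human review")
```

The disagreement rate itself is also a useful signal: a sudden rise can indicate data drift or a model that needs retraining, which keeps the human oversight loop the panelists describe meaningful rather than cosmetic.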

Transparency of AI results with customers is another important element if trust in AI is to develop over time. This again comes back to strong collaboration with data owners and stakeholders, the ability to detail the data points that drive AI-based results, and explaining why the customer got the results they did.

“It’s so important to be conscious of bias and how you balance and provide huge sets of data that consistently work against bias and correct it,” Alwan said. “This will be critical to the success of AI in the business world.”

We all need to work with ChatGPT, not against it

Even as we strive for responsible AI use, ChatGPT is accelerating the adoption of generative AI at an unprecedented rate. Test cases are being seen in everything from architectural design to writing poetry, drafting legal claims and developing software code. The panelists agreed that we are only scratching the surface of the use cases this generative AI can tackle.

In banking, it’s about experimenting in a controlled way and understanding ‘why’ generative AI is implemented to achieve tangible business results, Barras said. Alwan said that in the world of retail and consumer-facing industries, conversational commerce is already front and center and ChatGPT is set to further accelerate it.

For Russell, the most important thing is to make sure future generations learn how to use openly accessible AI tools, how to prompt them properly to gather good information, and then how to reference it. In other words, education evolves and works with it.

This is a good lesson for all of us.


