Little by little, trust in AI output, in both its operational and generative forms, is leading to lights-out, hands-off processes on an ever wider scale. But how much authority will humans have, and how much should they have, to step in and overrule AI decisions? For example, a potentially valuable bank customer may be denied a loan by an AI system, or an AI-based recruitment system may deliver biased or sexist results.

However, worrying about human intervention to stop AI-driven processes may mean it's already too late. Things shouldn't even reach this stage, said James Hendler, professor at Rensselaer Polytechnic Institute and director of RPI's Future of Computing Institute. "If systems using AI are correctly designed to be used interactively, then overruling or reversing becomes the wrong way to look at things," said Hendler.

"Together the systems should be solving the problems, with both human and technological interaction – especially in cases where human expertise is needed."

This is where thoughtful "responsible-by-design" principles may help ensure balanced human interaction with AI systems, said Sunil Senan, global head of data, analytics and AI for Infosys. "The ease of overriding AI should be a carefully considered design decision based on the specific application and its level of risk, transparency, user expertise, and the evolving landscape."