Can AI Be Trusted? 7% Rise In AI Optimism Challenges Traditional Skepticism
A recent Deloitte Consulting study into whether AI can be trusted asked which emerging technologies pose the greatest potential for serious ethical risk. The State of Ethics and Trust in Technology report found that cognitive technologies like AI scored highest, at 54 percent, well above second-place digital reality at 16 percent. In addition, 40 percent of respondents cited data privacy as a top concern when it comes to generative AI (GenAI).
However, 46 percent of respondents said cognitive technologies also offered the most potential for social good, highlighting the technology’s ability to polarize opinion since it burst onto the scene. The results show that suspicion and ethical concern toward AI have dropped by 3 percent since 2023, while hope that AI will ultimately prove to be a force for good rose by 7 percent.
Clearly, growing familiarity has raised the comfort level of business and IT professionals with AI and GenAI. But the survey shows the jury is still out: accelerated adoption of GenAI appears to be outpacing organizations’ capacity to govern the technology and maintain ethical and privacy standards.
What Can Be Done?
Trust is never far from any discussion about AI, fed by decades of novelists and screenwriters churning out stories about AI going rogue. While the Deloitte study shows that ethical concerns linger, it also suggests they may be starting to ease. One way to nudge public perception in the right direction is for organizations to actively build trust in how they use the technology.
“Respondents show concern for reputational damage to an organization associated with misuse of technology and failure to adhere to ethical standards,” the report said. “AI is a powerful tool, but it requires guardrails.”
Businesses that add governance and compliance guardrails to their AI use can help get buy-in from employees and customers and strengthen trust in the technology.
Is AI Reliable?
Another study casts serious doubt on the accuracy of GenAI outputs. While accuracy predictably falters on tasks humans would find challenging, a more surprising finding was that GenAI also falls short of 100 percent accuracy on what would be regarded as very simple tasks.
“Scaled-up models tend to give an apparently sensible yet wrong answer much more often,” said study co-author Lexin Zhou, a researcher at Spain’s Polytechnic University of Valencia, “including errors on difficult questions that human supervisors frequently overlook.”
Deloitte recommends appointing Chief Ethics Officers to oversee AI as part of a larger effort to follow ethical best practices for the technology. These individuals would create processes for the safe and accurate use of AI while enforcing compliance, driving adherence to standards, and championing responsibility for ethical usage. For example, ethical principles can be embedded into software code, applications, and workflows.
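As a rough illustration of what embedding ethical principles into code and workflows can look like in practice, the sketch below wraps a generative model call in simple pre- and post-checks. The function names, policy rules, and model stub are assumptions made for illustration only, not part of Deloitte's recommendations or any specific vendor's API.

```python
import re
from dataclasses import dataclass

# Hypothetical policy rules; a real organization would derive these
# from its own ethics, privacy, and compliance standards.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like pattern
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

@dataclass
class GuardedResponse:
    text: str
    blocked: bool
    reasons: list

def pre_check(prompt: str) -> list:
    """Return policy violations found in the prompt before it reaches the model."""
    reasons = []
    for pattern in PII_PATTERNS:
        if pattern.search(prompt):
            reasons.append("prompt contains possible personal data")
            break
    return reasons

def post_check(output: str) -> list:
    """Flag model output that should be routed to human review before use."""
    reasons = []
    if len(output.split()) > 300:
        reasons.append("long output: route to human review")
    return reasons

def call_model(prompt: str) -> str:
    """Stand-in for a real GenAI call (an assumption for this sketch)."""
    return f"Model answer to: {prompt}"

def guarded_generate(prompt: str) -> GuardedResponse:
    """Run the model only when pre-checks pass, and attach any post-check flags."""
    reasons = pre_check(prompt)
    if reasons:
        return GuardedResponse(text="", blocked=True, reasons=reasons)
    output = call_model(prompt)
    return GuardedResponse(text=output, blocked=False, reasons=post_check(output))

if __name__ == "__main__":
    print(guarded_generate("Summarize our Q3 results"))
    print(guarded_generate("Email jane.doe@example.com about SSN 123-45-6789"))
```

The point of the sketch is simply that policy checks live in the same workflow as the model call, so compliance is enforced automatically rather than left to individual users.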
“Embedding ethical principles early and repeatedly in the technology development lifecycle can help demonstrate a fuller commitment to trust in organizations and keep ethics at the front of your workforce’s priorities and processes,” said Bill Briggs, Chief Technology Officer at Deloitte Consulting.
Appropriate processes and guardrails should be in place to assure GenAI users that they can trust the reliability of outputs and that they are not inadvertently engaging in theft, plagiarism, or misuse of intellectual property (IP). Those processes should also ensure that humans don’t blindly accept AI’s conclusions and responses as 100 percent true.
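To make the point about not blindly accepting AI output concrete, here is a minimal, hypothetical review-gate sketch: AI-generated answers stay in a pending state until a named reviewer explicitly approves or rejects them. The class and field names are illustrative assumptions, not a prescribed process.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIAnswer:
    """An AI-generated answer that must be reviewed before it is used."""
    question: str
    answer: str
    status: str = "pending"            # pending -> approved | rejected
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None
    notes: str = ""

def review(item: AIAnswer, reviewer: str, approved: bool, notes: str = "") -> AIAnswer:
    """Record an explicit human decision; nothing is published while pending."""
    item.status = "approved" if approved else "rejected"
    item.reviewer = reviewer
    item.reviewed_at = datetime.now(timezone.utc)
    item.notes = notes
    return item

if __name__ == "__main__":
    draft = AIAnswer(question="What were our 2023 emissions?",
                     answer="Roughly 12,400 tCO2e (model estimate).")
    review(draft, reviewer="analyst@example.com", approved=False,
           notes="Figure not traceable to audited data; send back for sourcing.")
    print(draft)
```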
“The increasing scale of GenAI adoption may increase the ethical risks of emerging technologies, and the potential harm of failing to manage those risks could include reputational, organizational, financial, and human damage,” said Lori Lewis, a senior manager at Deloitte Consulting.
Learn more about generative AI ethics or AI policy and governance.