The Rising Power of Artificial Intelligence
The AI-powered organization: artificial intelligence (AI) is transforming business. This highly capable and complex technology aims to simulate human intelligence (Glikson and Woolley, 2020), and AI creativity comprises the production of highly novel outputs by autonomous machines (Amabile, 2019). Global spending on AI systems is projected to reach $97.9 billion in 2023, a compound annual growth rate (CAGR) of 28.4% over the 2018–2023 projection period (International Data Corporation, 2019). Moreover, 63% of executives worldwide believe that AI will significantly impact their companies within the next five years (Ransbotham et al., 2017). The objective is to exploit new sources of business value.
Human–AI Interaction: Where Is the Trust?
Optimal performance requires an environment in which the collaboration of humans and machines exceeds what either could achieve alone (Fountaine et al., 2019; Hoff and Bashir, 2015). However, organizations struggle to build a system of trust between humans and AI solutions (Lee and See, 2004; The Economist, 2019): just 8% of companies worldwide engage in the core practices that support widespread adoption of AI and advanced analytics (Bisson et al., 2018). The challenge lies in effectively integrating the technology into workflows. As a result, most companies do not capture the full potential of AI.
Why do humans within organizations accept or reject cooperating with novel products created solely by AI?
The AI Acceptance Model summarizes the research results by mapping the relationship between creative AI features, trust, and acceptance within organizations.
AI Acceptance Model: Creative AI Features – Trust – Acceptance
Thirteen AI product features explain, both directly and indirectly through dynamic learned trust, why humans within organizations accept or reject cooperating with novel products created solely by AI. Acceptance, in turn, allows for the successful widespread adoption of AI technologies to create new business value.
1. Creative AI Features
Besides human and organizational conditions, the following technological product features strongly affect human-creative AI interaction within organizations: (1) tangibility, (2) transparency, (3) level of control, (4) reliability, (5) feedback, (6) validity, (7) anthropomorphism, (8) immediacy behavior, (9) authenticity, (10) usefulness, (11) ease of use, (12) adjustability, and (13) data privacy. Transparency, reliability, and usefulness are particularly strong, whereas anthropomorphism plays a subordinate role.
Trust involves “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” (Mayer et al., 1995, p. 712). The abovementioned AI features increase human trust in creative AI products.
Technology acceptance involves actual use of the AI system (Davis et al., 1989). Trust fosters the acceptance of creative AI products, and the abovementioned AI features also increase acceptance directly.
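The structural relations above can be sketched in code. The thirteen feature names come from the model itself; the equal weighting, the 0–1 rating scale, and the 50/50 split between the direct path and the trust-mediated path are purely hypothetical choices made to illustrate the structure, not part of the research results.

```python
# Illustrative sketch of the AI Acceptance Model: thirteen creative AI
# features influence acceptance directly and indirectly via dynamic
# learned trust. Weights and scales are hypothetical assumptions.

FEATURES = [
    "tangibility", "transparency", "level_of_control", "reliability",
    "feedback", "validity", "anthropomorphism", "immediacy_behavior",
    "authenticity", "usefulness", "ease_of_use", "adjustability",
    "data_privacy",
]

def trust_score(ratings: dict) -> float:
    """Indirect path: feature ratings (0..1) build learned trust."""
    return sum(ratings[f] for f in FEATURES) / len(FEATURES)

def acceptance_score(ratings: dict,
                     direct_weight: float = 0.5,
                     trust_weight: float = 0.5) -> float:
    """Acceptance = direct feature effect + effect mediated by trust."""
    direct = sum(ratings[f] for f in FEATURES) / len(FEATURES)
    return direct_weight * direct + trust_weight * trust_score(ratings)
```

For example, rating every feature at 0.8 yields a trust score of 0.8 and, with the assumed equal weights, an acceptance score of 0.8 as well.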
Source: own representation based on Davis et al. (1989), Gefen et al. (2003), Ghazizadeh et al. (2012), Glikson and Woolley (2020), Hoff and Bashir (2015), Lee and See (2004) and five expert interviews
More about Success Factors: Transparency – Reliability – Usefulness
Create transparent, reliable, and useful AI products.
Transparency describes the degree to which “the inner workings or logic of the automated systems are known to human operators to assist their understanding about the system” (Seong and Bisantz, 2008, p. 611). Simplifying algorithms and operations improves the operator’s understanding of AI products, and transparency of high-level AI technologies leads to trust (Hoff and Bashir, 2015; Lee and See, 2004). For example, developing algorithmic literacy through training on how to interact with AI-based decision aids contributes to proper utilization (Burton et al., 2020). Explanations of the operating principles of algorithms promote cognitive trust, in particular for virtual and embedded AI (Glikson and Woolley, 2020).
“In order for a product to be accepted, […] users also have to be able to understand what it does.”
Reliability refers to the “consistency of an automated system’s functions” (Hoff and Bashir, 2015, p. 424). High performance results in cognitive trust in AI; yet humans who attribute high machine intelligence to a robot tend to follow even a faulty robot (Glikson and Woolley, 2020). Furthermore, accurate and ongoing feedback about the system’s reliability creates transparency and thus drives trust and task performance (Hoff and Bashir, 2015).
“Over time, Neo gets better and better, so its reliability is improving. And then users also have higher trust in working with Neo.”
Perceived usefulness encompasses “the prospective user’s subjective probability that using a specific application system will increase his or her job performance within an organizational context” (Davis et al., 1989, p. 985). Therefore, onboarding training should demonstrate not only the functionality and reliability of the system but also its intended use (Lee and See, 2004). Ultimately, the performance-based variable usefulness fosters trust in AI products (Hoff and Bashir, 2015).
“In order for a product to be accepted, it has to add value to the users.”
AI Acceptance in Practice
The rapid pace of technological innovation requires the management of organizational change. For managers leading the digital transformation in companies: introduce actions that build a system of trust between humans and AI technologies. For example, purchase AI products that include most of the relevant technical features, and clearly communicate the functions and benefits of new AI products to employees. In the long run, higher acceptance of AI solutions creates efficiency gains and thus a competitive advantage.
For product managers developing new AI-based solutions: implement feedback mechanisms within AI systems, and offer onboarding training for customers when launching AI products. Ultimately, a human-centered design approach improves product–market fit, which leads to higher customer satisfaction and sales volume.
So, let us take human–creative AI interaction to the next level with Neo!
Note: This blog post is based on a scientific seminar paper written within the Management & Technology Master’s program at Technical University Munich. For more information, please contact me via e-mail: [email protected]
Amabile, T., 2019. Guidepost: Creativity, Artificial Intelligence, and a World of Surprises. Academy of Management Discoveries.
Bisson, P., Hall, B., McCarthy, B., Rifai, K., 2018. Breaking Away: The Secrets to Scaling Analytics. McKinsey. https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/breaking-away-the-secrets-to-scaling-analytics# (accessed 1 July 2020).
Burton, J.W., Stein, M.‐K., Jensen, T.B., 2020. A Systematic Review of Algorithm Aversion in Augmented Decision Making. Journal of Behavioral Decision Making 33 (2), 220–239.
Davis, F.D., Bagozzi, R.P., Warshaw, P.R., 1989. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science 35 (8), 982–1003.
Fountaine, T., McCarthy, B., Saleh, T., 2019. Building the AI-Powered Organization. Harvard Business Review. https://hbr.org/2019/07/building-the-ai-powered-organization (accessed 1 July 2020).
Gefen, D., Karahanna, E., Straub, D.W., 2003. Trust and TAM in Online Shopping: An Integrated Model. Management Information Systems Quarterly 27 (1), 51–90.
Ghazizadeh, M., Lee, J.D., Boyle, L.N., 2012. Extending the Technology Acceptance Model to Assess Automation. Cognition, Technology & Work 14 (1), 39–49.
Glikson, E., Woolley, A.W., 2020. Human Trust in Artificial Intelligence: Review of Empirical Research. Academy of Management Annals.
Hoff, K.A., Bashir, M., 2015. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors 57 (3), 407–434.
International Data Corporation, 2019. Worldwide Spending on Artificial Intelligence Systems will be nearly $98 billion in 2023, according to new IDC Spending Guide. https://www.idc.com/getdoc.jsp?containerId=prUS45481219 (accessed 1 July 2020).
Lee, J.D., See, K.A., 2004. Trust in Automation: Designing for Appropriate Reliance. Human Factors 46 (1), 50–80.
Mayer, R.C., Davis, J.H., Schoorman, F.D., 1995. An Integrative Model of Organizational Trust. Academy of Management Review 20 (3), 709–734.
Ransbotham, S., Kiron, D., Gerbert, P., Reeves, M., 2017. Reshaping Business with Artificial Intelligence: Closing the Gap Between Ambition and Action. MIT Sloan Management Review. https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/ (accessed 1 July 2020).
Seong, Y., Bisantz, A.M., 2008. The Impact of Cognitive Feedback on Judgment Performance and Trust with Decision Aids. International Journal of Industrial Ergonomics 38 (7-8), 608–625.
The Economist, 2019. Don’t trust AI until we build Systems that earn Trust. https://www.economist.com/open-future/2019/12/18/dont-trust-ai-until-we-build-systems-that-earn-trust (accessed 1 July 2020).