Stewart Palmer - 3rd year PhD presentation
Exploring the Role of Trust in Artificial Intelligence Organizations: A Boundary Objects Perspective
Info about event
Location: 2628-303
Supervisors: Polymeros Chrysochou & Susanne Pedersen
Discussants: Jacob Sherson & Susan Hilbolling
Abstract
The classification and determination of categories for trust are likely to have a significant impact on the design, development and deployment of artificial intelligence. Research in other fields finds that powerful actors, such as industry leaders, policymakers and scholars, often shape category definitions, standards and legislation (Busch, 2011; Grodal, Gotsopoulos, & Suarez, 2015). One such actor is the organization.
Organizations face a dual role in realizing the commercial potential of artificial intelligence. Outward-facing, they must develop technology that is trustworthy; inward-facing, they must manage the process of adoption by their workforce. Trust is important in both settings, yet is likely to differ due to context, technology, agency and experience.
Based on Star and Griesemer’s (1989) concept of boundary objects and utilizing the meta-narrative systematic review methodology (Greenhalgh et al., 2004; Wong et al., 2013), this study examines how local and abstract communities shape and borrow trust in artificial intelligence for use within and outside the organization. Our findings provide insights into the ways trust in artificial intelligence is shaped by organizational factors and how this can impact its development and adoption.
Everyone is welcome!