The Future of Human-AI Collaboration: A Taxonomy of Design Knowledge

Bonisiwe Shabane

Recent technological advances, especially in the field of deep learning, provide astonishing progress on the road towards artificial general intelligence (AGI) (Goertzel and Pennachin 2007; Kurzweil 2010).

AI is progressively achieving (super-)human-level performance in various tasks, such as autonomous driving, cancer detection, or playing complex games (Mnih et al. 2015; Silver et al. 2016). Consequently, more and more business applications based on AI technologies are emerging. Both research and practice are wondering when AI will be capable of solving complex tasks in real-world business applications outside of laboratory settings. However, these advances paint a rather one-sided picture of AI: although AI can solve certain tasks with impressive performance, AGI is far from being achieved.

There are many problems that machines cannot yet solve alone (Kamar 2016), such as applying expertise to decision-making, planning, or creative tasks, to name just a few. ML systems in the wild have major difficulties adapting to dynamic environments and self-adjusting (Müller-Schloer and Tomforde 2017), and they lack what humans call common sense. This makes them highly vulnerable to adversarial examples (Kurakin et al. 2016). Moreover, AI needs massive amounts of training data compared to humans, who can learn from only a few examples (Lake et al. 2017), and it fails to work with certain data types (e.g. soft data). Furthermore, a lack of control over the learning process might lead to unintended consequences (e.g. racial bias) and limited interpretability, which is crucial in critical domains such as medicine (Doshi-Velez and Kim 2017). Therefore, humans are still required at various positions in the loop of the ML process. While a lot of work has been done on creating training sets with human labellers, more recent research points towards end-user involvement (Amershi et al. 2014) and the teaching of such machines (Mnih et al. 2015), thus combining humans and machines in hybrid intelligence systems.

The main idea of hybrid intelligence systems is that such socio-technical ensembles and their human and AI parts can co-evolve to improve over time. The purpose of this paper is to point towards such hybrid intelligence systems. To this end, I conceptualize the idea of hybrid intelligence systems and provide an initial taxonomy of design knowledge for developing such socio-technical ensembles. Following a taxonomy development method (Nickerson et al. 2013), I reviewed literature from various interdisciplinary fields and combined those findings with an empirical examination of practical business applications in the context of hybrid intelligence.

The contribution of this paper is threefold. First, I provide a structured overview of interdisciplinary research on the role of humans in the ML pipeline. Second, I offer an initial conceptualization of the term hybrid intelligence systems and identify relevant dimensions for system design. Third, I intend to provide useful guidance to system developers for implementing hybrid intelligence systems in real-world applications. Towards this end, I propose an initial taxonomy of hybrid intelligence systems.

The subfield of intelligence research that relates to machines is called AI.

By this term I mean systems that perform "[…] activities that we associate with human thinking, activities such as decision-making, problem solving, learning […]" (Bellman 1978). Although various definitions of AI exist, the term generally covers facets such as creating machines that can accomplish complex goals.

This includes facets such as natural language processing, perceiving objects, storing knowledge and applying it to solve problems, and ML to adapt to new circumstances and act in its environment (Russell and Norvig...
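A taxonomy in the sense of the development method cited earlier (Nickerson et al. 2013) organizes a domain into dimensions, each with mutually exclusive characteristics. The following sketch shows what such a structure could look like in code; the two example dimensions and their characteristics are hypothetical placeholders, not the taxonomy proposed in this paper.

```python
# Hypothetical sketch of a Nickerson-style taxonomy: dimensions with
# mutually exclusive characteristics. Example dimensions are placeholders.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Dimension:
    name: str
    characteristics: tuple  # mutually exclusive options for this dimension

@dataclass
class Taxonomy:
    dimensions: list = field(default_factory=list)

    def classify(self, system: dict) -> dict:
        """Check a system description against every dimension."""
        out = {}
        for dim in self.dimensions:
            value = system.get(dim.name)
            if value not in dim.characteristics:
                raise ValueError(
                    f"{value!r} is not a characteristic of {dim.name!r}")
            out[dim.name] = value
        return out

# Illustrative dimensions only (assumptions, not from the source).
tax = Taxonomy([
    Dimension("human role", ("labelling", "teaching", "decision support")),
    Dimension("learning paradigm", ("supervised", "reinforcement", "active")),
])

result = tax.classify({"human role": "teaching",
                       "learning paradigm": "active"})
print(result)
```

Representing dimensions as frozen dataclasses makes the mutual-exclusivity constraint explicit: classifying a system means picking exactly one characteristic per dimension, and anything outside the declared options is rejected.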
