Exploring the Interdependence Theory of Complementarity with Case Studies. Autonomous Human–Machine Teams (A-HMTs)

Rational models of human behavior aim to predict, and possibly control, humans. There are two primary models: the cognitive model, which treats behavior as implicit, and the behavioral model, which treats beliefs as implicit. The cognitive model reigned supreme until reproducibility issues arose, including Axelrod's prediction that cooperation produces the best outcomes for societies. In contrast, by dismissing the value of beliefs, the behavioral model improved predictions of behavior dramatically, but only in situations where beliefs were suppressed or unimportant, or in low-risk, highly certain environments, e.g., enforced cooperation. Rational models also lack supporting evidence for their mathematical predictions, impeding generalization to artificial intelligence (AI), and they cannot scale to teams or systems. Their fatal flaw, however, is that they fail in the presence of uncertainty or conflict. These shortcomings leave rational models ill-prepared to assist the technical revolution posed by autonomous human–machine teams (A-HMTs) or autonomous systems. For A-HMTs, we have developed the interdependence theory of complementarity, largely overlooked because of the bewilderment interdependence causes in the laboratory. Where the rational model fails in the face of uncertainty or conflict, interdependence theory thrives. The best human science teams are fully interdependent; intelligence has been located in the interdependent interactions of teammates; and interdependence is quantum-like. We have reported in the past that, facing uncertainty, human debate exploits interdependent, bistable views of reality in tradeoffs seeking the best path forward. Explaining uncertain contexts, which no single agent can determine alone, requires that members of A-HMTs express their actions in causal terms, however imperfectly. Our purpose in this paper is to review our two newest discoveries, both of which generalize and scale: first, following new theory, separating entropy production from structure and performance; and second, discovering that the informatics of the vulnerability generated during competition propels evolution, invisible to the theories and practices of cooperation.
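The abstract's notion of "non-factorable information" can be illustrated with a standard information-theoretic sketch (this example is not from the article itself; the distributions below are hypothetical): a pair of agents is interdependent exactly when their joint state cannot be factored into the product of the agents' marginal distributions, which shows up as nonzero mutual information.

```python
import numpy as np

def mutual_information(p_xy):
    """Mutual information I(X;Y) in bits for a joint distribution p_xy."""
    px = p_xy.sum(axis=1, keepdims=True)   # marginal distribution of X (column)
    py = p_xy.sum(axis=0, keepdims=True)   # marginal distribution of Y (row)
    mask = p_xy > 0                        # avoid log(0) on impossible outcomes
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

# Factorable pair: the joint equals the product of the marginals (independence).
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

# Non-factorable pair: the agents' states are perfectly correlated.
interdependent = np.array([[0.5, 0.0],
                           [0.0, 0.5]])

print(mutual_information(independent))     # 0.0 bits: no interdependence
print(mutual_information(interdependent))  # 1.0 bits: fully interdependent
```

In this toy reading, the "tradeoff" the abstract pairs with non-factorability is that once a joint state is non-factorable, no description of either agent alone recovers the team's information.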

Bibliographic Details
Main Author: William F. Lawless
Format: Article
Language: English
Published: MDPI AG, 2021-02-01
Series: Informatics
Subjects: interdependence; complementarity; bistability; uncertainty and incompleteness; non-factorable information and tradeoffs; autonomous teams (Level 5 vehicles)
Online Access: https://www.mdpi.com/2227-9709/8/1/14
DOI: 10.3390/informatics8010014
ISSN: 2227-9709
Author Affiliation: Department of Mathematics, Sciences and Technology, School of Arts & Sciences, Paine College, Augusta, GA 30901, USA