Challenges in artificial socio-cognitive systems : a study based on intelligent vehicles

Bibliographic Details
Main Author: Baines, Vincent
Other Authors: Padget, Julian ; De Vos, Marina
Published: University of Bath, 2015
Subjects: 629.25
Online Access:https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.665382
Description:
Technological developments are causing a proliferation of computing devices in everyday life; the availability of high computing power, small size, and wireless communication invites consideration of how to offload human tasks to devices that can manage themselves. Similarly, the science of how artificial systems can operate in an environment, sensing, reasoning, and taking action, has become increasingly mature. These developments create the opportunity for artificial entities to undertake activities in the real world, but such entities face significant challenges in reasoning about the environment and in social interaction with humans. The concept of artificial entities operating amongst humans raises a set of socio-cognitive problems, that is, issues requiring reasoning about both the environment and human activity, and hence an understanding of the cultural and contextual aspects of a situation.

Further challenges stem from giving intelligent entities the autonomy to pursue their own goals. The first is how to manage the situation when, in simple terms, an entity does not know what to do: for example, it has no appropriate knowledge of how to handle the situation it finds itself in, or it enters into a conflict that it cannot resolve by itself. The second is how an entity's pursuit of its own goals can be balanced against the greater social welfare, where an entity may take action to the detriment of the wider population, or where coordination of its actions could benefit others.

In broad terms, we consider these as issues relating to an entity's understanding of the environment in which it operates, and we adopt the concept of Situational Awareness (SA) as a means to analyse this understanding. We consider an entity's understanding as being built up from low-level perceptions (where events in the environment are sensed), to increased understanding at the comprehension level (where possible meanings of the perceptions are generated), through to high-level projection (of the likely future state of the environment). We refine these issues into a number of problem statements which we see artificial socio-cognitive systems (ASCS) facing, and propose an approach that builds an explicit representation of SA at these different levels and moves data, information and knowledge upwards through them.

We then explore this by grounding experimentation in the domain of intelligent vehicles. This problem space has a number of characteristics that make it suitable for our work: complex human interaction, rules which may not completely govern the situation or be adhered to, and technological developments in autonomous vehicles, all of which can be represented in scenarios in order to assess our approach. We use this domain to illustrate the proposed approach, but rather than develop solutions for this specific domain we remain as abstract as possible, so that our conclusions can find application in other domains. We describe a framework in which distributed components are brought together to support investigation into these problem areas. A generalised message exchange approach is adopted, with messages carrying additional semantic annotation such that responsibility for appropriate handling lies with the consumer.
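The SA levels (perception, comprehension, projection) and the semantically annotated messages are described above only in prose; the thesis's own implementation is not reproduced in this record. The following is a minimal, purely illustrative sketch in Python, with invented names (SALevel, Message, accept), of how exchanged content might be tagged with an SA level so that a consumer can filter out detail at an inappropriate level:

from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict


class SALevel(Enum):
    """Situational Awareness levels used to tag exchanged content (illustrative)."""
    PERCEPTION = 1      # raw sensed events in the environment
    COMPREHENSION = 2   # possible meanings derived from perceptions
    PROJECTION = 3      # likely future state of the environment


@dataclass
class Message:
    """A generic message whose semantic annotation tells the consumer how to handle it."""
    sender: str
    level: SALevel                          # which SA level the payload represents
    topic: str                              # e.g. "vehicle/position", "junction/intent"
    payload: Dict[str, Any] = field(default_factory=dict)


def accept(msg: Message, min_level: SALevel) -> bool:
    """Consumers filter by SA level to avoid overload from low-level perception streams."""
    return msg.level.value >= min_level.value


# Example: an agent reasoning at the comprehension level ignores raw perceptions.
raw = Message("vehicle-7", SALevel.PERCEPTION, "vehicle/position", {"x": 12.4, "y": 3.1})
meaning = Message("vehicle-7", SALevel.COMPREHENSION, "junction/intent", {"action": "give_way"})
print(accept(raw, SALevel.COMPREHENSION))      # False -> discarded
print(accept(meaning, SALevel.COMPREHENSION))  # True  -> handled

The level-based filter reflects the design choice stated in the abstract: agents are better served by comprehension- or projection-level content than by streams of raw perceptions.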
Concerning what is exchanged, we consider knowledge and understanding in terms of Situational Awareness levels as a means of allowing components to communicate at the appropriate level and of preventing communication overload caused by exchanging the wrong kind, or the wrong volume, of data. We present scenarios constructed to illustrate problematic aspects of the domain that reflect the wider challenges faced by artificial socio-cognitive systems, and show how these can be tackled or mitigated with the help of a range of framework components. An intelligence layer contains autonomous agents responsible for controlling vehicles, and we assist these agents through an external governance structure, capable of issuing guidance to the intelligent agents in situations where they do not know what to do, and/or of issuing appropriate obligations to ensure that the wider society's goals are met. We see such external regulation as an intrinsic feature of social systems, whether implicit (convention) or explicit (regulation/law), and replicate this structure through the use of institutions to provide a reference when the agent's knowledge is incomplete.

In conclusion, our focus is on agent understanding from a Situational Awareness perspective, and on which level of communication is appropriate; from this we find that agents are better suited to higher-level communication than to processing high volumes of low-level perceptions. We couple the intelligent agents to an external governance structure, mimicking existing structures of regulation, so that agents can be provided with additional guidance as required. This approach is demonstrated in a number of scenarios in which the framework resolves issues where an individual agent would otherwise show undesirable behaviour because it: i) lacked knowledge of social convention, ii) would choose to pursue its own benefit over the collective's, or iii) lacked appropriate jurisdiction over other agents to bring about a solution. Having demonstrated how these issues can be alleviated by our approach in particular scenarios, we argue for some generalisations to problems broadly faced by (artificial) socio-cognitive systems, which we believe have sufficiently similar characteristics that our approach to the concretisation of SA for ASCS can be applied.
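The external governance structure and institutions are likewise only described in prose here. As a hypothetical sketch, not the thesis's actual machinery, the Institution, Obligation, and VehicleAgent names and the give_way_to_the_right convention below are invented for illustration; they show how an agent with incomplete knowledge might defer to an external institutional reference for guidance:

from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Obligation:
    """A directive from the governance layer: what the agent is obliged to do, and why."""
    agent_id: str
    action: str
    reason: str


class Institution:
    """External governance, consulted when an agent lacks knowledge of a situation
    or when individual goals conflict with the wider society's goals (illustrative)."""

    def __init__(self, conventions: Dict[str, str]):
        self.conventions = conventions  # situation -> socially expected action

    def guidance(self, agent_id: str, situation: str) -> Optional[Obligation]:
        """Return an obligation if a convention covers the situation the agent cannot resolve."""
        action = self.conventions.get(situation)
        if action is None:
            return None
        return Obligation(agent_id, action, f"convention for '{situation}'")


class VehicleAgent:
    def __init__(self, agent_id: str, institution: Institution):
        self.agent_id = agent_id
        self.institution = institution
        self.known_actions: Dict[str, str] = {}  # the agent's own (incomplete) knowledge

    def decide(self, situation: str) -> str:
        # Use own knowledge first; fall back to institutional guidance when it is incomplete.
        if situation in self.known_actions:
            return self.known_actions[situation]
        obligation = self.institution.guidance(self.agent_id, situation)
        return obligation.action if obligation else "stop_and_wait"  # safe default


# Example: the agent has no rule for an unmarked junction, so the institution supplies one.
inst = Institution({"unmarked_junction": "give_way_to_the_right"})
agent = VehicleAgent("vehicle-7", inst)
print(agent.decide("unmarked_junction"))  # give_way_to_the_right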