Program at a Glance

Monday, April 8
  8:00 - 9:00 am       Breakfast
  9:00 am - 12:00 pm   T1: Tutorial Session 1: Conversational Explanations - Explainable AI through Human-Machine Conversation
                       T2: Tutorial Session 2: Energy Constraints in Cognitive Processing - The Role of Constraint Satisfaction in Emergent Awareness
  10:00 - 10:30 am     Coffee Break
  12:00 - 1:30 pm      Lunch (On your own)
  1:30 - 4:30 pm       T3: Tutorial Session 3: Social Data Analysis for Intelligence
                       T4: Tutorial Session 4: Self-Modeling for Adaptive Situation Awareness
  3:00 - 3:30 pm       Coffee Break
  6:00 - 9:00 pm       Welcome Reception

Tuesday, April 9
  8:00 - 9:00 am       Breakfast
  9:00 - 9:10 am       Conference Opening
  9:10 - 10:00 am      K1: Keynote by Dr. Rand Waltzman, Deputy CTO at RAND Corporation, Santa Monica, CA, "MD: Multimedia Disinformation - Is there a doctor in the house?!"
  10:00 - 10:30 am     Coffee Break
  10:30 am - 12:00 pm  S1: Information Fusion
  12:00 - 1:30 pm      Lunch (On your own)
  1:30 - 3:00 pm       S2: Decision Support
  3:00 - 3:30 pm       Coffee Break
  3:30 - 5:00 pm       S3: Modeling and Simulations
  5:30 - 7:30 pm       CogSIMA 2020 Planning Meeting

Wednesday, April 10
  8:00 - 9:00 am       Breakfast
  9:00 - 10:00 am      K2: Keynote by Dr. Doug Riecken, Air Force Office of Scientific Research (AFOSR), USA
  10:00 - 10:30 am     Coffee Break
  10:30 am - 12:00 pm  S4: Applications
  12:00 - 1:30 pm      Lunch (On your own)
  1:30 - 3:30 pm       P1: Poster Session
  3:30 - 4:00 pm       Coffee Break
  4:00 - 5:30 pm       Industry Panel
  6:00 - 9:00 pm       Conference Banquet Dinner

Thursday, April 11
  8:00 - 9:00 am       Breakfast
  9:00 - 10:00 am      K3: Keynote by Prof. Nancy J. Cooke, Arizona State University, AZ, USA, "Human-Autonomy Teaming: Can Autonomy be a Good Team Player?"
  10:00 - 10:30 am     Coffee Break
  10:30 am - 12:00 pm  S5: Situation Awareness
  12:00 - 1:30 pm      Lunch (On your own)
  1:30 - 3:00 pm       S6: Shared Situation Awareness
  3:00 - 3:30 pm       Conference Closing

Monday, April 8

Monday, April 8 8:00 - 9:00

Breakfast

Monday, April 8 9:00 - 12:00

T1: Tutorial Session 1: Conversational Explanations - Explainable AI through Human-Machine Conversation

Dave Braines, IBM Research UK

Abstract: Explainable AI has significant focus within both the research community and the popular press. The tantalizing potential of artificial intelligence solutions may be undermined if the machine processes which produce these results are black boxes that are unable to offer any insight or explanation into the results, the processing, or the training data on which they are based. The ability to provide explanations can help to build user confidence, rapidly indicate the need for correction or retraining, as well as provide initial steps towards the mitigation of issues such as adversarial attacks or allegations of bias. In this tutorial we will explore the space of Explainable AI, with a particular focus on the role of the human users within the human-machine hybrid team, and on whether a conversational interaction style is useful for obtaining such explanations quickly and easily. The tutorial is broken down into three broad areas which are dealt with sequentially:

  1. Explainable AI: What is it? Why do we need it? Where is the state of the art? Starting with the philosophical definition of explanations and the role they serve in human relationships, this part covers the core topic of explainable AI, looking into different techniques for different kinds of AI systems, different fundamental classifications of explanations (such as transparent, post-hoc and explanation by example), and the different roles that these may play with human users in a human-machine hybrid system. Examples of adversarial attacks and the role of explanations in mitigating against these will be given, along with the need to defend against bias (whether algorithmic or arising from training data issues).

  2. Human roles in explanations: Building on the work reported in "Interpretable to Whom?" [Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Jul 2018], this section examines the different roles that a human (or machine) user within the system may be fulfilling, and why the role has an important part to play in determining what kind of explanation may be required. In almost all current AI explanation-related research the role of the user is not a primary consideration, but we assert that the ability to create a meaningful explanation must take this into account. The goals of the users will vary depending on their role, and the explanations that will serve them in achieving those goals will also vary.

  3. Conversational explanations: Conversational machine agents (such as Alexa, Siri and Google) are becoming increasingly commonplace, but the typical interactions that these agents fulfil are fairly simple. Conversational interactions can be especially useful in complex or evolving situations where designing a rich and complete user interface in advance may not be possible. In our ongoing research we are investigating the role of conversational interaction in AI explanations, and we will report the findings so far in this section. There will also be a live interactive demo for optional use by the audience during this session.

Intended Audience: The intended audience for this tutorial are researchers in any field where complex algorithms or processes can be used to inform human decision-making. The participants will be taken through a general overview of explanation in both human and machine contexts, and how the role of the agent will have a significant impact on what kind of explanation might be useful. The workshop will then move into some ongoing research into the role of conversation as a tool to enable explanations in human-machine hybrid systems, along with an interactive demonstration of an early version of this capability.

T2: Tutorial Session 2: Energy Constraints in Cognitive Processing - The Role of Constraint Satisfaction in Emergent Awareness

Robert Kozma, PhD, and Roman Ilin, PhD

Abstract: Recent insights on brain dynamics and cognitive processing provide important clues for the development of artificially intelligent systems with the capability of situation awareness, flexible operation, and rapid response to unpredictable events in dynamically changing and potentially hostile environments. The focus of this tutorial is to analyze the consequences of constraint satisfaction in developing new AI technologies. Embodiment is a key feature of biological intelligence, which finds its manifestation in embodied robotics and situated intelligence. Energy-awareness can be viewed as the ultimate expression of embodied intelligence; without energy supply from the environment, no intelligence is possible. Energy constraints are often ignored, or play just a secondary role, in typical cutting-edge AI approaches. For example, Deep Learning Convolutional Networks often require huge amounts of data, time, parameters, energy, and computational power, which may not be readily available in various scenarios.

Our approach proposes solutions to several pitfalls observed in cutting-edge AI solutions, such as unsustainable, exponentially growing computational and resource demands; catastrophic deterioration of performance in response to minute changes in input data, whether random or intentional; and susceptibility to malicious, deceptive actions of adversaries. By learning from the neurosciences and cognitive sciences, we outline mathematical and computational models of neurodynamics and their implementation in practical problems. The tutorial covers the following topics:

  1. Overview of insights from neurobiology and advanced brain imaging on the dynamics of higher cognition and intentionality. Describing the cinematic model of cognition and sequential decision-making. Aspects of embodiment and situated cognition, consciousness, including technical and philosophical issues.

  2. Mathematical and computational models of experimentally observed neurodynamics. Describing the Freeman K (Katchalsky) model hierarchy (K0-KIV) of cortical structures, dynamics, and functions.

  3. Practical implementations of embodied cognition, including multisensory percept formation, the intentional action-perception cycle. Examples of self-organized development of behaviors using reinforcement in the NASA Mars Rover SRR-2K robotics test bed.

  4. Energy-aware implementation of AI designs motivated by brain metabolism, using computational units coupled with their energy units (metabolic subsystems). Principles of dynamical pattern-based computation through activation sequences in arrays of oscillators.

  5. Practical illustrations of the energy-aware computing approach, including distributed sensing with limited bandwidth. Comparison with cutting-edge AI results in computer gaming (e.g., ATARI), showing that leading deep reinforcement learning results can be reproduced very efficiently. Neuromorphic hardware implications (Loihi, TrueNorth, etc.).

Intended Audience: The tutorial is intended for those interested in better understanding the advantages and shortcomings of today's leading deep learning AI, and possible ways to resolve the mounting bottleneck due to exponentially increasing demand for resources. The tutorial does not require thorough knowledge of the topics covered; rather, it provides a comprehensive overview of how cognitive, neural, computational, and engineering aspects of intelligence can be combined in a unified framework. It is self-contained, and it will be accessible to researchers and students with a basic math and engineering background.

Monday, April 8 10:00 - 10:30

Coffee Break

Monday, April 8 12:00 - 1:30

Lunch (On your own)

Monday, April 8 1:30 - 4:30

T3: Tutorial Session 3: Social Data Analysis for Intelligence

Dr. Valentina Dragos, ONERA - The French Aerospace Lab, France

Abstract: This tutorial investigates several issues of social data analysis for intelligence. Social data is understood as information collected from social media, including various networks and platforms, covering not only what online users publish on those platforms but also how they share, view or engage with content and other users. The tutorial does not break down how to make sense of social media data, but raises questions to be addressed before exploring social media as a resource for intelligence analysis. The tutorial is organized into seven chapters.

The first chapter introduces intelligence analysis as the application of cognitive methods to weigh data and test hypotheses within a specific socio-cultural context. The second chapter explores some of the unique features of cyberspace that shape how people behave in this new social realm. The chapter also analyses how the virtual domain of cyberspace is unlike the environmental domains of air, land, maritime and space, and how it challenges traditional understandings of concepts such as temporality, conflict, information, border, community, identity or governance. The next chapter investigates the notions of trust and reliability for artefacts in cyberspace, ranging from information items to sources to more sophisticated structures such as virtual communities. The chapter shows that trust may be diminished in spite of the tremendous volume of information, and that cyberspace is prone to phenomena that harm data completeness and credibility. Several such phenomena will be considered: opacity and information filtering (echo chambers, filter bubbles), disinformation campaigns (fake news, propaganda, hoaxes, site spoofing), misleading intentions (data leaks), and biased interactions (social bots, smoke screening). Chapter 4 investigates the nature of social data content, asking whether social data conveys factual and useful pieces of information or rather subjective content in the form of personal opinions, beliefs and impressions. The discussion is based on two illustrations of social data analysis: the first tackles fake news propagation in the aftermath of terrorist attacks; the second addresses the subjective assessment of concepts conveying extreme ideologies online.

Chapter 5 identifies pitfalls in exploring cyberspace both in isolation and in its interconnectedness with the real world. First, cyberspace comes with its own riddles and pairs of opposite concepts with blurred frontiers: free speech and actions vs. online hate or cyberbullying; online privacy and personal data vs. fake profiles and identities; transparency vs. anonymity by design. Second, additional pitfalls occur when social data is analyzed in light of real-life events. Specific phenomena induced by white data, and real-life bias induced by silent communities, will be discussed. Chapter 6 addresses how gathering, processing and analyzing social data impacts intelligence analysts, given the characteristics of those data. The last chapter concludes the tutorial by illustrating the state of the art in tools and techniques for cyberspace exploration, along with several ongoing research projects, NATO research tracks and initiatives addressing the many facets of social data analysis. While showing that, from a practical standpoint, solutions are still at the level of after-the-fact forensics, the chapter will highlight several initiatives adopted by various bodies to counter illegal content and online hate, and ultimately to make the Internet a safer place.

Intended Audience: This tutorial is intended for students, researchers and practitioners who are interested in cyberspace exploration and social data analysis. Through illustrations based on realistic use cases, the participants will learn about the major challenges of gathering, analyzing and interpreting data from social media, and will discover major initiatives undertaken to offer solutions to some of those challenges and to make cyberspace a more resilient environment.

T4: Tutorial Session 4: Self-Modeling for Adaptive Situation Awareness

Dr. Christopher Landauer, Dr. Kirstie L. Bellman, Dr. Phyllis Nelson

This tutorial is about how to build systems that can be trusted to be appropriately situation aware, so they can act as our information partners for complex tasks or in complex environments, including hazardous, distributed, remote, and/or incompletely knowable settings. There is a rich and growing literature on Self-Aware and Self-Adaptive systems, but few of these approaches allow the system to build its own models. Nonetheless, many of the specific properties our systems exhibit have been implemented in other ways, and we will describe many of those other choices.

We show how to build systems that have enough self-information to decide when and how to construct, analyze, and communicate models of their operational environments, of their history of interaction with that environment, and of their own behavior and internal decision processes. They can assess their own models and also evaluate and improve the preliminary models we may provide them. These systems will explore their environment, using active experimentation to assess hypotheses and adjust their models. Each of these capabilities has been proposed for other systems, and we compare and contrast those choices with ours.

If we are going to build systems that can act as information partners, we will expect them to communicate among themselves and with us. We will expect them at least to interpret models suggested by us and communicate to us the models that they construct. That means that we need mechanisms of mutually compatible model interpretation (we are specifically not trying to reach mutual understanding at this stage of development, only cooperative behavior based on communicated models). In order for the system to build or assimilate those models, it will need to probe its environment, looking for gaps or errors in the models. This active experimentation is an essential part of situation awareness, since it provides some of the context within which situations can be interpreted. This process of identifying model weaknesses and using them to improve the models is called Model Deficiency Analysis, which is an active area of research.

To that end, we draw principles from theoretical biology and show how to use them in our computational processes. We use the Wrapping integration infrastructure that implements this style of reflective computing, with all computational resources implemented as limited-scope functions, explicit descriptions of all of these functions and when it is appropriate to use them, and powerful Knowledge-Based integration support processes, all of which are themselves computational resources with explicit descriptions.

We have shown that the Wrapping approach is ideal for adaptive and autonomous systems, including Self-Modeling Systems, in many previous papers, and in full day tutorials in previous SASO conferences. To understand the necessary choices, we start by introducing some of the basic elements of creating a reflective system, i.e., one that reasons about its own resources. For the many relevant design questions, we will describe approaches to addressing them other than ours, and why we chose to do what we did. We will present and discuss examples in developing situation awareness capabilities using a testbed for embedded real-time systems, called CARS (Computational Architectures for Reflective Systems).

Monday, April 8 3:00 - 3:30

Coffee Break

Monday, April 8 6:00 - 9:00

Welcome Reception

Tuesday, April 9

Tuesday, April 9 8:00 - 9:00

Breakfast

Tuesday, April 9 9:00 - 9:10

Conference Opening

Tuesday, April 9 9:10 - 10:00

K1: Keynote by Dr. Rand Waltzman, Deputy CTO at RAND Corporation, Santa Monica, CA, "MD: Multimedia Disinformation - Is there a doctor in the house?!"

Dr. Rand Waltzman
Chair: Kellyn Rein (Fraunhofer FKIE, Germany)

So-called "deep fake" technologies have the potential to create audio and video of real people saying and doing things they never said or did. The technology required to create these digital forgeries is rapidly developing. It is on the verge of becoming commoditized to the point where anybody with a laptop computer, very modest investment in software and minimal technical skill will have the capability to create realistic audiovisual forgeries. Machine learning and Artificial Intelligence techniques are making deep fakes increasingly realistic and resistant to detection. Individuals and businesses will face novel forms of exploitation, intimidation, and sabotage. Things have been bad with disinformation polluting the information environment. They are about to get a lot worse. What is to be done?

Tuesday, April 9 10:00 - 10:30

Coffee Break

Tuesday, April 9 10:30 - 12:00

S1: Information Fusion

Chair: Galina L. Rogova (University at Buffalo, USA)
S1.1 Embedding Uncertainty in Conceptual Graphs for Semantic Information Fusion
Pawel Kowalski and Trevor Martin (University of Bristol, United Kingdom (Great Britain))
S1.2 Dominance-based Rough Set Approach Supporting Experts in Situation Assessment
Giuseppe D'Aniello and Matteo Gaeta (University of Salerno, Italy)
S1.3 Situation Mining: Event Pattern Mining for Situation Model Induction
Andrea Salfinger (Johannes Kepler University Linz, Austria)

Tuesday, April 9 12:00 - 1:30

Lunch (On your own)

Tuesday, April 9 1:30 - 3:00

S2: Decision Support

Chair: Scott Fouse (Self Employed, USA)
S2.1 Modeling Decisions in Collective Risk Social Dilemma Games for Climate Change Using Reinforcement Learning
Medha Kumar (Indian Institute of Technology Mandi, India); Kapil Agrawal (Indian Institute of Technology Mandi (IIT Mandi), India); Varun Dutt (Indian Institute of Technology, Mandi, India)
S2.2 Reasoning and Decision Making Under Uncertainty and Risk for Situation Management
Galina L. Rogova (University at Buffalo, USA); Roman Ilin (AFRL, USA)
S2.3 Knowledge Complacency and Decision Support Systems
Sebastian Rodriguez (University of Illinois at Urbana-Champaign, USA); James Schaffer (US Army Research Laboratory, USA); John O'Donovan and Tobias Höllerer (University of California, Santa Barbara, USA)

Tuesday, April 9 3:00 - 3:30

Coffee Break

Tuesday, April 9 3:30 - 5:00

S3: Modeling and Simulations

Chair: Giuseppe D'Aniello (University of Salerno, Italy)
S3.1 A Theoretical Model for Assessing Information Validity from Multiple Observers
Stephen Dorton (Sonalysts, Inc., USA); Ian Frommer (University of Maryland, USA); Teena M. Garrison (Sonalysts, Inc., USA)
S3.2 Living in a Sensor Limited World
Christopher Landauer and Kirstie L Bellman (Topcy House Consulting, USA)
S3.3 Modeling Drivers' Takeover Behavior Depending on the Criticality of Driving Situations and the Complexity of Secondary Tasks
Foghor Tanshi (University of Duisburg-Essen & Chair of Dynamics and Control, Germany); Dirk Söffker (University Duisburg-Essen, Germany)

Tuesday, April 9 5:30 - 7:30

CogSIMA 2020 Planning Meeting

Wednesday, April 10

Wednesday, April 10 8:00 - 9:00

Breakfast

Wednesday, April 10 9:00 - 10:00

K2: Keynote by Dr. Doug Riecken, Air Force Office of Scientific Research (AFOSR), USA

Dr. Doug Riecken
Chair: Andrea Salfinger (Johannes Kepler University Linz, Austria)

Wednesday, April 10 10:00 - 10:30

Coffee Break

Wednesday, April 10 10:30 - 12:00

S4: Applications

Chair: Dirk Soeffker (University Duisburg-Essen, Germany)
S4.1 AOH-Map: A Mind Mapping System for Supporting Collaborative Cyber Security Analysis
Chen Zhong, Awny Alnusair, Brandon Sayger and Aaron Troxell (Indiana University Kokomo, USA); Jun Yao (University of Texas at Arlington, USA)
S4.2 Predicting Demand in IoT Enabled Service Stations
Himadri Sikhar Khargharia (EBTIC, Khalifa University & CDAC Bangalore, United Arab Emirates); Siddhartha Shakya (EBTIC, Khalifa University, United Arab Emirates); Russell Ainslie (British Telecom, United Kingdom (Great Britain)); Sara AlShizawi (EBTIC, Khalifa University, United Arab Emirates); Gilbert Owusu (British Telecom, United Kingdom (Great Britain))
S4.3 Exploiting Vehicle-to-Vehicle Communications for Enhanced Situational Awareness
Thanuka Wickramarathne and Amanda Metzner (University of Massachusetts Lowell, USA)

Wednesday, April 10 12:00 - 1:30

Lunch (On your own)

Wednesday, April 10 1:30 - 3:30

P1: Poster Session

Short Presentations and Posters
Chair: Kenneth P. Baclawski (Northeastern University, USA)
P1.1 Information Fusion for Maritime Domain Awareness: Illegal Fishing Detection
Kerry Trentelman, Adam Saulwick, Rebecca Rafferty and Aaron Ceglar (Defence Science and Technology Group, Australia)
P1.2 Simulation-Based Reduction of Operational and Cybersecurity Risks in Autonomous Vehicles
George Clark, Jr. and Todd R Andel (University of South Alabama, USA); Mike Doran (Louisiana State University Shreveport)
P1.3 Markov Decision Processes with Coherent Risk Measures: Risk Aversity in Asset Management
Yuji Yoshida (University of Kitakyushu)
P1.4 Mapping the Information Flows for the Architecture of a Nation-Wide Situation Awareness System
Peeter Laud (Cybernetica AS, Estonia); Hayretdin Bahsi (Tallinn University of Technology, Estonia); Veiko Dieves (Centre for Applied Studies National Defence College, Estonia); Taivo Kangilaski (Tallinn University of Technology, Estonia); Leo Motus (Estonian Academy of Science, Estonia); Jaan Murumets and Illimar Ploom (Estonian National Defence College, Estonia); Jaan Priisalu (Tallinn University of Technology, Estonia); Mari Seeba (Cybernetica AS, Estonia); Ermo Täks (Tallinn University of Technology, Estonia); Kaide Tammel (Estonian National Defence College, Estonia); Piia Tammpuu (University of Tartu, Estonia); Kuldar Taveter (Tallinn University of Technology, Estonia); Avo Trumm, Tiia-Triin Truusa and Triin Vihalemm (University of Tartu, Estonia)
P1.5 Evaluating Improvement in Situation Awareness and Decision-Making Through Automation
Timothy Hanratty and Erin Zaroukian (Army Research Laboratory, USA); Justine Caylor (US Army Research Laboratory, USA); Michelle Vanni and Sue E Kase (Army Research Laboratory, USA)
P1.6 Uncovering Age Progression in Wireless Signal Propagation Modeling Using Decisions of Machine Learning Classifiers
Ashraf A Tahat (Princess Sumaya University for Technology, Jordan); Majd Abukhalaf (Orange Jordan Mobile Communications); Talal Edwan and Omar Saraereh (Princess Sumaya University for Technology, Jordan)

Wednesday, April 10 3:30 - 4:00

Coffee Break

Wednesday, April 10 4:00 - 5:30

Industry Panel

Chair: Scott Fouse (Self Employed, USA)

Wednesday, April 10 6:00 - 9:00

Conference Banquet Dinner

Banquet Group Discussion: On the importance and challenges of integrating AI components to achieve system level intelligent behavior
Chair: Scott Fouse (Self Employed, USA)

Thursday, April 11

Thursday, April 11 8:00 - 9:00

Breakfast

Thursday, April 11 9:00 - 10:00

K3: Keynote by Prof. Nancy J. Cooke, Arizona State University, AZ, USA, "Human-Autonomy Teaming: Can Autonomy be a Good Team Player?"

Prof. Nancy J. Cooke
Chair: Nicolette McGeorge (Charles River Analytics, USA)

Abstract: A team is an interdependent group of three or more people who have different roles and who interact with one another toward a common goal. Teams can engage in physical activities as a unit (e.g., lifting a patient from bed), as well as cognitive activities (e.g., specialists coordinating on a patient's diagnosis). Team cognition is the execution of these cognitive activities (e.g., perception, planning, decision making) at a team level. But do teammates need to be people? Advances in artificial intelligence and machine learning have provided machines with increasing levels of autonomy. The human-machine relationship can shift from humans supervising machines to humans teaming with machines. But do machines have what it takes to be good teammates? In this talk I will discuss what we know about team cognition in human teams and present some findings from studies of human-autonomy teams.

Thursday, April 11 10:00 - 10:30

Coffee Break

Thursday, April 11 10:30 - 12:00

S5: Situation Awareness

Chair: Mustafa Canan (Naval Postgraduate School, USA)
S5.1 Measuring the Collective Allostatic Load
Kemal Davaslioglu (Intelligent Automation, Inc, USA); Bob Pokorny, Yalin E Sagduyu, Henrik Molintas and Sohraab Soltani (Intelligent Automation, Inc., USA); Rebecca Grossman (Hofstra University, USA); Clint Bowers (University of Central Florida, USA)
S5.2 Assessing Cognitive Fidelity in a Situation Awareness Process Model
Mary D Freiman (Aptima, Inc., USA); Christopher Myers (Air Force Research Laboratory, USA); Jerry Ball (Oak Ridge Institute for Science and Education, USA); Michelle Caisse (L3 Technologies & Air Force Research Laboratory, USA); Tim Halverson (Oregon Research in Cognitive Applications, LLC, USA)
S5.3 Use Cases for Evaluation of Machine Based Situation Awareness
Kenneth P. Baclawski (Northeastern University, USA); Dieter Gawlick (Oracle, USA); Kenny Gross (Oracle Physical Sciences Research Center, USA); Adel Ghoneimy, Zhen Liu and Anna Chystiakova (Oracle, USA)

Thursday, April 11 12:00 - 1:30

Lunch (On your own)

Thursday, April 11 1:30 - 3:00

S6: Shared Situation Awareness

Chair: Kenneth P. Baclawski (Northeastern University, USA)
S6.1 Pragmatic Idealism: Towards a Probabilistic Framework of Shared Awareness in Complex Situations
Mustafa Canan (Naval Postgraduate School, USA); Andres Sousa-Poza (Old Dominion University, USA)
S6.2 Effective Team Interaction for Adaptive Training and Situation Awareness in Human-Autonomy Teaming
Mustafa Demir and Craig J Johnson (Arizona State University, USA); David Grimm (Georgia Institute of Technology, USA); Nathan J. McNeese (Clemson University, USA); Jamie Gorman (Georgia Institute of Technology, USA); Nancy Cooke (Arizona State University, USA)
S6.3 Identifying Consensus in Heterogeneous Multidisciplinary Professional Teams
Brandon Perelman (US Army Research Laboratory, USA); Stephen Dorton and Samantha Harper (Sonalysts, Inc., USA)

Thursday, April 11 3:00 - 3:30

Conference Closing


May 10: Thanks everyone for making a successful & enjoyable CogSIMA 2019 happen! Save the date: CogSIMA 2020 will take place May 4 - 7, 2020, in picturesque Victoria, BC, Canada!
Apr 1: We are pleased to announce that we will also offer breakfast on Monday - make sure to arrive early!
Apr 1: Our keynote program has been updated.
Mar 22: We would like to thank our patron smartcloud for the continued support!

Sponsors and Patrons

IEEE
IEEE SMC Society
Lockheed Martin
Charles River Analytics
Smart Cloud
