

Tutorials

CogSIMA 2019 will feature the following tutorials, which will deepen attendees' knowledge of emerging techniques supporting situation management, notably Explainable Artificial Intelligence (XAI), cutting-edge findings on energy constraints in cognitive processing that inspire next-generation AI technologies, social data analysis, and self-modeling:

Tutorial 1: Conversational Explanations – Explainable AI through human-machine conversation

Instructor: Dave Braines, IBM Research UK
Time: Monday 8 April 2019: 9:00 a.m. - 12:00 p.m.

Abstract: Explainable AI has received significant attention from both the research community and the popular press. The tantalizing potential of artificial intelligence solutions may be undermined if the machine processes that produce their results are black boxes, unable to offer any insight or explanation into the results, the processing, or the training data on which they are based. The ability to provide explanations can help build user confidence, rapidly indicate the need for correction or retraining, and provide initial steps towards mitigating issues such as adversarial attacks or allegations of bias. In this tutorial we will explore the space of Explainable AI, with a particular focus on the role of the human users within the human-machine hybrid team and on whether a conversational interaction style is useful for obtaining such explanations quickly and easily. The tutorial is broken down into three broad areas, which are dealt with sequentially:

  1. Explainable AI
    What is it? Why do we need it? What is the state of the art?
    Starting with the philosophical definition of explanations and the role they serve in human relationships, this part covers the core topic of explainable AI, looking into different techniques for different kinds of AI systems, different fundamental classifications of explanations (such as transparent, post-hoc, and explanation by example), and the different roles these may play with human users in a human-machine hybrid system. Examples of adversarial attacks and the role of explanations in mitigating them will be given, along with the need to defend against bias (whether algorithmic or arising from training data issues). A small illustrative sketch of one post-hoc technique appears after this list.
  2. Human roles in explanations
    Building on the work reported in "Interpretable to whom?" [Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Jul 2018] this section examines the different roles that a human (or machine) user within the system may be fulfilling, and why the role has an important part to play in determining what kind of explanation may be required. In almost all current AI explanation-related research the role of the user is not a primary consideration, but we assert that the ability to create a meaningful explanation must take this into account. The goals of the users will vary depending on their role, and the explanations that will serve them in achieving these goals will also vary.
  3. Conversational explanations
    Conversational machine agents (such as Alexa, Siri and Google) are becoming increasingly commonplace, but the typical interactions these agents support are fairly simple. Conversational interactions can be especially useful in complex or evolving situations where designing a rich and complete user interface in advance may not be possible. In our ongoing research we are investigating the role of conversational interaction in obtaining AI explanations, and we will report our findings so far in this section. There will also be a live interactive demo for optional use by the audience during this session.
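
As a concrete, hedged illustration of the "post-hoc" category mentioned in part 1, the short Python sketch below computes permutation feature importance for an arbitrary black-box predictor: shuffle one input feature at a time and measure how much the model's score drops. This is only a generic example of a post-hoc technique, not the method presented in the tutorial; the toy model, data and function names are invented for the illustration.

    # Minimal sketch of one post-hoc explanation technique: permutation feature
    # importance over an arbitrary black-box predict function. Toy data only.
    import numpy as np

    def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
        """Mean score drop caused by shuffling each feature column of X."""
        rng = np.random.default_rng(seed)
        baseline = metric(y, predict(X))
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                Xp = X.copy()
                Xp[:, j] = rng.permutation(Xp[:, j])   # destroy feature j's information
                drops.append(baseline - metric(y, predict(Xp)))
            importances[j] = np.mean(drops)            # larger drop => more important
        return importances

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 3))
        y = (2.0 * X[:, 0] - 0.5 * X[:, 2] > 0).astype(float)        # feature 1 is irrelevant
        black_box = lambda A: (2.0 * A[:, 0] - 0.5 * A[:, 2] > 0).astype(float)
        accuracy = lambda truth, pred: float(np.mean(truth == pred))
        print(permutation_importance(black_box, X, y, accuracy))

The resulting importances are one simple, model-agnostic form of explanation; the tutorial itself treats explanation far more broadly, including its conversational delivery.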

Intended Audience: This tutorial is intended for researchers in any field where complex algorithms or processes inform human decision-making. Participants will be taken through a general overview of explanation in both human and machine contexts, and of how the role of the agent has a significant impact on what kind of explanation might be useful. The tutorial will then move on to ongoing research into the role of conversation as a tool for enabling explanations in human-machine hybrid systems, along with an interactive demonstration of an early version of this capability.

Dave Braines

Instructor's biography: Dave Braines is the Chief Technology Officer for Emerging Technology at IBM Research UK and a Fellow of the British Computer Society. As a member of the IBM Research division he is an active researcher in the field of Artificial Intelligence, currently focused on Machine Learning, Deep Learning and Network Motif analysis. He has published over 100 conference and journal papers and is currently the industry technical leader for a 10-year research consortium comprising 17 academic, industry and government organisations from the UK and US. Dave is passionate about human-machine cognitive interfaces and has developed a number of techniques to support deep interactions between human users and machine agents.

Since 2017 Dave has been pursuing a part-time PhD in Artificial Intelligence at Cardiff University, and in his spare time he likes to get outdoors for camping, walking, kayaking, cycling or anything else that gets him away from desks and screens!

 

Tutorial 2: Energy Constraints in Cognitive Processing - The Role of Constraint Satisfaction in Emergent Awareness

Instructors:
Robert Kozma, PhD, Department of Computer Science, UMass Amherst, MA & Department of Mathematics, U of Memphis, TN, USA
Roman Ilin, PhD, Air Force Research Laboratory, Wright Patterson Air Force Base, Dayton, OH, USA

Time: Monday 8 April 2019: 9:00 a.m. - 12:00 p.m.

Abstract: Recent insights into brain dynamics and cognitive processing provide important clues for the development of artificially intelligent systems capable of situation awareness, flexible operation, and rapid response to unpredictable events in dynamically changing and potentially hostile environments. The focus of this tutorial is to analyze the consequences of constraint satisfaction for the development of new AI technologies. Embodiment is a key feature of biological intelligence, which finds its manifestation in embodied robotics and situated intelligence. Energy awareness can be viewed as the ultimate expression of embodied intelligence; without an energy supply from the environment, no intelligence is possible. Yet energy constraints are often ignored, or play only a secondary role, in typical cutting-edge AI approaches. For example, deep learning convolutional networks often require huge amounts of data, time, parameters, energy, and computational power, which may not be readily available in various scenarios.

Our approach proposes solutions to several pitfalls observed in cutting-edge AI solutions, such as unsustainable, exponentially growing computational and resource demands; catastrophic deterioration of performance in response to minute changes in input data, whether random or intentional; and susceptibility to malicious, deceptive actions by adversaries. Learning from the neurosciences and cognitive sciences, we outline mathematical and computational models of neurodynamics and their implementations in practical problems. The tutorial covers the following topics:

  1. Overview of insights from neurobiology and advanced brain imaging on the dynamics of higher cognition and intentionality. The cinematic model of cognition and sequential decision-making. Aspects of embodiment, situated cognition, and consciousness, including technical and philosophical issues.
  2. Mathematical and computational models of experimentally observed neurodynamics. Describing the Freeman K (Katchalsky) model hierarchy (K0-KIV) of cortical structures, dynamics, and functions.
  3. Practical implementations of embodied cognition, including multisensory percept formation and the intentional action-perception cycle. Examples of self-organized development of behaviors using reinforcement in the NASA Mars Rover SRR-2K robotics test bed.
  4. Energy-aware implementation of AI designs motivated by brain metabolism, using computational units coupled with their own energy units (metabolic subsystems); see the sketch after this list for a toy illustration of the coupling. Principles of dynamical pattern-based computation through activation sequences in arrays of oscillators.
  5. Practical illustrations of the energy-aware computing approach, including distributed sensing with limited bandwidth. Comparison with cutting-edge AI results in computer gaming (e.g., Atari), showing that leading deep reinforcement learning results can be reproduced far more efficiently. Implications for neuromorphic hardware (Loihi, TrueNorth, etc.).
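
To make the coupling in topic 4 concrete, here is a deliberately simple, hypothetical Python sketch of a computational unit tied to an energy reservoir: activity consumes energy, the reservoir recharges from the environment, and low reserves throttle further computation. It is not the instructors' K-set or neuropercolation model; every parameter and name is illustrative only.

    # Purely illustrative "energy-aware" processing unit: an oscillator whose
    # activity drains a coupled energy reservoir (a stand-in for a metabolic
    # subsystem) and is throttled when the reservoir runs low.
    import math

    class EnergyAwareUnit:
        def __init__(self, recharge=0.05, cost=0.10, capacity=1.0):
            self.energy = capacity        # current reservoir level
            self.capacity = capacity
            self.recharge = recharge      # replenishment per step (environmental supply)
            self.cost = cost              # energy consumed per unit of activity
            self.phase = 0.0

        def step(self, drive):
            # Activity is gated by available energy: low reserves suppress computation.
            gate = self.energy / self.capacity
            self.phase += 0.3 * gate * drive
            activity = gate * abs(math.sin(self.phase))
            # Couple computation to metabolism: spend energy, then recharge toward capacity.
            self.energy = min(self.capacity,
                              self.energy - self.cost * activity + self.recharge)
            self.energy = max(0.0, self.energy)
            return activity

    if __name__ == "__main__":
        unit = EnergyAwareUnit()
        for t in range(20):
            a = unit.step(drive=1.0)
            print(f"t={t:2d}  activity={a:.3f}  energy={unit.energy:.3f}")

The point of the sketch is only that computation and its energy budget evolve together, so sustained activity is bounded by the rate of resupply rather than by the processor alone.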

Intended Audience: The tutorial is intended for those interested in better understanding the advantages and shortcomings of today's leading deep learning AI, and possible ways to resolve the mounting bottleneck caused by its exponentially increasing demand for resources. The tutorial does not require thorough knowledge of the topics covered; rather, it provides a comprehensive overview of how cognitive, neural, computational, and engineering aspects of intelligence can be combined in a unified framework. It is self-contained and will be accessible to researchers and students with a basic math and engineering background.

References: Handout materials will be available online for attendees. For related literature, see the list below.

  1. R. Kozma, C. Alippi, Y. Choe, C. Morabito (Eds.), "Artificial Intelligence in the Age of Neural Networks and Brain Computing," Academic Press, Elsevier, USA, ISBN 9780128154809 (2018).
  2. R. Kozma and W.J. Freeman, "Cognitive Phase Transitions in the Cerebral Cortex - Enhancing the Neuron Doctrine by Modeling Neural Fields," Springer Verlag, Heidelberg, ISBN 978-3-319-24404-4 (2016).
  3. H. Hazan, D. Saunders, H. Khan, D. Sanghavi, H.T. Siegelmann, R. Kozma, "BindsNET: A machine learning-oriented spiking neural networks library in Python," Frontiers in Neuroinformatics (2018). https://www.frontiersin.org/articles/10.3389/fninf.2018.00089/full
  4. R. Kozma and W.J. Freeman, "Cinematic operation of cerebral cortex interpreted via critical transitions in self-organized dynamical systems," Frontiers in Systems Neuroscience, 11(10) (2017). https://www.frontiersin.org/articles/10.3389/fnsys.2017.00010/full
  5. R. Kozma and M. Puljic, "Neuropercolation model of pattern-based computing in brains through cognitive phase transitions," Theoretical Computer Science C - Natural Computing, 633, pp. 54-70 (2015). http://dl.acm.org/citation.cfm?id=2952041
  6. R. Kozma, T. Huntsberger, H. Aghazarian, E. Tunstel, R. Ilin, W.J. Freeman, "Intentional Control for Planetary Rover SRR2k," Advanced Robotics, 21, pp. 1109-1127 (2008).
  7. R. Ilin and R. Kozma, "Stability of coupled excitatory-inhibitory neural populations and application to control multistable systems," Physics Letters A, 360, pp. 66-83 (2006).

Robert Kozma

Instructor's biography: Dr. Robert Kozma (Fellow of the IEEE; Fellow of the International Neural Network Society, INNS) is Professor of Mathematics and Director of the Center of Large-Scale Integration and Optimization Networks (CLION) at the FedEx Institute of Technology, University of Memphis, TN. He is also Visiting Professor of Computer Science at the University of Massachusetts Amherst and Director of the Biologically Inspired Neural and Dynamical Systems (BINDS) Laboratory, where he now leads a DARPA initiative towards energy-aware Superior AI. He is a Past President of INNS and has served on the governing boards of the IEEE SMC and CIS Societies. He is a recipient of the INNS Gabor Award and has been an NRC Senior Fellow with AFRL. His research focuses on brain-inspired AI. His neuropercolation model of space-time brain dynamics interprets cognitive phase transitions as basic attributes of intelligence, both biological and artificial. In collaboration with Walter Freeman (UC Berkeley) and colleagues, he established the Freeman K (Katchalsky) sets, considered today the most advanced approach to describing the hierarchy of brain structures, dynamics, and cognitive functions, including intentionality. Application areas include sensor networks, autonomous control, decision support, big data, knowledge extraction, brain imaging, and brain-computer interfaces.

Roman Ilin

Instructor's biography: Dr. Roman Ilin is a Research Engineer at the Air Force Research Laboratory, WPAFB, Dayton, OH. He completed his graduate studies at the University of Memphis (PhD 2008) under the supervision of Dr. Robert Kozma, working on learning in neural network arrays for brain models and various practical applications. He has received several awards, including the Norton Dissertation Award at the University of Memphis, bestowed on the best PhD dissertation in a given year, and the INNS Young Investigator Award. His research encompasses neural networks, multi-layer convolutional networks and deep learning, dynamical properties of chaotic neural networks, and learning in simultaneous recurrent neural networks using extended Kalman filtering based on approximate dynamic programming (ADP). His work supports various applications, including situation awareness in complex scenarios and distributed, multi-modal sensor systems.

 

Tutorial 3: Social Data Analysis for Intelligence

Instructor: Dr. Valentina Dragos, ONERA – The French Aerospace Lab, France
Time: Monday 8 April 2019: 1:30 - 4:30 p.m.

Abstract: This tutorial investigates several issues in social data analysis for intelligence. Social data is understood as information collected from social media, including various networks and platforms, that captures not only what online users publish on those platforms but also how they share, view, or engage with content or other users. The tutorial does not break down how to make sense of social media data, but rather raises questions to be addressed before exploring social media as a resource for intelligence analysis. The tutorial is organized into seven chapters.

The first chapter introduces intelligence analysis as the application of cognitive methods to weigh data and test hypotheses within a specific socio-cultural context.
The second chapter explores some of the unique features of cyberspace that shape how people behave in this new social realm. The chapter also analyses how the virtual domain of cyberspace is unlike the environmental domains of air, land, maritime and space and how it challenges traditional understanding of concepts such as temporality, conflict, information, border, community, identity or governance.
The next chapter investigates the notions of trust and reliability for artefacts in cyberspace, ranging from information items and sources to more sophisticated structures such as virtual communities. The chapter shows that trust may be diminished in spite of the tremendous volume of information, and that cyberspace is prone to phenomena that harm data completeness and credibility. Several such phenomena will be considered: opacity and information filtering (echo chambers, filter bubbles), disinformation campaigns (fake news, propaganda, hoaxes, site spoofing), misleading intentions (data leaks), and biased interactions (social bots, smoke screening).
Chapter 4 investigates the nature of social data content, asking whether social data conveys factual and useful pieces of information or rather subjective content in the form of personal opinions, beliefs and impressions. The discussion is based on two illustrations of social data analysis: the first tackles fake news propagation in the aftermath of terrorist attacks; the second addresses the subjective assessment of concepts conveying extreme ideologies online. A toy sketch of this factual-versus-subjective distinction follows the chapter overview below.
Chapter 5 identifies pitfalls in exploring cyberspace, both in isolation and in its interconnectedness with the real world. First, cyberspace comes with its own riddles and pairs of opposing concepts with blurred frontiers: free speech and action vs. online hate and cyberbullying; online privacy and personal data vs. fake profiles and identities; transparency vs. anonymity by design. Second, additional pitfalls occur when social data is analyzed in the light of real-life events. Specific phenomena induced by white data and real-life bias induced by silent communities will be discussed.
Chapter 6 addresses how gathering, processing and analyzing social data impacts intelligence analysts, given the characteristics of those data.
The last chapter concludes the tutorial by illustrating the state of the art in tools and techniques for cyberspace exploration, along with several ongoing research projects, NATO research tracks and initiatives addressing the many facets of social data analysis. While showing that, from a practical standpoint, solutions are still at the level of after-the-fact forensics, the chapter will highlight several initiatives adopted by various bodies to counter illegal content and online hate and, finally, to make the Internet a safer place.
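
As a toy illustration of the factual-versus-subjective distinction raised in Chapter 4, the Python sketch below scores short posts by the fraction of tokens drawn from a small hand-made lexicon of subjective cues. It is not an analysis method used in the tutorial, and the word list is invented for the example; real pipelines rely on trained classifiers and curated resources.

    # Toy lexicon-based subjectivity score for short social media posts.
    # The cue list is invented for this example and is not a vetted resource.
    SUBJECTIVE_CUES = {"think", "feel", "believe", "awful", "amazing", "hate",
                       "love", "probably", "terrible", "great"}

    def subjectivity_score(text: str) -> float:
        """Fraction of tokens that are subjective cues (0 = factual-looking, 1 = opinionated)."""
        tokens = [t.strip(".,!?").lower() for t in text.split()]
        if not tokens:
            return 0.0
        hits = sum(1 for t in tokens if t in SUBJECTIVE_CUES)
        return hits / len(tokens)

    if __name__ == "__main__":
        posts = [
            "The road between the station and the square is closed.",
            "I think this is awful and the officials are terrible!",
        ]
        for p in posts:
            print(f"{subjectivity_score(p):.2f}  {p}")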

Intended Audience: This tutorial is intended for students, researchers and practitioners who are interested in cyberspace exploration and social data analysis. Through illustrations based on realistic use cases, participants will learn about the major challenges of gathering, analyzing and interpreting data from social media, and will discover major initiatives undertaken to address some of those challenges and to make cyberspace a more resilient environment.

Valentina Dragos

Instructor's biography: Dr. Valentina Dragos is a research scientist in the Department of Information Modeling and Systems at ONERA, The French Aerospace Lab, in Palaiseau, France. Valentina received her Master's and PhD degrees in Computer Science from Paris V University, and her research interests include artificial intelligence, with an emphasis on natural language processing, semantic technologies and automated reasoning. Since joining ONERA in 2010, Valentina has contributed to several academic and industrial security-oriented projects addressing topics such as semantic interoperability for command and control systems, heterogeneous information fusion, exploration of open sources and social data, and integration of symbolic data (HUMINT, OSINT) for situation assessment.

 

Tutorial 4: Self-Modeling for Adaptive Situation Awareness

Instructors: Dr. Christopher Landauer, Dr. Kirstie L. Bellman, Dr. Phyllis Nelson
Time: Monday 8 April 2019: 1:30 - 4:30 p.m.

Abstract: This tutorial is about how to build systems that can be trusted to be appropriately situation aware, so that they can act as our information partners for complex tasks or in complex environments, including hazardous, distributed, remote, and/or incompletely knowable settings. There is a rich and growing literature on Self-Aware and Self-Adaptive systems [1], [9], but few of these approaches allow the system to build its own models. Nonetheless, many of the specific properties our systems exhibit have been implemented in other ways, and we will describe many of those other choices.

We show how to build systems that have enough self-information to decide when and how to construct, analyze, and communicate models of their operational environments, of their history of interaction with those environments, and of their own behavior and internal decision processes. They can assess their own models and also evaluate and improve the preliminary models we may provide them. These systems will explore their environment, using active experimentation to assess hypotheses and adjust their models. Each of these capabilities has been proposed for other systems, and we compare and contrast those choices with ours.

If we are going to build systems that can act as information partners, we will expect them to communicate among themselves and with us. We will expect them at least to interpret models suggested by us and communicate to us the models that they construct. That means that we need mechanisms of mutually compatible model interpretation (we are specifically not trying to reach mutual understanding at this stage of development, only cooperative behavior based on communicated models). In order for the system to build or assimilate those models, it will need to probe its environment, looking for gaps or errors in the models. This active experimentation is an essential part of situation awareness, since it provides some of the context within which situations can be interpreted. This process of identifying model weaknesses and using them to improve the models is called Model Deficiency Analysis [8], which is an active area of research.
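
The following minimal Python sketch illustrates the general idea behind deficiency-driven improvement: track prediction error per region of the environment and flag regions whose persistent error suggests the model is deficient there and worth probing with an experiment. It is only a conceptual illustration under invented names and thresholds, not the authors' Model Deficiency Analysis method [8].

    # Conceptual sketch: keep a running prediction error per region and, when a
    # region's mean error exceeds a threshold, schedule an "experiment" (an
    # environmental probe) there. Names and thresholds are hypothetical.
    from collections import defaultdict

    class DeficiencyTracker:
        def __init__(self, threshold=0.5):
            self.errors = defaultdict(list)   # region -> recent absolute errors
            self.threshold = threshold

        def record(self, region, predicted, observed):
            self.errors[region].append(abs(predicted - observed))

        def experiments_needed(self):
            """Regions whose mean recent error marks the model as deficient there."""
            return [r for r, errs in self.errors.items()
                    if sum(errs) / len(errs) > self.threshold]

    if __name__ == "__main__":
        tracker = DeficiencyTracker()
        tracker.record("corridor", predicted=1.0, observed=1.1)   # model fits well here
        tracker.record("doorway", predicted=1.0, observed=2.4)    # model is badly off here
        print("Probe next:", tracker.experiments_needed())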

To that end, we draw principles from theoretical biology and show how to use them in our computational processes. We use the Wrapping integration infrastructure that implements this style of reflective computing [2], with all computational resources implemented as limited-scope functions, explicit descriptions of all of these functions and when it is appropriate to use them, and powerful Knowledge-Based integration support processes, all of which are themselves computational resources with explicit descriptions.
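
In the spirit of that description (though not the actual Wrapping infrastructure), the hypothetical Python sketch below registers computational resources as limited-scope functions together with explicit descriptions of what they do and when they apply, and a selection step consults those descriptions at run time to pick a resource for a task.

    # Illustrative registry of computational resources with explicit descriptions
    # and applicability conditions, consulted at run time. Not the Wrapping
    # infrastructure itself; all names and resources are invented for the example.
    registry = []

    def register(task, applies_when, description):
        def wrap(fn):
            registry.append({"task": task, "applies_when": applies_when,
                             "description": description, "fn": fn})
            return fn
        return wrap

    def select_and_run(task, context, *args):
        """Pick the first registered resource whose description says it applies."""
        for res in registry:
            if res["task"] == task and res["applies_when"](context):
                return res["fn"](*args)
        raise LookupError(f"no applicable resource for task '{task}'")

    @register("estimate_range", lambda ctx: ctx.get("lidar_ok", False),
              "Range estimate from lidar; use only when the lidar is healthy.")
    def range_from_lidar(scan):
        return min(scan)

    @register("estimate_range", lambda ctx: True,
              "Fallback coarse range estimate from the last known map.")
    def range_from_map(scan):
        return 5.0

    if __name__ == "__main__":
        print(select_and_run("estimate_range", {"lidar_ok": True}, [2.3, 1.8, 4.0]))
        print(select_and_run("estimate_range", {"lidar_ok": False}, [2.3, 1.8, 4.0]))

In the approach described above, the descriptions and the selection processes are themselves computational resources with their own descriptions; the sketch only hints at that recursion.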

We have shown that the Wrapping approach is ideal for adaptive and autonomous systems, including Self-Modeling Systems [3], in many previous papers and in full-day tutorials at previous SASO conferences [6], [7]. To explain the necessary choices, we start by introducing some of the basic elements of creating a reflective system, i.e., one that reasons about its own resources. For the many relevant design questions, we will describe approaches other than ours to addressing them, and why we chose to do what we did. We will present and discuss examples of developing situation awareness capabilities using a testbed for embedded real-time systems called CARS (Computational Architectures for Reflective Systems) [4], [5].


[1] Samuel Kounev, Jeffrey O. Kephart, Aleksandar Milenkoski, Xiaoyun Zhu (eds.), Self-Aware Computing Systems, Springer (2017)
[2] Christopher Landauer, Kirstie L. Bellman, "Generic Programming, Partial Evaluation, and a New Programming Paradigm", Chapter 8, pp. 108-154 in Gene McGuire (ed.), Software Process Improvement, Idea Group Publishing (1999)
[3] Christopher Landauer, Kirstie L. Bellman, "Self-Modeling Systems", pp. 238-256 in R. Laddaga, H. Shrobe (eds.), "Self-Adaptive Software", Springer Lecture Notes in Computer Science, Volume 2614 (2002)
[4] Kirstie L. Bellman, Christopher Landauer, Phyllis R. Nelson, "Managing Variable and Cooperative Time Behavior", Proceedings SORT 2010: The First IEEE Workshop on Self-Organizing Real-Time Systems, 05 May 2010, part of ISORC 2010: The 13th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing, 05-06 May 2010, Carmona, Spain (2010)
[5] Kirstie L. Bellman, Phyllis R. Nelson, "Developing Mechanisms for Determining 'Good Enough' in SORT Systems", Proceedings SORT 2011: The Second IEEE Workshop on Self-Organizing Real-Time Systems, 31 March 2011, part of ISORC 2011: The 14th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing, 28-31 March 2011, Newport Beach, California (2011)
[6] Christopher Landauer, Kirstie L. Bellman, "Wrapping Tutorial: How to Build Self-Modeling Systems", Proc. SASO 2011: The 5th IEEE International Conference on Self-Adaptive and Self-Organizing Systems, 03-07 October 2011, Ann Arbor, Michigan (2011)
[7] Christopher Landauer, Kirstie L. Bellman, Phyllis R. Nelson, "Wrapping Tutorial: How to Build Self-Modeling Systems", Proc. SASO 2012: The 6th IEEE International Conference on Self-Adaptive and Self-Organizing Systems, 10-14 October 2012, Lyon, France (2012)
[8] Christopher Landauer, "Mitigating the Inevitable Failure of Knowledge Representation", Proc. 2nd Models@run.time: The 2nd International Workshop on Models@run.time for Self-aware Computing Systems, part of ICAC 2017: The 14th International Conference on Autonomic Computing, 17-21 July 2017, Columbus, Ohio (2017)
[9] Peter R. Lewis, Marco Platzner, Bernhard Rinner, Jim Tørresen, Xin Yao (eds.), Self-Aware Computing Systems: An Engineering Approach, Springer (2016)

Instructor's biography: Dr. Christopher Landauer is a mathematician (Ph.D. Mathematics, Caltech, 1973) working on large-scale software-managed systems, with an emphasis on the development and evaluation of formal methods and other mathematically based tools for, and models of, complex software systems, and on the software development processes required to make them effective and reliable. This work has included both applications and research in communication protocols, discrete-event simulations, computer security and program verification, multiple-target tracking, spacecraft attitude determination, evaluation of knowledge-based systems, system integration infrastructure, natural language processing, computational semiotics and knowledge representation, and model-based design and engineering of computer-managed systems, including real-time, embedded, self-organizing, and reflective systems. He co-founded Topcy House Consulting in 2001.

Instructor's biography: Dr. Kirstie L. Bellman is a neurophysiologist (Ph.D. UCSD, 1979), computer scientist and mathematician working on large-scale software-managed systems, with a combined emphasis on formal methods and the appropriate use of biological principles. She was a DARPA Program Manager from 1993 to 1997, in charge of mathematical and formal methods programs, including Domain-Specific Software Architectures, rapid prototyping technology, and the large Computer-Aided Education and Training Initiative. At the end of her DARPA tenure, she received a rare award from the Office of the Secretary of Defense for excellence in her programs. Dr. Bellman has over thirty-five years of academic, industry, and consulting experience in the development of both conventional computer models and applications and artificial intelligence. Her published research spans a wide range of topics in the cognitive, neurophysiological, and information processing sciences. In addition to playing a leading role in the development of programs in error analysis and evaluation of expert systems, her group did nationally recognized research in extending the applications of expert systems to open-ended design problems and to the integration of mathematical and artificial intelligence techniques. She joined Topcy House Consulting in 2002.

Instructor's biography: Dr. Phyllis R. Nelson is a professor in the Department of Electrical and Computer Engineering at California State Polytechnic University Pomona. She holds an MSEE from Caltech and a PhD from UCLA. Prior to her academic career, Dr. Nelson was a systems engineer in the aerospace industry and a research staff member at leading universities in both the United States and France. She is currently exploring methods for designing trustworthy functioning of complex systems of systems, motivated by an interest in complexity as its own technical challenge.

