The humans in complex systems competency focuses on multi-disciplinary, non-medical approaches to understanding and modifying the potential of humans situated in and interacting within complex social, technological, and socio-technical systems.1 The competency is subdivided into six core competencies: bidirectional human–system communication, estimating and predicting humans in complex systems, human-guided system adaptation, human-system team interactions, hybrid human–technology intelligence, and neuroscience and neurotechnologies. The work within this competency was reviewed on November 2–4, 2022, at Aberdeen Proving Ground in Aberdeen, Maryland. Work outside of these core competencies, which was still housed under the humans in complex systems competency, was also presented at this review and is discussed below.
The increasing prevalence of artificial intelligence (AI) and autonomous systems creates tremendous possibility for future capabilities; however, the inclusion of these systems within Army teams creates challenges for effective communication between soldiers and systems. Human teams communicate through multiple modalities (e.g., speech, gestures, and body language) to efficiently and effectively accomplish their goals. The bidirectional human–system communication core competency focuses on research underpinning effective and efficient real-time multimodal communication between soldiers and systems.2 This research examines concepts of team-level trust calibration, measurement of team cohesion, and dynamic information presentation, leading to robust real-time human–system team performance.3 Based on the presentations given during the review, the scientific quality in the core competency is strong. There was excellent intramural and extramural work (e.g., through avenues such as the Multidisciplinary University Research Initiative [MURI]), demonstrating a thorough understanding of the field across many projects. The research builds on the existing literature and then goes beyond it. For example, the presentations started with work that validated the literature through replication studies and then moved to use cases that are more relevant to the Army Research Laboratory (ARL).
___________________
1 U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory (ARL), “Foundational Research Competencies and Core Competencies,” document for the Army Research Laboratory Technical Assessment Board, received March 30, 2022.
2 Ibid.
3 DEVCOM ARL, “Humans in Complex Systems Overview,” read-ahead document provided to the Panel on Assessment of Humans in Complex Systems for the November 2, 2022, meeting.
The “Multisensory Neural Information Processing for Direct Brain-Computer Communications” project considers how information is presented to military operators based on their mental states, and focuses on developing closed-loop systems that allow for a maximum information transfer rate to these operators to enhance decision accuracy.4 The project brought together a strong group of researchers and was a great example of composing a team with diverse skills and resources to work together and transfer skills across the laboratories. For example, there were groups with valuable and interesting data and groups with strong analysis capabilities. The general approach described in this project of using inferred mental states to drive audio and visual cues is innovative, promising, and practical. The approach can potentially be applied to drive the visualization and user interfaces of many different devices, vehicles, and systems, enhancing a user’s judgement, control, accuracy, comfort, confidence, etc., in tasks such as navigation, search, and object manipulation. This is a high-risk, high-reward, big-idea project, and significant progress has been made as evidenced by publications and continued grants (i.e., the move from MURIs to the U.S.-U.K. Bilateral Academic Research Initiative). The controls aspect was not as well developed, but this is the area that the team is actively working on.
This project leaves ample room for further exploration, and comprehensive user studies will be needed to relate the choice of multisensory cue (e.g., audio or visual) presented to the user to the inferred mental state (e.g., confusion, confidence, fatigue, or stress) in a principled, scientific manner. As presented, it was unclear what the optimal controller design is; for example, the goals of the controller design could include maximizing the chance of successfully accomplishing a task, increasing user satisfaction, reducing user frustration, or mitigating stress. Machine learning (ML) and optimization approaches, potentially developed by ML experts at ARL, could be employed to devise such optimal controller designs. Controller designs may also be informed by psychology studies; for example, studies of humans' capacity to handle visual and audio information under different stress levels. Physiological indicators (e.g., endocrine, autonomic state) may also be important to consider. This work is also connected to the neuroscience and neurotechnologies core competency, and it is important to ensure cross-pollination.
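To make the notion of an "optimal controller" over cue modalities concrete, the following is a minimal, illustrative Python sketch of expected-utility cue selection driven by an inferred mental-state estimate. The state probabilities, cue modalities, and utility weights are hypothetical placeholders that the user studies described above would need to supply; this is not the project's implementation.

```python
# Minimal sketch (not the project's implementation): selecting a multisensory
# cue modality from an inferred mental-state estimate by maximizing expected
# utility. State probabilities and utility weights are illustrative only.

# Inferred mental-state estimate, e.g., from physiological classifiers.
state_probs = {"fatigue": 0.55, "stress": 0.25, "confusion": 0.15, "nominal": 0.05}

# Hypothetical utilities: how well each cue modality supports the task
# under each mental state (in practice, estimated from user studies).
utility = {
    "audio":      {"fatigue": 0.7, "stress": 0.4, "confusion": 0.5, "nominal": 0.6},
    "visual":     {"fatigue": 0.4, "stress": 0.6, "confusion": 0.8, "nominal": 0.7},
    "multimodal": {"fatigue": 0.8, "stress": 0.5, "confusion": 0.7, "nominal": 0.6},
}

def select_cue(state_probs, utility):
    """Return the cue modality with the highest expected utility."""
    expected = {
        cue: sum(p * weights[state] for state, p in state_probs.items())
        for cue, weights in utility.items()
    }
    return max(expected, key=expected.get), expected

cue, scores = select_cue(state_probs, utility)
print(f"Selected cue: {cue}  (expected utilities: {scores})")
```

In practice, the utility table is exactly what the comprehensive user studies called for above would have to estimate, and the objective could be swapped for task success, satisfaction, or stress mitigation as the controller design matures.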
The “Psycho-Social Dynamics of Human Agent Teaming” project explored how team composition affects team performance with AI; what the ideal role of agents is in facilitating optimized team performance in mixed teams of humans and agents; and how perceptions of competence or warmth (e.g., congeniality) in the AI agent affect team acceptance. It also looked at how AI functions affect team neural and network “signatures” and which signatures best predict acceptance, performance, and viability.5 The project was also well done and generally considered to be quality work. Building an understanding of how people interact with agents is a highly complex space that will require going beyond testing A versus B, and it would be hard or impossible to systematically evaluate all possibilities in this way. A suggestion is to develop a plan for how to prioritize and select the most important areas to study to build a thorough understanding of specific scenarios. For example, there were suggestions to explore not just how an agent would intervene, but also when. This plan would also need to consider how to test the ecological validity of human–agent interactions. The underlying health and mental states of the participants may also be important to consider.
The “Aided Target Recognition Approaches that Complement Rather than Conflict with Human Visual Processing” project was also strong. The technical goal of this project is to determine whether an increased understanding of the underlying cognitive mechanisms of visual processing could inform the design
___________________
4 DEVCOM ARL, “HCxS TAB Bi-directional Human System Communication Core Competency,” read-ahead document provided to the Panel on Assessment of Humans in Complex Systems for the November 2, 2022, meeting.
5 N. Contractor, Northwestern University, “The Sequential Structural Signatures of Success in Human-Agent Teams,” Army Research Laboratory Award W911NF1920140, read-ahead material provided to the Panel on Assessment of Humans in Complex Systems for the November 2, 2022, meeting.
of effective aided target recognition systems.6 There were suggestions about additional areas to explore, such as the ecological validity of the experiments, how they deal with visual search tasks, and how false positives impact performance and overall decision-making time, especially if the user knows that there is a chance of false positives. It would be worthwhile to investigate how false positives and false negatives may affect trust and stress levels in using the system. Because false positives are unavoidable, it may be interesting to investigate the optimal trade-off between false positives and false negatives. In this regard, there may be good synergy between this research and research studying trust in human–machine teaming. Another suggestion is to leverage the ability to do parallel visual processing and enable parallel decisions as well (e.g., confirm all at once instead of selecting one at a time).
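To illustrate the kind of false positive/false negative trade-off analysis suggested here, below is a small, hypothetical Python sketch that picks an alert threshold by minimizing expected cost. The score distributions and the cost ratio are assumptions for illustration, not values from the project.

```python
import numpy as np

# Illustrative sketch (not the project's method): choose an aided target
# recognition alert threshold by minimizing expected cost, where the relative
# costs of false positives and false negatives are assumed inputs.
rng = np.random.default_rng(0)
scores_target = rng.normal(1.5, 1.0, 5000)     # detector scores on true targets
scores_clutter = rng.normal(0.0, 1.0, 20000)   # detector scores on non-targets

C_FP, C_FN = 1.0, 5.0                          # hypothetical cost ratio

thresholds = np.linspace(-3, 5, 401)
costs = []
for t in thresholds:
    fn_rate = np.mean(scores_target < t)       # missed targets
    fp_rate = np.mean(scores_clutter >= t)     # false alarms
    costs.append(C_FN * fn_rate + C_FP * fp_rate)

best = thresholds[int(np.argmin(costs))]
print(f"Cost-minimizing threshold: {best:.2f}")
```

The cost ratio itself is where the suggested trust and stress findings could feed back in, since the operational penalty of a false alarm versus a miss is ultimately a human-factors question.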
The “Opportunistic Sensing to Improve Aided Target Recognition Algorithms” demo was interesting, as the user may provide samples (e.g., positive and negative examples of targets) to train the recognition algorithms while using a gun. There are educational implications for this work: it would be important for the user to gain some basic understanding of ML before using such a system. Closing the loop to bring the opportunistic sensing to the technology is encouraged.
The demo on “Dynamic Information Representation in Complex Environments” was a nice example of work that starts by validating the literature and doing replication, but then also goes further to be more applicable to soldiers. The focus of this research was to maximize situation awareness and performance through intuitive and customized visualizations that represent complex, time-sensitive information in uncertain and dynamic environments in order to enhance and capitalize on human cognitive abilities. The goal of this work was to conduct device-agnostic research investigating how and when virtual information needs to be presented and to characterize techniques for adaptively displaying dynamic information in ways that improve human decision-making. This research was designed to (1) understand how people make decisions under uncertainty and (2) enable tailored information representations to meet soldier and mission requirements. The methodology incorporated soldier engagements to identify specific needs, as well as a literature review to identify relevant gaps in the cognitive sciences literature. The experiment used human subjects to explore how human perception and decision-making performance are affected by different techniques for representing uncertain information. Software (CAARE) was developed to create and modify dynamic and complex augmented reality (AR) research environments in real time. This software was made to integrate with ARL’s AURORA network and framework. Results from initial behavioral experiments leveraging CAARE can inform human performance trade-offs for different representations of complex and/or adaptive information in AR or virtual reality (VR). The results of eight experiments found no evidence that risk preferences reverse under time pressure. Additionally, the experiments’ findings suggest that with more exposure to the task and decision-related outcomes, classic framing effects may diminish. This may be related to the phenomenon known in the framing literature as the “description-experience gap.” This is important work that needs to be continued.
The demo on “Interpreting Gaze Behavior in Open-World Virtual Environment” was a high-quality project that builds on the Information for Mixed Squads (INFORMS) Laboratory infrastructure. It was seen as an area worthy of further investment. The build and audio quality were high. The direction this project is taking into map randomization is challenging, and it is important to ensure that it has the necessary expertise on hand. In particular, the project needs experts in eye movements. Connections with the game industry and gaming academia are suggested as there is work and expertise there that could be complementary. The VR industry is at the cutting edge of hand–eye coordination and gaze behavior. Neurology may also be a good discipline to include, as the cranial nerves affect eye movements.
An opportunity for further work includes ensuring the ecological validity of the research. While initial studies have to be done in controlled laboratory settings, further work will be needed to translate the
___________________
6 DEVCOM ARL, “HCxS TAB Bi-directional Human-Technology Communication Core Competency,” read-ahead document provided to the Panel on Assessment of Humans in Complex Systems for the November 2, 2022, meeting.
findings into the real world. Work on opportunistic sensing relies on humans having the expertise to train algorithms. This can be labor intensive for the humans and perhaps not the best use of their time and energy. In addition, as more systems become highly automated, there is a risk that humans will begin to lack the expertise needed to train the system. Finally, much work in AI involves humans performing a task that produces data to teach an algorithm. The implicit assumption is that the “good” training data are provided by a human expert at peak performance. It would be important to consider what happens when bad data are used for training, whether from a human performing the task poorly (suboptimal performance) or from a human intentionally producing bad data (adversarial) to cause the algorithm to perform poorly. Work in this area therefore also needs to consider suboptimal human performance, as well as adversarial human inputs, when training on data created by humans.
The estimating and predicting humans in complex systems core competency focuses on research to develop novel approaches to sense, interpret, and predict human state changes across time scales to enable effective and efficient technology adaptation and inference of operational environment context.7 The quality of research outlined by scientists and engineers in this core competency was impressive, and the goal was ambitious, timely, and challenging. The research presented was rigorous and well integrated across projects within the core. Current efforts are tightly focused on a few key areas, including how to leverage and extend open-source data sets to obtain large samples from many individuals performing challenging tasks over long time horizons, how to extract fundamental principles of human behavior from these data, and how to develop multi-scale models of human behavior that can predict performance in novel scenarios. Research in this core competency is moving toward prediction with limited data sets in real-world scenarios and the development of technology that can enable closed-loop, bidirectional control of human states (including males and females with varying levels of task-learning experience to ensure statistical rigor across gender) toward effective team behavior. ARL is uniquely positioned to tackle these challenges along with key extramural partners.
Within this core competency, innovative research was shared across many demonstrations, presentations, and posters that demonstrated cutting-edge approaches to important scientific questions focused on measuring, understanding, and predicting multi-scale changes in human behavior over time in natural settings. The relevance, breadth, and rigor of the science were on par with those of other leading research institutions (e.g., universities and institutes) in the United States and beyond. Project presentations demonstrated a clear understanding of current research conducted elsewhere, both within and adjacent to the field of big data science, and in many cases clearly communicated how ARL-based research is addressing unique gaps. The research methods employed across the projects within this competency were sound and, in some cases, cutting edge with respect to research in the field at large. Here are a few highlighted examples:
___________________
7 DEVCOM ARL, “Foundational Research Competencies and Core Competencies,” document for the Army Research Laboratory Technical Assessment Board, received March 30, 2022.
scanning. This demonstrated a very creative way to non-invasively collect a large data set (3.8 billion trials from 15.5 million people) in a pseudo-natural context with replicates recorded over long periods (months to years). This novel data set allowed the presenter and his team to examine performance changes over time on a per individual basis and begin to extract statistical models of these behaviors. An extension of this project included measures of the personality types of the participants and their self-reported energy level at game-play time, enabling the team to relate participant-specific attributes to performance over time.
approach to extract relatively simple attractor–repulsor grid models of human avoidance behavior over time in response to an AI with static, non-adaptive behavior. Combining task relevant data and inverse reinforcement learning is an interesting and potentially game changing approach toward applications that optimize AI agent teaming behaviors or use model-predictive anticipatory control strategies to optimize future behaviors.
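As an illustration of the kind of attractor–repulsor grid model described above (not the presented inverse reinforcement learning method itself), the following Python sketch treats the goal as an attractor and the AI agent's position as a repulsor, and lets a simulated human greedily descend the combined potential. The grid size and weights are arbitrary.

```python
import numpy as np

# Minimal illustration of an attractor-repulsor grid model of avoidance
# behavior (not the presented IRL method): the goal attracts, the AI agent's
# position repels, and a simulated human greedily descends the potential.
GRID = 15
goal = np.array([12, 12])
ai_agent = np.array([7, 7])
w_attract, w_repel = 1.0, 8.0

ys, xs = np.mgrid[0:GRID, 0:GRID]
dist_goal = np.hypot(ys - goal[0], xs - goal[1])
dist_ai = np.hypot(ys - ai_agent[0], xs - ai_agent[1])
potential = w_attract * dist_goal + w_repel / (1.0 + dist_ai)

pos = np.array([1, 1])
path = [(int(pos[0]), int(pos[1]))]
for _ in range(60):
    # Consider the 8-connected neighbors and move to the lowest potential.
    neighbors = [pos + np.array([dy, dx]) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
    neighbors = [n for n in neighbors if 0 <= n[0] < GRID and 0 <= n[1] < GRID]
    best = min(neighbors, key=lambda n: potential[n[0], n[1]])
    if potential[best[0], best[1]] >= potential[pos[0], pos[1]]:
        break                                   # stuck in a local minimum
    pos = best
    path.append((int(pos[0]), int(pos[1])))
    if tuple(path[-1]) == tuple(goal):
        break

print("Simulated avoidance path:", path)
```

In the presented work, models of roughly this form are inferred from human trajectory data rather than specified by hand, which is what makes the inverse reinforcement learning framing attractive.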
In terms of opportunities, the research projects presented demonstrate exemplary use of open-source data sets often generated by humans interacting with online gaming or simulation systems. It is suggested that the core competency transition to focus less on stereotyped, static virtual settings, which are good for simplifying complex systems, and more on in-lab proxies for ecologically relevant human–human and human–agent scenarios. One way to do this would be to merge with research in other core competency areas via shared resources within ARL, for example, the INFORMS Laboratory, which is set up to replicate aspects of human–human or human–autonomous agent teaming in a setting where deep physiology data could be acquired that would not be possible in the field (e.g., eye tracking, pupil dilation, heart rate, electromyography [EMG], and electroencephalogram [EEG]). Working with the INFORMS Laboratory would allow researchers to set up “in-lab” ecologically relevant contexts where big data sets could be acquired from human operators over time into a large human behavioral database. From those data, data-driven models could be generated and then inserted into simulations as more accurate agents of human behavior. In addition, longitudinal data from single individuals participating in INFORMS studies over time (e.g., many sessions) could give deep insight into how humans adapt to different teaming scenarios.
Additionally, research projects demonstrated data sets with large samples over long times. These are important data, but there is also an opportunity to study small samples (e.g., n = 1 or a few individuals) with enormous depth, that is, with measurements that are very high resolution across scales (e.g., whole brain to individual neuron) to better address adaptability of individuals (i.e., deep phenotyping: physiological measurements of behavior such as movement, heart rate, gaze direction, and non-invasive electrophysiology like EMG and EEG).
Most of the current research projects are moving away from comparing two time points (e.g., the beginning and end of a long data stream) toward model extraction from snapshots in time within available data sets. It is recommended to accelerate the transition from static toward dynamic models (i.e., models that more explicitly include the evolution of states over time). This will enable model-based prediction that can guide or nudge human behaviors in closed-loop feedback applications. The excellence of collaborative projects with external academic laboratories was noteworthy. It would be worthwhile to expand this approach by encouraging more “sitting-in” (i.e., embedding of ARL researchers in both academic and industry-based laboratories).
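As a toy illustration of what the recommended move to dynamic models could look like, the sketch below is not fit to any ARL data; it simply tracks a synthetic, slowly drifting latent human state (e.g., fatigue) with a scalar Kalman filter and produces the one-step-ahead predictions that a closed-loop "nudging" application would consume. All parameters are invented for illustration.

```python
import numpy as np

# Toy dynamic (state-evolution) model: a scalar linear-Gaussian state-space
# model of a latent human state tracked from a noisy proxy measurement.
rng = np.random.default_rng(1)
a, q, r = 0.98, 0.02, 0.20          # state persistence, process noise, measurement noise

# Simulate a slowly drifting latent state and noisy observations of it.
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + 0.02 + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(T)

# Kalman filter: estimate the current state and forecast one step ahead.
m, p = 0.0, 1.0                      # posterior mean and variance
predictions = []
for t in range(T):
    m_pred, p_pred = a * m + 0.02, a * a * p + q            # predict
    k = p_pred / (p_pred + r)                                # Kalman gain
    m, p = m_pred + k * (y[t] - m_pred), (1 - k) * p_pred    # update
    predictions.append(a * m + 0.02)                         # one-step-ahead forecast

print(f"Final filtered estimate: {m:.3f}, true state: {x[-1]:.3f}, "
      f"next-step prediction: {predictions[-1]:.3f}")
```

Real human-state dynamics would of course demand richer, likely nonlinear models, but the core benefit is the same: an explicit forecast that a feedback controller can act on before performance degrades.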
In terms of concerning challenges, nearly all projects in the core competency involved collecting, processing, evaluating, and managing vast amounts of data, in some cases in real time. Given this fact, to keep up with industry and leading academic institutions, it is recommended to invest in bolstering the data science expertise within the core. More scientists and engineers who work in the nascent field of big data will be needed: statisticians trained in cutting-edge techniques (e.g., nonlinear and Bayesian methods), computer scientists, and applied mathematicians who can develop and apply leading-edge ML algorithms that also account for human heterogeneity. Leaders in this area will help answer key questions such as when, where, and at which channel to look in a high-resolution, multi-scale data stream to make key predictions in a context-dependent manner.
Additionally, nearly all of the projects in this core competency use experimental frameworks that try to recreate real-world scenarios in a controlled laboratory or virtual environment. While this is an important and logical first step, it will eventually stifle progress. It is recommended to rapidly accelerate
toward acquiring data and studying behaviors in the field in ecologically relevant contexts. Operationally, this means more investment in lean, wearable sensor technology that can be used “in the wild.”
In terms of key new foci of research (i.e., double-down areas), two key opportunities were identified that could expand the impact of research in the core competency. First, research projects presented focused mostly, if not exclusively, on measuring and predicting cognitive behavioral states. Exploring similar questions and challenges in the physical domain (e.g., biomechanical or physiological performance) and the interaction between physical and cognitive domains could enable more expansive opportunities for human–human and human–agent performance. There is an opportunity to add more experts in learning or adaptation, motor behavior, economics of movement, and in model-predictive optimal control.
Finally, most of the research projects in this core competency have the potential for translation to new technologies that could enhance human performance. It will be essential, then, to invest in projects focused on the adoption of these new technologies. A hint of this direction emerged in the poster titled “Technically Savvy Soldiers,” but it needs more emphasis. For example, what is the best way to coach a naive user of newer forms of advanced technology? What attributes are necessary for a technology to have in order to be accepted by end users? Addressing these questions will require expertise in ergonomics, product development, and coaching and learning science, and may be facilitated by the incorporation of wearable sensor technology.
The human-guided system adaptation core competency focuses on research underpinning novel approaches for soldiers to effectively and efficiently guide the adaptation of intelligent technologies to create new or upgraded human-system capabilities.8 The human-guided system adaptation team is leading the field in terms of algorithms, AI, simulation, integration, and implementation of human-guided mechatronic systems. This is exemplified by the success of the Minecraft open-ended AI platform, which offers insight into using ML advances to make single-agent decisions in uncertain environments, and by the project’s impressive accomplishment of winning a highly competitive competition. Minecraft as a platform is a great testbed due to its high dimensionality and more subjective goals. Winning this competition is compelling evidence of being a leader in this field and of being state of the art. It was impressive to see how quickly this team integrated foundation models and their approach to open-ended AI. Their use of games is the right approach. Beyond this specific example, this core competency is performing top-tier university-level research and picking the right extramural partners to supplement these efforts. The parsimonious use of data for “reducing the performance gap” by lowering the required training data set from 1 billion to 100 million data points is an example of excellent extramural partnerships.
The focus on gaming as a way to leverage advances in the commercial space, and the use of Minecraft, Super MegaTeTRIS, and Airport Scanner, were very smart choices. The Drone Control Laboratory was an impressive demonstration of leveraging advanced simulation tools, with a leading example being the prediction of rollovers of military vehicles across a variety of terrain architectures and mechanical properties. The extramural partnership for embedded system estimation and control of drones was an exciting direction of research.
The speculative design work by the extramural partner was also very exciting. The production of more than two games per year from that effort is encouraged. Additionally, the work being done on closing the performance gap by ARL’s extramural partners is of high quality. The amount of data currently needed for training is a huge and presently unsustainable problem, and the work being
___________________
8 Ibid.
done in this competency is high quality and focused on this effort. Reducing the training data requirement from 1 billion to 0.1 billion points is important and a quantitative indicator that ARL is a leader in this field.
The “How Do We Design the Games” workshop was a good idea, as was the “Super MegaTeTRIS” project, and both of these efforts need to be doubled down on. The “Super MegaTeTRIS” project demonstrates how AI can augment human agents to perform multiple functions simultaneously, which appears to be an important capability for ARL to pursue. More globally, the “How Do We Design the Games” workshop aims to think beyond the current level of simulation building to create more realistic environments and more optimal solutions to challenges. In combination, future simulations could perhaps be used to reduce casualties and present new solutions, and the multiplicative effect of human–AI teaming demonstrated by the “Super MegaTeTRIS” project could be implemented in simulation to provide these new solutions with greater lethality and reduced casualties.
The “MEERKAT Intelligent and Interactive Data Structures for Complex Data Types” demo was also very forward-thinking, and it will be interesting to see how its investigators quantify success in this effort. The continuously adaptable exobots demo (presentation name: “Continuously Adaptive Human-in-the-Loop Exobot”) was also very striking and a good move toward embodiment research and closing the loop on human guided systems from environmental input to agent reaction.
The Drone Control Laboratory demo and posters, the rollover simulations, and the extramural partner’s integration of efficient algorithms for hardware-based, multi-agent teaming were all worthwhile efforts. Such efforts are important for closing the loop on the human guided systems from environment input to multi-agent reaction. Other work on parsimonious neural networks may be able to aid the extramural partner’s effort.9
While not as much in the domain of human-guided system adaptation, the head mimic for establishing ground truth for some of the sensing modalities is a highly noteworthy effort that needs to be doubled down on. It could be expanded into more accurate body phantoms. This is a low-level input into the control loop that allows exquisite variable control for sensitivity analysis that will be important in real environments, with the “Tech Savvy Soldiers” program being the output that closes the loop. The “Tech Savvy Soldiers” program is an excellent example of a testbed that demonstrates the effectiveness of the AI tools developed in the human-guided systems research portfolio; this program also needs to be doubled down on.
The complexity of ARL’s task is enormous. The breadth of the human-guided system adaptation efforts is quite large, which also introduces challenges. There are likely efficiency gains to be made from closing the loop earlier on these ideas. Opportunities for improvement are not in the AI space itself, but rather include (1) directly involving and using more data science experts to strengthen ARL’s capacity to perform more statistical analysis; (2) broadening the diversity of human participants when gathering test data; and (3) testing the efficacy of AI agents and other software systems in real hardware earlier in the development cycle and placing more importance on hardware development and qualification in parallel with software efforts. This focus on hardware will help further differentiate this core competency from efforts being performed in academia or industry, where institutions are either resource limited or focused on non-hardware integration challenges. Specific examples are discussed in more detail below.
The use of a Minecraft three-dimensional (3D) environment for experimenting with and showcasing the AI model is impressive. The solution space is complex and challenging, as the degrees of freedom are high and the agent may consider many possible actions (e.g., moving the agent, adding or removing geometry, and adding different types of objects) that can be executed in the 3D space. This problem is clearly more challenging than existing two-dimensional game platforms (e.g., Pong) due to its high
___________________
9 See the Neural Networks for Microcontrollers database at https://github.com/correlllab/nn4mc_cpp, accessed December 5, 2023.
dimensionality and complexity. If the AI-trained agent can operate well in the Minecraft 3D environment, it represents a meaningful step toward enabling it to operate in real 3D environments.
The proposed approach is novel in that it incorporates human preferences in training the AI model. This methodology aligns well with the theme of human-guided system adaptation and would likely find applications in other practical scenarios. That said, it is uncertain whether there is a risk of the AI model overfitting to human preferences and what the implications of that may be; it would be important to consider how such a risk could be mitigated. There is some uncertainty when it comes to evaluating human–machine teaming, as it can be unclear what is meant by good or bad teamwork. It would therefore be helpful to define metrics for characterizing the quality of human–machine teaming, which could serve as part of the goals for optimization as well as the basis for evaluating and comparing different human–machine teaming approaches. Perhaps inspiration for such metrics can be drawn from human–human teaming in sports. Clarifying the complexity of the solution space and the list of possible actions that the agent can undertake would better illustrate the power of the proposed approach. It would also be interesting to investigate approaches to adapt the AI to better collaborate with different human users.
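One common way to incorporate human preferences into training, shown here purely as an illustrative sketch and not necessarily the method used in this project, is a Bradley–Terry style pairwise preference model in which a reward function is fit to human choices between pairs of trajectories. The features and preference labels below are synthetic.

```python
import numpy as np

# Illustrative preference-learning sketch (not necessarily the project's
# method): fit reward weights to pairwise human preferences over trajectories
# using a Bradley-Terry (logistic) model. All data here are synthetic.
rng = np.random.default_rng(2)
n_pairs, n_features = 500, 4
feat_a = rng.normal(size=(n_pairs, n_features))   # features of trajectory A
feat_b = rng.normal(size=(n_pairs, n_features))   # features of trajectory B

true_w = np.array([1.0, -0.5, 0.0, 2.0])          # hidden "human" preference weights
logits = (feat_a - feat_b) @ true_w
prefers_a = (rng.random(n_pairs) < 1 / (1 + np.exp(-logits))).astype(float)

# Fit reward weights by gradient ascent on the preference log-likelihood.
w = np.zeros(n_features)
lr = 0.1
for _ in range(2000):
    p = 1 / (1 + np.exp(-(feat_a - feat_b) @ w))  # P(human prefers A)
    grad = (feat_a - feat_b).T @ (prefers_a - p) / n_pairs
    w += lr * grad

print("Recovered preference weights:", np.round(w, 2))
```

Regularizing or limiting the capacity of such a preference model is one standard way to reduce the risk, noted above, of overfitting to a small set of human judgments.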
“Grace’s Quarter” as well as the use of imperfect terrain in the negative obstacle rollover demo were also both impressive and interesting efforts. This work is worth doubling down—however, more consideration could also be made in this project for the human subjects that are being sampled—specifically, it would be important to broaden the recruiting effort to enhance the pool of subjects in terms of diversity.
The competition in which a human user plays Super MegaTeTRIS (i.e., 10 Tetris games) simultaneously in collaboration with an AI is very creative and inspiring. Such a scenario is extremely useful and promising for investigating how a human user collaborates with an AI in control-heavy and decision-making tasks. The findings and resulting methodologies would likely find good applications in many practical scenarios where a human user needs to collaborate with an AI to control multiple machines or entities to solve problems at the same time and at scale.
It would be helpful to perform and present an in-depth analysis of how a human user collaborates with the AI during the Super MegaTeTRIS competition. For example, it would be useful to investigate when a human user takes control of the program in lieu of the AI and to determine why the human user takes control rather than relying on the AI. It would also be helpful to investigate the user’s experiences (e.g., trust, stress, and confidence) in such human–AI collaboration scenarios. Such investigations would inform future research on enhancing human–AI collaboration tools, algorithms, and user interfaces so as to improve overall performance and user experience. The associated hackathon, where students tried to understand how the system worked, was a nice approach to train people to think about this space; however, in this case, it would also be important to try to increase the demographic diversity of participants.
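The kind of takeover analysis suggested above could start from something as simple as the following Python sketch, which computes how often the human seizes control from the AI and what the board looked like just beforehand. The log fields ("controller", "stack_height") are hypothetical stand-ins, not the project's actual data schema.

```python
import numpy as np

# Hypothetical interaction log: at each step, who is in control and a simple
# proxy for board risk. Real logs would carry far richer context.
rng = np.random.default_rng(3)
n = 2000
log = {
    "controller": rng.choice(["ai", "human"], size=n, p=[0.8, 0.2]),
    "stack_height": rng.integers(0, 20, size=n),
}

controller = log["controller"]
takeovers = (controller[1:] == "human") & (controller[:-1] == "ai")

intervention_rate = takeovers.mean()
height_before_takeover = log["stack_height"][:-1][takeovers].mean()
height_otherwise = log["stack_height"][:-1][~takeovers].mean()

# With real logs, comparing these two means (or richer features) would reveal
# the board conditions that tend to precede a human takeover.
print(f"Human takeover rate per step: {intervention_rate:.3f}")
print(f"Mean stack height just before takeover: {height_before_takeover:.1f} "
      f"vs. otherwise: {height_otherwise:.1f}")
```

Paired with self-reported trust, stress, and confidence measures, this kind of event-level log analysis is what would let the team explain, not just count, human interventions.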
The “MEERKAT Intelligent and Interactive Data Structures for Complex Data Types” program seeks to answer the question of how to design data structures and data formats that inspire AI learning and thus require human supervision to help the AI learn. The problems that need solutions are (1) that data structures able to handle heterogeneous data sets are not in a standard format; and (2) how an AI may be trained to understand whether art is abstract or not, a question that requires the quantification of abstract art from numerous art galleries. The use of data structures to ingest the painting imagery, followed by the construction of a foundational AI model without the typical modeling approaches, was novel. The use of people to tell the AI whether it was right or wrong and to iterate over time brings into question the scalability of this solution. It is difficult to quantify success for this project, and partnering early and often with those who have domain expertise in these data types is suggested.
Finally, the “Continuously Adaptive Human-in-the-Loop Exobot” project was very exciting. The project inspiration was the acknowledgement that humans and systems need to rapidly and continuously adapt to each other and the changing dynamics of the battlefield for optimal performance, and so, there is a need to understand the mechanisms and factors that can predict and shape mutual and stable adaptation between humans and adaptive systems across time. Thus, the long-range objective of
this project was to incorporate time-varying human dynamics in agent controls to explore using human physio-based state estimation to optimize human–agent co-adaptation in complex, dynamic scenarios. To this end, researchers at ARL used a mobile brain body imaging approach to gain a better understanding of how humans adapt to intelligent systems. They then planned to identify signals that have added benefit to the current state of the art (metabolic energy and kinematics) for humans in order to optimize the assistance of intelligent physical augmentation systems (e.g., active exoskeletons).10 It was surprising that this was the only effort of its kind in this space. Similar AI efforts could be attempted more often.
Generally, the scientific efforts of this core competency were found to be innovative and leading the field of AI. There are some concerns, however, that these projects are not being tested in real systems (e.g., systems that incorporate the actual targets that these human–agent teaming efforts are trying to address) early enough, and therefore errors could be compounding unnecessarily. At times, there was also a lack of clarity about the specific and measurable accomplishments that qualify success, and so more quantitative targets could be applied. For example, in one case, rollover simulations were performed in silico, but validation of the simulations appears not to have been performed. Testing on real vehicles and the same type of terrain that the researchers had simulated, before moving to more complex terrain, could be helpful. Another example is the use of Super MegaTeTRIS, with the implication that a single human could coordinate multiple hardware agents in the field. It is important to ask whether there are plans to translate this game into hardware that uses systems relevant to the Army. Determining the bottlenecks, such as sources of lag or data outages, in concert with the requirements of these systems would help to accurately identify where the next tranche of resources should be invested. As noted, the research portfolio could attempt to involve more hardware development and integration, as well as human subjects, more frequently and earlier in the development process. Failure modes could also be explored earlier, either in software or, even earlier, in hardware. Additionally, providing environmental inputs and the adaptive physical outputs could also happen earlier in the project process; an example is the rollover prediction simulator, which could be tested under real conditions (e.g., mud) at Grace’s Quarter at Aberdeen Proving Ground.
ARL’s human system team interaction core competency program is clearly in the lead, nationally if not internationally, in research and technology for human–human and human–machine integration and teaming. The research aimed specifically at developing a better understanding of teams operating in complex environments is exceptional. The human system team interaction team leads have developed metrics for qualities such as team cohesion, trust, warmth, force-multiplication factor, and collective knowledge development, as well as field exploitation of these metrics. They have also tackled the very challenging problem of measuring the effectiveness of teaching machines to teach and have developed cutting-edge research methodologies and findings. A promising takeaway from the presentations is that ARL scientists are focused on research that quickly supports the development and fielding of technology and techniques that facilitate faster and better decisions in very complex environments. Still, there are some opportunities to improve the statistical accuracy of this core competency, which are discussed later in this chapter.
Measuring individual performance and its determinants is challenging, even in the best of circumstances. Studying teams of humans (human-to-human teams [HHTs]) adds an additional layer of complexity to this line of inquiry, making research into team performance and team effectiveness especially complex and multifaceted. When automated agents are added into that mix, the problem space
___________________
10 B.J. Courtney, ARL, “A Continuously Adaptive Human-in-Loop Exoboot,” in “ARL Humans in Complex Systems Competency TAB Review,” read-ahead material provided to the Panel on Assessment of Humans in Complex Systems for the November 2, 2022, meeting.
of human-automated hybrid teams (HATs) expands exponentially.11 The ARL organization has fully embraced this HHT and HAT problem space, procuring the resources and expertise necessary to tackle aspects of this research.
The highlight of the human system team interaction portion of the November site visit was the ARL project titled “Information for Mixed Squads” (INFORMS Laboratory). With the ability to simulate multiple vehicles with varying squad sizes and command structures, this demonstration underscored recent advances in team research made by ARL. This demonstration of two teams of humans performing coordinated tasks simultaneously was especially impressive, requiring significant technical and computer programming expertise.
The two After-Action Review (AAR) systems that were showcased during and following the INFORMS Laboratory multi-team demonstration were also outstanding, representing significant progress in the challenging task of developing empirically derived and objective metrics for team behaviors, performance, and effectiveness. These two playback systems allow analysts and observers to monitor team behaviors in real time as well as to conduct post hoc analyses to glean insights into team behaviors, communications, and interactions. Due to the critical need to understand communication patterns in HHTs and HATs, the work on speech pattern analysis presented by an ARL early-career researcher is another crucial aspect of team research. Speech communication analysis will continue to be important to the human system team interaction efforts both at ARL and across the research community. The presentation “Cohesion Metrics for Human Autonomy Teaming” was also excellent; it was given by an early-career ARL scientist who addressed ways of measuring human–autonomy teams, a critical gap in team performance research. Early-career scientists sometimes struggle to contribute relevant research that could immediately impact the Department of Defense, relying instead on extensions of their graduate school studies. The work from this investigator is both timely and challenging, and it was encouraging to see the progress and ideas of this early-career scientist. The presentation “Adaptation for Human Agent Teams,” given by an extramural senior scientist, was also very insightful given the burgeoning field of human–agent teaming.
It was not entirely clear from the interactions at the November meeting how closely the extramural faculty are engaged with ARL personnel who are studying related fields. When questioned, they seemed unaware of the crossover and linkages among their various studies. The ambiguity surrounding these interactions raised the concern that extramural performers may not be interacting as closely as they could with ARL employees, missing an opportunity for cross-fertilization of ideas and methods. If this is true, it is a situation that, if addressed correctly, could strengthen overall ARL scientific expertise. Additionally, further leveraging extramural expertise—whether through seminars, virtual coaching, site visits, or internships and externships—could be extremely beneficial to ARL.
Another opportunity was identified through the presentation, “Project Vitreous Helmet Mounted Display Technologies for the Next Generation Combat Vehicle” (a demo). Simulator sickness is a well-known phenomenon and it was concerning that the researcher appeared to be unaware of the magnitude of the problem. There are other laboratories that have been working in this area for decades, and ARL could benefit from working with the Air Force Research Laboratory in Dayton, Ohio, which studies simulator sickness.
Finally, ARL has made good use of “field introductory” teams for transitioning experimental approaches to the field. Doubling down on interaction with field-savvy operators whose wisdom is invaluable, particularly when it comes to fielding techniques and technologies to soldiers in highly complex environments, is encouraged.
___________________
11 J.E. Mathieu, P.T. Gallagher, M.A. Domingo, and E.A. Klock, 2019, “Embracing Complexity: Reviewing the Past Decade of Team Effectiveness Research,” Annual Review of Organizational Psychology and Organizational Behavior 6:17–46, https://doi.org/10.1146/annurev-orgpsych-012218-015106.
The hybrid human–technology intelligence core competency focuses on antidisciplinary research aimed at uncovering foundational hybrid approaches to enhance human–system teams in Multi-Domain Operations.12 This core competency area reflects new and emerging research at ARL; work in this core competency had begun only 30 days prior to this review (November 2–4, 2022). Its current trajectory shows a high likelihood of strong scientific contributions. The last decade has seen significant advances in the use of ML to support cross-domain, creative, and generative AI,13 and studying how to incorporate such advances into human decision-making, as well as how to design new technologies to better foster improved decision-making, is a growth area that ARL stands to lead.
The leadership for this core competency aims for an antidisciplinary approach to research, bringing in lessons from research across multiple fields, disciplines, and domains including human–computer interaction, psychology, AI, and design. The emerging work is well situated in these fields, and proposed future work promises to embrace this antidisciplinary approach via bringing together leaders in these different fields into conversation to inform future research.
Though it is hard to tell at this early stage, the work does not currently appear to engage with contemporary work in the ethics of AI; this is especially critical given how important trust in decision-making and the soundness of decisions are to hybrid human–machine thinking. A prominent publication venue worth investigating to improve engagement with this area of research is the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT), which publishes annual proceedings.14
The methods proposed for research in the core competency area are sound and draw on a set of assumptions about both future intelligent technologies and future human–machine teaming that are well grounded in the literature. In alignment with the antidisciplinary approach this team strives toward, they could consider an embedded ethics approach to engage with continual critical feedback.15
In relation to the connection of this core competency to other research lines, a strength of the humans in complex systems competency is the alignment and cross-pollination between projects across its core competencies. As an emerging core competency area for ARL, the hybrid human–technology intelligence team will be well served by frequent connection and collaboration with teams that engage with hybrid human machine–technology intelligence (HMMI) in their work. Example projects that connect to HMMI include:
___________________
12 DEVCOM ARL, “Foundational Research Competencies and Core Competencies” document for the Army Research Laboratory Technical Assessment Board, received March 30, 2022.
13 See R. Nakano, J. Hilton, S. Balaji, et al., 2021, “WebGPT: Browser-Assisted Question Answering with Human Feedback,” Cornell University, https://arxiv.org/abs/2112.09332; and R. Bommasani, D.A. Hudson, E. Adeli, et al., 2021, “On the Opportunities and Risks of Foundation Models,” Cornell University, https://arxiv.org/abs/2108.07258.
14 Association for Computing Machinery (ACM), “Conference on Fairness, Accountability, and Transparency (ACM FAccT),” https://facctconference.org; conference proceedings archived at portal.acm.org.
15 S. McLennan, A. Fiske, L.A. Celi, R. Müller, J. Harder, K. Ritt, S. Haddadin, and A. Buyx, 2020, “An Embedded Ethics Approach for AI Development,” Nature Machine Intelligence 2:488–490, https://www.nature.com/articles/s42256-020-0214-1.
Many of the same people in leadership of the core competency are engaged with work in these other core competency areas. Formalizing the connections between the core competencies and offering more opportunities for researchers engaged in these areas to connect with each other would help strengthen this core competency area.
Overall, the balance between intramural and extramural research is strong, and the integration of the two is an exciting direction for ARL to take. There is great potential for the core competency to double down on the antidisciplinary approach to research, and potentially to transition toward a multidisciplinary or transdisciplinary model that permits the teams to intentionally expand the set of disciplines they draw from and co-inform. Potential disciplinary expansion areas include game design and development (key relevant conference communities include the Artificial Intelligence and Interactive Digital Entertainment conference; the Institute of Electrical and Electronics Engineers Conference on Games16; the ACM Special Interest Group on Computer-Human Interaction [ACM SIGCHI] Annual Symposium on Computer-Human Interaction; and the International Conference on the Foundations of Digital Games) and computational creativity (key conferences include the International Conference on Computational Creativity; the ACM Conference on Creativity and Cognition; and the ACM International Conference on Tangible and Embedded Interaction).17
High-quality research was demonstrated across all talks and presentations. The work presented was equivalent to, or in many cases better than, that of leading research institutions in the United States. A broad understanding of science and research conducted elsewhere was also demonstrated by the research activities presented as part of this core competency. The methods and approaches presented were also on par with those of leading research institutes in terms of technical quality and innovation. Below are a few highlighted examples:
___________________
16 Institute of Electrical and Electronics Engineers, “IEEE Symposium on Computational Intelligence and Games, CIG,” https://ieee-cog.org/2022, accessed December 5, 2023.
17 ACM, “Sigchi—Upcoming Conferences,” https://sigchi.org/conferences/upcoming-conferences, accessed December 4, 2023.
Second, the work on developing novel neuroscience-based AI methods was very impressive. The presenter specifically focused on modeling computational principles for adaptive learning and how those can be used in the future to improve human performance (especially in cases where the environment is highly variable and unpredictable).
In terms of opportunities, although the presented research is already exemplary and on par with that of major research institutions, more focus on “closing the loop” is suggested for higher impact at the individual level. That is, in addition to conducting basic science research, more emphasis could be placed on applying the knowledge learned in real-world ecological settings, which might further accelerate the research as well as its impact.
In a similar vein, focusing on deriving personalized results could also provide an efficient approach. Just as precision medicine aims to provide personalized interventions to patients, precision neuroscience approaches could provide precise, actionable insights at the individual level (as opposed to group-based neuroscience studies). Precision neuroscience approaches include (1) using dense sampling of individuals (e.g., n = 1 studies), for example, for better characterization of state changes; (2) using individual-based neuroanatomical features, such as brain parcellations or EEG power bands calibrated to individual participants; and (3) modeling changes in the environment in addition to changes in brain states to better account for signal variations; for example, clenching teeth due to environmental stress could provide a signal of interest that might otherwise be discarded during artifact correction.
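As an illustration of item (2), the sketch below locates an individual alpha peak in a synthetic EEG recording and defines a participant-calibrated alpha band around it rather than using fixed canonical limits. The signal, sampling rate, and band offsets are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

# Illustrative individual calibration of an EEG band: find the participant's
# alpha peak and define the band around it. Synthetic data only.
fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
# Synthetic resting EEG with an individual alpha peak near 11.3 Hz.
eeg = np.sin(2 * np.pi * 11.3 * t) + 0.8 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

search = (freqs >= 7) & (freqs <= 13)                 # canonical alpha range
iaf = freqs[search][np.argmax(psd[search])]           # individual alpha frequency
band = (iaf - 2, iaf + 2)                             # participant-calibrated band

band_mask = (freqs >= band[0]) & (freqs <= band[1])
alpha_power = psd[band_mask].sum() * (freqs[1] - freqs[0])   # approximate band power
print(f"Individual alpha frequency: {iaf:.2f} Hz, "
      f"calibrated band: ({band[0]:.1f}-{band[1]:.1f} Hz), "
      f"alpha power: {alpha_power:.3f}")
```

The same per-participant calibration logic extends to parcellation-based features and to dense-sampling (n = 1) designs in which the calibration itself is tracked over many sessions.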
In terms of challenges, there were a few concerns about the balance between secondary analysis of existing data and autonomy over experimental design. It was observed that many projects were based on data collected outside of ARL and used in a secondary data analytics approach. Such reuse of data is highly efficient and is often considered a starting point for developing insights that can later be tested by collecting new data. Although some data are being collected at ARL, many data sets were observed to have been collected at collaborating research institutions. It was therefore unclear whether ARL research teams have a say in how data were collected or how experiments were designed. For higher impact, ARL research teams could have more autonomy in designing experiments and data collection, if not already available.
In terms of promising new areas of research as well as on doubling down on existing areas of research, three key areas were noticed for higher impact and potential:
High-caliber research was evident in all presentations that fell outside of the core competencies listed above. These presentations included (1) “Uncertainty Integration Across Visualizations and Contexts,” (2) “Blocking EMS with Advanced Fabrics: A Systematic Evaluation,” (3) “Interfacing Bio-amplitude Neuromorphic Devices,” (4) “Suicide Prevention through Social Network Influence,” and (5) “Dynamics of Cooperation and Collective Intelligence for Human-System Team Interactions.”
The presentation “Suicide Prevention through Social Network Influence” is noteworthy because the scientific approaches and the intended translation and application systems developed by the intramural Army and extramural teams are as sophisticated as, or more advanced than, current non-Army organizational approaches. The Army’s science-based campaign has the potential to save many lives through what could be termed “precision suicide prevention” and, moreover, to attenuate the problematic parts of a soldier’s life and experience that appear to lead the soldier toward even a first contemplation of suicide.
The research portfolio presented by the ARL investigators demonstrated comprehensive and deep mastery of the principal scientific objectives that the research community is pursuing. The research teams
appear to be well connected with their peer communities, both within ARL and with scientific colleagues outside of ARL. Below is a discussion of the scientific expertise within each core competency.
The scientific expertise in the bidirectional human–system communication core competency is strong, both internally and among its extramural partners. As noted above, identifying key areas that are important to the core competency and ensuring that expertise in them is maintained and developed is essential. Staying connected with the academic community and industry is important for many of the research areas, including eye tracking, learning sciences, virtual environment development, and ML, as well as for access to domain experts.
The portfolio of scientific expertise for the estimating and predicting humans in complex systems core competency area is excellent, with highly qualified project leaders and technically competent staff scientists and engineers appropriate for the 2022 portfolio of research. The profile of expertise is especially bolstered by collaborations with leading scientists at leading extramural institutions. As new research foci develop, the core competency will need to continue to adapt to demands of new projects by recruiting new intramural and extramural talent and leveraging additional facilities (discussed in detail in the next section). As previously mentioned, the inclusion of data science expertise is recommended.
The human-guided system adaptation core competency has chosen strong team leads. ARL is making good decisions in terms of their recruitment of scientific experts. Additional expertise in embedded systems and mechatronic systems may be useful to investigate, as the system-level implementation of human-guided system adaptation is complex enough that errors will compound without early real-world testing, and the efficacy of the advances will be more meaningful if tested with humans early. If the decision is made to close the loop on hardware earlier (as described above), a makerspace with technical experts would accelerate these developments. Furthermore, this core competency could also prioritize the incorporation of data scientists to work on statistical analysis of data sets.
The expertise within the human-system team interactions core competency is very good, and it was observed that ARL has equipped itself exceptionally well to explore team performance and team effectiveness. Still, it was not clear that the teams currently have the in-house research design and statistical expertise to fully exploit their state-of-the-art laboratory tools. It was noted that some statistical tests used by ARL presenters were not appropriate for the categories of data, and the analysis would have benefitted from consultation with a statistician. By working more closely with and under the tutelage of their extramural partners, some of this expertise could perhaps be grown locally. More expertise could also be developed by expanding the educational and training opportunities focused on statistical analysis and data analysis for current staff. Adding interdisciplinary teams of psychologists and behavioral scientists, as well as team members with study design expertise, would be a major asset to the ARL organization in future scientific endeavors.
The team within the hybrid human–technology intelligence core competency is highly qualified to conduct research in this area. They have the appropriate disciplinary background and experience with leading multidisciplinary research efforts. The approach of embedding ARL technical staff into research laboratories across the nation is a major strength, permitting the free exchange of ideas and is in alignment with ARL’s antidisciplinary research approach.
The neuroscience and neurotechnologies core competency had impressively qualified teams and extramural scientists. The cross-disciplinary nature of the teams across the board was very positive. The team members’ expertise was quite broad, which seemed to allow complementary synergy among the members. As previously mentioned, it was observed that many projects were based on data collected outside of ARL and used in a secondary data analytics approach. Although some data are being collected at ARL, many data sets were observed to have been collected at collaborating research institutions. It was therefore unclear whether ARL research teams have a say in how data were collected or how the experiments were designed. For higher impact, ARL research teams could have more autonomy in designing experiments and data collection, if not already available.
Army research facilities were considered from two vantage points: (1) government laboratories and other spaces and (2) university or other formal laboratory settings. In both domains, laboratory sophistication appeared adequate, including computational and communications facilities. The academic and contractor laboratories were not inspected; however, based on communication with ARL during the review, there appears to be effective and collaborative use of laboratories at the universities (including Institutional Review Board [IRB] approvals, etc.). Below is a discussion of the facilities and resources in relation to each core competency, with suggestions on how they may be supported and bolstered.
The facilities and resources to support the bidirectional human–system communication core competency are suitable for the work being done. A key challenge moving forward is the maintenance of computing resources and technology platforms that are essential for this work. The ability to manage and update computing platforms will require investment in state-of-the-art digital infrastructure and in technical support personnel, so that infrastructure does not become a bottleneck in the development of research testbeds. This can be challenging due to security constraints, but it is essential to strengthen the digital infrastructure to serve the needs of the organization.
The estimating and predicting humans in complex systems core competency benefits from excellent access to and use of digital infrastructure supported by information technology (e.g., modified open-source online gaming interfaces), as well as equipment and facilities (e.g., laboratories within ARL and the laboratories of external collaborators conducting IRB-approved human participants studies) that are appropriate for addressing the current portfolio of research. The expertise of staff is well matched to the capabilities of the facilities and the goals of current projects. This core competency demonstrated an exemplary approach to leveraging existing open-source digital infrastructure and gaming platforms for modified reuse.
There is an opportunity for this core competency moving forward to utilize state-of-the-art facilities within ARL to generate unique, home-grown big data sets in human–human and human–agent adaptive teaming scenarios (e.g., from the INFORMS Laboratory). In addition to the INFORMS Laboratory infrastructure, the facilities highlighted by other core competencies could provide platforms for data sets in settings closer to real-world scenarios. For example, within the bidirectional human–system communication core competency, the demonstration “Interpreting Gaze Behavior in Open-World Virtual Environment” highlighted the capability to track and interpret head-mounted gaze data in real time from operators participating in human–human team adaptive tasks. This could be coupled with the research described above in the presentation “Modeling Collision Avoidance Decisions by a Simulated Human-AI Team with Inverse Reinforcement Learning.”
As previously mentioned, given the goal of developing data-driven approaches to modeling and predicting adaptive human behavior, it is advised to invest more in bolstering resources that support research in modern data science. This includes recruiting more experts in advanced statistical approaches that go beyond standard linear multi-factor regression (e.g., Bayesian approaches) and developing state-of-the-art machine learning (ML) tools and computing infrastructure that enable rapid, parallel computation.
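To illustrate the kind of analysis this recommendation points toward, the sketch below is purely hypothetical and is not drawn from any ARL project, data set, or tool. It contrasts an ordinary least-squares fit with a conjugate Bayesian treatment of the same simple linear model, in which a Gaussian prior yields a full posterior distribution over the coefficients rather than point estimates; the synthetic data, noise level, and prior scale are all assumed for illustration.

```python
import numpy as np

# Illustrative only: synthetic data standing in for a behavioral measure
# predicted from two task features; this is not ARL data.
rng = np.random.default_rng(0)
n, k = 200, 2
X = rng.normal(size=(n, k))
true_beta = np.array([0.8, -0.3])
sigma = 0.5                      # assumed known observation noise (std. dev.)
y = X @ true_beta + sigma * rng.normal(size=n)

# Ordinary least squares gives point estimates only.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Conjugate Bayesian linear regression with a zero-mean Gaussian prior
# beta ~ N(0, tau^2 I). The posterior over beta is Gaussian with:
#   Sigma_post = (X^T X / sigma^2 + I / tau^2)^{-1}
#   mu_post    = Sigma_post X^T y / sigma^2
tau = 1.0
Sigma_post = np.linalg.inv(X.T @ X / sigma**2 + np.eye(k) / tau**2)
mu_post = Sigma_post @ X.T @ y / sigma**2

print("OLS point estimate: ", np.round(beta_ols, 3))
print("Posterior mean:     ", np.round(mu_post, 3))
print("Posterior std. dev.:", np.round(np.sqrt(np.diag(Sigma_post)), 3))
```

In practice, the same posterior-centric workflow extends to the nonlinear and hierarchical models mentioned above, typically via probabilistic programming tools rather than closed-form updates.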
For the human-guided system adaptation core competency, the facilities toured were excellent, and it was exciting to hear about Grace’s Quarter as a hardware testing facility. A makerspace with manufacturing experts in embedded systems and hardware, one that is more streamlined to use, would encourage closing the hardware loop earlier and would also benefit this core competency.
To meet the needs of the human-system team interactions core competency, it was observed that the facilities and resources available to the individuals working on team research at ARL were outstanding, with the exception of the INFORMS Laboratory, which, as described above, does not have a convenient or uncomplicated means of updating the software in its large bank of computers and servers due to local computer security restrictions. This problem poses a significant hindrance to the programmers and research scientists working in the INFORMS Laboratory. As the human-system team interactions researchers affirmed, Windows operating systems require frequent mandatory updates, which are difficult to accomplish in the current configuration.
Overall, the facilities and resources to support existing work in the hybrid human–technology intelligence core competency are high-quality and adequate for the needs of the current research. The INFORMS Laboratory was impressive, and there is clear potential for the technologies in this laboratory to be used by this core competency area, including work studying advanced decision-making in multi-agent vehicle scenarios as well as adapting the technologies available for AAR to reviewing decision-making. The INFORMS Laboratory is a jewel of ARL and is worthy of investment for its maintenance and upkeep. As noted above, the laboratory currently lacks a convenient or uncomplicated means of updating the software in its large bank of computers and servers due to local computer security restrictions, which significantly hinders the programmers and research scientists working there. In particular, the use of industry-standard and cutting-edge technologies requires continual upgrading and software maintenance, often over the open internet, which is challenging to undertake in the secure network environment that ARL requires. It is worth investing in this laboratory and adequately resourcing it, either through new staff hires or re-allocation of existing staff, with appropriately trained personnel (requiring a combination of information technology [IT] management and game development expertise) to overcome the challenges of maintaining a laboratory of this type in a secure network environment.
The antidisciplinary and prototype-based approach taken by this core competency would be well served by (1) improving staff resources in game development, especially for art assets, and (2) a dedicated makerspace facility. In terms of art asset development, high-fidelity models are important to the user experience in game-based research because they create a naturalistic environment; many projects had high-quality, high-fidelity art assets (e.g., the INFORMS Laboratory demonstrations), while others seemed not to have access to skilled artists and designers, so there is an opportunity to invest in this area. In terms of a makerspace, the antidisciplinary, experimental, and speculative design research that the core competency team aims to undertake would benefit from access to technologies such as 3D printing, laser cutting, and textile prototyping. Human decision-making is inherently tangible and embodied; to support this aspect of the research, access to tools that can help the research team prototype new interfaces and technologies will be key to its success.
To meet the needs of the neuroscience and neurotechnologies core competency, it was observed that the overall quality, appropriateness, and utilization of the resources available to ARL teams were impressive. The following suggestions are offered:
This section of the chapter synthesizes crosscutting observations that emerged from the individual reviews of the humans in complex systems core competencies into broader observations and suggestions for the humans in complex systems competency as a whole.
The overall scientific quality of the teams across all core competencies was found to be extremely impressive. The work presented was in many cases equivalent to, or better than, that of leading research institutions in the United States, and the research activities across the different core competencies demonstrated a broad understanding of science and research conducted elsewhere. The methods and approaches presented were often on par with those of leading research institutes in terms of technical quality and innovation. The scientific expertise within the competency, both internally and among ARL’s extramural partners, is also very impressive across the board, with strong qualifications and appropriate subject-matter expertise. Suggestions for where expertise could be broadened are mentioned in the findings above. There are also excellent synergies among the core competency teams within the competency. Additionally, ARL has shown tremendous impact in early-career research training; several stellar examples of career trajectories of ARL trainees were evident across the presentations. Continued support and mentorship of early-career researchers is highly suggested for the future.
A crosscutting concern that emerged from the review of the individual core competencies is that the humans in complex systems competency needs to better prioritize data science and the incorporation of statisticians. For the human-system team interactions core competency, it was noted that some statistical tests used by ARL presenters were not appropriate for the categories of data, and the analysis would have benefitted from consultation with a statistician. Review of the human-guided system adaptation core competency led to a similar recognition that a statistician is needed.
Review of the work in the neuroscience and neurotechnologies core competency also raised concerns about the balance between secondary data analysis and autonomy over the design of experiments. Although some data are being collected at ARL, many data sets were collected at collaborating research institutions. For higher impact, ARL research teams could have more autonomy over experimental design and data collection. One suggestion to increase the rigor, replicability, and reproducibility of neuroscience experiments is the creation of a small team (or center or laboratory) that provides support to ARL teams on data science principles for improving the reproducibility of experiments. Similar centers exist at most research-intensive universities.
The data science and statistics gap was also identified in the review of the estimating and predicting humans in complex systems core competency, where most if not all projects involve collecting, processing, evaluating, and managing vast amounts of data, in some cases in real time. To keep up with industry and leading academic institutions, it would be important to bolster the data science expertise within this core competency as well. ARL may consider incorporating more scientists and engineers who work in the rapidly evolving field of big data. This core competency would benefit from experts trained in cutting-edge statistical techniques (e.g., nonlinear and Bayesian methods), computer scientists, and applied mathematicians who can develop and apply leading-edge ML algorithms, tools, and computing infrastructure that enable rapid, parallel computation. Expertise in statistical analysis and data science may be grown locally by working more closely with ARL’s extramural partners or by expanding educational and training opportunities in statistical analysis and data science for current staff.
Another crosscutting observation from the review of the estimating and predicting humans in complex systems and the neuroscience and neurotechnologies core competencies is that some research could be transitioned to real-world ecological settings earlier to ensure the ecological validity of the research. For the estimating and predicting humans in complex systems core competency, nearly all of the projects use experimental frameworks that try to recreate real-world scenarios in a controlled laboratory or virtual environment. This is an undoubtedly important first step, but overreliance on it may eventually stifle progress; it is therefore suggested that researchers in this core competency consider how to rapidly accelerate toward acquiring data and studying behaviors in ecologically relevant contexts (i.e., in the field). Operationally, this means more investment in lean, wearable sensor technology that can be used in such settings. For the neuroscience and neurotechnologies core competency, the suggestion is that, in addition to basic science research, more emphasis could be placed on applying the knowledge learned in real-world ecological settings, which might further accelerate the research as well as its impact.
The facilities within the humans in complex systems competency are impressive across the board and are equipped to serve the needs of its researchers. There was a concern, however, that computing resources and technology platforms will need investment to ensure that the digital infrastructure is supported and strengthened. For example, the INFORMS Laboratory does not have a convenient or uncomplicated means of updating the software in its large bank of computers and servers due to local computer security restrictions. This problem poses a significant hindrance to the programmers and research scientists working in the INFORMS Laboratory; Windows operating systems require frequent mandatory updates, which are difficult to accomplish in the current configuration. The INFORMS Laboratory is a very important part of ARL and is worthy of investment for its maintenance and upkeep. In particular, the use of industry-standard and cutting-edge technologies requires continual upgrading and software maintenance, often over the open internet, which is challenging to undertake in the secure network environment that ARL requires. It is worth investing in this laboratory and adequately resourcing it with appropriately trained staff (requiring a combination of IT management and game development expertise) to overcome the challenges of maintaining a laboratory of this type in a secure network environment.
The reviews of the hybrid human–technology intelligence and human-guided system adaptation core competencies both gave rise to the idea of developing a dedicated makerspace with technologies such as 3D printing, laser cutting, and textile prototyping. Human decision-making is inherently tangible and embodied; supporting this aspect of the research requires access to tools that can help research teams prototype new interfaces and technologies. For the human-guided system adaptation core competency, a makerspace with manufacturing experts in embedded systems and hardware, one that is more streamlined to use, may encourage closing the hardware loop earlier.