Artificial Intelligence Frequently Asked Questions

This FAQ, developed by the Air Force Research Laboratory (AFRL), presents a number of frequently asked questions concerning autonomous systems. The goal is to provide a self-consistent position on the subject that can be used to facilitate discussions on some of the underlying concepts, the science and technology challenges, and the potential benefits for addressing capability gaps.

Part 1. What is an Autonomous System (AS)?

We have defined an autonomous system in terms of its attributes across three dimensions—namely, proficiency, trustworthiness, and flexibility:

  • An AS should be designed to ensure proficiency in the given environment, tasks, and teammates envisioned during operations. Desired properties for proficiency include situated agency, adaptive cognition, multiagent emergence, and experiential learning.

  • An AS should be designed to ensure trust when operated by or teamed with its human counterparts. Desired tenets of trust include cognitive congruence and transparency, situation awareness, effective human-systems integration, and human-systems teaming/training.

  • An AS should exhibit flexibility in its behavior, teaming, and decision-making. Desired principles of flexibility include the ability to conduct different tasks (task flexibility), work under different peer-to-peer relationships (peer flexibility), and take different cognitive approaches to problem-solving (cognitive flexibility).

We believe that all of these dimensions need to be satisfied to some degree if we are to effectively field and use ASs in the Air Force. Stated another way, a failure to satisfy the design space across all three dimensions will lead to a failure of a fielded AS: low proficiency will lead to the use of other systems, low trust will lead to disuse, and low flexibility will lead to an AS that fails to exhibit true autonomy under changing circumstances that may not have been envisioned during the design phase.

 

A natural question is whether these dimensions say anything about how much (or what level of) autonomy the system has. The answer is no. Purposefully, there is no intent to use the three dimensions to define or guide some notion of levels of autonomy, simply because it is not clear that levels of autonomy are a useful construct.

Part 2. General Concepts

What is intelligence? What is artificial intelligence?

Intelligence is the ability to gather observations, create knowledge, and appropriately apply that knowledge to accomplish tasks. Artificial intelligence (AI) is a machine that possesses intelligence.

What is an AS's internal representation?

Current ASs are programmed to complete tasks using different procedures. The AS's internal representation is how the agent structures what it knows about the world: its knowledge (what the AS uses to take observations and generate meaning) and the way it organizes that meaning and understanding, for example, the programmed model used inside the AS as its knowledge base. The knowledge base can change as the AS acquires more knowledge or as the AS further manipulates existing knowledge to create new knowledge.

What is meaning? Do machines generate meaning?

Meaning is what changes in a human's or AS's internal representation as a result of some stimulus; it is the meaning of that stimulus to that human or system. When you, a human, look at an American flag, the sequence of thoughts and emotions it evokes in you is the meaning of that experience to you at that moment. When the image is shown to an AS, and the pixel intensities evoke some programmed changes in that AS's software, then those changes are the meaning of that flag to that AS. Here we see that the AS generates meaning in a manner that is completely different from how a human does it. The meaning of a stimulus is therefore the agent-specific representational changes evoked by that stimulus in that agent (human or AS). Meaning is not just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge, a modification of the ongoing simulation (consciousness; see below), or even the updating of the agent's knowledge resulting from the stimulus is included in the meaning of a stimulus to an agent. Meaning is not static; it changes over time, and the meaning of a stimulus can be different for a given agent depending on when it is presented to the agent.
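As a minimal sketch of this idea (the class, keys, and stimulus names below are illustrative assumptions, not a fielded design), the meaning of a stimulus to an agent can be viewed as the set of changes the stimulus evokes in that agent's internal representation, which also shows why meaning is not static:

```python
# Minimal sketch: "meaning" as the change a stimulus evokes in an agent's
# internal representation. All names here are illustrative assumptions.
import copy


class Agent:
    def __init__(self):
        # The internal representation: a simple key/value knowledge store.
        self.representation = {"flag_seen_before": False, "alert_level": 0}

    def perceive(self, stimulus):
        """Update the representation in response to a stimulus and return
        the changes; those changes are the meaning of the stimulus."""
        before = copy.deepcopy(self.representation)
        if stimulus == "american_flag":
            self.representation["flag_seen_before"] = True
            self.representation["alert_level"] += 1
        after = self.representation
        # Meaning = everything that changed, not just the raw stimulus.
        return {k: (before.get(k), after[k])
                for k in after if before.get(k) != after[k]}


agent = Agent()
print(agent.perceive("american_flag"))  # first presentation changes more...
print(agent.perceive("american_flag"))  # ...than a later one: meaning is not static
```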

What is understanding? Do machines understand?

Understanding is an estimation of whether an AS's meaning will result in it acceptably accomplishing a task. Understanding occurs if the meaning generated by the performing AS increases the belief of an evaluating human (or evaluating AS) that the performer will respond acceptably. Meaning is the change in an AS's internal representation resulting from a query (the presentation of a stimulus); understanding is the impact of that meaning, resulting in the expectation of successful accomplishment of a particular task.
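A sketch of this evaluative view, with a hypothetical threshold and function name of our own choosing: an evaluator attributes understanding to a performing AS when the meaning the performer generates raises the evaluator's belief that the task will be accomplished acceptably.

```python
# Sketch: understanding as an evaluator's increased belief that the performer
# will acceptably accomplish a task. Threshold and names are assumptions.
def evaluator_attributes_understanding(prior_belief, belief_after_observing_meaning,
                                       acceptance_threshold=0.8):
    """Return True if the performer's generated meaning increased the
    evaluator's belief enough to expect acceptable task completion."""
    increased = belief_after_observing_meaning > prior_belief
    acceptable = belief_after_observing_meaning >= acceptance_threshold
    return increased and acceptable


# Example: watching the AS respond to a query raises belief from 0.5 to 0.9.
print(evaluator_attributes_understanding(0.5, 0.9))  # True: we say it "understood"
print(evaluator_attributes_understanding(0.5, 0.6))  # False: not enough confidence
```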

What is knowledge?

Knowledge is what is used to generate the meaning of stimuli for a given agent. Historically, knowledge has come from a species capturing and encoding it genetically via evolution, from the experience of an individual animal, or from animals communicating knowledge (via culture) to other members of the same species. With advances in machine learning, it is a reasonable argument that most of the knowledge generated in the world in the future will be generated by machines.

What is thinking? Do machines think?

Thinking is the process used to manipulate an AS's internal representation, that is, the generation of meaning, where meaning is the change in the internal representation resulting from stimuli. If an AS can change or manipulate its internal representation, then it can think.

What is reasoning? Do machines reason?

Reasoning is thinking in the context of a task: the ability to think about what is perceived and the actions to take to complete the task. If the system updates its internal representation, it generates meaning; when that thinking is associated with accomplishing a task, the system is reasoning. If the system's approach is not generating the meaning required to acceptably accomplish the task, it is not reasoning appropriately.

What is a task?

A task (or goal) is an agent-centric desired future state or sequence of states of the world. The word "state" can include internal states (e.g., for learning), and the definition is specifically agent-centric: tasks exist only in the context of a particular agent's model, including the criteria for successful completion of the task. This formulation includes change-based tasks (transform this thing into that thing, or move from here to there) as well as maintenance-type tasks (keep doing something, never allow something to happen). It is important to note that we must make a distinction between the specification of a task (problem description) and the performance of a task (solution execution).

Task specification involves:

  • constraints on the system evolution (i.e., how states are allowed to change, or in what sense they must stay the same)

  • initial and completion criteria expressed in the agent's representation (this could be probabilistic and/or multi-faceted)

Performance of a task requires the communication and translation of a task specification onto an executing agent's representation. The executing agent might need to incorporate planning, sub-tasking, or other techniques in order to achieve the objectives of the task specification. Completion of a task is again agent-centric, and there could be misalignment between the specifying agent's and the executing agent's evaluations of task completion; a minimal sketch of this specification/performance distinction appears at the end of this answer.

The statement of the equivalence of two tasks is always with respect to some agent's evaluation of their equivalence.
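The following is a minimal sketch of the distinction between task specification and task performance; the class and field names, and the toy altitude example, are illustrative assumptions rather than a proposed standard:

```python
# Sketch: a task specification (problem description) separate from task
# performance (solution execution). Names and structure are assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, float]  # an agent-centric snapshot of the world


@dataclass
class TaskSpec:
    """Specification expressed in the specifying agent's representation."""
    initial_criterion: Callable[[State], bool]      # when the task may begin
    completion_criterion: Callable[[State], bool]   # when the task counts as done
    constraints: List[Callable[[State], bool]] = field(default_factory=list)


def execute(spec: TaskSpec, state: State, planner) -> bool:
    """The executing agent translates the spec into actions (planning,
    sub-tasking, etc.) and judges completion in its own representation."""
    if not spec.initial_criterion(state):
        return False
    for action in planner(state):
        state = action(state)
        if not all(c(state) for c in spec.constraints):
            return False  # the system evolution violated a constraint
    return spec.completion_criterion(state)


# Example: raise altitude from 0 to at least 100 while never exceeding 150.
spec = TaskSpec(
    initial_criterion=lambda s: s["alt"] == 0.0,
    completion_criterion=lambda s: s["alt"] >= 100.0,
    constraints=[lambda s: s["alt"] <= 150.0],
)
climb = lambda step: (lambda s: {**s, "alt": s["alt"] + step})
print(execute(spec, {"alt": 0.0}, planner=lambda s: [climb(50.0)] * 2))  # True
```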

What is cognition? What makes a system cognitive?

Cognition is the process of creating knowledge and understanding through thought, experience, and the senses (including creating knowledge via "culture"). A system that can create knowledge and understanding through thinking, experience, and sensing is cognitive. As an example, a Cognitive Electronic Warfare (CEW) system gathers data from its sensors and creates knowledge. It uses relevant knowledge to accomplish its EW mission, which demonstrates a level of understanding the system has with respect to its task.

What is a situation?

A situation is the linkage of individual knowledge entries in the AS's internal representation that can be combined to make a new, single knowledge entry. This new entry becomes a situation through its linkage to the individual entries it is composed of. Situations are the fundamental unit of cognition. Situations are defined by their relationship to, and how they can interact with, other situations, and they are comprehended as a whole.
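As an illustrative sketch (the class structure and the example entries are our assumptions), a situation can be represented as a single knowledge entry that remains linked to the individual entries it combines:

```python
# Sketch: a situation as one knowledge entry composed of, and linked to,
# other entries in the internal representation. Names are illustrative.
class KnowledgeEntry:
    def __init__(self, label, components=None):
        self.label = label
        self.components = components or []  # links to the entries it combines
        self.related = []                    # links to other situations

    def is_situation(self):
        # An entry becomes a situation through its linkage to components.
        return len(self.components) > 0


aircraft = KnowledgeEntry("unidentified aircraft")
heading = KnowledgeEntry("heading toward protected airspace")
no_squawk = KnowledgeEntry("not squawking an assigned code")

# The combined entry is comprehended as a whole, yet remains linked to its parts.
intrusion = KnowledgeEntry("possible airspace intrusion",
                           components=[aircraft, heading, no_squawk])
print(intrusion.is_situation())  # True
```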

What is situated cognition?

Situated cognition is a theory that posits that knowing is inseparable from doing by arguing that all knowledge is situated in activity bound to social, cultural, and physical contexts. This is the so-called see/think/do paradigm.

What is learning? What is deep learning?

Learning is the cognitive process used to adapt knowledge, understanding, and skills, through experience, sensing, and thinking, so that the agent can respond to change. Depending upon the approach to cognition the agent is using (its choice of representation, for example, symbolic, connectionist, etc.), learning is the ability of the agent to encode a model using that representation (the rules in a symbolic agent, or the way artificial neurons are connected and their weights adjusted in a connectionist approach). Once the model has been encoded, it can be used for inference. Deep learning is a subset of the connectionist approach that incorporates many neuronal processing layers and a learning paradigm that has overcome past limitations associated with the multilayer "credit assignment" problem (i.e., which weight should be adjusted to improve performance); it has made use of big data and multiple instantiations for training and has benefited from advances in computational infrastructure. Deep learning has received much attention in recent years for its ability to process image and speech data; it is made possible largely by the processing capabilities of current computers, the dramatic increase in available data, and modest modifications to learning approaches. Deep learning is basically a very successful big-data analysis approach.
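A minimal sketch of the connectionist view described above: a tiny two-layer network trained by backpropagation (the standard answer to the multilayer credit-assignment problem). The data, dimensions, and learning rate are arbitrary toy assumptions:

```python
# Sketch: learning as encoding a model by adjusting connection weights.
# Backpropagation assigns credit to weights in both layers. Data are toy values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                    # 64 toy observations, 3 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy target the net must learn

W1, W2 = rng.normal(size=(3, 8)) * 0.1, rng.normal(size=(8, 1)) * 0.1
lr = 0.5

for step in range(500):
    h = np.tanh(X @ W1)                  # hidden layer activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))  # output layer (sigmoid)
    error = p - y                        # how wrong the prediction is
    # Credit assignment: propagate the error back through both layers.
    grad_W2 = h.T @ error / len(X)
    grad_W1 = X.T @ ((error @ W2.T) * (1 - h**2)) / len(X)
    W2 -= lr * grad_W2                   # adjust weights to improve performance
    W1 -= lr * grad_W1

print("training accuracy:", float(((p > 0.5) == (y > 0.5)).mean()))
```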

Part 3. Examples

Is a garage door opener automated or automatic?

A garage door opener opens or closes the door when it is signaled to do so and stops based on some preset condition (the number of turns the motor makes, or a limit switch); the same signal-and-stop logic applies whether the door starts open or closed. A garage door opener is an automatic system since it performs a simple task based on some trigger mechanism and stops at the completion of its task, also based on some trigger mechanism.
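That trigger-and-stop behavior can be sketched as a simple state machine (the signal and limit-switch method names are hypothetical):

```python
# Sketch: a garage door opener as an automatic system: a signal triggers
# motion, and a preset condition (a limit switch) stops it. Names are illustrative.
class GarageDoorOpener:
    def __init__(self):
        self.state = "closed"

    def signal(self):
        # The same trigger starts opening or closing, depending on the state.
        if self.state == "closed":
            self.state = "opening"
        elif self.state == "open":
            self.state = "closing"

    def limit_switch(self):
        # A preset condition stops the motion; no reasoning is involved.
        if self.state == "opening":
            self.state = "open"
        elif self.state == "closing":
            self.state = "closed"


door = GarageDoorOpener()
door.signal(); door.limit_switch()
print(door.state)  # "open": a trigger started the task, a preset condition ended it
```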

Is an automatic target recognition system automated or automatic?

Current methods used for target recognition work under a set of assumed operating conditions, against known targets, and can reject target-like objects, providing some level of robustness. As such, they are automated solutions.

Is a Ground-Collision Avoidance System (GCAS) autonomous?

A GCAS takes control of an aircraft if there is concern that the pilot will cause the aircraft to collide with the ground. Here, the system takes command and control (C2) away from the pilot to keep the aircraft from colliding with the ground; the pilot can then regain C2 (either the system explicitly relinquishes control or the pilot takes it back). The GCAS demonstrates peer flexibility and is therefore addressing a key challenge for an AS noted earlier. Notice that in this description the system does not, however, exhibit task or cognitive flexibility. Some argue that GCAS is merely an automated system, owing to its lack of cognitive flexibility.

Is an autopilot system autonomous?

An autopilot system has the task of flying a particular trajectory, at a particular speed, and at a particular altitude (or altitude profile) set by the pilot. The autopilot does not change its task or its peer relationship with the pilot, nor does it change how it controls the aircraft. It therefore does not satisfy any of the three flexibility principles of autonomy. It does, however, reflexively adapt to changing conditions that impact its heading, speed, and altitude to maintain the parameters provided to it. It is therefore an automated system.

Is an adaptive cruise control system autonomous?

A cruise control system exists to maintain a constant speed. An adaptive cruise control system also maintains its speed but adapts to sensed changes in front of it by changing its speed, without permission from the driver, to maintain a safe distance behind the car in front of it. It may also brake if the need arises. An adaptive cruise control system is automated since it never changes its peer relationship, never changes its task (only the way it accomplishes its task, in a preprogrammed manner), and does so with no cognitive flexibility.
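A sketch of that preprogrammed adaptation, using a simple proportional rule to hold a safe following distance (the gain, gap, and function name are illustrative assumptions):

```python
# Sketch: adaptive cruise control adjusting speed, without driver input,
# to hold a safe gap to the car ahead. Gains and distances are toy values.
def adaptive_cruise_step(own_speed, set_speed, gap_m, safe_gap_m=40.0, k_gap=0.5):
    """Return the commanded speed for the next control step (m/s)."""
    if gap_m < safe_gap_m:
        # Slow down in proportion to how far inside the safe gap we are.
        command = own_speed - k_gap * (safe_gap_m - gap_m)
        return max(command, 0.0)          # may brake all the way to a stop
    # Otherwise resume the driver's set speed (never exceed it).
    return min(own_speed + 1.0, set_speed)


speed = 30.0  # m/s
for gap in (80.0, 50.0, 30.0, 20.0):      # car ahead getting closer
    speed = adaptive_cruise_step(speed, set_speed=30.0, gap_m=gap)
    print(f"gap={gap:5.1f} m -> commanded speed={speed:5.1f} m/s")
```

The task (maintain speed and a safe gap) and the peer relationship never change; only the preprogrammed control rule responds to the sensed gap.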

Is an air-to-air missile autonomous?

An air-to-air missile, or even a cruise missile, has a fixed peer relationship with the human that launched it. The missile is doing a predefined task and doing it in a preprogrammed way. None of the three principles of flexibility are demonstrated—and it is therefore not autonomous. The system is remarkable and able to complete a very complex task, but it is merely automated.

Are the Google Car and Tesla with autopilot autonomous?

The Google Car drives to a location as directed, but it can change its task from driving to making an emergency stop to avoid running into a pedestrian, for example. When the Tesla autopilot locks the driver out and will not allow engagement because the driver has taken his or her hands off the wheel, or when the VW autonomous car takes control and brakes to prevent a head-on collision, we are seeing instances of peer flexibility, as in the GCAS described earlier. But none of these systems is autonomous.

Is the Roomba autonomous?

The Roomba is a popular home appliance that serves as the homeowner's proxy to vacuum the carpet. It is quite capable, and new versions incorporate modern robotics, including the ability to determine the robot's current location while mapping out its environment. The Roomba has a sole task: to vacuum. It does not have the ability to change its peer relationship, and it does not change its model for completing its task. As such, the Roomba is not autonomous, but it is a very capable and useful automated system.

Is IBM's Watson intelligent? Is Watson an AI?

Watson has knowledge that is gathered and/or generated by a combination of human programming and the application of those programs to large stores of data. It is capable of efficiently storing and retrieving potentially relevant knowledge so that it may respond to queries. One could argue that when Watson is allowed to use its programming to search and appropriately index large repositories of data, it is gathering information that it later applies appropriately. In doing its search, it uses an ensemble learning approach, meaning it changes its model so it can provide better results, an aspect of cognitive flexibility. Watson is therefore addressing a key challenge for an AS, but we would not consider it autonomous. However, Watson is a combination of hardware and software that exhibits intelligence; as defined previously, it is therefore an AI.

Is Siri intelligent? Is Siri an AI?

The information gathered by Siri to respond to queries is gathered via its programming. Siri does gather data in response to a query and often applies that knowledge appropriately to provide a useful answer; it is a good automation. Since Siri is a combination of hardware and software that exhibits intelligence as defined previously, we would label Siri an AI.

Does Siri understand what I am asking?

One can say that Siri understood when we have a reasonable expectation that it will give an answer that can be used, and that it did not understand when we have a reasonable expectation that the answer will not be acceptable. But with all AI systems, the user must realize that the meaning generated by the system is not "human meaning" and thus must be used judiciously. As an example, an AI can call a school bus an ostrich with high confidence, yet any human looking at the image will not be able to understand how the AI could possibly make that error. The reason is that, to the AI, the meaning is a location in a vector space reached by processing pixel intensities and colors, and an ostrich is simply an object category that does not possess the rich meaning humans associate with it.
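A minimal sketch of why such errors are possible: to the classifier, the "meaning" of an image is just a point in a feature space, and a confident label is whichever category region that point lands in. The features, weights, and labels below are made up for illustration:

```python
# Sketch: an image classifier's "meaning" is a location in a feature space;
# confidence is just a softmax over scores. All numbers are made up.
import numpy as np

labels = ["school_bus", "ostrich"]
# Hypothetical learned class weight vectors in a 3-dimensional feature space.
class_weights = np.array([[2.0,  1.0, -3.0],   # school_bus
                          [0.5, -1.0,  3.0]])  # ostrich

def classify(feature_vector):
    scores = class_weights @ feature_vector
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax confidence
    return labels[int(np.argmax(probs))], float(probs.max())

# A perturbed feature vector can land on the wrong side of the decision
# boundary while the reported confidence stays high.
clean     = np.array([2.0, 1.0, 0.0])
perturbed = np.array([2.0, 1.0, 2.0])   # small pixel changes, shifted feature
print(classify(clean))       # ('school_bus', ~0.99)
print(classify(perturbed))   # ('ostrich', ~0.99): confidently wrong
```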

Does AlphaGo understand the game of Go?

AlphaGo's understanding of the game of Go can only be assessed from the perspective of another agent. A non-Go player might be willing to say AlphaGo understands the game because, from a naïve perspective, it responds acceptably to the task of playing the game. Here, AlphaGo has generated a "meaning" for board states that facilitates an expected acceptable response. From the perspective of one who wants to define "understanding the game" as "the AS has generated, internal to itself, the meaning of what a game is," one might conclude that AlphaGo does not understand the game of Go. AlphaGo is an automated system, and because it uses knowledge to generate its meaning of those board states that facilitate its response, it is an AI. Table B.1 summarizes some of the categorizations made above.

Table B.1. Mapping of several systems to the principles of flexibility, artificial intelligence, automated systems, and automatic systems.

Part 4. Practical Limitations of AI

Where do current AI systems fail?

Current approaches to AI rely on knowing everything that needs to be known about the environment and programming in acceptable responses for all possibilities. These approaches are unable to respond correctly when they cannot get all of the data they expect or when they encounter a stimulus for which they have no programmed response. The problem is compounded when both conditions occur.
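A toy sketch of this failure mode: an agent whose responses are an enumerated lookup over expected inputs has no acceptable answer when a stimulus falls outside that enumeration or arrives with missing data (the stimulus and response names are invented):

```python
# Sketch: a preprogrammed agent fails when it receives a stimulus it was not
# programmed for, or receives incomplete data. Stimulus names are invented.
PROGRAMMED_RESPONSES = {
    ("radar_contact", "clear_weather"): "track target",
    ("radar_contact", "heavy_rain"):    "increase filter gain",
    ("no_contact",    "clear_weather"): "continue search pattern",
}

def respond(stimulus, conditions):
    try:
        return PROGRAMMED_RESPONSES[(stimulus, conditions)]
    except KeyError:
        # Outside the designed envelope: no correct response is available.
        return "UNDEFINED BEHAVIOR"

print(respond("radar_contact", "clear_weather"))  # works as designed
print(respond("jamming_detected", "heavy_rain"))  # unanticipated stimulus
print(respond("radar_contact", None))             # missing expected data
```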

How can we be sure an AI will do what we want it to do versus something we absolutely do NOT want it to do?

A significant challenge faced when using AI to perform tasks is the problem of validation and verification of that AI's performance. Any AI is programmed to generate an internal representation (its meaning), and that representation has to capture all aspects of what delineates acceptable from unacceptable behavior for us to be confident it will only do what it is supposed to do. It is impossible to test an AI under all possible operating conditions, so it will not be known when it will fail or perform unacceptably. The same issues are faced with human agents: the Air Force goes to great lengths to train its Airmen to do tasks but cannot possibly test them on all the operating conditions they will face. Training is continually refined based on feedback on the performance of those Airmen doing their jobs. The same will have to be done with AIs. There is an additional challenge/advantage in that the meaning of a given stimulus can be programmed into the AI and therefore tested against.