Defining Symbolic AI
Symbolic AI, also known as Good Old-Fashioned AI (GOFAI) or Rule-Based AI (RBAI), is a subfield of AI concerned with creating explicit, internal symbolic representations of the world around it. The main objective of Symbolic AI is the explicit embedding of human knowledge, behavior, and “thinking rules” into a computer or machine. Through Symbolic AI, we can translate implicit human knowledge into a formalized, declarative form based on rules and logic.
Understanding explicit and implicit knowledge
Explicit knowledge is any clear, well-defined, and easy-to-understand information. Explicit knowledge is based on facts, rules, and logic. An excellent example of explicit knowledge is a dictionary. In a dictionary, words and their respective definitions are written down (explicitly) and can be easily identified and reproduced.
Implicit knowledge refers to information gained unintentionally and usually without conscious awareness. As a result, implicit knowledge tends to be difficult to explain or formalize. Examples of implicit human knowledge include knowing how to ride a bike or how to swim. Note that implicit knowledge can eventually be formalized and structured to become explicit knowledge. For example, knowing how to ride a bike is implicit knowledge, but writing a step-by-step guide on how to ride one turns that knowledge into explicit knowledge.
In the Symbolic AI paradigm, we manually feed the machine knowledge represented as symbols. Symbolic AI assumes that the key to making machines intelligent is providing them with the rules and logic that make up our knowledge of the world.
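To make this concrete, here is a minimal sketch in Python of what explicitly embedded knowledge can look like: a fact and a rule are written down declaratively as symbols, and the machine applies the rule by simple symbol matching. The fact, the rule, and every name used here are illustrative assumptions for this sketch, not part of any particular Symbolic AI library.

```python
# A hand-coded, explicit fact: the temperature of water is below zero.
facts = {("temperature_of", "water", "below_zero")}

# A declarative rule: IF the temperature of water is below zero,
# THEN the state of water is ice.
rule = {
    "if": ("temperature_of", "water", "below_zero"),
    "then": ("state_of", "water", "ice"),
}

# The machine "reasons" by matching symbols, not by learning from data.
if rule["if"] in facts:
    facts.add(rule["then"])

print(facts)
# Contains ('state_of', 'water', 'ice') alongside the original fact
# (set ordering may vary).
```

Nothing in this sketch is learned from data; every piece of knowledge is stated explicitly and declaratively, which is precisely the point of the paradigm.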
Humans, symbols, and signs
Symbolic AI is heavily inspired by human behavior. Humans interact with each other and the world through symbols and signs. The human mind subconsciously creates symbolic and subsymbolic representations of our environment. It’s how we think and learn. Our world is full of fuzzy, implicit knowledge. The objects of the physical world become abstract concepts in our minds, often with varying degrees of truth depending on perception and interpretation. Yet somehow, we can still confidently navigate our way through life. We can share information and teach each other new skills. We can do this because our minds take real-world objects and abstract concepts and decompose them into rules and logical relationships. These rules encapsulate knowledge of the target object, and we learn them implicitly.
This approach has been our way of life since the beginning of time. Thomas Hobbes, the English philosopher, famously argued that thinking is nothing more than symbol manipulation and that our ability to reason is essentially our mind computing with those symbols. René Descartes also compared our thought process to symbolic representations. Our thinking process essentially becomes an algebraic manipulation of symbols. Think about it for a second. What happens when we think? We start to formulate ideas. Ideas are based on symbols that represent some other object or concept. For example, the term Symbolic AI is itself a symbolic representation of a particular concept, and agreeing on this symbol allows us to intuitively understand and communicate about that concept. Then, we combine, compare, and weigh different symbols together or against each other. That is, we carry out an algebraic process over symbols, using semantics to reason about individual symbols and the relationships between them. Semantics allow us to define how the different symbols relate to each other. They also enable us to interpret symbolic representations.
To properly understand this concept, we must first define what we mean by a symbol. The Oxford Dictionary defines a symbol as a “Letter or sign which is used to represent something else, which could be an operation or relation, a function, a number or a quantity.” The key phrase here is represent something else. Symbols are merely explicit references to implicit concepts. We use symbols to standardize or, better yet, formalize an abstract concept. This process is commonly referred to as conceptualization. At face value, symbolic representations provide no value, especially to a computer system. However, we understand these symbols and hold this information in our minds. We possess the knowledge necessary to understand the syntactic structure of the individual symbols and their semantics (that is, how the different symbols combine and interact with each other). It is through this conceptualization that we can interpret symbolic representations.
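As a small, hypothetical illustration of this distinction, consider how a machine might hold a handful of symbols together with the semantics that relate them. The symbols, relations, and helper function below are assumptions made purely for this sketch, not an established notation.

```python
# Symbols by themselves are just tokens; they carry no meaning for the machine.
symbols = {"heart", "love", "red"}

# Semantics: explicit relations describing how the symbols combine and interact.
semantics = {
    ("heart", "symbol_of", "love"),
    ("heart", "has_colour", "red"),
}

def interpret(symbol):
    """Interpretation: everything the semantics state about a given symbol."""
    return [triple for triple in semantics if triple[0] == symbol]

print(interpret("heart"))
# e.g., [('heart', 'symbol_of', 'love'), ('heart', 'has_colour', 'red')]
# (order may vary)
```

The string "heart" means nothing on its own; it only becomes useful once the relations around it are stated, which mirrors the conceptualization described above.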
Let’s consider a newborn child, for example. At birth, the newborn possesses limited innate knowledge about our world. A newborn does not know what a car is, what a tree is, or what happens if you freeze water. The newborn does not understand the meaning of the colors in a traffic light system or that a red heart is the symbol of love. A newborn starts only with sensory abilities: the ability to see, smell, taste, touch, and hear. These sensory abilities are instrumental to the development of the child and brain function. They provide the child with the first independent source of explicit knowledge – the first set of structural rules.
With time and sensory experiences, these structural rules become ingrained in the human mind, promoting further psychological development. The child begins to understand and learn rules, such as if you freeze water, it will eventually become ice. Here, ice is purely a label representing frozen water. Fire is hot, and if you touch something hot, it will hurt. The child begins to understand the physical and psychological world one rule at a time, continuously building a symbolic representation of the world by learning newer and perhaps more complex syntactic and semantic rules. Eventually, the child will be able to communicate these symbolic representations to other humans and vice versa. As humans, we widely encourage the formalization of knowledge and are therefore heavily dependent on symbolic knowledge. Some symbolic examples include the following:
- Phonograms: Any symbol (typically a letter or character) used to represent a vocal sound. Phonograms describe the pronunciation of a particular word. For example, the term dog has three phonograms, d/o/g, while the word strawberry has seven, s/t/r/aw/b/err/y.
- Logograms: Any linguistic symbol (a letter or sign) used to represent a complete word or phrase. Logograms do not convey the phonetics of the word or phrase they represent. The $ (dollar) and & (ampersand) signs are good examples of logograms.
- Pictograms: Any schematic graphical (pictorial) symbol representing an entire word, phrase, or concept. Gender symbols and graphical charts are two examples of pictograms.
- Typograms: Any symbol, typically linguistic, that represents the definition or implication of a particular word through manipulating its letters. A typogram essentially becomes a symbol that encapsulates another symbol. For example, a typogram of the word missing might be m-ss-n-g. This is because the “i”s are missing from the word.
- Iconograms: Any graphical symbol that is used to represent an entire word, phrase, or concept. Iconograms differ from pictograms because they tend to be more graphically and artistically detailed. A drawing of a flower and a view of a map are examples of iconograms.
- Ideograms: Any symbol that represents a word or concept. Ideograms often take simple geometric shapes, which distinguishes them from other graphical symbols. As the name suggests, while they can define words, they are typically used to represent ideas. Examples of ideograms include a traffic stop sign or a no smoking sign.
Irrespective of our demographic and sociographic differences, we can immediately recognize Apple’s famous bitten apple logo or Ferrari’s prancing black horse. Even our communication is heavily based on symbols.
Figure 2.1 depicts the Sumerian language, which is recognized as the earliest known written language, dating back to circa 3100 BC (source: https://www.history.com/topics/ancient-middle-east/sumer#:~:text=The%20Sumerian%20language%20is%20the,for%20the%20next%20thousand%20years.). Its script comprised graphical symbols representing the various nouns, objects, and actions of the time. This is perhaps the best representation of the “thinking in symbols” concept.
Figure 2.1: The Sumerian language. Image by Mariusz Matuszewski on Pixabay
Humans thrive on interaction, and formalizing and declaring representations of implicit concepts and abstract objects is crucial to universal communicative abilities. The ability to create symbolic representations of the world around us might be a differentiating trait of intelligence. Recently, scientists have also found that other animals, including primates, dolphins, and horses, can understand and utilize human symbols to interact and communicate with us. In one such experiment, a group of horses was shown three symbols representing “no change,” “add a blanket,” and “remove a blanket.” The horses could choose what they wanted based on the weather conditions by pointing toward the respective symbol. This feat is truly remarkable and drives home the power of symbols!
Now that we’ve discussed the vital role that symbols and signs play in everyday life, how does all this tie together with Symbolic AI?