
The Frame Problem

While a chatbot or search engine can function reasonably well using pure word-to-word textual references, both falter when topics become more specialized: chatbots grow incoherent, and search engines tend to present only the most popular or obvious information. (Dreyfus) This is an example of the Frame Problem: when presented with an unfamiliar subject, a computer does not know which information is important and which is unnecessary. (Copeland) Making that determination requires a certain amount of knowledge about the subject, and without that knowledge in its database, a computer cannot draw the distinction. The problem is compounded when a machine lacks "bodily common sense". In the case of CYC, its creators have not yet solved the problem of how the system can quickly retrieve relevant, and only relevant, information for a given query. It seems that if a machine could inhabit a body and learn from its interactions with the environment, it could acquire this sort of common sense for itself; in doing so it would also learn what is important and what is not, thus avoiding the frame problem.
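
To make the retrieval difficulty concrete, the sketch below mimics a symbolic system that must scan its entire fact base to decide what bears on a query. The knowledge base, the facts, and the matching rule are all invented for illustration; this is not how CYC is actually implemented, only a minimal picture of why exhaustive relevance-checking does not scale.

```python
# Toy illustration of the relevance side of the frame problem.
# Every name and fact here is hypothetical.

KNOWLEDGE_BASE = [
    ("penguin", "is-a", "bird"),
    ("bird", "can", "fly"),
    ("penguin", "cannot", "fly"),
    ("water", "is", "wet"),
    # ... a real common-sense database holds millions of such assertions
]

def facts_relevant_to(topic: str) -> list[tuple[str, str, str]]:
    """Naive relevance filter: examine every stored fact.

    The cost grows with the size of the knowledge base, and a purely
    textual match still cannot tell which facts actually matter for
    the query -- that judgment itself requires common sense.
    """
    return [fact for fact in KNOWLEDGE_BASE if topic in fact]

print(facts_relevant_to("penguin"))
```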

Even so, the developers of many embodied AI systems, such as the mobile robot Shakey, treat their creations' bodies as burdens rather than as a means of solving the frame problem. Shakey, for example, can see its surroundings via camera, but it translates what it sees into a "micro-world" filled with symbols. It then interprets the placement of the symbolic objects to determine its next move. Its movement is extremely slow, because the robot must continually update its large "micro-world" every time the environment changes or it moves. (Copeland) This approach to physical interaction is much like a search engine's interpretation of image tag data: it translates what is there into something else so that the computer can understand it. As the symbols become more complex and more numerous, the frame problem causes the process to slow to a crawl.
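
The loop below sketches this sense-model-plan-act pattern, rebuilding the whole symbolic model on every cycle. The class, the symbols, and the placeholder planner are invented for illustration and are not Shakey's actual software; the point is only that the cost of each full rebuild grows with the size of the micro-world.

```python
# Rough sketch of a Shakey-style control cycle, under the assumptions
# stated above. All names and costs are hypothetical.

import time

class MicroWorld:
    """Symbolic model of the environment, rebuilt from camera input."""

    def __init__(self, symbols: set[str]):
        self.symbols = symbols

def sense() -> set[str]:
    # Stand-in for camera input already translated into symbols.
    return {"BOX-1 AT DOORWAY", "ROBOT AT CORRIDOR", "LIGHT ON"}

def rebuild_model(observations: set[str]) -> MicroWorld:
    # The whole symbolic model is reconstructed after every move,
    # because the system cannot tell which facts an action left
    # unchanged -- the frame problem in its original form.
    time.sleep(0.1)  # token cost; grows with the number of symbols
    return MicroWorld(observations)

def plan_next_move(world: MicroWorld) -> str:
    # Trivial placeholder planner: act on the first symbol found.
    return f"APPROACH {sorted(world.symbols)[0]}"

for step in range(3):
    world = rebuild_model(sense())   # full update on every cycle
    print(f"step {step}: {plan_next_move(world)}")
```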

The flaws of symbolic AI form a self-perpetuating cycle: the system lacks common sense, so common sense is given to it piece by piece; it then has too much information to handle effectively, precisely because it lacks the common sense needed to sort that information. There is a sense that, without a significant change of paradigm, the AI community will remain restricted to time-consuming and dubiously effective projects whose flaws currently have no solution.