 
[http://knoesis.org/amit Amit Sheth], [http://knoesis.org/researchers/pramod/ Pramod Anantharam], and [http://knoesis.org/researchers/cory/ Cory Henson]<br /><br />
While the debate about whether AI, robots, or machines will replace humans is raging (among Gates, Hawking, Musk, Thiel, and others), there remains a long tradition of viewpoints that take a progressively more human-centric view of computing. A range of viewpoints, from machine-centric to human-centric, has been put forward by McCarthy (intelligent machines), Weiser (ubiquitous computing), Engelbart (augmenting human intellect), Licklider (man-machine symbiosis), and others. We focus on the recent progress taking place in human-centric visions, such as Computing for Human Experience (CHE) (Sheth) and Experiential Computing (Jain) [S10]. CHE is focused on serving people’s needs, empowering us while keeping us in the loop, making us more productive through better and timely decision making, and improving and enriching our quality of life.<br/>
The Web continues to evolve and serve as the infrastructure and intermediary that carries massive amounts of multimodal and multisensory observations, and facilitates the reporting and sharing of observations. These observations capture various situations pertinent to people’s needs and interests along with all their idiosyncrasies. Data of relevance to people’s lives span the physical (reality, as measured by sensors/devices/IoT), cyber (all shared data and knowledge on the Web), and social (all the human interactions and conversations) spheres [SAH13]. Web data may contain events of interest to everyone (e.g., climate), to many (e.g., traffic), or to only one (something very personal, such as asthma). These observations contribute toward shaping the human experience.<br/>
We emphasize contextual and personalized interpretation in processing such varying forms of data to make them more readily consumable and actionable for people. Toward this goal, we discuss the computing paradigms of semantic computing, cognitive computing, and an emerging paradigm in this lineage, which we term perceptual computing. In our view, these technologies offer a continuum for making the most of vast, growing, and diverse data about the many things that matter to people’s needs, interests, and experiences. This is achieved through actionable information, both when humans desire something (explicit) and through ambient understanding (implicit) of when something may be useful to people’s activities and decision making. Perceptual computing is characterized by its use of interpretation and exploration to actively interact with the surrounding environment in order to collect relevant data for understanding the world around us.<br/>
This article consists of two parts. First we describe semantic computing, cognitive computing, and perceptual computing to lay out distinctions while acknowledging their complementary capabilities to support CHE. We then provide a conceptual overview of the newest of these three paradigms—perceptual computing. For further insights, we describe a scenario of asthma management and explain the computational contributions of semantic computing, cognitive computing, and perceptual computing in synthesizing actionable information.  This is done through computational support for contextual and personalized processing of data into abstractions that raise it to the level of the human thought process and decision-making.

==1. Challenge: Making the Web More Intelligent to Serve Humans Better==

As we continue to progress in developing technologies that disappear into the background, as envisaged by Mark Weiser [MW91], the next important focus of human-centered computing is to endow the Web, and computing in general, with sophisticated human-like capabilities to reduce information overload. In the near future, such capabilities will enable computing at a much larger scale than the human brain is able to handle, while still doing so in a highly contextual and personalized manner. This technology will provide more intimate support for our every decision and action, ultimately shaping the human experience. The three capabilities we consider are semantics, cognition, and perception. While dictionary definitions of cognition and perception often overlap significantly, we make a distinction based on how cognitive computing has been defined so far and on the complementary capabilities of perceptual computing. Over the next decade the development of these three computing paradigms—both individually and in cooperation—and their integration into the fabric of the Web will enable the emergence of a far more intelligent and human-centered Web.
==2. Semantics, Cognition, and Perception==
Semantics, cognition, and perception are terms used in a variety of situations. We would like to clarify our interpretation of these terms, specify the connections among them in the context of human cognition and perception, and describe how data (observations) relates to semantics, cognition, and perception. Accordingly, we ignore their use in other contexts; for example, the use of semantics in the context of programming languages, or perception in the context of people interacting with computing peripherals.
  
<b>Semantics</b> are the meanings attributed to concepts and their relations within the mind. This web of concepts and relations is used to represent knowledge about the world. Such knowledge may then be utilized for interpreting our daily experiences through cognition and perception. Semantic concepts may represent (or unify, or subsume, or map to) various patterns of data; e.g., we may recognize a person by her face (visual signal) or by her voice (speech signal), but once recognized, both the visual and speech signals represent a single semantic concept to a person. Semantics hides syntactic and representational differences in data and allows us to refer to data through a conceptual abstraction. Generally, this involves mapping observations from various physical stimuli, such as visual or speech signals, to concepts and relationships as humans would interpret and communicate them.
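The idea that different signal patterns resolve to one semantic concept can be sketched in a few lines of Python. This is our illustration only, not code from any system described here; the recognizer functions and concept names are hypothetical.

```python
# Hypothetical recognizers for illustration: two different signal patterns
# map to the same semantic concept, hiding representational differences.
def recognize_face(visual_signal):
    # Pretend pattern matching on a visual signal.
    return "Person:Anna" if visual_signal == "face_pattern_A" else None

def recognize_voice(speech_signal):
    # Pretend pattern matching on a speech signal.
    return "Person:Anna" if speech_signal == "voice_pattern_A" else None

# A small semantic network of (subject, relation, object) triples.
semantic_network = {
    ("Person:Anna", "isA", "Person"),
    ("Person:Anna", "hasCondition", "Asthma"),
}

face_concept = recognize_face("face_pattern_A")
voice_concept = recognize_voice("voice_pattern_A")
assert face_concept == voice_concept  # one concept, two modalities

# Once unified, we can query the network by concept, not by raw signal.
facts = [t for t in semantic_network if t[0] == face_concept]
```

Both modalities resolve to the single concept, after which the raw signals no longer matter for querying knowledge.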
  
<b>Cognition</b> is the act of interpreting data in order to derive an understanding of the world around us. This interpretation is done by utilizing domain/background knowledge and reasoning. Interpretation involves pattern recognition and the classification of patterns from sensory inputs generated by physical stimuli. Our interpretation of sensory inputs is robust to the noise and imprecision inherent in the environment. Interpretation converts multimodal and multisensory data from our senses into knowledge. Further, the derived knowledge can be utilized to gain insights or to answer questions.
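As a toy illustration of noise-robust interpretation (our construction; the breathing-rate prototypes are invented for the example), a nearest-prototype classifier maps a noisy sensory input to the closest known concept:

```python
# Background knowledge: prototype readings for known situations
# (breaths per minute; values are illustrative only).
prototypes = {
    "normal_breathing": [12, 14, 16],
    "exacerbation":     [24, 27, 30],
}

def interpret(observation):
    """Map a noisy observation to the closest known concept."""
    def distance(protos):
        return min(abs(observation - p) for p in protos)
    return min(prototypes, key=lambda concept: distance(prototypes[concept]))

# Noisy inputs still resolve to a sensible concept.
print(interpret(15.5))  # near the normal-breathing prototypes
print(interpret(26.0))  # near the exacerbation prototypes
```

The point is not the classifier itself but that interpretation tolerates imprecision: inputs need not match any stored pattern exactly.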
  
<b>Perception</b> is also an act of interpreting data from the world around us. Unlike cognition on its own, perception utilizes sensing and actuation to actively explore the surrounding environment in order to collect data of relevance. This data may then be used by our cognitive facilities to more effectively understand the world around us. Perception involves interpretation and exploration, with a strong reliance on background knowledge [G97].
  
Perception is a cyclical process of interpretation and exploration of data utilizing associated cognition. Perception constantly attempts to match incoming sensory inputs (engaging associated cognition) with top-down expectations or predictions (based on cognition), and it closely integrates with human actions. While the outcome of cognition is an understanding of our environment, the act of perception results in applying that understanding to ask the right (contextually relevant) question.
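The interpretation-exploration cycle can be sketched as follows. This is a simplification we provide for illustration; the observation names and the exploration step are hypothetical.

```python
def perceive(observations, expectation, explore):
    """One perceptual cycle: interpret, compare to expectation, explore."""
    for obs in observations:
        if obs == expectation:
            return obs          # top-down expectation confirmed
    # Mismatch: actively ask the next (contextually relevant) question.
    return explore()

# Hypothetical exploration step that seeks a new observation.
def seek_air_quality():
    return "poor_air_quality"

# Expectation not met by incoming data, so the cycle explores further.
result = perceive(["pollen_normal"], "pollen_high", seek_air_quality)
```

When the expectation is confirmed, the cycle terminates with the matched observation; when it is not, exploration supplies the data the next interpretation step needs.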
  
Perception and cognition utilize semantics for associating meaning with observations, resulting in the integration of heterogeneous observations. Both perception and cognition deal with the interpretation of observations. However, perception further explores the observation space, an exploration influenced by the current knowledge of the domain and by environmental observations (cognition). Perception acts as a bootstrapping process between observations and domain knowledge, resulting in the improvement of the domain knowledge.
  
==3. Computational Aspects of Semantics, Cognition, and Perception==
For conceptual clarity and a general understanding of what the three terms mean, we exemplify semantics, cognition, and perception using a real-world scenario of asthma management. Asthma is a multifaceted, highly contextual, and personal disease. It is multifaceted because asthma is characterized by many aspects, such as triggers in the environment and a patient's sensitivity to those triggers. It is contextual because events of interest, such as the location of the person and the triggers at that location, are crucial for timely alerts. It is personal because asthma patients have varying responses to triggers, and their actions vary based on the severity of their condition.
  
Asthma patients are characterized by two measures used in clinical guidelines: severity level and control level. The severity level is diagnosed by a doctor and can take one of four states: mild, mild persistent, moderate, and severe. The control level indicates the extent of asthma exacerbations and can take one of three states: well controlled, moderately controlled, and poorly controlled. Patients don’t change their severity level often, but their control level may vary drastically depending on triggers, environmental conditions, medication, and the patient’s symptomatic variations.
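These two measures can be captured in a small data model. This is a sketch of ours; the class and field names are not from any clinical system, only the level values follow the text above.

```python
# The two clinical measures: a fixed diagnosis and a varying control state.
SEVERITY_LEVELS = ("mild", "mild persistent", "moderate", "severe")
CONTROL_LEVELS = ("well controlled", "moderately controlled", "poorly controlled")

class AsthmaPatient:
    def __init__(self, name, severity, control):
        assert severity in SEVERITY_LEVELS and control in CONTROL_LEVELS
        self.name, self.severity, self.control = name, severity, control

    def update_control(self, control):
        # Control varies with triggers, environment, and medication;
        # severity is a diagnosis and rarely changes, so it has no setter.
        assert control in CONTROL_LEVELS
        self.control = control

anna = AsthmaPatient("Anna", "severe", "well controlled")
anna.update_control("moderately controlled")
```

Modeling severity as immutable and control as mutable mirrors the asymmetry the guidelines describe.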
  
Let’s consider an asthma patient, Anna, who is 10 years old and has been diagnosed with ‘severe’ asthma. She is quite disciplined with her medication and avoids exposure to triggers, resulting in a control level of ‘well controlled’. She receives an invitation from her friend to play soccer in a few days. Now, she and her parents must balance her longing to play in the soccer match against the need to avoid exacerbating her asthma.
  
The solution to this dilemma is not straightforward and cannot be found using only existing factual information on the Web, in medical books or journals, or in electronic medical records. Knowledge found on the Web may contain common knowledge about asthma, but it may not be directly applicable to Anna. For example, while there may be websites [ATM15] describing general symptoms, triggers, and tips for better asthma management, Anna and her parents may not be able to use this information since her symptomatic variations for environmental triggers may be unique. While medical domain knowledge of asthma in the form of publications (e.g., PubMed) may contain symptomatic variations for various triggers, it is challenging to apply this knowledge to Anna’s specific case even though it may be described in her electronic medical records. Furthermore, such an application of knowledge would not consider the environmental and physiological dynamics of Anna or her quality-of-life choices.
<br/>We will explore semantic, cognitive, and perceptual computing and then explain their roles in providing a solution to the asthma problem.
===3.1 Semantic Computing (SC)===
SC encompasses technology for representing concepts and their relations in an integrated semantic network that loosely mimics the interrelation of concepts in the human mind. This conceptual knowledge, represented formally in the form of an ontology, can be used to annotate data and infer new knowledge from interpreted data (e.g., to infer expectations of cognized concepts). Additionally, SC plays a crucial role in dealing with multisensory and multimodal observations, leading to the integration of observations from diverse sources (see “horizontal operators” in [SAH13]). SC has a rich history of over 15 years [S14], resulting in various annotation standards for a variety of data that are now in use (e.g., for social and sensor data [C12]). The annotated data is used for interpretation by cognitive and perceptual computing. Figure 1 shows SC as a vertical box through which interpretation and exploration are routed (further explained in Section 3.3). SC also provides languages for the formal representation of background knowledge.
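Semantic annotation of heterogeneous observations might look like the following sketch. This is our illustration; the prefixed concept names loosely echo ontology style but are hypothetical, not drawn from the SSN ontology or any standard.

```python
def annotate(source, value, concept):
    """Attach an ontology concept to a raw observation."""
    return {"source": source, "value": value, "concept": concept}

# Observations of different modalities and formats, tagged with
# shared (hypothetical) ontology concepts.
observations = [
    annotate("pollen_sensor", 980, "aqi:PollenCount"),
    annotate("weather_api", 31.5, "wx:TemperatureC"),
    annotate("tweet", "wheezing all night", "sym:AsthmaSymptom"),
]

# Heterogeneous data can now be selected by concept rather than by format.
symptoms = [o for o in observations if o["concept"] == "sym:AsthmaSymptom"]
```

The annotation layer is what lets cognitive and perceptual components query "symptoms" without caring whether the datum came from a sensor, a web service, or a social post.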
  
The semantic network of general medical domain knowledge related to asthma and its symptoms defines asthma control levels in terms of symptoms. This general knowledge may be integrated with knowledge of Anna’s specific case found in her EMR. The weather, pollen, and air quality index information observed by sensors may be available through web services. These annotated observations spanning multiple modalities, general domain knowledge, and context-specific knowledge (Anna’s asthma severity and control level) pose a great challenge for interpretation. Interpreting the observations requires background knowledge and, unfortunately, Anna’s parents do not possess such asthma-related knowledge. Anna and her parents are left with no particular insights at this step, since manually interpreting all the observations is not a practical solution. In the next two subsections, we describe the interpretation of data using domain knowledge for deriving deeper insights.
  
[[File:PC.png|Figure 1. Conceptual distinctions between perceptual, cognitive, and semantic computing along with the demonstration of the cyclical process of perceptual computing]]<br/>Figure 1. Conceptual distinctions between perceptual, cognitive, and semantic computing along with the demonstration of the cyclical process of perceptual computing
===3.2 Cognitive Computing (CC)===
When launching a project on cognitive computing in 2002, DARPA defined it as “reason[ing], [the] use [of] represented knowledge, learn[ing] from experience, accumulat[ing] knowledge, explain[ing] itself, accept[ing] direction, be[ing] aware of its own behavior and capabilities as well as respond[ing] in a robust manner to surprises.” Cognitive hardware architectures and cognitive algorithms are two broad focus areas of current research in CC. Cognitive algorithms interpret data by learning and matching patterns in a way that loosely mimics the process of cognition in the human mind. Cognitive systems learn from their experiences and then get better at performing repeated tasks. CC acts as a prosthetic for human cognition by analyzing massive amounts of data and answering questions humans may have when making certain decisions. One such example is IBM Watson, which won the game show Jeopardy! in early 2011. The IBM Watson approach (albeit not the technology) has since been extended to medicine to aid doctors in clinical decisions. CC interprets annotated observations obtained from SC, or raw observations from diverse sources, and presents the result of the interpretation to humans. Humans, in turn, utilize the interpretation to perform actions, which go on to form additional input for the CC system. CC systems utilize machine learning and other AI techniques to achieve all this without being explicitly programmed. Figure 1 shows the interpretation of observations by CC utilizing background knowledge.
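A minimal caricature of a cognitive component that improves with repeated tasks (our sketch, in no way IBM Watson's or DARPA's method) accumulates evidence from outcomes and prefers previously successful answers:

```python
from collections import defaultdict

class CognitiveAnswerer:
    def __init__(self):
        # Accumulated experience: how often each answer worked per question.
        self.experience = defaultdict(lambda: defaultdict(int))

    def answer(self, question):
        """Prefer the answer that succeeded most often in the past."""
        seen = self.experience[question]
        return max(seen, key=seen.get) if seen else "unknown"

    def learn(self, question, answer, succeeded):
        # Feedback from human action becomes additional input to the system.
        if succeeded:
            self.experience[question][answer] += 1

cc = CognitiveAnswerer()
cc.learn("safe_to_play?", "yes_with_preventive_medication", succeeded=True)
print(cc.answer("safe_to_play?"))
```

The loop of answer, human action, and feedback is the point: each repetition adds evidence, so the component's answers improve without reprogramming.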
  
Bewildered by the challenges in making the decision, Anna’s parents contact Anna’s pediatrician, Dr. Jones, for help. Let’s assume that Dr. Jones has access to a CC system such as IBM Watson for medicine, and specifically for asthma management [W15]. Consequently, Dr. Jones is assisted by a CC system that can analyze massive amounts of medical literature, electronic medical records (EMRs), and clinical outcomes for asthma patients. Such a system would be instrumental in extending the cognitive abilities of Dr. Jones (minimizing the cognitive overload caused by the ever-increasing research literature). Dr. Jones discovers from the medical literature and EMRs that people with well-controlled asthma (i.e., patients whose profiles match Anna’s) can indeed engage in physical activities if under the influence of appropriate preventive medication. Dr. Jones is still unclear about the vulnerability of Anna’s asthma control level to weather and air quality index fluctuations. Dr. Jones lacks personalized and contextualized knowledge about Anna’s day-to-day environment, rendering him ill-informed in making any recommendation to Anna.
  
===3.3 Perceptual Computing (PC)===
Socrates taught that knowledge is attained through the careful and deliberate process of asking and answering questions. Through data mining, pattern recognition, and natural language processing, CC has provided a technology to support our ability to answer complex questions. PC will complete the loop by providing a technology to support our ability to ask contextually relevant and personalized questions. PC complements SC and CC by providing machinery to ask the next question, aiding decision makers in gaining actionable insights. In other words, PC determines what data is most relevant for disambiguating among multiple possible causes (of Anna’s asthma condition, for example). If the expectations derived by utilizing domain knowledge and observations from the real world do not match the real-world outcomes, PC updates its knowledge of the real world. Through focused attention, utilizing sensing and actuation technologies, this relevant data is sought in physical-cyber-social environments. PC envisions the more effective interpretation of data through a cyclical process of interpretation and exploration that loosely mimics the process of perception in the human mind and body. Neisser defines perception as “an active, cyclical process of exploration and interpretation” [N67]. Machine perception is the process of converting sensor observations to abstractions through a cyclical process of interpretation and exploration utilizing background knowledge [HST12]. While CC efforts to date have investigated the interpretation of data, they have yet to adequately address the relationship between the interpretation of data and the exploration of (or interaction with) the environment. Additionally, PC involves the highly personalized and contextualized refinement of background knowledge by engaging in the cyclical process of interpretation and exploration.
Interpretation is analogous to the bottom-brain operation of processing observations from our senses, and exploration compares to the top-brain processing of making and adapting plans to solve problems [K13]. This type of interaction—often involving focused attention and physical actuation—enables the perceiver to collect and retain data of relevance (from the ocean of all possible data), and thus facilitates a more efficient, personalized interpretation or abstraction. Figure 1 demonstrates the cyclical process of PC involving interpretation and exploration. The interpretation of observations leads to abstractions (concepts in the background knowledge), and exploration leads to actuation for constraining the search space of relevant observations.
  
A PC system may be implemented as an intelligence-at-the-edge technology [HST12], as opposed to a logically centralized system that processes massive amounts of data on the Web. In the case of Anna, a CC system processed all the medical knowledge, EMRs, and patient outcomes to provide information to Dr. Jones, who then applied it to Anna’s case (personalization). Dr. Jones faced challenges in interpreting weather and the air quality index with respect to the vulnerability of Anna’s asthma control level. With the PC system being at the edge (closer to Anna, possibly realized as a mobile application with inputs from multiple sensors, such as kHealth: http://wiki.knoesis.org/index.php/Asthma), it actively engages in the cyclical process of interpretation and exploration. For example, in the last month Anna exhibited reduced activity during a soccer practice. This observation is interpreted by the PC system as an instance of asthma exacerbation. Further, the PC system actively seeks observations (the PC system asking questions) of weather and the air quality index to determine their effect on Anna’s asthma symptoms. A PC system can take generic background knowledge (poor air quality may cause asthma exacerbations) for exploration and add contextual and personalized knowledge (Anna’s exposure to poor air quality may cause her asthma exacerbations). Dr. Jones can be presented with these pieces of information along with the information from the CC component. Anna is advised to refrain from playing in the soccer match due to poor air quality on the day of the game. This information will be valuable to Anna and her parents, possibly resulting in Anna avoiding conditions and/or situations that may lead to the exacerbation of her asthma.
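The edge-side cycle described above might be sketched as follows. This is our construction; kHealth's internals are not described in this article, and all rule and observation names here are hypothetical.

```python
# Generic background knowledge: trigger -> possible effect.
generic_rules = {"poor_air_quality": "asthma_exacerbation"}

def interpret(observation):
    # Reduced activity during exercise is read as a possible exacerbation.
    return "asthma_exacerbation" if observation == "reduced_activity" else None

def explore(environment):
    # Actively seek the contextually relevant observation (air quality).
    return environment.get("air_quality")

def perceive_and_personalize(observation, environment, personal_rules):
    """One cycle: interpret, explore, and refine personal knowledge."""
    abstraction = interpret(observation)
    if abstraction is None:
        return personal_rules, None
    trigger = explore(environment)
    if generic_rules.get(trigger) == abstraction:
        # Contextualize: this trigger affects this patient specifically.
        personal_rules = dict(personal_rules, **{trigger: abstraction})
    return personal_rules, abstraction

rules, abstraction = perceive_and_personalize(
    "reduced_activity", {"air_quality": "poor_air_quality"}, {})
```

After one cycle, the generic rule "poor air quality may cause exacerbations" has become a personalized rule tied to this patient's observed context, which is the refinement of background knowledge PC adds over CC.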
  
==4. Conclusions==
Perceptual computing is an evolution of cognitive computing, in which computers can not only provide answers to the complex questions posed to them but can also subsequently ask the right follow-up questions and then interact with the environment—either physical, cyber, or social—to collect the relevant data. As PC evolves, its personalization components will extend to include temporal and spatial context, as well as other factors that drive human decisions and actions, such as emotions and cultural and social preferences. This will enable more effective answers, better decisions, and timely actions that are specifically tailored to each person. We envision the cyclical process of PC evolving background knowledge toward contextualization and personalization. We demonstrated PC and its complementary nature to SC and CC with a concrete, real-world example of asthma management. The Internet of Things, often hailed as the next great phase of the Web, with its emphasis on sensing and actuation, will exploit all three forms of computing.
  
Figure 3 depicts PCS computing involving observations, experiences, background knowledge, and perceptions in a goal toward a human centric vision. The observations from the physical world are used to perceive an environment. Our perceptions are strongly influenced by our background knowledge and current observations. By analyzing the observations in the context of our background knowledge, we orient ourselves toward subsequent actions. The decision regarding which action to take is based on our evidence and experiences. The final action is executed and the process repeats in a loop. In Figure 3, the influence of background knowledge on the way we interpret current observations though perceptual inference is shown. Perceptions determine our experience and evolve our background knowledge. Experiences from the social world influence our background knowledge and indirectly shape our perceptions and subsequent experiences. All these interactions are heavily dependent on humans to analyze the connections between physical, cyber, and social worlds. The vision of PCS computing is to consider observations, knowledge, and experiences across PCS layers to provide a more holistic computational framework.  
+
==References==
 +
[ATM15] Asthma Triggers and Management: Tips to Remember, http://www.aaaai.org/conditions-and-treatments/library/asthma-library/asthma-triggers-and-management.aspx (Accessed Jan 18, 2015) <br/>
 +
[C12] Michael Compton, et al. "The SSN ontology of the W3C semantic sensor network incubator group." Web Semantics: Science, Services and Agents on the World Wide Web 17 (2012): 25-32. <br/>
  
Physical-Cyber systems can no longer be limited to special purpose embedded systems designed for a single task [22]. Our vision goes beyond the interactions and integration of observations from physical, cyber, and social domains. We envision a deeper semantic integration and interplay between the three layers, as shown in Figure 2. The outer circle in the figure represents systems with shallow integration between physical, cyber, and social systems. PCS computing in the inner circle provides deeper integration, interpretation, and personalization of the physical, cyber, and social worlds.
+
[C13] Andy Clark. "Whatever next? Predictive brains, situated agents, and the future of cognitive science." Behavioral and Brain Sciences 36.03 (2013): 181-204. <br/>
  
[[File:pcscycle.png|Figure 3. Computational Cycle in Physical Cyber Social Computing exemplified with a healthcare example]]
+
[G97] Richard Gregory. "Knowledge in perception and illusion," Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences 352.1358 (1997): 1121-1127. <br/>
  
[HST12] Cory Henson, Amit Sheth, Krishnaprasad Thirunarayan. 'Semantic Perception: Converting Sensory Observations to Abstractions,' IEEE Internet Computing, vol. 16, no. 2, pp. 26-34, Mar./Apr. 2012 <br/>
[HTS12] Cory Henson, Krishnaprasad Thirunarayan, and Amit Sheth. 'An Efficient Bit Vector Approach to Semantics-based Machine Perception in Resource-Constrained Devices,' 11th International Semantic Web Conference (ISWC 2012), Boston, Massachusetts, USA, November 11-15, 2012. <br/>

==Relationship between PCS and CHE==
John Boyd’s concept of observe, orient, decide, and act (the OODA loop) [23] provides a useful context for describing PCS computing, involving observations from physical, cyber, and social worlds, and their intelligent processing leading to CHE. In order to show how PCS computing can be used to improve human experience, we first need to be able to speak intelligibly about experience. Experience is a broad term, so we begin by dividing it into its component parts. This ontology of human experience is composed of observation (and perception), orientation, decision, and action. These experiential activities are derived from the OODA loop, which will serve as a foundational ontology of human experience.

* Observe – Observation is the act of examining the environment and gathering data. The environment could be physical, examined through the human senses or machine sensors. It could be social, examined through social network technologies such as Facebook or Twitter. It could be cyber, examined through Web technologies such as Wikipedia, Google search, or Wolfram Alpha.
* Orient – Orientation is the act of analyzing the data from the observed environment to form a conceptualization of the situation; that is, integrating and interpreting the data, translating it from low-level data into high-level knowledge. Orientation is how we conceptualize and understand a situation, based on “culture, experience, new information, analysis, synthesis, and heritage.”
* Decide – Decision is the act of deliberating and choosing between multiple actions in order to proceed toward a goal.
* Act – Action is the process of engaging in a decided activity. This could involve physical action, such as movement, or gathering additional information through observation by examining the physical, social, or cyber environment.

We characterize the PCS computing process as an OODA-loop process, and Figure 3 can be viewed as an OODA loop.
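The four steps above can be sketched as one turn of a control cycle. This is a toy illustration; the sensor readings, the congestion threshold, and the decision rule are all hypothetical placeholders, not part of any described system.

```python
# A compact sketch of one OODA-loop turn. All data and rules here are
# hypothetical placeholders for illustration only.

def observe(environment):
    """Observe: gather data from physical, cyber, and social sources."""
    return environment["sensor_readings"]

def orient(data, background_knowledge):
    """Orient: interpret low-level data into a high-level situation."""
    if max(data) > background_knowledge["threshold"]:
        return "congested"
    return "normal"

def decide(situation):
    """Decide: choose an action that moves toward the goal."""
    return "reroute" if situation == "congested" else "continue"

def act(action, environment):
    """Act: execute the chosen action; here we simply record it."""
    environment["last_action"] = action
    return environment

env = {"sensor_readings": [0.2, 0.9, 0.4], "last_action": None}
knowledge = {"threshold": 0.8}

data = observe(env)
situation = orient(data, knowledge)
action = decide(situation)
env = act(action, env)
print(situation, action)   # congested reroute
```

In a full PCS system this loop would run continuously, with each action changing the environment and thereby producing new observations for the next iteration.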

==How to resolve the Big Data problem?==
The use of physical, cyber, and social data manifests as a Big Data challenge and opportunity. If the existence of social data alone leads to a Big Data problem, as with many contemporary social networking systems, then with the introduction of physical data the Big Data problem will explode. The Big Data problem can be divided into the task of managing data with four distinguishing characteristics: volume, velocity, variety, and veracity. In our PCS computing approach, semantic computing plays a pivotal role in addressing these challenges. The following sketch describes how semantics-empowered PCS computing addresses them:

* The challenge of variety, or heterogeneity of data, is addressed by complementing traditional statistical, information retrieval, lexical, and natural language processing techniques with semantic interoperability and integration. The former address syntactic and structural/representational interoperability. The latter involves the use of ontologies for semantic descriptions of the concepts that data and observations capture, semantic annotation of data, and semantic integration or fusion of the data. Semantic-services-enhanced techniques are also used to model and interact with devices and sensors in the physical world, or agents interfacing with humans.
* The challenge of volume and velocity is addressed by integration and interpretation of data at the source (or as close to it as possible), through "intelligence at the edge." This is accomplished by downscaling semantic processing of data and making each resource-constrained device more intelligent, i.e., capable of semantic filtering, integration, and interpretation of streaming heterogeneous data [9].
* The challenge of veracity is addressed by assessing the trustworthiness and provenance of observations in PCS systems. Trustworthiness may have different interpretations in physical, cyber, and social domains. Semantics can play an important role in meaningfully integrating these different notions of trust spanning physical, cyber, and social worlds [24].
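The variety point above can be made concrete with a tiny example of semantic annotation: two syntactically different records are mapped to a shared concept so they can be integrated. The source names, field names, and the concept are hypothetical, and a real system would use an ontology (e.g., via RDF) rather than a Python dictionary.

```python
# A toy illustration of addressing variety through semantic annotation:
# syntactically different records from two sources are mapped to one
# shared concept. The mapping table stands in for an ontology; all
# names here are hypothetical.

ONTOLOGY_MAPPING = {
    ("sensor_feed", "pm25"): "AirQualityObservation",
    ("citizen_report", "smog level"): "AirQualityObservation",
}

def annotate(source, record):
    """Attach a shared semantic concept to a source-specific record."""
    for field in record:
        concept = ONTOLOGY_MAPPING.get((source, field))
        if concept:
            return {"concept": concept, "value": record[field], "source": source}
    return None   # no known concept for this record

a = annotate("sensor_feed", {"pm25": 80})
b = annotate("citizen_report", {"smog level": "heavy"})
assert a["concept"] == b["concept"]   # integrated under one concept
```

Once both records carry the same concept, downstream components can query "all AirQualityObservation values" without knowing either source's syntax.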
 
==PCS Operators==
Besides the semantics-empowered capabilities described above, we expect that the core of PCS computing will be made possible by a series of PCS operators. We describe two such operators as early examples, shown in Figure 4.

[[File:pcsoperators.png|Figure 4. The DIKW [15] triangle along with two types of operators of PCS computing acting on healthcare-related data as an example]]
 
==Horizontal Operators==
Semantic integration of heterogeneous, multimodal observations plays an important role in the deeper integration and understanding of observations spanning physical, cyber, and social domains. Horizontal operators map multimodal PCS data into concepts to support semantic integration within each level of the multi-tier computation along the data-information-knowledge-wisdom (DIKW) dimension. Heterogeneous observations in the domain of healthcare include: (1) machine sensor observations: physical measurements from multiple, heterogeneous, active (requiring human involvement, e.g., blood pressure) and passive (requiring no human involvement, e.g., heart rate) sensors; (2) self-observations (in the form of subjective thoughts, feelings, moods, etc.); (3) social observations (from a network of family, friends, colleagues, etc.); and (4) demographic observations (aggregated characteristics of the population with similar attributes and/or lineage). PCS horizontal operators integrate all these observations (within each layer in Figure 4) using semantics-empowered integration techniques.
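A horizontal operator at the data layer can be sketched as a fusion step over the four observation modalities just listed. The concept and modality names below are hypothetical, and a real operator would rely on ontology-based matching rather than exact keys.

```python
# A toy sketch of a horizontal operator: observations of one patient
# arriving in four modalities are fused into a single, concept-keyed
# record. Modality and concept names are hypothetical.

observations = [
    {"modality": "machine_sensor", "concept": "HeartRate", "value": 88},
    {"modality": "self_observation", "concept": "Mood", "value": "anxious"},
    {"modality": "social", "concept": "FamilyHistory", "value": "hypertension"},
    {"modality": "demographic", "concept": "AgeGroup", "value": "40-49"},
]

def horizontal_integrate(obs_list):
    """Fuse multimodal observations into one concept-keyed record."""
    record = {}
    for obs in obs_list:
        record[obs["concept"]] = {"value": obs["value"],
                                  "modality": obs["modality"]}
    return record

patient_record = horizontal_integrate(observations)
print(sorted(patient_record))   # ['AgeGroup', 'FamilyHistory', 'HeartRate', 'Mood']
```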
 
==Vertical Operators==
Vertical operators are used to elevate observations from a lower level to a higher level in Figure 4. These operators are unique compared to existing operations for ascending the DIKW triangle in two ways: (1) the operator is agnostic to the source of the observations, and (2) the knowledge base is not limited to a formal ontology; it spans from statistical knowledge to social experiences. Semantic perception [4,8,9] is one way of ascending the triangle.
 
===Semantic Perception===
Perception is the act of translating low-level data, acquired through observation, into high-level knowledge. People have evolved sophisticated mechanisms to perceive their environment, which allow for high-level conceptualizations (or abstractions) of a situation. These abstractions are then used for subsequent decision-making and action, without mentally overwhelming a person with the sheer volume and velocity of incoming sensory signals. Through studies in cognitive psychology, scientists now understand that the key to perception is prior knowledge [6][7]. In order to design machines capable of perception, Semantic Web technologies may be utilized to integrate observation data with prior knowledge on the Web (such as Linked Open Data) [4][8][9] in order to interpret and make sense of the data. Traditionally, perception is thought of as the act of abstracting physical sensory signals (i.e., physical data). However, we must expand this notion to envision machine perception capable of utilizing prior knowledge found throughout the Web (cyber) in order to integrate and interpret observations from physical, cyber, and social dimensions. This type of perception is far more encompassing and far-reaching than the capability of any single person.
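The core idea of observation-to-abstraction translation can be sketched as explanation over a knowledge base: prior knowledge relates abstractions to the observations they account for, and perception selects the abstractions that cover what was observed. The knowledge fragment below is a made-up illustration, not the actual ontology-backed approach of [4][8][9].

```python
# A minimal sketch, loosely inspired by semantic perception: prior
# knowledge relates abstractions to the observable properties they
# explain, and perception returns the abstractions that cover the
# observed properties. The knowledge base is a hypothetical fragment.

PRIOR_KNOWLEDGE = {
    # abstraction -> set of observable properties it explains
    "flu": {"fever", "cough", "fatigue"},
    "cold": {"cough", "sneezing"},
    "allergy": {"sneezing", "itchy_eyes"},
}

def perceive(observed):
    """Return abstractions whose expected observations cover the input."""
    return sorted(abstraction
                  for abstraction, explains in PRIOR_KNOWLEDGE.items()
                  if observed <= explains)   # subset test

print(perceive({"cough", "fever"}))   # ['flu']
print(perceive({"sneezing"}))         # ['allergy', 'cold']
```

Note how a single abstraction ('flu') can stand in for many low-level signals, which is exactly what keeps the volume of sensory data from overwhelming decision-making.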
 
“In the next century, planet earth will don an electronic skin. It will use the Internet as a scaffold to support and transmit its sensations. This skin is already being stitched together [10].” With the advance of technologies such as semantic perception, along with PCS computing, this dream is quickly becoming reality. Another concept, the Global Brain [22], which envisions computing as more encompassing and holistic, states that “The best symbiosis of man and computer is where a program learns from humans but notices things they would not.”
 
Next, we describe two early examples of next-generation applications that can be made possible with PCS computing.
 
==Scenarios==
===Healthcare===
Personalized health and patient empowerment are important themes for the next generation of healthcare technology. Today we have smartphone applications driven by background knowledge, an increasing number of cyber-physical systems, such as medical implants for heart-related diseases, and social media applications through which patients learn from other patients and experts. Nevertheless, medicine and health are complex, with a wide array of variables, including symptoms, vital signs, personal history, family history, medications taken, genetic makeup, diet, exercise regimens, expectations and quality-of-life choices, social and economic factors, and so on. All these variables need to be put in a social and personal context. As an example, the NIH has identified that a person with diastolic pressure between 86 and 90 may be considered pre-hypertensive, and above 90 classified as stage I hypertension. For a South Asian male such as Ram, however, preferred and normative ranges may be lower.

Given the rapid growth of the Quantified Self movement and the use of social media for health, it is reasonable to expect that in the future one would have devices taking vital signs with respect to a health concern, and PCS computing technology constantly evaluating health conditions with respect to relevant knowledge (e.g., knowledge derived from the NIH and other recommendations) as well as social knowledge. For the latter, consider that a South Asian male is likely to have a good cross-section of friends with similar genetic and socio-economic profiles, so his health condition can be compared with those of his friends for a more personalized solution. Knowing that several friends who are close in age, with similar education and socio-economic situations, have cardiovascular problems would empower this person to closely compare a broader variety of variables. This leads to a better understanding of his risk profile that wouldn’t be possible without such social knowledge.

PCS computing would be able to answer Ram’s questions, introduced at the beginning of the article and shown in Figure 3. The numbering indicates the ordering of the processes in the cycle. Ram has diastolic blood pressure between 86 and 90 mmHg. The background knowledge from authoritative sources (e.g., NIH) indicates that he is pre-hypertensive. However, knowledge from similar people with the same ailment indicates that Asian males have lower-than-average thresholds for being diagnosed with hypertension. This background knowledge influences the perception process in inferring a higher risk. PCS computing can provide deeper insights and corrective actions. It is capable of integrating observations, experiences, and knowledge from people with similar ethnic, social, cultural, and economic backgrounds. Considering all these aspects while computing solutions, PCS will provide effective, personalized, and actionable information to Ram, as shown in Figure 3. As an Asian male, Ram is advised to reduce his salt intake and substitute spices for salt.
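The personalization step in Ram's scenario can be sketched as shifting a generic guideline threshold using group-specific knowledge before classifying a reading. The 86/90 mmHg cut-offs come from the scenario above; the 4 mmHg group offset is a made-up illustrative value, not a clinical recommendation.

```python
# A hedged sketch of personalized classification: a generic guideline
# is adjusted by group-specific knowledge before classifying a reading.
# The group offset below is a hypothetical value for illustration only.

GUIDELINE = {"pre_hypertensive": 86, "stage_1": 90}   # diastolic mmHg
GROUP_ADJUSTMENT = {"south_asian_male": -4}           # hypothetical offset

def classify_diastolic(reading, group=None):
    """Classify a diastolic reading, shifting thresholds for the group."""
    offset = GROUP_ADJUSTMENT.get(group, 0)
    if reading > GUIDELINE["stage_1"] + offset:
        return "stage_1_hypertension"
    if reading >= GUIDELINE["pre_hypertensive"] + offset:
        return "pre_hypertensive"
    return "normal"

# The same reading is interpreted differently in a personalized context
print(classify_diastolic(87))                        # pre_hypertensive
print(classify_diastolic(87, "south_asian_male"))    # stage_1_hypertension
```

The point is not the particular numbers but the structure: background knowledge (the guideline) is combined with social/demographic knowledge (the group offset) to produce a personalized interpretation of the same observation.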
 
===Vehicular Traffic===
With increased demand for resources, cities are under pressure to reduce and optimize resource consumption. Traffic monitoring is part of the broader scope of sustainability applications (e.g., water, energy, public safety) and involves cyber-physical systems. Observations in the domain of traffic span multiple modalities, from machine sensors to citizen sensors [12]. Machine sensors include speed sensors, cameras, noise sensors, etc., while citizen sensors are people reporting observations about traffic (such as police and commuters). All these sensors monitor a physical system, the road network, which collectively forms a physical-cyber-social system. These sensors are interconnected and their observations are available on the Web. There are also social (textual) observations about various events in a city, some of which may influence traffic. PCS computing envisions a holistic approach to computing that considers observations from all these modalities and further exploits cyber and collective knowledge.

Figure 5 shows a slow-moving traffic event being detected by sensors monitoring the speed of vehicles (red strip) on I-77 South at Ridgewood Road. Current cyber-physical systems [12,13] do not exploit collective knowledge of the domain of traffic available from existing knowledge bases on the Web (e.g., ConceptNet 5 [14], Open Data from city authorities). The knowledge of relationships between events affecting traffic can be derived from these sources. In this example, ConceptNet 5 defines a causal relation between an accident and slow-moving traffic. There is a tweet and a news article reporting an accident on Ridgewood Road, where the machine sensors detected slow-moving traffic.
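The linking step just described can be sketched as a lookup in causal background knowledge. In the spirit of the ConceptNet example, but hard-coded here rather than queried from ConceptNet, a causal relation connects a social report to a machine-sensed event at the same location; all event and location values are illustrative.

```python
# A toy sketch of using collective knowledge to explain a sensor-detected
# event: a hard-coded causal relation (standing in for a knowledge base
# such as ConceptNet) links a social report to a machine observation.

# Hypothetical causal background knowledge: (cause, effect) pairs
CAUSAL_KNOWLEDGE = {("accident", "slow_moving_traffic")}

machine_event = {"type": "slow_moving_traffic", "location": "Ridgewood Road"}
social_reports = [
    {"type": "accident", "location": "Ridgewood Road", "source": "tweet"},
    {"type": "concert", "location": "Main Street", "source": "news"},
]

def explain(event, reports):
    """Find social reports that causally explain a machine-sensed event."""
    return [r for r in reports
            if (r["type"], event["type"]) in CAUSAL_KNOWLEDGE
            and r["location"] == event["location"]]

print(explain(machine_event, social_reports))
# [{'type': 'accident', 'location': 'Ridgewood Road', 'source': 'tweet'}]
```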
 
Such an event will have a different effect on a person travelling than on a decision-maker such as a traffic authority. PCS computing will consider the context of this observation by asking questions: Who needs this information? How does this impact a person travelling? How does this impact a decision-maker? How long will these events last? What available knowledge and social experiences can be used in this analysis?

[[File:traffic.png|Figure 5. A traffic scenario showing physical, cyber, and social observations collectively processed to make sense of a traffic condition.]]

Considering different modalities for analysis helps us deal with the incompleteness (via complementary sensor observations) and uncertainty (via redundant sensor observations) that prevail in most domains, making physical-cyber-social observations an important way of dealing with problems in many domains.

PCS computing involves personalized and contextual processing of observations for enhancing human experience.
 
==Conclusions==
Technologies have long assisted humans in solving problems. Many crisp, well-developed computing theories exist for problem solving. Humans are inundated with observations from the physical, cyber, and social worlds; yet, we integrate and interpret these observations in a seamless way.

While the approach of computing has been to model problems using well-formed theories, we envision two possible ways of achieving the seamlessness humans exhibit: (1) use multiple well-formulated theories (e.g., probabilistic, logical, statistical) to solve a problem, and (2) adopt a fundamentally different approach to computing.

PCS computing captures a synergistic interaction between computing and humans while providing holistic computational solutions spanning the physical, cyber, and social worlds. The future of technology will not be primarily about asking questions and receiving relevant documents for search queries. CHE enabled by PCS computing represents a paradigm shift from search-based technology to solution-based technology, where knowledge is generated by continuous observation of human activities within the physical, cyber, and social worlds, and this knowledge is used to improve human experience. That is, the vision of PCS computing is computing that translates into action using knowledge from cyberspace, an idea that is synergistic with that of the Global Brain [22].

==Related Talks and Presentations==
{{#widget:SlideShare
|doc=physical-cyber-social-computing-wims13-keynote-130611152912-phpapp02
|width=425
|height=348
|padding=20px
}}

==References==
[1] Amit Sheth, 'Computing for Human Experience: Semantics-Empowered Sensors, Services, and Social Computing on the Ubiquitous Web,' IEEE Internet Computing (Sp. Issue on Internet Predictions: V. Cerf and M. Singh, Eds.), vol. 14, no. 1, pp. 88-91, Jan./Feb. 2010, doi:10.1109/MIC.2010.4 <br/>
[2] Weiser, Mark. "The computer for the 21st century." Scientific American 265, no. 3 (1991): 94-104. <br/>
[3] Amit Sheth, Semantics empowered Physical-Cyber-Social systems. In: What will the Semantic Web look like 10 years from now? In conjunction with the 11th International Semantic Web Conference 2012 (ISWC 2012), Boston, USA, November 11-15, 2012 (Workshop date: 11/11/2012). <br/>
[4] C. Henson, A. Sheth, K. Thirunarayan, 'Semantic Perception: Converting Sensory Observations to Abstractions,' IEEE Internet Computing, vol. 16, no. 2, pp. 26-34, Mar./Apr. 2012. <br/>
[5] Licklider, J.C.R., "Man-Computer Symbiosis", IRE Transactions on Human Factors in Electronics, vol. HFE-1, 4-11, March 1960. <br/>
[6] Neisser, U.: Cognition and Reality. Psychology, 218, San Francisco: W.H. Freeman and Company (1976). <br/>
[7] Gregory, R.L.: Knowledge in perception and illusion. In: Philosophical Transactions of the Royal Society of London, 352(1358), pp. 1121-1127 (1997). <br/>
[8] Cory Henson, Krishnaprasad Thirunarayan, Amit Sheth. An Ontological Approach to Focusing Attention and Enhancing Machine Perception on the Web. Applied Ontology, vol. 6(4), pp. 345-376, 2011. <br/>
[9] Cory Henson, Krishnaprasad Thirunarayan, and Amit Sheth, 'An Efficient Bit Vector Approach to Semantics-based Machine Perception in Resource-Constrained Devices,' In: Proceedings of the 11th International Semantic Web Conference (ISWC 2012), Boston, Massachusetts, USA, November 11-15, 2012. <br/>
[10] Neil Gross, “The Earth Will Don an Electronic Skin,” BusinessWeek, Aug. 1999; www.businessweek.com/1999/99_35/b3644024.htm. <br/>
[11] J. McCarthy, What Is Artificial Intelligence? <br/>
[12] Miller, Mahalia, and Chetan Gupta. "Mining traffic incidents to forecast impact." Proceedings of the ACM SIGKDD International Workshop on Urban Computing. ACM, 2012. <br/>
[13] Horvitz, Eric J., et al. "Prediction, expectation, and surprise: Methods, designs, and study of a deployed traffic forecasting service." arXiv preprint arXiv:1207.1352 (2012). <br/>
[14] Speer, Robert, and Catherine Havasi. "Representing general relational knowledge in ConceptNet 5." International Conference on Language Resources and Evaluation (LREC). 2012. <br/>
[15] Ackoff, R. L., "From Data to Wisdom", Journal of Applied Systems Analysis, Volume 16, 1989, pp. 3-9. <br/>
[16] Engelbart, Douglas. "Augmenting human intellect: a conceptual framework (1962)." PACKER, Randall and JORDAN, Ken. Multimedia. From Wagner to Virtual Reality. New York: WW Norton & Company (2001): 64-90. <br/>
[17] De Castro, Leandro Nunes. Fundamentals of Natural Computing: Basic Concepts, Algorithms, and Applications. Chapman & Hall/CRC, 2006. <br/>
[18] Mistry, Pranav, and Pattie Maes. "SixthSense: a wearable gestural interface." ACM SIGGRAPH ASIA 2009 Sketches. ACM, 2009. <br/>
[19] Puerta, Angel R. "The study of models of intelligent interfaces." Proceedings of the 1st International Conference on Intelligent User Interfaces. ACM, 1993. <br/>
[20] Bush, Vannevar. "As we may think." Atlantic Monthly, July 1945, 101-108. <br/>
[21] John Markoff (2010-10-09). "Google Cars Drive Themselves, in Traffic". The New York Times (Retrieved 11-28-2012). <br/>
[22] Global Brain, http://www.slideshare.net/timoreilly/towards-a-global-brain-7968429, TEDxSV talk, May 14, 2011 (Retrieved 11-30-2012). <br/>
[22] White, Jules, et al. "R&D challenges and solutions for mobile cyber-physical applications and supporting Internet services." Journal of Internet Services and Applications 1.1 (2010): 45-56. <br/>
[23] OODA-Loop, http://en.wikipedia.org/wiki/OODA_loop (Retrieved 11-30-2012). <br/>
[24] Pramod Anantharam, Cory A. Henson, Krishnaprasad Thirunarayan, and Amit P. Sheth, "Trust Model for Semantic Sensor and Social Networks: A Preliminary Report", Aerospace and Electronics Conference (NAECON), Proceedings of the IEEE 2010 National, pp. 1-5, 14-16 July 2010. <br/>
[25] Yan Xu, Maribeth Gandy, Sami Deen, Brian Schrank, Kim Spreen, Michael Gorbsky, Timothy White, Evan Barba, Iulian Radu, Jay Bolter, and Blair MacIntyre. 2008. BragFish: exploring physical and social interaction in co-located handheld augmented reality games. In Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology (ACE '08). ACM, New York, NY, USA, 276-283. <br/>
[26] Blake Sawyer, Francis Quek, Wai Choong Wong, Mehul Motani, Sharon Lynn Chu Yew Yee, and Manuel Perez-Quinones. 2012. Using physical-social interactions to support information re-finding. In CHI '12 Extended Abstracts on Human Factors in Computing Systems (CHI EA '12). ACM, New York, NY, USA, 885-910. <br/>
[MW91] Mark Weiser. “The Computer for the 21st Century,” Scientific American, 265 (3), September 1991, pp. 94-104. <br/>
  
==Citation==
Amit Sheth, Pramod Anantharam, Cory Henson, [http://knoesis.org/library/resource.php?id=1816 'Physical-Cyber-Social Computing: An Early 21st Century Approach,'] IEEE Intelligent Systems, pp. 79-82, Jan./Feb. 2013

[K13] Stephen M. Kosslyn and G. Wayne Miller. Top Brain, Bottom Brain: Surprising Insights Into How You Think. Simon and Schuster, 2013. <br/>
[N67] Ulric Neisser. "Cognitive Psychology." (1967). <br/>
[S10] Amit Sheth. 'Computing for Human Experience: Semantics-Empowered Sensors, Services, and Social Computing on the Ubiquitous Web,' IEEE Internet Computing 14(1), pp. 88-91, Jan./Feb. 2010 <br/>
[S14] Amit Sheth. 15 Years of Semantic Search and Ontology-enabled Semantic Applications, http://amitsheth.blogspot.com/2014/09/15-years-of-semantic-search-and.html (Accessed Jan 21, 2015) <br/>
[SAH13] Amit Sheth, Pramod Anantharam, Cory Henson. 'Physical-Cyber-Social Computing: An Early 21st Century Approach,' IEEE Intelligent Systems, pp. 79-82, Jan./Feb. 2013 <br/>
[W15] IBM's Watson may soon be the best doctor in the world, http://www.businessinsider.com/ibms-watson-may-soon-be-the-best-doctor-in-the-world-2014-4 (Accessed Jan 20, 2015) <br/>

==Related Writings, Talks and Events==
* Krishnaprasad Thirunarayan and Amit Sheth, "Semantics-empowered Approaches to [http://www.knoesis.org/library/resource.php?id=1903 Big Data Processing] for Physical-Cyber-Social Applications," Kno.e.sis Technical Report, May 14, 2013.
* A. Sheth, [http://www.slideshare.net/apsheth/physical-cyber-social-computing-an-early-21st-century-approach-to-computing-for-human-experience Physical-Cyber-Social Computing: An early 21st century approach to Computing for Human Experience], Keynote at the International Conference on Web Intelligence, Mining and Semantics (WIMS 13), June 12-14, 2013. [http://videolectures.net/wims2013_sheth_physical_cyber_social_computing/ Video of the Keynote]
* A. Sheth, [http://amitsheth.blogspot.com/2012/08/semantics-empowered-physical-cyber.html Semantics empowered Physical-Cyber-Social systems], Blog, August 12, 2012.
* A. Sheth, [http://www.slideshare.net/apsheth/physical-cyber-social-computing Physical Cyber Social Computing], Talk at What will the Semantic Web look like 10 years from now? (SW2022 at ISWC 2012), November 11, 2012.
* [http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=13402 Dagstuhl Seminar on Physical-Cyber-Social Computing] (Organizers: Barnaghi P., Jain R., Sheth A., Staab S., Strohmier M.), Dagstuhl Seminar, September 30-October 04, 2013.
* A. Sheth, P. Anantharam, [http://knoesis.org/library/resource.php?id=1884 Physical Cyber Social Computing for Human Experience], Intl Conf on Web Intelligence, Mining and Semantics (WIMS 2013), June 12-14, 2013, Madrid, Spain (Invited Paper).

Also see: [http://knoesis.org/index.php/Computing_For_Human_Experience Computing for Human Experience]

Latest revision as of 20:33, 21 May 2015

Semantic, Cognitive and Perceptual Computing:

Advances toward Computing for Human Experience


Amit Sheth, Pramod Anantharam, and Cory Henson

While the debate rages about whether AI, robots, or machines will replace humans (among Gates, Hawking, Musk, Thiel, and others), there remains a long tradition of viewpoints that take a progressively more human-centric view of computing. A range of viewpoints, from machine-centric to human-centric, have been put forward by McCarthy (intelligent machines), Weiser (ubiquitous computing), Engelbart (augmenting human intellect), Licklider (man-computer symbiosis), and others. We focus on the recent progress taking place in human-centric visions, such as Computing for Human Experience (CHE) (Sheth) and Experiential Computing (Jain) [S10]. CHE is focused on serving people’s needs, empowering us while keeping us in the loop, making us more productive through better and timely decision making, and improving and enriching our quality of life.
The Web continues to evolve and serve as the infrastructure and intermediary that carries massive amounts of multimodal and multisensory observations, and facilitates the reporting and sharing of observations. These observations capture various situations pertinent to people’s needs and interests along with all their idiosyncrasies. Data of relevance to people’s lives span the physical (reality, as measured by sensors/devices/IoT), cyber (all shared data and knowledge on the Web), and social (all human interactions and conversations) spheres [SAH13]. Web data may contain events of interest to everyone (e.g., climate), to many (e.g., traffic), or to only one (something very personal, such as asthma). These observations contribute toward shaping the human experience.
We emphasize contextual and personalized interpretation in processing such varying forms of data to make it more readily consumable and actionable for people. Toward this goal, we discuss the computing paradigms of semantic computing, cognitive computing, and an emerging paradigm in this lineage, which we term perceptual computing. In our view, these technologies offer a continuum for making the most of the vast, growing, and diverse data about the many things that matter to people’s needs, interests, and experiences. This is achieved through actionable information, both when humans explicitly desire something and through ambient, implicit understanding of when something may be useful to people’s activities and decision making. Perceptual computing is characterized by its use of interpretation and exploration to actively interact with the surrounding environment in order to collect data of relevance, useful for understanding the world around us.
This article consists of two parts. First, we describe semantic computing, cognitive computing, and perceptual computing to lay out distinctions while acknowledging their complementary capabilities to support CHE. We then provide a conceptual overview of the newest of these three paradigms, perceptual computing. For further insight, we describe a scenario of asthma management and explain the computational contributions of semantic computing, cognitive computing, and perceptual computing in synthesizing actionable information. This is done through computational support for contextual and personalized processing of data into abstractions that raise it to the level of the human thought process and decision-making.

1. Challenge: Making the Web More Intelligent to Serve Humans Better

As we continue to progress in developing technologies that disappear into the background, as envisaged by Mark Weiser [MW91], the next important focus of human-centered computing is to endow the Web, and computing in general, with sophisticated human-like capabilities to reduce information overload. In the near future, such capabilities will enable computing at a much larger scale than the human brain is able to handle, while still doing so in a highly contextual and personalized manner. This technology will provide more intimate support of our every decision and action, ultimately shaping the human experience. The three capabilities we consider include: semantics, cognition, and perception. While dictionary definitions for cognition and perception often have significant overlap, we will make a distinction based on how cognitive computing has been defined so far and on the complementary capabilities of perceptual computing. Over the next decade the development of these three computing paradigms—both individually and in cooperation—and their integration into the fabric of the Web will enable the emergence of a far more intelligent and human-centered Web.

2. Semantics, Cognition, and Perception

Semantics, cognition, and perception are terms used in a variety of contexts. We clarify our interpretation of these terms, specify the connections between them in the context of human cognition and perception, and describe how data (observations) relate to semantics, cognition, and perception. Accordingly, we ignore their use in other contexts; for example, the use of semantics in the context of programming languages, or of perception in the context of people interacting with computing peripherals.

Semantics are the meanings attributed to concepts and their relations within the mind. This web of concepts and relations is used to represent knowledge about the world. Such knowledge may then be utilized for interpreting our daily experiences through cognition and perception. A semantic concept may represent (or unify, subsume, or map to) various patterns of data; e.g., we may recognize a person by her face (visual signal) or by her voice (speech signal), but once recognized, both the visual and speech signals represent a single semantic concept to a person. Semantics hide syntactic and representational differences in data and let us refer to data using a conceptual abstraction. Generally, this involves mapping observations of physical stimuli, such as visual or speech signals, to concepts and relationships as humans would interpret and communicate them.
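This modality independence can be sketched in a few lines: two different data patterns resolve to the same semantic concept, so downstream code can refer to the concept regardless of how it was observed. The pattern signatures and concept label below are purely illustrative assumptions, not part of any actual system.

```python
# Map modality-specific patterns to a single semantic concept.
# The signatures and the concept label are invented for illustration.
PATTERN_TO_CONCEPT = {
    ("face", "signature-f-042"): "Person:Anna",
    ("voice", "signature-v-917"): "Person:Anna",
}

def recognize(modality, signature):
    """Resolve a raw pattern to its semantic concept, if known."""
    return PATTERN_TO_CONCEPT.get((modality, signature))

# Both modalities resolve to the same concept, hiding syntactic and
# representational differences in the underlying data.
assert recognize("face", "signature-f-042") == recognize("voice", "signature-v-917")
```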

Cognition is the act of interpreting data in order to derive an understanding of the world around us. This interpretation utilizes domain/background knowledge and reasoning, and involves the recognition and classification of patterns in sensory inputs generated by physical stimuli. Our interpretation of sensory inputs is robust to the noise and imprecision inherent in the environment. Interpretation converts multimodal and multisensory data from our senses into knowledge, which can then be utilized to gain insights or to answer questions.

Perception is also an act of interpreting data from the world around us. Unlike cognition on its own, perception utilizes sensing and actuation to actively explore the surrounding environment in order to collect data of relevance. This data may then be used by our cognitive facilities to more effectively understand the world around us. Perception involves interpretation and exploration, with a strong reliance on background knowledge [G97].

Perception is a cyclical process of interpretation and exploration of data utilizing associated cognition. Perception constantly attempts to match incoming sensory inputs (and engage associated cognition) with top-down expectations or predictions (based on cognition), and closely integrates with human actions. While the outcome of cognition is an understanding of our environment, the act of perception applies that understanding to ask the right (contextually relevant) question.

Exploring Connections

Perception and cognition utilize semantics to associate meaning with observations, enabling the integration of heterogeneous observations. Both deal with the interpretation of observations. Perception, however, further explores the observation space, guided by the current knowledge of the domain and the environmental observations (cognition). Perception acts as a bootstrapping process between observations and domain knowledge, resulting in the improvement of that knowledge.

3. Computational Aspects of Semantics, Cognition, and Perception

For conceptual clarity and a general understanding of what the three terms mean, we exemplify semantics, cognition, and perception using a real-world scenario of asthma management. Asthma is a multifaceted, highly contextual, and personal disease: multifaceted, since asthma is characterized by many aspects, such as triggers in the environment and patients' sensitivity to those triggers; contextual, since events of interest, such as the location of the person and the triggers at that location, are crucial for timely alerts; and personal, since asthma patients respond differently to triggers and their actions vary with the severity of their condition.

Asthma patients are characterized by two measures used in clinical guidelines: severity level and control level. The severity level is diagnosed by a doctor and can take one of four states: mild, mild persistent, moderate, and severe. The control level indicates the extent of asthma exacerbations and can take one of three states: well controlled, moderately controlled, and poorly controlled. Patients' severity level rarely changes, but their control level may vary drastically depending on triggers, environmental conditions, medication, and the patient's symptomatic variations.
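These two clinical measures can be sketched as a small state model pairing a stable severity with a fluctuating control level. The enum names and record layout below are illustrative assumptions, not taken from any clinical guideline or system.

```python
from enum import Enum

class Severity(Enum):
    """Diagnosed by a doctor; rarely changes."""
    MILD = 1
    MILD_PERSISTENT = 2
    MODERATE = 3
    SEVERE = 4

class Control(Enum):
    """Extent of asthma exacerbations; can vary drastically."""
    WELL_CONTROLLED = 1
    MODERATELY_CONTROLLED = 2
    POORLY_CONTROLLED = 3

# A patient record pairs the stable severity with the current control level.
patient = {"name": "Anna",
           "severity": Severity.SEVERE,
           "control": Control.WELL_CONTROLLED}
```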

Let's consider an asthma patient, Anna, who is 10 years old and has been diagnosed with 'severe' asthma. She is quite disciplined with her medication and avoids exposure to triggers, resulting in a control level of 'well controlled'. She receives an invitation from her friend to play soccer in a few days. Now she and her parents must balance her longing to play in the soccer match against the need to avoid exacerbating her asthma.

The solution to this dilemma is not straightforward and cannot be found using only the existing factual information on the Web, in medical books or journals, or in electronic medical records. Knowledge found on the Web may contain common knowledge about asthma, but it may not be directly applicable to Anna. For example, while there are websites [ATM15] describing general symptoms, triggers, and tips for better asthma management, Anna and her parents may not be able to use this information, since her symptomatic variations for environmental triggers may be unique. While the medical domain knowledge of asthma found in publications (e.g., PubMed) may cover symptomatic variations for various triggers, it is challenging to apply this knowledge to Anna's specific case, even though that case may be described in her electronic medical records. Furthermore, such an application of knowledge would not consider the environmental and physiological dynamics of Anna or her quality-of-life choices.
We will now explore semantic, cognitive, and perceptual computing in turn and then explain their roles in providing a solution to the asthma problem.

3.1 Semantic Computing (SC)

SC encompasses technology for representing concepts and their relations in an integrated semantic network that loosely mimics the interrelation of concepts in the human mind. This conceptual knowledge, formally represented as an ontology, can be used to annotate data and to infer new knowledge from interpreted data (e.g., to infer expectations of cognized concepts). Additionally, SC plays a crucial role in dealing with multisensory and multimodal observations, enabling the integration of observations from diverse sources (see "horizontal operators" in [SAH13]). SC has a rich history of over 15 years [S14], and the resulting annotation standards for a variety of data (e.g., social and sensor data [C12]) are in active use. The annotated data is then interpreted by cognitive and perceptual computing. Figure 1 shows SC as a vertical box through which interpretation and exploration are routed (further explained in Section 3.3). SC also provides languages for the formal representation of background knowledge.

A semantic network of general medical knowledge related to asthma and its symptoms defines asthma control levels in terms of symptoms. This general knowledge may be integrated with knowledge of Anna's specific case found in her EMR. Weather, pollen, and air quality index observations from sensors may be available through web services. These annotated observations spanning multiple modalities, together with general domain knowledge and context-specific knowledge (Anna's asthma severity and control level), pose a great challenge for interpretation. Interpreting the observations requires background knowledge about asthma that Anna's parents, unfortunately, do not possess. Anna and her parents are left with no particular insights at this step, since manually interpreting all the observations is not practical. In the next two subsections, we describe the interpretation of data using domain knowledge to derive deeper insights.
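Such annotation can be sketched as attaching ontology terms to a raw sensor reading so that semantically aware components can interpret it uniformly. The term strings below are simplified stand-ins for actual SSN ontology IRIs, and the sensor ID, observation ID, and helper function are illustrative assumptions.

```python
# Annotate a raw air-quality reading with SSN-style terms, represented
# here as (subject, predicate, object) triples. The term names are
# simplified stand-ins for the actual SSN ontology IRIs.
raw_reading = {"sensor_id": "aqi-sensor-7", "value": 152, "unit": "AQI"}

obs = "obs/1"
triples = [
    (obs, "rdf:type", "ssn:Observation"),
    (obs, "ssn:observedBy", raw_reading["sensor_id"]),
    (obs, "ssn:observedProperty", "AirQualityIndex"),
    (obs, "ssn:hasValue", raw_reading["value"]),
]

def values_of(graph, predicate):
    """All objects of a given predicate, regardless of the data's source."""
    return [o for (_, p, o) in graph if p == predicate]

# Heterogeneous readings annotated this way can be queried uniformly
# by the conceptual abstraction rather than by source-specific syntax.
air_quality_values = values_of(triples, "ssn:hasValue")
```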

Figure 1. Conceptual distinctions between perceptual, cognitive, and semantic computing along with the demonstration of the cyclical process of perceptual computing

3.2 Cognitive Computing (CC)

When launching a project on cognitive computing in 2002, DARPA defined it as “reason[ing], [the] use [of] represented knowledge, learn[ing] from experience, accumulat[ing] knowledge, explain[ing] itself, accept[ing] direction, be[ing] aware of its own behavior and capabilities as well as respond[ing] in a robust manner to surprises.” Cognitive hardware architectures and cognitive algorithms are the two broad focus areas of current CC research. Cognitive algorithms interpret data by learning and matching patterns in a way that loosely mimics the process of cognition in the human mind. Cognitive systems learn from their experiences and improve when performing repeated tasks. CC acts as a prosthetic for human cognition by analyzing massive amounts of data and answering the questions humans may have when making certain decisions. One such example is IBM Watson, which won the game show Jeopardy! in early 2011. The IBM Watson approach (albeit not the technology) has since been extended to medicine to aid doctors in clinical decisions. CC interprets annotated observations obtained from SC, or raw observations from diverse sources, and presents the results of the interpretation to humans. Humans, in turn, utilize the interpretation to perform actions, which go on to form additional input for the CC system. CC systems utilize machine learning and other AI techniques to achieve all this without being explicitly programmed. Figure 1 shows the interpretation of observations by CC utilizing background knowledge.
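The "learning and matching patterns" idea can be sketched as a toy case-matching step over EMR-like records: find the prior case most similar to the query and report its outcome. The records and the similarity measure below are invented for illustration; a real CC system would apply statistical learning over massive corpora rather than this hand-rolled attribute count.

```python
# Toy case base of prior patient records (invented for illustration).
CASES = [
    {"severity": "severe", "control": "well", "activity": "soccer",
     "outcome": "ok_with_preventive_medication"},
    {"severity": "severe", "control": "poor", "activity": "soccer",
     "outcome": "exacerbation"},
    {"severity": "mild", "control": "well", "activity": "soccer",
     "outcome": "ok"},
]

def similarity(case, query):
    """Count matching attributes: a crude stand-in for learned similarity."""
    return sum(case[k] == v for k, v in query.items())

def best_match(query):
    """Return the prior case that best matches the query patient."""
    return max(CASES, key=lambda c: similarity(c, query))

anna = {"severity": "severe", "control": "well", "activity": "soccer"}
# The closest prior case suggests play may be possible under
# appropriate preventive medication.
matched_outcome = best_match(anna)["outcome"]
```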

Bewildered by the challenges in making the decision, Anna's parents contact Anna's pediatrician, Dr. Jones, for help. Let's assume that Dr. Jones has access to a CC system such as IBM Watson for medicine, and specifically for asthma management [W15]. Consequently, Dr. Jones is assisted by a CC system that can analyze massive amounts of medical literature, electronic medical records (EMRs), and clinical outcomes for asthma patients. Such a system would be instrumental in extending Dr. Jones's cognitive abilities (minimizing the cognitive overload caused by the ever-increasing research literature). Dr. Jones discovers from the medical literature and EMRs that people with well-controlled asthma (i.e., patients who match Anna) can indeed engage in physical activities when on appropriate preventive medication. However, Dr. Jones remains unclear about the vulnerability of Anna's asthma control level to weather and air quality index fluctuations. He lacks personalized and contextualized knowledge about Anna's day-to-day environment, leaving him ill-informed for making any recommendation to Anna.


3.3 Perceptual Computing (PC)

Socrates taught that knowledge is attained through the careful and deliberate process of asking and answering questions. Through data mining, pattern recognition, and natural language processing, CC has provided technology to support our ability to answer complex questions. PC completes the loop by providing technology to support our ability to ask contextually relevant and personalized questions. PC complements SC and CC by providing the machinery to ask the next question, aiding decision makers in gaining actionable insights; in other words, it determines what data is most relevant in helping to disambiguate between multiple possible causes (of Anna's asthma condition, for example). If the expectations derived from domain knowledge and real-world observations do not match real-world outcomes, PC updates its knowledge of the real world. Through focused attention, utilizing sensing and actuation technologies, this relevant data is sought in physical-cyber-social environments. PC envisions more effective interpretation of data through a cyclical process of interpretation and exploration that loosely mimics the process of perception in the human mind and body. Neisser defines perception as "an active, cyclical process of exploration and interpretation" [N67]. Machine perception is the process of converting sensor observations to abstractions through a cyclical process of interpretation and exploration utilizing background knowledge [HST12]. While CC efforts to date have investigated the interpretation of data, they have yet to adequately address the relationship between the interpretation of data and the exploration of (or interaction with) the environment. Additionally, PC involves the highly personalized and contextualized refinement of background knowledge through this cyclical process of interpretation and exploration.
Interpretation is analogous to the bottom-brain operation of processing observations from our senses, and exploration to the top-brain processing of making and adapting plans to solve problems [K13]. This type of interaction, often involving focused attention and physical actuation, enables the perceiver to collect and retain data of relevance (from the ocean of all possible data), thus facilitating a more efficient, personalized interpretation or abstraction. Figure 1 demonstrates the cyclical process of PC involving interpretation and exploration: the interpretation of observations leads to abstractions (concepts in the background knowledge), and exploration leads to actuation that constrains the search space of relevant observations.
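The interpret-then-explore cycle can be sketched as follows: interpretation maps the current observations to candidate abstractions in the background knowledge, and when the result is ambiguous, exploration picks the not-yet-observed symptom that would discriminate between the candidates. The knowledge base, symptom names, and disambiguation strategy are invented for illustration.

```python
# Background knowledge: each abstraction and the symptoms it explains.
# Contents are invented for illustration.
KNOWLEDGE = {
    "asthma_exacerbation": {"reduced_activity", "wheezing"},
    "fatigue":             {"reduced_activity", "poor_sleep"},
}

def interpret(observed):
    """Abstractions whose expected symptoms cover all current observations."""
    return [a for a, symptoms in KNOWLEDGE.items() if observed <= symptoms]

def explore(candidates, observed):
    """Pick a not-yet-observed symptom that could discriminate the candidates."""
    if not candidates:
        return None
    remaining = set.union(*(KNOWLEDGE[c] for c in candidates)) - observed
    return min(remaining) if remaining else None

observed = {"reduced_activity"}
candidates = interpret(observed)               # ambiguous: both abstractions fit
next_question = explore(candidates, observed)  # seek a discriminating observation

observed.add("wheezing")                       # focused observation arrives
candidates = interpret(observed)               # cycle repeats; now unambiguous
```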

A PC system may be implemented as an "intelligence at the edge" technology [HTS12], as opposed to a logically centralized system that processes massive amounts of data on the Web. In Anna's case, a CC system processed all the medical knowledge, EMRs, and patient outcomes to provide information to Dr. Jones, who then applied it to Anna's case (personalization). Dr. Jones faced challenges in interpreting weather and the air quality index with respect to the vulnerability of Anna's asthma control level. With the PC system at the edge (close to Anna, possibly realized as a mobile application with inputs from multiple sensors, such as kHealth: http://wiki.knoesis.org/index.php/Asthma), it actively engages in the cyclical process of interpretation and exploration. For example, last month Anna exhibited reduced activity during soccer practice. The PC system interprets this observation as an instance of asthma exacerbation. Further, the PC system actively seeks observations (i.e., asks questions) about weather and the air quality index to determine their effect on Anna's asthma symptoms. A PC system can take generic background knowledge (poor air quality may cause asthma exacerbations) for exploration and add contextual and personalized knowledge (Anna's exposure to poor air quality may cause her asthma to exacerbate). Dr. Jones can be presented with these pieces of information along with the information from the CC component, and Anna is advised to refrain from the soccer match due to poor air quality on the day of the game. This information is valuable to Anna and her parents, possibly resulting in Anna avoiding conditions and situations that may exacerbate her asthma.
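One way such personalization of background knowledge might look in code: an edge component promotes a generic trigger rule to a personal rule after repeatedly co-observing the trigger and the symptom for one patient. The threshold, observation log, and support rule below are invented assumptions, not the kHealth implementation.

```python
# Hypothetical cutoff for "poor" air quality (invented for illustration).
POOR_AQI = 100

# (air quality index, exacerbation observed?) pairs logged at the edge
# for one patient; values are invented.
log = [(150, True), (160, True), (40, False), (155, True), (50, False)]

def personalize(log, min_support=3):
    """Promote the generic trigger ("poor air quality may cause
    exacerbations") to a personal rule if the patient exacerbated on
    enough poor-air-quality days."""
    support = sum(1 for aqi, flare in log if aqi > POOR_AQI and flare)
    return support >= min_support

# Generic knowledge becomes contextual and personalized knowledge.
if personalize(log):
    rule = "poor air quality exposure may cause exacerbations for THIS patient"
```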

4. Conclusions

Perceptual computing is an evolution of cognitive computing in which computers not only provide answers to the complex questions posed to them but also subsequently ask the right follow-up questions and then interact with the environment (physical, cyber, or social) to collect the relevant data. As PC evolves, its personalization components will extend to include temporal and spatial context and other factors that drive human decisions and actions, such as emotions and cultural and social preferences. This will enable more effective answers, better decisions, and timely actions specifically tailored to each person. We envision the cyclical process of PC evolving background knowledge toward contextualization and personalization. We have demonstrated PC and its complementary nature to SC and CC using a concrete, real-world example of asthma management. The Internet of Things, often hailed as the next great phase of the Web, with its emphasis on sensing and actuation, will exploit all three of these forms of computing.

References

[ATM15] Asthma Triggers and Management: Tips to Remember, http://www.aaaai.org/conditions-and-treatments/library/asthma-library/asthma-triggers-and-management.aspx (Accessed Jan 18, 2015)

[C12] Michael Compton, et al. "The SSN Ontology of the W3C Semantic Sensor Network Incubator Group." Web Semantics: Science, Services and Agents on the World Wide Web 17 (2012): 25-32.

[C13] Andy Clark. "Whatever next? Predictive brains, situated agents, and the future of cognitive science." Behavioral and Brain Sciences 36.03 (2013): 181-204.

[G97] Richard Gregory. "Knowledge in perception and illusion," Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences 352.1358 (1997): 1121-1127.

[HST12] Cory Henson, Amit Sheth, Krishnaprasad Thirunarayan. 'Semantic Perception: Converting Sensory Observations to Abstractions,' IEEE Internet Computing, vol. 16, no. 2, pp. 26-34, Mar./Apr. 2012

[HTS12] Cory Henson, Krishnaprasad Thirunarayan, and Amit Sheth. 'An Efficient Bit Vector Approach to Semantics-based Machine Perception in Resource-Constrained Devices,' 11th International Semantic Web Conference (ISWC 2012), Boston, Massachusetts, USA, November 11-15, 2012.

[MW91] Mark Weiser. “The Computer for the 21st Century,” Scientific American, 265 (3), September 1991, pp. 94-104.

[N67] Ulric Neisser. Cognitive Psychology. Appleton-Century-Crofts, 1967.

[S10] Amit Sheth. 'Computing for Human Experience: Semantics-Empowered Sensors, Services, and Social Computing on the Ubiquitous Web,' IEEE Internet Computing 14(1), pp. 88-91, Jan./Feb. 2010

[SAH13] Amit Sheth, Pramod Anantharam, Cory Henson. 'Physical-Cyber-Social Computing: An Early 21st Century Approach,' IEEE Intelligent Systems, pp. 79-82, Jan./Feb. 2013

[S14] Amit Sheth. 15 Years of Semantic Search and Ontology-enabled Semantic Applications, http://amitsheth.blogspot.com/2014/09/15-years-of-semantic-search-and.html (Accessed Jan 21, 2015)

[W15] IBM's Watson May Soon Be the Best Doctor in the World, http://www.businessinsider.com/ibms-watson-may-soon-be-the-best-doctor-in-the-world-2014-4 (Accessed Jan 20, 2015)

[K13] Stephen M. Kosslyn and G. Wayne Miller. Top Brain, Bottom Brain: Surprising Insights into How You Think. Simon and Schuster, 2013.