Artificial intelligence (AI) cannot be viewed as a single tool or a one-size-fits-all instruction set; rather, it is a suite of algorithmic computing capacities that can perform humanlike functions across varying settings. When we speak of AI, we usually mean dynamic machine intelligence: facial recognition (computer vision), perception (computer vision and speech recognition), natural language processing (chatbots and data mining), and social intelligence (emotive computing and sentiment analysis), to name only a few. The actual lines of code powering AI tools are commands that tell machines what to do, and as strings of directives they can be neutral. However, the people who write the code, the data that drives outcomes, and the social systems in which these tools are deployed all inevitably reflect existing structural inequalities. So even when AI code exhibits no obvious prejudice, that does not mean prejudice is absent. Moreover, in some cases we would require systems to register such social distinctions in order to represent our world accurately. Discerning social cues and non-verbal signals is something we humans do mostly unconsciously, and we are now adapting AIs to do the same. But before an AI can be socially intelligent, a process of evolution has to take place.
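To make one of the capacities above concrete, consider sentiment analysis. The sketch below is a deliberately toy, lexicon-based scorer; real systems use trained models, and the word lists and scoring rule here are invented purely for illustration.

```python
# Toy lexicon-based sentiment scorer (illustrative only).
# The word lists and the counting rule are made up for this example;
# production sentiment analysis relies on trained statistical models.

POSITIVE = {"good", "great", "helpful", "love"}
NEGATIVE = {"bad", "broken", "hate", "slow"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) - (# negative words) in the text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("the support team was great and helpful"))  # 2
print(sentiment_score("the app is slow and broken"))              # -2
```

Even this crude counting scheme hints at why bias creeps in: whoever chooses the lexicon decides which words count as "negative."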
What Is Intelligence?
Let’s take a look at what intelligence actually is. "Intelligence" derives from the Latin verb intelligere, itself from inter legere, meaning to "pick out" or discern. A derivative of this verb, intellectus, became the medieval technical term for understanding and a translation of the Greek philosophical term nous.
How to define intelligence is controversial. Groups of scientists have stated the following:
“Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.”
If we can construct a model for describing, assessing, and developing social intelligence, or "SI," then we can add another important piece to the MI model. We can characterize SI as a combination of a basic understanding of people - a kind of strategic social awareness - and a set of skills for interacting successfully with them. A simple description of SI is the ability to get along well with others and to get them to cooperate with you.
A careful review of social science research findings, ranging from Gardner and Goleman to Dale Carnegie, suggests five key dimensions as a descriptive framework for SI.
But how do you blend machine learning and human social engagement? This is where an understanding of emotional intelligence comes in. Emotional intelligence was proposed as “the subset of social intelligence that involves the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them and to use this information to guide one’s thinking and actions”. It includes the capability to become aware of, identify, and label one’s own or another person’s affective state; the capability to reason about the appraisals that lead to affective responses and to predict possible future actions from the affective state; and finally, a set of capabilities related to regulating affective states, whether suppressing one’s own socially inappropriate emotion or acting in a certain way so as to influence the emotions of another person. There is evidence that the capability of feeling one’s own emotion is an essential element of a range of seemingly unrelated capabilities.
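The three capabilities just listed (labeling an affective state, appraising it to predict action, and regulating the response) can be sketched as a trivial rule-based pipeline. Everything below, including the affect lexicon, the appraisal table, and the function names, is a hypothetical illustration, not a description of any real emotive-computing system.

```python
# Hypothetical sketch of the three emotional-intelligence capabilities:
# (1) label an affective state, (2) appraise it and predict a likely
# action, (3) regulate the response. All rules are invented for
# illustration; real affective computing uses learned models.

AFFECT_LEXICON = {"furious": "anger", "delighted": "joy", "worried": "fear"}

def label_affect(utterance: str) -> str:
    """Capability 1: identify and label the speaker's affective state."""
    for word, affect in AFFECT_LEXICON.items():
        if word in utterance.lower():
            return affect
    return "neutral"

def appraise(affect: str) -> str:
    """Capability 2: predict a likely next action from the affect."""
    return {"anger": "escalate", "fear": "withdraw", "joy": "engage"}.get(affect, "continue")

def regulate(predicted_action: str) -> str:
    """Capability 3: choose a socially appropriate response."""
    if predicted_action == "escalate":
        return "de-escalate: acknowledge the frustration"
    return "respond normally"

affect = label_affect("I am furious about this delay")
print(affect, "->", regulate(appraise(affect)))  # anger -> de-escalate: acknowledge the frustration
```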
For example, lesions in emotion-related brain regions leave people unable to feel emotions and, as a result, unable to engage in even simple decision-making or to make socially appropriate choices. Capgras syndrome disconnects emotion circuits from the rational areas of the brain, leading people to conclude that impostors have replaced their friends or family. What role should the concepts of social and emotional intelligence play in machine intelligence? The answer naturally depends on the concept of machine intelligence one chooses to adopt, which is in turn related to one's perspective on machines in general.
From a philosophical point of view, one could ask basic questions such as whether a machine can potentially be conscious. We will approach the topic from a more pragmatic perspective here, and view machines from a utilitarian point of view. From this viewpoint, we can say that a machine is “intelligent” if it is useful, that is, if it is good at its job. Given that humans have created machines to do work for them, one can say that a machine’s job, in general, is to make the life of humans easier in one way or another.
Merging of AI and SI
There is no doubt that artificial intelligence raises wider questions about the society in which we live and that of the future. Market-research institutes foresee huge efficiency gains, but are these credible and, if so, how will such gains be distributed? Many, including feminists and anti-racists to name a few, have expressed concern that the algorithms on which AI depends unconsciously embed the social prejudices of their human creators. Then there are the issues of privacy and civil liberty surrounding the possession and control of the data mined by artificial intelligence. How education must change so that citizens can feel empowered rather than alienated by AI is also at stake, as is the ever-present issue of where AI fits in meeting the existential challenge of climate change and biodiversity loss.
Artificial intelligence (AI) is reshaping the future of many industries, ranging from healthcare, energy supply chains, and building controls to legal services. Transportation systems will undergo a similar transformation; self-driving vehicles, delivery robots, and smart environments will be ubiquitous. To realize this future, we have to develop machines that can not only perform intelligent tasks but do so while co-existing with humans in the open world, without negatively perturbing their habits. Self-driving vehicles will navigate the streets alongside human drivers, and delivery robots will share sidewalks with pedestrians. Consequently, machines need to learn unwritten common-sense rules and comply with social conventions.
In order for such an environment to be useful, the agents in the system must clearly be endowed with cultural characteristics that closely mimic those of their human counterparts. At the same time, they must be modeled in such a way as to yield fairly decisive predictions about their behavior. Such systematic ways of representing culture do not appear in the artificial intelligence literature, so for this project they were adapted from three different sources in social theory, drawn from various social science disciplines. First, the representational framework for culture has been taken from the grid-group typology used in cultural anthropology and political science.
Besides social simulation, another possible avenue for fruitful application of social theory to AI might be natural language processing, where an understanding of the social context of language is increasingly seen as key to unlocking not only semantics but even syntax. Yet another is the design of user-modeling computer interfaces, since human-computer interactions are increasingly viewed as a form of social process. In both of these areas, of course, a certain amount of formalization of sociological concepts will be necessary before they are ready for use in AI systems. On the other hand, one body of social theory that is already highly mathematical, social network theory, is more easily adaptable, and indeed is already the subject of a number of applied projects that may in the future lead to working AI systems.
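Social network theory's mathematical character is easy to demonstrate. One of its simplest formal tools is degree centrality: the fraction of the other nodes a given node is directly connected to. The friendship graph below is made up for illustration.

```python
# Minimal sketch of one formal measure from social network theory:
# degree centrality = (number of neighbors) / (number of other nodes).
# The friendship graph is invented for this example.

def degree_centrality(graph: dict[str, set[str]]) -> dict[str, float]:
    """Map each node to the share of the other nodes it is linked to."""
    n = len(graph)
    return {node: len(neighbors) / (n - 1) for node, neighbors in graph.items()}

friends = {
    "ana": {"bo", "cy", "di"},
    "bo":  {"ana"},
    "cy":  {"ana", "di"},
    "di":  {"ana", "cy"},
}

print(degree_centrality(friends))  # ana is most central: 3/3 = 1.0
```

Because the theory already lives in this quantitative form, translating it into working code requires far less prior formalization than, say, grid-group typology.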
AI used to be software only, but it is now also commonly embedded in physical devices that can manipulate our world. The impact will be much larger than before, and it is therefore appropriate to focus on the intersection of engineering and artificial intelligence research. Big data and much faster computers have narrowed the knowledge gap between artificial agents and people, but the more fundamental problems remain the same and unsolved.
The interactions between philosophy, social sciences, and computer science around social intelligence are manifold, and many concepts and theories from social science have found their way into artificial intelligence and agent-based research. In the latter, coordination and cooperation between largely independent, autonomous computational entities are modeled. Conversely, logical and computational models and their implementations have been used in the social sciences to help improve simulations, hypotheses, and theories. Among the most prominent subjects at the interface are action and agency, communicative interaction, group attitudes, socio-technical epistemology, and social coordination. In computer science, these concepts from social science are sometimes deployed at a more metaphorical level rather than in the form of rigorous implementations of the genuine concepts and their corresponding theories.
A Complex Future
As the mechanisms of machines become more complex and their causal chains of functioning become opaque, humans need to replace simple cause-and-effect models with more complex mental models of how machines work. A natural second metaphor for machines is that of a social entity. Even though people often explicitly deny anthropomorphizing machines and computers, they still interact with them as social entities in states of “mindlessness”, states similar to the implicit/automatic systems in dual-processing theories.
In fact, it has been shown that people tend to interact with computers, new media, and the like as they do with people: they are polite, behave differently toward computers that speak with a male versus a female voice, use proximity-regulating behavior with faces on a screen, and much more. It seems that people apply rules similar to those governing social behavior when interacting with any new technology beyond a certain threshold of complexity.