Artificial General Intelligence

Posted by Sumeet Singh on

Unless you have been living under a rock with no Wi-Fi or internet connection of any kind, it would be hard not to have come across the words Artificial Intelligence, or AI for short. By now we are used to hearing about the great feats AIs have been able to accomplish and how much better they are at those tasks than humans. You might start to imagine that it’s not long before the rise of the machines occurs, or that AI is more or less perfect and any further steps would mean we biodevices [or humans] would become obsolete.


But if you were to look closer you would notice a strange pattern. These AIs that are good at one thing tend to be good at only that one thing. There is a reason for this: in most cases AIs are built for a specific task or to help solve a specific problem. That does not mean an AI can only ever do that one task, but rather that the task is its specialty, just as we have doctors, lawyers, engineers, and other specialists among people. The main difference between such an AI and a regular program is that the AI is able to go beyond the rigid confines of explicitly programmed logic.


This is where we introduce the more refined concept of Artificial General Intelligence. An Artificial General Intelligence (AGI) would be a machine capable of understanding the world as well as any human, and with the same capacity to learn how to carry out a huge range of tasks. In theory, artificial general intelligence could carry out any task a human could, and likely many that a human couldn't. At the very least, an AGI would be able to combine human-like, flexible thinking and reasoning with computational advantages, such as near-instant recall and split-second number crunching.


For many, AGI is the ultimate goal of artificial intelligence development. Since the dawn of AI in the 1950s, engineers have envisioned intelligent robots that can complete all kinds of tasks -- easily switching from one job to the next. AGI would be able to learn, reason, plan, understand natural human language, and exhibit common sense. In short, AGI would be a machine capable of thinking and learning in much the same way that a human does. It would understand situational context and be able to apply things it learned while completing one task to other tasks.


For instance, deep learning algorithms used by social media sites are becoming increasingly adept at recognizing objects, people, and even detailed characteristics of those objects and people. Modern computer vision technology driven by deep learning can now identify people in images posted to social media, the position of a person in the image, their expression, and any accessories they might be wearing. This gives AI systems the ability to perceive images in much the same way humans do. These systems can go beyond simply identifying people in images and even analyze subtle patterns to discern non-obvious attributes. One example is a Stanford University study showing that deep neural networks can infer people’s sexual orientation just by analyzing their faces -- an ability humans are highly unlikely to possess.
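
To make this concrete, here is a minimal sketch in Python of the kind of image recognition described above, using PyTorch and torchvision (libraries not mentioned in the post, chosen purely for illustration). It runs a pretrained ImageNet classifier over a photo; the file name "photo.jpg" is a placeholder, and this is not the actual system used by any social media platform.

    import torch
    from torchvision import models
    from PIL import Image

    # Load a ResNet-50 pretrained on ImageNet as a stand-in for the deep
    # vision models described above.
    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights)
    model.eval()

    preprocess = weights.transforms()        # resize, crop, normalize
    img = Image.open("photo.jpg")            # placeholder input image
    batch = preprocess(img).unsqueeze(0)     # add a batch dimension

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]

    # Print the five most likely object categories with their confidences.
    top5 = probs.topk(5)
    for p, idx in zip(top5.values, top5.indices):
        print(f"{weights.meta['categories'][int(idx)]}: {p.item():.2%}")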


Another instance of AI systems performing human-like feats is natural language processing (NLP), where AI can understand speech or text delivered in natural language. AI is becoming proficient at understanding the meaning of text and speech in applications such as chatbots and the virtual assistants in smartphones (think of Siri, Cortana, etc.). And advancements in natural language generation, which is the production of information in normal human language, are being used in numerous applications where machines are required to respond to people's voice or text input.
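
On the language side, a similarly minimal sketch (assuming the Hugging Face transformers library, which the post does not mention) shows both directions: understanding text with a sentiment classifier and generating text with a small language model.

    from transformers import pipeline

    # Natural language understanding: classify the sentiment of a sentence
    # using the library's default sentiment-analysis model.
    classifier = pipeline("sentiment-analysis")
    print(classifier("My assistant answered the question almost instantly."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

    # Natural language generation: continue a prompt with a small GPT-2 model.
    generator = pipeline("text-generation", model="gpt2")
    result = generator("The weather tomorrow will be", max_new_tokens=20)
    print(result[0]["generated_text"])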


We're still a long way from realizing AGI. Today's smartest machines fail completely when asked to perform new tasks. Even young children are easily able to apply things they learn in one setting to new tasks in ways that the most complex AI-powered machines can't. 


Researchers are working on the problem. There are a host of approaches, mainly focused on deep learning, that aim to replicate some element of intelligence. Neural networks are generally considered state-of-the-art when it comes to learning correlations in sets of training data. Reinforcement learning is a powerful tool for teaching machines to independently figure out how to complete a task that has clearly prescribed rules. Generative adversarial networks allow computers to take more creative approaches to problem-solving.
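
As a toy illustration of the reinforcement-learning point -- teaching a machine to figure out a task with clearly prescribed rules -- here is a short, self-contained tabular Q-learning sketch in Python. The five-state corridor environment is invented for the example and is not from the post.

    import numpy as np

    # Toy environment: a 5-state corridor. The agent starts at state 0 and is
    # rewarded only for reaching state 4. Actions: 0 = move left, 1 = move right.
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    rng = np.random.default_rng(0)

    def step(state, action):
        nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if nxt == n_states - 1 else 0.0
        return nxt, reward, nxt == n_states - 1

    for episode in range(500):
        state, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(np.argmax(Q[state]))
            nxt, reward, done = step(state, action)
            # Standard Q-learning update rule.
            Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
            state = nxt

    # The learned greedy policy for the non-terminal states should be
    # "move right" (action 1) everywhere.
    print(np.argmax(Q, axis=1)[:-1])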


But few approaches combine some or all of these techniques. This means today's AI applications can only solve narrow tasks, which leaves us far from artificial general intelligence. Of course, the reality is we’re still some way from AI being able to convincingly mimic a human being. So, how far can AI truly go, and in what areas will humans continue to have an advantage? The answer depends on which side of the fence you are on, and we will not be going into that here, as that is a rabbit hole that would take a while to crawl out of.


Progress in developing true AGI is rather slow. Why is this the case? Surely we should be further along by now. Part of the reason is the lack of a clear path to AGI. Today machine-learning systems underpin online services, allowing computers to recognize language, understand speech, spot faces, and describe photos and videos. These recent breakthroughs, and high-profile successes such as AlphaGo's domination of the notoriously complex game of Go, can give the impression that society is on the fast track to developing AGI. Yet the systems in use today are generally rather one-note, excelling at a single task after extensive training but useless for anything else. As we mentioned at the beginning, their nature is very different from that of a general intelligence that can perform any task asked of it, and as such these narrow AIs aren't necessarily stepping stones to developing an AGI.


This has led some to speculate that AGI is something we are not even close to developing, and that it will still take decades before true AGIs come into being.


In predicting that AGI won’t arrive until the year 2300, Rodney Brooks, an MIT roboticist and co-founder of iRobot, doesn’t mince words: “It is a fraught time understanding the true promise and dangers of AI. Most of what we read in the headlines… is, I believe, completely off the mark.”


Brooks is far from the only skeptic about an imminent arrival of AGI. Leading AI researchers such as Geoffrey Hinton and Demis Hassabis have stated that AGI is nowhere close to reality. Responding to one of Brooks’ posts, Yann LeCun, a professor at the Courant Institute of Mathematical Sciences at New York University (NYU), is even more direct: “It’s hard to explain to non-specialists that AGI is not a ‘thing’, and that most venues that have AGI in their name deal in highly speculative and theoretical issues...”


Still, many academics and researchers maintain that there is at least a chance that human-level artificial intelligence could be achieved in the next decade. Richard Sutton, professor of computer science at the University of Alberta, stated in a 2017 talk: “Understanding human-level AI will be a profound scientific achievement (and economic boon) and may well happen by 2030 (25% chance), or by 2040 (50% chance)—or never (10% chance).”


If we were to create a shortlist of the issues plaguing the development of AGI, it would read something like this:

  • The lack of a working protocol for networking artificial intelligence or machine learning systems is problematic. This deficiency forces systems to work as standalone models in closed environments, a mode of operation that stands in stark contrast to the convoluted and highly social “human experience.”

  • Communication gaps stand in the way of seamless data sharing and inter-learning between machine learning models, which reduces universality.

  • The absence of an artificial intelligence network also hinders overall progress toward a common goal.

  • Organizational executives are in the dark about how to integrate AI with their business operations to drive specific results.

  • This lack of direction, compounded by the fact that many companies cannot afford to hire a dedicated team of AI experts, makes the implementation of AI platforms costly.

  • AI developers and companies often run into issues when selling their code and services.

 

However, none of these obstacles is insurmountable, and with the amount of work already done in the field, there is a solid foundation to build on. The next decade will play a crucial role in accelerating the development of AGI. In fact, as Sutton's estimate above suggests, some experts put the chance of achieving human-like AI by 2030 at around 25%. Furthermore, advancements in robotics and machine learning algorithms, paired with the recent data explosion and advances in computing, will serve as a fertile basis for human-level AI platforms. How we, as humans, would adapt to achieving AGI within the next few decades is still an unresolved debate.

