Collaborating with AI

published on 08 July 2022

We have always created tools to augment ourselves. But unlike all the other tools we've created so far, AI is not just going to assist us; it's going to be an equal partner. 

A dark dystopian future of humans vs. AI, where AI enslaves humanity, has been the main plot line of sci-fi novels and movies. 

A more hopeful future is one where we collaborate with AI. One where AI helps us tap into our untapped potential. And, one where we become superhuman. Changing our perception of "us vs them" to "us with them" is what'll save our planet.

Rise of AI

The rise of AI is driven by advances in algorithms, an influx of capital, and a massive uptick in data. 

Cloud services have reduced the barriers to entry for companies that want to embed AI into their products. With the maintenance and management of the specialized hardware and software required for AI outsourced to cloud companies, startups can focus on their distinct contributions to the AI effort. 

Does this rapid rise of AI mean that we are about to be overtaken by AI? Nope. 

Several breakthroughs have to happen before we have anything resembling machines with artificial general intelligence.

The perfect match

Tasks that are easy for humans are difficult for AI and tasks that are difficult for humans are easy for AI. 

AI doesn’t have imagination, intuition, or a common sense understanding of the world, which humans have. Humans can’t search massive amounts of data and find patterns, which AI can. 

Humans and AI have complementary skill sets. Humans and AI need to work together. Working with AI will lead us to solutions to the biggest problems facing humanity. 

AI augments humans by:

  1. Improving decision making by providing data-driven insights.
  2. Extending physical capabilities with robots. 

Humans augment AI by:

  1. Providing labeled data for training and transferring expert knowledge. 
  2. Maintaining AI systems and sustaining their proper functioning. 

AI can help human cognition overcome many of its limitations:

  • Memory: Ray Kurzweil has theorized that human experts can hold about 100,000 chunks of information about a given subject in their brains. This might sound like a lot. But when you think about how small each chunk is, it’s less impressive. Humans can hold only about 7 items in their short-term memory at any given time. AI has access to massive amounts of data and can access all of its data at any time. 
  • Decision paralysis and the paradox of choice: As psychologist Barry Schwartz explained in his book "The Paradox of Choice," humans are overwhelmed by too many choices. AI can offload some of this burden. 

Humans can help AI overcome its many limitations as well:

AI understands the world in terms of numbers, which is how it encodes incoming information. Working in a particular field like physics, it can scan every research paper and spot a gap in the current attempts at solving a problem. The AI might parse out the problem and focus on a particular aspect, just as, in Einstein's day, scientists sought a theory of the electron. 

Only Einstein realized that they were all working on the wrong problem: scientists had yet to elucidate a more basic concept, the nature of light. To confront this entirely different problem, Einstein tapped into branches of physics that had nothing to do with light, such as thermodynamics, to arrive at his theory of special relativity. 

Humans can help reorient AI to focus on the right problem and supply those same leaps of judgment. 

Humans teaching AI

Humans teaching AI is the first phase on the path to human-AI collaboration. We need to enable the transfer of knowledge from a human expert to an AI system. 

Professionals such as lawyers, accountants, engineers, nurses, forklift operators, and so on, who have little AI expertise, can impart abstract concepts to an AI system. They need to provide examples to train the AI system and correct its outputs through a feedback loop. Much of this process of teaching by humans and learning by AI can be automated. 

For example, AI can observe factory workers and learn how to control a variety of factory equipment. 
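
Here's a minimal sketch of what that example-and-correction loop could look like in code, assuming scikit-learn and a simulated expert (the post prescribes no tools): a model trains on a small expert-labeled set, flags the predictions it's least confident about, and the expert corrects those before the model retrains.

```python
# Human-in-the-loop sketch: train, flag uncertain cases, get corrections, retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

def expert_label(example):
    # Hypothetical stand-in for a human expert; a simple rule simulates
    # the expert's judgment so the sketch runs end to end.
    return int(example[0] > 0)

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 4))                    # small expert-labeled seed set
y_labeled = np.array([expert_label(x) for x in X_labeled])
X_pool = rng.normal(size=(200, 4))                      # unlabeled examples from the field

model = LogisticRegression().fit(X_labeled, y_labeled)

for _ in range(5):                                      # a few feedback rounds
    proba = model.predict_proba(X_pool)[:, 1]
    uncertain = np.argsort(np.abs(proba - 0.5))[:10]    # least confident predictions
    corrections = np.array([expert_label(X_pool[i]) for i in uncertain])
    X_labeled = np.vstack([X_labeled, X_pool[uncertain]])
    y_labeled = np.concatenate([y_labeled, corrections])
    model = LogisticRegression().fit(X_labeled, y_labeled)
```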

Types of awareness that humans can teach AI:

  • Situational awareness, which allows us to comprehend what’s happening in our environment with respect to time and space.
  • Social awareness, which is an understanding of basic social manners and protocol. 
  • Empathetic awareness, which is the ability to read others' body language and sense their moods. 
  • Salience awareness, which is the ability to immediately recognize what's relevant to a situation and discard the rest. 

AI can also learn from the collective experience of people. The data of the masses is what gives Tesla an edge in developing self-driving technology. Most competitors have to collect data the hard way, hiring and paying human safety drivers to ride along in test cars. 

Association-based reasoning / causality-based reasoning

Most AI learns by making associations from massive amounts of data. For example, if an AI analyzes stolen-car data, it might find an association that white and gray cars are the most likely to be stolen. From this, an AI model might recommend not buying a white or gray car because of the greater likelihood of it being stolen. This association-based reasoning fails to take into account that white and gray are the most popular car colors: more white and gray cars are stolen than other cars simply because there are more white and gray cars on the road. 
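
As a toy illustration in Python (with made-up, purely illustrative numbers), here's how the conclusion flips once raw theft counts are normalized by how many cars of each color are on the road:

```python
# Illustrative, made-up numbers: raw theft counts vs. theft rate per color.
cars_on_road = {"white": 500_000, "gray": 400_000, "red": 50_000}
cars_stolen  = {"white": 5_000,   "gray": 4_000,   "red": 1_000}

# Association-based view: white tops the raw counts.
print(max(cars_stolen, key=cars_stolen.get))            # -> white

# Base-rate view: the theft *rate* per color tells a different story.
theft_rate = {c: cars_stolen[c] / cars_on_road[c] for c in cars_on_road}
print(max(theft_rate, key=theft_rate.get))              # -> red (2% vs. 1%)
```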

Instead of noting that there's a correlation between X and Y and concluding that they must therefore be connected somehow, causal reasoning can conclude that X causes Y. This is something most humans understand implicitly, because most of human knowledge is composed of causal relationships.

Humans can take AI's correlation-based reasoning and supply the context of the situation. 

AI applications can be fooled by correlations and can make biased decisions based on things like race or zip code. These decisions are nuanced, as is most of human ethics, so having humans involved helps ensure that the process adheres to ethical principles and avoids discrimination. 

Examples of the human-AI team

Doctors, lawyers, farmers, writers, salespeople, and all other professionals who use AI to assist them in their decision making will be way more successful than colleagues who don't. 

  • AI for architects: AI tools offer possible design options that take into account all the regulations that govern how a structure can be built. Architects can focus on their clients' needs and their own aesthetic sense to choose and iterate on the best one.
  • AI for judges: AI recommends who should be granted parole and who shouldn't based on factors such as estimated risk of recidivism.  
  • AI for scientists: AI runs experiments at scale that quickly assess a scientist's hypotheses. 
  • AI for chefs: AI recommends combining unlikely ingredients to help chefs concoct new recipes. 
  • AI for factory workers: AI performs all physical labor such as handling objects and assembling parts. 
  • AI for drivers: AI reads the behavior of a driver and provides alerts to prevent accidents because of texting, sleeping at the wheel, or road rage.
  • AI for farmers: Drones with high-resolution cameras and AI-supported image analysis identify where fertilizer is needed or where pests have to be controlled.
  • AI for sports: In freestyle chess, a human-AI team competes against another human-AI team.
  • AI for art: AI can produce artwork that can spark an artist's imagination. 
  • AI for writers: A human author could co-write a novel with AI, each taking turns writing chapters and learning from the other's style. 
  • AI for doctors: AI can help diagnose diseases by correlating the patient’s symptoms with millions of other people with a similar set of symptoms.

A hybrid brain

The human brain is itself a hybrid. We have specialized left and right hemispheres. While their division of labor is not the oversimplified "creative right, logical left," the two hemispheres do have distinct functions. 

We also have dual-process cognition, popularized by Daniel Kahneman in "Thinking, Fast and Slow":

  • System 1. A fast-thinking, intuitive, experiential side. 
  • System 2. A slow-thinking, deliberate, logical side. 

How do we do things unconsciously, like climbing a staircase without even thinking about it? 

System 2 makes your careful, creative, rational decisions which take valuable time and fuel to process. 

System 1 is what gets you up the staircase. 

System 2 is like an older sibling charged with keeping a younger sibling out of trouble. System 2 does its job only under ideal circumstances. When under stress, System 1 is left to do the job. 

The brain takes up just 2% of our body weight but consumes over 20% of our energy. That's why complicated, careful thinking wears us out. System 1 costs us less, which is why we rely on it more heavily. We end up making System 1 snap judgments even when the decision matters, which is a problem. 

AI learns how to behave in well-defined situations. When presented with a new situation, it has to learn how to behave from scratch. Being able to generalize a response based on past experience comes very close to cognitive behavior, a once impossible barrier that now seems to be crumbling.

What if we had an AI version of Kahneman's System 2 that helps develop the ideas birthed by humans?

The hybrid approach to cognition also shows up in deep learning, where GANs pit a pair of neural networks, a generator and a discriminator, against each other to solve problems. 
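
For the curious, here's a minimal sketch of that pairing (PyTorch assumed; the post names no framework): a tiny generator learns to mimic samples from a 1-D Gaussian while a discriminator learns to tell its fakes from real samples.

```python
# A minimal GAN sketch: generator vs. discriminator on 1-D Gaussian data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))         # the generator's attempt

    # The discriminator learns to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # The generator learns to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```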

A hybrid world

We have more than 7 billion human minds. What if each of them had its own personal army of AI assistants and collaborators? As soon as a human has an idea, they could, in effect, immediately set about manifesting it. Kind of like the law of attraction in real time. 

What will this mean for the future of jobs? Perhaps the next Turing Award or Pulitzer Prize will go to a human-AI team. 

The systematic development of human knowledge through education and work experience will have less value because of this interface with AI. 

Can collaborating with AI make us dumb?

In his book "Things That Make Us Smart," Donald A. Norman brought up examples of technologies that seem to have made us lose some mental capacity. One casualty of smartphones, for most of us, is our memory for navigation. Does outsourcing this task make us dumber? 

Nope. Presumably, the brain reallocates its precious resources to other, more pressing tasks. The human brain operates on a conservation principle: as collaboration with AI enables us to offload certain cognitive tasks, the brain frees up capacity for new skills and capabilities. 

Measuring human-AI intelligence

Intelligence in human subjects is measured with IQ tests, which are designed solely to measure human intellect. 

For human-AI intelligence, we would need a new test that covers both human and AI categories of intelligence. 

Assistants -> partners

Today’s virtual assistant technology is a metaphor that aims too low: only to be an assistant rather than a partner. Instead, imagine a virtual partner that’s almost like your intellectual equal. An entity that you could brainstorm with about a possible breakthrough solution. 

This level of collaboration requires interacting with AI not just in the language space but also in the intuition or thought space. Could we carry out thought experiments with AI? It might be possible for AI to distill the complex universe down to an idealized form, its fundamental essence, where it helps us reason about fundamental principles and leads us to some sort of a Eureka moment.

Explanatory intelligence

In 1983, Richard Feynman was interviewed as part of a BBC show called "Fun to Imagine." The interviewer asked why magnets attract or repel each other. Feynman's answer is a fascinating analysis of the role of context in intelligence. He explained that to answer any question that begins with "why," there must first be a mutual understanding between the person asking and the person answering about what information can be taken for granted. So you need to know the context of the other person's understanding to properly explain anything to them. There's a big difference between the explanation a physicist will give another physicist and the explanation that same physicist will give a three-year-old. 

We understand implicitly that we need to change our explanation to suit the audience. This is because humans are great at understanding context.

Researchers are working on explainable AI: AI that can explain its rationale to humans in a way that a human understands. 
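
One common technique is feature importance. Here's a small sketch (scikit-learn assumed; the post doesn't name any particular method) that reports which inputs a trained model leans on most, a crude but readable form of rationale:

```python
# Permutation importance: shuffle each feature and see how much accuracy drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large drop in accuracy means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```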

Language for communication

Humans use writing to translate intuitions into symbols. Another human reads those symbols and translates the writing back into intuition. What we need is for AI to understand language. 

A Google project, word2vec, translates language into vector-space mathematics. Each word in the English language is converted into a sequence of numbers representing its position in a vector space. Similar words sit closer to each other in this space, which facilitates human-AI communication. 
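
A quick sketch of playing with that space, assuming the gensim library (not mentioned in the post) and its pretrained Google News word2vec vectors, which are a large (~1.6 GB) download:

```python
import gensim.downloader as api

# Load pretrained word2vec vectors (300 dimensions per word).
vectors = api.load("word2vec-google-news-300")

# Similar words sit close together in the vector space.
print(vectors.most_similar("physics", topn=3))

# Vector arithmetic captures analogies: king - man + woman ≈ queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```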

Language AI models such as GPT-3 and BERT are making huge progress in allowing humans and AI to communicate through natural language. 
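
As a small illustration of that progress, assuming the Hugging Face transformers library (not named in the post), a BERT-style model can fill in a missing word from context:

```python
from transformers import pipeline

# A masked-language model guesses the word hidden behind [MASK].
fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("Humans and AI can [MASK] together."):
    print(f"{guess['token_str']:>12}  {guess['score']:.3f}")
```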

The new psychedelic

As much as we talk about thinking outside the box, our cultural grounding is so strong that it's difficult to extract ourselves from it. Psychedelic drugs like magic mushrooms and LSD can help, but not everyone wants their consciousness altered that much. 

AI might have a similar psychedelic effect, sparking imagination. It might inspire human artists to express their creativity in ways that wouldn't strike them otherwise. 

AI might help artists break boundaries like the way Picasso did with Cubism. 

A study was done in which two groups of people were each given a Zen rock garden and asked to come up with their most creative ideas for tending it. One group was instructed how to go about the task with a PowerPoint presentation; the other was instructed through interaction with an AI system called Robovie. Robovie encouraged the participants to try new things, prompting them with questions like "Can you think of another way to do that?" The Robovie group engaged with the task longer than their PowerPoint counterparts, and they generated twice as many creative expressions. 

What’s next...

What’s next is a future that’s enhanced by AI.

The human-AI team will solve problems that humans alone couldn't solve. 

AI is not passive. It’s not simply a support tool like Adobe Photoshop. AI takes on creative responsibilities such as taking aesthetic criteria into account and producing explanations and commentaries alongside the works it creates. 

Human cognitive ability will be enmeshed with AI cognitive ability. It'll be hard to distinguish between our ideas and AI's. 

When two humans work together, they can bounce ideas off each other, and each brings their own individual strengths to the table. Two-person teams have made some of the biggest advancements in history. It's only a matter of time before a human-AI team changes history. 

Much of the difficult and abstract work of AI research has been done, and now it's time to roll up our sleeves and get down to the dirty work of turning human-AI teams into sustainable businesses. 
