On Designing Robots

How we design robots today will influence how they get adopted tomorrow, and there are a number of things to consider if we’re going to ensure the best possible future.

You are designing our future with robots. Your role may be very direct – engineering or marketing the hardware and software itself – or indirect – implementing a policy that accommodates robots, or working on some other aspect of an ecosystem that supports robot use. You may also be informing the design of future robots as you (sometimes unknowingly) interact with them in the world around you. In fact, it’s likely you’re playing more than one of these roles, because robots are everywhere. And as they become more intelligent and their capabilities more advanced, their presence in our lives at work, at play, and at home will grow.

 

When I say robots are everywhere, I’m not kidding. I’m sure you’re aware of robots used in industrial applications. Robots are assembling, welding, and inspecting many of the physical products, most of the equipment, and all of the cars we purchase today. Scientific robots have been exploring our undersea environment on Earth, and other worlds in space, for years. Many of us see robots working at home or in the office: cleaning floors or delivering office mail, for instance. We’re starting to see robots in our stores, keeping track of inventory and helping people locate items. Thousands of other robots power our internet shopping; almost every time we click “purchase” online, a robot is picking and packing our order. Robots are also assisting people in our hospitals: delivering medications and “beaming” far-away doctors into emergency rooms to give specialized care. And surgery itself is enhanced through robotic technology, giving humans finer control.

 

We think of robots as assistants to a large extent, but robots are starting to work together on their own. Autonomous vehicles coordinate as they traverse corporate campuses and certain cities today. Drones act as flying eyes that inform other robotic vehicles on the ground (in applications like mining and infrastructure inspection). Our military and humanitarian forces are safer because robots are keeping track of changes from the sky, making sure fields and buildings are safe to enter, and detecting and defusing explosive devices. Local and international disaster relief teams are using robots to find missing people while providing critical communication back to human rescuers. And in some communities, police forces are starting to trial robots as security monitors, increasing an officer’s situational awareness and, hopefully, creating safer spaces. According to the Boston Consulting Group, spending on robotics totaled $26 billion in 2015 and is projected to grow to $67 billion by 2025. More robots means more potential benefits, but a number of questions arise too.

Having acknowledged the ubiquity of robots, and that we’re affecting their design in some way, it’s only natural to ask ourselves a number of questions: What do we want those robots to do? How empowering and how valuable can we make them? What are we giving up? How should we limit them, and under what circumstances? As designers of these systems, we have to figure out what’s possible, what’s right, and what’s best. What’s the right balance of technical capability, human-factors safety, usability, and affordability…for whom…and why? What’s best for the most people? And as we address these questions we must consider something even more fundamental: our willingness to engage with robots as intended. If our robots aren’t approachable and attractive – if they don’t do what we expect or behave as we wish, if they don’t earn our trust – they will fail, and we, as their creators, will have failed.

 

Looking at the world of consumer and B2B robots on the market today, we see what we call “first wave” robots. These robots, while impressive, only hint at what is coming our way. We know this because, as designers at Essential, we’ve played a range of roles in their development. We’ve worked to identify latent user needs to frame target markets. We’ve defined user experiences, designed control interfaces, given form, and engineered functional hardware. In the process, we’ve addressed a range of human factors and usability issues, resulting in innovations that have helped our clients build markets for their technologies. As we’ve watched markets mature and mainstream user segments replace early adopters, we’ve seen needs and expectations rise. As users become more familiar with Siri- and Echo-like artificially intelligent systems, they expect more from the devices they purchase: certainly more than what the first wave of robots can deliver. In fact, many of the users we study want a lot more than “tools”; they want their robots to complete complex tasks, solve problems, and anticipate next actions. Because of this user/market pull, we’re pursuing much higher-level design questions and exploring ways robots can address higher-level human needs. In many cases, “second wave” robots will only be successful if they’re designed to interact the way humans interact with each other. The already complicated job of designing effective hardware and software is getting more complex. A new organizing framework is shaping the mission: what should the relationship be between the human and the robot? And how should that drive hardware and software decisions?

Understanding Human-Robot Relationships

Relationships develop over time and within cultural contexts, so it’s important to consider the past as we design the future – especially as it pertains to robots. In a chapter entitled “Designing Human-Robot Relationships” (published by O’Reilly in the book Designing for Emerging Technologies), Bill Hartman and I briefly described humanity’s history of creating mechanical imitations of human beings and other creatures (automatons), dating as far back as 1000 BC. Inventors like Leonardo da Vinci imagined and reportedly built robotic devices. As materials and technologies evolved, technical advances in aid of the human condition were mostly welcomed. But with the onset of the Industrial Revolution, when people started to work alongside more advanced and powerful machines, real concerns about technology arose. That’s when science fiction, a new literary genre, was born, and if you’re a fan, you know a recurring theme is machines becoming more powerful than man and robots becoming our overlords.

 

The stories are fun and scary, but what’s important about them (to designers like us, anyway) is the effect they’ve had on how people think, and how that affects what gets designed. While these writers created much of what people fear today (by imagining every bad outcome possible), they also began a few important design processes. Risk identification is one process they nailed: from minor miscommunication inconveniences, to assembly line disasters, to robo-self-awareness and violent self-preservation, a wide range of potential risks were imagined. Failure mode and error analysis is another; errors and failures were (and are) the currency of the genre. Someone inevitably analyzes the situation, finds the root cause of the problem, and embarks on yet another design process: risk mitigation. In most cases, a risk mitigation solution is proposed. Sometimes rules of engagement are put forward (like Isaac Asimov’s Three Laws of Robotics), sometimes controls are offered to minimize the downside of a robot’s misbehavior, and sometimes a design revision is the pathway out of the unsafe scenario. As designers today, I think we owe science fiction writers a debt of gratitude. They help us understand people’s fears better, they remind us of our limitations as creators, and they inspire us to think harder so we design more responsibly.

Design, to a large extent, is about balancing an ideal future vision against the cold, hard realities of technology capability, cost, and schedule to find a desirable solution. There’s a big difference between imagining what a robot can do and developing a robot that can reliably deliver on that vision. Artificial intelligence is a good case in point: real-world artificially intelligent systems are only somewhat intelligent; it doesn’t take long to discover their limits. Most (affordable) systems simply can’t interpret the world the way humans can, so despite our best efforts, even second-wave robots will demonstrate limitations that will test the patience of some users. In real-world scenarios our robots will face noisy environments. Acoustic noise, visual noise, and physical/environmental noise will compromise a robot’s ability to make the right decision and take the right action every time. We have to anticipate and design for those situations, offer human collaborators information about the confusion, and then design behaviors that are acceptable on both social and operational levels so that both the robot and the person recover effectively and elegantly. Great design is sometimes described as not being experienced as design at all. Great products enable desired experiences. The best robot technologies won’t be experienced as technology at all.
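One way to make that recovery concrete is to treat every interpretation the robot makes as a scored guess and design an explicit, socially acceptable fallback when the score is low. The sketch below is a minimal illustration of the idea, not any particular robot’s API; the `Interpretation` type, the threshold value, and the action strings are all assumptions invented for the example.

```python
from dataclasses import dataclass

# Hypothetical tuning value: below this, the robot defers instead of acting.
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Interpretation:
    intent: str        # what the robot thinks the person asked for
    confidence: float  # 0.0–1.0 score from the perception stack

def decide(interp: Interpretation) -> str:
    """Return an action string, or defer to the human when uncertain."""
    if interp.confidence >= CONFIDENCE_THRESHOLD:
        return f"execute:{interp.intent}"
    # Below threshold: surface the confusion instead of guessing,
    # so the person and the robot can recover together.
    return f"clarify:did you mean '{interp.intent}'?"
```

The design decision lives in the fallback branch: a robot that silently guesses wrong erodes trust, while one that asks a short, well-phrased clarifying question keeps the human in the loop.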

 

The good news is that the state of the (experience design) art will advance as technologies advance. According to ABI Research, we’ve entered the age of the IoRT (Internet of Robotic Things). The robots we’re designing now have access to big data, cloud computing services, and distributed intelligence in sensor-enabled environments, and they communicate with and activate other robotic systems. These robots interpret information from a variety of sources and make their own decisions. This is very exciting from a design perspective, because comparing multiple inputs means more “right” decisions will get made. On the other hand, multiple robots making the same bad decision multiplies the potential for negative consequences. Because we won’t have direct control like we used to, we need to address higher-level design issues now, before things get messy. In research and development labs around the world, fundamental decisions are being made about technology capability, control, and safety. We believe these conversations need to be structured around human interaction and behavior interpretation. We need to design the way robots communicate, the way they behave, the way they imply intent, the way they make you feel, and ultimately, the way you behave too.
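Both the upside and the downside of multiple inputs show up in the simplest possible fusion scheme: majority voting across sources. The snippet below is a hedged sketch (the sensor readings, agreement threshold, and action names are invented for illustration). Comparing independent sources raises confidence when their errors are uncorrelated, but if every source shares the same bias, even a unanimous vote can be unanimously wrong.

```python
from collections import Counter

def fuse(readings: list[str]) -> tuple[str, float]:
    """Majority-vote fusion: return the most common interpretation
    and the fraction of sources that agree with it."""
    votes = Counter(readings)
    label, count = votes.most_common(1)[0]
    return label, count / len(readings)

# Three sensors interpreting the same scene (illustrative data).
label, agreement = fuse(["obstacle", "obstacle", "clear"])

# Low agreement is a design signal: slow down, re-sample, or ask a
# human, rather than acting on a bare majority.
if agreement < 0.9:
    action = "slow_and_resample"   # hypothetical conservative behavior
else:
    action = f"act_on:{label}"
```

Note what the agreement ratio cannot tell you: it measures consensus, not correctness, which is exactly why multiple robots sharing one flawed model can multiply a bad decision instead of catching it.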

Designing a Robot

It’s very easy to become overwhelmed by the number of issues to address in any design process, let alone the design of a robot. To cope, we use a variety of tools that allow us to identify, understand, and address these complexities.

 

One of these tools is our opportunity map. We map the design opportunity space from three diverging perspectives: user needs and desires, technology possibilities, and market factors. We build a large Post-it Note map diagramming all the needs, goals, and technical possibilities we can. The “one map” approach allows the team to see everything in one place, so they can start connecting dots. Next, we imagine as many product design possibilities, experience design possibilities, and brand promise possibilities as we can. We don’t limit ourselves with preconceived notions; instead we imagine ideas we might pursue and ideas that others might pursue, creating a world of possibility. With a wide range of ideas identified, we’re able to anticipate competitive positions while refining our own. Then we turn those ideas into co-creation materials for design research. Design research, unlike market research, allows people to give form to their personal ideal future; it is about understanding why users prefer certain things over others. We conduct as much design research as possible, leveraging our co-creation materials to elicit deep conversations with every stakeholder. By having users “build” their personal ideal future robot and then comparing those personal visions to find patterns, we understand what really matters and why.

 

With good design research output, you will have the material necessary to inform a design requirements specification that goes far beyond the requirements typical of engineering performance specs. You will also have what you need to establish the basic structure of the desired human/robot relationship. Having understood users’ needs and goals, and having prioritized features and benefits, you’ll be able to define every interaction: who initiates it, how it transpires over time, and through what media or behavior. With an understanding of how your users and customers want to collaborate and how they want to share control, you’ll be able to address the ways learning and teaching take place (bi-directionally) within the context of work and non-work activities in complex, networked information and social structures.

 

Of course, the process described above is just the beginning of your robot design journey, but a deep understanding of stakeholders’ needs, goals, and desires is fundamental to our ultimate success. As you and we contribute to the design of any robotically enhanced endeavor, it’s critical that we push ourselves to new design heights. To create the best future possible, we must think and design at more levels, more collaboratively, and more responsibly than ever before.

 

_______________________________________________________________________

Scott Stropkay is a Founder and Partner at Essential Design.

Essential Design is an Innovation Strategy & Design firm providing Product Design, Service Design, and Digital Design services to help clients create breakthrough customer experiences.