Human Oversight Will Become
More Important Than Ever

Last year a woman on a bike in Arizona was killed by a self-driving Uber test car whose safety driver failed to stop in time, raising questions about safety.  More recently, the Ethiopian Airlines crash of a Boeing 737 Max, which killed everyone on board, raised similar questions about whether technology is going too far.  Much like after a shark attack, we all fear going back in the water, despite the reality that hundreds of people are injured every day in cars driven by humans, versus the headline-grabbing handful of injuries caused by robot cars or sharks. 

Despite the statistics, we all ask, "What happens if these things, the robots, the devices, the computers we entrust every aspect of our lives to, don't work as planned?"

As C-suite executives and directors, a big part of your role is oversight of what's happening in your organization and foresight into what could be coming.  The reality is that several of these new technologies, such as artificial intelligence (AI), self-driving cars, drones, and the connection of all of our devices through the Internet of Things (IoT), will require more oversight throughout your organization.  A culture, set from the very top, that values oversight will be more important than ever as every company transforms into a technology company.  The time to start thinking about what will be needed in the future is now. 

Bias in Artificial Intelligence & Hackability

It has been broadly established that artificial intelligence can have a bias.  Amazon reportedly pulled offline an AI hiring tool because of concerns about bias against women.  The tool had learned from the characteristics of people who had been successful on the job (primarily men), which created a feedback loop: scanning resume terms, it kept selecting men for job interviews. 

AI is encoded by a team of humans who give it a framework and a set of assumptions from which it can begin to learn.  If most of the programmers are men in their twenties and thirties, then it is not just possible but likely that the AI's assumptions will reflect their worldview, unless measures are taken to oversee and spot-check for that type of bias.  This is why oversight, from the early phases of development all the way through sales and implementation, will become increasingly important.  The promise of AI connected to devices and robots to transform our lives is upon us. 
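One way organizations spot-check for this kind of bias is a disparate-impact test, such as the "four-fifths rule" used in U.S. employment-selection guidance: if one group's selection rate falls below 80% of another's, the model deserves review.  Here is a minimal sketch of that check; the numbers and function names are invented for illustration, not taken from Amazon's tool or any real system.

```python
# Hypothetical illustration of a "four-fifths rule" disparate-impact check.
# All names and numbers here are invented for this sketch.

def selection_rate(decisions):
    """Fraction of candidates selected (1 = interview, 0 = reject)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Invented example: a screening model selects 60% of male applicants
# for interviews but only 30% of female applicants.
men = [1] * 60 + [0] * 40
women = [1] * 30 + [0] * 70
print(disparate_impact_ratio(men, women))  # 0.5 -- well below 0.8: review needed
```

A check like this only catches bias in outcomes after the fact; it is a complement to, not a substitute for, human oversight of the assumptions that went into the model.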

For example, in the near future, your bed could help wake you at the right time, using body sensors to know when your REM cycle is ending closest to when your calendar says you need to get up.  Your devices may start playing music you like and gently turning up the lights to a perfect moment of awakening from restful sleep, then provide the information you need to get your day started.  Other devices may monitor your health and alert you if you need more activity, more time meditating, or an adjustment to your diet.  As your day continues, your refrigerator may help you pick the right foods to eat, order what you're missing for delivery or pickup on your way to wherever you're going (because it knows your calendar), and help you plan your dinner and activities, not to mention vacations and whatever else you are doing.  As you get into your driverless car or some other form of transportation, your perfect day continues.  Or so the exuberance of technologists would have us believe. 

That's the vision and the promise of AI and IoT, particularly once 5G arrives to run it all at faster speeds.  But what happens when you aren't a perfect human and don't want to follow all those rules?  What happens if the AI insists you take a pill you know will give you a headache or cause side effects, and it can't accept your objection because it was designed for someone of a different race, ethnicity, or body type, reflecting the people who programmed it rather than the person using it?  What if the bed malfunctions or gets hacked and wakes you up in the middle of the night, and you not only can't get back to sleep, you can't get the lights to shut off? 

There is a long list of situations that may leave you wanting to just control things yourself or want a different framework that wasn’t coded into the AI.  I haven’t even addressed the privacy issue or the fact that cameras and listening devices are baked into all of that – that’s a different blog post.

Overriding the Robot

Not being able to get a light to shut off when you want would be annoying, but you'd still be alive.  Even a privacy invasion would be emotionally upsetting, but you'd still be here.  But if you rely on your car to drive you somewhere so you can read or text, and it runs into the back of a school bus because it started snowing and the car wasn't prepared for that, and you and a busload of children die, that is not recoverable.  That's the end game. 

AI can only do what it is programmed to do and learn from what it interacts with; that is the essence of machine-based deep learning.  It can only rely upon the sensors of the device to which it is connected, and if it isn't exposed to other situations or people, it won't learn from them.  Driverless cars, for example, still don't work well in wintry climates, because bad weather blinds their sensors and the AI is not yet equipped to react. 

While the airline crashes are still being fully investigated, the reality is we would all want our human pilots to be able to flip a switch and override the computer on a plane to save our lives.  No question about it.  We don't want to worry that code could be hacked from the outside and leave the pilot with no control.  We don't want to wait months for a plane's computer code to be updated by programmers.  We want to know that the override button is always in place, and usable without reading a manual, because in an emergency with a car, a plane, heavy manufacturing machinery, or a future armed robot, people can die in the time it takes to figure out how to do it.
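The design principle behind that override button can be captured in a few lines: the human's command must win unconditionally, with no software path that can veto it.  This is a hypothetical sketch of the pattern; the names are invented for illustration and this is not any real avionics or vehicle interface.

```python
# Hypothetical sketch of a fail-safe override pattern: the physical override
# switch always wins over the autopilot.  Names are invented for illustration.

def choose_command(override_engaged, pilot_input, autopilot_output):
    """Return the command to apply.  When the override switch is engaged,
    the human's input is used unconditionally; software is never consulted."""
    if override_engaged:
        return pilot_input      # human always wins; no software veto possible
    return autopilot_output

# Invented example: the autopilot wants to pitch down, but the pilot pulls up
# with the override engaged -- the pilot's command is the one applied.
print(choose_command(True, "pitch_up", "pitch_down"))  # pitch_up
```

The point of structuring the logic this way is that the override check comes first and returns immediately, so no later condition, update, or hack in the autopilot path can reach the controls while the switch is engaged.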

Future Oversight & Foresight for What’s Coming

This is an exciting time.  We are at the very beginning of the transformation driven by AI, IoT, 5G, quantum computing, and an age where robots that do things for us actually exist.  It's not quite The Jetsons yet, but we are getting close, and many of us will see it in our lifetimes.  But in the boardroom, leadership is needed now more than ever.  As we see the ramifications of the "move fast and break things" philosophy, we have reached a tipping point: oversight, careful analysis, and foresight about what all of this means are required to ensure proper controls and checks and balances are in place before the robots simply take over, or before humans find they can't take back control in a moment of crisis. 

In the boardroom, the time is now to start thinking about what future jobs and functions will be needed as automation becomes more prevalent.  There are skills to be developed and jobs to be created if automation is to work effectively. 

In the future, human jobs that help create governance structure, policies and true oversight will be increasingly important.  There is no question that when AI is tied to blockchain based databases and ultimately operating over 5G networks connected to a myriad of things, our lives will change.  But a lot of oversight and ability to override technology will also be needed. 

Your organization will need people from the front lines (i.e., coding and development) to the very top whose job is oversight: making sure that things are working the way we expect.  You will need people who look at what's happening and think about how to protect people's privacy.  We will need lawyers to help create policies and governance structures over technology to protect us.  We will need process experts and technical expertise to create compliance and oversight of the devices and of the people who write the code (i.e., checking for bias before it gets out there).  We will need more cyber-expertise to monitor for bad actors.  We will need compliance-minded, detail-oriented humans to ensure that code is regularly updated and patched as vulnerabilities become apparent.  We will need critical thinkers and skeptics who consider worst-case scenarios and help build in easy-to-use mechanisms to regain control of a robot with an attitude. 

We will need privacy experts who remind leaders of what their customers and regulators will expect.  And we will need training to help create this new workforce of the future: a workforce that oversees the technology, and leaders who have the foresight to see what could be coming and prepare for it.  We will need leaders who take the time to understand these new technologies and their dangers and risks, alongside the opportunities to profit and grow.  We will need to trust but verify that all of these functions are being performed.  In fact, the concept of Oversight could become as ubiquitous as Human Resources, Legal, or Information Technology. 

Too many large corporations are in reaction mode right now, just trying to meet their quarterly numbers.  They call it being a “fast follower.”  That may not be good enough in the future – more foresight may be required to succeed and avoid catastrophic outcomes. 

As you prepare for your next “All Hands” meeting of senior management or board meeting, think about how oversight of technology and the changes that are coming will be needed and how you can be prepared to help provide that oversight. 

If you are interested in a digital workshop for your next team meeting to facilitate robust conversation about the future of cybersecurity, emerging technologies, and the future of work, tailored to your organization, contact Jen Wolfe at jwolfe@consultwolfe.com

Jennifer Wolfe advises boards and c-suite executives on digital disruption, the future of work and cyber security oversight. She has served as the CEO of Dot Brand 360 and has served as Managing Partner of prominent intellectual property and technology law firm, Wolfe, Sadler, Breen, Morasch & Colby. She is an NACD Governance Fellow and Board Leadership Fellow, Certified in Cybersecurity Oversight, a Direct Women Institute and Stanford Director's College graduate. Her books, Blockchain in the Boardroom, Digital in the Boardroom, Domain Names Rewired and Brand Rewired have been endorsed by senior executives at Microsoft, Procter & Gamble, DC Entertainment, Richemont, GE, Uber, and others.