A Book Review and Commentary on Roger McNamee’s new tell-all about Facebook, “Zucked: Waking Up to the Facebook Catastrophe”

In my digital disruption workshops, I often lead a discussion among executives and board members about Facebook’s role in the disruptive societal shifts we are all experiencing, capped with the role companies have played in building the Facebook empire through advertising dollars.  After reading McNamee’s new book and poring over the latest revelations from the UK investigations, it becomes even clearer that directors and senior executives should be thinking carefully about the future of Facebook and the role other mega-technology companies play in shaping the expectations of your customers and your workforce, as well as what laws and regulations may come as legislators awaken to the reality of how these companies have actually made their fortunes.

Regulation in Europe and, ultimately, the U.S. will follow the slippery slope Facebook (and others) have greased by morally looking the other way in favor of profits and the furtherance of a business model that exploits and manipulates human behavior in exchange for “free” and “convenient” services.  Despite the platitudes Facebook has extolled since its inception, its way of making money has truly come at all costs.  Technology creates great opportunities and connects people in new ways.  But unchecked, unregulated, and without proper oversight, we have now seen that even the greatest idealists can fall.  What did we all expect when the business model that emerged in the digital era was to give everything away for free?  These companies had to monetize their businesses somehow, and we all blindly assumed it would be for a greater good.

I’ve highlighted here a few of the more compelling arguments McNamee makes in his book, along with commentary on what board members should be thinking about.  For background, McNamee is a self-proclaimed early mentor and advisor to Mark Zuckerberg and Facebook.  After the 2016 election, he wrote a memo to Zuckerberg and Sheryl Sandberg opining on the role it was clear to him that Facebook had played in swaying the election.  If you love Facebook, you might not want to read this blog or the book, but you probably should.  If you never understood or liked Facebook, read on; you will feel vindicated.

The First Reality:  Facebook makes money by intentionally manipulating behavior, regardless of the unintended consequences.

There is no Coca-Cola-style secret formula behind Facebook’s success.  The product is out there for all to see and completely free.  People willingly, often enthusiastically, provide the most intimate details of their lives: photos, experiences, likes, dislikes and beliefs, including outrage, fear, happiness and the full gamut of emotions that fill a human life.  Many spend hours and hours of their waking lives, sometimes at the expense of much-needed sleep, trawling through Facebook to learn what their friends, friends of friends and long-lost friends are doing.  Research has shown that this type of digital voyeurism typically does more harm than good, and that real relationships are better for us than digital ones, but that doesn’t stop the human weakness to compare oneself to others and avoid the fear of missing out (FOMO).  These are no longer speculative statements, but a reality of what Facebook does.  So how did Facebook succeed with that model?

Facebook makes money by selling ads geared towards its users.  That’s fairly simple.  But it’s what’s going on behind that Wizard of Oz curtain that is so concerning. 

Sean Parker, one of the earliest investors in Facebook (he also co-founded Napster) and a Silicon Valley icon, began saying in interviews about a year ago that he was concerned Facebook was designed to manipulate behavior and get people hooked on the site by triggering releases of dopamine.  Speaking to The Guardian, Parker said: “It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” The basic human flaw we all possess is that we want to be liked.  Facebook preyed upon this to create addiction cycles that keep people returning: the fear of missing out, the fear of not being liked and, better still, the reward when others grant us likes with a simple click.  It sounds sophomoric, but that’s exactly what they did.

And when they learned how to do it better than any other company and hooked hundreds of millions of people around the world, they took the next logical step.  They tested how far they could push it and how much money they could make monetizing all that data.  Through what’s often referred to as A/B testing in digital speak, they tested whether they could make people happy or sad by what they put into their feeds, and learned that they could.  They tested whether angry or fear-based messaging could keep people hooked longer.  It could.
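To make the mechanics concrete, here is a minimal, purely illustrative sketch of the kind of A/B test described above: users are randomly split into two cohorts, one cohort’s feed is weighted toward emotionally charged content, and the resulting engagement is compared.  Everything here, from the group split to the numbers, is a hypothetical simulation, not Facebook’s actual code.

```python
import random
from statistics import mean

# Hypothetical A/B test on feed composition. Group A sees a neutral
# feed; group B sees a feed weighted toward emotionally charged posts.
# All engagement numbers are simulated for illustration.

def assign_group(user_id: int) -> str:
    """Deterministically split users into two cohorts."""
    return "A" if user_id % 2 == 0 else "B"

def simulated_minutes_on_site(group: str) -> float:
    """Stand-in for logged engagement; a real platform would measure it."""
    base = random.gauss(30, 5)          # baseline session minutes
    boost = 8 if group == "B" else 0    # assumed effect of charged content
    return max(0.0, base + boost)

results = {"A": [], "B": []}
for user_id in range(10_000):
    group = assign_group(user_id)
    results[group].append(simulated_minutes_on_site(group))

for group, minutes in sorted(results.items()):
    print(f"Group {group}: mean session = {mean(minutes):.1f} minutes")
# If cohort B reliably spends more time on the site, the variant ships.
```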

Facebook convinced much of the planet to trust it and to start using Facebook logins across other websites, making life more convenient by not having to remember so many passwords.  Sounds great, right?

What most people didn’t realize was that Facebook had just duped them into allowing surveillance of everything they did on their devices and computers all across the internet, spying on them and their friends without what most of us would define as consent.  Facebook used embedded code to monitor (or spy on) its users even when they were logged off (Facebook is not alone; Google does this too).  It could even capture information about friends who never use Facebook.  Facebook then learned it could sell this data to third parties, who might in turn use it to buy more advertising on Facebook by targeting a specific audience.  The treasure trove of data had no limits.  The more Facebook could collect and target, the more money it could make selling this bounty to anyone, even nefarious actors.
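The mechanism behind this off-platform surveillance is mundane: third-party sites embed a small piece of platform-supplied code (a “like” button or a tracking pixel), and every page view fires a request back to the platform carrying the page URL and an identifying cookie.  Below is a toy sketch of how such pixel hits, aggregated server-side, become a cross-site browsing profile; the field names and data are invented for illustration, not any company’s actual schema.

```python
from collections import defaultdict

# Toy aggregation of tracking-pixel hits. Each embedded pixel on a
# third-party page reports the visitor's cookie ID and the page URL
# back to the tracker. All identifiers and URLs are hypothetical.

pixel_hits = [
    {"cookie_id": "u-1001", "page": "https://news.example.com/politics"},
    {"cookie_id": "u-1001", "page": "https://shop.example.com/running-shoes"},
    {"cookie_id": "u-1002", "page": "https://forum.example.com/parenting"},
    {"cookie_id": "u-1001", "page": "https://travel.example.com/deals"},
]

# Build a per-person browsing history across unrelated sites.
profiles = defaultdict(list)
for hit in pixel_hits:
    profiles[hit["cookie_id"]].append(hit["page"])

for cookie_id, pages in profiles.items():
    print(cookie_id, "->", pages)
# The moment cookie u-1001 appears in a logged-in session, this whole
# cross-site history can be attached to a real, named account.
```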

The concept of targeting is not exactly new.  Since the early days of direct mail and cable, marketers have been able to segment advertising messages by where people live or what they watch on television, creating old-school demographic and psychographic profiles of their customers.  In those days, real market research was required to understand the customer and to know when and how to reach them at an emotional level; this is what companies like Coca-Cola and Procter & Gamble mastered.  But Facebook operates at a far deeper level because of the intimate nature of its surveillance of each user’s full psychological makeup, and because of the level of trust users have placed in it.

According to McNamee, Facebook has become one of the wealthiest and most powerful corporations in the world by exploiting its users’ most intimate data across devices and by selling data about what they do online, even when they are not on Facebook.  It knows everything; it manipulates users to spend more time; and it sells that attention to other parties, all the while claiming that you control your data.  “When called to account for this, tech companies blame pressure from shareholders.  Given that the founders of both Facebook and Google have total control of their companies, that excuse falls short,” says McNamee.

Since its inception, Facebook (and other big tech companies) have claimed to be merely technology platforms, not media companies, and that they should not face the same rules, regulations or laws as media companies.  They make a lot of money precisely because they don’t have to play by the same rules as everyone else.  For example, traditional media and cable companies are required to follow election rules on political advertising.  Facebook is not; it can mislead you without consequence.  And while the FTC has focused inquiries on Facebook and invoked penalties through consent decrees, the punishments and fines have thus far been so inconsequential that they have done little or nothing to change its practices.
  
The Big Revelation:  Brexit & the 2016 US Presidential Election

In his book, McNamee details his research and discovery process and how he became alarmed when he hypothesized that Facebook had been used to manipulate the outcome of the Brexit vote in the UK and the 2016 presidential election in the U.S.  What we now know as the Cambridge Analytica breach is actually just the tip of the iceberg.  A longtime Facebook fan and advocate, McNamee describes how disappointed he was to learn not only that Facebook had done these things, but that it knew what it was doing.  It knew it was monetizing manipulation, and the consequences were not what mattered.  He describes a culture at Facebook that will not accept criticism or fault and that insulates its leaders from any negative information.  While he clearly writes from a liberal political perspective, his analysis is politically neutral, and his concerns for the future should be considered by both parties as they look to future election cycles.

If any company has so much information that it can help third parties (whether state actors or simply a company that wants to target a disadvantaged group) manipulate individuals, shouldn’t that be regulated?  We don’t allow doctors or lawyers to exploit what they know about their clients without ramifications.  If Facebook can decide to manipulate you via your feed to make you happy or sad, it can certainly figure out who is more predisposed to liberal or conservative views, circulate news that outrages and enrages those people into action, and facilitate the spread of more and more incomplete information (to put it mildly).  The misinformation spreads virally, creating the conflict and polarization we all experience today.  The artificial intelligence powering Facebook reinforces existing beliefs to keep you hooked, and occasionally enrages you, also to keep you hooked.  This means the AI is actually doing more harm than good.  While there are certainly other factors at work, the manipulation enabled by Facebook and other big technology companies has clearly propagated, accelerated and made possible this level of conflict in our society.  And these are now incredibly wealthy corporations whose executives enjoy a level of wealth few others in our country ever will.

While McNamee focuses primarily on the outcome of these two elections as what awakened him to the problem, he clearly explains the culture and the nature of what was happening at Facebook.  He details how, even when faced with evidence that Facebook had been used to cause harm, spread hate and violence, or manipulate people, its executives looked the other way, so focused on profits and their own superiority that they were blinded to reality (perhaps manipulated by their own artificial intelligence).  What is even more concerning is the potential violence that is the consequence of Facebook’s action or failure to act, and the moral culpability that everyone shares when this happens.

The Violence:  Emerging Markets, Suicide and Crime

When Facebook touted that it would bring internet service to emerging markets, it may not have imagined that connectivity would be used to stir up violence against so many.  In Myanmar, for example, its platform was used to virally build fear and hate, “enabling religious persecution and ethnic cleansing of the Rohingya minority,” McNamee writes.  In Sri Lanka, the government ordered internet service providers to block Facebook, Instagram and WhatsApp due to “an explosion of real-life violence against that country’s Muslim minority, triggered by online hate speech.”  Facebook couldn’t rein in what it had created.  The platform has this manipulation baked in, preying upon the worst rather than the best of human behavior, with the goal of delivering echo chambers that keep people hooked and sell more advertising and data.  Facebook cannot simply reverse what it has built.  Artificial intelligence builds knowledge based on what people say and how they behave; when it is built with the intent to create echo chambers and keep people on the platform longer, more code is not going to solve the problem.  Despite the apologies, Facebook’s practices have not changed.

Back in the U.S., the targets are also our youth and those already disadvantaged.  Headline after headline reports young people committing suicide on Facebook Live, or filming themselves while they drive, distracted, into a fiery crash.  These desperate attempts at fame become self-fulfilling prophecies as more young and disadvantaged people watch and desire the same fame.  Facebook takes these videos down, but not before thousands, hundreds of thousands or even millions of other young people view them without any filter.  And the algorithms push them out, because that’s what they are designed to do when they see people reacting emotionally to something.  “The platforms prey on weaknesses in human psychology, using ideas from propaganda, public relations and slot machines to create habits, then addiction . . . brain hacking,” says McNamee.  Violent crimes are also committed and, foolishly, captured on video in a desperate desire for attention.
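The ranking logic at work here is, at its core, a simple optimization: score every candidate post by predicted engagement and show the highest scorers first.  The deliberately simplified sketch below uses invented posts and weights, not any platform’s real model, but it shows why emotionally charged content rises to the top whenever engagement is the only objective.

```python
# Simplified sketch of engagement-ranked feed ordering. The posts and
# scoring weights are invented for illustration; the point is that any
# system optimizing purely for engagement surfaces whatever provokes
# the strongest reaction.

posts = [
    {"title": "Local bake sale raises funds",  "reactions": 40,  "shares": 5,   "angry": 1},
    {"title": "Outrageous claim about rivals", "reactions": 900, "shares": 400, "angry": 650},
    {"title": "Neighborhood gardening tips",   "reactions": 120, "shares": 30,  "angry": 0},
]

def engagement_score(post: dict) -> float:
    # Shares spread content further, and angry reactions predict
    # comments and repeat visits, so a naive model weights them heavily.
    return post["reactions"] + 3 * post["shares"] + 5 * post["angry"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f"{engagement_score(post):>6.0f}  {post['title']}")
# The inflammatory post outscores the others by an order of magnitude
# and is shown first, to everyone, by design.
```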

If Facebook’s platform is being used for such nefarious purposes, is there some moral or legal responsibility?  Is it okay to simply say it’s a technology platform and therefore not responsible for what people post?  Is it okay for your company to spend money advertising on Facebook because that’s what you need to do to sell your product?  Where does the responsibility begin and end?  McNamee notes that “Procter & Gamble and Unilever expressed displeasure with the platforms’ lack of transparency and accountability and argued that advertisers should not tolerate the biggest internet platforms’ failure to conform to the disclosure standards of the advertising industry.”  More recently, companies like Nestle, Walt Disney and AT&T have pulled advertising from YouTube after it failed to keep their ads from appearing next to questionable videos, some sexualizing young children or giving instructions on how to commit suicide.

The Tradeoff:  Most companies are complicit in this.  Facebook sells your product, and people like you there and spread the word about you, so why wouldn’t you be there?

If your company has spent money on Facebook, have you contributed to these unintended consequences?  Do you have a responsibility to change that going forward?  I fully understand and appreciate the power of Facebook.  I have used it myself to promote fundraising campaigns, and I have used other social media to try to reach potential clients and promote my business.  I understand the power, and how much easier these platforms are to use than traditional channels.  And, to be fair, Facebook has done a lot of good, too.  It has connected friends and family around the world in a way that would not have been possible previously.  It has helped people raise money for important causes and share good times, as well as pain and suffering, allowing us to feel empathy and reach out with kind words or financial support to people in need we might not otherwise have known about.  Small businesses can promote their products and services for far less money and with far greater amplification because of everything Facebook has built.  Those are all great success stories from Facebook and other technology companies.

But we need to accept the reality of what Facebook is and what it has done, and ask ourselves, “Does the benefit outweigh the harm?”  I don’t claim to have the answer, but I believe education is critical in all things related to the new disruptors:  emerging technologies and how they are deployed (the good and the bad), the cyber vulnerabilities they create, and the societal shifts at work, which can now be manipulated far more easily by large corporations than in the past.  Facebook set out to change the world, and it did.  Facebook wanted to be one of the wealthiest, most powerful and most profitable companies for its shareholders, and it did that.  But Facebook has operated somewhat in the dark to most of us, and now the light is being shined on it.

What to Do Now

In the boardroom, your job is not to make decisions about whether or not to advertise on Facebook, but you do set a moral tone for the company.  This topic is likely not at the top of your priorities in an already packed boardroom agenda; I get it.  Every company has priorities and roadmaps to follow, but it may be worth a conversation over morning coffee or evening cocktails to consider how our society moves forward now that we know the wizards of technology are not what they appeared to be.  Perhaps Zuckerberg and other leaders will change course and face up to what technology can and can’t do.

In your own leadership, consider a few important factors:

  • Culture and tone at the top.  You help set the tone for the company.  While for-profit companies are certainly out to make a profit, is it at all costs?  Or should there be some consideration of the consequences (intended or not) of what your company does when deploying technology, including using companies like Facebook, Google, Twitter, Amazon and others?

  • What’s the real cost of leveraging these platforms?  They are easy and cheap now, yes, but will they be in the long term?  What could happen to your business if they suffer a cyber-breach or an Enron moment?  What if you build your business there and then they change all the rules?  What does that do to your business?

  • Should you be investing in building some capabilities yourself, rather than simply leveraging others’ platforms without considering the impact?

  • Continue your education.  The implications of artificial intelligence connected to everything in our lives, and of robotics doing jobs humans used to do, come with a long line of ethical questions and unintended consequences.  Be sure to get educated on what these are, have meaningful, intellectual conversations about them, and form a point of view as a board or executive team so that when difficult decisions arise, it’s not the first time you’ve encountered them.

  • Whether or not you are a regular Facebook user, it may be time to reflect on your own use of Facebook, Google, Twitter, Amazon and others.  How much of your personal privacy are you giving away?  Is the convenience worth it?  Are you in an echo chamber, or are you seeking out diverse viewpoints to fully inform your thinking and business oversight?

I have often tried to turn off cookies or move away from Google’s products, only to go back in frustration.  I get it: the status quo is just easier, for all of us.  Perhaps there’s no turning back.  It’s a tall order for our legislators and regulators to find a way to regulate an entire societal shift in behavior created by a small handful of companies, but let’s hope they get good advice and start taking small steps toward reversing the unintended consequences of what we have all created over the last fifteen years.  We’ve all been Zucked, but we don’t have to stay that way.

If this interests you, McNamee’s book is an easy read, and I encourage all directors to read it to better understand what big technology companies are doing with their immense surveillance power and data-collection techniques.

If you are interested in a digital session at your next board meeting to discuss cutting edge emerging technologies, new vulnerabilities and societal shifts, contact Jen at jwolfe@consultwolfe.com

Jennifer Wolfe advises boards and C-suite executives on digital disruption, the future of work and cybersecurity oversight. She has served as the CEO of Dot Brand 360 and as Managing Partner of the prominent intellectual property and technology law firm Wolfe, Sadler, Breen, Morasch & Colby. She is an NACD Governance Fellow, certified in cybersecurity oversight, and a graduate of the Direct Women Institute and Stanford Director’s College. Her latest book, Blockchain in the Boardroom, is a practical guide for directors and C-suite executives.