Google moves Google+ shutdown date forward after finding another bug (and keeping quiet about it)
Google announced that it would shut down its tumbleweed-ridden asocial network, Google+, earlier than intended after the discovery of another huge data exposure, caused by an API bug that affected 52.5 million users.
Originally the search giant had planned to drop the axe on G+ in August of next year, but the date is now set for April.
Ironically, the trigger was an update in November this year, when Google+ was already slated for the chop. The update included an API — an Application Programming Interface, essentially a socket into which other applications can be plugged to make them work together with Google+. Except this one gave some third-party applications access to all user information, including profile data that users had set to non-public and data that had been shared privately between Google+ users.
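Google hasn't published the offending code, but the class of bug is easy to sketch. The following hypothetical Python snippet (all names and data invented for illustration) shows how an API handler can leak non-public fields when it serializes a whole profile instead of filtering by each field's privacy setting:

```python
# Hypothetical illustration of the bug class described above: an API
# that ignores per-field privacy flags. Not Google's actual code.

PROFILES = {
    "alice": {
        "name": {"value": "Alice", "public": True},
        "email": {"value": "alice@example.com", "public": False},
        "occupation": {"value": "Engineer", "public": False},
    },
}

def get_profile_buggy(user_id):
    """Leaky version: serializes every field, ignoring privacy flags."""
    profile = PROFILES[user_id]
    return {field: data["value"] for field, data in profile.items()}

def get_profile_fixed(user_id):
    """Correct version: returns only fields the user marked public."""
    profile = PROFILES[user_id]
    return {field: data["value"]
            for field, data in profile.items() if data["public"]}

print(get_profile_buggy("alice"))  # leaks email and occupation
print(get_profile_fixed("alice"))  # exposes only the public name
```

The point of the sketch is how quiet this failure mode is: the buggy version returns perfectly valid-looking responses, so nothing breaks for the third-party app — the data simply flows where it shouldn't.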
That sucks for Google, though it does mean it now gets to stop flogging a horse that to everyone else has been obviously dead since about 2013.
But what’s more important is how the company dealt with the breach.
Concealing failure, evading accountability
The pattern of concealment and evasion that goes along with this type of breach has become familiar from so-called traditional companies; everyone remembers Equifax’s catastrophic breach, though the full picture is only now emerging.
But when it comes to companies where CEOs wear T-shirts and where corporate mottos prescribe ‘don’t be evil’ (until recently), for some reason we expected a different standard of behavior.
However cool a company is, in fact, by the time it’s a billion-dollar, publicly-traded corporation, you’re going to get the same equivocation, hesitancy and lack of transparency you’d get at Antiquated Megacorp Holdings’ legal department.
Just look at Facebook’s Increasingly Bad Year: new revelations that 6.8 million people’s private photos were leaked jostle for newsfeed space with background stories about ongoing legal troubles, billion-dollar fines and imploding reputations. Nice T-shirt though.
Google, it turns out, sat on its 2015 breach for fear of negative press coverage and, tellingly, increased governmental scrutiny — perhaps leading to regulation.
‘A memo reviewed by the Journal prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica,’ the Wall Street Journal reported (paywall).
In other words, the lesson other big tech firms are learning from Facebook’s excruciating, slo-mo car crash of data privacy and contempt for its users is to do anything but treat their own users with respect.
Yes, it’s what we’ve come to expect from big corporations; but it’s also a clear sign that the Valley’s biggest names have broken forever with the open-source, pay-it-forward culture that spawned and supported them. Why?
Part of the problem has been regulation, or rather, the lack of it. That’s not a problem inherent to tech: imagine a totally unregulated company making cars or foodstuffs and you’d expect to see similar behavior.
And of course, you don’t have to imagine. In the past, especially in America, those industries were unregulated and the results were similar. Failures of testing and investigation in more recent years that allowed companies to police themselves with one hand and pay themselves with the other led to scandals like Volkswagen’s emissions test fiasco.
In that case, VW cars carried onboard computers that were set up in the factory to run at low emissions when they were being tested, and at high fuel economy when they weren’t. Everyone was happy, except international regulators who discovered a gigantic spike in diesel particulate pollution in cities as VW happily gassed us all (including themselves) to make a buck.
But part of the problem is the regulators themselves. Washington in particular has woken up to the need for some kind of regulatory oversight of big tech companies, and its instincts are less authoritarian than in some other jurisdictions, where lawmakers are more concerned with people watching movies and saying bad things than with giant corporations externalizing risk and hoovering up micro-rewards by the trillion. But it’s hamstrung by an embarrassing inability to understand the parameters of the problem.
Washington probably reflects the wider society fairly well here, in that its members have learned to use technology effectively in their everyday lives without having any idea of what’s going on behind their screens.
Senators pull top executives from Google and Facebook in to get answers, and then ask them the kinds of questions that make you wonder if they have a firm grasp of the internal combustion engine. If you’d like to inspect a flicker of human emotion passing across the otherwise blank visage of the ZuckerBot 2000™, check out his response to Senator Hatch’s query about how Facebook makes money:
‘Whaddya call this young man?’
‘It’s a smartphone, pops.’
‘A what now?’
The people behind him are literally laughing.
The result has been that our culture has tended to see Silicon Valley as a land of wonder, source of endless magic boxes; best not to ask what’s inside them, and who cares anyway.
That’s not going to be possible for much longer. The risks to their users that Google and Facebook disregarded while they constructed business empires based on retail surveillance aren’t risks anymore. They’ve happened to us: Yahoo! users, Facebook users, Google users. Whether or not these companies should be collecting our data, they obviously can’t be trusted with it once they have it.