If we want the tech industry to provide useful products that are also ethical, we have to make some changes as a society first.
Since last year’s Facebook data scandal—in which Facebook improperly passed user information to Cambridge Analytica, a data analytics firm that assisted Donald Trump’s presidential campaign by creating targeted ads using millions of people’s data—ethics in tech has been on just about every social media user’s mind.
Privacy and social media often have trouble co-existing. The now-defunct social media platform Path is a good case study in why:
Early Facebook insider Dave Morin left Facebook and started Path in 2010 after the Internet behemoth began to deprioritize user privacy. In sharp contrast to Facebook, Path marketed itself as “private by default.”
So it was quite the scandal when, in 2012, Path was caught uploading users’ address books without consent. The company was slapped with an $800,000 FTC fine and shut down a few years later.
Path’s demise shows how hard it is to run a successful social media company without collecting user data. Social media companies provide a product people love and it’s completely free. But they need money to run it. They need advertisers. And they need to give users ads that they will actually click and consume. The more information they collect, the more ad money they can generate and the better service they can provide.
In other words, they’re giving us exactly what we asked for.
The Facebooks of the world have little choice but to collect user data if they want to survive. If we want the tech industry to provide useful products that are also ethical, we have to make some changes as a society first. And tech companies have a responsibility to do better.
If we want ethical technology companies, we need to start embodying those ethics as a society.
I always say, the reason society sucks is because we all suck. The collective “we,” that is. Individually, we’re all perfectly decent human beings.
But as a group, we aren’t so great.
Think about it. Everyone puts their money in a 401(k) to retire. Well, a 401(k) is an investment account. So if you’re putting your money in an oil company because its stock has returned 25%—that’s phenomenal. But at the same time, the oil company is destroying the planet. And you’re buying into index funds that conveniently hide this kind of information from you.
Here’s your paradox.
When the oil company doesn’t return the 25% and instead gives you 18%, you’re frustrated. You need your 401(k) to perform so you can retire. So you put pressure on the oil company to deliver better returns.
So what does the oil company do? They build more oil rigs. They continue to drill and create externalities because that’s what we demand.
It’s the same with Facebook. We demand more features while investors demand more revenue. So Facebook is forced to collect more data to deliver on these promises. It’s simple business logic. But it’s also a Catch-22. They need data to make a better platform for users, but they lose trust when they compromise user privacy.
In other words, companies will remain unethical as long as we continue to reward them for it.
But tech companies should also be transparent about what data they’re collecting and how they’re using it.
In 2002, Microsoft founder Bill Gates declared, “Users should be in control of how their data is used.”
This was a noble promise, but one that doesn’t always hold up in the modern era. Today, companies are collecting more data than they ever have before. And while there has been a lot of talk about democratizing data, data transparency is still fairly limited.
The first step is user awareness. Consumers should always have access to their own data, and the ability to control how much and what kind of data is collected.
For example, you might allow Google to know your location at a particular moment in time, but not to keep a record of your movements over a period of several months.
That said, it shouldn’t be entirely up to users to police companies about how their data is being used.
That’s why it’s so important for tech companies today to be transparent. Ideally, people have access to and control over their own data. But for that to happen, companies must be open about exactly how they’re collecting that data, what types of data they’re collecting, and what algorithms they’re using.
More government regulation isn’t necessarily the answer.
A lot of people think we should simply regulate Facebook out of business, but that would be unwise.
Business is an ecosystem and too much regulation of major corporations like Facebook would disrupt that ecosystem. It would be like burning an entire forest to the ground to get rid of a few unhealthy trees.
Remember when the government tried to regulate us out of the financial crisis of 2008? It was a disaster. From the Federal Reserve’s disruptive manipulations of interest rates, to massive subsidies and regulations in housing, banking, and mortgages—government regulation promoted reckless financial practices and then made things worse by bailing out the worst miscreants. It doesn’t seem reasonable to expect the government to fare better with regulating tech—an equally complicated industry—than it did in finance.
Plus, the private sector tends to move at a much quicker pace than the government. Look at the financial crisis: it was the private sector, not the government, that first spotted mortgages turning from good to bad.
In tech, the private sector likewise innovates way too fast for the bureaucrats to keep up. So there’s little to no reason to think the government could provide a timely solution.
That’s why it’s up to the tech companies to set higher ethical standards.
Getting there is possible, but it’s going to take a major mindset change from both the tech industry and society at large.