It’s not only possible—but a must—to build a company that’s ethical.
Ethics is one of the biggest issues facing the tech industry today.
Increasingly complex technologies require us to pay attention and demand accountability. As Harvard Law School professor Jonathan Zittrain said: “I’m concerned about the reduction of human autonomy as our systems—aided by technology—become more complex and tightly coupled […] and that there is no clear place for an ethical dimension to be considered.”
This is particularly true in the realm of AI.
A recent survey revealed that while 82% of company leaders agreed that ethics is the foundation of any AI program, an almost equal percentage (81%) said they lack confidence that their companies are prepared to address AI ethics.
That is, AI doesn’t police itself. And to fuel it with data, it was once the norm for tech companies to make a lot of money by unethically exploiting users’ information.
While less ethical tech companies may make more ad money in the short term, exploiting user data isn’t a good long-term business strategy. Especially now. In the wake of numerous data scandals, consumers are demanding better. Facebook stock, for example, took a major hit after the Cambridge Analytica scandal.
Today, building an ethical tech company is not only possible; it’s a necessity. Here’s why your success depends on it.
Don’t keep your users in the dark when it comes to data.
Older generations who didn’t grow up with Instagram and Facebook have always been concerned about what they’re putting online. But kids these days grow up in a world where they share everything and don’t think twice about privacy.
In other words, society’s attitudes around privacy are evolving.
At the same time, however, everyone has a limit. And in the wake of scandal after scandal, people of all ages are beginning to take issue with the extent to which companies have been using personal data.
Here’s the problem: All too often, we’re completely in the dark about what data is being collected and how it’s being used.
For example, Gmail can read the email in your inbox and advertise to you based on it—something people aren’t necessarily aware of when they sign up. Few people read the fine print, and Gmail doesn’t exactly come out and say: “Hey, we’re reading your email.”
And most social media users have no idea that they are the product.
For example, users put their photos and personal details on Facebook, and Facebook turns around and sells advertisers targeted access to those users based on that data. And Facebook isn’t the only platform with that model. Most social media companies do it.
If customers knew how their data was being used, they might reassess what they’re handing over for free.
Protect the user data you already have.
When it comes to tech ethics, burying disclosures in the fine print isn’t enough. It’s incumbent upon tech companies to ensure that users genuinely understand what’s being collected and why.
Companies should also make sure they’re handling users’ sensitive information with care. For example, at my employee scheduling software company, Ximble, we built our platform to encrypt people’s most sensitive information. Passwords and other sensitive data are encrypted by default and never stored in plaintext.
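Ximble’s internal implementation isn’t public, but the general practice of never storing a password in plaintext is standard. As a minimal sketch (function names are illustrative), here’s how a salted key-derivation function can protect stored passwords in Python:

```python
import hashlib
import hmac
import os

# Iteration count: a commonly recommended order of magnitude for
# PBKDF2-HMAC-SHA256 as of recent guidance; tune for your hardware.
ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only the salt and hash are ever stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)
```

With this scheme, the plaintext password never touches disk; even if the database leaks, attackers get only salts and slow-to-crack digests.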
But unfortunately, this isn’t always standard practice. Many companies are careless with both the quality of the data they collect and how they handle it.
And even when companies take steps to protect their customers’ data, they often hand it off to third-party systems that may be less careful. For example, they might use an external system for billing, and in order to bill the client properly, they have to pass along the customer’s name and credit card number. They’re trusting the third party to take good care of that information, which it may not always do.
In other words, it’s easy to inadvertently jeopardize your clients’ information via third-party systems. So you must make a conscious effort to vet your partners.
All companies should take care to be more responsible now so they don’t wind up in a PR nightmare down the road. As we’ve all seen—it’s better to do things right up front than to have to apologize later.
Remember, real change starts with your company—not legislation.
The regulatory landscape evolves way too slowly to keep up with the rapid pace of AI technology.
But it’s catching up. So if tech companies don’t start to do a better job of policing themselves, they might end up in a situation where the government is punishing them for past behaviors.
On the other hand, if companies rely exclusively on the government to make sure they’re behaving ethically, it will lead to a number of potentially negative outcomes—like more bureaucracy and more overhead.
Moving forward, it’s up to the tech industry at large to do better.
So take care of the sensitive user information you have, and make sure you’ve got the security in place to protect it. If you’re collecting people’s email addresses and phone numbers, think about what happens if that data leaks and your customers are suddenly bombarded with spam, or worse.
In the wake of so many data breach scandals, careless handling of user data just won’t cut it anymore.