
Private AI: Here’s What You Need to Know About Protecting Your Data in the Digital Age


These days, AI is almost always in the news in some form or fashion.

A couple of weeks ago, the New York Times published a piece on how AI can detect lung cancer. MIT Technology Review recently wrote about protecting ourselves from “malicious AI.” San Francisco banned government agencies from using facial recognition technology over fears of abuse. And Popular Science published a piece titled “Can AI escape our control and destroy us?”

But among the various concerns and promises of AI, privacy issues tend to dominate the public conversation.

And, yes, some companies have been unscrupulous with their use of AI. But AI itself is not the problem; the problem is its abuse by tech companies looking to make big bucks.

There’s a right way and a wrong way to use AI, and tech companies have a responsibility to make sure they’re not compromising their users’ data and, more importantly, their trust.

That said, if we’re all more educated about what AI is and how it works, we’ll be better able to maintain our privacy in the digital age.

Here’s why:

Most people don’t understand how AI is created.

When people think about AI, the first thing that likely comes to mind is some sort of hyper-intelligent robot, either benevolent (think Robin Williams in Bicentennial Man) or out to destroy humankind (like the robots in The Terminator).

But that’s not really what AI is all about.

In practice, AI most often means machine learning: software that uses data to identify patterns and make decisions with minimal human intervention. It learns by churning through far more examples, and far more computations, than you and I ever could.
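To make that concrete, here’s a minimal sketch in Python using scikit-learn. Every number and label below is made up purely for illustration; the point is that the model derives its own decision rule from examples instead of being explicitly programmed:

```python
# A toy illustration of "learning patterns from examples" with scikit-learn.
# Each made-up row describes a hypothetical user, and the label says
# whether they clicked a given ad.
from sklearn.tree import DecisionTreeClassifier

# Invented training examples: [daily_active_hours, monthly_purchases]
features = [
    [0.5, 0], [1.0, 1], [1.5, 0],   # lighter users: did not click
    [4.0, 6], [5.5, 8], [6.0, 7],   # heavier users: clicked
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = did not click, 1 = clicked

# "Training" is just fitting the model to the examples above.
model = DecisionTreeClassifier().fit(features, labels)

# The model can now make a decision about a user it has never seen,
# without a human writing an explicit rule for that case.
print(model.predict([[3.0, 5]]))  # -> [1]: predicted to click
```

Scale that idea up to billions of examples and far richer features, and you get the systems behind ad targeting, recommendations, and voice assistants.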

So companies are constantly collecting our data to train AI. And the more data a company collects, the more powerful its AI becomes.

This has led to some really startling capabilities.

At I/O 2018, Google demonstrated how Google Assistant can make phone calls on your behalf. On stage, CEO Sundar Pichai played back a recording of a call the Assistant had placed to a hair salon. The voice sounded so natural that the person on the other end had no idea they were talking to a digital AI helper. The Assistant even used a super casual “Mmhmmm” early in the conversation.

While this shocked conference-goers, it will soon be the norm for AI to model human behaviors.

Everyone should have access to their own data.

If you’ve ever used an Apple product, you’ve probably scrolled through countless confusing user agreements only to click “accept” and move on. Who wants to read a bunch of legalese when you’re just trying to buy a movie on iTunes?

We don’t know what we’re agreeing to, and generally speaking, we don’t really care.

The same thing happens basically every time we’re online. We happily hand over a ton of data to companies on a daily basis about what we like to eat, where we shop, where we live and work, and who our friends are.

But if people knew how AI worked, they’d be more than a little concerned about what data they’re giving up.

User awareness is the first step, but it isn’t enough. Consumers should always have access to their own data, and the ability to control how much and what kind of data is collected. For example, they might say it’s OK for Google to know their location to help them get from Point A to Point B, but that it’s not OK for Google to keep a record of their comings and goings from months ago.
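What could that kind of control look like in practice? Here’s a hypothetical sketch in Python; the category names, the per-category retention window, and the may_store helper are my own illustration, not any company’s actual API:

```python
from dataclasses import dataclass

# A hypothetical per-category consent record: the user decides whether a
# kind of data may be collected at all, and how long it may be kept.
@dataclass
class DataConsent:
    category: str        # e.g. "location", "search_history"
    allowed: bool        # may this be collected at all?
    retention_days: int  # 0 = use it to serve the request, then discard

# "Location may be used to route me, but don't keep a history of it."
preferences = [
    DataConsent("location", allowed=True, retention_days=0),
    DataConsent("search_history", allowed=True, retention_days=30),
    DataConsent("contacts", allowed=False, retention_days=0),
]

def may_store(prefs, category, age_days):
    """Is a stored record of this category and age still permitted?"""
    for p in prefs:
        if p.category == category:
            return p.allowed and age_days <= p.retention_days
    return False  # anything the user hasn't explicitly allowed is denied

print(may_store(preferences, "location", age_days=90))  # -> False
```

The design choice that matters is the default: data the user hasn’t explicitly allowed, or that has outlived its welcome, is off limits.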

Some people are fine with Google telling them how to live their lives, and that’s beautiful. Other people aren’t. The companies that respect their users’ right to privacy are the ones that will build trust and outlast the bad actors.

Companies must make radical transparency a priority.

Some major tech figures, like Elon Musk and Microsoft CEO Satya Nadella, are advocating for regulations to increase transparency, but there hasn’t been enough momentum behind the effort.

And it shouldn’t be entirely up to users to police companies like Google and Facebook to make sure their data is being used appropriately.

That’s why it’s so important for tech companies today to be honest and up front. It’s great if people have access to their own data and can control how much of it goes into the algorithms, and for how long. But for that to happen, companies must be open about exactly what types of data they’re collecting, how they’re collecting it, and what algorithms they’re using.
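One concrete form that openness could take is a machine-readable disclosure published alongside the product. The manifest below is a hypothetical sketch with invented fields, values, and a placeholder URL, not an existing standard:

```python
import json

# A hypothetical transparency manifest: one place where a company states
# what it collects, why, and what kinds of models consume it. Every field
# and value here is invented for illustration.
manifest = {
    "data_collected": ["location", "search_history"],
    "purposes": {
        "location": "turn-by-turn navigation",
        "search_history": "search ranking and ad targeting",
    },
    "retention": {"location": "session only", "search_history": "30 days"},
    "models": ["recommendation ranking", "ad click prediction"],
    "user_controls": "https://example.com/privacy/controls",  # placeholder
}

print(json.dumps(manifest, indent=2))
```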

As a consumer, you should have the right to understand how your data is being used. And if platforms aren’t telling you, maybe you shouldn’t use them.

Ultimately, users should be educated and digitally literate about AI and how it can fundamentally change their lives. In fact, it might be changing your life at this very moment in ways you aren’t aware of.

I am the CEO of Skiplist, a software company that accelerates development and drives value for organizations of all sizes.
