How Your Company Can Safely Incorporate AI—And Drive Business Value

ChatGPT may have powered an AI inflection point last year, but the reality is, companies have been trying to get business value from AI for years.

To date, the surest way to get business value from AI has been automating business processes: data transfer, replacing lost or stolen credit cards, extracting provisions from legal documents, and similar rote, manual tasks that eat up employees' time and crowd out more impactful decision-making.

But increasingly, AI's business uses are edging into more cognitively demanding territory: customer service, blog writing, identifying targeted sales opportunities, and even making health recommendations.

And as AI’s use cases become more sophisticated, its risks become even more pronounced.

We’re faced with a dynamic moment: The companies that find the quickest, most sustainable path to AI-driven business value will gain significant competitive edges—but only if they can mitigate AI’s very real risks.

What are the main risks associated with AI?

1. Inaccurate/toxic content. By now, it's well known that generative AI tools have a tendency to state inaccurate information as if it were fact. These tools also absorb human biases from their training data, meaning answers presented as factual may in fact be restatements of prejudicial perspectives.

The risks associated with bad information are obvious: Bad information leads to bad decisions. Someone using generative AI for inexpensive medical advice could fail to get necessary treatment; someone using it for a blog post could write something inaccurate and damaging to their company/brand.

2. Legal/regulatory risk. At the time of writing, AI is a brand-new, lightly regulated space. But it won't stay that way for long. Various bodies have proposed regulations at levels ranging from the state to the supranational, covering both specific topic areas (e.g., data privacy) and entire industry verticals (e.g., banking).

Adopting AI tools without regulatory awareness could put companies in a position of being unwittingly on the wrong side of new regulations.

3. Knowledge/competency disparities. Reaping maximum business value means educating as many people as possible on the risks and best practices associated with AI. But that education will come most easily to technical employees; for people with less technical expertise, it will be an uphill battle.

How can companies mitigate those risks when implementing AI?

1. Set organizational principles. Organizational principles for AI implementation should respond to the risks outlined above. Design these principles around preventing AI-driven misinformation, staying ahead of regulatory risk, and sharing knowledge and best practices. Mitigating AI risks will take more than ethical ideals, but an ethical framework is the foundation for specific, prescriptive actions.

2. Define governance owners. Establishing accountability measures is crucial to ensuring that ethical principles become more than words on a page. Decide which personas will define, own, and enforce the AI governance process.

3. Establish knowledge-sharing systems. Especially early on, it will be useful for companies to establish repeatable knowledge-sharing systems around AI adoption. Training materials, recurring seminars, and a living documentation repository are all useful knowledge-sharing devices.

4. Remain alert to, and share, regulatory shifts. One of the governance body's central responsibilities should be understanding the state of regulatory affairs around AI. It should know which regulations are being proposed, which could impact the organization most severely, and how to design or redesign best practices accordingly. It should also disseminate this information to the rest of the company in a digestible form.

The more apocalyptic concerns about AI, namely that it will steal all our jobs or lead to a Terminator-esque robot uprising, seem to have been overblown. But the risks associated with these more workaday concerns are entirely real and, down the road, will become existential threats for companies that don't proactively address them.

Rishin Patel has worked in the orthopaedic and pain medicine industry for over 10 years in management-level product development and business development roles. He has been at the forefront of initiating technological strategies through product development to enhance patient care. Rish received his BS in Biology and Biophysics from the Pennsylvania State University, his M.D. from the Temple University School of Medicine, and he completed his anesthesiology residency and fellowship in interventional pain medicine at the Hospital of the University of Pennsylvania. He continues to serve as an expert consultant for several local and national advisory boards dedicated to improving treatment outcomes for patients. Rish loves to travel with his wife and daughter and is also an avid golfer.
