
Carlson School Research Explores Algorithms and the Bias Within

Friday, April 8, 2022

BY GENE REBECK


They’re powerful business tools—but only when people are in control of the code.


Gedas Adomavicius

Algorithms have become deeply intertwined with our lives. These crafted strings of digital code can take our online data—our shopping habits, the websites we like to visit, our entertainment preferences—and make suggestions for products, services, and movies we might want. Algorithms are also helping companies automate analysis and augment decision making, not only in marketing but also in other areas, such as production and supply chain management.

Carlson School research has demonstrated that algorithms can be powerful tools for business management. But that research is also revealing that they, too, need to be managed.


Powerful tools—with powerful biases

Ravi Bapna

What is an algorithm? Put simply, it's a set of digital instructions designed to analyze data and use that analysis to solve a specific problem. In a business context, these problems can include predicting what customers will buy or determining how long a production machine can operate before requiring service or replacement.
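To make that concrete, here is a minimal Python sketch of the second kind of problem. The vibration readings, the service hours, and the simple least-squares fit are all invented for illustration; they are not drawn from the research described in this article.

```python
# A hypothetical predictive-maintenance sketch: estimate hours until a
# machine needs service from a vibration reading. The data and the
# simple least-squares fit are invented purely for illustration.
readings = [1.0, 1.5, 2.0, 2.5, 3.0]    # vibration level per machine
hours_left = [900, 750, 580, 430, 300]  # observed hours until failure

n = len(readings)
mean_x = sum(readings) / n
mean_y = sum(hours_left) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(readings, hours_left))
         / sum((x - mean_x) ** 2 for x in readings))
intercept = mean_y - slope * mean_x

# The "set of digital instructions" is just the fitted line: given a
# new reading, it produces a prediction with no further human judgment.
new_reading = 2.2
print(f"predicted hours to failure: {slope * new_reading + intercept:.0f}")
```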

“Companies in all kinds of sectors are undergoing a major process of digital transformation,” says Ravi Bapna, professor and the academic director of the Carlson Analytics Lab. One aspect of this is “how to convert massive amounts of data into an asset.” Companies “are data rich but insight poor. Analytics and algorithms are essentially the modern tool kit,” a set of tools “that companies need to start understanding and start using to make better decisions.”

Still, Bapna and other Carlson School scholars acknowledge that algorithms, like any tool, have limitations.

Alok Gupta

For instance, they can incorporate and reinforce human biases that work against business and societal objectives. In a forthcoming paper, Gedas Adomavicius, chair of the Carlson School's Department of Information and Decision Sciences, and Mochen Yang, an assistant professor in the department, discuss how algorithmic biases can result in unfair decisions and what can be done to mitigate them.

To take an all-too-common instance: gender bias in hiring, particularly for higher-level positions. Many companies use algorithm-based tools that automate resume screening to identify qualified job candidates. If a company has a history of hiring men instead of equally qualified women, a screening algorithm trained on historical hiring data will rate women lower. As Yang observes, “algorithms are pattern recognition tools,” and they identify patterns based on the company’s historical biases.
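A toy sketch can make that mechanism visible. In the hypothetical Python example below, the hire rates, the candidates, and the trivial "model" are all invented; the point is only that a system fit to a skewed history reproduces the skew.

```python
# A hypothetical illustration of the pattern Yang describes. Every
# candidate below is equally qualified, but in the invented "history"
# men were hired 70% of the time and women only 40% of the time.
import random

random.seed(0)
history = [
    {"gender": g, "hired": int(random.random() < (0.7 if g == "M" else 0.4))}
    for g in ("M", "F")
    for _ in range(5000)
]

# The simplest possible "model": score each group by its historical
# hire rate. Real screening tools are far more complex, but they learn
# from the same kind of signal.
def learned_score(gender):
    group = [c for c in history if c["gender"] == gender]
    return sum(c["hired"] for c in group) / len(group)

print(f"score for men:   {learned_score('M'):.2f}")   # ~0.70
print(f"score for women: {learned_score('F'):.2f}")   # ~0.40
# Identical qualifications, unequal scores: the algorithm has simply
# recognized, and now reproduces, the bias in the historical data.
```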


Mochen Yang

Predicting the future—more fairly

That noted, companies also want to create fairer screening processes. As Adomavicius and Yang argue in their paper, overcoming biases in algorithms will be complicated. “It’s not merely a technical problem,” Yang says. “It will take an integrated approach.”

This approach, the authors say, needs to consider the complexities in governing the design and use of algorithms to augment decision making, such as navigating different (and sometimes incompatible) fairness objectives and adopting fairness-aware practices for data collection and algorithmic model building. “As a society, we have to play a major role in providing frameworks for designing such systems,” Adomavicius says.
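One way to see how fairness objectives can be incompatible, using entirely invented numbers: a screen can select two groups at identical rates (demographic parity) while still finding qualified candidates in one group far less often than in the other (unequal opportunity). The Python sketch below is a hypothetical illustration, not a method from the paper.

```python
# An invented example of two fairness objectives colliding. Each pair
# is (truly_qualified, selected) for one candidate; all counts are
# hypothetical.
group_a = [(1, 1)] * 40 + [(1, 0)] * 10 + [(0, 1)] * 10 + [(0, 0)] * 40
group_b = [(1, 1)] * 15 + [(1, 0)] * 10 + [(0, 1)] * 35 + [(0, 0)] * 40

def selection_rate(group):
    return sum(sel for _, sel in group) / len(group)

def true_positive_rate(group):
    qualified = [sel for truth, sel in group if truth == 1]
    return sum(qualified) / len(qualified)

for name, group in (("A", group_a), ("B", group_b)):
    print(f"group {name}: selected {selection_rate(group):.0%}, "
          f"qualified found {true_positive_rate(group):.0%}")
# Both groups are selected at 50% (demographic parity holds), yet 80%
# of qualified candidates in group A are found versus only 60% in
# group B, so equal opportunity fails. Closing one gap reopens the other.
```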

Alok Gupta, the Carlson School's chair in Information Management, has conducted extensive research into the strengths and limitations of machine learning and artificial intelligence (AI), which are used in designing and "training" algorithms. These techniques also allow algorithms to "learn" to make better-informed decisions by incorporating additional data.

“Algorithms rely on past instances of what has been experienced,” Gupta says. “When an algorithm encounters data from a previously unseen situation, it will have a hard time making robust decisions.”

The decision it makes will be based on “the past and whatever seemed to be the best approach for various parts of the problem.” But, he says, it “won’t be a creative solution such as one a human might come up with, nor necessarily the best one.” 

Like Adomavicius and Yang, Gupta notes that data often contains our implicit biases. But most of us, once we become aware of the problem, can recognize these biases and change our attitudes accordingly—for our own benefit and for that of our organization. Since algorithms don't have "moral principles," Gupta adds, "humans must be involved in decision making at some level."

In other words, humans need to actively counteract the biases that humans themselves have incorporated into automated predictions and decisions. To paraphrase an investment truism, past results are no guarantee of future success. This is crucial—particularly for businesses using algorithms in human interactions such as hiring and marketing. In many instances, Bapna says, the decisions algorithms make won't "be accurate for anyone under 30, for instance, or for people in minority groups."
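A hypothetical sketch of that coverage gap, tying together Gupta's and Bapna's points: a model that predicts from historical averages has nothing to say about customers younger than any it has seen, so it falls back on "whatever seemed to be the best approach" in the past. All numbers are invented.

```python
# A hypothetical sketch of the coverage gap: average spending by age,
# with no customers under 30 in the invented history.
history = {35: 300, 45: 350, 55: 420, 65: 500}   # age -> average spend

def predict(age):
    if age in history:
        return history[age]            # a situation the data has seen
    # Unseen situation: fall back on the overall historical average,
    # "whatever seemed to be the best approach" in the past.
    return sum(history.values()) / len(history)

print(predict(45))   # 350 -- well supported by the data
print(predict(22))   # 392.5 -- a guess with no under-30 evidence behind it
```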

Still, algorithms will continue to play an important role in business, so understanding and improving them is key.

“This is one of the most important factors of modern management now,” he adds. “If you’re not doing this, you’re not going to be able to compete in the next five to 10 years.”


This article appeared in the Spring 2022 alumni magazine
