We Don’t Have an AI Trust Issue, We Have an Industry Reputation Issue

Trust in Artificial Intelligence products is crucial for the adoption and use of AI. There has been a significant amount of recent research on trust in AI (Rossi, 2018; Siau & Wang, 2018). The topic has been widely discussed in practice as well (Khalegi, 2019). Trust in AI is an integral part of my Artificial Intelligence in Business courses at Ross. As part of the coursework, I typically ask students to post their reflections on class discussions. My brilliant student Yuko Lopez posted the following:

The more I consider the trust component of AI, the more convinced I am that the market will look through the AI solution directly to the organization and people behind the product. Companies that have reputations anchored in integrity, transparency and philanthropy will, in my view, be advantaged in launching critical AI solutions. Those that have built this social capital will have first mover advantage in certain applications that require impeccable reputations. Those that do not will have to partner / acquire or develop this equity.

Does Company/Industry Reputation Affect Trust in AI?

This is an interesting way to look at trust in AI. Much of the work on trust in AI has focused on characteristics of the AI itself – explainability, bias mitigation, usability, reliability and so on. However, consumers may have very little understanding of AI and how it works. They are likely to form opinions about an AI product based on their opinion of the technology company behind it rather than the AI itself.

I tried to explore this idea. Data in this field is hard to come by. However, the good folks at Pew release their American Trends Panel data into the public domain. One of their data collection waves (Wave 35, to be precise) looked at public perceptions of AI and technology companies.

I have done some preliminary exploration of this data below, with quick interpretations of the analysis. Please note that this is very quick and dirty work – so adjust your expectations accordingly.
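For reference, pulling a Pew ATP wave into R looks roughly like this. This is a sketch rather than my exact prep code: the file name is illustrative, and the variables used in the models below (AITrust, CompanyTrust, CompanyImpact, CompanyEthical, and the controls) are recodes of the original Wave 35 survey items.

> library(haven)                  # Pew distributes ATP waves as SPSS .sav files
> raw <- read_sav("ATP_W35.sav")  # illustrative file name
> pew <- as_factor(raw)           # convert labelled SPSS codes to R factors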

The simple logistic regression model below estimates the impact on trust in AI algorithms of the following:

  • Trust in technology companies
  • Whether technology companies have a positive impact on society
  • Whether technology companies are more or less ethical than other companies

Simple Logistic Regression

> m1 <- glm(AITrust ~ CompanyTrust + CompanyImpact + CompanyEthical, 
            data = pew, family = "binomial")
> summary(m1)

In case you are not familiar with regression output in R, here is how to read it: focus on the column called “Estimate” – this is the estimated regression coefficient for that row. Since all these predictors are categorical, a positive value means that respondents in that category have higher trust in AI than respondents in the omitted baseline category. For example, when someone says that they trust technology companies “Most of the time” (row 3 below), they also report higher trust in AI algorithms. The last column indicates whether the estimate is statistically significant – there you are looking for a value less than 0.05 (significance at the 95% level).

Coefficients:
                               Estimate  Pr(>|z|)
(Intercept)                    -0.86758  2e-16 ***
CompanyTrustJust about always   0.57562  0.0307 *
CompanyTrustMost of the time    0.47980  1.97e-05 ***
CompanyTrustSome of the time    0.19134  0.0521 .
CompanyImpactMore good than bad 0.18241  0.0118 *
CompanyEthicalLess ethical     -0.03858  0.6510
CompanyEthicalMore ethical      0.26135  0.0407 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
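One more reading aid: the estimates are on the log-odds scale, so they are easier to interpret after exponentiating them into odds ratios. A quick sketch using the fitted m1 object (confint.default gives Wald-based 95% intervals):

> exp(coef(m1))              # log-odds -> odds ratios
> exp(confint.default(m1))   # Wald 95% confidence intervals as odds ratios

For example, exp(0.47980) ≈ 1.62: answering “Most of the time” multiplies the odds of trusting AI algorithms by roughly 1.6 relative to the baseline answer.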

So there you have it. The output shows that trust in AI algorithms:

  • increases with higher trust in the technology industry
  • increases with a stronger perception that the technology industry has a positive impact on society
  • increases when the technology industry is perceived as more ethical.

We are not really breaking news here. It is intuitive – but the result is still valuable because it shows that our intuitive position has empirical evidence behind it. What we are saying is that one of the ways we can improve trust in AI is by improving trust in the technology industry itself. Conversely, when we say that we have an issue of low trust in AI, what we are really saying is that we have an issue of low reputation for the technology industry overall.

Expanded Model with Controls

The model above was a simple one. The Pew data has additional information, so here is another, expanded model with a bunch of control variables: geographic region, age, gender, education, race, income and political ideology.

> m2 <- glm(AITrust ~ CompanyTrust + CompanyImpact + CompanyEthical + 
            Region + Age + Gender + Education + Race + Income + Ideology, 
            data = pew, family = "binomial")
> summary(m2)

Output of the expanded logistic regression model is shown below. The interpretation process is the same as for the simple model above.

Coefficients:
                                Estimate  Pr(>|z|)
(Intercept)                     -1.09834  2.78e-06 ***
CompanyTrustJust about always    0.56141  0.057627 .
CompanyTrustMost of the time     0.45125  0.000156 ***
CompanyTrustSome of the time     0.14316  0.170082
CompanyImpactMore good than bad  0.19837  0.010107 *
CompanyEthicalLess ethical       0.04099  0.653265
CompanyEthicalMore ethical       0.14982  0.264232
RegionNortheast                  0.21528  0.037757 *
RegionSouth                     -0.04290  0.630210
RegionWest                      -0.09420  0.330937
Age30-49                        -0.19044  0.099636 .
Age50-64                        -0.35114  0.002707 **
Age65+                          -0.46247  0.000124 ***
GenderMale                       0.13696  0.039683 *
EducationCollegeDegree          -0.21192  0.082911 .
EducationHighSchool             -0.18294  0.200318
EducationLess than high school  -0.32606  0.137012
EducationPostgraduate           -0.32310  0.010747 *
EducationSome college, no degree 0.07087  0.576908
RaceHispanic                     0.36090  0.024563 *
RaceOther                        0.15001  0.386746
RaceWhite non-Hispanic           0.33915  0.006820 **
Income$75,000+                   0.12048  0.122392
Income<$30,000                   0.09693  0.314637
IdeologyLiberal                  0.39869  7.98e-05 ***
IdeologyModerate                 0.48455  6.29e-08 ***
IdeologyVery conservative       -0.33844  0.015167 *
IdeologyVery liberal             0.32636  0.006674 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

So what do we see? Our results from the simple model hold, except for the ethical part. Adding the new variables reduced that variable’s significance below the statistical significance threshold. This is not surprising, as the number of respondents who considered the technology industry more or less ethical than other companies was quite small.
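One technical caveat when comparing the two fits: glm() silently drops respondents with missing values, so m1 and m2 may not be estimated on exactly the same rows. Here is a sketch of a likelihood-ratio test after refitting both models on the complete cases of the expanded model (assuming pew is a plain data frame):

> vars <- all.vars(formula(m2))
> pew_cc <- na.omit(pew[, vars])    # complete cases for the expanded model
> m1c <- update(m1, data = pew_cc)  # refit both models on the same rows
> m2c <- update(m2, data = pew_cc)
> anova(m1c, m2c, test = "Chisq")   # likelihood-ratio test of the controls

A small p-value here would indicate that the controls add real explanatory power beyond the three reputation variables.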

Observations for Control Variables

Looking at the controls gives us a few interesting observations:

  • Older respondents have lower trust in AI (see the probability sketch right after this list). This again is intuitive – nice to see it supported by evidence.
  • Men have higher trust in AI than women.
  • Higher education is associated with lower trust in AI.
  • Hispanic and non-Hispanic White respondents have higher trust in AI.
  • Income is not associated with different trust in AI. Higher income trended towards higher trust, but the effect was not quite statistically significant (close, though).
  • In terms of political ideology, liberals and moderates have higher trust in AI compared to conservatives.
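Log-odds gaps like the age one are easier to appreciate as probabilities. A quick sketch that averages the model’s fitted probabilities within each age group (using m2$model, the rows glm() actually kept):

> tapply(fitted(m2), m2$model$Age, mean)  # average predicted P(trust in AI) by age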

Conclusions

As the quote from my student Yuko said right at the beginning: trust in AI is really about trust in the AI provider – the technology industry. We see that it holds true: the reputation of the technology industry is a significant predictor of trust in AI. So our current AI trust issue may really be an industry reputation issue.

A word of caution: the analysis above does not preclude other drivers of trust in AI. All the other factors like explainability, reliability, etc. are likely still important. We don’t have data to test their effect – but we can say that technology industry reputation is a significant aspect of trust in AI.

The featured image at the top is borrowed from this link: A Missing Ingredient for Mass Adoption of AI: Trust.

References

Khalegi, B. (2019). A Missing Ingredient for Mass Adoption of AI: Trust. Element AI. https://www.elementai.com/news/2018/a-missing-ingredient-for-mass-adoption-of-ai-trust
Rossi, F. (2018). Building trust in artificial intelligence. Journal of International Affairs, 72(1), 127–134.
Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53.