This paper provides the first large-scale evidence of business-to-customer racial bias (B2C bias) on a digital platform, where the perpetrators are individual employees acting on behalf of a company and the victims are customers. This contrasts with existing studies of racial bias on digital platforms, which focus on peer-to-peer marketplaces (e.g., eBay) in which both perpetrators and victims are individuals acting independently and on their own behalf. In particular, we present the first evidence of B2C bias in corporate social media customer service, a practice that has grown rapidly in popularity. Unlike traditional call centers, agents providing customer service on social media respond, according to our analysis, to fewer than half of the complaints they receive on average. We investigate the effect of a complaining customer's racial identity, as revealed by the social media profile picture, on the likelihood of receiving a response. Analyzing more than 57,000 social media customer complaints to major U.S. airlines with a variety of analytics techniques, including text mining and facial recognition, we present quantitative evidence that African American customers are less likely to receive a response to their complaints than otherwise similar White customers. Furthermore, our deep learning–based falsification test shows that the bias is absent without the visual cue that reveals racial identity. This study offers a practical yet powerful recommendation for companies: conceal customer profile pictures from employees while they deliver social media customer service.