Why you should care about tipping
In this article, I’ll explore why tipping prediction in consumer research needs to account for bias, and how different techniques can mitigate these influences to promote social good.
Biases in Tipping
Since the coronavirus pandemic took hold in early 2020, tipping-based establishments adapted to the fallout of shutdowns and social distancing. This included incorporating convenient tipping experiences to support compensation for workers who often rely on tips and gratuities to earn a living wage.
A few years later, services that weren’t even known for a tipping culture now explicitly ask patrons to consider adding an extra contribution to their purchases. Companies have leveraged consumer insights to push tipping incentivization to unprecedented levels, with tipping expectations reaching record highs of 20-25%.
Unfortunately, different stereotypes and biases often drive tipping practices, which perpetuates social and economic inequalities.
Biases in AI-based Predictions
AI models incorporate user data to develop and promote product, design and service-related changes. In the case of tipping, for example, algorithms predict tips from features such as users’ demographic data, time of day, type of product or service, and past behavior (previous tipping amounts). Developers then use this information to design learning models and interactions across multiple decision points to shape behavioral outcomes (e.g., the frequency and magnitude of tips, point-of-sale purchasing and other consumer preferences). For example, electronic payment providers now encourage tipping via touchscreen interfaces and sometimes even present a default tipping amount during a sale.
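As a rough sketch of what such a pipeline can look like, consider the example below. The feature names, toy data and model choice are all illustrative assumptions, not any payment provider’s actual system:

```python
# A hypothetical tip-prediction sketch. Feature names, data, and model choice
# are illustrative assumptions, not a real payment provider's pipeline.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy transaction records (invented).
df = pd.DataFrame({
    "hour_of_day":      [9, 13, 19, 22, 12, 20],
    "service_type":     ["cafe", "delivery", "restaurant", "bar", "cafe", "restaurant"],
    "bill_amount":      [4.50, 23.00, 68.00, 31.00, 6.25, 54.00],
    "past_avg_tip_pct": [0.10, 0.18, 0.22, 0.15, 0.12, 0.20],
    "tip_amount":       [0.50, 4.10, 15.00, 4.60, 0.75, 11.90],  # target
})

X = df.drop(columns="tip_amount")
y = df["tip_amount"]

# One-hot encode the categorical feature; pass numeric features through.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["service_type"])],
    remainder="passthrough",
)
model = Pipeline([("prep", preprocess),
                  ("reg", GradientBoostingRegressor(random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print(model.predict(X_test))  # predicted tip amounts in dollars
```

A production system would feed predictions like these into interface decisions, such as which suggested tip amounts to display by default.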
But “bias in the machine” can have unintended consequences once error-prone models are deployed outside the in-house training environment and into the real world. Errors stem from statistical bias (where the model’s estimated parameters do not match the “true” or actual values) and variance (where the model is overly sensitive to differences in its training data, which reduces generalizability).
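To make the distinction concrete, here is a small, purely illustrative simulation (synthetic data, not tipping data) that estimates a model’s bias and variance by refitting it on many resampled training sets:

```python
# Illustrative bias/variance estimate on synthetic data (not tipping data).
# We refit the same model on many resampled training sets and compare the
# average prediction (bias) with the spread of predictions (variance).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(x)  # the "true" relationship the model tries to recover

x_test = np.linspace(0, 3, 50)
preds = []
for _ in range(200):
    x_train = rng.uniform(0, 3, size=30)
    y_train = true_fn(x_train) + rng.normal(scale=0.3, size=30)
    model = DecisionTreeRegressor(max_depth=2).fit(x_train[:, None], y_train)
    preds.append(model.predict(x_test[:, None]))

preds = np.array(preds)  # shape: (200 refits, 50 test points)
bias_sq = np.mean((preds.mean(axis=0) - true_fn(x_test)) ** 2)
variance = np.mean(preds.var(axis=0))
print(f"bias^2 ~ {bias_sq:.3f}, variance ~ {variance:.3f}")
```

The deliberately shallow tree exhibits high bias; raising `max_depth` typically trades that bias for higher variance instead.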
Combined, Human and Algorithm-based Disparities are Amplified
We know that people have biases that can affect how they tip, and that this skewed information feeds into AI algorithms. The predictive models themselves also contain some degree of error. Together, these two issues can compound error, pose ethical challenges and increase the potential for furthering socioeconomic disparities among groups of people.
Thankfully, there are methods to improve data and model quality.
1. Make it an explicit goal to proactively address bias in your training data.
2. Obtain comprehensive and representative training data.
- When working with human data, be aware of the types of individual differences and biases that might exist in terms of groups, location and time. Identify how your models can disadvantage or harm certain groups.
- Use exploratory methods to analyze your data and assess fairness, paying particular attention to bias triggers.
- Include data that may be indirectly related (or seemingly unrelated) to the target outcome. In the case of tipping, for example, factors such as patrons’ or servers’ language fluency, mental health or general cognitive ability could be considered.
- Mitigate sample selection bias and oversample data from underrepresented groups of people (a minimal resampling sketch follows this list).
- Synthesize data for minority groups.
- Assess data quality by verifying data and model predictions using experimental methods.
3. Use fairness-aware machine learning techniques to improve model accuracy.
- Consider how you want to mitigate bias and promote fairness in your model: (a) association-based fairness, (b) fairness through unawareness, or (c) group or individual fairness.
- Employ fairness correction methods in the pre-processing phase of model training or as part of the training process itself (see the reweighing sketch after this list).
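As a minimal sketch of the exploratory check and oversampling steps from item 2 (the group labels, sizes and tip rates below are invented for illustration):

```python
# Hypothetical example of auditing group representation and oversampling
# an underrepresented group before training (data is invented).
import pandas as pd

df = pd.DataFrame({
    "group":   ["A"] * 90 + ["B"] * 10,  # group B is underrepresented
    "tip_pct": [0.20] * 90 + [0.10] * 10,
})

# Exploratory check: per-group sample counts and average outcome.
print(df.groupby("group")["tip_pct"].agg(["count", "mean"]))

# Naive mitigation: oversample group B (with replacement) to match group A.
counts = df["group"].value_counts()
target = counts.max()
balanced = pd.concat(
    [g.sample(target, replace=True, random_state=0)
     for _, g in df.groupby("group")],
    ignore_index=True,
)
print(balanced["group"].value_counts())
```

Note that naive oversampling duplicates records and can overfit noise in the minority group, which is one reason synthesizing data for minority groups, as suggested above, is a common alternative.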
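And as one concrete pre-processing correction for item 3, here is a hand-rolled sketch of reweighing in the style of Kamiran and Calders (2012): each (group, label) cell receives a weight so that group membership and the outcome become statistically independent in the weighted data. The data is again invented:

```python
# Reweighing sketch (after Kamiran & Calders, 2012): assign each record a
# weight w(g, y) = P(g) * P(y) / P(g, y) so that group and label are
# independent in the weighted data. Data below is invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B"],
    "tipped_well": [1, 1, 1, 0, 0, 0],  # binary outcome label
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["tipped_well"].value_counts(normalize=True)
p_joint = df.groupby(["group", "tipped_well"]).size() / len(df)

df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["group"], df["tipped_well"])
]
print(df)
# These weights can be passed to most learners, e.g.
# model.fit(X, y, sample_weight=df["weight"]).
```

After weighting, comparing weighted positive rates across groups should show the disparity shrink; toolkits such as AIF360 package this and related corrections.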
Key takeaways
- Pre-existing (human behavioral) and technical (statistical) biases can impact AI-based prediction outcomes, especially when it comes to tipping behavior. These biases can lead to consumer-targeting techniques built on underrepresented or overrepresented data.
- Be mindful of promoting fairness and correcting for bias at each step of the model generation process (data collection, preprocessing, training and validation). Compare different fairness-aware modeling methods and select the best-performing model.
- Being proactive about addressing cognitive, behavioral and social biases can have a positive impact in promoting equity among those who subsist on tips, a group that has grown since the COVID-19 pandemic.