Bias in AI refers to unfair or prejudiced outcomes produced by artificial intelligence systems. It often arises when training data reflects societal inequalities, or when algorithms aren’t properly calibrated or evaluated across different groups of people. In HR and workplace tools, bias can skew hiring decisions, performance reviews, and employee support. Understanding bias in AI is critical for teams adopting AI-driven platforms and for ensuring ethical, inclusive technology use. This glossary entry breaks down the concept and offers context relevant to modern workplaces.
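To make the idea concrete, here is a minimal sketch of one common way teams check hiring outcomes for bias: comparing selection rates across groups and flagging large gaps, in the spirit of the widely cited four-fifths (80%) rule. The candidate data, field names, and threshold below are illustrative assumptions, not a prescribed method or any particular vendor's tooling.

```python
# Illustrative sketch: compare hiring selection rates across groups.
# The records, field names, and 0.8 threshold are assumptions for demonstration.

candidates = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

def selection_rates(records):
    """Share of candidates hired within each group."""
    totals, hires = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        hires[r["group"]] = hires.get(r["group"], 0) + int(r["hired"])
    return {g: hires[g] / totals[g] for g in totals}

rates = selection_rates(candidates)
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates by group: {rates}")
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule heuristic; a lower ratio suggests possible adverse impact
    print("Review the model and its training data for potential bias.")
```

A check like this doesn't prove or rule out bias on its own; it simply surfaces outcome gaps that warrant a closer look at the data and the system producing them.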