Artificial Intelligence: Bias and Race Discrimination in the Workplace

Face recognition software has become part of our daily lives. We use it to unlock our devices, social media platforms use it to tag us in photos, and even law enforcement relies on it as an investigative tool. At the heart of this technology is AI. AI is ever present in our social lives and is beginning to make its way into the workplace.

Uber Eats UK is presently in a dispute with a former driver, Mr Manjang, who was removed from the platform after repeatedly failing the face recognition verification checks required to continue using it. Mr Manjang alleges, in short, that the face recognition software used by Uber Eats UK produces more false positives and false negatives for persons of colour than for their white counterparts, such that persons of colour are more likely to fail the face verification.

At the heart of the complaint is algorithmic bias. Controversy over algorithmic bias has plagued face recognition software since its inception. Whilst it is a complex topic, a key research finding is that face recognition algorithms perform significantly worse for dark-skinned people, particularly black people.
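To make the complaint concrete: bias of this kind is typically demonstrated by comparing error rates across demographic groups. The following is a minimal, hypothetical Python sketch (the records and group labels are invented purely for illustration, not drawn from the Uber Eats UK matter) of how an auditor might test whether a face verification system wrongly rejects genuine users of one group more often than another.

from collections import defaultdict

# Hypothetical audit data, invented for illustration only.
# Each record: (group, is_genuine_match, system_accepted)
# is_genuine_match=True  -> the photo really is the enrolled person
# system_accepted=True   -> the software verified the face
results = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
    ("group_b", False, True), ("group_b", False, False),
]

counts = defaultdict(lambda: {"genuine": 0, "false_reject": 0,
                              "impostor": 0, "false_accept": 0})

for group, genuine, accepted in results:
    c = counts[group]
    if genuine:
        c["genuine"] += 1
        if not accepted:          # genuine user wrongly rejected
            c["false_reject"] += 1
    else:
        c["impostor"] += 1
        if accepted:              # impostor wrongly accepted
            c["false_accept"] += 1

for group, c in counts.items():
    frr = c["false_reject"] / c["genuine"]    # false rejection rate
    far = c["false_accept"] / c["impostor"]   # false acceptance rate
    print(f"{group}: false rejection rate={frr:.0%}, false acceptance rate={far:.0%}")

A materially higher false rejection rate for one group than another, which is the pattern Mr Manjang alleges, is exactly the kind of differential outcome that could ground a discrimination complaint against an employer relying on such software.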

There are serious pitfalls for employers who elect to use AI face recognition software in the workplace.

The South African Constitution and the employment law framework expressly prohibit discrimination based on race. In the workplace, and under the Employment Equity Act, every employer bears a positive duty to take steps to promote equal opportunity by eliminating unfair discrimination in any HR policy, procedure, or practice. If an employer faces a challenge of discrimination based on race, it will be on the back foot because, in terms of the Employment Equity Act, the employer bears the duty to prove that such discrimination did not take place or, if it did, that its conduct was rational and not unfair, or is otherwise justifiable.

In the context of using face recognition tools in the workplace, an employer may find this burden of proof too high a mountain to climb. This is because, in most instances, the bias lies in the software itself, i.e. in how the algorithm works. An employer could argue that it is the software that contains the bias and that it cannot be held responsible for the analysis performed by the software. However, under South African law, this may still constitute indirect discrimination where the employer uses the results of the face recognition software to develop or implement its HR policies, procedures, or practices in the workplace.

Therefore, employers must exercise caution before implementing face recognition tools in the workplace. As a starting point, employers should have a properly vetted and legally compliant policy in place for the proper use of face recognition tools, including a process for dealing with complaints and with false positive and false negative results.

Before implementing any face recognition tools or other AI tools that may impact employees, employers should seek legal advice to ensure that the policies and procedures they intend to develop or implement do not have an unfairly discriminatory effect on people of a particular group.

AI is inevitable and will influence every aspect of our lives. As we increasingly rely on AI in our decision-making, we must guard against repeating and reinforcing historical and contemporary inequalities.

THE CONTENTS OF THIS ARTICLE ARE FOR INFORMATION PURPOSES ONLY AND DO NOT CONSTITUTE LEGAL ADVICE.
FOR LEGAL ADVICE PLEASE CONTACT OUR OFFICES.

Ongoing Compliance Areas:

  • Code of Good Practice on the Prevention and Elimination of Harassment in the Workplace
  • Cybercrimes Act (harassment, unauthorised access, etc.)
  • POPI Compliance (training, risk assessment, policy framework, 3rd party consent, etc.)

DIRECTORS

FATIMA SALIJEE
082-920-3452
fatima@sgvattorneys.co.za

GEORGE VAN DER MERWE
083-629-7119
george@sgvattorneys.co.za

CLIVE GOVENDER
082-997-8272
clive@sgvattorneys.co.za