Report

Gender Bias in the Use of Artificial Intelligence Deep Learning and Suggestions for Improvement
Type: Basic    Period: 2023
Manager: Meekyung Moon    Date: 2023-12-29
File: 03_인공지능 딥러닝 활용의 젠더 편향성 실태와 개선방안.pdf (1.8 MB)

Abstract

Gender Bias in the Use of Artificial Intelligence Deep Learning and Suggestions for Improvement

Meekyung Moon

Bok-tae Kim

Kyung-ju Kang

Yesol Kim

HyoEun Kim

EuSun Heo

 

 

I. Introduction

The artificial intelligence chatbots ‘Iruda’ and ‘Tay’ learned and reproduced hateful remarks. In these cases, artificial intelligence operating through algorithms produced inappropriate discriminatory outcomes related to race and gender. Because the discrimination caused by artificial intelligence reflects the racial and gender discrimination that already exists in a society, the whole society should respond to such issues without further delay. If products from South Korea, a leading country in the Fourth Industrial Revolution, cause these issues when used overseas, they would severely damage the country's international standing and related industries.

This study focuses on the rapid pace of development of artificial intelligence deep learning technology and its wide social impact. With this focus, the study analyzes the current state of gender bias in the process of developing artificial intelligence and seeks ways to reduce it.

To this end, the study analyzed cases of bias caused by artificial intelligence, focusing on gender, and examined laws and policies at home and abroad aimed at reducing gender bias. Based on the implications drawn from this analysis, the study presented legal and institutional ways of improvement to build the social environment needed to reduce gender bias.

Because most existing approaches to reducing bias from the perspectives of the humanities and social sciences have simply presented a broad range of idealistic and normative directions, they are of limited practical use in system development. This study therefore sought to identify ways of improvement, including guidelines based on an engineering approach, that can be understood in and applied to the field of artificial intelligence technology development.

 

 

Research Flow / Research Contents / Research Methods

Problem posing
- Contents: research background and goals; research contents, methods, and scope
- Methods: literature review; launching report and expert counsel

Theoretical discussion
- Contents: artificial intelligence bias; artificial intelligence and gender bias
- Methods: literature review, including existing papers; expert counsel; research team meeting

Domestic and overseas case analysis
- Contents: categorize AI gender bias cases; draw implications from the AI gender bias case analyses; analyze laws and policies related to AI gender bias; draw implications from the law and policy analyses
- Methods: literature review, including existing papers and reports; interim report and expert counsel

In-depth interview
- Contents: bias processing methods by phase of artificial intelligence configuration; impacts of personal backgrounds, including experience of gender or discrimination, on the process of developing artificial intelligence; ways of reducing bias
- Methods: in-depth interviews with those who have developed artificial intelligence

Policy suggestions
- Contents: ways of improvement in the technological development process; ways of improvement in the non-technological development process, including artificial intelligence gender bias ethics and guidelines for agents of AI technologies and industry, legal and institutional improvement, and the creation of a social environment to reduce gender bias
- Methods: integrated analysis of the case analyses and in-depth interviews; final report and expert counsel

[Figure 1] Research Flow Chart

 

II. Theoretical Discussion

Most of the existing literature that addresses bias from the humanities and social sciences excludes systemic explanations related to artificial intelligence; it raises issues centered on terms such as bias in data and bias in algorithms and analyzes cases accordingly.

Such literature merely highlights social issues without a systematic understanding of how artificial intelligence technology is developed. As a result, the broad claim that the bias or gender bias of artificial intelligence should be mitigated is perceived as a mere slogan, and it becomes difficult to answer the question of how to actually solve the problem. This shows that the engineering approach must be interlinked with the social science approach that deals with social values, and that social values, including the mitigation of gender bias, should be considered part of the utility of the technology.

In this regard, this study aimed to raise understanding of the terms and systems used at the level of artificial intelligence technology through theoretical discussion. The study also described the major causes and phases of bias occurring in such systems, and the concept of gender bias.

 

Composition of theoretical discussion

 

 

 

1. Artificial intelligence and deep learning

A. Artificial intelligence

B. Deep learning

2. Artificial intelligence machine learning and bias

A. Machine learning

B. Artificial intelligence bias

3. Artificial intelligence gender bias and the need for its mitigation

 

 

III. Analysis of Domestic and Overseas AI Gender Bias Cases

Focusing on the technological development process within the overall development of artificial intelligence technology, we categorized and analyzed major domestic and overseas gender bias cases by phase: the AI planning and design phase, the data processing phase, and the modeling phase, including algorithm generation and learning.

AI gender bias arises largely from bias in algorithm design, bias in data (collection, processing, exposure, etc.), and a groundless belief in the objectivity of artificial intelligence that assumes machines are less biased than humans.

It is also not easy to respond to this bias, because it is difficult for humans to understand exactly the process by which AI produces its outcomes and the basis for them. Because of the characteristics of AI technologies, such as problems of transparency or explainability, it is likewise not easy for the general public to access the relevant information. This applies commonly to all AI technologies and the social issues they raise.

From the examination of AI gender bias cases, we identified the following fundamental causes that make it difficult to develop and sustain practical ways of improving the present conditions: i) the accumulation and impact of historical and structural biases, ii) a technology industry that does not recognize this as a main agenda of society, and iii) a lack of problem awareness across society as a whole.

 

 

Artificial intelligence planning and design phase
- Genderization of AI: female voices and images for AI secretaries, social robots, etc.; accommodating responses to sexual harassment, etc.
- Combination of the goals of sexual exploitation and profit-making with AI technologies: pornography using deepfake technology; development of algorithms aimed at tracking female identities, etc.
- Gender-targeted research: research on facial recognition algorithms claiming to identify sexual orientation from facial images alone

Data processing phase
- Bias in the data itself: in natural language processing, machine translation and word embeddings associate gender-neutral or feminine words with masculine words, and automatic data labeling or automatic image generation algorithms produce different outcomes according to gender (a minimal sketch of measuring this follows after this table)
- Gender bias in setting the sample group for data collection: gaps in the recognition rates of facial recognition programs between white men and Black women; medical data biased toward male groups

Modeling phase, including algorithm generation and learning
- Algorithm design that does not consider actual conditions or correct negative gender bias: gender gaps in the exposure of advertisements for career development in science, technology, engineering, and mathematics (STEM); computer vision model training that reinforces traditional gender stereotypes
- Algorithm machine learning based on gender-biased data: advertisements for high-wage jobs exposed disproportionately to men; automatic penalization of women-related keywords in AI recruitment processes
- Difficulty of algorithm transparency or explainability: lower credit limits for women whose assets are held jointly with their husbands; overexposure of sensationalized content at the top of search results for keywords related to women of color, etc.
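To make the word-embedding example in the data processing phase concrete, the following is a minimal sketch of one common way such an association is measured: project word vectors onto a ‘gender direction’ (the difference between the vectors for "he" and "she") and compare occupation words. The vectors below are tiny, made-up illustrations rather than real embeddings; in practice, pretrained vectors (e.g., word2vec or GloVe) would be loaded instead, and the word list would be much larger.

import numpy as np

# Hypothetical toy vectors standing in for pretrained word embeddings.
# In practice these would be loaded from a model such as word2vec or GloVe.
vectors = {
    "he":       np.array([ 0.9, 0.1, 0.2]),
    "she":      np.array([-0.9, 0.1, 0.2]),
    "engineer": np.array([ 0.6, 0.5, 0.1]),
    "nurse":    np.array([-0.7, 0.4, 0.2]),
}

def gender_score(word):
    """Cosine similarity between a word vector and the he-she direction.
    Positive values lean toward 'he', negative values toward 'she'."""
    direction = vectors["he"] - vectors["she"]
    v = vectors[word]
    return float(np.dot(v, direction) / (np.linalg.norm(v) * np.linalg.norm(direction)))

for word in ("engineer", "nurse"):
    print(word, round(gender_score(word), 3))

A systematic gap between occupation words along this direction is one signal, at the data processing phase, that the underlying corpus encodes gendered associations that a downstream model may reproduce.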

 

Through this analysis, we drew implications to reduce AI gender bias and connected them to policy suggestions.

 

IV. Domestic and Overseas Responses to AI Gender Bias

To analyze gender bias in AI-related laws and policies, we categorized domestic and overseas laws and policies, as well as those of international organizations, according to the technological and non-technological development processes.

For this analysis, the technological development process is defined as the approach from the technological side, in which AI is planned, developed, and generated, while the non-technological development process refers to the approach from the side of AI-related human rights, ethical issues, and so forth.

To identify the causes of AI gender bias and ways of improvement, we divided the technological development process into the phases of planning and design, data processing (collection, processing, management, etc.), and algorithm generation and learning. We then sought ways to mitigate gender bias by examining responses to the cases corresponding to each phase.

For the non-technological development process, we restricted the scope of analysis to laws and guidelines that contain measures for the human rights and ethical issues arising from AI, and then presented implications.

 

Domestic responses

 

Laws
- Technological development process: There is no provision on measures to prevent gender discrimination or to control gender bias.
- Non-technological development process: Although the “Framework Act on Intelligent Informatization (Act No. 17344)” addresses concerns about inequality and gaps at a comprehensive level, it contains no express statement on gender bias or gender discrimination.

Policies
- Technological development process: Although the ‘R&D Strategy for Artificial Intelligence (AI) to Implement I-Korea 4.0’ exists, the policy approach to the detailed development process, including data processing, algorithm generation, and learning and modeling, is insufficient, and gender is not considered anywhere in the policy.
- Non-technological development process: There is no gender-related content.

Bills proposed by lawmakers
- Technological development process: The ‘Bill on Algorithm and Artificial Intelligence’ clearly stipulates the exclusion of discrimination on grounds such as gender.
- Non-technological development process: Although the ‘Bill on Artificial Intelligence Research and Development, Industrial Promotion, and Ethical Responsibility, etc.’ clearly provides for the protection of human rights and dignity, it lacks a gender equality perspective.

Policy guidelines
- Technological development process: ‘Ethics Guidelines for an Intelligent Information Society’ by the Ministry of Science and ICT; ‘Human Rights Guidelines for the Development and Use of Artificial Intelligence’ by the National Human Rights Commission; ‘AI Guidelines Made Together by Feminists’ by Korean WomenLink
- Non-technological development process: ‘Human Rights Guidelines for the Development and Use of Artificial Intelligence’ by the National Human Rights Commission; ‘AI Guidelines Made Together by Feminists’ by Korean WomenLink

 

AI bias-related recommendations and guidelines of international organizations

A. The Recommendation of the OECD Council on Artificial Intelligence
B. The UNESCO Recommendation on the Ethics of Artificial Intelligence: ‘gender’ is clearly stated as a policy area
C. The European Union's Ethics Guidelines for Trustworthy AI

 

AI gender bias-related laws and guidelines in major countries

A. The U.S.:
- Federal Congress, Algorithmic Accountability Act
- The State of California, Automated Decision Systems Accountability Act
- The White House, Guidelines for Regulation Related to AI Applications
- The Federal Trade Commission, Guidelines for Using Artificial Intelligence and Algorithms

B. Japan:
- The Ministry of Internal Affairs and Communications, Guidelines for Developing Artificial Intelligence
- The Cabinet Office of Japan, the Principles of a Human-Centered Artificial Intelligence Society

C. The UK:
- Information Commissioner's Office, Guidelines for Explainable Artificial Intelligence
- Government Digital Service and Office for AI, A Guide to Using Artificial Intelligence in the Public Sector

 

 

V. In-Depth Interview to Prepare Standards (Proposal) for Identifying Artificial Intelligence Gender Bias

We conducted in-depth interviews i) to identify the types of gender bias that may arise in the AI technology research and development phase and standards for reducing them, ii) to consider how to mitigate gender bias, and iii) to propose guidelines for identifying gender bias from the perspective of the AI system by understanding the environmental issues surrounding developers.

Based on the in-depth interviews, we presented 1) a gender bias checklist for artificial intelligence ethics, 2) guidelines for engineers to reduce gender bias in the artificial intelligence technology development phase, 3) guidelines for developers to mitigate gender bias in each artificial intelligence technology, and the policy suggestions needed to build an appropriate social environment, including gender-balanced human resources development.

 

In-depth interviewees and composition

-Interviewees: 10 developers with experience in artificial intelligence development (six men and four women)

-Composition
Phase 1: Questionnaire survey (22 questions)
Phase 2: In-depth interview on the social and cultural environment
Phase 3: Survey on competency in and awareness of bias mitigation skills

 

Results of in-depth interview

-According to the interviews, the main cause of artificial intelligence gender bias is that insufficient attention is paid to the issue at the stages of data collection, selection, and processing. The fundamental responsibility lies with the final decision makers or managers who set the direction of artificial intelligence configuration.

-However, it is also necessary to develop guidelines for mitigating gender bias at the technological level, because limitations of time and cost in the development process make it difficult to apply bias mitigation skills and elements.

The in-depth interviews also revealed that the proportion of women among developers in their 30s in the AI research and development field is remarkably low. The interviewees felt that, rather than gender itself, their experience of gender discrimination affected the development process. In particular, a gender-discriminatory working environment in cutting-edge science and technology was identified as a main factor behind women's career breaks.

 

 

VI. Conclusions

Based on the categorization of domestic and overseas cases of gender bias in artificial intelligence and the assessment of their causes, responses from two directions are needed to reduce the gender bias of AI.

First, responses should be made to mitigate gender bias at the technological and engineering level across all stages of technology configuration and use. As confirmed earlier, the issue of AI gender bias can arise at each phase of configuring AI technologies. It is therefore suggested i) that the issue of AI gender bias be taken into account throughout the entire process of AI technology configuration and modeling, including planning and design, data processing (collection, processing, management, etc.), and algorithm generation and learning, and ii) that technical guidelines be prepared to reduce the bias at each stage.
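As one illustration of what a stage-level technical guideline might ask developers to check, the minimal sketch below audits model outputs for a gap in positive-prediction rates between gender groups (a demographic parity check). The function name, record format, and sample data are hypothetical and introduced only for illustration; they are not part of the guidelines this report proposes.

from collections import defaultdict

def demographic_parity_gap(records):
    """Compute the positive-prediction rate per gender group and the gap
    between the highest and lowest rates. 'records' is assumed to be an
    iterable of (gender_label, prediction) pairs from an evaluation run."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for gender, prediction in records:
        totals[gender] += 1
        positives[gender] += int(prediction == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Toy illustration with fabricated predictions, not real data.
sample = [("F", 1), ("F", 0), ("F", 0), ("M", 1), ("M", 1), ("M", 0)]
rates, gap = demographic_parity_gap(sample)
print(rates, gap)  # {'F': 0.333..., 'M': 0.666...}, a gap of about 0.33

Comparable per-group checks (recognition rates, selection rates, error rates) could be defined for the planning and design, data processing, and modeling stages respectively.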

Second, the ultimate causes of AI gender bias include gender stereotypes and discrimination in society. It is therefore suggested i) that the gender bias of AI technologies be recognized as an issue for society as a whole, ii) that laws and systems be improved in light of the implications of the AI-related legislation and policies under discussion in international organizations, and iii) that policies be established to create a social environment that reduces gender bias across society.

In sum, the policy suggestions are presented below.

 

A summary of policy suggestions

 

 

 

1. Ways of improvement in the technological development process
(1) Develop and disseminate a gender bias checklist for artificial intelligence (AI) ethics
(2) Develop and disseminate guidelines for engineers to reduce gender bias in the technological development process of artificial intelligence
(3) Develop and disseminate guidelines for developers to reduce gender bias for each artificial intelligence technology

2. Ways of improvement in the non-technological development process

Improve laws and systems
(1) Clearly stipulate ‘consideration of gender’ in the basic principles of the Framework Act on Intelligent Informatization (Article 3)
(2) Expressly state ‘Gender Impact Assessment’ in the Framework Act on Intelligent Informatization (Article 56)
(3) Include ‘prohibition of gender bias’ in the AI Ethical Impact Assessment
(4) Organize and operate an AI ethics review body from a gender-sensitive perspective
(5) Operate an AI gender bias monitoring team at the government level
(6) Legislate verification standards for gender bias related to data and algorithms

Create an appropriate social environment
(1) Conduct education on artificial intelligence ethics and gender bias
(2) Support the training and career development of female human resources in AI technologies and basic science

 

 

Research areas: Gender-equal culture and awareness, Law

Keywords: ethics of artificial intelligence, artificial intelligence gender bias, experience of gender discrimination