Diversity and Inclusion in AI: Lessons from Human and AI Collaboration

In this paper, we have presented our extensive research on exploring D&I in AI guidelines and our attempt to operationalise them in the process of specifying D&I in AI requirements. From our literature review, we identified 23 unique themes related to D&I in AI considerations. We introduced a user story template for articulating D&I in AI requirements, and we conducted a focus group with four human analysts who developed user stories for two cases of AI systems, giving us insights into the process of writing D&I in AI requirements. Furthermore, we explored the utility and usefulness of GPT-4 as an agent for automating the generation of D&I in AI requirements. By comparing the user stories developed by the human analysts with those generated by GPT-4, we gained insights into the pros and cons of using LLMs for this activity and into the complementary nature of this form of human-machine collaboration.

There remains a need for further exploration of how cultural and legal contexts influence the implementation of diversity and inclusion requirements in AI. Future investigations could examine how differences in privacy laws, data protection regulations, and cultural perspectives affect AI system development across regions. Research could also assess the effects of various AI systems, such as recognition technology, on individuals from diverse backgrounds encompassing race, gender, and age.
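To make the automation step more concrete, the sketch below shows one plausible way to prompt GPT-4, via the OpenAI Python client, to draft candidate D&I user stories for a given AI system and theme. It is a minimal illustration under stated assumptions: the template wording, the prompt text, the function name draft_user_stories, and the example theme are our own placeholders and do not reproduce the exact template or prompts used in the study.

```python
# Hypothetical sketch: asking GPT-4 to draft D&I-oriented user stories.
# The template, prompt, and example inputs are illustrative assumptions,
# not the study's actual artefacts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Assumed Cohn-style user story template extended with a D&I attribute.
TEMPLATE = ("As a {role} with {di_attribute}, I want {goal}, "
            "so that {benefit}.")

def draft_user_stories(system_description: str, theme: str, n: int = 3) -> str:
    """Ask GPT-4 to draft n candidate D&I user stories for one theme."""
    prompt = (
        f"You are a requirements analyst. For the AI system described below, "
        f"write {n} user stories addressing the diversity and inclusion theme "
        f"'{theme}'. Use the template: {TEMPLATE}\n\n"
        f"System description:\n{system_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_user_stories(
        "A facial recognition system used for building access control.",
        theme="fairness across age, gender, and skin tone",
    ))
```

In the workflow described above, output of this kind would simply supply the machine-generated side of the comparison with the user stories written by the human analysts.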

    1st Author: Muneera Bano, PhD, is a Senior Research Scientist and a member of the Diversity and Inclusion team at CSIRO's Data61. She is an award-winning scholar and a passionate advocate for gender equity in STEM. She is a Diversity, Inclusion and Belongingness (DIB) officer at Data61 and a member of the 'Equity, Diversity and Inclusion' committee for Science and Technology Australia. Muneera graduated with a PhD in Software Engineering from UTS in 2015. She has published more than 50 research articles in notable international forums on Software Engineering. Her research, influenced by her interest in AI and Diversity and Inclusion, emphasizes human-centric technologies.

    2nd Author: Didar Zowghi (PhD, IEEE Member since 1995) is a Senior Principal Research Scientist and leads the science team for Diversity and Inclusion in AI at CSIRO's Data61. She is an Emeritus Professor at the University of Technology Sydney (UTS) and a conjoint professor at the University of New South Wales (UNSW). She has decades of experience in Requirements Engineering research and practice. In 2019 she received the IEEE Lifetime Service Award for her contributions to the RE research community, and in 2022 the Distinguished Educator Award from the IEEE Computer Society TCSE. She has published over 220 research articles in prestigious conferences and journals and has co-authored papers with over 100 researchers from 30+ countries.

    3rd Author: Vincenzo Gervasi, PhD is an associate professor in the University of Pisa’s Computer Science Department. His research focuses on natural language processing applied to requirements engineering, formal specifications, and software architectures, fields in which he has published over 140 papers in international venues. Prof. Gervasi received his PhD in computer science from the University of Pisa and is a member of IFIP WG 2.9.

