AI (artificial intelligence) has become a hot topic not only in the tech field, but across industries worldwide. It has changed the way organizations operate, with many companies now adopting a ChatGPT policy as workers increasingly rely on the tool for everyday computing tasks, drafting emails, and answering work-related questions.
Join us here at Diversein as we embark on a three-part blog series diving into the complexities of artificial intelligence and its manifestation in the D&I space. Diversein has also collaborated with two Women in AI ambassadors, Begüm and Nabanita, to provide an inside perspective.
This first article is dedicated to the ethical considerations surrounding AI in the realm of diversity and inclusion.
For AI to successfully produce information, it must first be trained through a process called machine learning: the model is fed a large dataset containing examples of input data paired with their corresponding outputs. From this data it learns patterns that allow it to make predictions or classify new inputs.
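To make the idea concrete, here is a minimal sketch of that training-and-prediction loop in Python. The data, labels, and behavioral features are entirely hypothetical, and the "model" is a deliberately simple nearest-neighbor vote rather than anything resembling a production system; the point is only that the model labels new inputs by matching them against patterns in the labeled examples it was given.

```python
# A minimal sketch of supervised machine learning: the model sees
# labeled examples (inputs paired with outputs) and uses the patterns
# it finds there to label new, unseen inputs.
# All data and labels below are hypothetical, for illustration only.
from collections import Counter
import math

# Training data: [hours online per day, posts per day] -> usage label
training_data = [
    ([1.0, 0.5], "casual"),
    ([1.5, 1.0], "casual"),
    ([6.0, 9.0], "heavy"),
    ([7.5, 12.0], "heavy"),
]

def predict(features, k=3):
    """Label a new input by majority vote of its k nearest training examples."""
    nearest = sorted(
        training_data,
        key=lambda example: math.dist(features, example[0]),
    )[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(predict([6.5, 10.0]))  # closest examples are "heavy" users
```

Everything the model "knows" comes from its training set, which is exactly why the origin and balance of that data matters so much for the ethical questions discussed below.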
Where does this data come from?
Well, it can come from almost anywhere on the internet, depending on the scope of the dataset. It can come from various private and public databases, which means user-generated content, such as posts on social media platforms, blogs, and so on, can also be mined.
The mining of user-generated data that was never intended for AI training raises serious ethical considerations.
A principal concern is privacy and data security. Data collected for AI must be handled under clear privacy-protection guidelines, and protecting individuals' privacy rights is paramount to prevent misuse of, or unauthorized access to, what has been collected. There is also the question of consent: do those posting on social media platforms automatically consent to having their data used to train AI?
Begüm: "When considering the collection of diverse data for training AI models, several ethical dilemmas come into play. One prominent concern is obtaining informed consent from individuals, recognizing their right to ownership over their personal data. Ensuring data security is paramount, as maintaining the trust of all contributors is essential. This concern is particularly heightened for underrepresented groups, where the potential for data manipulation or misuse poses unique risks. Safeguarding the data of marginalized communities becomes crucial to prevent any unintended biases or adverse impacts.
For example, rigorous measures should be taken to avoid using the data of such groups for predicting sensitive attributes. A comprehensive approach that respects individual rights, promotes transparency, and prioritizes data protection is vital to ethically navigate the complexities of diverse data collection for AI training."
There is a strong need for clear consent and transparent reporting on the data gathered for AI training. It has become a moral imperative for companies to guard sensitive information and maintain their users' trust. No one wants to feel like a 'guinea pig' for the next generation of GPT models.
However, the best way for AI to learn is to analyze a plethora of real-world, user-generated sources in order to capture the 'uniqueness' that makes us all human. Nabanita speaks about the relationship between that human uniqueness and privacy: "In terms of privacy, what I feel is: if you don't put forward your uniqueness into a conversation, it won't get acknowledged. I know we're private about who we are, but if you don't bring that forward, people will not know it and hence appreciate it. So that's one way of looking at privacy in relation to AI, and diversity and inclusion. I do think that sometimes being too private takes away the uniqueness or the wow factor of your diversity from people's eyes and they do not appreciate your uniqueness. To be able to build these into technology and AI, we need to embrace it first as humans."
Assuming the best-case scenario, well-trained, ethical AI should uphold principles such as fairness, non-discrimination, and accountability at all times. However, AI is only as 'smart' or 'human' as the datasets it was trained on, so AI systems have the potential to perpetuate biases or discriminate against certain groups.
In recent media coverage, AI has drawn strong opposition from some quarters, with anti-AI groups using fear-mongering to label it a 'threat' to humankind.
However, Begüm refutes this mindset, saying: "AI is often pictured as a threat in media, however this is not necessarily true. When developed responsibly and with the right intentions, AI can serve as a powerful tool for positive societal impact. Take, for instance, the potential of well-trained AI models, grounded in representative and unbiased data. These models can contribute to more equitable decision-making processes compared to human biases.
AI's potential to tackle systemic discrimination and advance social justice is undeniable. When guided by ethical considerations, diverse datasets, transparency, and a commitment to inclusivity, AI can indeed be a force for positive change, contributing to a fairer and more equitable world."
Nabanita stresses, however, that AI must be continually re-trained if its potential to improve society is to be unleashed: "I think the role of AI is massive in this space, because AI is fueled by all the different kinds of texts, articles and everything written on the internet. Therefore, it's upon us to design systems that consider the quirks of all these different aspects, but it's not easy. At the end of the day, AI is not the same as human intelligence. So it has to be re-trained continuously. I think these little steps are making a huge difference for us as a society, and through diversity and inclusion, it's making society a safer place for different kinds of people, given we keep the momentum into the future."
Some questions remain as we ponder the ethics of AI. Begüm mentions the controversy of AI-generated images used in pornography, also known as deepfakes, and the question of responsibility: "As an example related to this, what happens when GANs are used to produce nude images of people? Women are more exposed to this risk. When this happens, how do we label this image as AI-generated? Who will be held accountable for distribution of such an image?"
This also raises the questions 'Can we hold a machine or tool liable for its creations?' and 'How can we, as human beings, ensure AI is used by others in a morally acceptable way?'
Though we may not have all the answers now, pondering and innovating around these problems is imperative to ensure AI is developed for the betterment of both science and humankind.
Thank you for reading our first post in the D&I and AI series. Next week we will investigate the topic of bias. Please stay tuned and see you next time!
Having recently finished an internship in Human Resources, Amanda Mola is a Diversity and Inclusion Resource Manager at Diversein, creating content about the D&I industry and fostering inclusive intelligence. You can connect with her here: https://www.linkedin.com/in/amanda-mola-977282155/