Generative AI: What are its Ethical Impacts? – Estelle L

Over the last year, the rise of Generative AI has been unmissable. By now, I am sure we are all familiar with ChatGPT, OpenAI, or Google’s Gemini, to name a few. Traditional AI has been around for years, analysing existing data to solve problems and perform tasks. Examples would be facial recognition, voice assistants like Siri or Alexa, and even Google’s search algorithm. These AIs follow a set of rules, but they do not do, or create, anything new.

Generative AI creates entirely new content – images, text, or maybe even your homework – from a simple prompt, which I do not condone, for any teachers reading! This AI learns patterns in its training data to generate new material, which explains its ever-expanding use, often in content creation or design. Given how much we have all heard about the growth of AI, this is likely common knowledge for a large number of people. I can also comfortably admit that this just about reaches the limit of my knowledge of AI. I do not claim to be an expert, nor am I excessively familiar with programmes like ChatGPT. However, beyond the borders of the creative capabilities of AI lies another truth waiting to be uncovered – just how ‘ethical’ is it?

Above: AI-generated image (found on Google)

One controversy surrounding AI you may already be familiar with concerns copyright. There has been an outcry over the infringement of the copyright of thousands of authors whose books have been used to train AI technology without permission. In the US, this has led to leading artificial intelligence company Anthropic agreeing to pay out $1.5 billion to authors and publishers. However, this payout does not undo the company’s illegal acquisition of millions of books from online libraries, including pirated works.

Another concern is the possible political impact of Generative AI, as ‘bots’ have been used to post politically divisive comments on social media, driving up engagement on posts. This is reminiscent of a tactic known as ‘rage baiting’, where users – not necessarily AI – post or comment controversial opinions to gain clicks and engagement.

Above: An AI data centre

Most concerning of all, however, is AI’s extreme carbon footprint. Training a single large natural language processing model (one form of AI model) emits around 600,000 pounds of CO2 – roughly equivalent to 150 flights from London to Sydney, Australia. The environmental footprint of this is massive. AI is trained in data centres: large, temperature-controlled buildings that house computing infrastructure and require vast amounts of water to cool down. Amazon alone has more than 100 data centres worldwide, and Generative AI is only adding to the demand.
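For the curious, the two figures quoted above can be sanity-checked with a quick unit conversion. This is only a rough sketch: both numbers are taken as-is from the comparison above, and the implied per-flight emissions are an estimate, not a measured value.

```python
# Rough sanity check: does 600,000 lbs of CO2 line up with ~150
# London-to-Sydney flights? (Both figures are the ones quoted above.)

LBS_PER_TONNE = 2204.62   # pounds in one metric tonne

training_lbs = 600_000    # quoted CO2 to train one large NLP model
flights = 150             # quoted equivalent number of flights

training_tonnes = training_lbs / LBS_PER_TONNE
per_flight_tonnes = training_tonnes / flights

print(f"Training total: ~{training_tonnes:.0f} tonnes of CO2")
print(f"Implied per flight: ~{per_flight_tonnes:.1f} tonnes of CO2")
```

The conversion gives roughly 270 tonnes of CO2 in total, or just under 2 tonnes per flight – plausibly in the ballpark of published per-passenger estimates for a long-haul flight, so the comparison holds together arithmetically.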

While AI is certainly helpful, even adding a ‘please’ or ‘thank you’ to your prompt costs more processing, and therefore more water. While we cannot fix the illegal management of AI ourselves, or prevent copyright infringement, it is a simple ask to rely on AI a little less. It is unlikely that most people would completely stop using Generative AI, but is asking ChatGPT for a recipe you could find in a book, or to write your friend group into Love Island, really necessary given its cost to the environment? As we head further down a path of technological development and innovation, it is vital that we remain cautious and aware of our surroundings and environment before throwing ourselves headfirst into something we do not understand.