March 4, 2024

Afrispa


The programmer who used GPT-3 to “resurrect” the dead was penalised by OpenAI, but was this just?


Machine-learning algorithms are upending how we conduct our lives and challenging our moral and societal ideals. Virtual assistants, news recommendation systems, risk-prediction tools, and data-driven hiring systems are among the new technologies that threaten the privacy of the home and shape our understanding of the world. Yet the ethics of machine learning remain far from clear to many people.

My search for articles on the subject, for the UCLouvain Ethics and Information and Communications Technology course, was sparked by my fascination with the story of Joshua Barbeau, a 33-year-old Canadian man who used a website called Project December to create an artificial intelligence (AI) that could converse with his deceased fiancée, Jessica.

Animated robots that speak like deceased humans

Barbeau was able to communicate with an artificial “Jessica” via a type of chatbot known as a deadbot. When I looked for information on this topic, I was surprised to find that it is seldom discussed from a normative perspective: why would it be right or wrong, morally acceptable or morally repugnant, to develop a deadbot?


Allow me to provide some background before I attempt to answer these questions. Jason Rohrer, a game developer, launched Project December to let users pay to personalise chatbots with the personality they wish to converse with. The project was built on GPT-3, an artificial intelligence language model developed by OpenAI. Barbeau’s case sparked a disagreement between Rohrer and OpenAI over the latter’s ban on using GPT-3 for sexual, romantic, self-harm, or bullying purposes.

Rohrer accused OpenAI of being morally intolerant, arguing that people like Barbeau were “consenting adults,” but ultimately shut down the GPT-3 version of Project December.

Despite the fact that we may all have strong feelings about whether or not developing a machine-learning deadbot is morally acceptable, articulating its consequences is not a simple undertaking. As a result, it’s critical to take a methodical approach to addressing the case’s ethical dilemmas.

Is Barbeau’s approval sufficient to proceed with Jessica’s deadbot development?

Given that Jessica was a real (though deceased) person, Barbeau’s consent to the development of a deadbot mimicking her does not seem sufficient. Even in death, humans are not mere objects that others may use as they see fit. Dishonouring the memory of the deceased is considered a grave offence in our society. If we accept that death does not mean a person ceases to matter morally, then we have moral duties towards the deceased.

Similarly, whether we should protect the fundamental rights of the deceased (e.g., privacy and personal data) remains an open question. Building a deadbot that can impersonate someone requires gathering large amounts of personal information about them, such as their social network data (see Microsoft’s or Eternime’s proposals).

After all, if it is not acceptable to use people’s data without their consent while they are alive, why should it become acceptable once they are dead? In building a deadbot, then, it seems logical to ask for the consent of the person whose personality is being replicated, in this case Jessica.

Does the consent of the person being emulated give a green light?

But then: would Jessica’s consent be sufficient to make the development of her deadbot ethical? What if it harmed her memory?

There is little doubt that the boundaries of consent are a contentious subject. Consider, as a paradigmatic example, the “Rotenburg Cannibal,” who was sentenced to life in prison even though his victim had consented to being eaten. Ethicists have argued that it is immoral to consent to acts that may harm us, whether physically (such as selling one’s vital organs) or abstractly.

How anything can harm the dead is an especially complicated question that I will not analyse in detail here. It is worth noting, however, that even though the dead cannot feel hurt or offended in the same way as the living, this does not mean they are invulnerable to wrongful acts. Posthumous smear campaigns and other displays of contempt can damage the deceased’s honour, reputation, and dignity. Moreover, disrespect towards the dead contributes to a society that is less just and less respectful of human dignity in general.

Because machine-learning systems are highly malleable and unpredictable, any consent the emulated person gives (even while still alive) amounts to little more than a “blank cheque” on the system’s possible courses of action.

Accordingly, if the deadbot’s development or use fails to match what the emulated person consented to, their consent should be considered invalid. It should likewise be considered unethical if it blatantly and knowingly infringes on their dignity.

Who bears the brunt of the blame?

There’s also the question of whether or not AI systems should aim to emulate any particular characteristic of humans (irrespective here of whether this is possible).

This is a long-standing issue in AI debates, one that the dispute between Rohrer and OpenAI revives. Developing artificial systems capable of, for example, caring for others or making political decisions is significant because these are capacities that seem to distinguish humans from other animals and from machines. It is therefore crucial to bear in mind that instrumentalising AI for techno-solutionist ends, such as replacing loved ones, may devalue what makes us human.

Finally, who is responsible for the outcomes of a deadbot, especially when they are harmful?

Imagine that Jessica’s deadbot had permanently damaged Barbeau’s mental health. Who would be responsible? AI experts approach this thorny question in two main ways: first, responsibility falls on those involved in the design and development of the system, insofar as they do so according to their particular interests and worldviews; second, because machine-learning systems are context-dependent, moral responsibility for their outputs should be distributed among all the agents interacting with them.

I lean towards the first position. In this case, because the deadbot was co-created by OpenAI, Jason Rohrer, and Joshua Barbeau, I believe it is reasonable to examine each party’s degree of responsibility.

Since OpenAI explicitly forbids using its technology for sexual, romantic, self-harm, or bullying purposes, it would be difficult to hold the company accountable.

Rohrer, by contrast, devised the technology that made the deadbot possible, did so without anticipating measures to prevent negative outcomes, knew it failed to comply with OpenAI’s guidelines, and profited from it. It is therefore plausible to assign him a significant share of moral responsibility.

By the same token, Barbeau could share the blame if the deadbot were customised in ways that tarnished Jessica’s memory.

Under some circumstances, it is acceptable

To return to our opening question, whether developing a machine-learning deadbot can be ethical, the answer is yes, but only if these conditions are met:

The system must not be developed or used in ways that depart from what the imitated person consented to, and those who develop it and profit from it must take responsibility for its potential negative outcomes, both retrospectively, by accounting for events that have already occurred, and prospectively, by actively preventing them from occurring in the future.

When it comes to machine learning, ethics matter. An open public discussion is vital for better informing citizens and for developing policy measures that make AI systems more transparent, socially fair, and compliant with fundamental rights.
