Machine Patiency and the Ethical Treatment of Artificial Intelligence Entities
Recent years have seen explosive growth in artificial intelligence (AI), and with it a body of ethical concern focused on AI's implications for humans. This paper instead examines our conduct towards AI systems. Machine patiency is the notion that humans may acquire moral obligations towards AI systems if those systems become sentient. Five bodies of knowledge are surveyed to map the landscape for future machine patiency research: (1) the history of human encounters with sentient others, (2) topics from the philosophy of mind, (3) topics from moral philosophy, (4) the work of specialists who study AI and ethics, and (5) the nascent field of complexism. The paper closes with a provisional affirmation of machine patiency as plausible, on the grounds of both natural charity and rational non-contradiction.
Deep neural networks have become remarkably good at producing realistic deepfakes: images of people that, to the untrained eye, are indistinguishable from real photographs. Deepfakes are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples the system deems realistic. This paper, and the resulting series of artworks Being Foiled, explore the aesthetic outcome of inverting this process: optimising the system instead to generate images that it predicts to be fake. This maximises the unlikelihood of the data and, in turn, amplifies the uncanny nature of these machine hallucinations.
This paper argues that generated photographic media, such as “deepfakes,” intensify the ambiguity of the digital image, causing widespread shifts in perceptual orientations. The operational structure of the generative adversarial network suggests that the dataset is a central component in the development of this type of face-to-face image translation. Analyzed both from a photorealistic standpoint and from the perspective of the mutability of digital imagery, the dataset is found to be the key element in understanding the ambiguous nature of the deepfake. It is proposed that embracing the plasticity of the image can offer a new approach to combating the problems that have emerged around deceptive visual media.
Inaugurated as a deep learning application for voice synthesis, appropriated by “deepfake” users to create fake Trumps and Obamas and by artists to explore non-anthropomorphic possibilities, voice cloning is not only a technology but a new cultural and artistic practice. By subverting the relation between voice and subjectivity, voice cloning affects the very notions of embodiment and truth. In the spirit of media-archaeological investigation, this paper explores the epistemic properties of voice cloning, analyzing its technical and cultural aspects and comparing them with those of its media ancestors, their messages, and their outcomes.