Artificial Imagination: Is it just an illusion? (1/2)

Artificial Imagination

As the present felt obsolete next to a new realm of AGI everywhere, I started wondering about the possible consequences of Artificial Imagination and Intelligence for human creativity, and subsequently for the arts in the post-Anthropocene. From the technical realm of Samuel Doogan’s computational work to the critical positioning of Johannes Bruder’s research, these discussions inquire into questions such as: Could AI create art? What would be the philosophical, political and sociological "consequences" of such progress? And could this change our anthropomorphic vision of creativity? Hoping you will enjoy the speculative ride.

Video showing “A journey through all the layers of an artificial neural network” by Johan Nordberg, based on the work of Google researchers Alexander Mordvintsev, Christopher Olah and Mike Tyka.

Samuel Doogan is undertaking a PhD in computational linguistics at University College Dublin, where he researches AI creativity in the field of linguistics through generated forms of written text such as stories, poetry, metaphors, film scripts, etc.

Juliette Pépin: Do you think an AI can be creative and ultimately make works of art?

Samuel Doogan: Well... We could imagine AI-generated art which is indistinguishable from human-made art, though I personally don’t think that is very likely. Rather, an AI could create works which are just as valuable yet totally different from traditional forms. Such an AI would create things that a human would not even be capable of conceiving, and thereby initiate a completely new form of art. But tell me a bit more about what you mean by creativity?

JP: Well, if you could legitimately prove that an AI can be creative, could it then have a form of subjectivity applied to a creative practice? For instance, I often cross my knowledge irrationally to create new works; could an AI also achieve such “irrational” types of association?

SD: That’s interesting, and it happens that I have a colleague who works on this. Basically, and not to get too technical, he is training a neural network that combines word and image vectors in a generative language model which tries to create things like metaphors. You also have systems based on a purely descriptive form of world ontology which try to merge two things together. For instance, the “Horse-bird” experiment consists of asking an AI to combine the ontology of a horse with the ontology of a bird and ultimately create something close to a Pegasus. To achieve that, the system uses a genetic algorithm assessing which “blendings” of the bird and horse are best. To some extent it is criticizing itself to come up with the best solution. Still, we have a long way to go before it reaches the level of subtlety that humans use when self-assessing.
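The genetic algorithm Doogan describes can be sketched in miniature. This is purely illustrative: the attribute dictionaries, the fitness function and all names here are invented for the example, whereas the real “Horse-bird” system works over a formal ontology with far subtler scoring.

```python
import random

# Hypothetical attribute sets; the real system uses a formal ontology.
HORSE = {"legs": 4, "wings": 0, "size": "large", "sound": "neigh"}
BIRD = {"legs": 2, "wings": 2, "size": "small", "sound": "chirp"}
KEYS = sorted(HORSE)

def random_blend():
    # A blend picks each attribute from either parent concept.
    return {k: random.choice((HORSE[k], BIRD[k])) for k in KEYS}

def fitness(blend):
    # Toy self-assessment: a pure horse or pure bird is not novel,
    # so balanced mixes of both parents score highest.
    from_horse = sum(blend[k] == HORSE[k] for k in KEYS)
    from_bird = sum(blend[k] == BIRD[k] for k in KEYS)
    return min(from_horse, from_bird)

def evolve(generations=50, pop_size=20):
    pop = [random_blend() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            # crossover: each attribute inherited from one parent blend
            child = {k: random.choice((a[k], b[k])) for k in KEYS}
            if random.random() < 0.1:  # occasional mutation
                k = random.choice(KEYS)
                child[k] = random.choice((HORSE[k], BIRD[k]))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # e.g. a large four-legged creature with wings
```

The loop of scoring, selecting and recombining is what the interview calls the system “criticizing itself”: candidate blends that the fitness function judges poorly are simply bred out.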

Horse Bird  
Image showing a human-made Pegasus in relation to the “Horse-Bird” experiment.

JP: But how can you judge an AI’s creativity? How is the value of its “art” assessed?

SD: You first need to define what “creativity” is, and there is a lot of debate about whether such a thing can be described and used by a computational system. The two most commonly accepted criteria so far are novelty and quality. Regarding novelty, many would argue that making something completely new is not enough. I could go and do something random and it would be novel, but it could still be bad and uncreative. Yet, by trying to find ways to address and compute these questions, a computer could eventually assess how creative something is. What I am working on now is an AI which looks at films and aims to compute their novelty by using the descriptions and reviews provided online by platforms such as IMDb or Wikipedia. It places the analyzed film in a space where it assesses it by comparison with the collected data. Theoretically, it is judging whether a film is being creative or not.
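The idea of placing a film in a space and scoring novelty by comparison can be sketched with a deliberately crude stand-in: bag-of-words vectors and cosine similarity. The vectorizer, corpus and `novelty` function below are assumptions for illustration; Doogan’s actual system builds far richer representations from IMDb and Wikipedia data.

```python
import math
from collections import Counter

def vectorize(text):
    # Toy bag-of-words vector; a real system would use richer
    # representations learned from descriptions and reviews.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty(description, corpus):
    # Novelty as distance to the nearest neighbour in the corpus:
    # the less a film resembles anything seen before, the more novel.
    sims = [cosine(vectorize(description), vectorize(d)) for d in corpus]
    return 1.0 - max(sims, default=0.0)

corpus = [
    "a detective hunts a serial killer in a rainy city",
    "a detective investigates a murder in a small town",
]
novelty("a detective hunts a killer in a rainy city", corpus)   # low novelty
novelty("sentient plants negotiate peace with deep sea robots", corpus)  # high novelty
```

As the interview notes, novelty alone is not enough: the second description scores as maximally novel here precisely because it shares nothing with the corpus, which says nothing about whether it is any good. That is why quality must be assessed separately.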

JP: This is maybe a form of skepticism, but isn’t the data used for training AI a somewhat unfair representation of the world?

SD: I would argue that the same criticism can be addressed to humans, in the sense that what we teach and learn is data, so it goes the same way with training an AI. It is our responsibility to train AI with content we think shapes our world. On the other hand, once we train an AI to get a better understanding of the world by itself, we will need to frame its independence for it to give a fair representation of the world. I suppose what I am trying to say is that if some tiny village in a sub-Saharan country does not have access to this global phenomenon, we are also responsible, and the question is where the bias comes in. Do we have a greater overview because we have more access to information than a computer does, or are we more swayed by this weight of knowledge than a computer might be?


Image from the paper “A Neural Algorithm of Artistic Style” by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge. This example illustrates the common use of Western art in training data sets.


JP: In the “quantity over quality” way of processing data, some aspects of the world might end up being underrepresented in big data. How do you acknowledge this factor in your training process?

SD: I mean, it depends. If you train an AI by feeding in every single piece of art that you somehow managed to digitize, then there is going to be a bias toward Western art in that data set. But I think you have to be careful as well, in the sense that there are as many dimensions to art as there are to language itself. For instance, if you are training a translation system you will first train it on one language to then be able to apply it to others. And I think it makes sense to separate these things, to avoid the prejudices of quantity and in the different types of input we might choose to feed in. So again, human responsibility is essential in this regard.


Google, Amazon, Microsoft, Facebook, & IBM form “Partnership on Artificial Intelligence to Benefit People & Society”


JP: With such initiatives as the Partnership on AI, how do you deal with the “corporate” monopoly on data?

SD: I think there is a double warning there. The first is that we provide these companies with data willingly, so there is something that can be done at the consumer level. Yet I remain optimistic, as technology also has the power to democratize, especially with inventions such as open-source software, blockchain and cryptocurrencies, which might help us deviate from this need to exchange information for a service. On the other hand, it is the way data is used by corporations to “sell” things to consumers which is problematic. Then again, data has quite recently started to be used for things other than marketing. There is, by the way, a great introductory text called “Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are”, in which the author Seth Stephens-Davidowitz took Google data and used it in a sociological, almost anthropological manner. But I am indeed worried about the way data is used and traded by these companies, especially as legislation is having a very hard time keeping up with the advancement of technology. In this sense, both art and politics should consider addressing the matter in the upcoming years…