Ghostwriting with AI: Who is the Author of the Text?

Category Machine Learning

tldr #

A team led by media informatics expert Fiona Draxler at LMU's Institute for Informatics has investigated questions concerning authorship and ownership of texts generated with large language models (LLMs). In an experiment involving postcards, the results showed how perceived ownership changes depending on whether the text is written with or without the help of an LLM. In addition, the authors call for ways to encourage transparent declaration of AI contributions to a text, in order to maintain its credibility and the readers' trust.


content #

Large language models (LLMs) radically speed up text production in a variety of use cases. When they are fed with samples of our individual writing style, they are even able to produce texts that sound as though we ourselves wrote them. In other words, they act as AI ghostwriters creating texts on our behalf.

As with human ghostwriting, this raises a number of questions on authorship and ownership. A team led by media informatics expert Fiona Draxler at LMU's Institute for Informatics has investigated these questions around AI ghostwriting in a study that was recently published in the journal ACM Transactions on Computer-Human Interaction.

LLMs are mainly used for speed and efficiency when creating texts

"Rather than looking at the legal side, however, we covered the human perspective," says Draxler. "When an LLM relies on my writing style to generate a text, to what extent is it mine? Do I feel like I own the text? Do I claim that I am the author?"

To answer these questions, the researchers, who are experts in human-computer interaction, conducted an experiment in which participants wrote a postcard with or without the help of an AI language model that was (pseudo-)personalized to their writing style. The researchers then asked the participants to publish the postcard via an upload form and to provide some additional information, including the author and a title.

To determine a human's perspective of authorship, researchers conducted an experiment with postcards

"The more involved participants were in writing the postcards, the more strongly they felt that the postcards were theirs," explains Professor Albrecht Schmidt, co-author of the study and Chair of Human-Centered Ubiquitous Media. That is to say, perceived ownership was high when they wrote the text themselves, and low when the postcard text was wholly LLM-generated.

However, perceived ownership of the text did not always align with declared authorship. There were a number of cases in which participants put their own name as the author of the postcard even when they did not write it and also did not feel they owned it. This recalls ghostwriting practices, where the declared author is not the text producer.

Postcards were created with or without the help of an AI language model personalized to the participants' writing style

"Our findings highlight challenges that we need to address as we increasingly rely on AI text generation with personalized LLMs in personal and professional contexts," says Draxler. "In particular, when the lack of transparent authorship declarations or bylines makes us doubt whether an AI contributed to writing a text, this can undermine its credibility and the readers' trust. Transparency is essential in a society that already has to deal with widespread fake news and conspiracy theories."

Participants who wrote their postcards themselves felt more ownership of the text than those whose postcards were wholly generated by the LLM

As such, the authors of the study call for simple and intuitive ways to declare individual contributions that reward disclosure of the generation processes.

