
As we review and reassess the evolving situation with AI, we continue to revise our policy on acceptable use of the technology.
When ChatGPT first came out, its output was barely readable, and the writers who used it did so mostly for research, sparking ideas, outlining, and editing. We could see how some writers might find value in it. And since there was no way to know for sure whether a writer had used AI, or how much, we decided on a hands-off policy.
Plus, we’re a science fiction magazine. What’s more science fiction-y than AI?
But recent improvements in AI have shown that it can now produce quite readable stories without much human intervention. Because of that, our policy has shifted: we will no longer accept stories created using ChatGPT or other generative AI tools.
Last year, the winner of a prestigious Japanese literary award admitted that a portion of her science-fiction novel was written by AI.
The novel had been lauded as “practically flawless” by judges.
Then, a couple of months ago, ChatGPT passed a milestone benchmark for artificial general intelligence, a test designed to measure whether an AI has “human level” intelligence and can perform a broad range of cognitive tasks as well as an average human.
Meanwhile, publishers are both suing AI companies and embracing the technology. One company doing both is The New York Times, which has decided to allow newsroom staff to use AI tools for editing and writing, even as it pursues a copyright infringement lawsuit against ChatGPT maker OpenAI.
We don’t know what effect AI will ultimately have on authors, on publishers, and on magazines.
But we do know that a lot of writers are — very justifiably — concerned about this.
And we also know that our mission at MetaStellar is to connect writers and readers. Human writers with human readers.
To protect this mission, we are adding an attestation to our upcoming round of submissions. We will be asking contributors to attest that they have not used ChatGPT or other generative AI tools to write their stories.
Similarly, we want to protect human artists.
For that reason, we only use Adobe Firefly for our images. Adobe trained the tool only on fully licensed images and pays artists for their work. It is the only major company that does so, the only one approaching this technology ethically, and we applaud it for that.
Previously, we used Pixabay for stock images, and Pixabay does not compensate artists. For example, Tumisu, the artist who created the Pixabay image used for this article, asks for PayPal donations to support their work.
However, if any contributor doesn’t want to have a Firefly image accompanying their stories, we are more than happy to use any other image that they provide.
We are also using original, human-painted art on our anthology and book covers.
Love your statement about AI, its application to writing, and why you accept AI-assisted or AI-written stories. You make points I had not considered. Alas, I don’t use it. I write the old-fashioned way that Ray Bradbury taught me with the following, which I paraphrase: “You as a writer are a prism gathering the white light of experience, and in turn throwing your spectrum onto the page.”
I don’t use AI to write my books or stories, either — as of right now, at least, except for the usual stuff like grammar checkers — but I’m following the technology closely. Partly because it’s my day job, and partly because the world is changing and I’m too young to retire.
Over the past year, OpenAI has signed a number of contracts with content companies, like the Associated Press, to get legal training data. Adobe has trained its AI 100% on fully licensed data, and is already paying artists; the first payment went out last September. And we still have the lawsuits that are playing out, and, possibly, new regulations as well. So, eventually, I think the ethical and legal sides will be resolved, one way or the other.
But, for writers, using ChatGPT (or Claude 2, or any of the alternatives) to generate publishable text is still a VERY labor-intensive process. If you just put in a simple prompt, you’ll get a simple result that reads like a summary of a Wikipedia article or a children’s story, moral and all. It’s not readable, and not publishable. To get it to turn out something useful takes hours, even weeks, of massaging the prompts. Some writers even create their own fine-tuned models by turning their past work into training data sets in the form of JSON files — by that point, you’re practically a data scientist. Most just use it for brainstorming, grammar fixing, rewording, and suggesting rough drafts of paragraphs that they then extensively rewrite.
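(For anyone wondering what “turning past work into training data sets in the form of JSON files” looks like in practice, here’s a minimal sketch, assuming the JSON Lines chat format that current fine-tuning services generally expect. The prompt, story text, and file name are placeholders I made up.)

```python
import json

# Hypothetical illustration: each entry pairs a writing prompt with a passage
# from the author's own back catalog.
examples = [
    {
        "prompt": "Write an opening scene aboard a derelict orbital station.",
        "story": "The station had been silent for nine years when the hatch finally turned...",
    },
]

# One JSON object per line ("JSON Lines"), in the chat format commonly used
# for fine-tuning.
with open("my_training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "You write in this author's voice."},
                {"role": "user", "content": ex["prompt"]},
                {"role": "assistant", "content": ex["story"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```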
Hardly seems worth the effort, right?
But the thing is, the tools are getting better and more user-friendly all the time. I don’t know what’s going to happen to my profession, or whether I’m going to have a job in five years. Or even one year.
If we’re lucky, AI will do to writing what it did to chess — a few moments of panic, then a net positive. Computer chess allowed players from disadvantaged backgrounds to hone their skills. The internet allowed them to play against each other. Nobody wants to watch a computer play — but everyone wants to watch humans play. Chess has never been as visible or as profitable.
Maybe we’ll do something similar with writing. We’ll have YouTube channels where we discuss our process. Where our audience can watch us hand-write our books and then revise them. Where we read our stories out loud. It might force us to move away from formulaic, mass-produced page-turners and toward more quirky, more interesting, more personal narratives. It might make us better, more thoughtful writers. That’s my hope, anyway.
Are you getting sponsorship, consideration, or payment from Adobe and/or any other software company that uses AI to produce creative content?
No, we’re not getting anything from Adobe or any other AI-related company. Or any other company.
We’re funded purely by donations. Our Patreon link is here: https://www.patreon.com/MetaStellar
(We also get a couple of bucks a month from affiliate income from Amazon when we link to books and we now have a “join” button on our YouTube channel, but mostly the income is from direct donations.)
I run an AI business. So, I’m not averse to using AI; I’ve worked with it, mostly for creative ideas and research, for many years now. It works! As long as the writer understands that the final draft will be completely and totally human generated, I find that AI still speeds up a very laborious process a tad (writers are still a solopreneur business, believe it or not!), and I believe that if the mind behind the content is not human, then AI is virtually useless anyway. If you don’t understand the basics of content and what makes a story “work,” then you won’t be much good at using AI in its current iteration. But if you are both a great writer and a “final draft polisher,” I don’t see a problem with using AI to assist you through the very time-intensive process called “crafting.” To businesses everywhere, time is still “money.”
Two years ago — even last year — I would have agreed with you.
But the latest versions of gen AI — especially the new agentic systems — can replace the entire process.
Here’s how an agentic version of story generation would work (a rough code sketch follows the list):
* One AI generates a bunch of story ideas, based on current trends, news events, random facts, and a variety of other prompts
* A second AI reviews the list, and picks out the candidates most likely to catch reader attention
* Another AI picks the writing style best suited to tell that story
* Another AI, specially trained or prompted on outlining, creates an outline
* Another AI critiques that outline
* The previous AI revises that outline until the critic AI is happy
* Another AI, trained on the selected writing style, writes the story
* A story critic AI reviews it, and sends it back to the previous AI for rewriting until it’s satisfied
* The story is published, distributed on social media, or submitted to magazines via automation
* A marketing AI promotes the story via every channel
* Another AI analyzes which stories get the most likes or acceptances, and which stories don’t, and adjusts the selection and editing criteria.
* Then the cycle repeats
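To make that concrete, here’s a very rough sketch of what such a pipeline could look like in code. Everything in it is hypothetical: generate() is a stand-in for a call to whatever language model API you’re using, and the roles, prompts, and stopping condition are invented for illustration.

```python
# Hypothetical agentic story pipeline. generate() stands in for a real LLM API
# call; here it just returns a labeled placeholder so the control flow can be
# read (and run) end to end.
def generate(role: str, prompt: str) -> str:
    return f"[{role} output for: {prompt[:40]}...]"

def revise_until_approved(maker: str, critic: str, draft: str, rounds: int = 3) -> str:
    """Alternate between a maker AI and a critic AI until the critic approves."""
    for _ in range(rounds):
        notes = generate(critic, f"Critique this:\n{draft}")
        if "no major issues" in notes.lower():
            break
        draft = generate(maker, f"Revise to address these notes:\n{notes}\n\n{draft}")
    return draft

def write_story() -> str:
    ideas = generate("ideator", "List ten story ideas based on current trends and news.")
    pick = generate("selector", f"Choose the idea most likely to catch readers:\n{ideas}")
    style = generate("stylist", f"Pick the writing style best suited to this story:\n{pick}")
    outline = generate("outliner", f"Outline a short story.\nIdea: {pick}\nStyle: {style}")
    outline = revise_until_approved("outliner", "outline_critic", outline)
    draft = generate("writer", f"Write the story in the chosen style.\n{style}\n{outline}")
    return revise_until_approved("writer", "story_critic", draft)

print(write_story())
```

Publishing, promotion, and the analytics loop in the last few steps would bolt onto the end in the same way: more roles, more prompts, and a feedback signal that adjusts the selection criteria.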
With each iteration, the AI system gets smarter and better able to quickly pick up on audience tastes. Its promotions get better targeted.
We’re already seeing similar systems deployed for high-value use cases — software development, for example. Or sales and marketing. I think it will dramatically transform video game creation. Instead of needing a giant team to make a game, indie developers will be able to create one working on their own. We’re probably going to see a wave of startups powered mostly — or completely — by agentic AI. There’s not much action yet on the fiction writing side because there’s not much money in fiction.
And, today, it does take some skill to create this kind of agentic system. Next year? Or maybe next month? You’ll be able to ask ChatGPT to build you an agentic system from scratch.
AI is already good enough to create new stuff without human input. We’ve seen this in Go — AlphaGo came up with strategies that humans hadn’t discovered in hundreds of years of playing the game. We’re seeing it happen in science, with AI coming up with novel compounds and figuring out which ones are the most promising and should go to testing.
And, with fiction, AI doesn’t have to hit it out of the park with every story. Even if only 1% of its stories are good, that doesn’t matter because the AI can generate a million stories, and a hundred different marketing campaigns for each one, then double down on the stuff that works.