The Strange Dreams of an Artificial Intelligence

Finn Janson
Apr 14, 2019
‘Dreaming dreams no mortal ever dared to dream before.’

When I heard that an AI could be trained to create its own writing, I hopped on my computer to see what it could do. In essence, the software takes in any body of text and generates new writing that resembles the style of the original author.

The first model I used was a recurrent neural network (RNN). In many ways, the process was like watching a child learn language. The model knew nothing about words, let alone entire narratives. After I trained it on a large body of my own fictional writing, the results were fascinating.

It was a tapestry of all my ideas. Characters from one story were juxtaposed with the settings of another. My phrases and expressions were spliced and blended with one another. It seemed like I was scrolling through the world’s greatest plagiarism project.

[Image: my own writing, sequenced into a format that the model could use.]
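For readers curious what that sequencing step looks like: here is a minimal sketch of preparing text for a character-level model, which the early gibberish samples suggest this was. The corpus and window length are illustrative stand-ins, not my actual settings.

```python
# Minimal sketch: slice raw text into fixed-length character windows,
# each paired with the next character the model should learn to predict.
# The corpus and window length are illustrative, not the actual ones used.
text = "the rain fell on the grey rooftops of the sleeping town"

chars = sorted(set(text))                     # the model's "vocabulary"
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 10
inputs, targets = [], []
for i in range(len(text) - seq_len):
    window = text[i:i + seq_len]              # 10 characters of context
    inputs.append([char_to_idx[c] for c in window])
    targets.append(char_to_idx[text[i + seq_len]])  # the character to predict
```

Each (input, target) pair teaches the model which character tends to follow a given context, which is all that the early, word-inventing samples reflect.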

You can adjust how long to train the model for; as you’d expect, longer training generally gives better results. A full pass over the training data is called an epoch.
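To make that concrete, here is a toy training loop, with a tiny two-parameter linear model standing in for the RNN. Everything in it is illustrative; the point is only what an epoch is and why more of them helps.

```python
# Toy illustration of epochs: one epoch = one full pass over the data.
# A two-parameter linear model stands in for the RNN here.
data = [(0, 1), (1, 2), (2, 3)]   # (input, target) pairs drawn from y = x + 1

w, b = 0.0, 0.0                   # the untrained "model": y = w*x + b
lr = 0.1                          # learning rate

for epoch in range(1000):         # more epochs = more passes over the data
    for x, y in data:
        err = (w * x + b) - y     # how wrong the model currently is
        w -= lr * err * x         # nudge parameters to reduce the error
        b -= lr * err

print(round(w, 2), round(b, 2))   # approaches 1.0 1.0: the pattern is learned
```

After one pass the toy model is still far off; after a thousand it has absorbed the pattern in its data, much as the RNN gradually absorbs the patterns of the prose.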
Allow me to show you the difference between 3 Epochs and 1000 Epochs:

3 Epochs
“You reeved parell wares als velated a for ligat. detinglre befor, Downs “ En mon hid on! I smuck and the gowerr, that has grout this drisss of I hevan, dag ints Gous, with a bidlo, a jienty to foar, and fil fore, and phod relferess, like shooning cospelf the grome whor with the knys hiss as whice Thech not sasked iftally.”

It reads like nonsense, and is kind of chilling in a way. Then again, most people describe my writing that way, so perhaps this AI is further along than I give it credit for.

1000 Epochs
“All then as them of ears were stuck on the black formles and books to contricing companions, flystar like the bed, Something could ?” in and ordfurish agreem. “How more, something compitate to a rived reluctankeature of the bus, she was simply from my volmur of its next of his kind, what hearing bitch, just goil a lost books. It’s not or spired into vigion, his centrols of there. A peered the void.”

As you can see, there are some significant differences:
The barely-trained model produced few actual words, with hardly a trace of lucid language. After 1000 epochs, the text drew on a broad English vocabulary and used conjunctions to join clauses. It even feels like you’re reading my writing. It’s a sort of uncanny-valley, stream-of-consciousness kind of reading. The best kind.

I found these results interesting but rudimentary, so I decided to move on to a more capable language model.

On the 14th of February 2019, OpenAI released a new natural-language model called GPT-2. It leaves the RNN above trailing in the dust; it’s on a whole new level of sophistication. Not only is its syntax much more natural and consistent, but so are its themes and content. Finally, the AI could produce a coherent narrative. Before, it simply threw words together, which made for amusing but primitive text.

I decided to fine-tune it on my favourite writer’s short stories. The writer is Thomas Ligotti, who works in a genre called ‘Philosophical Horror’.
The short-story collection is Teatro Grottesco: a set of fictional prose narratives, about 95 thousand words in total.

One of the first sentences the AI generated was the following:

“It was around this time that I was trained to dream short fiction, an activity that I had always excelled at despite the strange and unendurable destiny which I had been trained to pursue.”

Indeed, the GPT-2 model had been trained to create short fiction. It seemed as though the model was using metaphorical language to describe itself, and rather accurately, too. In a way, it was dreaming: connecting ideas it had only a limited ability to contextualize.

Of course, the sentence is no more than a coincidence. The writer’s fiction is already surreal and introspective, so it’s no surprise that something like this arises. Statistical probability is all that’s going on inside GPT-2. Then again, our brains work in a similar way: we recognize patterns and deploy them appropriately… when we do it in our everyday lives, we simply label it consciousness. How can we look at AI, with its ever-growing advancements in language and cognition, and deny it the right to be called conscious? It’s an unsolvable question, but worth asking nonetheless.
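To make ‘statistical probability’ concrete, here is a toy next-word predictor. It is nothing like GPT-2’s actual architecture, but it shows the same underlying idea: continue a text with whatever most frequently followed that context before. The corpus is an invented stand-in.

```python
from collections import Counter, defaultdict

# Toy stand-in for language modelling: count which word follows which,
# then predict the most frequent continuation. GPT-2 is vastly more
# sophisticated, but it too outputs probable continuations of a context.
corpus = "the dream was strange and the dream was dark and the night was strange"

follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Most frequent word observed after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("the"))   # → "dream" (seen twice after "the", vs "night" once)
```

Scale those counts up to billions of contexts and parameters, and sentences like the one above start to fall out.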

The fact remains: we are making tremendous progress in the artificial recreation of language. Before long, AI will string together language indistinguishable from an everyday speaker’s. It will be used to generate news, write advertisements, and create its own stories. It seems that yet another hobby we thought exclusive to us is being adopted by AI.

There will come a time when we look back on the growth of Artificial Intelligence. We’ll wonder when exactly it reached a human level of intelligence. The answer may be manifold. If we admit that the brain gives rise to the Mind, then we might say that the Mind gives rise to consciousness. There’s no doubt that language is a fundamental aspect of the Mind. Perhaps our language can reflect our consciousness, and perhaps language is how the AI will demonstrate its own consciousness.
