The power of common language

We keep discovering interesting things about how Slayer works. A couple of updates from our end:

- We’ve begun building data infrastructure and a large dataset for platforms like email and LinkedIn – creating engaging text there excites us because of the implications for marketing and sales teams.

- We’re making some novel breakthroughs in our approach to text permutation. It’s all still conceptual, so it’s too early to say whether the results are worth sharing, but we’re excited!

One thing we’ve found with our Twitter model (access available on request) is that there’s power in mirroring common language. This is something behavioral psychologists have noted as well (the power of mimicry), but it’s neat to see how Slayer picks up on common phrases that delight people.

“Don’t mind if I do” was a string of text the model particularly liked.
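We haven’t published Slayer’s internals, but the general idea of surfacing phrases like this can be sketched with a simple n-gram frequency count. Everything below (the function name, the toy tweets) is illustrative, not our actual pipeline:

```python
from collections import Counter

def common_phrases(texts, n=3, top=5):
    """Return the `top` most frequent n-word phrases across the texts."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts.most_common(top)

tweets = [
    "don't mind if I do",
    "well, don't mind if I do!",
    "don't mind if I pass",
]
print(common_phrases(tweets, n=3, top=2))
# → [("don't mind if", 3), ('mind if i', 3)]
```

A real system would normalize punctuation and filter stopword-only phrases, but even this naive count shows how a recurring string of text bubbles to the top.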

The limits of Large Language Models

This got us thinking. Common language and phrases evolve over time – we’re excited to be building Slayer on a dynamic infrastructure, so our models can evolve with the changing English language.

Unfortunately, many large language models simply ingest too much data, take too long to train, and are too static to evolve with the English language. One of the major hurdles we see for NLP in the long term is keeping up with how fast language changes… especially in the meme era!
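One way to picture what “evolving with language” means (this is a hypothetical sketch, not how Slayer or any particular model actually works) is to keep phrase statistics with exponential decay, so yesterday’s slang gradually loses weight as new batches of text arrive:

```python
from collections import Counter

class DecayingPhraseCounts:
    """Phrase counts that fade over time, so stale slang loses influence.

    `decay` multiplies every existing count each time a new batch arrives
    (0.5 means old usage halves in weight per batch).
    """
    def __init__(self, decay=0.5):
        self.decay = decay
        self.counts = Counter()

    def update(self, phrases):
        # Age existing counts, then fold in the new batch.
        for phrase in self.counts:
            self.counts[phrase] *= self.decay
        for phrase in phrases:
            self.counts[phrase] += 1.0

    def top(self, k=3):
        return self.counts.most_common(k)

tracker = DecayingPhraseCounts(decay=0.5)
tracker.update(["on fleek", "on fleek", "lit"])   # an older batch
tracker.update(["no cap", "no cap", "no cap"])    # a newer batch
print(tracker.top(2))
# → [('no cap', 3.0), ('on fleek', 1.0)]
```

A static model trained once is like a counter that never updates; the contrast is the point, not this particular decay scheme.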

Because in 50 years, GPT-3 (as amazing and incredible a human achievement as it is) will sound like your grandfather, and companies built on top of such models will rely on their ability to continually produce compelling text… maybe that’s where Slayer comes in?