Language models struggle to balance multiple, potentially contradictory instructions when it comes to sentence structure and flow.
Quick little thing I noticed when collabing with Claude on a blog post this week.
I'll often add a prompt about the voice and tone the AI should use when writing. For example,
Write in a style that balances formality with accessibility. Use a mix of sentence lengths, including some very short sentences for emphasis. Vary sentence structure to maintain reader engagement. Embrace occasional sentence fragments to create rhythm and impact. Use simple, direct language when possible.
This allows me to get more interesting writing that sounds more natural. By the way, you can iterate with an AI to get to the voice you want and then just ask it to create a prompt for you to use in the future. That's how I got this one.
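For context, here's a rough sketch of how a voice prompt like that gets used in practice, assuming the Anthropic Python SDK. The model name, token limit, and user request are placeholders, not what I actually used; the point is just that the voice instructions ride along as the system prompt.

```python
import anthropic

VOICE_PROMPT = (
    "Write in a style that balances formality with accessibility. "
    "Use a mix of sentence lengths, including some very short sentences for emphasis. "
    "Vary sentence structure to maintain reader engagement. "
    "Embrace occasional sentence fragments to create rhythm and impact. "
    "Use simple, direct language when possible."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The voice prompt goes in the system field; the actual writing task is the user turn.
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    system=VOICE_PROMPT,
    messages=[{"role": "user", "content": "Draft a short blog post about income diversification for creators."}],
)
print(message.content[0].text)
```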
What I noticed from this is that you can't really give it multiple instructions, especially seemingly contradictory ones, as was the case with this example.
When I asked it to use a mix of sentence lengths, I only got a bunch of short ones. The paragraph flow was completely gone.
Imagine an entire post written like this:
Income diversification is natural in this space. Ad revenue. Sponsorships. Digital products. Coaching services. Multiple streams mean greater stability. It's a hedge against economic uncertainty.
Chop slop.
Still, it was part of what I wanted, to a degree. I ended up using some of these phrases to switch up the flow, but there was no way the whole post could read in that voice. So I did the exercise with Claude again, changing the voice prompt to ask for longer, flowing sentences, and then pieced the post together from parts of each version.
Could this be an inherent limitation of the LLMs we've got now?
Thinking about how text is generated token by token, one forward pass through the LLM at a time, I think it might be. I think it's also the reason we can spot AI-generated text so easily. Variability, that organic unevenness, is the part that's missing.
LLMs are missing some meta-process, above and outside the token-generation loop, that could switch how generation behaves: one mode for normal sentences, say, and another for short sentences to switch up the flow.
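To make that concrete, here's a minimal sketch of what such a meta-process could look like if you faked it from the outside rather than inside the model, again assuming the Anthropic Python SDK. The two style modes, the alternation rule, and the outline are all hypothetical; the only point is that something above the token loop decides which voice governs each chunk, which is roughly what I ended up doing by hand.

```python
import anthropic

client = anthropic.Anthropic()

# Two hypothetical "modes" the meta-process can switch between.
STYLE_MODES = {
    "flowing": "Write this paragraph in longer, flowing sentences with smooth transitions.",
    "punchy": "Write this paragraph in short, punchy sentences. Fragments are fine.",
}

def write_paragraph(point: str, mode: str) -> str:
    """Generate one paragraph under a single style mode (illustrative only)."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=300,
        system=STYLE_MODES[mode],
        messages=[{"role": "user", "content": f"Write one paragraph about: {point}"}],
    )
    return response.content[0].text

outline = [
    "why creators diversify their income",
    "the main revenue streams available",
    "how diversification hedges against uncertainty",
]

# The "meta-process": a crude rule that alternates modes per paragraph.
post = []
for i, point in enumerate(outline):
    mode = "punchy" if i % 2 else "flowing"
    post.append(write_paragraph(point, mode))

print("\n\n".join(post))
```

Alternating per paragraph is obviously too blunt; a real version would need something subtler deciding when to shift voice. But even this crude stitching gets closer to the mix I wanted than a single prompt did.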