Language models struggle to balance multiple, potentially contradictory instructions when it comes to sentence structure and flow.
Comparison of GenAI diagrams from StabilityAI's models. Analyzes layout, visual distinctions, text quality, and accuracy. Highlights strengths and weaknesses, emphasizing the importance of model selection for technical visualizations.
Case study exploring the two leading text-to-image models from OpenAI and StabilityAI. We generate some images together and compare and contrast the results of the same prompt applied to each model.
We want to get better at the things we do when we use AI, not lean on it so heavily that our skills atrophy and we end up dependent and worse off than we were before using it.
Using Stability's Image Control Sketch route to turn a simple pencil sketch into something nearing real life.
This is the story of systems illustrated in three tweets.
It’s easy to get caught up in the AI hype. Instead of just asking what AI can do, we need to reflect on what AI should be doing.
And this isn’t just a theoretical exercise — it’s a question that will define the future of work, productivity, and innovation.
Quick little thing I noticed when collabing with Claude on a blog post this week.
I'll often add a prompt about the voice and tone the AI should use when writing. For example,
Write in a style that balances formality with accessibility. Use a mix of sentence lengths, including some very short sentences for emphasis. Vary sentence structure to maintain reader engagement. Embrace occasional sentence fragments to create rhythm and impact. Use simple, direct language when possible.
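One way to wire a voice-and-tone instruction like this into a Claude workflow is to pass it as the system prompt via the Anthropic Python SDK. A minimal sketch follows; the model id and token limit are placeholder assumptions, and the helper function is hypothetical:

```python
# Hypothetical sketch: supplying a voice/tone instruction as the system
# prompt for a drafting request to Claude. The model id below is a
# placeholder; check the current Anthropic docs for available models.

STYLE_PROMPT = (
    "Write in a style that balances formality with accessibility. "
    "Use a mix of sentence lengths, including some very short sentences "
    "for emphasis. Embrace occasional sentence fragments for rhythm. "
    "Use simple, direct language when possible."
)

def build_request(user_message: str) -> dict:
    """Assemble keyword arguments for client.messages.create()."""
    return {
        "model": "claude-3-5-sonnet-latest",  # placeholder model id
        "max_tokens": 1024,
        "system": STYLE_PROMPT,  # voice and tone live in the system prompt
        "messages": [{"role": "user", "content": user_message}],
    }

# The actual call (requires an API key), sketched for context:
# import anthropic
# client = anthropic.Anthropic()
# response = client.messages.create(**build_request("Draft an intro on AI art."))
```

Keeping the style instruction in the system prompt, rather than repeating it in each user message, means every turn of the collaboration inherits the same voice.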
Stability has three classes of text-to-image models, each with different price points and image capabilities. Let's explore the differences between them so we can better understand when to use one over the others.
In this case study, we'll look at the two leading text-to-image models from OpenAI and StabilityAI. These two AI companies have models serving a range of domains, but after language models, text-to-image seems to be the strongest use case for generative AI right now.
As creative industries evolve with the times, a new partnership is emerging – one between human artists and artificial intelligence. This collaboration is reshaping how we approach art, design, music, and storytelling. But what does this mean for creativity as we know it?
The goal of this study is to explore Stability's Control Sketch route, now available in the Stability API.
This route calls an image-to-image model that takes an image and a text prompt as input and returns a new image combining aspects of both the reference image and the prompt.
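A request to the route can be sketched as a multipart POST. The endpoint path and field names below follow Stability's v2beta REST API as I understand it; treat them as assumptions and verify against the current API reference before use:

```python
# Hedged sketch of a Stability Control Sketch request. Endpoint path and
# form-field names are assumptions based on Stability's v2beta REST API.

API_URL = "https://api.stability.ai/v2beta/stable-image/control/sketch"

def build_sketch_request(api_key: str, prompt: str,
                         control_strength: float = 0.7) -> dict:
    """Assemble headers and form fields for the multipart POST."""
    return {
        "headers": {
            "authorization": f"Bearer {api_key}",
            "accept": "image/*",  # ask for raw image bytes in the response
        },
        "data": {
            "prompt": prompt,
            # How closely the output should follow the sketch (0..1).
            "control_strength": control_strength,
            "output_format": "png",
        },
    }

# The actual call (requires an API key and a sketch file):
# import requests
# req = build_sketch_request("sk-...", "a photorealistic mountain cabin")
# with open("sketch.png", "rb") as f:
#     resp = requests.post(API_URL, files={"image": f}, **req)
# resp.raise_for_status()
# open("result.png", "wb").write(resp.content)
```

The `control_strength` knob is the interesting part for this study: lower values let the prompt dominate, while higher values keep the output closer to the original pencil lines.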