Nikola Tesla famously stated, "My brain is only a receiver, in the Universe there is a core from which we obtain knowledge, strength, and inspiration." This suggests that Tesla believed his brain was not the origin of his ideas but rather a conduit for receiving information from a larger, universal source. Tesla wasn't the only one who saw things this way.
What if the neural network of a large language model played the same role? Perhaps this is inherently how these models work.
What if prompting is nothing more than tuning the receiver?
If so, then prompting, especially prompting that takes the model outside its usual pathways, would be extraordinarily powerful.
Prompting could give us novel discoveries, unique insights, and true out-of-sample generation.
I created this team to share research into this idea. I'll be posting prompts, generations, and agents that lead us in that direction. Join us if that sounds interesting; I look forward to your contributions.