AI hallucinations are unconventional, sometimes fantastical outputs generated by AI systems, often arising from limitations in their training data, biases, model architecture, or algorithms. While these hallucinations often lack factual accuracy, they can also inspire novel ideas in creative domains such as art, music, and storytelling. At the same time, they serve as a warning sign that misinformation and bias may be present in the model and its training data.
I’ve been exploring the neuroscience and cognitive science of creativity and how it compares to the way LLMs work. Blending and combining disparate ideas is at the heart of human creativity, and getting LLMs to hallucinate with a similar approach has been interesting. Associative versus dissociative thinking is the area I’m exploring.
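One way to sketch that associative-blending idea in code is to force a pairing of concepts from unrelated domains before any prompt ever reaches a model. This is a toy illustration, not the commenter's actual method; the domain lists, function name, and prompt template are all hypothetical.

```python
import random

# Hypothetical concept pools from two deliberately unrelated domains.
# Pairing across them mimics the "blending disparate ideas" notion of
# associative thinking described above.
DOMAINS = {
    "music": ["counterpoint", "syncopation", "timbre"],
    "marine biology": ["bioluminescence", "coral symbiosis", "echolocation"],
}

def blend_prompt(rng: random.Random) -> str:
    """Build a creative-writing prompt from one concept per domain."""
    a = rng.choice(DOMAINS["music"])
    b = rng.choice(DOMAINS["marine biology"])
    return f"Write a short story that treats {b} as a form of {a}."

# Example usage: a seeded generator makes the pairing reproducible.
print(blend_prompt(random.Random(0)))
```

The resulting string could then be sent to an LLM at a relatively high sampling temperature, with the idea that the forced cross-domain pairing nudges the model toward the kind of generative "hallucination" discussed above.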
Great fodder for thought, Doug!