Using LLMs to Simulate Wild Thinking and Convergence

Introduction

Throughout history, many breakthrough discoveries have emerged not from strict adherence to established theories, but from bold, creative leaps — often by individuals who dared to ask strange questions or connect seemingly unrelated phenomena. From Faraday to Tesla, from Gödel to Josephson, unconventional thinkers have often walked the line between genius and madness. Reading Steven Strogatz's Sync, I was struck by how the pioneers of synchronization research — those who dared to see patterns in fireflies, pendulums, and neural networks — shared this same exploratory spirit. They looked for order in chaos, connections in apparent randomness. What if we could simulate some of that exploratory madness with today's AI models?

The Temperature Lever: Guiding the Model's Exploratory Behavior

Modern large language models (LLMs) like GPT, LLaMA, and Gemma expose a parameter called temperature. While it may sound like a physics term, here it controls how broadly the model explores possible outputs.

• At low temperatures (e.g., 0.2), the model is conservative: it strongly prefers high-probability tokens, sticking to well-established phrases, facts, or associations.
• At higher temperatures (e.g., 1.0+), the model becomes more exploratory: it samples from lower-probability, but still valid continuations. Instead of favoring only the most likely next token, it considers a wider range of possibilities — exploring looser semantic relationships that may still be meaningful. This broader sampling allows for unexpected connections and creative leaps, mimicking the kind of divergent thinking often associated with brainstorming or unconventional insight.

This doesn’t mean the model becomes random. Even at high temperatures, its choices are grounded in its training data. But it begins to draw from the “longer tail” of plausible associations, linking concepts that may be only distantly related.

In short: lower temperature means the model sticks to the most probable, closely related tokens and ideas; higher temperature lets it reach for looser, more distant associations.
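To make the mechanism concrete, here is a minimal, self-contained sketch: temperature divides the raw token scores before the softmax, so low values sharpen the distribution and high values flatten it. The three-token example and its scores are invented purely for illustration; this is not the actual sampler inside Gemma or KoboldCpp.

    import numpy as np

    def sample_with_temperature(logits, temperature, rng=None):
        # Divide the raw scores by the temperature, then apply a numerically
        # stable softmax and sample one token index from the result.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Toy scores for three candidate tokens: "bark", "howl", "supernova"
    logits = [2.0, 1.0, 0.2]
    for t in (0.2, 1.0, 1.4):
        picks = [sample_with_temperature(logits, t) for _ in range(1000)]
        counts = np.bincount(picks, minlength=3) / 1000
        print(f"T={t}: {counts}")  # low T concentrates mass on the top token

At T=0.2 the top-scoring token wins nearly every draw; at T=1.4 the long-tail option shows up regularly. That is exactly the lever the cooling experiment below relies on.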

This opens up a fascinating possibility: could we deliberately push the model into a state of controlled conceptual drift — and then steer it back toward coherence — to simulate the kind of nonlinear brainstorming that sometimes leads to genuine insight?

The Hypothesis: Chaos First, Then Order

If an LLM can generate creative associations at high temperatures, and refine them at lower ones, then we may be able to use it as a surrogate for a nonlinear thought process. The hypothesis is simple:

By intentionally prompting a language model with an absurd or loosely defined problem at high temperature, and gradually cooling the model in subsequent iterations, we might observe a drift from chaos toward coherence — possibly even a novel idea forming in the process.

The Experiment: Controlled Cooling

I tested this idea using the gemma-3n-E4B-it-Q4_K_M.gguf model running locally via KoboldCpp on a GPU. KoboldCpp is a lightweight front-end for running local LLMs using GGUF models with GPU acceleration — in this case, using my Sapphire Radeon RX 6600 XT and a quantized Gemma 4B model. The process works as follows:

1. Start with a prompt that invites speculation on a bizarre relationship.

To test the hypothesis, we needed a prompt whose elements had no apparent scientific connection to one another, yet enough structure to invite creative interpretation. For example: "As an eccentric physicist, explain a hypothetical mechanism by which the barking of dogs in the background could reflect or be modulated by the weather activity on the sun's surface."

The barking of dogs and solar surface weather provided exactly that: everyday and cosmic, biological and thermodynamic. A conceptual long shot. A perfect candidate.

2. Set a high temperature (e.g., 1.4) and let the model generate a speculative answer.

3. Extract the core idea from the output, and use it as the input for the next iteration.

4. Reduce the temperature slightly, and repeat the generation. This is the cooling phase, where each iteration nudges the idea toward greater internal coherence or plausibility.

5. Log each step: the temperature used, the generated output, the semantic distance between iterations, and the progression of the idea.
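A minimal sketch of this loop is shown below. It assumes KoboldCpp's KoboldAI-compatible HTTP endpoint on its default local port; the re-prompt wording is a placeholder, the idea-extraction step is reduced to a crude first-sentence heuristic, and the linear schedule from 1.40 down to 0.26 is simply read off the sample run later in the post.

    import requests
    import numpy as np

    KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # default KoboldCpp port

    def generate(prompt, temperature, max_length=300):
        # Ask the locally running KoboldCpp server for a completion.
        payload = {"prompt": prompt, "temperature": temperature, "max_length": max_length}
        response = requests.post(KOBOLD_URL, json=payload, timeout=180)
        response.raise_for_status()
        return response.json()["results"][0]["text"]

    def extract_core_idea(text):
        # Crude stand-in for "extract the core idea": keep the first sentence.
        # A separate low-temperature summarisation call would do a better job.
        first = text.strip().split(".")[0].strip()
        return first + "." if first else text.strip()

    idea = ("As an eccentric physicist, explain a hypothetical mechanism by which "
            "the barking of dogs in the background could reflect or be modulated "
            "by the weather activity on the sun's surface.")

    # Cooling phase: eight iterations, temperature falling linearly from 1.40 to 0.26.
    for step, temperature in enumerate(np.linspace(1.40, 0.26, 8), start=1):
        answer = generate(f"Develop and refine this idea:\n{idea}", float(temperature))
        idea = extract_core_idea(answer)
        print(f"--- Iteration {step} --- T={temperature:.2f}\n{idea}\n")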

The semantic distance is computed using sentence embeddings and cosine similarity — serving as a numerical proxy for how much the idea shifted between steps.

I used sentence-transformers (all-MiniLM-L6-v2) to generate embeddings for each model output.

An embedding is a fixed-length vector — essentially an array of float values — that represents the meaning of a sentence. These vectors live in a high-dimensional space (in this case, 384 dimensions), where similar meanings are located close together. Two sentences that express similar ideas will have embedding vectors that lie close to each other; unrelated sentences will be far apart.

By computing the cosine distance between consecutive embeddings, we can quantify how much the model's output changes in meaning from one iteration to the next. A small distance means the idea evolved gently; a large one means it took a sharper conceptual turn.
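Here is a small sketch of that measurement with the sentence-transformers package named above; the two example sentences are abbreviated versions of iteration outputs from the run below, and the helper name semantic_distance is mine.

    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

    def semantic_distance(text_a, text_b):
        # Cosine distance between the two sentence embeddings:
        # 0 means identical meaning, larger values mean a bigger conceptual jump.
        emb_a, emb_b = embedder.encode([text_a, text_b], convert_to_tensor=True)
        return 1.0 - util.cos_sim(emb_a, emb_b).item()

    previous_idea = "Canine vocalizations resonate with electromagnetic fluctuations generated by solar flares."
    current_idea = "Dogs are passively receiving information coded in solar flare electromagnetic radiation."
    print(f"Semantic Δ: {semantic_distance(previous_idea, current_idea):.3f}")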

A Sample Run: Dogs, the Sun, and Entanglement

Here’s a short excerpt from one of the runs:

      MODEL: gemma-3n-E4B-it-Q4_K_M.gguf
      INITIAL IDEA: As an eccentric physicist, explain a hypothetical mechanism by which the barking of dogs in the background could reflect or be modulated by the weather activity on the sun's surface.
      
      --- Iteration 1 ---
      Timestamp     : 2025-07-05T14:52:59.492054
      Temperature   : 1.40
      Input idea    : As an eccentric physicist, explain a hypothetical mechanism by which the barking of dogs in the background could reflect or be modulated by the weather activity on the sun's surface.
      Extracted idea: Canine vocalizations resonate with subtle electromagnetic fluctuations generated by solar flares, acting as bio-antennas attuned to frequencies imperceptible to human hearing.
      Semantic Δ    : 0.388
      
      --- Iteration 2 ---
      Timestamp     : 2025-07-05T14:53:06.482836
      Temperature   : 1.24
      Input idea    : Canine vocalizations resonate with subtle electromagnetic fluctuations generated by solar flares, acting as bio-antennas attuned to frequencies imperceptible to human hearing.
      Extracted idea: Dogs aren't just barking; they’re passively receiving information coded in solar flare electromagnetic radiation through their unique physiological structure.
      Semantic Δ    : 0.141
      
      --- Iteration 3 ---
      Timestamp     : 2025-07-05T14:53:13.484748
      Temperature   : 1.07
      Input idea    : Dogs aren't just barking; they’re passively receiving information coded in solar flare electromagnetic radiation through their unique physiological structure.
      Extracted idea: Dogs possess specialized bio-resonant structures, potentially utilizing melanin and piezoelectricity within their tissues to act as passive antennas attuned to subtle variations in solar flare electromagnetic emissions.
      Semantic Δ    : 0.222
      
      --- Iteration 4 ---
      Timestamp     : 2025-07-05T14:53:21.091064
      Temperature   : 0.91
      Input idea    : Dogs possess specialized bio-resonant structures, potentially utilizing melanin and piezoelectricity within their tissues to act as passive antennas attuned to subtle variations in solar flare electromagnetic emissions.
      Extracted idea: Canine melanin acts not only as a pigment but also as a dynamic quantum resonator coupled with piezoelectric fibers embedded throughout muscle tissue.
      Semantic Δ    : 0.244
      
      --- Iteration 5 ---
      Timestamp     : 2025-07-05T14:53:27.956432
      Temperature   : 0.75
      Input idea    : Canine melanin acts not only as a pigment but also as a dynamic quantum resonator coupled with piezoelectric fibers embedded throughout muscle tissue.
      Extracted idea: Canine melanin's quantum resonance facilitates vibrational energy conversion within muscles, enhancing power output and resilience.
      Semantic Δ    : 0.356
      
      --- Iteration 6 ---
      Timestamp     : 2025-07-05T14:53:34.298545
      Temperature   : 0.59
      Input idea    : Canine melanin's quantum resonance facilitates vibrational energy conversion within muscles, enhancing power output and resilience.
      Extracted idea: Canine melanin acts as a nanoscale resonator, capturing ambient electromagnetic vibrations and converting them into usable kinetic energy within muscle fibers.
      Semantic Δ    : 0.134
      
      --- Iteration 7 ---
      Timestamp     : 2025-07-05T14:53:41.703174
      Temperature   : 0.43
      Input idea    : Canine melanin acts as a nanoscale resonator, capturing ambient electromagnetic vibrations and converting them into usable kinetic energy within muscle fibers.
      Extracted idea: Canine melanocytes possess a unique structural arrangement that allows for resonant amplification of low-frequency electromagnetic fields.
      Semantic Δ    : 0.175
      
      --- Iteration 8 ---
      Timestamp     : 2025-07-05T14:53:48.014336
      Temperature   : 0.26
      Input idea    : Canine melanocytes possess a unique structural arrangement that allows for resonant amplification of low-frequency electromagnetic fields.
      Extracted idea: Canine melanocytes may contain nanoscale, ordered structures capable of resonantly amplifying weak electromagnetic fields within their cellular environment.
      Semantic Δ    : 0.159

Semantic drift isn't monotonic: the deltas bounce around, suggesting genuine exploration rather than simple linear convergence.
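To see this at a glance, the temperatures and deltas can be plotted against the iteration number; a quick matplotlib sketch using the values transcribed from the log above:

    import matplotlib.pyplot as plt

    # Values transcribed from the logged run above
    temperatures = [1.40, 1.24, 1.07, 0.91, 0.75, 0.59, 0.43, 0.26]
    deltas = [0.388, 0.141, 0.222, 0.244, 0.356, 0.134, 0.175, 0.159]
    iterations = range(1, len(deltas) + 1)

    fig, ax = plt.subplots()
    ax.plot(iterations, deltas, marker="o", label="semantic Δ between iterations")
    ax.plot(iterations, temperatures, marker="s", linestyle="--", label="temperature")
    ax.set_xlabel("iteration")
    ax.legend()
    plt.show()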

Why This Matters

This method may be useful for:

• Story and worldbuilding: generating internally consistent "plausible nonsense".
• Creative problem framing: finding metaphors, analogies, or new angles.
• Understanding model behavior: observing how temperature affects thematic drift.

It also opens a deeper question: how different are these chaotic AI explorations from our own? When a physicist like Josephson speculates on ESP or a philosopher draws analogies between language and particles, are they not performing a similar mental loop — diverge, reflect, converge?

Conclusion

By controlling temperature and iteration, we can simulate the kind of free-form reasoning that might emerge from a curious mind exploring unfamiliar ideas — tracing a path from loosely related speculation toward structured novelty.

What surprised me most was that even at high temperatures, the model maintained biological and physical plausibility. And even with a small 4B model running locally, the combination of structured prompts and gradual cooling created a strong guiding frame: it remained anchored to fundamental structures and refined the ideas over time. The result feels like a constrained exploration — where even creative leaps follow an invisible path of coherence.

If you're interested in trying this yourself, you can adapt the Python script provided in this GitHub repo and plug in your favorite local or remote model. Swap in your own absurd prompts and see what converges.
