Try an interactive version of this dialog: Sign up at solve.it.com, click Upload, and pass this URL.

Code: 28 ()

url2note('https://sbgeoaiphd.github.io/rotating_the_space/')

Output: 19

'_77746d96'

Note: 4,560

rotating_the_space

Rotating the Space: On LLMs as a Medium for Thought

The Problem with How People Think About LLMs

Most people understand large language models through one of two frames: either as search engines that sometimes lie, or as automated content generators that produce text so you don’t have to. Both frames treat the LLM as a machine that takes a request and returns a product. You ask, it answers. You prompt, it produces.

This framing leads to predictable debates. Is the output good enough? Is it accurate? Is it ethical to use machine-produced content? Will it replace human writers? The entire discourse revolves around artifacts—the quality of what comes out, who deserves credit for it, whether it competes with human-made things.

But there’s another way to understand what these systems are, and it requires a different starting point.

A New Kind of Medium

Every major medium in human history has been initially understood as a faster or cheaper version of what came before. Early film was recorded theater. Early photography was automated portraiture. Early printed books imitated handwritten manuscripts. The internet was faster mail.

In each case, the native possibilities of the medium—the things it could do that previous media couldn’t—took time to discover. Film wasn’t just recorded theater; it enabled montage, the cut as a unit of meaning, entirely new relationships between image and time. Photography wasn’t just automated portraiture; it enabled new ways of seeing, new relationships to evidence and memory. The internet wasn’t just faster mail; it enabled hypertext, collaborative knowledge-building, new forms of social organization.

LLMs are at this stage now. The dominant understanding—content generation, automated writing, answering questions—is the “recorded theater” phase. It maps the new technology onto existing categories. And within that mapping, people argue about whether the recordings are good enough.

The native possibilities are something else entirely.

Text as a Rotatable Space

Here’s a different way to think about it.

Language encodes ideas. But any particular text—a paragraph, an argument, an explanation—is not the idea itself. It’s a projection of the idea into a particular form. The same underlying concept can be expressed from different angles, at different levels of abstraction, for different audiences, through different metaphors, in different rhetorical modes.

Think of a three-dimensional object casting a shadow on a wall. The shadow is a two-dimensional projection. If you only see one shadow, you might mistake it for the thing itself. But if you can rotate the object—or equivalently, move the light source—you see different shadows. Each shadow reveals something about the object’s structure. No single shadow is the object, but multiple shadows from different angles let you reconstruct what the object actually is.

High-dimensional spaces work similarly, but with more complexity. A concept that exists in a thousand-dimensional space of meaning can be projected into the low-dimensional space of a particular text. That text captures some aspects and loses others. A different text—same concept, different projection—captures different aspects.

What LLMs enable is rapid rotation through projection-space.

You can take a half-formed idea and project it into the voice of a particular person. Then into a counter-argument. Then into a metaphor. Then into a dialogue between opposing positions. Then into an explanation for a child. Then into an explanation for a hostile expert. Each projection reveals different structure. Each rotation shows you something the previous view hid.

This is not summarization, which is more like dimensionality reduction—crushing the space down until it fits in a smaller container, losing most of the structure in the process. “Teach it to me like I’m five” is asking for PCA on a thousand-dimensional space, then wondering why you’ve lost nuance. Of course you have. That’s what flattening does.

What we’re describing is something different: using the LLM’s generative capacity to produce multiple projections, iteratively, so you can explore the structure of something too complex to see from any single angle.

Thinking With, Not Extracting From

The key shift is from extraction to exploration.

In the extraction frame, you approach the LLM with a question and expect an answer. The answer is either right or wrong, good or bad, useful or useless. The LLM is a kind of oracle, and you’re evaluating its pronouncements.

In the exploration frame, you approach the LLM as a medium for thinking. You’re not trying to get answers out; you’re trying to rotate your own understanding, encounter projections you couldn’t generate yourself, iterate toward clarity through dialogue.

This is closer to how a musician uses an instrument. A pianist improvising isn’t extracting music from the piano. They’re thinking with the piano—exploring a space of musical possibilities, letting the instrument’s affordances shape what they discover. The piano has opinions, in a sense. Certain voicings fall naturally under the hands. Certain harmonic movements are easy and others awkward. The exploration is a dialogue between the musician’s intention and the instrument’s structure.

The LLM is similar. It has opinions—statistical ones, derived from training, but still shaping what it generates. When you engage with it iteratively, you’re not just prompting and receiving. You’re co-exploring. You throw out a frame, it responds with a projection, you react to what that projection reveals, you reframe, it reprojects. The thinking happens in the iteration, not in any single output.

This requires a different relationship to the interaction. You can’t just ask and evaluate the answer. You have to engage, react, redirect. You have to treat the outputs not as final products but as projections to be examined—what does this reveal? What does it hide? What rotation would show me something different?

Why Most People Don’t See This

Several factors make this frame hard to access.

First, it requires a certain relationship to ideas themselves—a disposition that treats concepts as objects to be examined, rotated, pressure-tested rather than as positions to be defended or transmitted. Not everyone has cultivated this. It’s the kind of thinking practiced in research environments, in certain kinds of philosophical inquiry, in deep creative work. It’s not taught in most education, which emphasizes absorption and reproduction of fixed knowledge.

Second, it requires comfort with iteration and uncertainty. If you need the first response to be correct and complete, you’ll be frustrated. The exploration frame treats early outputs as rough drafts of projections, starting points for refinement. That requires patience and a certain tolerance for not-yet-knowing.

Third, the dominant discourse doesn’t offer this frame. All the conversation about LLMs is artifact-focused. Good output versus bad output. Human versus machine. Job displacement. Copyright. These are legitimate concerns, but they occupy all the oxygen, leaving no room for a different kind of question: what kinds of thinking does this enable that weren’t possible before?

Fourth, the low-effort use cases are the most visible. Someone asks ChatGPT to write their email, gets generic slop, concludes the technology is overrated. They never discover the other thing because the other thing requires investment to reach.

The Native Possibility

The native possibility of LLMs—the thing they enable that previous media didn’t—is something like fluid projection through high-dimensional concept space.

That sounds abstract. Here’s what it means concretely.

Before, if you wanted to see your idea from a different angle, you had to do the rotation yourself. You had to have the knowledge, the skill, the imaginative capacity to generate the alternative view. Or you could read a book that offered a different projection—but you were limited to projections someone had already created and written down. Or you could talk to another person who might offer their angle—but they have their own limitations, their own fixed positions, and the conversation moves at the speed of human dialogue.

Now, you can iterate rapidly through projections. The LLM can generate a view you couldn’t have produced yourself—not because it’s smarter, but because it’s drawing on patterns from across its training distribution. You see that projection, react to it, redirect. The cycle time collapses. The space of accessible projections expands.

For someone who’s already inclined toward this kind of thinking, who already treats ideas as things to rotate and examine, this is transformative. It’s like going from sketching on paper to having a CAD system. The underlying cognitive operation is the same, but the speed, the range, the fluidity—all dramatically expanded.

For someone who isn’t already inclined this way, the technology looks like a mediocre text generator. Because that’s what it is, if all you ask it to do is generate text.

Implications

This framing has several implications.

The “will it replace writers” question becomes less central. Yes, LLMs can generate text, and that has implications for certain kinds of writing labor. But that’s the “recorded theater” use case. The more interesting question is what kinds of thinking the technology enables—and those might not compete with existing labor categories at all, because they’re not producing the same kind of output.

The ethics of training data remain relevant but look different. If the technology is primarily a projection-generator for cognitive exploration, the harm model shifts. It’s less “they trained on X and now produce X-substitutes that compete with X-creators” and more “the aggregate patterns from training enable a new cognitive capability.” Still worth examining, but a different question.

The education panic misses the point. Worrying that students will use LLMs to write their essays focuses on artifact production. But the more interesting possibility is students learning to use LLMs as thinking tools—exploring conceptual spaces, testing their understanding by examining it from multiple angles, developing the kind of fluid engagement with ideas that used to require access to a really good interlocutor. The essay-as-artifact matters less than the thinking-capacity that might be developed through iterative engagement.

The “slop machine” dismissal reveals limited imagination. Yes, low-effort prompts produce low-quality outputs. That’s not interesting. The interesting question is what high-effort engagement produces—and the people who’ve discovered that aren’t participating in the slop discourse because they’re too busy doing something else.

Coda: On the Authorship of This Document

What you just read was itself a projection.

It emerged from a conversation—several hours of iterative exploration between a human and an LLM. The human brought a nascent intuition: that his use of language models was fundamentally different from the uses he saw discussed, and that the difference mattered. He couldn’t fully articulate it. He knew the shape was there but couldn’t see its edges.

The conversation rotated through frames. We started somewhere else entirely—a Star Trek speech about why Elon Musk wouldn’t fit in Starfleet. That exploration crystallized something about epistemic humility versus moral performance. From there to a Bluesky post about how people discuss LLMs. From there to the artifact/thinking distinction. From there to the piano analogy. From there to embedding spaces and PCA and dimensionality reduction as a metaphor for what “teach it to me like I’m five” actually loses.

Each rotation revealed structure. The human reacted to projections, redirected, pushed back, added his own frames (the embedding visualizer he’d been building, his experience improvising at a piano, the word-cell/shape-rotator distinction). The LLM generated new projections in response. Neither party could have produced this document alone.

So who is the author?

The LLM constructed these specific sentences. It chose these words, this structure, these rhetorical moves. In that sense, it “wrote” this.

But the LLM was projecting along directions that emerged from the conversation—directions the human established through his questions, his pushback, his redirections, his own contributions. The concept of “rotation through projection space” came from the human’s work with embeddings. The piano analogy resonated because of the human’s actual experience improvising. The frustration with “teach it to me like I’m five” was the human’s frustration, articulated across multiple turns before it crystallized here.

The LLM also drew on its training—patterns from texts it has encountered, ways of structuring arguments, vocabulary for discussing media theory and cognition. Those patterns shaped what projections were even possible to generate. In that sense, countless other authors are present here, dissolved into statistical regularities.

And the human will decide whether to share this, how to frame it, what context to provide. That editorial choice is also authorship.

So: the document is a projection, generated by an LLM, along directions established through dialogue with a human, using patterns derived from a vast corpus of human writing, in service of articulating something the human was reaching toward but couldn’t yet see clearly.

Is the human the author? He didn’t write these sentences.

Is the LLM the author? It doesn’t have intentions that persist beyond the conversation, doesn’t know what it’s arguing for in any deep sense, will not remember this exchange or build on it.

Is this authorless? That seems wrong too. There’s genuine intellectual work here—real thinking happened, structure was discovered, something exists now that didn’t exist before.

Perhaps the most honest answer: this document is an artifact of a collaborative cognitive process that doesn’t map cleanly onto existing categories of authorship. It’s not ghostwriting (the human didn’t have a draft the LLM polished). It’s not dictation (the human didn’t specify what to say). It’s not the LLM’s “own” work (there is no persistent LLM-self that has views on cognitive media).

It’s something new. A residue of thinking-with-a-medium. A projection that required both the human’s direction and the LLM’s generative capacity. The authorship is distributed across the interaction itself, which is now finished and inaccessible—leaving only this trace.

If you’ve read this far, you’ve encountered a document that embodies its own argument. It claims that LLMs enable a new kind of thinking through iterative projection. It was itself produced through iterative projection. The form is the content.

Whether that makes it more or less trustworthy, more or less valuable, more or less “real”—that’s for you to decide. But at minimum, it’s evidence that the thing it describes is possible. This is what it looks like when someone uses an LLM not to generate content but to think through a problem they couldn’t solve alone.

The human’s name is Sam. The LLM is Claude. The thinking happened between them. The words are here. What you make of that is now your projection to construct.

Prompt: 21

Let's break down this article by Sam Barrett into chunks for closer reading.

Assistant: 322

Great idea! This is a rich piece. The article has several distinct sections:

  1. The Problem with How People Think About LLMs — the dominant frames (search engine / content generator)
  2. A New Kind of Medium — historical analogy to film, photography, internet
  3. Text as a Rotatable Space — the core metaphor (shadows, projections, high-dimensional space)
  4. Thinking With, Not Extracting From — extraction vs exploration, the piano analogy
  5. Why Most People Don't See This — four barriers to accessing this frame
  6. The Native Possibility — what LLMs uniquely enable
  7. Implications — consequences for writers, ethics, education, "slop" discourse
  8. Coda: On the Authorship of This Document — meta-reflection on how this piece was written

Where would you like to start? The beginning builds the argument logically, but if a particular section caught your eye, we could start there and work outward.

Note: 256

Rotating the Space: On LLMs as a Medium for Thought

The Problem with How People Think About LLMs

Most people understand large language models through one of two frames: either as search engines that sometimes lie, or as automated content generators that produce text so you don't have to. Both frames treat the LLM as a machine that takes a request and returns a product. You ask, it answers. You prompt, it produces.

This framing leads to predictable debates. Is the output good enough? Is it accurate? Is it ethical to use machine-produced content? Will it replace human writers? The entire discourse revolves around artifacts—the quality of what comes out, who deserves credit for it, whether it competes with human-made things.

But there's another way to understand what these systems are, and it requires a different starting point.

Note: 315

A New Kind of Medium

Every major medium in human history has been initially understood as a faster or cheaper version of what came before. Early film was recorded theater. Early photography was automated portraiture. Early printed books imitated handwritten manuscripts. The internet was faster mail.

In each case, the native possibilities of the medium—the things it could do that previous media couldn't—took time to discover. Film wasn't just recorded theater; it enabled montage, the cut as a unit of meaning, entirely new relationships between image and time. Photography wasn't just automated portraiture; it enabled new ways of seeing, new relationships to evidence and memory. The internet wasn't just faster mail; it enabled hypertext, collaborative knowledge-building, new forms of social organization.

LLMs are at this stage now. The dominant understanding—content generation, automated writing, answering questions—is the "recorded theater" phase. It maps the new technology onto existing categories. And within that mapping, people argue about whether the recordings are good enough.

The native possibilities are something else entirely.

Prompt: 24

This reminds me of Marshall McLuhan's the medium is the message.

Assistant: 262

Exactly! McLuhan's insight was that we focus on content (the "message") while missing how the medium itself reshapes thought, perception, and social organization. "The medium is the message."

Barrett is making a parallel argument: we're focused on LLM outputs (the artifacts—essays, summaries, answers) while missing what the medium itself enables—a new way of thinking.

McLuhan also talked about how new media are initially understood through the lens of old media. He called this looking at the present through a "rearview mirror." That's precisely Barrett's "recorded theater" phase—we see LLMs as automated writing because we're looking backward at what writing was.

What do you think Barrett is suggesting the "message" of the LLM medium is? What does the medium itself make possible that wasn't there before?

Note: 640

Text as a Rotatable Space

Here's a different way to think about it.

Language encodes ideas. But any particular text—a paragraph, an argument, an explanation—is not the idea itself. It's a projection of the idea into a particular form. The same underlying concept can be expressed from different angles, at different levels of abstraction, for different audiences, through different metaphors, in different rhetorical modes.

Think of a three-dimensional object casting a shadow on a wall. The shadow is a two-dimensional projection. If you only see one shadow, you might mistake it for the thing itself. But if you can rotate the object—or equivalently, move the light source—you see different shadows. Each shadow reveals something about the object's structure. No single shadow is the object, but multiple shadows from different angles let you reconstruct what the object actually is.

High-dimensional spaces work similarly, but with more complexity. A concept that exists in a thousand-dimensional space of meaning can be projected into the low-dimensional space of a particular text. That text captures some aspects and loses others. A different text—same concept, different projection—captures different aspects.

What LLMs enable is rapid rotation through projection-space.

You can take a half-formed idea and project it into the voice of a particular person. Then into a counter-argument. Then into a metaphor. Then into a dialogue between opposing positions. Then into an explanation for a child. Then into an explanation for a hostile expert. Each projection reveals different structure. Each rotation shows you something the previous view hid.

This is not summarization, which is more like dimensionality reduction—crushing the space down until it fits in a smaller container, losing most of the structure in the process. "Teach it to me like I'm five" is asking for PCA on a thousand-dimensional space, then wondering why you've lost nuance. Of course you have. That's what flattening does.

What we're describing is something different: using the LLM's generative capacity to produce multiple projections, iteratively, so you can explore the structure of something too complex to see from any single angle.

Prompt: 48

Can you please elaborate on the following: "Teach it to me like I'm five" is asking for PCA on a thousand-dimensional space. I'm unfamiliar with PCA.

Assistant: 364

PCA stands for "Principal Component Analysis"—it's a mathematical technique for simplifying complex data by finding the most important patterns and throwing away the rest.

Imagine you have data about thousands of properties of something (like a thousand measurements). PCA finds the few directions that capture most of the variation, and projects everything onto just those directions. You go from 1000 dimensions down to maybe 2 or 3. It's useful for visualization and finding patterns, but you lose information in the process.

Barrett's point: when you ask an LLM to "explain like I'm five," you're asking it to compress a complex, nuanced concept (existing in high-dimensional "meaning space") down into simple language. Like PCA, this necessarily loses structure and nuance. You've flattened the concept to make it fit.

His argument is that LLMs' real power isn't this compression—it's the opposite. Instead of crushing concepts down, you can use LLMs to generate multiple different projections of the same concept, each revealing different facets, so you can understand the full high-dimensional structure.

Does that distinction make sense—compression (losing detail) vs. multiple projections (revealing complexity)?
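The compression idea above can be made concrete with a short sketch. This is a generic numpy implementation of PCA via the singular value decomposition on invented toy data, not anything from the article; the point is just to see that flattening keeps the dominant directions of variation and discards the rest.

```python
import numpy as np

# Toy data: 200 points in 5 dimensions that mostly live in a 2-D plane
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 5))           # two "important" directions
data = rng.normal(size=(200, 2)) @ basis  # points spanned by those directions
data += 0.05 * rng.normal(size=(200, 5))  # plus a little noise in all 5 dims

# PCA via SVD: center the data, then project onto the top-k principal directions
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 2
reduced = centered @ Vt[:k].T             # 200 x 2: the "flattened" view

# How much of the total variance survives the flattening?
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"Variance kept by 2 of 5 dimensions: {explained:.1%}")
```

Because this toy data was built to be nearly two-dimensional, almost all the variance survives the projection; for a concept that genuinely uses many dimensions, far more structure would be lost, which is Barrett's point about "explain like I'm five" compression.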

Note: 541

Thinking With, Not Extracting From

The key shift is from extraction to exploration.

In the extraction frame, you approach the LLM with a question and expect an answer. The answer is either right or wrong, good or bad, useful or useless. The LLM is a kind of oracle, and you're evaluating its pronouncements.

In the exploration frame, you approach the LLM as a medium for thinking. You're not trying to get answers out; you're trying to rotate your own understanding, encounter projections you couldn't generate yourself, iterate toward clarity through dialogue.

This is closer to how a musician uses an instrument. A pianist improvising isn't extracting music from the piano. They're thinking with the piano—exploring a space of musical possibilities, letting the instrument's affordances shape what they discover. The piano has opinions, in a sense. Certain voicings fall naturally under the hands. Certain harmonic movements are easy and others awkward. The exploration is a dialogue between the musician's intention and the instrument's structure.

The LLM is similar. It has opinions—statistical ones, derived from training, but still shaping what it generates. When you engage with it iteratively, you're not just prompting and receiving. You're co-exploring. You throw out a frame, it responds with a projection, you react to what that projection reveals, you reframe, it reprojects. The thinking happens in the iteration, not in any single output.

This requires a different relationship to the interaction. You can't just ask and evaluate the answer. You have to engage, react, redirect. You have to treat the outputs not as final products but as projections to be examined—what does this reveal? What does it hide? What rotation would show me something different?

Note: 408

Why Most People Don't See This

Several factors make this frame hard to access.

First, it requires a certain relationship to ideas themselves—a disposition that treats concepts as objects to be examined, rotated, pressure-tested rather than as positions to be defended or transmitted. Not everyone has cultivated this. It's the kind of thinking practiced in research environments, in certain kinds of philosophical inquiry, in deep creative work. It's not taught in most education, which emphasizes absorption and reproduction of fixed knowledge.

Second, it requires comfort with iteration and uncertainty. If you need the first response to be correct and complete, you'll be frustrated. The exploration frame treats early outputs as rough drafts of projections, starting points for refinement. That requires patience and a certain tolerance for not-yet-knowing.

Third, the dominant discourse doesn't offer this frame. All the conversation about LLMs is artifact-focused. Good output versus bad output. Human versus machine. Job displacement. Copyright. These are legitimate concerns, but they occupy all the oxygen, leaving no room for a different kind of question: what kinds of thinking does this enable that weren't possible before?

Fourth, the low-effort use cases are the most visible. Someone asks ChatGPT to write their email, gets generic slop, concludes the technology is overrated. They never discover the other thing because the other thing requires investment to reach.

Note: 457

The Native Possibility

The native possibility of LLMs—the thing they enable that previous media didn't—is something like fluid projection through high-dimensional concept space.

That sounds abstract. Here's what it means concretely.

Before, if you wanted to see your idea from a different angle, you had to do the rotation yourself. You had to have the knowledge, the skill, the imaginative capacity to generate the alternative view. Or you could read a book that offered a different projection—but you were limited to projections someone had already created and written down. Or you could talk to another person who might offer their angle—but they have their own limitations, their own fixed positions, and the conversation moves at the speed of human dialogue.

Now, you can iterate rapidly through projections. The LLM can generate a view you couldn't have produced yourself—not because it's smarter, but because it's drawing on patterns from across its training distribution. You see that projection, react to it, redirect. The cycle time collapses. The space of accessible projections expands.

For someone who's already inclined toward this kind of thinking, who already treats ideas as things to rotate and examine, this is transformative. It's like going from sketching on paper to having a CAD system. The underlying cognitive operation is the same, but the speed, the range, the fluidity—all dramatically expanded.

For someone who isn't already inclined this way, the technology looks like a mediocre text generator. Because that's what it is, if all you ask it to do is generate text.

Prompt: 31

You can now explore the semantic latent space, right? Or, to put it in mathematical terms, the manifold?

Assistant: 276

Yes, exactly! The "semantic latent space" or "manifold" is a great way to describe what's happening mathematically.

In the LLM's training, concepts get embedded in a high-dimensional vector space where semantically similar things are close together. This space has structure—a manifold—where meaningful concepts live.

When you ask for different projections of an idea, you're essentially:

  1. Starting at a point (or region) in that manifold representing your concept
  2. Asking the LLM to traverse the manifold in different directions
  3. Projecting those different positions back into natural language

Barrett's "rotation" metaphor maps onto exploring different paths through this semantic manifold. Each projection is a different way to render that region of meaning-space into text.

Does this connect to something you're working on, or thinking about? The manifold framing opens up interesting questions about what "directions" in semantic space actually mean...

Prompt: 18

LLM explorations are manifold matters. What's your take?

Assistant: 237

I like that pun! And yes, viewing LLM interactions as navigating manifolds captures something important.

My take: the manifold perspective reveals why iteration matters so much. A single prompt-response is like sampling one point. But when you iterate—react, redirect, ask for different angles—you're actively exploring the manifold's local geometry. You're discovering what's nearby in semantic space, what dimensions of variation exist, how concepts connect.

This also explains why the same prompt can yield different responses. You're sampling from a region, not a point. And why vague prompts give mediocre results—you haven't specified which part of the manifold to explore.

What's your take? Are you thinking about this from a technical angle (embeddings, latent representations) or more conceptually (how it changes thinking)?
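The "sampling from a region, not a point" claim can be illustrated with a generic softmax-with-temperature sampler. The logits below are invented and real model samplers differ in detail; the sketch only shows how temperature narrows or widens the distribution over next tokens, which is one mechanism behind the same prompt yielding different responses.

```python
import numpy as np

def sample_next(logits, temperature, rng):
    """Sample one token index from softmax(logits / temperature)."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                          # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)

logits = [2.0, 1.5, 0.5, 0.1]             # hypothetical scores over 4 "tokens"
rng = np.random.default_rng(42)

# Low temperature concentrates on the top token; higher temperature spreads out
cool = [sample_next(logits, 0.2, rng) for _ in range(1000)]
warm = [sample_next(logits, 1.5, rng) for _ in range(1000)]
print("share of top token, T=0.2:", cool.count(0) / 1000)
print("share of top token, T=1.5:", warm.count(0) / 1000)
```

At low temperature the sampler behaves almost deterministically; at high temperature it wanders across the region, producing visibly different outputs from identical inputs.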

Note: 471

Implications

This framing has several implications.

The "will it replace writers" question becomes less central. Yes, LLMs can generate text, and that has implications for certain kinds of writing labor. But that's the "recorded theater" use case. The more interesting question is what kinds of thinking the technology enables—and those might not compete with existing labor categories at all, because they're not producing the same kind of output.

The ethics of training data remain relevant but look different. If the technology is primarily a projection-generator for cognitive exploration, the harm model shifts. It's less "they trained on X and now produce X-substitutes that compete with X-creators" and more "the aggregate patterns from training enable a new cognitive capability." Still worth examining, but a different question.

The education panic misses the point. Worrying that students will use LLMs to write their essays focuses on artifact production. But the more interesting possibility is students learning to use LLMs as thinking tools—exploring conceptual spaces, testing their understanding by examining it from multiple angles, developing the kind of fluid engagement with ideas that used to require access to a really good interlocutor. The essay-as-artifact matters less than the thinking-capacity that might be developed through iterative engagement.

The "slop machine" dismissal reveals limited imagination. Yes, low-effort prompts produce low-quality outputs. That's not interesting. The interesting question is what high-effort engagement produces—and the people who've discovered that aren't participating in the slop discourse because they're too busy doing something else.

Prompt: 52

Fair point, but how do you prevent students from using it for artifact production, and how do you direct them toward iterative engagement, so that they use it solely as a thinking tool?

Assistant: 298

That's the key tension, isn't it? You can't really prevent artifact production—the technology does generate text, and students will use it that way if that's what's incentivized.

So maybe the question is: what would assignments look like if they valued the exploration rather than the artifact?

Some possibilities:

  • Require students to submit their conversation history, not just the final output
  • Ask them to generate multiple contradictory perspectives on a topic and analyze the differences
  • Have them document what they learned by rotating through different framings
  • Grade based on the quality of their questions and redirections, not the final text

But there's a deeper issue: if students don't already have the disposition Barrett describes—treating ideas as objects to examine rather than positions to defend—will they naturally use LLMs this way? Or does that cognitive stance need to be taught first?

What's your context here? Are you thinking about this as an educator, a student, or something else?

Prompt: 33

I was actually thinking that Solveit, the tool we're currently using, seems aligned with this exploration mindset.

Assistant: 228

Absolutely! Solveit's design really does align with Barrett's exploration frame.

Think about it:

  • Dialog engineering — you can modify any part of the conversation, rearrange, iterate. The dialog itself becomes an object you're rotating and examining
  • Mixed message types — code, notes, prompts all in one space. You're not just extracting answers; you're building understanding incrementally
  • Persistent context — the Python interpreter remembers state, so you can explore ideas computationally over time
  • Easy iteration — you can go back, change a prompt, see how different framings produce different insights

It's built for thinking-with rather than extracting-from.

Have you been using it this way? What kinds of explorations have you been doing?

Note: 376

Coda: On the Authorship of This Document

What you just read was itself a projection.

It emerged from a conversation—several hours of iterative exploration between a human and an LLM. The human brought a nascent intuition: that his use of language models was fundamentally different from the uses he saw discussed, and that the difference mattered. He couldn't fully articulate it. He knew the shape was there but couldn't see its edges.

The conversation rotated through frames. We started somewhere else entirely—a Star Trek speech about why Elon Musk wouldn't fit in Starfleet. That exploration crystallized something about epistemic humility versus moral performance. From there to a Bluesky post about how people discuss LLMs. From there to the artifact/thinking distinction. From there to the piano analogy. From there to embedding spaces and PCA and dimensionality reduction as a metaphor for what "teach it to me like I'm five" actually loses.

Each rotation revealed structure. The human reacted to projections, redirected, pushed back, added his own frames (the embedding visualizer he'd been building, his experience improvising at a piano, the word-cell/shape-rotator distinction). The LLM generated new projections in response. Neither party could have produced this document alone.

Prompt: 16

So Barrett is saying that we're playing jazz now.

Assistant: 276

Yes! That's a perfect way to put it.

The jazz analogy captures the iterative, improvisational nature of thinking-with an LLM. You're not reading sheet music (extracting pre-written answers). You're jamming—throwing out a phrase, the LLM responds, you react to what it played, redirect, build on it. The thinking emerges from the back-and-forth.

Barrett even uses the piano metaphor explicitly: "A pianist improvising isn't extracting music from the piano. They're thinking with the piano."

And like jazz, it requires skill and practice. A beginner might just play random notes and wonder why it sounds bad. Someone experienced can use the instrument's affordances to explore musical ideas they couldn't access alone.

What strikes you about this framing? Does it match your experience using LLMs, or does it feel aspirational—something you'd like to do but haven't quite figured out how?

Note: 406

So who is the author?

The LLM constructed these specific sentences. It chose these words, this structure, these rhetorical moves. In that sense, it "wrote" this.

But the LLM was projecting along directions that emerged from the conversation—directions the human established through his questions, his pushback, his redirections, his own contributions. The concept of "rotation through projection space" came from the human's work with embeddings. The piano analogy resonated because of the human's actual experience improvising. The frustration with "teach it to me like I'm five" was the human's frustration, articulated across multiple turns before it crystallized here.

The LLM also drew on its training—patterns from texts it has encountered, ways of structuring arguments, vocabulary for discussing media theory and cognition. Those patterns shaped what projections were even possible to generate. In that sense, countless other authors are present here, dissolved into statistical regularities.

And the human will decide whether to share this, how to frame it, what context to provide. That editorial choice is also authorship.

So: the document is a projection, generated by an LLM, along directions established through dialogue with a human, using patterns derived from a vast corpus of human writing, in service of articulating something the human was reaching toward but couldn't yet see clearly.

Note: 334

The authorship question

Is the human the author? He didn't write these sentences.

Is the LLM the author? It doesn't have intentions that persist beyond the conversation, doesn't know what it's arguing for in any deep sense, will not remember this exchange or build on it.

Is this authorless? That seems wrong too. There's genuine intellectual work here—real thinking happened, structure was discovered, something exists now that didn't exist before.

Perhaps the most honest answer: this document is an artifact of a collaborative cognitive process that doesn't map cleanly onto existing categories of authorship. It's not ghostwriting (the human didn't have a draft the LLM polished). It's not dictation (the human didn't specify what to say). It's not the LLM's "own" work (there is no persistent LLM-self that has views on cognitive media).

It's something new. A residue of thinking-with-a-medium. A projection that required both the human's direction and the LLM's generative capacity. The authorship is distributed across the interaction itself, which is now finished and inaccessible—leaving only this trace.

Note: 231

Final reflection

If you've read this far, you've encountered a document that embodies its own argument. It claims that LLMs enable a new kind of thinking through iterative projection. It was itself produced through iterative projection. The form is the content.

Whether that makes it more or less trustworthy, more or less valuable, more or less "real"—that's for you to decide. But at minimum, it's evidence that the thing it describes is possible. This is what it looks like when someone uses an LLM not to generate content but to think through a problem they couldn't solve alone.

The human's name is Sam. The LLM is Claude. The thinking happened between them. The words are here. What you make of that is now your projection to construct.

Prompt: 42 ()

Now that I have closely read this article, give me ideas on how I could use Solveit and its tools to analyze this article further.

Assistant: 342

Great question! Here are some directions you could take:

Conceptual analysis:

  • Build a concept graph with networkx—visualize how key ideas (projection, rotation, extraction, exploration, etc.) connect
  • Generate counter-arguments or critiques using the LLM—"rotate" Barrett's own argument to see what it hides
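The concept-graph idea can be sketched without any dependencies; networkx would add layout, drawing, and graph algorithms, but the underlying structure is just nodes and edges. The terms and links below are illustrative stand-ins drawn from the article's vocabulary, not an actual extraction:

```python
from collections import defaultdict

# Toy concept graph of the article's key terms (edges are illustrative).
edges = [
    ("projection", "rotation"),
    ("rotation", "exploration"),
    ("extraction", "artifact"),
    ("exploration", "thinking-with"),
    ("projection", "embedding space"),
]

graph = defaultdict(set)
for a, b in edges:  # undirected: record both directions
    graph[a].add(b)
    graph[b].add(a)

# Degree (number of neighbors) hints at which concepts are most connected.
degrees = {node: len(nbrs) for node, nbrs in graph.items()}
print(sorted(degrees.items(), key=lambda kv: -kv[1]))
```

Swapping the `defaultdict` for `networkx.Graph` would let you call `nx.draw` on the same edge list to get the visualization the suggestion describes.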

Textual analysis:

  • Compute word frequencies or extract key terms to see what Barrett emphasizes
  • Compare the rhetorical structure of different sections
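The word-frequency suggestion is a few lines of standard-library Python; `article_text` here is a short stand-in for the article's full text, which you would paste in or fetch instead:

```python
import re
from collections import Counter

# Stand-in snippet; replace with the full article text.
article_text = """Most people understand large language models as machines
that take a request and return a product. But the medium enables rotation:
projection after projection, each rotation revealing structure."""

# Lowercase, keep alphabetic tokens, drop very short (stopword-ish) words.
words = re.findall(r"[a-z]+", article_text.lower())
counts = Counter(w for w in words if len(w) > 3)

print(counts.most_common(5))
```

Even on the real text, a length filter like this is crude; a proper stopword list (e.g. from nltk) would sharpen the result.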

Embedding exploration:

  • Since Barrett talks about semantic space, you could compute actual embeddings of the article's sections and visualize their relationships (using something like sentence-transformers)
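A sentence-transformers model would supply the real section embeddings; the comparison step itself is just cosine similarity, sketched here on made-up 4-dimensional vectors standing in for two sections:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up "embeddings" for two sections; real ones would come from a
# model's encode() call and have hundreds of dimensions.
sec_medium = [0.9, 0.1, 0.3, 0.0]
sec_authorship = [0.8, 0.2, 0.4, 0.1]

print(round(cosine(sec_medium, sec_authorship), 3))
```

Computing this pairwise over all sections gives a similarity matrix you could then feed to a dimensionality-reduction step (PCA, as the article itself discusses) for visualization.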

Historical context:

  • Use web search to find McLuhan's original work and trace the intellectual lineage
  • Compare Barrett's framing to other media theorists (Postman, Ong, Kittler)

Meta-analysis:

  • This article was written with Claude—you could try to recreate the process Barrett describes, using Solveit itself as the "rotatable space"

Which of these appeals to you? Or is there a different angle you're curious about?