Chopping wood and generative AI, part 2
Comparing the results of manual and automated processes
(If you like this post, select the ❤️ to bless the Algorithm Angels.)
In Chopping wood and generative AI, part 1, I related the difference in experience between cutting my own firewood and purchasing it to the difference between writing stories from scratch—especially the inner, mystical accounts that I’m interested in—and producing stories with an automation technology like generative AI. As that post concluded, it’s really a matter of choosing the blend between using automation tools for higher productivity and gaining the experience of going through the process yourself.
In this post, let’s compare the difference in the output, for which we again turn to the matter of woodchopping.
And speaking of output, this is an appropriate moment to use a clearly strange image from Substack’s AI image generator, using the simple prompt, “woodchopping.” According to this image, woodchopping apparently involves magic and trees that grow at right angles!
The desirable sameness of commercial firewood
Every year, usually in late summer or early fall before the temperatures shift into a cold pattern, several of my neighbors receive a delivery of pre-cut firewood from one of the local woodcutters. I’ve done the same on occasion when I wasn’t able to put in my own supply, like when I was nursing a lower back injury.
Having cut so much of my own, what strikes me most about commercial firewood is its relative uniformity: every piece is 16 inches long and falls into a somewhat narrow range of girth. Obviously, the professional woodchoppers are well-skilled in bucking logs into rounds of consistent length and put those rounds through hydraulic splitters in such a way as to produce similar-sized pieces. Furthermore, they clearly don’t bother with smaller stock like branches; I suspect such material is fed into a chipper to produce a secondary (and desirable) product.
The pieces of firewood in each delivery are usually also of the same type, like oak, fir, madrone, larch, etc., or are mixed in a more or less known ratio. (In the Sierra Nevada foothills, where I live, it’s typically black oak and/or madrone.) Because different types of wood have different fuel values—that is, different densities of combustible hydrocarbons and therefore different heating capacities—the same volume of a higher fuel-value wood is worth more than a lower-value wood, and thus it costs more to purchase.
This sameness has other advantages, too. Wood is sold by the “cord,” a 4x4x8 foot stack. The standard length for a piece is 16 inches because three of them laid end to end make 48 inches or four feet.
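In other words, the arithmetic stacks up neatly: a full cord is 4 × 4 × 8 = 128 cubic feet of stacked wood, and three 16-inch pieces laid end to end span 3 × 16 = 48 inches, filling the 4-foot dimension exactly three rows deep.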

When the girth of the pieces is more or less similar (firewood is a somewhat imprecise product), customers trust that any given cord of the same type of wood has more or less the same fuel value as any other cord. If a woodchopper used smaller pieces to fill the gaps between larger ones, a cord might have marginally more fuel value, but it’s unlikely that customers would actually pay more because they’re not really sure what they’re getting.
Uniformity also makes the wood easier to stack, especially if you use ready-made steel stacking frames designed around the 16-inch standard. And when you go to burn it, the uniformity means that you can pretty much expect that the same number of pieces will heat your home to (literally) the same degree.
The variances in my “custom” firewood
In contrast, the firewood that I produce for myself is much more varied. As I said in the previous post, I like to use as much of the tree as I can, down to 1-inch branches, and I’ll even use dead twigs for kindling. My process, too, is such that I end up with a lot of variation in the length of pieces. To put it bluntly, I’m something of a hack—as I’m writing this, I just put a piece of wood in my stove that’s only five inches long, a remnant from bucking my rounds, which usually run somewhere between 10 and 14 inches and are easier to split by hand than 16-inch rounds. I’ve even used slabs that are only a few inches thick, especially when they come off a larger log.
My stacks end up looking something like the photo below, and you can see that I’m not averse to using some half-rotten pieces that would never make it into commercial cords.
I also make use of pieces with knobs and bumps and other odd shapes that are more difficult to stack and would, if included in a commercial cord, create larger gaps that might cause customers to cry foul.

Furthermore, I’ll mix in whatever wood happens to be available, such as rounds of live oak that are small enough to burn but too difficult to split, and misshapen chunks of deadwood that might have a half-rotted side or have been home to an ant colony at one point. Such pieces never end up in commercial firewood and are probably consigned to burn piles.

Yes, all this variance makes everything more difficult to handle. But wood is wood: it still burns and produces heat just the same. And, in the end, it provides a much more interesting pile than what I could buy.
The desirability of variance in creative work
Doubtless you’ve seen plenty of AI-generated images and artwork by now. I see a lot of it on Substack because the authoring interface has a built-in image generator. But have you noticed that there’s a certain kind of sameness to those images? The same is true for AI-generated text: there’s a uniformity, a sameness.
Now, let me point out that there are domains where such sameness is beneficial or even necessary, such as code for computer software. Computers, in their fundamental hardware, are beyond stupid: there’s nothing there that even compares with intelligence or thinking. The heart of a CPU, for example, is a piece of circuitry called the Arithmetic Logic Unit (ALU). The ALU blindly and passively computes every possible result for any given set of operands: add, subtract, multiply, divide, AND, OR, NOR, XOR, and so on. It never decides on its own which output is meaningful at any given time. That decision comes instead from the human intelligence that’s embedded in the bits of software fed to the CPU. Similarly, every layer of computation built on top of the ALU and CPU is simply another layer of human intelligence expressed in software. Absolutely none of it comes from the machine itself.
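If you’ll forgive a brief illustration, here’s a toy sketch in Python (purely illustrative, of course; a real ALU is combinational circuitry, not a lookup table) of that division of labor: the circuit blindly produces a result for every operation, and the opcode, which ultimately traces back to a human programmer, selects the one that matters.

```python
# A toy model of an ALU: compute every result, let the opcode choose.
def alu(a: int, b: int, opcode: str) -> int:
    # The "circuit" blindly produces a result for every operation...
    results = {
        "ADD": a + b,
        "SUB": a - b,
        "MUL": a * b,
        "AND": a & b,
        "OR":  a | b,
        "XOR": a ^ b,
    }
    # ...and the opcode, supplied by software (that is, by a human
    # decision somewhere up the stack), selects the meaningful one.
    return results[opcode]

print(alu(2, 2, "ADD"))  # 4, meaningful only because a program asked for ADD
```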
Computers, in fact, were designed from the beginning to be deterministic, always producing the same output for the same inputs. (The closely related term idempotency, from the Latin idem, same, and potens, power, describes an operation that can be repeated without changing the result.) A computer, no matter how much we want to pretend otherwise, never “thinks” for itself or creatively decides, because today is a Saturday or because the Vikings lost a playoff game, that two plus two is now the eighth digit of pi. (Such inconsistencies happen only where there’s a breakdown in the physical circuitry.) This means that if you want a computer to do a certain thing, you have to tell it to do that thing in the same way every time. Creative expression has no place here.
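You can see this determinism in miniature with a few lines of Python (again a toy sketch, not how any particular AI system is built): even a “random” number generator, given the same seed, dutifully reproduces the same sequence every single time.

```python
import random

# Run this as often as you like: seeding the generator with the same
# value always yields the identical "random" sequence.
for run in (1, 2):
    random.seed(42)
    print(f"Run {run}:", [random.randint(0, 9) for _ in range(5)])
# Both runs print the same list; the variation is only apparent.
```

Generative AI works the same way underneath: what looks like spontaneity is sampling driven by pseudo-random numbers like these.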
And that’s where generative AI, I think, shows its primary weakness, because its strength, like commercial woodchopping, is in the production of sameness, even as its programmers try to give it a semblance of variation. It’s like using synthesized instruments for music: although sampled instruments sound pretty good, by and large, they yet lack the subtle variations of a real instrument in the hands of a genuine human.
For creative and imaginative domains like fiction, such sameness is detrimental. I think back to my youth when Star Trek: The Next Generation was still airing new episodes. I lost interest sometime during Season 6 because the shows started to take on a certain similarity. The characters were all pretty much developed by then, and it seemed like the producers started recycling old plotlines. In short, it started to feel stale, like a soap opera or a box of old crackers. I figure the producers felt the same way and thus ended the show after Season 7 to devote their energies to spin-offs with fresh characters, fresh settings, and fresh challenges.
What’s true of screenwriting is also true of novels. Human readers, by and large, want the uniqueness of an author’s voice and sensibilities rather than sameness. Readers want to be surprised in new ways, even if surprises occur within well-established patterns or tropes (as within genres or even sub-genres like Harlequin Romances). I think creative writing is called “creative” because of this expectation.
To some extent generative AI can create what appear to be surprises by assembling pieces from existing material (its training data), but they’ll be surprising only to those readers who haven’t already seen such material elsewhere, like TV viewers who started watching Star Trek: The Next Generation in Season 6. Readers who begin to see patterns of sameness in AI-generated novels, on the other hand, will, I think, quickly lose interest and seek out something more genuinely human.
Because of the deterministic nature of computers, the output from generative AI, even if trained to produce variation, simply cannot match the uniqueness of expression that’s possible from human beings.
In her article “The Antithesis of Inspiration: Why ChatGPT Will Never Write a Literary Masterpiece” (Poets & Writers, Jan/Feb 2024), Eileen Pollack points out that although she is impressed with what AI can do, she yet stands “convinced that no artificially intelligent ‘author’ will ever produce a work of literary genius.”
Put simply: A computer will never suffer the shitty childhood that endows many humans with the treasure trove of unique material upon which they can draw for the remainder of their writing lives. Even if they don’t focus on their own triumphs and travails, writers rely on their idiosyncratic observations of the world around them to provide the eccentricity of detail that allows their prose to seem so vibrant, persuasive, and three-dimensional. Of course, writers also derive inspiration from listening to other people describe their experiences, reading other people’s books, watching movies and TV shows, visiting art galleries, and surfing YouTube, which an AI program might do to accumulate its raw material. Such a program might even be able to use its “imagination” to combine the flotsam and jetsam in its memory in unique ways. But humans can mine their embodied experiences in an offline world to produce fresher, more varied, and more accurate descriptions than a computer.
I agree with Pollack (though I think an “interesting” childhood is just as valuable as a “shitty” one). I don’t think an AI system running in a datacenter will ever match individual human sensibilities.1 The simple fact is this: every individual human being occupies a unique place in all the universe and therefore has a truly unique point of view in the present moment. That person has also followed a path, and has thus gained experience that others do not share.2 And if you’re willing to at least entertain the doctrine of reincarnation, that path stretches back possibly into eons upon eons, increasing the uniqueness exponentially.
Again, therefore, AI-generated material, which arises from a kind of aggregate averaging of human experience just as commercial firewood follows a standard, can never be as intricately rich as what a human can produce. As Dan Blank puts it:
You are an experience. A human being that creates moments and experiences for those who connect with your writing, creative work, or with you in other ways.
Generative AI as a useful starting point
These limitations, however, don’t make the use of generative AI an either-or question. For all but the most daring experimentalists, probably 90% of what writers produce (and that number is simply a guess) will be remarkably similar if for no other reason than that people who share a language share a great many conventions in their communication. Consequently, that more commonplace 90% (or whatever percentage) is what AI can likely produce quite readily.3
That commonplace 90%, however, has never been the differentiating factor between authors: the differentiation comes instead through the non-commonplace 10% that embodies the unique experiences of the author, as Pollack suggests. That’s where the author’s humanity shines through and where AI’s non-humanity falters. And it’s why writing craft will still matter—as the Wall Street Journal article referenced in part 1 suggests, AI augments existing skill rather than making up for deficiencies of skill. You can’t amplify non-existent signals, and by the time you amplify weak signals enough to be useful you end up with a lot of noise.
That 10%, moreover, even though it’s a fraction of the whole, is yet enough to carry uniqueness into every sentence, which flows into paragraphs, which in turn influence the direction of the story, the arc of the characters, and all the other twists and turns of the plot. That 10% is where we find the great literary surprises, the quotable lines, and the unforgettable characters.
In other words, there is likely a place for generative AI to provide frameworks within which authors can express their uniqueness. It’s like a musician making use of more or less standard backing tracks (with sampled instruments) while inventing melodies or guitar riffs of their own, or even a painter using commercial paints rather than concocting his or her own.
Will I be using AI for writing, then?
At this point, I can’t say that I’ll be making use of generative AI myself because my first priority, as explained in part 1, is the experience of writing spiritual, devotional, and mystical realism. I’m not at present concerned with the production of sellable stories or novels. As you can see from the posts here on Deus in Fabula, I’m engaged in a quest for a kind of writing that, from what I can tell, is rare in fiction and non-fiction alike. If I asked AI to do some of this work for me, what training data would it even be drawing upon? I figure that at best it might produce the sort of vagueness that’s found in Lanza and Kress’s Observer, as discussed in Three literary accounts of mystical union, part 1.
When I get to the point of wrapping a longer story around key inner experiences of a character, I might experiment with AI to generate a general framework in the same way that I do use an electric chainsaw for cutting wood. But beyond that, I’m not sure I ever want to give away the privilege of immersing myself in the creation of the story.
(If you like this post, select the ❤️ to bless the Algorithm Angels.)
See also “Five Ways to Write Better Copy Than ChatGPT” by Robert Bly.
This uniqueness is true even of subatomic particles according to the Pauli Exclusion Principle. In Autobiography of a Yogi, Paramhansa Yogananda also states that, like humans, “every atom in creation is inextinguishably dowered with individuality” (where “atom” means particle, as described by the Sanskrit anu). Every individual, he says elsewhere, is also “essential to the universal structure, whether in the temporary role of pillar or parasite.” At the same time, it should be noted that God-realized saints—those who are one with God and thus with all Creation—can and do share the unique experience of others.
At least in wording, sentence phrasing, dialogue, and accepted story patterns like the three-act structure.
Related to my articles on "Chopping wood and generative AI," I found this relevant post:
https://seanjkernan.substack.com/p/13-signs-you-used-chatgpt-to-write