Can an AI assistant handle the tedious parts of academic writing?

A real-world demo of Claude Code

Migrating a manuscript template, debugging LaTeX, and managing git — all in one conversation
Author: Ryan Peterson
Affiliation: University of Iowa
Published: April 28, 2026
Categories: tools, R

Author Note

This post deviates from our usual AI use policy as an experiment using Claude Code, the result of which will become clear as you read.

What if you could offload the parts of academic writing that have nothing to do with writing? Not the thinking, not the modeling, not the prose — but the LaTeX errors, the git housekeeping, the YAML frontmatter surgery that eats an afternoon every time you switch journals.

I recently put this to the test with Claude Code, Anthropic’s AI coding assistant. Over a single conversation, I used it to clean up a git repo, migrate a manuscript from one journal template to another, and debug the resulting build errors. Here’s how it went.

The setup

I’m working on a paper targeting MDPI’s journal Entropy, but the manuscript (RBIC_Multimodal.Rmd) was still using the rticles::elsevier_article template from an earlier submission plan. The repo also had some generated figure files tracked in git that shouldn’t have been. Routine housekeeping, but the kind that quietly devours time.

Claude Code runs in your terminal (or IDE) and has direct access to your project files, shell, and git. You describe what you want, it proposes a plan, and you approve or redirect. It’s a conversation, not a one-shot prompt.

Task 1: Stop tracking build artifacts

The RBIC_Multimodal_files/ directory — full of generated PDFs from knitr — was being tracked in git. These get regenerated every build, so they just add noise to diffs.

I asked Claude about it, and it laid out the standard three-step fix:

# Add to .gitignore
echo "RBIC_Multimodal_files/" >> .gitignore

# Remove from git's index (but keep the local files!)
git rm -r --cached RBIC_Multimodal_files/

# Commit and push
git commit -m "Stop tracking RBIC_Multimodal_files/"
git push

The key here is the --cached flag — it untracks the files without deleting them from disk. Claude explained this clearly and then, after I confirmed, executed it. Eight PDFs removed from the repo, .gitignore updated, done.
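The same recipe can be tried end-to-end in a scratch repository (a sketch; the directory name mirrors the one in this post, and everything happens under a temporary directory):

```shell
# set up a throwaway repo that tracks a generated artifact
cd "$(mktemp -d)" && git init -q .
mkdir RBIC_Multimodal_files && touch RBIC_Multimodal_files/fig1.pdf
git add -A && git -c user.email=a@b -c user.name=demo commit -qm "track artifacts"

# the fix: ignore the directory, then untrack it without deleting it
echo "RBIC_Multimodal_files/" >> .gitignore
git rm -rq --cached RBIC_Multimodal_files/

git check-ignore RBIC_Multimodal_files/fig1.pdf  # prints the path: ignored now
ls RBIC_Multimodal_files/                        # the local file is untouched
```

`git check-ignore` is a handy sanity check after this kind of surgery: it prints a path only if some ignore rule matches it.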

Note

Nothing here is beyond a quick Stack Overflow search. But Claude handled it end-to-end — checking what was tracked, editing .gitignore, running the commands, committing — without me switching contexts.

Task 2: Elsevier to MDPI Entropy

This is where things got more interesting. Switching rticles templates isn’t just changing one line in the YAML. The author/affiliation format is different, the citation engine changes (CSL to natbib), extra metadata fields are required, and you need a Definitions/ folder with the MDPI class files.

I asked Claude to help ensure the proper template was in use. It:

  1. Spawned a sub-agent to research rticles::mdpi_article requirements, YAML fields, and Entropy-specific settings
  2. Read my existing Rmd frontmatter
  3. Rewrote the YAML from scratch

Here’s a simplified before/after:

Before (Elsevier):

output:
  rticles::elsevier_article:
    keep_tex: true
author:
  - name: "Ryan A. Peterson"
    affiliation: a,b
    footnote: "Corresponding Author"
address:
  - code: a
    address: "Department of Biostatistics..."
csl: biometrics.csl

After (MDPI):

output:
  rticles::mdpi_article:
    extra_dependencies: longtable
    keep_tex: true
author:
  - name: Ryan A. Peterson
    affil: "1,2,*"
affiliation:
  - num: 1
    address: |
      Department of Biostatistics...
    email: ryan-peterson@uiowa.edu
journal: entropy
type: article
status: submit

Claude also copied the Definitions/ folder from the installed rticles package, added the required back-matter fields (acknowledgement, funding, conflictsofinterest), and removed setup that conflicts with mdpi.cls (the endfloat package and a custom caption-width setting).

Task 3: Debugging the build

I rendered the document in RStudio and fed the errors back to Claude. Three rounds of fixes followed.

Round 1: Missing Ghostscript

! epstopdf Error: Required program gs not found

The MDPI logos are .eps files, and pdfLaTeX needs Ghostscript to convert them. Claude proposed two options: install Ghostscript, or pre-convert the logos to PDF so collaborators don’t hit the same issue.

I pointed out that option 2 is better for the team:

“It seems like [option 2] is the better option because if others are rendering this document on their machines, they may run into a similar issue.”

Claude agreed, installed Ghostscript via the conda setup I already had, converted the three EPS logos to PDF, and then patched mdpi.cls to drop the .eps extensions from \includegraphics calls. Now pdfLaTeX finds the PDFs automatically — no Ghostscript required at build time.
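The two-step fix can be sketched roughly as follows (an illustration, not the exact commands from the session; it assumes the logos and class file live in Definitions/, and both steps no-op harmlessly if those files are absent):

```shell
# 1. one-time pre-conversion of the .eps logos, so collaborators
#    never need Ghostscript at build time
for f in Definitions/*.eps; do
  [ -e "$f" ] || continue   # skip if the glob matched nothing
  gs -q -dBATCH -dNOPAUSE -dEPSCrop -sDEVICE=pdfwrite \
     -sOutputFile="${f%.eps}.pdf" "$f"
done

# 2. strip the hard-coded .eps extensions in the class file, so
#    \includegraphics resolves the pre-converted .pdf files instead
if [ -f Definitions/mdpi.cls ]; then
  sed -i.bak 's/\.eps}/}/g' Definitions/mdpi.cls
fi
```

Leaving the extension off `\includegraphics` arguments lets pdfLaTeX pick whatever format it supports, which is exactly the portability we wanted.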

Round 2: A sneaky bibliography entry

The next error looked like a math issue:

! Missing $ inserted.
l.22 ...95/3/10.1093/biomet/asn034/2/asn034.pdf]}}

I told Claude I’d seen this kind of thing before with tables and escape characters. But it traced the actual source to a .bib entry whose eprint field contained a URL-like path full of underscores. When the bibliography is typeset, those underscores are read as LaTeX subscript operators, which are legal only in math mode; hence the “Missing $ inserted” error. The doi and url fields already covered the same reference, so removing eprint was the clean fix.
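The fix amounts to deleting one field. A one-liner in the spirit of what happened (the file name references.bib is an assumption, and the guard makes it a no-op if the file isn’t there):

```shell
# drop any eprint field lines from the bibliography; the doi and
# url fields already identify the same reference
if [ -f references.bib ]; then
  sed -i.bak '/^[[:space:]]*eprint[[:space:]]*=/d' references.bib
fi
```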

Tip

This was the moment that sold me. I had a plausible (but wrong) hypothesis about the error source. Claude didn’t anchor on my suggestion — it searched the .bib file, matched the error text, and found the real cause.

Round 3: Unused packages

Package gensymb Warning: Not defining \perthousand.

I wasn’t sure whether gensymb was actually used anywhere in the paper. Claude searched the entire Rmd for any gensymb commands (\degree, \celsius, \micro, etc.) — found nothing but the \usepackage line itself. Removed it.
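That search is a grep away. A sketch, demonstrated here on a stand-in snippet (in the real session the target was RBIC_Multimodal.Rmd; the command list comes from the gensymb documentation):

```shell
# look for any gensymb command before dropping the package
printf '%s\n' 'Angles are in \degree here.' 'No symbols on this line.' > snippet.Rmd
grep -nE '\\(degree|celsius|perthousand|ohm|micro)' snippet.Rmd
# -> 1:Angles are in \degree here.
```

If the only hit is the `\usepackage{gensymb}` line itself, the package is dead weight and can go.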

The collaboration pattern

What I found most useful wasn’t any single capability — it was the iteration loop:

  1. I describe the goal
  2. Claude proposes a plan
  3. I approve or redirect
  4. Claude executes
  5. I report results (or errors)
  6. Repeat

I stayed in control throughout. Claude asked before running destructive commands. When I redirected (the EPS portability issue), it adapted immediately. When I told it the undefined references were expected (those chunks have eval=FALSE while I re-run an analysis), it moved on without trying to “fix” them.

Key takeaways

  1. Claude Code is a collaborator, not a button. It works best with back-and-forth. The human provides judgment; the AI handles execution and research.

  2. It handles tedious format migrations well. YAML rewriting, class file patching, bibliography fixes — exactly the kind of work that’s straightforward but time-consuming.

  3. It debugs iteratively. Each error got diagnosed and fixed in one round, not blindly retried.

  4. Human oversight matters. I caught the portability issue with EPS conversion. I knew the undefined references were expected. The AI didn’t need to know everything — it just needed to listen when I told it.

  5. It’s git-aware. It reads status, writes descriptive commit messages, and pushes when asked — but only when asked.

One more thing

At the end of our session, I asked Claude to generate a Quarto reveal.js presentation summarizing everything we’d done. It wrote 20 slides with accurate quotes from our conversation, code blocks from the actual commands, a mermaid diagram of the workflow, and custom SCSS theming.

Then I asked it to fix three issues with the first draft. It did.

Then I asked it to write this blog post.

It did that too.


Author Note

It felt incorrect to say that I – Ryan Peterson – authored this post, because the “I” used throughout – Claude generating text through my perspective – is not me. The only text written by me is contained in the two “Author Note” boxes.

We decided to leave the text as it is, without our usual review process, so that it stands as a genuine experiment in what Claude is capable of from a pure writing perspective. This post therefore represents an exception to a key GMWG value, being human first:

We pledge to only use AI as a supporting writing tool

It also demonstrates the importance of such a pledge.

In this and future posts, any AI generated content will be clearly denoted as such with a dotted border. For example:

This text is AI-generated…

…This text is not.


Thanks for reading

© 2025 Glassbox Modeling Working Group

Content on this site is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).