How I Use AI

Text

LLMs play the role of interlocutor and editor in my writing. I may ask LLMs to read my arguments and play the part of someone with an opposing perspective, so that I can consider how best to respond to possible objections. Additionally, I may ask LLMs to edit chunks of text to make them more concise (mostly because I have a tendency to be verbose).

All text on this blog, and in publications in which I am listed as an author, is written by me. You can verify this because there are several typos across my blog. While I use LLMs in the ways mentioned above, I will never copy-and-paste from an LLM, nor will I ask an LLM to write a blog post, section of a paper, or email for me.

Images

I will not use LLMs to generate images for my blog or for any publications. I may use AI to help me write code for visualizations (see the Code section below).

Code

I will absolutely use LLMs to write code. My philosophy is that LLMs are another tool to improve code quality (much like an IDE, a linter, or IntelliSense). I have had a lot of success using LLMs to contribute to codebases written in languages in which I am not proficient (e.g. Go and TypeScript). That being said, I take the time to review the code myself and take full responsibility for what is written. I am not a fan of vibe coding, as I like to have a high-level understanding of what each part of the code is doing. Additionally, the autocomplete functionality gets in the way when I know the language well, and at times I turn it off so I can focus on the task at hand.

As an example, I may write functionality in Python or pseudocode and ask an LLM to translate it into Go. Where I don’t understand the syntax, I may ask the LLM to explain it (e.g. by using analogies to Python). Cursor is particularly good at this, with CMD+K and Option+Return for asking quick questions.
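For instance, the starting point might be something as small as this (a made-up helper, purely for illustration), which I would then hand to an LLM and ask for an idiomatic Go translation:

```python
def count_words(path: str) -> dict[str, int]:
    """Count how many times each whitespace-separated word appears in a file."""
    counts: dict[str, int] = {}
    with open(path) as f:
        for line in f:
            for word in line.split():
                counts[word] = counts.get(word, 0) + 1
    return counts
```

The Go version forces explicit error handling and map initialization, which is exactly the kind of syntax I would then ask the LLM to explain by analogy to Python.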

I also use LLMs to write code that I see little benefit in memorizing (e.g. how to write a Dockerfile, or Docker CLI commands). Docker is an especially good example. Since my role does not involve creating or managing infrastructure, I consider learning more about Docker than I already know to be a low return on investment. As such, most Dockerfiles that I use in personal projects are written by LLMs.

Another example is very extensive APIs, such as matplotlib. My approach here is to create a very basic version of the plot I’d like to share, and then use LLMs to add features that would take more time to find in the documentation than I would like to spend. Consider the plot in this post. I used ChatGPT to help me make the embedded plot and move the legend outside of the plotting region. While I could have read matplotlib’s documentation on how to do this, it was much more efficient (for my purposes) to ask an LLM to accomplish this and move on with writing my post.
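To give a sense of what that looks like, moving a legend outside the axes comes down to a single bbox_to_anchor call. This is a minimal sketch with made-up data, not the exact code behind that plot:

```python
import matplotlib.pyplot as plt

# Made-up data; the point is the legend placement, not the plot itself.
fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [1, 4, 9, 16], label="quadratic")
ax.plot([1, 2, 3, 4], [1, 8, 27, 64], label="cubic")

# Anchor the legend just outside the right edge of the plotting region,
# and save with bbox_inches="tight" so it is not clipped.
ax.legend(loc="center left", bbox_to_anchor=(1.02, 0.5))
fig.savefig("plot.png", bbox_inches="tight")
```

Knowing that this is possible is enough for me; the exact argument names are the part I am happy to outsource to an LLM.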

Research

I may use LLMs to write aspects of code for research (as explained above), but I will not use LLMs to do research or summarize research papers. If I reference a paper, it is because I have read at least some parts of that paper and can justify why I am citing said paper.

I will use LLMs to remind me of mathematical details (e.g. how robust standard errors are computed using linear algebra). The benefit here is staying in a flow state. I know which books on my shelf contain this information, should I ever need to reference them, but rather than get the book and flip through the pages to find the right passage, I instead ask an LLM to remind me. In this way, an LLM is like a desk mate in my grad school laboratory.
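To make the robust standard errors example concrete: for OLS with design matrix $X$ and residuals $\hat{e}_i$, the HC0 heteroskedasticity-robust covariance estimator is the sandwich

$$
\widehat{\operatorname{Var}}(\hat{\beta}) = (X^\top X)^{-1} \, X^\top \operatorname{diag}(\hat{e}_1^2, \dots, \hat{e}_n^2) \, X \, (X^\top X)^{-1},
$$

and the robust standard errors are the square roots of its diagonal. It is precisely this kind of detail, which I once knew cold, that I ask an LLM to jog my memory on.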