Assignment

  1. Read *Language models can only write ransom notes* by Allison Parrish and review the Foundation Model Transparency Index. What questions arise for you about using LLMs in your work at ITP?

After reading Allison Parrish’s *Language models can only write ransom notes* and digging into Janelle Shane’s book, *You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place,* I found myself more driven to try to create some AI weirdness of my own.

No doubt I’m not alone in my desire to “stump” AIs in humorous ways. As I’ve spent more time using LLMs and testing all kinds of generative AI tools on a near-daily basis, I’ve found this desire to stump AIs (I’m being intentionally vague with the term) grows stronger every day.

I’ve found Janelle Shane’s book extremely helpful for understanding what AIs are generally good at and where, even with today’s developments, they still struggle.

Some key takeaways from Janelle Shane’s overview of AI models:

  1. Experiment with prompting a large language model in some way other than a provided interface (e.g. ChatGPT) and document the results in a blog post. Consider how working with an LLM compares to generating text with other methods, including but not limited to Markov chains and context-free grammars. Here are some options:

I decided to use Ollama and get it working locally. After we spoke about it in class, I thought it might be fun to give the LLaVA model a try and see what responses I could get about the contents of images.

I started with some topical content for the week, but was sad that the model’s responses were quite accurate and didn’t provide the comic relief I was ultimately seeking …
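For anyone wanting to try something similar, a minimal sketch of querying LLaVA through Ollama’s local HTTP API might look like the following. This assumes you have run `ollama pull llava` and that `ollama serve` is listening on its default port (11434); the image path is hypothetical, and I’m using only the Python standard library here rather than Ollama’s official client.

```python
import base64
import json
from urllib.request import Request, urlopen

# Ollama's default local generate endpoint (assumption: default install/port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(image_bytes: bytes, prompt: str) -> dict:
    """Build the JSON body Ollama expects for a multimodal model like LLaVA.

    Images are passed as base64-encoded strings in the `images` list.
    """
    return {
        "model": "llava",
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # ask for one complete response, not a token stream
    }

def describe_image(path: str, prompt: str = "What is in this image?") -> str:
    """Send an image to the local LLaVA model and return its text response."""
    with open(path, "rb") as f:
        payload = build_payload(f.read(), prompt)
    req = Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, something like `describe_image("cat.jpg")` (a hypothetical filename) should return the model’s description of the image, similar to what the `ollama run llava` CLI produces interactively.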