After attempting (and struggling with) calling are.na's API, loading images, and running an image similarity task, I decided to experiment more with my own dataset.
In my 100 Days of Making class, I've been drawing Speculative Chairs each day on my iPad. I recently realized that I've slowly been building my own dataset, one that might be interesting to analyze with some of the machine learning models we're exploring in class. Fortunately, Dan's examples of clustering CLIP embeddings and visualizing them with UMAP are exactly what I needed to process and visualize this data!
I started with Dan's example for creating an embeddings database and testing semantic search once the embeddings were built. I used 50 of my own chair sketches and 50 chair images I'd saved as inspiration, so I could compare and contrast the embeddings between the two sets.
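For context, here's a rough sketch of what building that embeddings database can look like. This isn't Dan's exact code; it assumes Transformers.js for the CLIP image encoder, and the model name, folder path, and output filename are all placeholders:

// Minimal sketch: build an embeddings database from a local image folder with CLIP via Transformers.js.
// Assumes Node.js with "type": "module"; folder and file names are hypothetical.
import { AutoProcessor, CLIPVisionModelWithProjection, RawImage } from '@xenova/transformers';
import fs from 'fs';
import path from 'path';

const modelId = 'Xenova/clip-vit-base-patch32';
const processor = await AutoProcessor.from_pretrained(modelId);
const visionModel = await CLIPVisionModelWithProjection.from_pretrained(modelId);

const folder = 'images/chairs'; // hypothetical local folder of sketches
const database = [];

for (const file of fs.readdirSync(folder)) {
  // Read the image, preprocess it, and run it through CLIP's vision encoder
  const image = await RawImage.read(path.join(folder, file));
  const inputs = await processor(image);
  const { image_embeds } = await visionModel(inputs);
  database.push({ filename: file, embedding: Array.from(image_embeds.data) });
}

// Save filename/embedding pairs so searches don't have to re-embed every image
fs.writeFileSync('embeddings.json', JSON.stringify(database));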
I slightly adjusted Dan's original code to work with my locally stored images and added a similarity score label to display under the top 20 images for each search.
// Similarity score label displayed under each result
let scoreLabel = createDiv(`Score: ${(similarity * 100).toFixed(1)}%`);
scoreLabel.parent(resultDiv);
scoreLabel.style('margin-top', '5px');
scoreLabel.style('font-size', '12px');
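The similarity value in that snippet comes from comparing each image's embedding against the query's embedding. Dan's example may already compute this for you; as a rough sketch, assuming the embeddings are plain arrays of numbers and a queryEmbedding already exists, a cosine similarity plus a sort gives the ranked top 20:

// Cosine similarity between two embedding vectors (plain arrays of numbers).
// Values closer to 1 mean the image is more similar to the query.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Rank every image in the database against the query and keep the top 20
const results = database
  .map((item) => ({ ...item, similarity: cosineSimilarity(queryEmbedding, item.embedding) }))
  .sort((a, b) => b.similarity - a.similarity)
  .slice(0, 20);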
I searched the same seven terms across both sets of images to identify any discernible patterns.
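For those text searches, each term first has to become an embedding in the same space as the images. Here's a minimal sketch of that step, again assuming Transformers.js and the same placeholder model name (Dan's example may wrap this differently):

// Embed a text query with CLIP's text encoder so it can be compared to the image embeddings
import { AutoTokenizer, CLIPTextModelWithProjection } from '@xenova/transformers';

const modelId = 'Xenova/clip-vit-base-patch32';
const tokenizer = await AutoTokenizer.from_pretrained(modelId);
const textModel = await CLIPTextModelWithProjection.from_pretrained(modelId);

// 'a cozy armchair' is just an illustrative query, not one of my actual search terms
const textInputs = tokenizer(['a cozy armchair'], { padding: true, truncation: true });
const { text_embeds } = await textModel(textInputs);
const queryEmbedding = Array.from(text_embeds.data);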