We introduce CuratorNet, a neural network architecture for visually-aware recommendation of art images.
CuratorNet is designed with the goal of maximizing generalization: the network has a fixed set of parameters that only need to be trained once, and thereafter the model is able to generalize to new users or items never seen before, without further training. This is achieved by leveraging visual content: items are mapped to item vectors through visual embeddings, and users are mapped to user vectors by aggregating the visual content of items they have consumed.
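To make this concrete, here is a minimal sketch of the idea (NumPy, with made-up item ids; the plain average is purely illustrative, as CuratorNet's actual aggregation is learned inside the network):

```python
import numpy as np

# Illustrative only: item ids are made up, and the plain average stands in
# for CuratorNet's learned aggregation.
item_embeddings = {
    "item_a": np.random.rand(200).astype(np.float32),  # visual embedding
    "item_b": np.random.rand(200).astype(np.float32),
    "item_c": np.random.rand(200).astype(np.float32),
}

def user_vector(consumed_item_ids):
    """Map any user, even one unseen at training time, to a vector."""
    vectors = np.stack([item_embeddings[i] for i in consumed_item_ids])
    return vectors.mean(axis=0)  # no user-specific parameters involved

# A brand-new user can be scored without retraining:
u = user_vector(["item_a", "item_c"])
scores = {i: float(u @ v) for i, v in item_embeddings.items()}
```

Because the user vector is computed from item content rather than looked up in a trained table, new users and new items only require their visual embeddings.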
In this repository, we provide a TensorFlow implementation of CuratorNet.
The full paper pre-print is available on arXiv, and the paper is part of the ComplexRec workshop proceedings at the ACM RecSys 2020 conference.
If you find this repository useful for your research, please consider citing our paper:
```bibtex
@inproceedings{curatornet2020,
  author    = {Manuel Cartagena and Patricio Cerda and Pablo Messina and Felipe Del Río and Denis Parra},
  title     = {CuratorNet: Visually-aware Recommendation of Art Images},
  year      = {2020},
  url       = {https://arxiv.org/abs/2009.04426},
  booktitle = {Proceedings of the Fourth Workshop on Recommendation in Complex Environments},
  keywords  = {visual art, deep learning, recommender systems},
  location  = {Virtual Event, Brazil},
}
```
In /experiments, you can find all the notebooks needed to replicate our results.
In /src, you can find the CuratorNet implementation.
Execute the following commands from the main folder:
```bash
virtualenv -p python3 ./env                               # isolated Python 3 environment
source ./env/bin/activate
pip install -r requirements.txt                           # project dependencies
ipython kernel install --name "CuratorNetKernel" --user   # kernel used by the notebooks
jupyter notebook
```
Train CuratorNet:
- Execute notebook experiments/training.ipynb

Precompute embeddings:
- Execute notebook experiments/precomputation.ipynb

Evaluate CuratorNet:
- Execute notebook experiments/evaluation.ipynb

Compute performance metrics:
- Execute notebook experiments/metrics.ipynb
In each notebook:
- Kernel -> Change kernel -> CuratorNetKernel
- Cell -> Run All
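Alternatively, if you prefer to run a notebook headlessly instead of through the Jupyter UI, nbconvert can execute it with the same kernel (optional; not required by the steps above):

```bash
jupyter nbconvert --to notebook --execute --inplace \
    --ExecutePreprocessor.kernel_name=CuratorNetKernel \
    experiments/training.ipynb
```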
CuratorNet leverages neural image embeddings obtained from pre-trained CNNs. We train CuratorNet to rank triplets that associate a user with a pair of images: one for which we have positive feedback from that user, and one for which we do not.
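As a sketch of that ranking objective, a standard triplet formulation of the BPR loss mentioned below looks like this in TensorFlow 2 (illustrative only; see /src for our actual implementation):

```python
import tensorflow as tf

def bpr_triplet_loss(user_vec, pos_item_vec, neg_item_vec):
    """BPR loss for (user, positive item, negative item) triplets.

    Each argument is a [batch, dim] tensor. Scores are dot products;
    the loss pushes positive scores above negative scores.
    """
    pos_score = tf.reduce_sum(user_vec * pos_item_vec, axis=-1)
    neg_score = tf.reduce_sum(user_vec * neg_item_vec, axis=-1)
    # BPR: maximize the log-sigmoid of the score margin
    return -tf.reduce_mean(tf.math.log_sigmoid(pos_score - neg_score))

# Example with random vectors standing in for learned embeddings:
u = tf.random.normal([8, 200])
p = tf.random.normal([8, 200])
n = tf.random.normal([8, 200])
loss = bpr_triplet_loss(u, p, n)
```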
CuratorNet draws inspiration from VBPR and YouTube's Recommender System: we optimize for ranking with the BPR loss, and we seek generalization to new users without introducing additional parameters or further training. We also propose a set of sampling guidelines for generating the training triplets, which improve the performance of both CuratorNet and VBPR relative to random negative sampling.
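For contrast, the random negative sampling baseline that the guidelines improve upon looks roughly like this (the guidelines themselves are described in the paper and are more involved than this sketch):

```python
import random

def random_triplets(user_purchases, all_item_ids, n_per_positive=1):
    """Baseline: uniform random negatives (the strategy the guidelines beat).

    user_purchases: dict of user_id -> set of purchased item ids.
    Yields (user_id, positive_item, negative_item) training triplets.
    """
    for user_id, positives in user_purchases.items():
        for pos in positives:
            for _ in range(n_per_positive):
                neg = random.choice(all_item_ids)
                while neg in positives:  # negatives must be unconsumed items
                    neg = random.choice(all_item_ids)
                yield (user_id, pos, neg)
```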
To test our approach, we use an anonymised dataset of art purchases from an e-commerce website. Each transaction associates a user's ID with the ID and visual embedding of the painting they bought. This dataset can be downloaded for training and evaluating CuratorNet, as shown in the table below. Although the artwork images themselves are not included, CuratorNet's architecture allows it to be used with any other dataset once trained.
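Assuming the transactions are loaded into a table with user and item id columns (column names here are hypothetical; adapt them to the downloaded files), per-user purchase histories can be assembled along these lines and fed to the aggregation sketched earlier:

```python
import pandas as pd

# Hypothetical column names; adapt to the actual dataset files.
transactions = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],
    "item_id": ["item_a", "item_b", "item_c"],
})

# One consumption history per user, ready for embedding aggregation.
histories = transactions.groupby("user_id")["item_id"].apply(list).to_dict()
# {'u1': ['item_a', 'item_b'], 'u2': ['item_c']}
```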