
Paper: Shuffler: A Large Scale Data Management Tool for Machine Learning in Computer Vision
Event Type: Paper
Tags: Data, Student Paper, Applications, Workflows, Awards, BSPA
Time: Tuesday, July 30, 3:30pm - 4pm
Location: Crystal C
Description: In the academic research community, datasets in the computer vision (CV) field are primarily static. Once a dataset has been accepted as a benchmark for a CV task, researchers will not alter it, so that their results remain reproducible. At the same time, when exploring new tasks and new applications, a dataset tends to be an ever-changing entity. A practitioner may combine existing public datasets, filter the images or objects in them, change annotations or add new ones to fit the task at hand, visualize sample images, or output statistics as text or plots. In effect, the dataset changes as the practitioner experiments with both the data and the algorithms. Considering that machine learning (ML) and deep learning call for large volumes of data, it is no surprise that the data and software management associated with research dealing with such living datasets can become quite complex. At the time of this publication, there is no flexible, publicly available instrument that facilitates manipulating image data and their annotations throughout an ML pipeline. In this work, we present Shuffler, an open-source tool that makes it easy to manage large CV datasets. It stores annotations in a relational, human-readable database. Shuffler defines over 40 data-handling operations on annotations that are commonly useful in supervised learning applied to CV, and it supports some of the most well-known CV datasets. Finally, it is easily extensible, making the addition of new operations and datasets fast and easy to accomplish.
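
To give a flavor of what "annotations in a relational, human-readable database" plus chainable filtering operations can look like in practice, below is a minimal sketch in Python using SQLite. The table and column names (images, objects, imagefile, name, bounding-box fields) and the filtering step are illustrative assumptions for this listing, not Shuffler's actual schema or API.

import sqlite3

# Illustrative only: a hypothetical relational layout for CV annotations,
# not Shuffler's actual schema.
conn = sqlite3.connect("annotations.db")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE IF NOT EXISTS images (
        imagefile TEXT PRIMARY KEY,   -- path to the image on disk
        width INTEGER,
        height INTEGER
    );
    CREATE TABLE IF NOT EXISTS objects (
        objectid INTEGER PRIMARY KEY AUTOINCREMENT,
        imagefile TEXT REFERENCES images(imagefile),
        name TEXT,                    -- class label, e.g. 'car'
        x1 INTEGER, y1 INTEGER,       -- top-left corner of the bounding box
        width INTEGER, height INTEGER -- bounding-box size in pixels
    );
""")

# One example "operation" of the kind a practitioner might chain in a
# pipeline: keep only objects of one class that are large enough.
cur.execute(
    "DELETE FROM objects WHERE name != ? OR width * height < ?",
    ("car", 32 * 32),
)
conn.commit()
conn.close()

Because the annotations live in an ordinary SQL database, such steps remain inspectable with standard tools (e.g. the sqlite3 shell), which is one plausible reading of the "human-readable" claim in the abstract.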