FLIC
Overview

We collected a 5003-image dataset automatically from popular Hollywood movies. The images were obtained by running a state-of-the-art person detector on every tenth frame of 30 movies. People detected with high confidence (roughly 20K candidates) were then sent to the crowdsourcing marketplace Amazon Mechanical Turk to obtain ground-truth labeling. Each image was annotated by five Turkers, for $0.01 each, to label 10 upper-body joints. The median of the five labelings was taken for each image to be robust to outlier annotations. Finally, we manually rejected images in which the person was occluded or severely non-frontal. We set aside 20% of the data (1016 images) for testing.
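The dataset ships only the final consensus labels, but the median-of-five aggregation described above is easy to reproduce. A minimal sketch (the function name and the (annotators, joints, xy) array layout are assumptions, not part of the released format):

```python
import numpy as np

def aggregate_annotations(annotations):
    """Per-coordinate median over multiple annotators.

    annotations: array-like of shape (n_annotators, n_joints, 2) holding
    (x, y) coordinates from each Turker. Returns the (n_joints, 2) median
    locations, which are robust to a single outlier annotation.
    """
    annotations = np.asarray(annotations, dtype=float)
    return np.median(annotations, axis=0)

# Five hypothetical annotations of 10 upper-body joints
rng = np.random.default_rng(0)
five = rng.normal(loc=100.0, scale=2.0, size=(5, 10, 2))
consensus = aggregate_annotations(five)
print(consensus.shape)  # (10, 2)
```

Because the median is taken per coordinate, one wildly wrong annotator shifts the consensus far less than a mean would.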

Data Format

FLIC.zip (287 MB): 5003 examples used in our CVPR13 MODEC paper.

FLIC-full.zip (1.2 GB): 20928 examples, a superset of FLIC consisting of more difficult examples (see below). NOTE: please do not use this as training data if testing on the FLIC test set. It is a superset of the original FLIC dataset and will lead to overfitting. Choose a sensible split where no two frames from the same movie shot cross the train/test divide.
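One way to honor the note above is to split at the movie level, so frames from the same shot can never land on both sides of the divide. A minimal sketch, assuming each example can be paired with its source-movie name (the example format and function name are hypothetical):

```python
import random
from collections import defaultdict

def split_by_movie(examples, test_fraction=0.2, seed=0):
    """Split examples into train/test with no movie in both sets.

    examples: list of (movie_name, frame_id) pairs (assumed format).
    Grouping by movie guarantees that near-duplicate frames from the
    same shot never cross the train/test divide.
    """
    by_movie = defaultdict(list)
    for ex in examples:
        by_movie[ex[0]].append(ex)
    movies = sorted(by_movie)
    rng = random.Random(seed)
    rng.shuffle(movies)
    # Hold out roughly test_fraction of the *movies*, not the frames.
    n_test = max(1, round(len(movies) * test_fraction))
    test_movies = set(movies[:n_test])
    train = [ex for m in movies[n_test:] for ex in by_movie[m]]
    test = [ex for m in test_movies for ex in by_movie[m]]
    return train, test

examples = [("movieA", 1), ("movieA", 2), ("movieB", 1), ("movieC", 1)]
train, test = split_by_movie(examples, test_fraction=0.25)
```

Splitting on whole movies is coarser than splitting on shots, but it is the simplest partition that cannot violate the rule.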

Citation

Please use the following citation when referencing the dataset:

  @inproceedings{modec13,
    title={MODEC: Multimodal Decomposable Models for Human Pose Estimation},
    author={Sapp, Benjamin and Taskar, Ben},
    booktitle={Proc. CVPR},
    year={2013},
  }
🎉 Thanks to Data Decorators for their contribution
Dataset Information

Application scenario: none
Annotation types: Box2D, Classification, Keypoints2D
Task type: Pose Estimation
License: MIT
Updated: 2021-04-07 08:32:46

Data Summary

Data format: Image
Number of examples: 18.29K
Annotated examples: 20293
File size: 1 GB
Copyright holder: GRASP Laboratory
Annotator: unknown