SWAG Dataset


What is SWAG Dataset?

The SWAG (Situations With Adversarial Generations) dataset comprises 113,000 multiple-choice questions about a wide range of grounded situations. Each question is derived from a pair of consecutive video captions, drawn from the ActivityNet Captions dataset and the Large Scale Movie Description Challenge (LSMDC). Incorrect endings were selected using adversarial filtering. The SWAG dataset can be used in research toward commonsense natural language inference (NLI).

Download SWAG Dataset in Python

Instead of downloading the SWAG dataset yourself, you can load it in Python with a single line of code using the open-source Deep Lake package.

Load SWAG Dataset Training Subset in Python

				
    import deeplake
    ds = deeplake.load('hub://activeloop/swag-train')

Load SWAG Dataset Testing Subset in Python

				
    import deeplake
    ds = deeplake.load('hub://activeloop/swag-test')

Load SWAG Dataset Validation Subset in Python

				
    import deeplake
    ds = deeplake.load('hub://activeloop/swag-val')

SWAG Dataset Structure

SWAG Data Fields

For the training and validation sets:

  • video_id: tensor containing the video id.
  • fold_ind: tensor containing the fold id.
  • start_phrase: tensor containing the start phrase of the context.
  • gold_ending: tensor containing the gold (correct) ending.
  • distractor_0: tensor containing the first distractor, considered the highest-quality distractor.
  • distractor_1: tensor containing the second distractor.
  • distractor_2: tensor containing the third distractor.
  • distractor_3: tensor containing the fourth distractor, considered the lowest-quality distractor.
  • gold_source: tensor containing the labels gold and gen; gen indicates a generated best answer, while gold indicates the real ending, which is considered second best.
  • gold_type: label taking the values ‘pos’ and ‘unl’.
  • distractor_0_type: label taking the values ‘pos’ and ‘unl’.
  • distractor_1_type: label taking the values ‘pos’ and ‘unl’.
  • distractor_2_type: label taking the values ‘pos’ and ‘unl’.
  • distractor_3_type: label taking the values ‘n/a’, ‘pos’ and ‘unl’.
  • sentence_1: tensor containing the first sentence.
  • sentence_2: tensor containing the second sentence.
For the test set:

  • video_id: tensor containing the video id.
  • fold_ind: tensor containing the fold id.
  • start_phrase: tensor containing the start phrase of the context.
  • gold_source: tensor containing the labels gold and gen; gen indicates a generated best answer, while gold indicates the real ending, which is considered second best.
  • ending0: tensor containing the first ending.
  • ending1: tensor containing the second ending.
  • ending2: tensor containing the third ending.
  • ending3: tensor containing the fourth ending.
  • sentence_1: tensor containing the first sentence.
  • sentence_2: tensor containing the second sentence.
SWAG Data Splits
  • The SWAG training set contains 73,000 multiple-choice questions about grounded situations.
  • The SWAG validation set contains 20,000 multiple-choice questions about grounded situations.
  • The SWAG test set contains 20,000 multiple-choice questions about grounded situations, with gold labels withheld for blind evaluation.
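Putting the fields above together, each training record can be assembled into a single multiple-choice item: the start phrase paired with the gold ending and the distractor endings. The pure-Python sketch below illustrates this with an invented toy record; the dictionary keys mirror the tensor names listed above, but the values are made up for illustration and do not come from the real dataset.

```python
import random

def build_choices(record, seed=0):
    """Assemble one SWAG-style multiple-choice item from the raw fields.

    `record` is a plain dict standing in for one dataset sample.
    Returns the context, the shuffled candidate endings, and the index
    of the gold ending among them.
    """
    endings = [record["gold_ending"]] + [record[f"distractor_{i}"] for i in range(4)]
    order = list(range(len(endings)))
    random.Random(seed).shuffle(order)
    shuffled = [endings[i] for i in order]
    label = order.index(0)  # where the gold ending landed after shuffling
    return record["start_phrase"], shuffled, label

# Invented toy record for illustration only
sample = {
    "start_phrase": "The chef lifts the pan.",
    "gold_ending": "She flips the pancake in the air.",
    "distractor_0": "The pan sings a song.",
    "distractor_1": "He paints the wall blue.",
    "distractor_2": "The pancake files a report.",
    "distractor_3": "They drive to the moon.",
}
context, choices, label = build_choices(sample)
assert choices[label] == sample["gold_ending"]
```

A model is then asked to pick the gold ending out of the shuffled candidates given only the context.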

How to use SWAG Dataset with PyTorch and TensorFlow in Python

Train a model on SWAG dataset with PyTorch in Python

Let’s use Deep Lake’s built-in one-line PyTorch dataloader to connect the data to the compute:

				
    dataloader = ds.pytorch(num_workers=0, batch_size=4, shuffle=False)
Train a model on SWAG dataset with TensorFlow in Python
				
    dataloader = ds.tensorflow()
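Whichever framework streams the data, SWAG models are typically evaluated the same way: the model assigns a score to each candidate ending, the highest-scoring ending is taken as the prediction, and accuracy is the fraction of questions answered correctly. A framework-agnostic sketch of that computation (the scores below are invented toy numbers, not outputs of a real model):

```python
def swag_accuracy(batch_scores, labels):
    """Compute multiple-choice accuracy.

    batch_scores: one list of per-ending scores per example.
    labels: index of the gold ending for each example.
    """
    correct = 0
    for scores, gold in zip(batch_scores, labels):
        pred = max(range(len(scores)), key=scores.__getitem__)  # argmax
        correct += int(pred == gold)
    return correct / len(labels)

# Toy scores for two 4-way questions: first prediction is correct, second is not
scores = [[0.1, 0.7, 0.05, 0.15], [0.4, 0.2, 0.3, 0.1]]
labels = [1, 2]
print(swag_accuracy(scores, labels))  # 0.5
```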

SWAG Dataset Creation

Source Data
Data Collection and Normalization Information

The dataset was created by taking pairs of consecutive video captions from the ActivityNet Captions dataset and the LSMDC challenge. The two sources differ somewhat in nature, which gives the dataset broader coverage. For every pair of captions, a constituency parser is used to split the second sentence into noun and verb phrases. Each question has a manually validated gold ending and three distractors.
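The adversarial filtering mentioned above can be sketched in a much-simplified form: candidate endings that a weak discriminator finds easy to reject are repeatedly filtered out, leaving only the hardest ones. The snippet below is a toy illustration of that loop, not the authors' implementation; the `easiness` scorer is a hypothetical stand-in for the trained discriminator used in the paper.

```python
def adversarial_filter(candidates, easiness, keep=3, rounds=5):
    """Keep the `keep` candidate endings a discriminator finds hardest.

    candidates: pool of machine-generated endings.
    easiness(ending): higher means easier for the discriminator to reject.
    Each round re-scores the pool and discards the easiest half,
    loosely mimicking SWAG's adversarial filtering.
    """
    pool = list(candidates)
    for _ in range(rounds):
        pool.sort(key=easiness)              # hardest (lowest easiness) first
        pool = pool[:max(keep, len(pool) // 2)]
        if len(pool) <= keep:
            break
    return pool[:keep]

# Toy scorer: pretend longer endings are easier to spot as fake
endings = ["a", "bbbb", "cc", "ddddd", "eee"]
hard = adversarial_filter(endings, easiness=len, keep=3)
print(hard)  # ['a', 'cc', 'eee']
```

In the real pipeline the discriminator is retrained between rounds and rejected endings are replaced with freshly generated ones, which is what makes the surviving distractors adversarial.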

Additional Information about SWAG Dataset

SWAG Dataset Description

  • Homepage: https://rowanzellers.com/swag/
  • Repository: https://github.com/rowanz/swagaf/tree/master/data
  • Paper: Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi: Swag: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
  • Point of Contact: https://rowanzellers.com/#contact
SWAG Dataset Curators

Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi

SWAG Dataset Licensing Information

MIT License

SWAG Dataset Citation Information
				
    @inproceedings{zellers2018swagaf,
        title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},
        author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},
        booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
        year={2018}
    }

SWAG Dataset FAQs

What is the SWAG dataset for Python?

The SWAG dataset (Situations With Adversarial Generations) is made up of 113,000 multiple-choice questions about grounded situations. It is a large-scale dataset for grounded commonsense inference, unifying natural language inference and physically grounded reasoning.

What is the SWAG dataset used for?

The SWAG dataset is used to train NLP models that can handle multiple-choice questions.

How to download the SWAG dataset in Python?

Load the SWAG dataset with one line of code using Activeloop Deep Lake, an open-source Python package. Check out the detailed instructions above on how to load the SWAG dataset training, testing, and validation subsets in Python.

How can I use SWAG dataset in PyTorch or TensorFlow?

You can train a model on the SWAG dataset with either PyTorch or TensorFlow in Python. Using the open-source Deep Lake package, you can stream the SWAG dataset while training a model in either framework with one line of code.


© 2022 All Rights Reserved by Snark AI, inc dba Activeloop