|
|
|
This page is a user guide that explains how to create a multimodal dataset and what is required for it.
|
|
|
|
|
|
|
|
The source code documentation can be found in the repository at "multimodalDatasetBuilder/docs".
|
|
|
|
|
|
|
|
[[_TOC_]]
|
|
|
|
|
|
|
|
## Introduction
|
|
|
|
|
|
|
|
In the multimodal dataset creation, the sentences of the documents are enriched with images that, in the best case, represent the context of these sentences. Such an image is called the "main image". A multimodal sentence with a main image also has at least one focus word. A focus word is defined as a word that is both complex and depictable/concrete. The complex word identifier (CWI) classifies whether a word is complex; it can be turned off, in which case every word is classified as complex. The depictability/concreteness of a word is mainly derived from the concreteness values file, whose values are calculated over the image dataset beforehand. For every focus word in a sentence, the main image of the sentence is saved in a version in which the focus word is highlighted.
|
|
|
|
|
|
|
|
At the end of the pipeline, every document with at least one multimodal sentence is saved in a MongoDB database. The database contains two collections. The first collection stores the documents; for each sentence of a document, it also stores an id (the SHA-512 hash of the sentence) and whether the sentence is multimodal. The second collection contains the multimodal sentences; for each sentence, it stores the path to the main image and a dictionary mapping the (multimodal) focus words to the paths of their highlighted images. This information can be accessed through the [API](./api).
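
For illustration, the entries in the two collections could look roughly like the following sketch. All field names and values here are hypothetical; the actual schema is the one exposed by the [API](./api).

```python
# Purely illustrative examples of one entry per collection.
# The field names and values below are hypothetical, not the actual schema.

document_entry = {
    "title": "Penguins",
    "sentences": [
        {
            "id": "3a81...",          # SHA-512 hash of the sentence text
            "text": "Penguins are flightless birds.",
            "multimodal": True,       # whether a main image was found for this sentence
        },
    ],
}

multimodal_sentence_entry = {
    "id": "3a81...",                  # same SHA-512 id as in the documents collection
    "main_image": "images/train2014/COCO_train2014_000000000009.jpg",
    "focus_words": {                  # focus word -> path to its highlighted image
        "penguins": "mcimages/3a81..._penguins.jpg",
        "birds": "mcimages/3a81..._birds.jpg",
    },
}
```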
|
|
|
|
|
|
|
|
## Required Packages
|
|
|
|
|
|
|
|
Most of the required packages can be installed with pip using the "requirements.txt" file from the repository:
|
|
|
|
|
|
|
|
```
pip install -r requirements.txt
```
|
|
|
|
|
|
|
|
CLIP can be installed with:
|
|
|
|
|
|
|
|
```
pip install git+https://github.com/openai/CLIP.git
```
|
|
|
|
|
|
|
|
More information about CLIP can be found [here](https://github.com/openai/CLIP).
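
A quick way to check that the installation works is to load a model and encode a short text. This is only a minimal sketch; "ViT-B/32" is just one of the available model names.

```python
import clip
import torch

device = "cpu"  # use "cuda" if a GPU is available
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode an example text prompt to verify the model runs.
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(["a photo of a dog"]).to(device))

print(text_features.shape)  # e.g. torch.Size([1, 512]) for ViT-B/32
```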
|
|
|
|
|
|
|
|
The POS tagger from nltk can be installed via the interactive mode of Python:
|
|
|
|
|
|
|
|
```
>>> import nltk
>>> nltk.download("averaged_perceptron_tagger")
```
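
To check that the tagger is available, a short token list can be tagged in the same interactive session. The output shown below is only an example and may differ slightly.

```
>>> nltk.pos_tag(["The", "cat", "sits", "on", "the", "mat"])
[('The', 'DT'), ('cat', 'NN'), ('sits', 'VBZ'), ('on', 'IN'), ('the', 'DT'), ('mat', 'NN')]
```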
|
|
|
|
|
|
|
|
## Required Files
|
|
|
|
|
|
|
|
### Text Documents
|
|
|
|
|
|
|
|
Documents are required to create a multimodal dataset. Every document (e.g., book, news article, ...) must be in a separate file with a ".txt" extension. The first line of a text file is interpreted as its title.
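
For example, a document file (here a hypothetical "penguins.txt") could look like this, with the title on the first line and the text of the document on the following lines:

```
Penguins
Penguins are flightless birds that live mostly in the Southern Hemisphere.
They spend a large part of their lives in the water.
```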
|
|
|
|
|
|
|
|
For example, over 50,000 Simple English Wikipedia articles can be found [here](https://github.com/LGDoor/Dump-of-Simple-English-Wiki). In the repository, there is a script "scripts/simpleWikiArticles.py" which separates the Simple Wikipedia articles into individual files. For that, the "corpus.txt" file must be placed next to the script and a directory "articles" must be created next to these files, as sketched below.
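
Assuming "corpus.txt" has already been extracted from the linked dump, the preparation could look like this (a sketch; the exact invocation may differ):

```
cd scripts
# place corpus.txt next to simpleWikiArticles.py, then create the output directory
mkdir articles
python simpleWikiArticles.py
```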
|
|
|
|
|
|
|
|
### Images
|
|
|
|
|
|
|
|
Images are used to enrich the sentences of the documents. They must be saved as "jpg" files.
|
|
|
|
|
|
|
|
The image dataset that is used must match the concreteness values and the cached image features: if the image dataset changes, the other two must also change.
|
|
|
|
|
|
|
|
For example, the [MS COCO train2014 dataset](http://images.cocodataset.org/zips/train2014.zip) can be used as the image dataset.
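
For example, the dataset can be downloaded and unpacked like this (a sketch; the archive is several gigabytes in size, and the resulting directory is later passed via `-i/--images`):

```
wget http://images.cocodataset.org/zips/train2014.zip
unzip train2014.zip
```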
|
|
|
|
|
|
|
|
### Concreteness Values
|
|
|
|
|
|
|
|
The concreteness values of words are used as a filtering step in the multimodal dataset creation. With these values, it is easier to estimate whether there is an image in the given dataset that represents a given word and can therefore be highlighted for that word.
|
|
|
|
|
|
|
|
A JSON file that contains a dictionary of words and their concreteness values is required for the multimodal dataset creation. In the repository, there is such a file, "data/concreteness/concretenessValuesMscoco.json", which contains the concreteness values for the "MS COCO train2014 dataset".
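
Since the file is a flat JSON dictionary mapping words to concreteness scores, it can be inspected and filtered like in the following minimal sketch (the threshold of 20 corresponds to the default of `-t/--concreteness_threshold` described below):

```python
import json

# Load the word -> concreteness score dictionary.
with open("data/concreteness/concretenessValuesMscoco.json") as f:
    concreteness = json.load(f)

# Keep only the words whose score reaches the threshold.
threshold = 20
depictable = {word: score for word, score in concreteness.items() if score >= threshold}

print(f"{len(depictable)} of {len(concreteness)} words pass the threshold")
```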
|
|
|
|
|
|
|
|
That concreteness file can be created with the [implementation](https://github.com/victorssilva/concreteness) of [Visual Concreteness](https://arxiv.org/abs/1804.06786).
|
|
|
|
|
|
|
|
## Setup Database
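
The pipeline writes its results to a MongoDB database (named "multimodalDB" by default, see `--db_name` below), so a MongoDB instance must be running and reachable before "main.py" is started. One way to start a local instance is via Docker (a sketch; a native MongoDB installation works just as well):

```
docker run -d --name multimodal-mongo -p 27017:27017 mongo
```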
|
|
|
|
|
|
|
|
## Parameter Explanation
|
|
|
|
|
|
|
|
The "main.py" file has a lot of parameters which are explained here.
|
|
|
|
|
|
|
|
The following parameters are required. They only have to be set explicitly if their default values do not match your setup.
|
|
|
|
|
|
|
|
* `-d, --documents` - the path to the directory where the documents are
|
|
|
|
* `-i, --images` - the path to the directory where the images are
|
|
|
|
* `-m, --mcimages` - the path to the directory where the highlighted images from miniCLIP will be saved
|
|
|
|
* `-c, --concreteness` - the path to the json file containing the concreteness values. The file must be changed if the image dataset changes
|
|
|
|
* `-f, --cached_image_features` - the path to the file containing the cached image features. If a file with that name exists, the cached features are loaded from it; otherwise, such a file is created with the specified name. The cached image features must be changed if the image dataset changes
|
|
|
|
|
|
|
|
The following parameters are optional.
|
|
|
|
* `--resized_main_images` - by default, the size of the main images is not changed. If a directory is specified, the main images of the sentences are resized and saved there. The resized images have the same size as the highlighted images, which is 224x224 pixels
|
|
|
|
* `-t, --concreteness_threshold` - defaults to 20. Filters out the words from the concreteness file which have a score lower than this threshold
|
|
|
|
* `--cwi` - defaults to "on". Specifies whether the complex word identifier (CWI) is used ("on" or "off")
|
|
|
|
* `--multimodal_sents_only` - defaults to "off". Specifies whether only the multimodal sentences of the documents are saved
|
|
|
|
* `--max_docs` - defaults to None. Specifies the maximum number of documents that will be saved
|
|
|
|
* `--max_sents` - defaults to None. Specifies the maximum number of sentences a document may contain in order to be processed further
|
|
|
|
* `--rnd_seed` - defaults to None. The random seed used to shuffle the documents before they are processed
|
|
|
|
* `--candidate_imgs` - defaults to 5. Specifies how many "good" images "similar" to a given sentence there must be for the sentence to be processed further
|
|
|
|
* `--sent_img_similarity` - defaults to 0.225. The threshold that defines the minimum required similarity value between a sentence and an image
|
|
|
|
* `--focus_word_img_similarity` - defaults to 0.25. The threshold that defines the minimum required similarity value between a focus word and a candidate image
|
|
|
|
* `--db_name` - defaults to "multimodalDB". The name of the database
|
|
|
|
|
|
|
|
## Example Run
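
A typical invocation could look like this (a sketch; all paths are placeholders and must point to the directories and files described above):

```
python main.py \
    -d data/documents \
    -i data/images/train2014 \
    -m data/highlightedImages \
    -c data/concreteness/concretenessValuesMscoco.json \
    -f data/cachedImageFeatures \
    --max_docs 1000 \
    --rnd_seed 42
```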
|
|
|