This page explains how a multimodal dataset can be created and what is required.
|
|
|
|
|
### Required Packages
|
|
|
|
|
|
Most of the required packages can be installed with `pip` using the `requirements.txt` file from the repository:
|
|
|
|
|
|
```
|
|
|
pip install -r requirements.txt
|
|
|
```
|
|
|
|
|
|
CLIP can be installed with:
|
|
|
|
|
|
```
|
|
|
pip install git+https://github.com/openai/CLIP.git
|
|
|
```
|
|
|
|
|
|
More information about CLIP can be found [here](https://github.com/openai/CLIP).
|
|
|
|
|
|
|
|
|
The POS tagger from `nltk` can be installed from the Python interactive shell:
|
|
|
|
|
|
```
|
|
|
>>> import nltk
|
|
|
>>> nltk.download("averaged_perceptron_tagger")
|
|
|
```
|
|
|
|
|
|
### Required Files
|
|
|
|
|
|
|
|
|
#### Text Documents
|
|
|
|
|
|
#### Images
|
|
|
|