In the multimodal dataset creation, sentences of documents are enriched with images which, in the best case, represent the context of these sentences. Such an image is called the "main image". A multimodal sentence with a main image also has at least one focus word. A focus word is defined as a word that is both complex and depictable/concrete. The complex word identifier classifies whether a word is complex; it can be turned off, in which case every word is classified as complex. The depictability/concreteness property of a word is mainly derived from the concreteness values file, whose values are calculated over the image dataset beforehand. For every focus word in a sentence, the main image of the sentence is saved in a version in which the focus word is highlighted.
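The focus word selection described above can be sketched as follows. This is a minimal illustration, not the project's actual API: the helper names, the complexity heuristic, the concreteness dictionary format and the threshold value are all assumptions.

```python
# Hypothetical sketch: a word becomes a focus word when it is classified as
# complex AND its precomputed concreteness value exceeds a threshold.
# All names and values here are illustrative assumptions.

CONCRETENESS_THRESHOLD = 0.5  # assumed cut-off for "depictable/concrete"

def is_complex(word: str, use_identifier: bool = True) -> bool:
    """Stand-in for the complex word identifier; with the identifier
    turned off, every word counts as complex."""
    if not use_identifier:
        return True
    return len(word) > 6  # placeholder heuristic, not the real classifier

def focus_words(sentence: str, concreteness: dict[str, float],
                use_identifier: bool = True) -> list[str]:
    """Return the words that are both complex and depictable/concrete."""
    return [
        word
        for word in sentence.split()
        if is_complex(word, use_identifier)
        and concreteness.get(word.lower(), 0.0) > CONCRETENESS_THRESHOLD
    ]

# With the complex word identifier turned off, every word is complex,
# so only the concreteness values decide.
values = {"boat": 0.9, "protest": 0.2, "scrapped": 0.6}
print(focus_words("The boat was damaged", values, use_identifier=False))  # → ['boat']
```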
Example sentence from the [simple Wikipedia article "Zetland (lifeboat)"](https://github.com/LGDoor/Dump-of-Simple-English-Wiki):
> The _boat_ was damaged in 1864, and was to be scrapped - however, following protest it was given to the town's people.
The main image of the sentence is shown on the left, and its highlighted version, according to the focus word _boat_, on the right. The image does not show the _Zetland_, but in fairness it was retrieved only for the aforementioned sentence. Considering this, the image can represent the context of the first part of the sentence to some extent: an _old_ (looking) _boat_.
At the end of the pipeline, every document with at least one multimodal sentence is saved in a MongoDB database. The database contains two collections/tables. The first collection stores the documents together with information such as the title of a document, its sentences, the SHA-512 value of each sentence, and a boolean indicating whether a sentence is multimodal and, if so, which focus words it has. The second collection contains information about the multimodal sentences: for each sentence, the path to its main image is saved along with a dictionary mapping the focus words to the paths of their highlighted images. This information can be accessed through the [API](./api).
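A sketch of what entries in the two collections might look like. Only the general layout (SHA-512 sentence ids, the multimodal flag, the focus word dictionary with image paths) follows from the description above; the field names and path layout are assumptions for illustration.

```python
import hashlib

def sentence_id(sentence: str) -> str:
    """SHA-512 hex digest used as the id of a sentence."""
    return hashlib.sha512(sentence.encode("utf-8")).hexdigest()

sentence = ("The boat was damaged in 1864, and was to be scrapped - "
            "however, following protest it was given to the town's people.")

# First collection: one entry per document (field names are assumed).
document_entry = {
    "title": "Zetland (lifeboat)",
    "sentences": [
        {
            "text": sentence,
            "id": sentence_id(sentence),
            "is_multimodal": True,
            "focus_words": ["boat"],
        }
    ],
}

# Second collection: one entry per multimodal sentence, linking the main
# image and the per-focus-word highlighted images (paths are assumed).
multimodal_entry = {
    "sentence_id": sentence_id(sentence),
    "main_image": "images/zetland/main.jpg",
    "highlighted_images": {"boat": "images/zetland/boat_highlighted.jpg"},
}

print(len(sentence_id(sentence)))  # → 128 (SHA-512 hex digest length)
```

The shared SHA-512 id is what would let the second collection's entries be joined back to the sentences stored in the first collection.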
## Prerequisites