you can check the documents/books and the images of the multimodal sentences respectively.
---
For the following examples, assume that all required parameters keep their default values.
Suppose we want the main images of the sentences to have the same size (224x224 pixels) as the highlighted images. Then we can resize the main images that are used and save them in "../data/resizedImages".
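The repository's exact resize command is not reproduced here, but the step can be sketched with Pillow; the function name and the idea of writing resized copies to a separate directory are assumptions for illustration:

```python
from pathlib import Path

from PIL import Image


def resize_images(src_dir, dst_dir, size=(224, 224)):
    """Resize every image in src_dir to `size` and save it under dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for img_path in Path(src_dir).iterdir():
        with Image.open(img_path) as img:
            img.resize(size).save(dst / img_path.name)


# Example (paths are placeholders):
# resize_images("../data/images", "../data/resizedImages")
```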
If we want to define a word from the concreteness values file as concrete/depictable when it has a value of at least 50, we can do that with
```
python main.py --concreteness_threshold 50
```
The image retrieval with CLIP can be tuned via the parameters `--candidate_imgs`, `--sent_img_similarity` and `--focus_word_img_similarity`. The choice of the first two parameters is based on [this paper](https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2022-wangetal-lrec.pdf). The last parameter builds on the second one. In particular, increasing the last two may yield more suitable images but fewer multimodal sentences.
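Conceptually, the three parameters act as one ranking limit and two similarity cut-offs. The following simplified sketch works on precomputed similarity scores rather than calling CLIP itself; all names are illustrative, not the actual implementation:

```python
def select_image(sim_by_image, n_candidates, sent_threshold, focus_threshold):
    """Pick an image for a sentence from precomputed CLIP similarities.

    sim_by_image maps image_id -> (sentence_similarity, focus_word_similarity).
    Rank by sentence similarity, keep the top n_candidates (--candidate_imgs),
    then require the sentence similarity (--sent_img_similarity) and the
    focus-word similarity (--focus_word_img_similarity) to clear their cut-offs.
    """
    ranked = sorted(sim_by_image.items(), key=lambda kv: kv[1][0], reverse=True)
    for image_id, (sent_sim, focus_sim) in ranked[:n_candidates]:
        if sent_sim >= sent_threshold and focus_sim >= focus_threshold:
            return image_id
    return None  # no suitable image: the sentence stays unimodal
```

Raising the cut-offs makes the `if` condition harder to satisfy, which is why stricter thresholds return `None` for more sentences: better-matching images, but fewer multimodal sentences overall.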