{"id":22,"date":"2020-11-23T17:33:48","date_gmt":"2020-11-23T16:33:48","guid":{"rendered":"http:\/\/www.meerqat.fr\/?page_id=22"},"modified":"2024-12-18T10:47:09","modified_gmt":"2024-12-18T09:47:09","slug":"publications","status":"publish","type":"page","link":"https:\/\/www.meerqat.fr\/?page_id=22","title":{"rendered":"publications"},"content":{"rendered":"\n<ul><li><strong>Entity-Aware Cross-Modal Pretraining for Knowledge-based Visual Question Answering<\/strong><br><em>Omar Adjali, Paul Grimal, Olivier Ferret, Sahar Ghannay, Herv\u00e9 Le Borgne<\/em><br>ECIR 2025<\/li><li><strong><a href=\"https:\/\/aclanthology.org\/2024.emnlp-main.922\/\">Multi-Level Information Retrieval Augmented Generation for Knowledge-based Visual Question Answering<\/a><\/strong><br><em>Omar Adjali, Paul Grimal, Olivier Ferret, Sahar Ghannay, Herv\u00e9 Le Borgne<\/em><br>EMNLP 2024<\/li><li><strong><a href=\"https:\/\/arxiv.org\/abs\/2310.08584\">Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video<\/a><\/strong><br><em>Shashanka Venkataramanan, Mamshad Nayeem Rizve, Jo\u00e3o Carreira, Yuki M. 
Asano, Yannis Avrithis<\/em><br>ICLR 2024 (oral, top 1.2%) [<a href=\"https:\/\/shashankvkt.github.io\/dora\">project page<\/a>] [<a href=\"https:\/\/huggingface.co\/datasets\/shawshankvkt\/Walking_Tours\">dataset<\/a>] <strong><a href=\"https:\/\/iclr.cc\/virtual\/2024\/oral\/19752\">Outstanding Paper Award (Honorable Mention)<\/a><\/strong><\/li><li><strong><a href=\"https:\/\/arxiv.org\/pdf\/2312.09670.pdf\">Probing Pretrained Language Models with Hierarchy Properties<\/a><\/strong><br><em>Jesus Lovon-Melgarejo, Jose Moreno, Romaric Besan\u00e7on, Olivier Ferret, Lynda Tamine<\/em><br>ECIR 2024 [<a href=\"https:\/\/github.com\/jeslev\/hierarchy_properties_plms\">code<\/a>] + TALN 2024<\/li><li><strong><a href=\"https:\/\/arxiv.org\/pdf\/2401.05736.pdf\">Cross-modal Retrieval for Knowledge-based Visual Question Answering<\/a><\/strong><br><em>Paul Lerner, Olivier Ferret, Camille Guinaudeau<\/em><br>ECIR 2024 [<a href=\"https:\/\/github.com\/PaulLerner\/ViQuAE\">code<\/a>]<\/li><li><strong><a href=\"https:\/\/arxiv.org\/abs\/2311.05538\">Embedding Space Interpolation Beyond Mini-Batch, Beyond Pairs and Beyond Examples<\/a><\/strong><br><em>Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis<\/em><br>NeurIPS 2023<\/li><li><strong><a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3591106.3592227\">Explicit Knowledge Injection for Knowledge-Aware Visual Question Answering<\/a><\/strong><br><em>Omar Adjali, Paul Grimal, Olivier Ferret, Sahar Ghannay, Herv\u00e9 Le Borgne<\/em><br>ICMR 2023 [<a href=\"https:\/\/github.com\/OA256864\/MEERQAT_Entity\">code<\/a>]<\/li><li><strong><a href=\"https:\/\/hal.science\/hal-04350288\/\">MEERQAT-IRIT at SemEval-2023 Task 2: Leveraging Contextualized Tag Descriptors for Multilingual Named Entity Recognition<\/a><\/strong><br><em>Jes\u00fas Lov\u00f3n-Melgarejo, Jos\u00e9 G Moreno, Romaric Besan\u00e7on, Olivier Ferret, Lynda Lechani<\/em><br>SemEval-2023<\/li><li><strong><a href=\"https:\/\/arxiv.org\/abs\/2301.04366\">Multimodal Inverse 
Cloze Task for Knowledge-based Visual Question Answering<\/a><\/strong><br><em>Paul Lerner, Olivier Ferret, and Camille Guinaudeau<\/em><br>ECIR 2023 [<a href=\"https:\/\/github.com\/PaulLerner\/ViQuAE\">code<\/a>] + TALN 2023<\/li><li><strong><a href=\"https:\/\/aclanthology.org\/2022.coling-1.125\/\">Can We Guide a Multi-Hop Reasoning Language Model to Incrementally Learn at each Single-Hop?<\/a><\/strong><br><em>Jesus Lovon, Jose G. Moreno, Romaric Besan\u00e7on, Olivier Ferret and Lynda Tamine Lechani<\/em><br>COLING 2022 [<a href=\"https:\/\/github.com\/jeslev\/incremental_reasoning\">code<\/a>]<\/li><li><a href=\"https:\/\/hal.universite-paris-saclay.fr\/hal-03650618\/document\"><strong>ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities<\/strong><\/a><br><em>Paul Lerner, Olivier Ferret, Camille Guinaudeau, Herv\u00e9 Le Borgne, Romaric Besan\u00e7on, Jos\u00e9 G. Moreno, Jes\u00fas Lov\u00f3n Melgarejo<\/em><br>SIGIR 2022 [<a href=\"https:\/\/github.com\/PaulLerner\/ViQuAE\">code<\/a>]<\/li><li><a href=\"https:\/\/arxiv.org\/abs\/2103.15375\" data-type=\"URL\" data-id=\"https:\/\/arxiv.org\/abs\/2103.15375\"><strong>AlignMixup: Improving representations by interpolating aligned features<\/strong><\/a><br><em>Shashanka Venkataramanan, Yannis Avrithis, Ewa Kijak, Laurent Amsaleg<\/em><br>CVPR 2022 [<a href=\"https:\/\/github.com\/shashankvkt\/AlignMixup_CVPR22\">code<\/a>]<\/li><li><a href=\"https:\/\/arxiv.org\/pdf\/2106.04990.pdf\"><strong>It Takes Two to Tango: Mixup for Deep Metric Learning<\/strong><\/a><br><em>Shashanka Venkataramanan, Bill Psomas, Ewa Kijak, Laurent Amsaleg, Konstantinos Karantzalos, Yannis Avrithis<\/em><br>ICLR 2022 [<a href=\"https:\/\/github.com\/billpsomas\/Metrix_ICLR22.git\">code<\/a>]<\/li><li><strong>Un jeu de donn\u00e9es pour r\u00e9pondre \u00e0 des questions visuelles \u00e0 propos d&rsquo;entit\u00e9s nomm\u00e9es<\/strong><br><em>Paul Lerner, Salem Messoud<\/em>, 
<em>Olivier Ferret, Camille Guinaudeau, Herv\u00e9 Le Borgne, Romaric Besan\u00e7on, Jose Moreno and Jes\u00fas Lov\u00f3n-Melgarejo<\/em><br>Traitement Automatique des Langues 2023 &#8211; num\u00e9ro sp\u00e9cial d\u00e9di\u00e9 au TAL inter-\/multimodal (num\u00e9ro 63-2)<\/li><li><strong>Reconnaissance d&rsquo;Entit\u00e9s Nomm\u00e9es fond\u00e9e sur des Mod\u00e8les de Langue Enrichis avec des D\u00e9finitions de Types d&rsquo;Entit\u00e9s<\/strong><br><em>Jesus Lovon-Melgarejo, Jose Moreno, Romaric Besan\u00e7on, Olivier Ferret, Lynda Tamine<\/em><br>TALN 2023<\/li><li><strong><a href=\"https:\/\/hal.science\/hal-04131549\/\">Recherche cross-modale pour r\u00e9pondre \u00e0 des questions visuelles<\/a><\/strong><br><em>Paul Lerner, Olivier Ferret, Camille Guinaudeau<\/em><br>CORIA-TALN 2023<\/li><li><a href=\"https:\/\/aclanthology.org\/2022.jeptalnrecital-taln.43.pdf\"><strong>Un jeu de donn\u00e9es pour r\u00e9pondre \u00e0 des questions visuelles \u00e0 propos d\u2019entit\u00e9s nomm\u00e9es en utilisant des bases de connaissances<\/strong><\/a><br><em>Paul Lerner, Olivier Ferret, Camille Guinaudeau, Herv\u00e9 Le Borgne, Romaric Besan\u00e7on, Jose Moreno and Jes\u00fas Lov\u00f3n-Melgarejo<\/em><br>TALN 2022 [<a href=\"https:\/\/github.com\/PaulLerner\/ViQuAE\">code<\/a>]<\/li><\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Entity-Aware Cross-Modal Pretraining for Knowledge-based Visual Question AnsweringOmar Adjali, Paul Grimal, Olivier Ferret, Sahar Ghannay, Herv\u00e9 Le BorgneECIR 2025 Multi-Level Information Retrieval Augmented Generation for Knowledge-based Visual Question AnsweringOmar Adjali, Paul Grimal, Olivier Ferret, Sahar Ghannay, Herv\u00e9 Le BorgneEMNLP 2024 Is ImageNet worth 1 video? 
Learning strong image encoders from 1 long unlabelled videoShashanka Venkataramanan, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"hide_page_title":""},"_links":{"self":[{"href":"https:\/\/www.meerqat.fr\/index.php?rest_route=\/wp\/v2\/pages\/22"}],"collection":[{"href":"https:\/\/www.meerqat.fr\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.meerqat.fr\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.meerqat.fr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.meerqat.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=22"}],"version-history":[{"count":34,"href":"https:\/\/www.meerqat.fr\/index.php?rest_route=\/wp\/v2\/pages\/22\/revisions"}],"predecessor-version":[{"id":235,"href":"https:\/\/www.meerqat.fr\/index.php?rest_route=\/wp\/v2\/pages\/22\/revisions\/235"}],"wp:attachment":[{"href":"https:\/\/www.meerqat.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=22"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}