Learned sparse retrieval
Latest revision as of 21:59, 23 October 2024
Learned sparse retrieval or sparse neural search is an approach to information retrieval that represents queries and documents as sparse vectors.[1] It combines techniques from lexical bag-of-words and vector embedding methods, and is claimed to perform better than either alone. The best-known sparse neural search systems are SPLADE[2] and its successor SPLADE v2.[3] Others include DeepCT,[4] uniCOIL,[5] EPIC,[6] DeepImpact,[7] TILDE and TILDEv2,[8] Sparta,[9] SPLADE-max, and DistilSPLADE-max.[3]
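The core idea can be sketched with a toy example. In a SPLADE-style model, a transformer maps each query or document to non-negative weights over the vocabulary (commonly via a log(1 + ReLU(·)) activation, which encourages sparsity), and relevance is the dot product of the two sparse vectors. The vocabulary, weights, and function names below are invented for illustration; a real system derives the weights from a trained encoder.

```python
import math

# Toy vocabulary; real systems use a transformer vocabulary of ~30k terms.
VOCAB = ["neural", "search", "sparse", "retrieval", "ranking", "query"]

def encode(term_logits):
    """SPLADE-style activation: log(1 + relu(w)) keeps the vector sparse
    and non-negative. The input logits here are made up for illustration."""
    return {t: math.log1p(max(0.0, w)) for t, w in term_logits.items() if w > 0}

def score(query_vec, doc_vec):
    """Relevance is the dot product over shared vocabulary terms."""
    return sum(w * doc_vec.get(t, 0.0) for t, w in query_vec.items())

# Hypothetical encoder outputs. Note that "ranking" gets weight in the
# document vector even if the word is absent from the raw text: learned
# term expansion is what distinguishes these models from plain BM25.
query = encode({"neural": 1.2, "search": 0.8})
doc = encode({"neural": 0.9, "search": 1.1, "ranking": 0.4})

print(round(score(query, doc), 3))
```

Because the vectors live in vocabulary space, each dimension is an interpretable term weight rather than an opaque dense coordinate.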
Sparse retrieval approaches have also been extended to the vision-language domain, where they are applied to multimodal data such as combined text and images.[10] This extension enables retrieval of relevant content across modalities, for example finding images from text queries or vice versa.
Some implementations of SPLADE achieve latency comparable to Okapi BM25 lexical search while matching the result quality of state-of-the-art neural rankers on in-domain data.[11]
The official SPLADE model weights and training code are released under a Creative Commons NonCommercial license.[12] However, independent implementations of SPLADE++ (a variant of the SPLADE models) have been released under permissive licenses.
SPRINT is a toolkit for evaluating neural sparse retrieval systems.[13]
Notes
1. Nguyen, Thong; MacAvaney, Sean; Yates, Andrew (2023). "A Unified Framework for Learned Sparse Retrieval". In Kamps, Jaap; Goeuriot, Lorraine; Crestani, Fabio; Maistro, Maria; Joho, Hideo; Davis, Brian; Gurrin, Cathal; Kruschwitz, Udo; Caputo, Annalina (eds.). Advances in Information Retrieval. Lecture Notes in Computer Science. Vol. 13982. Cham: Springer Nature Switzerland. pp. 101–116. arXiv:2303.13416. doi:10.1007/978-3-031-28241-6_7. ISBN 978-3-031-28241-6. S2CID 257585074.
2. Formal, Thibault; Piwowarski, Benjamin; Clinchant, Stéphane (2021-07-11). "SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking". Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '21. New York, NY, USA: Association for Computing Machinery. pp. 2288–2292. arXiv:2107.05720. doi:10.1145/3404835.3463098. ISBN 978-1-4503-8037-9. S2CID 235792467.
3. Formal, Thibault; Piwowarski, Benjamin; Lassance, Carlos; Clinchant, Stéphane (21 September 2021). "SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval". arXiv:2109.10086v1 [cs.IR].
4. Dai, Zhuyun; Callan, Jamie (2020-04-20). "Context-Aware Document Term Weighting for Ad-Hoc Search". Proceedings of the Web Conference 2020. New York, NY, USA: ACM. pp. 1897–1907. doi:10.1145/3366423.3380258. ISBN 9781450370233. S2CID 218521094.
5. Lin, Jimmy; Ma, Xueguang (28 June 2021). "A few brief notes on DeepImpact, COIL, and a conceptual framework for information retrieval techniques". arXiv:2106.14807 [cs.IR].
6. MacAvaney, Sean; Nardini, Franco Maria; Perego, Raffaele; Tonellotto, Nicola; Goharian, Nazli; Frieder, Ophir (2020-07-25). "Expansion via Prediction of Importance with Contextualization". Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '20. New York, NY, USA: Association for Computing Machinery. pp. 1573–1576. arXiv:2004.14245. doi:10.1145/3397271.3401262. ISBN 978-1-4503-8016-4. S2CID 216641912.
7. Mallia, Antonio; Khattab, Omar; Suel, Torsten; Tonellotto, Nicola (2021-07-11). "Learning Passage Impacts for Inverted Indexes". Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '21. New York, NY, USA: Association for Computing Machinery. pp. 1723–1727. arXiv:2104.12016. doi:10.1145/3404835.3463030. ISBN 978-1-4503-8037-9. S2CID 233394068.
8. Zhuang, Shengyao; Zuccon, Guido (13 September 2021). "Fast Passage Re-ranking with Contextualized Exact Term Matching and Efficient Passage Expansion". arXiv:2108.08513 [cs.IR].
9. Zhao, Tiancheng; Lu, Xiaopeng; Lee, Kyusong (28 September 2020). "SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer Matching Retrieval". arXiv:2009.13013 [cs.CL].
10. Nguyen, Thong; Hendriksen, Mariya; Yates, Andrew; de Rijke, Maarten (2024). "Multimodal Learned Sparse Retrieval with Probabilistic Expansion Control". European Conference on Information Retrieval. Cham: Springer Nature Switzerland. pp. 448–464.
11. Lassance, Carlos; Clinchant, Stéphane (2022-07-07). "An Efficiency Study for SPLADE Models". Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '22. New York, NY, USA: Association for Computing Machinery. pp. 2220–2226. arXiv:2207.03834. doi:10.1145/3477495.3531833. ISBN 978-1-4503-8732-3. S2CID 250340284.
12. "splade/LICENSE at main · naver/splade". GitHub. Retrieved 2023-08-25.
13. Thakur, Nandan; Wang, Kexin; Gurevych, Iryna; Lin, Jimmy (2023-07-18). "SPRINT: A Unified Toolkit for Evaluating and Demystifying Zero-shot Neural Sparse Retrieval". Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '23. New York, NY, USA: Association for Computing Machinery. pp. 2964–2974. arXiv:2307.10488. doi:10.1145/3539618.3591902. ISBN 978-1-4503-9408-6. S2CID 259949923.