Instructions for using facebook/dpr-question_encoder-multiset-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use facebook/dpr-question_encoder-multiset-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="facebook/dpr-question_encoder-multiset-base")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
model = AutoModel.from_pretrained("facebook/dpr-question_encoder-multiset-base")
```

- Notebooks
- Google Colab
- Kaggle
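The question encoder maps a query to a dense vector, and DPR retrieval then ranks candidate passage embeddings by dot-product similarity against it. A minimal sketch of that scoring step in plain Python (the vectors below are made-up stand-ins for real 768-dimensional encoder outputs, not actual model output):

```python
# Rank candidate passage embeddings against a question embedding by
# dot-product similarity, the scoring DPR uses at retrieval time.
# Vectors here are tiny illustrative placeholders, not real encoder output.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rank_passages(question_vec, passage_vecs):
    # Higher dot product means more relevant under DPR's scoring.
    scores = [(i, dot(question_vec, v)) for i, v in enumerate(passage_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

question = [0.2, 0.9, 0.1]
passages = [
    [0.1, 0.1, 0.9],  # off-topic passage
    [0.3, 0.8, 0.0],  # close match
]
ranking = rank_passages(question, passages)
print(ranking[0][0])  # index of the best-scoring passage
```

In practice the passage vectors come from the companion context encoder (facebook/dpr-ctx_encoder-multiset-base) and the search is done with an approximate nearest-neighbor index rather than a Python loop.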
- Xet hash: 564f88ed861330b44381b80e8606e2977c0757beaa24659ec191a6f3d90837da
- Size of remote file: 438 MB
- SHA256: 62f6c3a9b8076b88647c9d75c359c1182bd75d3aab7fc7299dad59690919e06f
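To verify a downloaded weights file against the SHA256 listed above, a streaming checksum avoids loading the whole 438 MB file into memory. A minimal sketch (the demo file and its digest below are illustrative; for the real check, compare the result against the model file's SHA256):

```python
import hashlib
import os
import tempfile

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MB chunks so large
    downloads never need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a tiny temporary file; for the real weights you would pass
# the downloaded file's path and compare against the SHA256 above.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
digest = sha256_of_file(tmp.name)
os.unlink(tmp.name)
print(digest)
```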
Xet efficiently stores large files inside Git by splitting them into unique chunks, accelerating uploads and downloads.