Instructions for using g8a9/roberta-tiny-10M with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use g8a9/roberta-tiny-10M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="g8a9/roberta-tiny-10M")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("g8a9/roberta-tiny-10M")
model = AutoModelForMaskedLM.from_pretrained("g8a9/roberta-tiny-10M")
```

- Notebooks
  - Google Colab
  - Kaggle
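A minimal end-to-end sketch of the pipeline route above: RoBERTa tokenizers use `<mask>` as the mask token, and the example sentence and `top_k` value are illustrative, not from the model card. The checkpoint is downloaded on first use.

```python
from transformers import pipeline

# Fill-mask pipeline for g8a9/roberta-tiny-10M (downloads the checkpoint on first use).
pipe = pipeline("fill-mask", model="g8a9/roberta-tiny-10M")

# RoBERTa-style models mark the blank with "<mask>"; the sentence is an arbitrary example.
preds = pipe("The capital of France is <mask>.", top_k=3)
for p in preds:
    # Each prediction carries the filled-in token string and its probability.
    print(p["token_str"], round(p["score"], 4))
```

Since this is a 10M-parameter model trained on a small corpus, don't expect the top predictions to match those of full-size RoBERTa.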
| timestamp | experiment_id | project_name | duration (s) | emissions (kg CO2eq) | energy_consumed (kWh) | country_name | country_iso_code | region | on_cloud | cloud_provider | cloud_region |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2023-02-14T11:57:37 | 0a175e0d-b591-46d3-8518-9842f68d0ae0 | codecarbon | 42247.267627716064 | 2.4456974802960616 | 3.7020444097572627 | Italy | ITA | lombardy | N | | |
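The row above is a standard CodeCarbon emissions log. A minimal sketch of parsing it with Python's `csv` module (the data is copied verbatim from the table; the unit conversions assume CodeCarbon's defaults of seconds, kg CO2eq, and kWh):

```python
import csv
import io

# CodeCarbon log (header + one row), copied from the emissions table above.
log = """timestamp,experiment_id,project_name,duration,emissions,energy_consumed,country_name,country_iso_code,region,on_cloud,cloud_provider,cloud_region
2023-02-14T11:57:37,0a175e0d-b591-46d3-8518-9842f68d0ae0,codecarbon,42247.267627716064,2.4456974802960616,3.7020444097572627,Italy,ITA,lombardy,N,,"""

row = next(csv.DictReader(io.StringIO(log)))

duration_h = float(row["duration"]) / 3600    # seconds -> hours
emissions_kg = float(row["emissions"])        # kg CO2eq (CodeCarbon default unit)
energy_kwh = float(row["energy_consumed"])    # kWh (CodeCarbon default unit)

print(f"{duration_h:.1f} h, {emissions_kg:.2f} kg CO2eq, {energy_kwh:.2f} kWh")
# → 11.7 h, 2.45 kg CO2eq, 3.70 kWh
```

In other words, the logged run lasted roughly 11.7 hours and consumed about 3.7 kWh, emitting an estimated 2.45 kg CO2eq.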