Then I submitted to the Open LLM Leaderboard and waited. And waited. Back in the day, the Open LLM Leaderboard was flooded with dozens of fine-tunes of merges of fine-tunes every day (it was the Wild West), and the waiting list was long. But after a month or so, the results arrived:
First, we need a dataset that will make it obvious whether the model has actually trained. Let's create one that will make our model talk like Yoda. We can take a bunch of questions from TriviaQA and generate responses by prompting an LLM to answer each question while pretending it's Yoda. Running the script, I get a few thousand prompts and responses that look something like this:
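A minimal sketch of that generation step might look like the following. All names here are illustrative (the post doesn't show its script), and the actual LLM call is stubbed out with a placeholder so the shape of a `(prompt, response)` record is visible:

```python
# Hypothetical sketch of the Yoda-dataset generation step.
# In the real script, call_llm would hit a chat-completions API and the
# questions would come from TriviaQA; both are stubbed here.

SYSTEM_PROMPT = (
    "You are Yoda from Star Wars. Answer the user's question correctly, "
    "but phrase every answer the way Yoda speaks."
)

def build_messages(question: str) -> list[dict]:
    """Chat-style messages for a single TriviaQA question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

def call_llm(messages: list[dict]) -> str:
    """Placeholder for a real LLM API call."""
    return "Paris, the capital of France is, hmm."

def make_example(question: str) -> dict:
    """One (prompt, response) record for the fine-tuning dataset."""
    return {
        "prompt": question,
        "response": call_llm(build_messages(question)),
    }

record = make_example("What is the capital of France?")
print(record)
```

Looping this over a few thousand TriviaQA questions yields a dataset in the simple prompt/response format most fine-tuning stacks accept.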