Templates
https://github.com/oobabooga/text-generation-webui/tree/main/instruction-templates
https://www.reddit.com/r/LocalLLaMA/comments/1ca01fa/why_does_almost_every_model_introduce_a_new/
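The thread above is about why nearly every model ships its own instruction template. As a minimal illustration of what a "template" actually is, here is the same two-message conversation rendered under two well-known formats, ChatML and Llama 3; the special tokens are copied from those formats, but verify against each model's own tokenizer config before relying on them:

```python
# Two instruction-template formats; exact special tokens should be
# double-checked against each model's tokenizer config.
CHATML = "<|im_start|>{role}\n{content}<|im_end|>\n"
LLAMA3 = "<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"

def render(messages, template):
    """Concatenate one formatted block per message."""
    return "".join(template.format(role=m["role"], content=m["content"])
                   for m in messages)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
]

print(render(messages, CHATML))
print(render(messages, LLAMA3))
```

The point of the Reddit complaint is visible here: the conversation content is identical, only the wrapper tokens differ, yet a mismatch between the wrapper and what the model was trained on degrades output quality.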
Browser Tokenizer
https://www.reddit.com/r/LocalLLaMA/comments/1c9wicb/llama_3_tokenizer_runs_in_your_browser/
https://huggingface.co/spaces/Xenova/the-tokenizer-playground
https://github.com/belladoreai/llama3-tokenizer-js
https://belladoreai.github.io/llama3-tokenizer-js/example-demo/build/
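The links above run a real Llama 3 tokenizer entirely client-side. As a rough sketch of the underlying idea only (byte-pair encoding with a ranked merge table), not the actual Llama 3 merge table or its byte-level handling, here is a toy BPE encoder with a made-up merge table:

```python
def bpe_encode(text, merges):
    """Toy byte-pair encoding: repeatedly merge the adjacent pair with the
    best (lowest) rank in the merge table until no pair is mergeable.
    The merge table below is invented for illustration."""
    tokens = list(text)
    while True:
        best, best_rank = None, None
        for i in range(len(tokens) - 1):
            rank = merges.get((tokens[i], tokens[i + 1]))
            if rank is not None and (best_rank is None or rank < best_rank):
                best, best_rank = i, rank
        if best is None:
            return tokens
        tokens[best:best + 2] = [tokens[best] + tokens[best + 1]]

merges = {("l", "l"): 0, ("he", "ll"): 1, ("h", "e"): 2, ("hell", "o"): 3}
print(bpe_encode("hello", merges))  # → ['hello']
print(bpe_encode("hi", merges))     # → ['h', 'i'] (no applicable merges)
```

Because the whole algorithm is a merge table plus a loop like this, shipping it to the browser as JavaScript (as llama3-tokenizer-js does) is straightforward.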
Samplers
https://www.reddit.com/r/LocalLLaMA/comments/1c9ydld/we_should_explore_samplers_again/
https://artefact2.github.io/llm-sampling/index.xhtml
https://github.com/oobabooga/text-generation-webui/pull/5677
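The sampling playground linked above visualizes how different samplers truncate a token distribution. A self-contained sketch of two common truncation rules, top-p (nucleus) and min-p; this is written from the general definitions of those samplers, not taken from any of the linked implementations:

```python
import math

def softmax(logits):
    """Convert logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p=0.9):
    """Keep the smallest set of top tokens whose cumulative probability >= p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    return kept

def min_p_filter(probs, min_p=0.1):
    """Keep tokens whose probability is at least min_p times the top token's.
    The cutoff scales with model confidence, unlike a fixed top-p budget."""
    cutoff = min_p * max(probs)
    return [i for i, pr in enumerate(probs) if pr >= cutoff]

probs = softmax([4.0, 2.0, 1.0, 0.5])
print(top_p_filter(probs, p=0.9))      # → [0, 1]
print(min_p_filter(probs, min_p=0.04)) # → [0, 1, 2]
```

After filtering, a real sampler would renormalize over the kept indices and draw from them; the interesting differences between samplers are almost entirely in the filtering step shown here.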
autojudge
Automated Benchmark
https://oobabooga.github.io/benchmark.html
https://www.reddit.com/r/LocalLLaMA/comments/1c9s4mf/wizardlm28x22b_seems_to_be_the_strongest_open_llm/
https://www.reddit.com/r/LocalLLaMA/comments/1c8xxb0/i_made_my_own_model_benchmark/
https://automatic1111.github.io/llm-political-compass/
https://github.com/oobabooga/text-generation-webui/pull/5879
Combine with sampler sweep
Reliability
Log answers
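The notes above can be sketched as a minimal automated benchmark harness: score multiple-choice questions, log every answer so runs can be audited for reliability, and rerun the same question set across sampler settings. `ask_model` is a hypothetical stub; a real harness would call an actual inference backend with the given settings.

```python
import json

def ask_model(question, choices, settings):
    """Hypothetical stand-in for a real model call; always picks choice 0.
    A real harness would query an inference backend using `settings`."""
    return 0

QUESTIONS = [
    {"q": "2 + 2 = ?", "choices": ["4", "5"], "answer": 0},
    {"q": "Capital of France?", "choices": ["Lyon", "Paris"], "answer": 1},
]

def run_benchmark(settings):
    """Return (accuracy, per-question log) for one sampler configuration."""
    log, correct = [], 0
    for item in QUESTIONS:
        got = ask_model(item["q"], item["choices"], settings)
        correct += int(got == item["answer"])
        log.append({"q": item["q"], "got": got,
                    "expected": item["answer"], "settings": settings})
    return correct / len(QUESTIONS), log

# Sampler sweep: same questions, different sampler settings.
for temp in (0.0, 0.7, 1.5):
    score, log = run_benchmark({"temperature": temp})
    print(temp, score)

print(json.dumps(log[-1]))  # logged answers make each run auditable later
```

Repeating each sweep point several times and comparing the logged answers is one simple way to measure the reliability of both the model and the benchmark itself.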