đź“– llm-tracker


2025 Getting Started Pack

Dec 18, 2024

• https://darioamodei.com/machines-of-loving-grace
• https://www.youtube.com/watch?v=1yvBqasHLZs
• https://unlocked.microsoft.com/ai-anthology/
• https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/
• https://ai.meta.com/blog/meta-fair-updates-agents-robustness-safety-architecture/

    Resources

• https://www.oneusefulthing.org/
• https://www.latent.space/
• https://www.interconnects.ai/
• https://every.to/podcast
• https://www.cognitiverevolution.ai/

    Training Models

    Overview of modern post-training

• https://allenai.org/blog/tulu-3-technical
• https://allenai.org/blog/olmo2

    Course

• https://github.com/huggingface/smol-course
• https://hamel.dev/blog/posts/course/

    Technical

• https://pytorch.org/torchtune/main/
• https://axolotl-ai-cloud.github.io/axolotl/
• https://unsloth.ai/blog


