📖 llm-tracker
    RTX 3090 vs 7900 XTX Comparison

    Jan 03, 2025

    Architecture

    https://chipsandcheese.com/p/amds-cdna-3-compute-architecture

    https://chipsandcheese.com/p/amd-rdna-3-5s-llvm-changes

    https://chipsandcheese.com/p/microbenchmarking-amds-rdna-3-graphics-architecture

    https://espadrine.github.io/blog/posts/recomputing-gpu-performance.html

    https://gpuopen.com/learn/wmma_on_rdna3/

    Prior

    https://www.reddit.com/r/LocalLLaMA/comments/191srof/amd_radeon_7900_xtxtx_inference_performance/

    https://www.reddit.com/r/LocalLLaMA/comments/1atvxu2/current_state_of_training_on_amd_radeon_7900_xtx/

    https://www.reddit.com/r/LocalLLaMA/comments/1ghvwsj/llamacpp_compute_and_memory_bandwidth_efficiency/

    https://www.reddit.com/r/LocalLLaMA/comments/1fssvbm/september_2024_update_amd_gpu_mostly_rdna3_aillm/

    https://cprimozic.net/notes/posts/machine-learning-benchmarks-on-the-7900-xtx/

    Convos

    GPU Specs Table: https://claude.ai/chat/ec80af9d-40e8-431d-b0b6-ebd2be11fde4

    Nvidia Architecture, detailed Memory Levels https://claude.ai/chat/2c271405-2df3-46ff-af2e-1d9b9ef85184

    AMD comparison, wave32 vs wave64 https://claude.ai/chat/a73ec329-88d0-48b8-b116-a70b26e6302c
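For the comparison itself, a minimal sketch of the usual first-order estimate: single-batch token generation is memory-bandwidth bound, so a throughput ceiling is bandwidth divided by bytes read per token. The bandwidth figures below are assumed from public datasheets (not measured), and the 4 GB model size is an illustrative placeholder for a quantized ~7B model.

```python
# Rough upper bound on tokens/s for memory-bandwidth-bound decoding:
# each generated token streams all model weights from VRAM once.
# Bandwidth numbers are assumed from public datasheets, not measured.
SPECS_GBPS = {
    "RTX 3090": 936.2,  # GDDR6X, 384-bit bus
    "7900 XTX": 960.0,  # GDDR6, 384-bit bus
}

def max_tokens_per_s(gpu: str, model_gb: float) -> float:
    """Theoretical ceiling: memory bandwidth / bytes read per token."""
    return SPECS_GBPS[gpu] / model_gb

# Example: a ~4 GB quantized model (hypothetical size)
for gpu, bw in SPECS_GBPS.items():
    print(f"{gpu}: {max_tokens_per_s(gpu, 4.0):.0f} tok/s ceiling")
```

Measured llama.cpp numbers (see the Prior links above) land well below this ceiling; the gap is a useful measure of each backend's memory-bandwidth efficiency.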


