Is it worth getting a 5090 for my needs?

Reddit r/LocalLLaMA News

Summary

User asks whether purchasing an RTX 5090 and high-end PC for ~$5500 is worth it for LLM experimentation and learning, compared to cloud compute alternatives.

I'm considering biting the bullet and getting a PC with the following specs:

* RTX 5090
* AMD 9950X3D
* X870 motherboard
* 32 GB RAM (16 GB × 2) CL32

EDIT2: The price for this falls in the arena of 5500-6000 USD where I live. It obviously costs a bomb, but I'm hoping it will become cost effective over time (10 years, probably), as I intend to use it to learn as much as I can about LLMs and to ideate and work on use cases for them. I also feel the future is going to be LLMs in some form or other, and it's better late than never to try to keep up.

My questions:

1. How does it perform with dense models like Qwen3.6-27B and Gemma4-31B? These are most likely the models I'll be trying to build applications around.
2. The alternative is using ad hoc compute resources on [vast.ai](http://vast.ai), or maybe spending more for Google Cloud or something. But that also gets expensive fast. I can keep costs down by keeping it ad hoc, but that increases friction.
3. My only application is LLMs. I don't play games or run anything else that needs a GPU like this one.

Edit: forgot to mention, my current system is a Lenovo E14 laptop with a Radeon 780M iGPU and 32 GB RAM.
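The local-vs-cloud question in point 2 comes down to simple break-even arithmetic. The sketch below uses the post's ~$5500 build price but otherwise hypothetical numbers: the assumed cloud hourly rate, power draw, and electricity price are placeholders, not quotes, and the real figures vary a lot by region and provider.

```python
# Back-of-envelope break-even estimate for local GPU vs. cloud rental.
# All rates below are assumed placeholder values, not real quotes.
LOCAL_COST_USD = 5500.0         # upfront build price (lower bound from the post)
CLOUD_RATE_USD_PER_HR = 1.00    # assumed hourly rate for a comparable rented GPU
POWER_DRAW_KW = 0.6             # assumed full-system draw under load
ELECTRICITY_USD_PER_KWH = 0.30  # assumed local electricity price

# Running locally isn't free either: subtract the electricity cost per hour.
local_hourly = POWER_DRAW_KW * ELECTRICITY_USD_PER_KWH
breakeven_hours = LOCAL_COST_USD / (CLOUD_RATE_USD_PER_HR - local_hourly)

print(f"~{breakeven_hours:,.0f} GPU-hours to break even "
      f"(~{breakeven_hours / (365 * 2):,.1f} years at 2 h/day)")
```

With these assumed numbers the break-even point lands around 6,700 GPU-hours, i.e. roughly the 10-year horizon the post mentions at a couple of hours of use per day; a cheaper spot rate or lighter usage pushes it further out, while heavy daily use pulls it in.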

Similar Articles

Which computer should I buy: Mac or custom-built 5090? [D]

Reddit r/MachineLearning

A user seeks advice on whether to purchase a Mac (M5) or a custom-built RTX 5090 PC for machine learning projects involving fine-tuning, custom pipelines, and image/video-heavy workflows, and is curious about Apple's MLX framework as an alternative to NVIDIA CUDA.

RTX Pro 4500 Blackwell - Qwen 3.6 27B?

Reddit r/LocalLLaMA

A developer shares local inference benchmarks and systemd configurations for running the Qwen3.6-27B model on an NVIDIA RTX Pro 4500 Blackwell GPU using llama.cpp. The post requests optimization tips for throughput and explores potential use cases for larger models.