About to build a 6× Arc B70 LLM rig, want to talk to someone experienced first

Reddit r/LocalLLaMA News

Summary

A user seeks experienced guidance on building a 6× Intel Arc B70 LLM inference rig, particularly for Llama models and vLLM deployment, offering compensation for consultation.

Hello, I’m preparing to build a rig with six Intel Arc B70s, but before I move forward I’d like to speak with someone who has experience building similar systems (no Arc-specific knowledge required), particularly with Llama models and vLLM. In my initial tests on a 5090 machine and on a system with 128 GB of unified memory, I’ve been seeing some interesting results. I have several questions and would really value the opportunity to discuss them with someone experienced, so I can make informed decisions and set things up correctly from the start. I’m open to paying for your time; however, depending on the rate, I would appreciate seeing some evidence of relevant experience. Thanks!
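For context on the kind of setup being discussed: a multi-GPU vLLM deployment of a Llama model is typically launched with tensor and/or pipeline parallelism. A minimal sketch follows; the model ID and parallelism split are illustrative assumptions, not the poster's actual configuration, and running vLLM on Intel Arc GPUs may require the XPU-specific build rather than the default CUDA one.

```shell
# Illustrative vLLM launch across 6 GPUs. Tensor-parallel size must divide
# the model's attention-head count, so 6 cards are often split as
# tensor parallel x pipeline parallel (2 x 3) rather than TP=6.
# Model name is an example; Arc cards need vLLM's Intel/XPU backend.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 3 \
    --max-model-len 8192
```

With six cards, the TP×PP split is a key decision: tensor parallelism gives lower latency but requires fast inter-GPU links, while pipeline parallelism tolerates slower interconnects at the cost of throughput on small batches.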