Deploying Livepeer's LLM Pipeline with an Ollama GPU Runner – A Cloud SPE Deep Dive
Cloud SPE member Mike Zupper demonstrates how NVIDIA GPUs with 8 GB or more of VRAM can run LLM inference on the Livepeer network using a custom Ollama-based Docker GPU runner.