Building the Foundation: Livepeer NaaP Analytics and Our Treasury Proposal
The Cloud SPE's treasury proposal for Network-as-a-Product (NaaP) analytics has passed. Here's what we're building, why it matters, and how Milestone 1 is progressing.
From corporate veteran to independent creator. I build programming courses, explore Web3 technologies, and share decades of development experience through blogging and teaching.
use livepeer_rs::Client;
// 24+ years of experience
// 4+ years in Web3
async fn build_future() {
let courses = create_courses();
let blog = share_knowledge();
let web3 = explore_livepeer();
tokio::join!(courses, blog, web3);
}

After 24+ years in corporate development, I'm now focused on sharing knowledge and exploring the frontier of Web3 technology.
Comprehensive courses teaching real-world development skills from decades of experience.
Insights from 24+ years of tinkering, building, and solving complex programming challenges.
Building on the Livepeer network and exploring the cutting edge of decentralized technology.
After decades with various languages, these are the technologies capturing my attention right now.
Systems programming and performance-critical applications
Backend services and concurrent applications
Full-stack web development and modern frameworks
Decentralized video infrastructure and Web3 protocols
In 2023, I made the leap from corporate software development to focus on what I'm truly passionate about: teaching others and building on cutting-edge technologies. After 24+ years of solving complex problems, I'm now dedicated to sharing that knowledge through courses, blogging, and exploring Web3.
Sharing insights from 24+ years of building software and recent explorations in Web3
How I built an OpenAI-compatible API on Livepeer's decentralized GPU network — no rate limits, no data harvesting, just fast and affordable LLM inference for AI agents like OpenClaw.
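"OpenAI-compatible" here means the gateway accepts the standard `/v1/chat/completions` request schema, so existing OpenAI clients can simply point at a different base URL. A minimal sketch of such a request body, built with the Rust standard library only; the gateway URL and model name below are placeholders, not the endpoint or models from the post:

```rust
// Builds a standard OpenAI chat-completion JSON body using std only.
// Model and prompt are caller-supplied; no external crates required.
fn build_chat_request(model: &str, prompt: &str) -> String {
    format!(
        r#"{{"model":"{}","messages":[{{"role":"user","content":"{}"}}]}}"#,
        model, prompt
    )
}

fn main() {
    // Placeholder gateway URL - substitute the actual Livepeer gateway endpoint.
    let base_url = "https://your-gateway.example/v1";
    let body = build_chat_request("llama-3.1-8b", "Hello from an AI agent!");
    println!("POST {}/chat/completions", base_url);
    println!("{}", body);
}
```

Because the schema is the standard one, any client or agent framework that speaks the OpenAI API can be redirected to the decentralized backend by changing only its base URL.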
Cloud SPE member Mike Zupper demonstrates how to enable 8GB+ NVIDIA GPUs to run LLM AI inference on the Livepeer network using a custom Ollama-based Docker GPU runner.
Whether you're interested in programming courses, want to discuss Web3 development, or just want to chat about software engineering, I'd love to hear from you.