Tech Resources
LLMs, training datasets, technical guides, and infrastructure references from the Blue Note Logic engineering team.
Tools & Frameworks
Development tools, inference runtimes, and frameworks for building and deploying AI applications.
Ollama Runtime
Self-hosted LLM inference engine for running language models locally. Supports GGUF format, GPU acceleration, REST API, and custom Modelfiles for production deployment.
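As a minimal sketch of Ollama's REST API, the snippet below builds a request for the `/api/generate` endpoint and sends it to the default local port (11434). The model name and prompt are illustrative; any model previously pulled with `ollama pull` would work.

```python
import json
import urllib.request

# Request body for Ollama's /api/generate endpoint.
# "stream": False asks for one complete JSON reply instead of chunks.
payload = {
    "model": "llama3",  # illustrative; any locally pulled model
    "prompt": "Why is the sky blue?",
    "stream": False,
}

def generate(host="http://localhost:11434"):
    """Send the prompt to a local Ollama server and return the reply text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate()` requires a running Ollama instance; the payload itself is just plain JSON, so any HTTP client can be substituted.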
llama.cpp
High-performance C++ inference engine for GGUF models. The foundational runtime behind Ollama and most local LLM tools, offering maximum control over inference parameters.
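To illustrate the direct control over inference parameters mentioned above, here is a hedged sketch that assembles a command line for llama.cpp's `llama-cli` binary; the model path and sampling values are hypothetical, not recommended defaults.

```python
import subprocess

# Command line for llama.cpp's `llama-cli` binary.
# The model path and sampling values below are illustrative only.
args = [
    "llama-cli",
    "-m", "models/example.gguf",          # GGUF model file (hypothetical path)
    "-p", "Explain GGUF in one sentence.",
    "-n", "128",                          # max tokens to generate
    "--temp", "0.7",                      # sampling temperature
    "-ngl", "99",                         # layers to offload to the GPU
]

def run():
    """Run llama-cli and return its stdout (requires llama.cpp on PATH)."""
    return subprocess.run(args, capture_output=True, text=True).stdout
```

Ollama drives this same runtime internally; using `llama-cli` directly simply exposes these knobs per invocation instead of baking them into a Modelfile.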