Tech Resources
LLM models, training datasets, technical guides, and infrastructure references from the Blue Note Logic engineering team.
Model Downloads
Production-ready LLM models in GGUF and other formats for local inference and deployment.
dobetter-norge-v2
Domain-specific LLM fine-tuned for Norwegian legal text. Built on Qwen 2.5 7B and fine-tuned with QLoRA (low-rank adapters trained over a 4-bit quantized base), using 2,847 legislative documents and 7,603 expert Q&A pairs.
Featured · View details →
Claude Model Family Reference
Comprehensive comparison of Anthropic's Claude model tiers: Opus 4.6, Sonnet 4.6, and Haiku 4.5. Context windows, capabilities, pricing, and recommended use cases from BNL's production experience.
View details →
Qwen 2.5 7B Base Model
The base model architecture behind dobetter-norge-v2. Alibaba's Qwen 2.5 7B offers strong multilingual performance and serves as an excellent foundation for domain-specific fine-tuning.
Access →
Training Datasets
Curated Q&A pairs, document corpora, and structured training data for fine-tuning domain-specific models.
Norwegian Legal Q&A Dataset
7,603 expert-curated question-answer pairs covering Norwegian legislation, case law, and regulatory compliance. Used to fine-tune dobetter-norge-v2.
View details →
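As a sketch of what one supervised fine-tuning record might look like in JSON Lines form (the field names, the example question, and the `source` field are illustrative assumptions; the dataset's actual schema is not documented on this page):

```python
import json

# Hypothetical record shape for one Q&A training example; not the
# dataset's documented schema.
record = {
    "question": "Hva er oppsigelsesfristen etter arbeidsmiljøloven § 15-3?",
    "answer": "Hovedregelen er en gjensidig oppsigelsesfrist på én måned ...",
    "source": "arbeidsmiljøloven",  # assumed provenance field
}

def to_jsonl_line(rec: dict) -> str:
    """Serialize one record as a JSON Lines row (UTF-8, no ASCII escaping)."""
    for key in ("question", "answer"):
        if not rec.get(key, "").strip():
            raise ValueError(f"missing required field: {key}")
    return json.dumps(rec, ensure_ascii=False)

line = to_jsonl_line(record)
```

Validating required fields at serialization time catches empty answers before a training run fails on malformed rows.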
Norwegian Legislative Corpus
Source document collection of 2,847 Norwegian laws, regulations, and court decisions. The foundation dataset used to generate training data for dobetter-norge-v2.
Access →
Technical Guides
Step-by-step deployment guides, fine-tuning workflows, and configuration references from the BNL engineering team.
Ollama Local Inference Setup Guide
Complete guide to deploying GGUF models locally with Ollama. Covers installation, model loading, API configuration, and performance optimization for production use.
View details →
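A minimal sketch of calling a locally served model through Ollama's REST API using only the standard library; the model name and prompt are placeholders, and the `generate` call assumes an Ollama server running on its default port 11434:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generation request for Ollama's REST API."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """POST to a locally running Ollama server and return the response text."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Placeholder model/prompt; assumes the model was created or pulled locally.
payload = build_payload("dobetter-norge-v2", "Summarize section 3.")
```

With `"stream": False` the server returns one JSON object whose `response` field holds the full completion, which keeps the client a few lines long.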
QLoRA Fine-Tuning Workflow
Blue Note Logic's documented workflow for fine-tuning language models with QLoRA (low-rank adapters trained over a 4-bit quantized base). Covers dataset preparation, hyperparameter selection, training execution, and evaluation.
Access →
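The adapter arithmetic behind QLoRA can be sketched in a few lines: the base weights stay frozen in 4-bit, and each adapted weight matrix gains two small trainable factors of rank r. The layer count, width, and rank below are illustrative assumptions, not the documented dobetter-norge-v2 configuration:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """A rank-r LoRA adapter adds two matrices: A (d_in x r) and B (r x d_out)."""
    return rank * (d_in + d_out)

# Illustrative numbers only: a 4096-wide square projection adapted at rank 16.
per_matrix = lora_trainable_params(4096, 4096, 16)

# Adapting, say, 4 projection matrices in each of 28 transformer layers:
total = per_matrix * 4 * 28
```

Even with every assumed matrix adapted, the trainable parameter count stays in the tens of millions, a small fraction of a 7B base, which is why QLoRA fits on a single consumer GPU.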
GGUF Quantization Reference
Comprehensive comparison of GGUF quantization levels from Q2_K through Q8_0. File sizes, quality tradeoffs, VRAM requirements, and inference speed benchmarks.
Access →
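A back-of-envelope file-size estimate follows directly from bits per weight; the bpw figures below are rough approximations (the exact value depends on each file's tensor mix), and 7e9 stands in for a 7B-parameter model:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size: parameters times bits, converted to gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate bits-per-weight; real files vary by quantization recipe.
approx_bpw = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q8_0": 8.5}

sizes = {q: round(gguf_size_gb(7e9, bpw), 2) for q, bpw in approx_bpw.items()}
```

The same arithmetic gives a first-order VRAM floor for fully offloaded inference, before KV cache and runtime overhead are added on top.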
Claude Code CLI Developer Guide
Setup and advanced usage of Anthropic's Claude Code CLI for AI-assisted development workflows. Configuration tips and productivity patterns from the Gilligan.TECH development stack.
Access →
Tools & Frameworks
Development tools, inference runtimes, and frameworks for building and deploying AI applications.
Ollama Runtime
Self-hosted LLM inference engine for running language models locally. Supports GGUF format, GPU acceleration, REST API, and custom Modelfiles for production deployment.
Access →
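A minimal Modelfile sketch for serving a local GGUF file with Ollama; the file path, parameter values, and system prompt are illustrative assumptions, not BNL's production configuration:

```
# Illustrative Modelfile; path and values are assumptions.
FROM ./dobetter-norge-v2.Q4_K_M.gguf
PARAMETER temperature 0.2
PARAMETER num_ctx 8192
SYSTEM "You are an assistant for Norwegian legal text."
```

Registering it is one command, `ollama create dobetter-norge-v2 -f Modelfile`, after which the model is available to `ollama run` and the REST API.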
llama.cpp
High-performance C++ inference engine for GGUF models. The foundational runtime behind Ollama and most local LLM tools, offering maximum control over inference parameters.
Access →
Hardware & Infrastructure
GPU specifications, inference benchmarks, and infrastructure configuration for local and cloud AI workloads.
NVIDIA RTX 5090 Inference Benchmark
Local inference performance data for the NVIDIA RTX 5090. Benchmarks across model sizes, quantization levels, and batch configurations from BNL's production hardware.
View details →
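Single-stream decode on a GPU is roughly memory-bandwidth bound, so a useful ceiling on tokens per second is bandwidth divided by the bytes streamed per token (approximately the weight file size). The figures below are illustrative assumptions, not BNL's measured benchmark numbers:

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound estimate: each generated token streams the full weights once."""
    return bandwidth_gb_s / model_size_gb

# Assumed figures: ~1800 GB/s of memory bandwidth, a ~4.2 GB 4-bit 7B model.
estimate = decode_tokens_per_sec(1800, 4.2)
```

Measured throughput lands below this ceiling once compute, KV-cache reads, and framework overhead are accounted for, but the ratio is a quick sanity check against benchmark results.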
GPU Inference Configuration Guide
Setting up local AI inference infrastructure: CUDA toolkit installation, driver requirements, VRAM planning for different model sizes, and multi-GPU configuration.
Access →
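VRAM planning can be sketched as weights plus KV cache. The model dimensions below are illustrative assumptions (a grouped-query-attention layout, not any specific model's documented configuration):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                seq_len: int, bytes_per_elem: int = 2) -> float:
    """K and V tensors: one entry per layer, KV head, head dim, and position."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

# Illustrative dimensions: 28 layers, 4 KV heads of dim 128, 8k context,
# fp16 cache entries (2 bytes each).
cache = kv_cache_gb(28, 4, 128, 8192)

# Plus the weights themselves: assume ~4.2 GB for a 4-bit 7B model.
budget = round(4.2 + cache, 2)
```

Context length scales the cache linearly, so doubling `seq_len` doubles the cache term while the weight term stays fixed; that is usually the first number to revisit when a model that "fits" starts to OOM at long contexts.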