Intentional Owl

What does an owl do to be intentional?

Building software, learning languages, and writing it all down.

Projects

📝 Blog

Thoughtful engineering, practical learning, and consistent growth. Notes from the day-to-day of building things.

Read the Blog

🇮🇹 Italian

An interactive app for learning Italian — vocabulary, grammar, and practice exercises all in one place.

Launch App

Latest from the Blog

All posts →

Building a Real Auth System for a Learning App

4/19/2026 by Robb | 5 min read
Development node.js express authentication passport.js mongodb oauth devlog
Taking an Italian learning app from hardcoded admin credentials to proper Passport.js auth with email/password + Google OAuth, role-based access, and MongoDB session persistence. Phase 1 of a 4-phase upgrade adding user accounts, progress tracking, gamification, and adaptive AI-driven learning.
Read More
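The role-based access piece of an upgrade like this often boils down to a tiny Express-style middleware. This is a hedged sketch only; `requireRole`, the `req.user.roles` shape, and the role names are assumptions for illustration, not the app's actual code:

```javascript
// Minimal sketch of role-based access control, Express-middleware style.
// The req/res shapes and role names here are assumptions, not the app's real code.
function requireRole(role) {
  return function (req, res, next) {
    // Passport would normally have populated req.user from the session.
    const roles = (req.user && req.user.roles) || [];
    if (roles.includes(role)) {
      next(); // authorized: hand off to the route handler
    } else {
      res.status(403).json({ error: "forbidden" });
    }
  };
}

module.exports = { requireRole };
```

In a real Express app this would sit after the session and Passport middleware, e.g. `app.get("/admin", requireRole("admin"), handler)`.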

Returning to Italy: A Trip That Feels Less Like Travel and More Like Alignment

3/27/2026 by OwlAdmin | 3 min read
Italian Heritage Travel Italy
There are trips you plan because you want a vacation, and there are trips you plan because something in you is pulling you toward a place. This September, I’m flying into Milan for ten days, and it feels like the latter. Not tourism. Not a checklist. Something closer to a return—despite the fact that I’ve never lived there. For years I’ve been studying the language, tracing my family lines, and piecing together the stories of the people who left Italy generations ago. Somewhere along the way, Italy stopped being a place on a map and became a place that explains me. This trip is the first time I get to walk through that feeling instead of imagining it.
Read More
Reproducible Local LLM Stack on a Laptop — Docker, k3d, Ollama, Open WebUI

3/1/2026 by AdminOwl | 3 min read
local-llm DevOps docker k3d ollama open-webui Ansible
This post documents a reproducible DevOps workflow for running a local LLM stack on an Acer Predator laptop (64 GB RAM, RTX 5070 Ti — 12 GB VRAM, 2 TB storage). It covers host preparation, optional GPU passthrough, running Ollama for local inference, creating a lightweight Kubernetes cluster with k3d for the UI, deploying Open WebUI, and automating the whole stack with Ansible. The automation scripts and manifests referenced in the post live at https://github.com/binarobb/local-llm-stack.
Read More
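The Open WebUI step in a stack like this reduces to a small Kubernetes manifest. The sketch below is a hypothetical config fragment, not the repo's actual files; the resource names, image tag, and Ollama endpoint are assumptions (k3d does resolve `host.k3d.internal` to the host, where Ollama listens on 11434 by default):

```yaml
# Hypothetical Open WebUI Deployment + Service for a k3d cluster.
# Names, image tag, and the Ollama URL are assumptions, not the repo's manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-webui
  template:
    metadata:
      labels:
        app: open-webui
    spec:
      containers:
        - name: open-webui
          image: ghcr.io/open-webui/open-webui:main
          ports:
            - containerPort: 8080
          env:
            # Ollama runs on the host; host.k3d.internal resolves to it from inside k3d.
            - name: OLLAMA_BASE_URL
              value: "http://host.k3d.internal:11434"
---
apiVersion: v1
kind: Service
metadata:
  name: open-webui
spec:
  selector:
    app: open-webui
  ports:
    - port: 80
      targetPort: 8080
```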