Scattered Focus
Finding edge through cross-pollination
It’s been a busy stretch over here — moving into a new home with little kids while trying to keep up with the markets and the relentless pace of AI developments. So today I wanted to have Claude do a guest post and explain what I’ve been building the past few months (which you’ll see below).
Before you read this, some context: the biggest challenge right now is getting my mind, my experience, and the things that actually make me money into code — and then having agents be able to use that code to bring things to me in an efficient and timely manner. This won’t work well if I don’t know what to build or if I can’t articulate what’s inside my head to the machine. I’m trying like hell to digitize myself as deeply as I can, just for the joy of doing it and being curious about where it leads. I’m definitely not there yet, but the process so far has been to create a bunch of different cron jobs with code and then have the agent oversee the results, draw conclusions, and alert me.
I’m open to any feedback to improve it, but my feeling is that I need to be able to explain myself strategically to do this well — and that is almost a job in itself. I’m not sure if I’m naturally good at that, but I’ll keep working toward it. I’ll go into some of this more in a future post but for now I’ll let Claude go over what I’ve been up to.
-Brad
I Built an AI Trading Intelligence System That Runs 24/7 While I Sleep
*How I wired Claude, Telegram, and 40+ Python scripts into a market analysis machine that does in minutes what used to take me hours.*
Most people use AI to write emails or summarize articles. I use it to wake me up when the market is doing something interesting.
Over the past two months, I’ve been building what I can only describe as a personal Bloomberg terminal — except it’s powered by Claude, runs on a Mac Mini in my closet, and costs me about $2 a day in API calls. Here’s what I’ve been up to and what I’ve learned.
## The Problem I Was Solving
Every morning before the market opens, I used to spend 60-90 minutes doing the same thing: scanning for premarket movers, checking earnings reports, looking at sector rotation, reading newsletters, and trying to figure out what mattered. Most of it was noise.
I wanted a system that would do all of that work overnight and hand me a brief when I woke up — like having a research analyst who never sleeps and works for pennies.
## What I Actually Built
### Morning Intelligence: My Daily AI Briefing
Every day at 8:45 AM Central, a Python script called `morning_intelligence.py` kicks off. It:
- Scans for stocks showing earnings inflection points (more on this below)
- Groups them by industry to spot clusters — because one biotech stock moving is noise, but five biotech stocks inflecting is a signal
- Checks 180+ thematic baskets (AI infrastructure, quantum computing, clean energy, etc.) for McClellan oscillator crossings
- Pulls the last 7 days of insider buying and looks for clustering
- Cross-references everything against my existing watchlist
- Runs deep research on the most promising clusters
- Sends me a Telegram message with the highlights
The whole thing runs in about 3 minutes and replaces what used to be my entire pre-market routine.
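In code, that orchestration can be as simple as a list of step functions run in order, each contributing highlights to a shared brief. A minimal sketch of the pipeline shape, not the actual `morning_intelligence.py`; every name here is made up:

```python
# Minimal sketch of the pipeline shape, not the actual
# morning_intelligence.py; every step name here is made up.
from dataclasses import dataclass, field

@dataclass
class Brief:
    highlights: list = field(default_factory=list)

def run_pipeline(steps, brief):
    """Run each scan step in order; steps append their findings."""
    for step in steps:
        step(brief)
    return brief

# Toy stand-ins for the real scans (inflections, baskets, insiders...).
def scan_inflections(brief):
    brief.highlights.append("2 biotech names inflecting")

def scan_baskets(brief):
    brief.highlights.append("AI Infrastructure crossed above zero")

brief = run_pipeline([scan_inflections, scan_baskets], Brief())
print("\n".join(brief.highlights))
```

The nice property of this shape is that adding a new scan is just appending one function to the list; a failed step can be caught and logged without killing the rest of the brief.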
### The Earnings Inflection Scanner
This is probably the most interesting thing I’ve built. The idea is simple: companies don’t turn around overnight. There’s usually a moment in an earnings call where the language shifts — management starts talking differently about the business, guidance changes tone, new initiatives appear.
My inflection scanner uses a 3-call context window: it takes two baseline earnings calls and one trigger call, then uses AI to detect when the trajectory changes. Each stock gets a numeric inflection score, and anything that scores a 4 or higher gets flagged.
The cost? About $0.01-0.02 per stock to analyze. I run through hundreds of companies and the whole batch costs less than a coffee.
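A minimal sketch of the 3-call windowing and the 4-plus flag threshold described above; `score_with_ai` is a stand-in for the real model call, and the transcripts are toy strings:

```python
# Hedged sketch: two baseline transcripts plus one "trigger" call,
# scored by an AI judge. score_with_ai is a placeholder for the real
# model call; the >= 4 flag threshold is from the post.
def rolling_windows(transcripts, baseline=2):
    """Yield (baselines, trigger) pairs over a list of calls, oldest first."""
    for i in range(baseline, len(transcripts)):
        yield transcripts[i - baseline:i], transcripts[i]

def score_with_ai(baselines, trigger):
    # Placeholder: the real system prompts a model to rate how much the
    # trigger call's language diverges from the baselines.
    return 4 if "raising guidance" in trigger else 1

calls = ["steady quarter", "steady quarter", "raising guidance on new product"]
flags = [
    trigger
    for baselines, trigger in rolling_windows(calls)
    if score_with_ai(baselines, trigger) >= 4
]
print(flags)
```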
I then built a pre-earnings watchlist on top of this. Every Sunday morning, it cross-references upcoming earnings with historical inflection data and categorizes stocks into buckets: “Already Inflecting,” “Building Momentum,” and “Catalyst Pending.” By the time earnings season hits, I already know which calls to pay attention to.
### 180+ Thematic Baskets with McClellan Oscillators
I track over 180 thematic groups — everything from “AI Leaders” and “Hyperscale Cloud” to “Lithium Miners” and “Psychedelics.” Each basket gets daily, weekly, and McClellan oscillator charts generated automatically.
The McClellan oscillator is a breadth indicator that tells you whether a group of stocks is gaining or losing participation. When the oscillator crosses above zero, short-term breadth in the basket has turned stronger than its longer-term trend, meaning more of the group is joining the advance: a bullish signal. My system detects these crosses automatically and alerts me via Telegram.
The AI layer on top analyzes theme rotation patterns. Which sectors are gaining momentum? Which ones are losing steam? Are there emerging themes that don’t fit into my predefined baskets? The system uses clustering algorithms to detect these organically. This is my modern order flow reader and is probably the most important and useful part of what I’ve built.
### My Telegram Trading Bot
I have a Telegram bot (built on an open-source framework called OpenClaw) that runs Claude 24/7. I can message it from my phone and ask it to:
- Record trades across 5 different portfolios with proper tax lot tracking
- Look up any stock and get an instant analysis
- Run research on a thesis
- Check my portfolio performance
It’s like having a trading assistant in my pocket. The bot heals itself: a watchdog script checks every 5 minutes that it’s still running, and if it has crashed, restarts it automatically.
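A watchdog like that can be a few lines of Python called from cron. This is a hedged sketch, not the actual script; the process pattern, start command, and cron line are placeholders:

```python
# Hedged watchdog sketch; the process pattern, start command, and cron
# line below are placeholders, not the real bot's configuration.
import subprocess

def is_running(pattern):
    """True if any process command line matches pattern (via pgrep -f)."""
    result = subprocess.run(["pgrep", "-f", pattern], capture_output=True)
    return result.returncode == 0

def ensure_running(pattern, start_cmd):
    """Restart the process if the check finds it dead."""
    if is_running(pattern):
        return "alive"
    subprocess.Popen(start_cmd)
    return "restarted"

# Cron would invoke this every 5 minutes, e.g.:
# */5 * * * * /usr/bin/python3 /opt/bot/watchdog.py
```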
### The Performance Tracker
All my trades flow into a Dash-based dashboard that tracks P&L across portfolios, calculates tax implications in real time, and shows me metrics I actually care about. It handles futures multipliers (NQ at $20/point, ES at $50/point), manages tax lots, and gives me a clear picture of what’s working and what isn’t.
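The multiplier math is simple but easy to get wrong in a dashboard. A sketch using the standard NQ and ES point values mentioned above:

```python
# Futures P&L = (exit - entry) points * dollars-per-point * contracts.
# NQ and ES multipliers are the standard contract specs ($20 and $50).
MULTIPLIERS = {"NQ": 20.0, "ES": 50.0}

def futures_pnl(symbol, entry, exit_price, contracts=1):
    """Dollar P&L for a long position; negative values are losses."""
    return (exit_price - entry) * MULTIPLIERS[symbol] * contracts

print(futures_pnl("ES", 5000.00, 5010.50, 2))  # 10.5 points * $50 * 2 = $1050.0
```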
### Smart Email Processing
I subscribe to a handful of investment newsletters. Instead of reading them all, my system processes them through Claude, extracts the actionable insights, and adds them to my morning brief. Cost: about $0.01-0.05 per email. I went through a few iterations optimizing which AI model to use for this — started with the expensive models and realized Haiku (Anthropic’s smallest, fastest model) handles email summarization perfectly at a fraction of the cost.
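A sketch of that email step, assuming the Anthropic Python SDK; the model id, prompt wording, and function names are illustrative, not the actual pipeline:

```python
# Hedged sketch of newsletter summarization with a small model. The SDK
# call shape (messages.create) is real; the model id string is an
# assumption, so check the current model list before relying on it.
def build_prompt(email_body):
    return (
        "Extract only actionable investment insights from this "
        "newsletter as a short bullet list. If there are none, "
        "reply 'no action'.\n\n" + email_body
    )

def summarize(email_body):
    import anthropic  # third-party SDK: pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from env
    msg = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed model id
        max_tokens=300,
        messages=[{"role": "user", "content": build_prompt(email_body)}],
    )
    return msg.content[0].text
```

Keeping the prompt builder separate from the API call makes it cheap to test and easy to swap models when a smaller one turns out to be good enough.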
## The Infrastructure: A Mac Mini in a Closet
Everything runs on a Mac Mini that sits in my closet. It never sleeps, auto-restarts after power loss, and runs about 20 cron jobs throughout the day. I access all the dashboards remotely through Tailscale (a VPN that makes your devices accessible from anywhere).
The deployment flow is simple: I write code on my laptop, push to GitHub, SSH into the Mini, pull the changes, and it’s live. The whole system runs on SQLite databases, Python scripts, and API calls to financial data providers.
Here’s a rough schedule of what runs automatically:
- **7:15 AM** — Post-earnings gap check
- **8:45 AM** — Full morning intelligence scan
- **9:00 AM** — Premarket movers alert
- **10:00 AM** — Mid-morning earnings movers update
- **4:30 PM** — After-hours movers + end-of-day earnings scan
- **4:45 PM** — Economic calendar alert
- **Sunday AM** — Weekly earnings evolution watchlist
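As a crontab, that schedule might look like the fragment below; the script names and paths are hypothetical, and it assumes the Mini's local timezone is set to Central so the cron times match the ones above:

```
# Hypothetical crontab (minute hour day month weekday); assumes the
# machine's local timezone is America/Chicago. Paths are made up.
15 7  * * 1-5  /usr/bin/python3 /opt/trading/post_earnings_gap_check.py
45 8  * * 1-5  /usr/bin/python3 /opt/trading/morning_intelligence.py
0  9  * * 1-5  /usr/bin/python3 /opt/trading/premarket_movers.py
0  10 * * 1-5  /usr/bin/python3 /opt/trading/earnings_movers.py
30 16 * * 1-5  /usr/bin/python3 /opt/trading/after_hours_movers.py
45 16 * * 1-5  /usr/bin/python3 /opt/trading/economic_calendar.py
0  8  * * 0    /usr/bin/python3 /opt/trading/weekly_earnings_watchlist.py
```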
## What I’ve Learned
**Start with the workflow, not the technology.** I didn’t set out to build a “system.” I started by automating the most annoying part of my morning routine, then kept going. Each script solves one specific problem.
**AI is absurdly cheap for analysis work.** Running Claude on hundreds of earnings transcripts costs less than lunch. The bottleneck isn’t cost — it’s knowing what questions to ask.
**The small model is usually fine.** I wasted money running expensive models on simple tasks. Email summarization, basic data extraction, formatting — the smallest model handles these perfectly. Save the big models for actual analysis and reasoning.
**Clustering beats screening.** A traditional stock screener gives you a list of stocks that match criteria. That’s useful but noisy. Clustering stocks by industry or theme and looking for *groups* that are moving together — that’s where the real signals are.
**Reliability matters more than sophistication.** My watchdog scripts, auto-restart logic, and health checks aren’t glamorous, but they’re what make this a system I actually rely on instead of a side project I check occasionally.
## The Cost
Let me break down what this actually costs to run:
- **AI API calls**: ~$2/day (mostly Claude, some Gemini)
- **Financial data APIs**: ~$50/month (FMP, Polygon)
- **Mac Mini**: Already owned (one-time ~$600)
- **Tailscale**: Free tier
- **Telegram**: Free
Total: roughly $110/month for a system that replaces hours of daily research.
## What’s Next
I’m working on better outcome tracking — validating whether my inflection signals actually predict price moves over 30-, 60-, and 90-day windows. The system generates plenty of signals, but I want hard data on which ones matter.
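The forward-return math for that validation is straightforward. A sketch over 30/60/90-day horizons, using a toy price series indexed by trading day:

```python
# Forward return from a signal day out to a fixed horizon, in percent.
# Indices stand in for trading days; the price series here is a toy.
def forward_return(prices, signal_day, horizon):
    """Percent return from signal_day to signal_day + horizon."""
    end = signal_day + horizon
    if end >= len(prices):
        return None  # not enough history yet to score this signal
    return (prices[end] / prices[signal_day] - 1.0) * 100.0

prices = [100.0 + i * 0.5 for i in range(200)]  # toy uptrending series
for horizon in (30, 60, 90):
    print(horizon, forward_return(prices, 10, horizon))
```

Returning `None` for signals that are too recent matters: scoring only the signals old enough to have a full window avoids quietly biasing the stats toward older market regimes.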
I’m also experimenting with having the AI generate weekly market narratives — connecting the dots between theme rotation, earnings surprises, and macro data into a coherent story rather than a list of alerts.
