The Challenge
Oversite is a construction management SaaS platform built for engineering firms, inspectors, and contractors. With a team of just four — a CTO/lead developer, one additional developer, and two sales representatives — they were building a strong product but drowning in the day-to-day.
The team already had capable tools in place: HubSpot for CRM, Otter for meetings, GitLab for code, Intercom for support, and Slack for communication. But nothing talked to anything else. Every morning started with 30–45 minutes of platform-hopping just to figure out what needed attention.
A Sales Rep Doing QA. Before every release, a sales team member spent hours manually clicking through every workflow, every button, and every feature — because the team couldn't afford a dedicated QA hire.
Pipeline Reports Took Half a Day. Generating a weekly sales report meant manually pulling HubSpot data, reviewing call recordings, and cross-referencing meeting notes — a 3–4 hour process.
Action Items Disappeared. Daily standups generated commitments and priorities, but without a system to capture them, decisions made at 9 AM were forgotten by afternoon.
Support Spread Across Three Tools. Customer issues arrived in Intercom, were discussed in Slack, and tracked as GitLab issues — with no unified view.
What We Built
Rather than replacing Oversite's existing tools, we connected them through the Model Context Protocol (MCP), Anthropic's open standard for wiring external data sources into Claude. The result is an AI orchestration layer where a single prompt can pull data from multiple platforms, synthesize it, and deliver actionable intelligence.
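In practice, each connected tool is exposed to Claude as an MCP server declared in the client's configuration file. A minimal sketch of what such a configuration looks like (the package names, server entries, and token variables below are illustrative placeholders, not Oversite's actual setup):

```json
{
  "mcpServers": {
    "hubspot": {
      "command": "npx",
      "args": ["-y", "mcp-server-hubspot"],
      "env": { "HUBSPOT_ACCESS_TOKEN": "<token>" }
    },
    "gitlab": {
      "command": "npx",
      "args": ["-y", "mcp-server-gitlab"],
      "env": { "GITLAB_PERSONAL_ACCESS_TOKEN": "<token>" }
    }
  }
}
```

Once servers like these are registered, a single conversation can read from all of them, which is what makes cross-platform prompts like a morning briefing possible.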
1. Unified Daily Operations Hub. Connected Otter, Intercom, Slack, Gmail, and Google Calendar through Claude so any team member can ask "What should I be working on today?" and get a prioritized briefing.
2. AI-Powered Sales Intelligence. Built a Claude project that pulls real-time deal data from HubSpot and enriches it with qualitative context from Otter meeting transcripts. A full pipeline report that took 3–4 hours now generates in under 5 minutes.
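The core of that enrichment step is a join between quantitative CRM records and qualitative meeting notes before both are handed to the model as report context. A simplified sketch in Python (field names and sample data are illustrative, not Oversite's actual schema):

```python
# Join CRM deal records with the latest meeting summary per company,
# producing enriched records ready to be summarized into a report.

def enrich_deals(deals, transcripts):
    """Attach the most recent meeting summary to each deal, keyed by company."""
    notes_by_company = {t["company"]: t["summary"] for t in transcripts}
    return [
        {**deal,
         "meeting_context": notes_by_company.get(deal["company"], "No recent meetings")}
        for deal in deals
    ]

deals = [{"company": "Acme Engineering", "stage": "Proposal", "amount": 42000}]
transcripts = [{"company": "Acme Engineering",
                "summary": "Asked about pricing for inspector seats."}]

enriched = enrich_deals(deals, transcripts)
print(enriched[0]["meeting_context"])
```

With MCP connectors in place, Claude performs this kind of join conversationally rather than through hand-written glue code; the sketch just shows the shape of the data flow.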
3. Automated QA Testing System. Designed a two-track QA architecture: locally, Claude Code generates Playwright tests against a real browser, with a self-healing loop that automatically fixes failing tests. In CI, every merge request triggers an AI analysis that posts test suggestions directly as GitLab comments.
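The CI track of that architecture hangs off GitLab's merge request pipelines. A sketch of what such a job might look like in `.gitlab-ci.yml` (the job name, image, and review script are hypothetical placeholders; only the predefined CI variables are real GitLab features):

```yaml
ai-test-review:
  stage: test
  image: node:20
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    # Diff the merge request against its target branch, ask the model
    # for test-coverage suggestions, and post them back as MR comments
    # via the GitLab Notes API.
    - git diff origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME...HEAD > changes.diff
    - node scripts/ai-review.js changes.diff   # hypothetical helper script
```

Scoping the job to `merge_request_event` keeps the analysis tied to review time, so suggestions land where the developer is already looking.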
4. Team AI Onboarding & Project Architecture. Conducted hands-on training and designed role-specific Claude projects — Sales Pipeline Reporting, Accounts Receivable, QA Automation — each with tailored instructions and connected integrations.
The Results
30+ hours saved every week. The equivalent of nearly one full-time employee returned to high-value work.
80% reduction in manual QA time. Freeing a sales rep to get back to selling instead of clicking through test workflows before every release.
5 minutes to generate a full sales pipeline report. Enriched with meeting intelligence — down from 3–4 hours of manual compilation.
~$200/month total cost. The entire AI automation system runs on roughly $200/month, versus $15K–23K/month for the 2–3 additional hires it replaces.