Internal working document for 4K's prospective Frontier engagement. It synthesizes what 4K's audit revealed, which operational simplifications would help them regardless of our involvement, what we could build (organized by the tool types we have available), and how those builds group into engagement tiers. Source detail lives in companion Markdown documents in the parent folder; this site presents the picture in scannable form.
Four pages, designed to be read in order or referenced individually. Each page links to the others.
The strategic patterns we've identified, the quantitative pains 4K named, the four constraints the audit set, and the tool-type taxonomy that organizes everything else.
02 / Simplifications
Eight operational simplifications 4K could make that would benefit them whether they hire us or not — and would materially simplify any AI system we build on top. Leverage-ranked.
03 / Capabilities
Every build item we've identified, grouped by tool type (Skills, MCP Connectors, Managed Agents, Custom Apps, Config & Process). Click any item for full per-item detail: what we'd build, where the data comes from, what 4K-side input we need, what could go wrong.
04 / Tiers
How the build items group into six tiers (Foundations → Substrate → Skills → Automations → Intelligence → Extensions). Substrate-first ordering — the CKA lands as Tier 1 foundation, not as a finale. Each tier's composition, dependencies, and annotations for what changes if 4K does the simplifications.
Six patterns that appear across multiple audit findings. Each one shapes how we'd approach the engagement and what we'd recommend 4K do on their side.
Elia's framing: "every 'add this' pairs with a 'retire this.'" Cheat sheets name the retirement explicitly. If our build adds capability without subtracting workflow, we've failed the test. Every item in our capability inventory carries an explicit Retires: line for this reason.
Elia · M3 ~58:25–59:14
Beto's reconciliation formulas, Shanice's sentiment rubric, Beto's risk-correlation rules, Si's rate-card application logic, the engagement-type definitions — all live as implicit knowledge in specific people's heads. Writing them down has value for 4K regardless of us (succession risk, training, consistency) and is the load-bearing soft input for every skill we'd build.
Same engagement has different names in Harvest, Forecast, ClickUp, Notion, Pipedrive, Slack, and Drive. Beto has a reconciliation formula. PMs eyeball matches. The $20K → $15K miss traces back partly to this drift. This isn't a tool problem — it's a structural data-governance problem that should be solved by 4K, not engineered around by us.
The CKA substrate (Build 1) isn't just one deliverable — it's the foundation that makes a class of downstream work possible. SiBorg refactor, Client Health Agent, deep onboarding briefs, cross-engagement comparable retrieval — all of these consume the substrate. Building it is HARD; the value compounds across everything else.
Harvest is leaving at 5× repricing. Forecast goes with it. The replacement decision is the single highest-leverage simplification 4K can make — a unified PSA replaces both systems, eliminates the name-format-drift problem, removes the Forecast-API blocker, and collapses much of Beto's reconciliation burden. The status quo is the expensive position, not the safe one.
Build 2 (Client Health Agent) and any cross-engagement analysis surfaces variance across PMs — response cadence, projection-update timeliness, ticket-cadence per engagement, etc. This data must never appear in PM-visible views. Permission architecture has to be load-bearing from day one of any Build 2 design, not retrofitted. The cost of an accidental leak is the engagement.
Top-level synthesis of the audit's findings. Quantitative signals on the left, constraints and policy on the right.
| Pain | Frequency / Volume | Time cost | Owner |
|---|---|---|---|
| Scorecard ETL ("Updating Fucking Scorecards") Beto · M2 ~06:09–13:15 | 2× weekly (Tue/Wed L10s) | 5 min currently, plus Notion entry + CC import + revenue-projection entry (down from ~30 min historically). Plus a mid-week double-update if numbers shift. | Beto |
| Client report production Joanna · M2 ~21:00–30:00 | Daily + monthly + quarterly per client; ~50 reports/week aggregate | 30–60 min per monthly report; 15 min per daily report | Joanna, Fabi, Shanice (per-PM templates) |
| Sentiment scoring Shanice · M2 ~55:55–56:51; Si · M3 ~45:27 | Biweekly / monthly / quarterly per engagement | ~15 min per scoring entry; gut-reaction 1–10 with implicit weights | Shanice |
| Late-invoice / AR monitoring Elia · M3 ~29:45–32:54 | 2× weekly (Mon / Thu cycles) | Volume not captured in audit (15-min follow-up question) | Peke + Jade (split Mon/Thu) |
| Pipeline → Finance HQ reconciliation Elia · M3 ~21:24–34:38 | 5-stage manual chain; $20K → $15K example = 25% miss on single client's monthly projection | Two-person debugging dependency (sensitive-adjacent) | Beto + Teresa (structural dependency) |
| "HUGE" engagement onboarding Elia · M2 ~51:35 | Per-new-team-member, per-engagement | ~30+ min verbal walkthrough because written record is stale or scattered | Engagement lead + new team member |
| SiBorg estimation Si · M3 ~13:43–14:10; ~16:33–18:24 | Per-opportunity (~50–100/year) | "All-logo data wish" — she wants cross-engagement actuals feedback she doesn't have. Plus 6–8 months of accumulated win/loss notes that aren't queryable. | Si |
| Pipedrive active leads audit · tech stack | ~65 active leads | Currently no programmatic access for the team — manual UI work | Si + sales-side |
5× repricing at renewal. 4K probably won't be on Harvest at end of year. Every Harvest-anchored build needs a pluggable connector layer so the swap is config, not re-architecture.
Elia · M3 ~57:48
Every "add this" pairs with a "retire this." Cheat sheets name the retirement explicitly. Our capability inventory honors this with a Retires: line on every entry.
Elia · M3 ~58:25–59:14
Build zone is operations, finance, biz-dev, AR. The human-to-human client conversation stays out of scope. We don't build anything that automates, mediates, or substitutes for direct client relationships.
Elia · M1 ~25:25
Three load-bearing 4K tools — ClickUp, Notion, Forecast — are not in our productized MCP family today. ClickUp and Notion have vendor MCPs (adoption, not invention). Forecast is the real exception: no public API, vendor disclaims undocumented use.
Every build item maps to one of four tool types — or to a fifth implicit category for things that aren't tools-we-build (configuration artifacts, setup activities, organizational process work). The Capabilities page is organized by these types.
SKILL.md files in Claude Teams projects. Encode workflows, prompts, decision logic, output format, business rules. Invoked by a user or by a Managed Agent. Consume data through MCPs at invocation; don't run on a schedule themselves.
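To make the Skill format concrete, here is a hypothetical SKILL.md sketch. The skill name, rubric, and section headings are all invented for illustration; the actual files would encode 4K's written-down rubrics from the knowledge-capture work.

```markdown
# SKILL: sentiment-scoring  (illustrative sketch — names and rubric invented)

## When to invoke
A PM requests a sentiment score for an engagement, or a Managed Agent
runs its scheduled pass.

## Inputs (pulled via MCP at invocation)
- Recent client communication threads for the engagement
- The last two scoring entries, for trend context

## Decision logic
Apply the written rubric (replacing the implicit gut-reaction 1–10):
score each dimension separately, then combine using the rubric weights.

## Output format
| Dimension | Score | Evidence |
plus a one-line trend note.

Retires: ad-hoc gut-reaction scoring with implicit weights.
```

Note the explicit `Retires:` line, per the subtraction-first constraint.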
Most of the Tier-1-style operational items are Skills.
Lambda-deployed services exposing tools to Claude via the MCP protocol. Our RunReport pattern (dedicated tools plus code-generation for open-ended queries). Worth building even when an off-the-shelf vendor MCP exists, because ownership gives us control: auth, rate limiting, response normalization, read-only enforcement, and security via AST validation.
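As a hedged illustration of the AST-validation idea (a minimal sketch, not 4K's or our production implementation — the whitelist and deny-list are invented), a connector can parse generated query code and reject anything outside a read-only subset before executing it:

```python
import ast

# Node types a read-only query snippet may contain; anything else is rejected.
# This whitelist is a hypothetical policy for illustration only.
ALLOWED_NODES = (
    ast.Module, ast.Expr, ast.Load, ast.Store,
    ast.Assign, ast.Name, ast.Attribute, ast.Call, ast.Constant, ast.keyword,
    ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.Div,
    ast.Compare, ast.Eq, ast.NotEq, ast.Lt, ast.Gt, ast.LtE, ast.GtE,
    ast.BoolOp, ast.And, ast.Or, ast.UnaryOp, ast.Not, ast.USub, ast.IfExp,
    ast.Subscript, ast.Slice, ast.List, ast.Dict, ast.Tuple,
    ast.ListComp, ast.GeneratorExp, ast.comprehension,
)

# Call names that imply mutation or I/O — a hypothetical deny-list.
FORBIDDEN_CALLS = {"open", "exec", "eval", "__import__", "setattr", "delattr"}

def validate_readonly(source: str) -> bool:
    """Return True only if `source` parses and every node is whitelisted."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            return False  # e.g. Import, FunctionDef, While are rejected outright
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in FORBIDDEN_CALLS or name.startswith("write"):
                return False
    return True
```

The design choice is allow-list, not deny-list: unknown constructs fail closed, which is what read-only enforcement needs.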
Scheduled or event-driven runs in our frontier-agents Lambda platform. Each gets a .md job file with cron schedule, system prompt, MCP attachments, delivery channels. Agents compose Skills and MCPs to do work that needs to happen on its own.
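A hypothetical job file, assuming a YAML-frontmatter convention — the field names here are invented for illustration, not the actual frontier-agents schema:

```markdown
---
# Hypothetical field names; the real platform schema may differ.
schedule: "0 7 * * MON,THU"      # cron: Mon/Thu mornings, matching the AR cycle
mcp: [harvest-readonly, slack]   # MCP attachments available at run time
deliver: ["#ops-ar"]             # delivery channel for the run's output
---

You are the AR monitoring agent. Each run: pull open invoices past
terms, draft a per-client follow-up summary, and post it to the
delivery channel. Never contact clients directly — client-facing
communication is out of scope.
```

The system prompt itself enforces the client-facing red line from the constraints above.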
Completely custom systems with persistent state, ingestion pipelines, data stores, possibly web UIs. The CKA is the canonical example. Where the largest engineering risk and the highest leverage both sit.
Real engagement deliverables that aren't tools-we-build: INSTRUCTIONS files (configuration artifacts), Claude Teams workspace setup (setup activity), Newfangled-side NIST 800-53 alignment posture (organizational process work), Finance HQ discovery sprint (human consulting work). The four tool types don't cover these; the Capabilities page gives them their own section.
The intended reading sequence is left-to-right across the four pages. Each section answers a different question.