4K Engagement — Working Doc

Newfangled · Frontier · Internal · 2026-05-15

Internal working document for 4K's prospective Frontier engagement. It synthesizes what 4K's audit revealed, which operational simplifications would help 4K regardless of our involvement, what we could build (organized by the tool types we have available), and how those builds group into engagement tiers. Source detail lives in companion Markdown documents in the parent folder; this site presents the picture in scannable form.

Build items: 33 (across 5 tool types)
Hard blockers: 2 (Forecast API + MSA review)
Simplifications: 8 (high-leverage 4K cleanups)
Audit pains addressed: 12 (all major pains scoped)

Navigation map

Four pages, designed to be read in order or referenced individually. Each page links to the others.

Strategic patterns we've identified

Six patterns that appear across multiple audit findings. Each one shapes how we'd approach the engagement and what we'd recommend 4K do on their side.

Pattern 01
The consolidation principle

Elia's framing: "every 'add this' pairs with a 'retire this.'" Cheat sheets name the retirement explicitly. If our build adds capability without subtracting workflow, we've failed the test. Every item in our capability inventory carries an explicit Retires: line for this reason.

Elia · M3 ~58:25–59:14

Pattern 02
Knowledge in people's heads

Beto's reconciliation formulas, Shanice's sentiment rubric, Beto's risk-correlation rules, Si's rate-card application logic, the engagement-type definitions — all live as implicit knowledge in specific people's heads. Writing them down has value for 4K regardless of us (succession risk, training, consistency) and is the load-bearing soft input for every skill we'd build.

Pattern 03
Multi-system reconciliation as recurring cost

Same engagement has different names in Harvest, Forecast, ClickUp, Notion, Pipedrive, Slack, and Drive. Beto has a reconciliation formula. PMs eyeball matches. The $20K → $15K miss traces back partly to this drift. This isn't a tool problem — it's a structural data-governance problem that should be solved by 4K, not engineered around by us.

Pattern 04
The substrate as force multiplier

The CKA substrate (Build 1) isn't just one deliverable — it's the foundation that makes a class of downstream work possible. SiBorg refactor, Client Health Agent, deep onboarding briefs, cross-engagement comparable retrieval — all of these consume the substrate. Building it is HARD; the value compounds across everything else.

Pattern 05
Two-system reconciliation (Harvest + Forecast specifically)

Harvest is on its way out: renewal comes with a 5× repricing, and Forecast goes with it. The replacement decision is the single highest-leverage simplification 4K can make. A unified PSA replaces both systems, eliminates the name-format-drift problem, removes the Forecast-API blocker, and collapses much of Beto's reconciliation burden. The status quo is the expensive position, not the safe one.

Pattern 06
PM-consistency data is leadership-only forever

Build 2 (Client Health Agent) and any cross-engagement analysis surfaces variance across PMs — response cadence, projection-update timeliness, ticket-cadence per engagement, etc. This data must never appear in PM-visible views. Permission architecture has to be load-bearing from day one of any Build 2 design, not retrofitted. The cost of an accidental leak is the engagement.
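To make "load-bearing from day one" concrete, here is a minimal hypothetical sketch (field and role names are our placeholders, not Build 2's actual schema): visibility is enforced once, at the data-access layer, so no downstream view or report surface can leak PM-comparison fields by accident.

```python
from dataclasses import dataclass

# Illustrative field names; the real PM-comparison metrics would come
# from Build 2's actual schema.
LEADERSHIP_ONLY_FIELDS = {"response_cadence_rank", "projection_timeliness_rank"}


@dataclass(frozen=True)
class Viewer:
    role: str  # "leadership" or "pm"


def health_view(rows: list[dict], viewer: Viewer) -> list[dict]:
    """Strip leadership-only fields for every non-leadership viewer.

    Enforced here, at the access layer, so every downstream view
    inherits the restriction instead of re-implementing it.
    """
    if viewer.role == "leadership":
        return rows
    return [
        {k: v for k, v in row.items() if k not in LEADERSHIP_ONLY_FIELDS}
        for row in rows
    ]
```

The design point is that a retrofit would have to chase every view; gating the data itself makes the permission architecture structural.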

What 4K told us

Top-level synthesis of the audit's findings. Quantitative signals on the left, constraints and policy on the right.

Pain: Scorecard ETL ("Updating Fucking Scorecards") · Beto · M2 ~06:09–13:15
Frequency / Volume: 2× weekly (Tue/Wed L10s)
Time cost: 5 min current + Notion entry + CC import + revenue-projection entry (was 30 min historically), plus a mid-week double-update if numbers shift
Owner: Beto

Pain: Client report production · Joanna · M2 ~21:00–30:00
Frequency / Volume: daily + monthly + quarterly per client; ~50 reports/week aggregate
Time cost: 30–60 min per monthly report; 15 min per day for dailies
Owner: Joanna, Fabi, Shanice (per-PM templates)

Pain: Sentiment scoring · Shanice · M2 ~55:55–56:51; Si · M3 ~45:27
Frequency / Volume: biweekly / monthly / quarterly per engagement
Time cost: ~15 min per scoring entry; gut-reaction 1–10 with implicit weights
Owner: Shanice

Pain: Late-invoice / AR monitoring · Elia · M3 ~29:45–32:54
Frequency / Volume: 2× weekly (Mon/Thu cycles)
Time cost: volume not captured in audit (15-min follow-up question)
Owner: Peke + Jade (split Mon/Thu)

Pain: Pipeline → Finance HQ reconciliation · Elia · M3 ~21:24–34:38
Frequency / Volume: 5-stage manual chain; the $20K → $15K example = a 25% miss on a single client's monthly projection
Time cost: two-person debugging dependency (sensitive-adjacent)
Owner: Beto + Teresa (structural dependency)

Pain: "HUGE" engagement onboarding · Elia · M2 ~51:35
Frequency / Volume: per new team member, per engagement
Time cost: ~30+ min verbal walkthrough because the written record is stale or scattered
Owner: engagement lead + new team member

Pain: SiBorg estimation · Si · M3 ~13:43–14:10; ~16:33–18:24
Frequency / Volume: per opportunity (~50–100/year)
Time cost: "All-logo data wish" — she wants cross-engagement actuals feedback she doesn't have, plus 6–8 months of accumulated win/loss notes that aren't queryable
Owner: Si

Pain: Pipedrive active leads · audit · tech stack
Frequency / Volume: ~65 active leads
Time cost: currently no programmatic access for the team — manual UI work
Owner: Si + sales-side
Constraint 01
Harvest is moving

5× repricing at renewal. 4K probably won't be on Harvest at end of year. Every Harvest-anchored build needs a pluggable connector layer so the swap is config, not re-architecture.

Elia · M3 ~57:48
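What "the swap is config, not re-architecture" could look like in practice, as a hypothetical sketch (the connector and field names are our assumptions, not an existing implementation): every Harvest-anchored build depends on one narrow interface, and the vendor is selected by a single config key.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class TimeEntry:
    engagement: str  # canonical engagement name
    person: str
    hours: float


class TimeTrackingConnector(Protocol):
    """The narrow surface the builds actually need from a tracker."""

    def time_entries(self, engagement: str) -> list[TimeEntry]: ...


class HarvestConnector:
    def time_entries(self, engagement: str) -> list[TimeEntry]:
        # Would call the Harvest API; stubbed for the sketch.
        return [TimeEntry(engagement, "Beto", 12.5)]


# A PSA replacement means adding one adapter class here and flipping
# one config value; no Skill or Agent that consumes time data changes.
CONNECTORS: dict[str, type] = {"harvest": HarvestConnector}


def get_connector(config: dict) -> TimeTrackingConnector:
    return CONNECTORS[config["time_tracking_vendor"]]()
```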

Constraint 02
Consolidation over addition

Every "add this" pairs with a "retire this." Cheat sheets name the retirement explicitly. Our capability inventory honors this with a Retires: line on every entry.

Elia · M3 ~58:25–59:14

Constraint 03
Hospitality layer is no-touch

Build zone is operations, finance, biz-dev, AR. The human-to-human client conversation stays out of scope. We don't build anything that automates, mediates, or substitutes for direct client relationships.

Elia · M1 ~25:25

Constraint 04
Connector status open

Three load-bearing 4K tools — ClickUp, Notion, Forecast — are not in our productized MCP family today. ClickUp and Notion have vendor MCPs (adoption, not invention). Forecast is the real exception: no public API, vendor disclaims undocumented use.

Four tool types we build with, plus one we configure

Every build item maps to one of four tool types — or to a fifth implicit category for things that aren't tools-we-build (configuration artifacts, setup activities, organizational process work). The Capabilities page is organized by these types.

Tool type 01
Skills

SKILL.md files in Claude Teams projects. Encode workflows, prompts, decision logic, output format, business rules. Invoked by a user or by a Managed Agent. Consume data through MCPs at invocation; don't run on a schedule themselves.

Most of the Tier-1-style operational items are Skills.
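As a sketch of the shape (section names and rubric details are illustrative assumptions, not an agreed deliverable), a Skill for the sentiment-scoring workflow might look like:

```markdown
# SKILL: Engagement sentiment scoring (illustrative sketch)

## Trigger
Invoked by a PM, or by a Managed Agent on the biweekly/monthly cycle.

## Inputs (pulled via MCPs at invocation)
- Recent client threads (Slack MCP)
- Last scorecard entry (Notion MCP)

## Decision logic
Shanice's currently-implicit 1–10 rubric, written down as explicit
weighted criteria (see "Knowledge in people's heads").

## Output format
Score (1–10), one-line rationale, supporting quotes.

## Retires:
The gut-reaction scoring step and its implicit weights.
```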

Tool type 02
Custom MCP Connectors

Lambda-deployed services exposing tools to Claude via MCP protocol. Our RunReport pattern (dedicated tools + code-generation for open-ended queries). Worth building even when an off-the-shelf vendor MCP exists, for control: auth, rate limiting, response normalization, read-only enforcement, security via AST validation.
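The "security via AST validation" point can be sketched minimally. This illustrates the technique, not the production RunReport validator; the allow-list contents are placeholders. Generated query code is parsed and rejected if it imports anything or calls anything outside an allow-list (method calls like `obj.write(...)` fail too, since only bare allow-listed names pass).

```python
import ast

ALLOWED_CALLS = {"sum", "len", "sorted", "min", "max"}  # placeholder allow-list


def validate_readonly(source: str) -> None:
    """Reject generated query code that imports modules or calls
    anything outside the allow-list. Raises ValueError on violation."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("imports are not allowed")
        if isinstance(node, ast.Call) and not (
            isinstance(node.func, ast.Name) and node.func.id in ALLOWED_CALLS
        ):
            raise ValueError("call outside allow-list")
```

Usage: `validate_readonly("total = sum(entries)")` passes silently, while `validate_readonly("import os")` raises.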

Tool type 03
Managed Agents

Scheduled or event-driven runs in our frontier-agents Lambda platform. Each gets a .md job file with cron schedule, system prompt, MCP attachments, delivery channels. Agents compose Skills and MCPs to do work that needs to happen on its own.
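For concreteness, a hypothetical job file (field names are guesses at the shape described above, not the actual frontier-agents schema):

```markdown
---
# Hypothetical fields; the real platform schema may differ.
schedule: "0 13 * * 1"              # cron: every Monday, 13:00
mcps: [harvest, notion, slack]      # connectors attached at run time
deliver_to: ["#leadership-health"]  # leadership-only, per Pattern 06
---

# Client Health Agent (sketch)

System prompt: review each engagement's latest signals, invoke the
sentiment-scoring Skill where needed, and flag health changes.
Never route PM-comparison data to a non-leadership channel.
```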

Tool type 04
Custom Apps

Completely custom systems with persistent state, ingestion pipelines, data stores, possibly web UIs. The CKA is the canonical example. Where the largest engineering risk and the highest leverage both sit.

Fifth category
Configuration / Setup / Process

Real engagement deliverables that aren't tools-we-build: INSTRUCTIONS files (configuration artifacts), Claude Teams workspace setup (setup activity), Newfangled-side NIST 800-53 alignment posture (organizational process work), Finance HQ discovery sprint (human consulting work). The four tool types don't cover these; the Capabilities page gives them their own section.

Read in order

The intended reading sequence is left-to-right across the four pages. Each section answers a different question.