Every soft-information gap and every "we need to reconcile across systems" line in the capability inventory is a place where 4K's current state makes their own life harder and limits what we can build effectively. This page surfaces the highest-leverage simplifications. Most are valuable to 4K independent of any AI work — succession-risk reductions and single-source-of-truth cleanups they should do regardless — and every one that happens makes our build materially simpler and the resulting system better.
Eight simplifications in total. Some take a few days; one is essentially free if 4K is already planning to act on it. The downstream-impact column references items from the Capabilities page.
| # | Simplification | Effort for 4K | Downstream impact on our build |
|---|---|---|---|
| 01 | Unified PSA when replacing Harvest | None extra (already replacing) | Massive — kills #21, simplifies #22, #24 |
| 02 | Canonical engagement IDs + registry | Days | High — clean #13, #24, #28 |
| 03 | Document implicit rules (Beto's formulas, Shanice's rubric, etc.) | Weeks of writing | High — #02, #07, #24, #26 all benefit |
| 04 | Drive folder cleanup (SOWs, rate cards, etc.) | Days | Medium-high — #05, #08, #28 |
| 05 | Notion DB linkage | Days | Medium — #28 ingestion gets cleaner |
| 06 | Resolve Zoom + MSA policy questions | A meeting each | Medium — unblocks #31, #32 |
| 07 | PM projection-update SOP + related behavioral SOPs | Cultural change (hardest) | Medium — addresses root cause of #22 pain |
| 08 | Template consolidation evaluation | Internal exercise | Low — #23 accommodates either way |
The Harvest replacement decision is the highest-leverage simplification 4K can make — and the only one that's essentially free if they're already replacing the tool anyway.
4K runs two systems (Harvest for actuals, Forecast for planning) plus Teresa's CC sheet, the invoicing sheet, Beto's master sheet, the Pipeline-and-Active-Projects sheet, and the Notion scorecards — at least six surfaces that have to agree on what's happening with each engagement.
See research/harvest-replacements.md for candidates; Scoro and Bonsai are also viable, and Replicon fits enterprise scale.
Every engagement has a different identifier and likely a different name in Harvest, Forecast, ClickUp, Notion, Pipedrive, Slack, and Drive. Beto has a reconciliation formula. PMs eyeball matches. The $20K → $15K reconciliation miss was partly a consequence of this drift.
Pick one canonical ID per engagement (e.g. acme-2026-cms-rebuild) and use it in every system that supports a custom field or naming convention. Then, when Beto's not around, anyone can figure out which Pipedrive deal corresponds to which Harvest project. The current "ask Beto" pattern is a continuity risk.
Plus, as a 4K-internal cleanup, this is the kind of operational discipline that compounds: from the day the practice starts, every new engagement begins clean.
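To make the idea concrete, here is a minimal sketch of what a canonical-ID convention could look like in code. The slug format (`<client>-<year>-<short-name>`, lowercase, hyphen-separated) and the helper names are our assumptions for illustration — 4K would pick the actual convention.

```python
import re

# Assumed slug shape: lowercase alphanumeric segments joined by hyphens,
# e.g. acme-2026-cms-rebuild. This is a proposed convention, not an
# existing 4K standard.
SLUG_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")


def make_engagement_id(client: str, year: int, name: str) -> str:
    """Build a canonical engagement slug from client, year, and short name."""
    raw = "-".join([client, str(year), name])
    # Collapse anything that isn't lowercase alphanumeric into a hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", raw.lower()).strip("-")
    if not SLUG_RE.match(slug):
        raise ValueError(f"could not build a valid slug from {raw!r}")
    return slug


def is_canonical(candidate: str) -> bool:
    """True if a string already follows the canonical slug convention."""
    return bool(SLUG_RE.match(candidate))
```

A registry then becomes a single table mapping each slug to the engagement's name in Harvest, ClickUp, Notion, Pipedrive, and Drive — the lookup that currently lives in Beto's head.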
Several of the most important inputs to our build don't exist anywhere except in specific people's heads. Writing them down has value for 4K regardless of us — institutional-knowledge preservation, training material for new hires, disambiguation for ambiguous cases. This is the most time-consuming simplification but also the most foundational.
| Knowledge artifact | Who owns it today | Why writing it down matters (independent of AI) | Which build items benefit |
|---|---|---|---|
| Master sheet reconciliation formulas | Beto (only) | Accumulated over years. Succession risk — nobody else can run the scorecard process if Beto is out for a week. | #24 (Scorecard Updater) |
| Sentiment scoring rubric | Shanice (only) | Implicit weights (client tone, deliverable confidence, relationship signals, escalation likelihood). Making it explicit improves her own consistency and lets others do scoring if she's unavailable. | #02 (Sentiment scoring assistant) |
| "Really unhappy client" alarm criteria | Beto (only) | What specific signals trigger his concern at the Roundtable? Documenting these improves leadership succession. | #07 (Roundtable agenda), #26 (Client Health Agent) |
| Engagement-type-vs-risk correlation rule | Beto's experience | {Fixed Fee, T&M w/ end-dates} → higher risk; CC, Staff Aug retainers → mostly positive. This is real intelligence about how 4K's engagements work. | #01 (Engagement-type classifier), #26 |
| Engagement-type classification rules themselves | Implicit in heads | What distinguishes Fixed Fee from Build for a project with a defined deliverable? What is "Pixie"? Documenting this is a training artifact for new PMs. | #01 |
| Rate-card-application rules | Si (only) | How she picks the right card for a new engagement. Implicit logic. | #05 (Rate-card lookup), #27 (SiBorg refactor) |
| Late-invoice escalation language + thresholds | Peke + Jade | Written down once, used everywhere — and trains future admin team members. | #14, #19 (Late-invoice / AR) |
Several skills depend on documents being findable. Today they may be scattered. One-time cleanup that pays back forever — both for our skills and for any human who needs to find a contract.
Before we start building, ask Beto or whoever owns ops to do a 2-day pass: structure the contracts folder, structure the rate-cards folder, link amendments to parent SOWs.
Costs them a couple of focused days; saves us weeks of "the skill produced the wrong answer because it found an outdated rate card."
4K has multiple Notion databases that conceptually relate but aren't explicitly linked. Notion-internal cleanup, no tool work needed — just somebody (Shanice + Beto, probably) deciding the relation model and adding the properties.
Link them via Notion Relation properties — engagement → its scoring entries, engagement → its scorecard rows, engagement → its partners involved.
Notion's Relation properties enable Rollup properties — counts, sums, aggregates across linked DBs. 4K could get useful "summary" views in Notion itself just from this cleanup, before any AI work runs on top.
Example: a per-engagement page that automatically rolls up its scoring history, recent scorecard performance, and partner-cost totals. Possible today; just needs the linkage.
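For reference, adding such a relation is a single call against Notion's public API. The sketch below only builds the request body for the "Update a database" endpoint; the property name and database IDs are placeholders, and the exact relation-config fields should be checked against the current Notion API version before use.

```python
import json

# Notion API version header value; confirm against current Notion docs.
NOTION_VERSION = "2022-06-28"


def relation_property_patch(property_name: str, target_database_id: str) -> dict:
    """Body for PATCH /v1/databases/{id} that adds a two-way relation
    property pointing at another database."""
    return {
        "properties": {
            property_name: {
                "relation": {
                    "database_id": target_database_id,
                    # dual_property makes the relation visible from both
                    # databases (e.g. engagement <-> scoring entries).
                    "dual_property": {},
                }
            }
        }
    }


# Placeholder IDs, not real 4K databases.
payload = relation_property_patch(
    "Engagement", "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
)
print(json.dumps(payload, indent=2))
```

Once the relation exists, the Rollup properties (counts, sums, latest-date aggregates) are configured in the Notion UI with no further API work.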
Two policy questions block real work for us AND create ambiguity for 4K. Worth resolving for both reasons.
Does the policy apply to recordings only, or recordings + transcripts? Right now even 4K can't fully describe their own policy boundary. The data agreement exists; the interpretation isn't unified.
Writing down "recordings deleted in 24h; transcripts retained because they're a separate artifact" or "everything deleted in 24h, no exceptions" reduces 4K's ambiguity AND unblocks #31 (Zoom transcript pipeline).
4K's default MSA template probably doesn't specifically address whether AI processing of client communications is permitted — and 4K needs to know either way.
This is a legal review 4K should do anyway. If they're going to be working with AI-assisted client engagements going forward, knowing what their MSA permits is a baseline operational question.
The hardest simplification because it's cultural change, not configuration. But it's the actual root cause of several of the financial-reconciliation pains Elia called out. Our automations can catch the drift; only 4K can change the behavior.
The $20K → $15K reconciliation problem isn't a tool problem — it's that PMs aren't updating projections on time. Our automation (#22) catches the drift but doesn't fix the behavior.
Recommendation: establish an SOP that projections are updated within X days of any scope change, invoice issuance, or stage transition. Make it part of the PM's regular workflow (recurring reminder or a stage-gate in ClickUp). Surface compliance through Build 2 Client Health Agent (#26) — flag PMs whose projections are stale.
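The compliance check behind that flag is simple. Here is a minimal sketch, assuming a hypothetical SOP window of 5 days and an engagement record that carries the date of the last triggering event and the date of the last projection update — field names and the threshold are our placeholders, not 4K data.

```python
from datetime import date, timedelta

# Hypothetical SOP parameter: projections must be refreshed within this
# many days of a scope change, invoice issuance, or stage transition.
MAX_STALENESS_DAYS = 5


def stale_projections(engagements: list[dict], today: date) -> list[str]:
    """Return engagement IDs whose projection update lags the most recent
    triggering event by more than MAX_STALENESS_DAYS.

    Each engagement dict is assumed to carry:
      id                 - canonical engagement ID
      last_trigger       - date of last scope change / invoice / stage move
      projection_updated - date the PM last refreshed the projection
    """
    flagged = []
    for e in engagements:
        if e["projection_updated"] < e["last_trigger"]:
            deadline = e["last_trigger"] + timedelta(days=MAX_STALENESS_DAYS)
            if today > deadline:
                flagged.append(e["id"])
    return flagged
```

The Client Health Agent (#26) would run a check like this on a schedule and surface the flagged list per PM.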
Engineers logging multiple tickets per Harvest entry creates the multi-ticket disambiguation problem that Joanna and others manually clean up.
Recommendation: SOP for engineers — one ticket per Harvest entry. Eliminates the disambiguation rule that #09 and #23 have to work around. Tiny behavioral change; large downstream simplification.
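Compliance with that SOP is also cheap to monitor. A sketch, assuming ticket references appear in Harvest entry notes with a `CU-NNNN` pattern — the prefix and the field names are placeholders for whatever 4K's ClickUp IDs actually look like:

```python
import re

# Assumed ticket-reference pattern in Harvest entry notes, e.g. "CU-1234".
# Placeholder format; adjust to 4K's real ClickUp ticket IDs.
TICKET_RE = re.compile(r"\bCU-\d+\b")


def multi_ticket_entries(entries: list[dict]) -> list[dict]:
    """Flag Harvest entries whose notes reference more than one ticket,
    i.e. entries that violate the proposed one-ticket-per-entry SOP."""
    flagged = []
    for entry in entries:
        tickets = set(TICKET_RE.findall(entry.get("notes", "")))
        if len(tickets) > 1:
            flagged.append({"entry_id": entry["id"], "tickets": sorted(tickets)})
    return flagged
```

Running this weekly and posting the flagged entries back to the engineers closes the loop without anyone doing manual cleanup.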
The "DB rots" problem (per Joanna) exists because Notion engagement DB updates aren't part of any workflow.
Recommendation: updating happens as part of the project lifecycle (kickoff, weekly cadence, stage transitions, closeout) rather than ad hoc. The substrate's recency-weighted reconciliation logic in #28 still helps, but it starts from better underlying data.
None of these SOPs are technical. All of them require leadership commitment, repeated reinforcement, and probably 3–6 months to take root. The build doesn't depend on them happening — it just gets more value when they do.
We'd surface these as "things we observe will improve outcomes," not as gating dependencies.
Pattern 4 from the audit says process literacy is unevenly distributed and that's why PMs have their own templates. The audit said to accommodate, not reform. That's the right call for our build (#23 Client Report Assistant accommodates variance rather than forcing uniformity).
But 4K could ask themselves separately: of the variance that exists, how much is genuinely client-specific (real, keep it) vs. how much is just "PM A and PM B prefer different layouts but the content is the same" (waste, consolidate)? If they sort that, they may find half their template variance is consolidatable without losing real value. Half the variance, half the maintenance burden.