Simplifications

What 4K Should Clean Up First

Every soft-information gap and every "we need to reconcile across systems" line in the capability inventory is a place where 4K's current state makes their own life harder AND limits what we can build effectively. This page surfaces the highest-leverage simplifications. Most are valuable for 4K independent of any AI work; several are succession-risk reductions or single-source-of-truth cleanups they should do regardless. Each one that happens makes our build materially simpler and the resulting system better.

Framing for the 4K conversation. Don't present these as "things we need from you." Present them as "things that would benefit you whether we build anything or not — but if you do them, our work gets materially easier and the resulting system works better." That framing positions Newfangled as advisors to their operation, not just AI builders. And every one of these that gets done before we start building is a complication we don't have to engineer around.

Effort vs. impact at a glance

Eight simplifications. Some take a few days; one is essentially free if 4K is already planning to act on it. Several would benefit 4K independent of any AI work — they're operational hygiene wins. The downstream impact column references items from the Capabilities page.

#  | Simplification                                                    | Effort for 4K                  | Downstream impact on our build
01 | Unified PSA when replacing Harvest                                | None extra (already replacing) | Massive — kills #21, simplifies #22, #24
02 | Canonical engagement IDs + registry                               | Days                           | High — clean #13, #24, #28
03 | Document implicit rules (Beto's formulas, Shanice's rubric, etc.) | Weeks of writing               | High — #02, #07, #24, #26 all benefit
04 | Drive folder cleanup (SOWs, rate cards, etc.)                     | Days                           | Medium-high — #05, #08, #28
05 | Notion DB linkage                                                 | Days                           | Medium — #28 ingestion gets cleaner
06 | Resolve Zoom + MSA policy questions                               | A meeting each                 | Medium — unblocks #31, #32
07 | PM projection-update SOP + related behavioral SOPs                | Cultural change (hardest)      | Medium — addresses root cause of #22 pain
08 | Template consolidation evaluation                                 | Internal exercise              | Low — #23 accommodates either way
Sequencing thought. Items 01, 02, 05, and 06 are low-effort decisions or cleanups that could happen in parallel with our early engagement work. Item 03 (documentation) is the most time-consuming for 4K but also the most foundational. Item 07 (behavioral SOPs) is cultural and takes the longest — but it doesn't block our build; it just changes how much value the build provides.

Pick a unified PSA when replacing Harvest

The Harvest replacement decision is the highest-leverage simplification 4K can make — and the only one that's essentially free if they're already replacing the tool anyway.

Today's reality
Six surfaces have to agree

4K runs two systems (Harvest for actuals, Forecast for planning) plus Teresa's CC sheet, the invoicing sheet, Beto's master sheet, the Pipeline-and-Active-Projects sheet, and the Notion scorecards — at least six surfaces that have to agree on what's happening with each engagement.

If 4K picks unified PSA
Several problems collapse simultaneously
  • Forecast-undocumented-API problem disappears — Productive has a public API
  • Harvest-Forecast name-format drift disappears — one system, one canonical name
  • Teresa's CC sheet stops being a separate input — CC engagements live in the same system as Build engagements, with engagement-type as a property
  • Beto's master-sheet reconciliation mostly disappears — most of his formulas exist because data lives in too many places
  • Sales-to-delivery handoff becomes one record's stage change, not a manual transfer between systems
Downstream effect on our build: Item #21 (Forecast watched-folder pipeline) goes away. Item #24 (Scorecard Updater) gets significantly simpler — one source for time + planning instead of three. Item #22 (Pipeline upstream consolidation) gets simpler — the consolidation already happens inside the PSA. The entire reconciliation layer in #24's "Beto's master sheet formulas migrated to code" workstream mostly disappears.
Recommendation to put to 4K: when evaluating Harvest replacements, weight "does this consolidate things we currently maintain across 3–6 surfaces?" higher than "does this match Harvest's specific feature surface?" The status quo is the expensive position, not the safe one. Per research/harvest-replacements.md, Productive is the strongest candidate; Scoro and Bonsai are also viable; Replicon fits if enterprise scale becomes relevant.

Canonical engagement IDs across systems

Every engagement has a different identifier and likely a different name in Harvest, Forecast, ClickUp, Notion, Pipedrive, Slack, and Drive. Beto has a reconciliation formula. PMs eyeball matches. The $20K → $15K reconciliation miss was partly a consequence of this drift.

What 4K should do
Three steps
  • Pick one canonical slug per engagement (e.g., acme-2026-cms-rebuild). Use it in every system that supports a custom field or naming convention.
  • Maintain a registry — a Notion DB or a Drive sheet — that maps the canonical slug to each system's native ID (Harvest project ID, ClickUp space/list ID, Pipedrive org/deal ID, Notion engagement-DB page ID, Slack channel IDs both modes, Drive folder ID).
  • Make creating a new engagement a small workflow that issues the canonical slug first and populates the registry; everything else flows from that.
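The three steps above can be sketched in a few lines. This is a minimal illustration, not a spec — the slug format, field names, and IDs are all assumptions to be replaced by whatever convention 4K actually adopts:

```python
import re
from dataclasses import dataclass, field
from typing import Optional

def make_slug(client: str, year: int, descriptor: str) -> str:
    """Build a canonical engagement slug like 'acme-2026-cms-rebuild'.

    Lowercases, collapses punctuation/whitespace to hyphens, so the same
    inputs always produce the same identifier.
    """
    def clean(s: str) -> str:
        return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
    return f"{clean(client)}-{year}-{clean(descriptor)}"

@dataclass
class RegistryEntry:
    """One row of the slug-to-native-ID registry (field set illustrative)."""
    slug: str
    harvest_project_id: Optional[str] = None
    clickup_list_id: Optional[str] = None
    pipedrive_deal_id: Optional[str] = None
    notion_page_id: Optional[str] = None
    drive_folder_id: Optional[str] = None
    slack_channel_ids: list = field(default_factory=list)  # both modes

# Issue the slug first; populate native IDs as each system gets set up.
entry = RegistryEntry(slug=make_slug("Acme", 2026, "CMS Rebuild"))
entry.pipedrive_deal_id = "deal-4821"  # hypothetical native ID
print(entry.slug)  # acme-2026-cms-rebuild
```

Whether the registry lives in a Notion DB or a Drive sheet, the important property is the one the sketch shows: the slug is issued before any system-native ID exists, and everything else hangs off it.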
Why this benefits 4K independently
Succession-proofing

When Beto's not around, anyone can figure out which Pipedrive deal corresponds to which Harvest project. The current "ask Beto" pattern is a continuity risk.

Plus: as a 4K-internal cleanup, this is the kind of operational discipline that compounds. New engagements start clean from the day the practice is adopted.

Downstream effect on our build: Item #13 (per-engagement INSTRUCTIONS headers) becomes trivial to maintain. Item #24 (Scorecard Updater) doesn't need Beto's reconciliation formulas for engagement-name normalization. Item #28 (CKA) gets clean per-engagement scoping without us reverse-engineering it. Every skill that does cross-system lookups stops paying the name-drift tax.

Move implicit knowledge to written, structured form

Several of the most important inputs to our build don't exist anywhere except in specific people's heads. Writing them down has value for 4K regardless of us — institutional-knowledge preservation, training material for new hires, disambiguation for ambiguous cases. This is the most time-consuming simplification but also the most foundational.

Knowledge artifact                            | Who owns it today | Why writing it down matters (independent of AI)                                                                                                                                                  | Which build items benefit
Master sheet reconciliation formulas          | Beto (only)       | Accumulated over years. Succession risk — nobody else can run the scorecard process if Beto is out for a week.                                                                                   | #24 (Scorecard Updater)
Sentiment scoring rubric                      | Shanice (only)    | Implicit weights (client tone, deliverable confidence, relationship signals, escalation likelihood). Making it explicit improves her own consistency and lets others score if she's unavailable. | #02 (Sentiment scoring assistant)
"Really unhappy client" alarm criteria        | Beto (only)       | What specific signals trigger his concern at the Roundtable? Documenting these improves leadership succession.                                                                                   | #07 (Roundtable agenda), #26 (Client Health Agent)
Engagement-type-vs-risk correlation rule      | Beto's experience | {Fixed Fee, T&M w/ end-dates} → higher risk; CC and Staff Aug retainers → mostly positive. This is real intelligence about how 4K's engagements work.                                            | #01 (Engagement-type classifier), #26
Engagement-type classification rules          | Implicit in heads | What distinguishes Fixed Fee from Build for a project with a defined deliverable? What is "Pixie"? Documenting this is a training artifact for new PMs.                                          | #01
Rate-card-application rules                   | Si (only)         | How she picks the right card for a new engagement. Implicit logic.                                                                                                                               | #05 (Rate-card lookup), #27 (SiBorg refactor)
Late-invoice escalation language + thresholds | Peke + Jade       | Written down once, used everywhere — and trains future admin team members.                                                                                                                       | #14, #19 (Late-invoice / AR)
Pitch to 4K: "These are operating-knowledge artifacts you should have anyway. The fact that they live in specific people's heads is a continuity risk. Write them down once; reap the benefit forever — even if you never build any AI on top of them."

Clean up the Drive structure for source-of-truth documents

Several skills depend on documents being findable. Today they may be scattered. One-time cleanup that pays back forever — both for our skills and for any human who needs to find a contract.

Documents to organize
Four canonical surfaces
  • SOWs and contracts — one canonical folder, predictable naming, amendments linked to the parent SOW
  • Rate cards — one canonical folder, dated versions (so the skill can pick the applicable card by engagement date)
  • Boilerplate language — one canonical folder for the patterns Si and Joanna actually use in proposals
  • Brand assets — same pattern, standardized location
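The rate-card point deserves a concrete illustration: once cards live in one folder with dated versions, "pick the applicable card by engagement date" becomes a trivial, deterministic lookup. The filename convention below (rate-card-YYYY-MM-DD.pdf, where the date is the effective-from date) is an assumption, not 4K's actual scheme:

```python
from datetime import date

# Dated rate-card versions, as they might appear in the canonical folder.
rate_cards = [
    "rate-card-2023-01-15.pdf",
    "rate-card-2024-06-01.pdf",
    "rate-card-2025-02-10.pdf",
]

def applicable_card(cards: list, engagement_start: date) -> str:
    """Pick the most recent card effective on or before the engagement start."""
    def effective_date(name: str) -> date:
        stem = name.removesuffix(".pdf").removeprefix("rate-card-")
        return date.fromisoformat(stem)
    eligible = [c for c in cards if effective_date(c) <= engagement_start]
    if not eligible:
        raise ValueError("no rate card in effect on that date")
    return max(eligible, key=effective_date)

print(applicable_card(rate_cards, date(2024, 9, 1)))
# rate-card-2024-06-01.pdf
```

Without the dated-folder convention, the same lookup requires guessing which of several scattered files is current — exactly the "outdated rate card" failure mode this cleanup removes.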
Practical move
8 hours of cleanup; weeks of saved engineering

Before we start building, ask Beto or whoever owns ops to do a focused one-day pass: structure the contracts folder, structure the rate-cards folder, link amendments to parent SOWs.

Costs them maybe 8 hours; saves us weeks of "the skill produced the wrong answer because it found an outdated rate card."

Downstream effect on our build: Item #08 (SOW Q&A) goes from fuzzy to trivial. Item #05 (Rate-card lookup) becomes reliable. Item #28 (CKA's tagged-subset ingestion) gets clean inputs.

Link the Notion DBs that should be related

4K has multiple Notion databases that conceptually relate but aren't explicitly linked. Notion-internal cleanup, no tool work needed — just somebody (Shanice + Beto, probably) deciding the relation model and adding the properties.

DBs to link
Four databases, currently disconnected
  • Engagement DB — temperature, status, basic metadata
  • Client-success scoring DB — Shanice's 1:1 check-in summaries (separate from engagement DB)
  • Scorecards DB — per-metric per-week
  • Partners DB — vendors, contractors

Link them via Notion Relation properties — engagement → its scoring entries, engagement → its scorecard rows, engagement → its partners involved.

Bonus
Free improvement to 4K's own Notion workspace

Notion's Relation properties enable Rollup properties — counts, sums, aggregates across linked DBs. 4K could get useful "summary" views in Notion itself just from this cleanup, before any AI work runs on top.

Example: a per-engagement page that automatically rolls up its scoring history, recent scorecard performance, and partner-cost totals. Possible today; just needs the linkage.
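For whoever does the linkage, the payload shapes below sketch what the two property additions look like through Notion's public API ("Update a database", PATCH /v1/databases/{database_id}). Database IDs and property names are placeholders, and the exact schema should be verified against current Notion API docs before use:

```python
import json

add_relation_payload = {
    "properties": {
        "Scoring entries": {
            "relation": {
                "database_id": "SCORING-DB-UUID",  # placeholder
                # dual_property also creates the back-reference on the
                # scoring DB, so each scoring entry links to its engagement.
                "dual_property": {},
            }
        }
    }
}

# Once the relation exists, a Rollup property can aggregate across it —
# e.g. a count of scoring entries per engagement:
add_rollup_payload = {
    "properties": {
        "Scoring entry count": {
            "rollup": {
                "relation_property_name": "Scoring entries",
                "rollup_property_name": "Name",
                "function": "count",
            }
        }
    }
}

print(json.dumps(add_relation_payload, indent=2))
```

The point is scale of effort: each relation is one small schema change, doable from the Notion UI just as easily as from the API — no tool work, as noted above.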

Downstream effect on our build: Item #28 (CKA) ingestion gets cleaner traversal at indexing time. Skills get cleaner context at query time. No need for us to infer relationships from naming similarity.

Resolve the policy questions explicitly

Two policy questions block real work for us AND create ambiguity for 4K. Worth resolving for both reasons.

Question 01
Zoom 24-hour retention interpretation

Does the policy apply to recordings only, or recordings + transcripts? Right now even 4K can't fully describe their own policy boundary. The data agreement exists; the interpretation isn't unified.

Writing down "recordings deleted in 24h; transcripts retained because they're a separate artifact" or "everything deleted in 24h, no exceptions" reduces 4K's ambiguity AND unblocks #31 (Zoom transcript pipeline).

Question 02
MSA confidentiality language

4K's default MSA template probably doesn't specifically address whether AI processing of communications is permitted. Either:

  • It implicitly forbids such processing → that bounds what we can do with shared Slack channels (#32) and any future expansion
  • It's silent → 4K is operating in a gray zone they should resolve

This is a legal review 4K should do anyway. If they're going to be working with AI-assisted client engagements going forward, knowing what their MSA permits is a baseline operational question.

One-time decisions; long-term clarity. These aren't "things we need from 4K." They're things 4K should resolve regardless of whether we build anything. The Zoom interpretation question and the MSA language question both have answers that are useful for 4K's own operation.

Behavioral SOPs — fix the root cause, not the symptom

The hardest simplification because it's cultural change, not configuration. But it's the actual root cause of several of the financial-reconciliation pains Elia called out. Our automations can catch the drift; only 4K can change the behavior.

SOP 01
PM projection-update cadence

The $20K → $15K reconciliation problem isn't a tool problem — it's that PMs aren't updating projections on time. Our automation (#22) catches the drift but doesn't fix the behavior.

Recommendation: establish an SOP that projections are updated within X days of any scope change, invoice issuance, or stage transition. Make it part of the PM's regular workflow (recurring reminder or a stage-gate in ClickUp). Surface compliance through Build 2 Client Health Agent (#26) — flag PMs whose projections are stale.
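The compliance check described in the recommendation is mechanically simple. A hedged sketch (the 5-day window, field names, and data shape are all illustrative; the real check would read from the PSA):

```python
from datetime import date, timedelta

STALENESS_WINDOW = timedelta(days=5)  # the SOP's "X days"; value illustrative

def stale_projections(engagements: list, today: date) -> list:
    """Return slugs whose projection wasn't updated after the most recent
    triggering event (scope change, invoice, stage transition) within the
    allowed window."""
    flagged = []
    for e in engagements:
        last_event = max(e["trigger_events"])
        deadline = last_event + STALENESS_WINDOW
        if e["projection_updated"] < last_event and today > deadline:
            flagged.append(e["slug"])
    return flagged

engagements = [
    {"slug": "acme-2026-cms-rebuild",          # hypothetical engagement
     "trigger_events": [date(2026, 3, 1), date(2026, 3, 20)],
     "projection_updated": date(2026, 3, 2)},  # stale: predates last event
    {"slug": "globex-2026-staff-aug",
     "trigger_events": [date(2026, 3, 18)],
     "projection_updated": date(2026, 3, 19)},  # updated after the event
]

print(stale_projections(engagements, today=date(2026, 4, 1)))
# ['acme-2026-cms-rebuild']
```

The hard part isn't this logic — it's getting PMs to generate the trigger events and updates in the first place, which is exactly why this item is cultural rather than technical.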

SOP 02
Engineer Harvest-entry cadence

Engineers logging multiple tickets per Harvest entry creates the multi-ticket disambiguation problem that Joanna and others manually clean up.

Recommendation: SOP for engineers — one ticket per Harvest entry. Eliminates the disambiguation rule that #09 and #23 have to work around. Tiny behavioral change; large downstream simplification.
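Compliance with the one-ticket rule is also easy to spot-check automatically. A sketch, assuming Jira-style ticket keys (ABC-123); the regex and entry shape are assumptions, not 4K's actual Harvest data:

```python
import re

TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. ABC-123

def multi_ticket_entries(entries: list) -> list:
    """Flag Harvest entries whose notes reference more than one ticket ID."""
    return [e for e in entries if len(set(TICKET_RE.findall(e["notes"]))) > 1]

entries = [
    {"id": 1, "notes": "ABC-123 fix login redirect"},
    {"id": 2, "notes": "ABC-124, ABC-131: misc bug fixes"},  # violates the SOP
]

print([e["id"] for e in multi_ticket_entries(entries)])  # [2]
```

A weekly report from a check like this would give the SOP teeth without requiring anyone to police entries by hand.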

SOP 03
Engagement DB update cadence

The "DB rots" problem (per Joanna) exists because Notion engagement DB updates aren't part of any workflow.

Recommendation: updating happens as part of project lifecycle (kickoff, weekly cadence, stage transitions, closeout) rather than ad-hoc. The substrate's recency-weighted reconciliation logic in #28 still helps, but the underlying data quality is better to start.

Honesty about this
Behavioral change is slow

None of these SOPs are technical. All of them require leadership commitment, repeated reinforcement, and probably 3–6 months to take root. The build doesn't depend on them happening — it just gets more value when they do.

We'd surface these as "things we observe will improve outcomes," not as gating dependencies.

Per-PM template consolidation evaluation

Pattern 4 from the audit says process literacy is unevenly distributed, which is why PMs have their own templates. The audit said to accommodate, not reform, and that's the right call for our build (#23 Client Report Assistant accommodates variance rather than forcing uniformity).

But 4K could ask themselves separately: of the variance that exists, how much is genuinely client-specific (real, keep it) vs. how much is just "PM A and PM B prefer different layouts but the content is the same" (waste, consolidate)? If they sort that out, they may find half their template variance can be consolidated without losing real value. Half the variance, half the maintenance burden.

This is a 4K-internal evaluation that doesn't need our involvement. Worth flagging as a "while you're here" simplification. Our build accommodates either outcome.