4K describes itself as NIST 800-53 aligned (not certified or authorized). To credibly deliver to them — and to any future regulated-adjacent client — Newfangled needs its own documented alignment posture. This work is a Frontier business capability, not a 4K-specific cost. The numbers below are recalibrated for a GRC platform + heavy AI assistance stack (browser automation for audits, AI-assisted writing, configuration analysis, evidence collection scripting). The AI leverage assumption knocks ~40% off the internal-hours estimate. Hard costs barely move — most external spend is for things AI can't do (pen test, platform license, endpoint tooling).
These terms are often conflated; they're meaningfully different. 4K is aligned. We don't need to be certified to deliver to them. We need to be credibly aligned.
Aligned means the organization has adopted 800-53 as its security control framework, implements controls (a Low baseline tailored to context), documents the implementation in an SSP and control matrix, self-assesses, and maintains ongoing discipline.
Required: Documentation, disciplined implementation, evidence catalogs, internal assessment.
Cost: $33–48K Year 1 + $25–35K/yr ongoing + 350–475 internal hrs Year 1.
Timeline: 7–8 months at 12 hrs/week to credible posture.
Certified/authorized means a third-party assessor reviews the implementation and a regulator issues an Authorization to Operate (ATO) under FedRAMP, FISMA, or an equivalent regime.
Required: All of the above PLUS third-party assessment, formal authorization process, ongoing continuous monitoring under the authorization.
Cost: $500K–$2M+ direct.
Timeline: 12–24 months.
Certification is not on the table for 4K work; 4K isn't certified either.
Two numbers per year: external spend (hard costs we pay), and internal hours (Newfangled time). Internal hours are the AI-assisted figures — see §05 for the without-AI counterfactual and §06 for where AI gives the biggest leverage.
| | Year 1 | Year 2+ | Notes |
|---|---|---|---|
| Internal hours | 350–475 hrs | 280–400 hrs/yr | AI-assisted. ~9–13 hrs/week across team for 9 months in Year 1; ~150 compliance-specific hrs/yr at steady state (rest overlaps with normal eng ops). |
| External spend | $33–48K | $25–35K/yr | Year 1 includes a one-time vCISO readiness review (~$8–10K). Steady-state recurring is GRC platform + pen test + endpoint + training + scanning. |
Most of these are recurring — the platform and pen test reset each year, endpoint and MDM are per-seat subscriptions. Only the vCISO review and one-time background checks meaningfully differ between Year 1 and Year 2+.
| Item | Year 1 | Year 2+ | Notes |
|---|---|---|---|
| GRC platform (Drata Foundation recommended) | $8–12K | $8–12K | Native NIST 800-53 framework support; engineering-team oriented. See §04 for platform choice. |
| Annual pen test | $8–12K | $8–12K | Grey-box web app + API + light AWS config review. |
| EDR (CrowdStrike Falcon Go / SentinelOne, ~25 endpoints) | $2–3K | $2–3K | Required for the SI family (system & information integrity). |
| MDM (Kandji for Mac, Intune for Windows mix) | $2–3K | $2–3K | Required for endpoint encryption, patching, inventory. |
| Security awareness training (KnowBe4 or similar) | $600–900 | $600–900 | $30–45/user/yr × ~20 users. |
| Vulnerability scanning (Snyk for deps + AWS Inspector) | $3–5K | $3–5K | Inspector cheap; Snyk is most of the cost. |
| Secrets management (Doppler or AWS Secrets Manager) | $1–2K | $1–2K | Skip if AWS Secrets Manager alone is enough. |
| Background checks (Checkr) | $200–500 | per-hire | $50–100/hire. |
| Required subtotal | $25–38K | $25–38K | |
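The subtotal is easy to sanity-check mechanically. A minimal sketch, with the (low, high) ranges copied from the rows above in $K/year (background checks use the Year 1 figure):

```python
# Sanity-check the required-subtotal range against the line items above.
# Ranges are (low, high) in $K/year.
line_items = {
    "GRC platform": (8, 12),
    "Annual pen test": (8, 12),
    "EDR": (2, 3),
    "MDM": (2, 3),
    "Security awareness training": (0.6, 0.9),
    "Vulnerability scanning": (3, 5),
    "Secrets management": (1, 2),
    "Background checks": (0.2, 0.5),
}

low = sum(lo for lo, _ in line_items.values())
high = sum(hi for _, hi in line_items.values())
print(f"Required subtotal: ${low:.1f}K-${high:.1f}K")  # rounds to the stated $25-38K
```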
| Item | Year 1 | Year 2+ | Notes |
|---|---|---|---|
| vCISO / readiness review (15–25 hrs of expert time) | $8–10K | $0 | Catches SSP gaps before customers do; pays for itself if it saves 50 internal hours. Recommended for Year 1. |
| Bug bounty program (HackerOne / Intigriti) | — | $5–10K | Defer to Year 2+; only if we have product surface worth it. |
The GRC platform is the most strategic external decision: it dictates how much evidence collection is automated versus manual, how easy quarterly access reviews are, and how painful audit responses become.
Drata: engineering-team UX, native 800-53 framework support, decent IaC and AWS integration. Target ~$8K with negotiation.
Get a quote and use Sprinto + ComplyJet quotes for downward pressure.
The main value of the pricier platforms is white-glove implementation, which is wasted on us; we have the engineering capability to drive a self-service platform.
Credible but we pay a premium for integration breadth we don't need.
Sprinto and ComplyJet often run $5–8K for a 20-person, single-framework setup; they typically beat the big three on price for single-framework engagements under 50 employees, and the same logic applies to NIST 800-53.
The AI leverage assumption knocks ~40% off the hours. The savings concentrate in Phases 1, 2, and 4 (drafting, scaffolding, summarization). Phase 3 (technical controls) drops less because rollout work — MFA enforcement, MDM enrollment, EDR deployment — still requires humans clicking and validating.
| Phase | Without AI | With AI | What AI does for us |
|---|---|---|---|
| 1. Scoping & foundation | 80–120 | 45–60 | System/data inventory via browser+AWS automation; CRM mapping; risk register drafting |
| 2. Policies (~25 docs + SSP) | 110–140 | 30–50 | Policy tailoring from templates; SSP scaffolding & section drafts |
| 3. Technical controls | 250–350 | 180–240 | IaC drafting for compliance configs; audit scripts; doc-while-implementing |
| 4. Operational processes | 80–120 | 40–60 | IR/DR/runbook drafting; vendor review summarization |
| 5. Self-assessment + pen test | 60–100 | 35–55 | Browser automation for control checks; POA&M generation; pen test scoping |
| Total | 580–830 | 330–465 | |
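Phase 5's "POA&M generation" is a good example of the drafting work scripting absorbs. A minimal sketch of the shape, assuming a simple assessment-results structure (the control IDs, field names, and 90-day default below are illustrative, not our actual schema):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ControlResult:
    control_id: str      # e.g. "AC-2"
    satisfied: bool
    finding: str = ""

def draft_poam(results: list[ControlResult], assessed: date) -> list[dict]:
    """Turn failed control checks into draft POA&M rows for human review."""
    rows = []
    for r in results:
        if r.satisfied:
            continue
        rows.append({
            "control": r.control_id,
            "weakness": r.finding,
            "status": "Open",
            # Illustrative default: schedule remediation 90 days out.
            "scheduled_completion": (assessed + timedelta(days=90)).isoformat(),
        })
    return rows

results = [
    ControlResult("AC-2", True),
    ControlResult("AU-4", False, "Log retention below documented 90-day policy"),
]
print(draft_poam(results, date(2025, 1, 15)))
```

A human still reviews and owns every row; the script only removes the transcription work.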
The 40% reduction isn't uniform. Some compliance work is overwhelmingly drafting and synthesis (huge AI lift); some is humans clicking and validating (minimal lift).
A lot of this overlaps with normal engineering ops we'd be doing anyway. The compliance-specific overhead on top is closer to ~150 hrs/year.
| Activity | Hours/yr | Notes |
|---|---|---|
| Quarterly access reviews × 4 | 24 | Final approval requires human attestation. |
| Vulnerability/patch management ops | 120–180 | Largest single bucket; overlaps heavily with normal eng ops. |
| Annual risk assessment refresh | 10 | AI-assisted refresh of last year's register. |
| Annual IR tabletop | 12 | Human participation required. |
| Annual DR test | 16 | Actual restore test, documented. |
| Pen test support + remediation | 30–50 | Findings-dependent. |
| SSP + policy annual refresh | 15 | AI-assisted. |
| Training delivery + tracking | 8 | Mostly platform-driven. |
| Vendor reviews (ongoing) | 20 | AI-assisted summarization. |
| Platform tuning + evidence drift response | 20 | Ongoing. |
| Incident response (we will have some) | 20–40 | Even at small scale, plan for it. |
| Total | 280–400 | |
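The quarterly access review is largely mechanical up to the human sign-off. A sketch of the evidence-gathering half, assuming user records exported from the identity provider or IAM (the field names here are hypothetical; in practice boto3's `list_users` / `list_mfa_devices` could feed real data):

```python
from datetime import datetime, timedelta

def flag_for_review(users: list[dict], now: datetime, stale_days: int = 90) -> list[str]:
    """Return findings a human reviewer must attest to or remediate.

    Each user dict carries: name, mfa_enabled, last_activity (datetime or None).
    These field names are assumptions for illustration.
    """
    findings = []
    cutoff = now - timedelta(days=stale_days)
    for u in users:
        if not u["mfa_enabled"]:
            findings.append(f'{u["name"]}: no MFA enrolled')
        if u["last_activity"] is None or u["last_activity"] < cutoff:
            findings.append(f'{u["name"]}: no activity in {stale_days}+ days')
    return findings

now = datetime(2025, 6, 1)
users = [
    {"name": "alice", "mfa_enabled": True, "last_activity": datetime(2025, 5, 20)},
    {"name": "bob", "mfa_enabled": False, "last_activity": None},
]
print(flag_for_review(users, now))
```

The output becomes the review packet; the final approval (per the table above) is still a human attestation.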
Phase ordering matters: platform setup first so it can drive gap analysis; policies before technical controls so we know what we're implementing; technical work heavy in months 3–5; self-assessment and pen test toward the end.
| Month | Activity |
|---|---|
| Month 1 | Platform setup, scoping, foundation |
| Month 2 | Policies drafted, technical work starts |
| Month 3 | Technical controls heavy lift |
| Month 4 | MFA/SSO/EDR/MDM rollout completed |
| Month 5 | Operational processes live |
| Month 6 | Self-assessment, remediation |
| Month 7 | Pen test executed |
| Month 8 | Pen test remediation + SSP finalized → "NIST 800-53 Low aligned" state achieved |
In addition to the org-level investment, each build item in the Capabilities inventory carries a small alignment-documentation tax. This is folded into the engineering hour estimates in Pricing, not billed separately.
This adds ~10–15% engineering overhead per build: real work, but it doesn't change the build itself, and it doesn't push EASY items into MODERATE or MODERATE into HARD.
The exception: builds that handle sensitive data classifications carry higher overhead (e.g., anything touching shared Slack channels — see #32).
NIST 800-53 organizes controls into families. For an aligned posture at Low baseline, these are the families that need explicit implementation and documentation. Most map to existing AWS-native services or established practices — this is configuration discipline, not greenfield buildout.
| Family | Name | What we implement |
|---|---|---|
| AC | Access Control | MFA on all admin access, RBAC, least privilege, documented account-management procedures, periodic access reviews |
| AU | Audit & Accountability | CloudTrail, CloudWatch logs with retention policies, immutable audit trail for security-relevant events, documented review cadence |
| AT | Awareness & Training | Security awareness training for all staff, role-specific training for elevated access |
| CM | Configuration Management | Infrastructure-as-code (we already use CDK), documented baseline configurations, change control procedures, component inventory |
| CP | Contingency Planning | Documented backup, DR, BCP procedures; annual testing with evidence |
| IA | Identification & Authentication | MFA enforcement everywhere, password policies, identity proofing for elevated access |
| IR | Incident Response | Documented IR plan, roles, escalation procedures, annual tabletop exercise with documented results |
| MA | Maintenance | Controlled maintenance procedures, especially for tools with elevated access |
| MP | Media Protection | Sanitization procedures, transport controls (mostly moot since we're cloud-native) |
| PE | Physical & Environmental | Mostly inherited from AWS; document the inheritance |
| PL | Planning | The SSP itself + supporting plans |
| PS | Personnel Security | Background checks per role, security training, agreements |
| RA | Risk Assessment | Annual risk assessment, vulnerability scanning with remediation SLAs, periodic pen tests |
| CA | Assessment, Authorization, and Monitoring | Continuous monitoring program, periodic control assessment |
| SC | System & Communications Protection | TLS everywhere, encryption at rest (KMS defaults), boundary protection (VPC, security groups, WAF where applicable) |
| SI | System & Information Integrity | Vulnerability scanning, patch management, malicious code protection (endpoint security), dependency monitoring |
| SR | Supply Chain Risk Management | Vendor security assessments, SBOM tracking, dependency hygiene |
| PT | PII Processing & Transparency | Privacy controls if PII is in scope |
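Much of this table reduces to configuration checks that can be scripted as evidence. A minimal sketch for part of the AU family, shaped like the trail dicts boto3's CloudTrail `describe_trails` returns; the three expectations checked are illustrative, not a full AU assessment:

```python
def check_trail(trail: dict) -> list[str]:
    """Flag AU-relevant gaps in a CloudTrail trail configuration dict."""
    gaps = []
    if not trail.get("IsMultiRegionTrail"):
        gaps.append("trail is not multi-region")
    if not trail.get("LogFileValidationEnabled"):
        gaps.append("log file integrity validation disabled")
    if "CloudWatchLogsLogGroupArn" not in trail:
        gaps.append("no CloudWatch Logs delivery (review cadence harder to evidence)")
    return gaps

# In practice: trails = boto3.client("cloudtrail").describe_trails()["trailList"]
trail = {"Name": "org-trail", "IsMultiRegionTrail": True, "LogFileValidationEnabled": False}
print(check_trail(trail))
```

Checks like this are what "evidence drift response" in the ongoing-ops table looks like day to day: the script surfaces the gap, a human fixes and documents it.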
The compliance investment isn't 4K-specific — it's a Frontier business capability decision. Pursuing it unlocks a class of clients we currently can't serve. Not pursuing means every regulated-adjacent prospect forces us into a code-only delivery model.
Unlocks:
This may be Frontier's actual moat at scale — agencies positioned for regulated verticals are rare.
Trade-offs:
Separate from the alignment decision: who operates the deployed system? Three operating models, in order of how natural they are once we have alignment posture.
Newfangled operates the system, provides attestation, and hands client auditors an evidence package. The natural Frontier operating model. Requires the alignment investment above.
For 4K: Frontier hosts the CKA substrate, runs the Managed Agents, holds the data. 4K consumes via Claude Teams. Auditor evidence flows from our SSP + GRC platform.
4K deploys, we maintain via limited / controlled access. Useful if 4K specifically wants their own boundary (e.g., they already have a hardened AWS account).
We comply with 4K's access-control requirements (PIV / MFA / break-glass procedures): their compliance posture, our delivery work.
We deliver code; 4K deploys and operates entirely. We don't process 4K's production data. Their compliance, their SSP, their audit; we're not in scope for their authorization.
Required fallback if Newfangled doesn't pursue alignment — every regulated-adjacent client gets this model.
Pursuing alignment makes the Frontier-managed model viable. Without it, we're stuck in code-only delivery for 4K and every future regulated-adjacent client. The client-deployed, Frontier-maintained model is viable either way but depends on 4K's specific preferences.
The order matters. Platform selection drives gap analysis, which drives the backlog. Picking a target date anchors the work and prevents drift.