By Ned Butler CCP, Lead CCA, & CMMC Consultant at Redspin

Introduction

When scanning NIST SP 800-171, it’s easy to assume some requirements are duplicates — after all, several sound almost identical at first glance. But the framework was intentionally written to avoid redundancy. Each control exists for a distinct reason, targeting a unique security concern. The problem? Many organizations collapse “look-alike” requirements into a single catch-all control, creating gaps, confusion, and assessment headaches.

This blog untangles the most commonly conflated pairs, showing what each really demands, how to keep the boundaries clear, and why making these distinctions makes your security program easier to manage and far more defensible in front of assessors.

One Requirement, One Concern: Avoiding Look-Alike Control Traps in NIST SP 800-171

NIST SP 800-171 is intentionally non-duplicative: each requirement addresses a distinct concern, even when the wording sounds similar. In the field, though, I keep seeing teams collapse two look-alike requirements into one “catch-all” control. That shortcut creates blind spots, leads to assessment findings, and causes day-to-day friction. What follows are the pairs I see mixed up most often – what each one is really asking for, how to keep the lines straight, and a few practical patterns that make your program cleaner and easier to run.

Maintenance Execution vs. Flaw Remediation: How Work Is Performed vs. What Must Be Corrected (3.7.1 vs. 3.14.1)

The tl;dr:

  • 3.7.1 (Perform Maintenance) is about how maintenance is carried out – authorization, supervision, remote access rules, and records for local/remote, scheduled/emergency work.
  • 3.14.1 (Flaw Remediation) is about what must be corrected and by when – your flaw/patch lifecycle from discovery and triage to remediation and verification.

Why they get blended: Patching is a maintenance activity. Teams document patching under 3.14.1 and assume 3.7.1 is “covered.” It isn’t. 3.7.1 cares about conduct; 3.14.1 cares about correction.

Keep them separate in practice

  • 3.7.1 Maintenance execution and control
    • Policy/Procedures: Define maintenance types (scheduled, emergency, remote); who authorizes; required artifacts (tickets/logs).
    • Remote controls: PAM/jump box, MFA, time-boxed access, session monitoring/recording, explicit approval for file transfer.
    • Records: Every maintenance action has a ticket trail: request > approval > pre-checks (backup/snapshot/rollback) > execution > post-checks > closure.
    • Vendors: Contract language on supervision, tooling/media limits, and handling of data.
  • 3.14.1 Flaw remediation discipline
    • Scope: Vulnerabilities, software defects, configuration errors, firmware, and EDR/AV signature updates – not just scanner findings.
    • Timelines & proof: Severity-based SLAs with verification (re-scan or version evidence) before closure; exceptions documented with compensating measures and expirations.
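To make the SLA idea concrete, here is a minimal sketch of severity-based due-date assignment; the severity tiers and day counts are illustrative examples, not values prescribed by NIST SP 800-171.

```python
from datetime import date, timedelta

# Illustrative severity-based remediation SLAs (days to fix); the tiers
# and windows below are example values, not numbers from the standard.
REMEDIATION_SLA_DAYS = {
    "critical": 7,
    "high": 14,
    "medium": 30,
    "low": 90,
}

def remediation_due_date(severity: str, discovered: date) -> date:
    """Return the SLA-driven due date for a flaw discovered on a given day."""
    days = REMEDIATION_SLA_DAYS[severity.lower()]
    return discovered + timedelta(days=days)
```

A ticketing integration would stamp this due date on the remediation ticket at triage, so past-due items surface automatically in dashboards.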

How they connect: 3.14.1 decides what changes and by when. 3.7.1 dictates how you perform the work safely and traceably. In your tickets, cross-reference them: remediation tickets (3.14.1) link to the maintenance/change tickets (3.7.1) that carried out the work.

What assessors usually check

  • 3.7.1: Maintenance policy/procedures, remote-maintenance standard, vendor clauses, authorized maintainer list, maintenance windows, sample tickets with approvals and post-checks, PAM/jump host screenshots.
  • 3.14.1: Flaw/patch policy with SLAs, last two cycles of scanning/patching, remediation tickets with verification, exception register with sign-offs, and end dates.

 

Post-Scan Vulnerability Remediation vs. Broader System-Flaw Management (3.11.3 vs. 3.14.1)

The tl;dr:

      • 3.11.3 (Vulnerability Remediation) is about remediating scanner-identified vulnerabilities according to risk, with verification.
      • 3.14.1 (Flaw Remediation) is the broader discipline of identifying, correcting, and validating fixes for all types of flaws – patches, firmware, signatures, configuration issues – whether or not a scanner found them.

Where the confusion starts: Both emphasize timely fixes. The difference lies in scope: 3.11.3 is the post-scan component of vulnerability management, while 3.14.1 encompasses the entire program for addressing system flaws from multiple sources.

Make the split visible

      • 3.11.3 From finding to verified fix
        • Maintain a transparent chain: Identify (CVE/KB) > assign risk rating > set due date by SLA > create remediation plan > verify (re-scan or provide version evidence).
        • Track exceptions with compensating measures and an expiration date.
      • 3.14.1 Beyond scans
        • Intake feeds include vendor advisories, CSIRT alerts, firmware/BIOS updates, EDR/AV signature updates, configuration defect reviews, pen tests, code reviews, and change reviews.
        • Apply the same discipline to timelines, implementation, and verification, even when no CVE exists.
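A small periodic sweep over the exception register keeps expirations honest; the record fields (`id`, `expires`) are hypothetical, and a real register would also track compensating measures and sign-offs.

```python
from datetime import date

def expired_exceptions(register: list[dict], today: date) -> list[str]:
    """Return IDs of risk-acceptance exceptions whose expiration has passed.

    Each entry is assumed to carry an 'id' and an 'expires' date; real
    registers would also record compensating measures and approver sign-off.
    """
    return [e["id"] for e in register if e["expires"] < today]
```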

How they connect: 3.11.3 lives inside your 3.14.1 program. Your reporting should show both the scanner backlog/MTTR (3.11.3) and the cadence/verification for patches, firmware, and signatures (3.14.1).

What assessors usually check

      • 3.11.3: Scanning standard/schedule, coverage list, sample remediation tickets with verification, and exception register.
      • 3.14.1: Patch/flaw policy listing all intake sources, change/maintenance records (e.g., monthly patch windows), firmware/signature updates, dashboards with MTTR and % past-due across all flaw categories.

 

Session Lock vs. Session Termination (3.1.10 vs. 3.1.11)

The tl;dr:

      • 3.1.10 (Session Lock after Inactivity) is about automatically locking the active user session (e.g., screen lock) and hiding on-screen content after a specified period of idle time. The user must re-authenticate to resume; in the meantime, applications and processes keep running.
      • 3.1.11 (Session Termination after Inactivity) is about automatically ending the user session after idle time. The user is logged out, context is cleared, and a new session is required to continue.

Why they get mixed: Both are triggered by inactivity, and both prompt for credentials on return. The difference is impact: lock protects against walk-away/shoulder-surfing while keeping work alive; termination clears the session state and reduces exposure from stale, long-lived sessions.

How to implement both (they’re complementary, not either/or)

      • Use a shorter lock and a longer termination to strike a balance between usability and risk.
        • Common defaults in OSC environments:
          • Lock: 10–15 minutes of inactivity (workstations/AVD/web apps).
          • Terminate: 30–60 minutes of inactivity (tighter for high-risk roles).
      • Apply timers across OS, VDI, and key applications so behavior is consistent.
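A quick sanity check over your timer settings can confirm the staggered ordering (lock fires before terminate) on every platform; the platform names and minute values below are illustrative.

```python
def staggered_timers_ok(timers: dict[str, dict[str, int]]) -> list[str]:
    """Return the platforms whose lock timer is NOT shorter than their
    termination timer (both expressed in minutes of inactivity)."""
    return [
        platform
        for platform, t in timers.items()
        if t["lock_min"] >= t["terminate_min"]
    ]
```

An empty result means every platform locks first and terminates later, as intended.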

How they connect: Treat 3.1.10 as the first layer – lock the screen quickly to shield data while keeping the user’s work alive. 3.1.11 is the backstop – if inactivity continues, end the session so stale credentials or unattended processes don’t linger. Configure them in a staggered sequence (lock then terminate) with aligned timers, and pair termination with 3.13.9 (see next section) so idle network connections are dropped along with the user session.

What assessors usually check

    • 3.1.10: GPO/MDM profiles or app settings showing lock timers; screenshots/test notes proving the lock masks content and requires re-authentication.
    • 3.1.11: System/application/VDI settings that enforce idle logoff; logs or test steps demonstrating session termination at the configured interval.

 

User Session Timeout vs. Network Connection Timeout (3.1.11 vs. 3.13.9)

The tl;dr:

    • 3.1.11 (Session Termination): Terminate user sessions after a period of inactivity, in other words, log off the interactive session for OS/applications/VDI/web.
    • 3.13.9 (Connection Termination): Terminate network connections after a period of inactivity, in other words, tear down VPN/RDP/SSH/TLS transport sessions.

Why they get mixed: Both mention “sessions,” “connections,” and “inactivity.” One is about the logical user state; the other is about the transport layer.

Keep both layers honest

    • 3.1.11 User session policies
      • OS/application idle timers that force logout; practical defaults are 30–60 minutes of desktop or web-app inactivity (tune to risk).
      • Test and document: demonstrate auto-termination.
    • 3.13.9 Transport timeouts
      • Idle timeouts on VPN, RDP/SSH gateways, reverse proxies/load balancers, and web server keep-alives.
      • Set these equal to or shorter than user session timeouts to prevent “zombie” network pipes.

How they fit together: User sessions should not outlive the network connection and vice versa. Align timers so the stricter layer enforces the shortest permitted idle duration.
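That alignment rule (transport idle timeouts equal to or shorter than the user session timeout) is easy to check mechanically; the device names here are placeholders.

```python
def misaligned_transports(session_timeout_min: int,
                          transport_timeouts: dict[str, int]) -> list[str]:
    """Return transport devices whose idle timeout exceeds the user
    session timeout, leaving 'zombie' pipes open after logoff."""
    return [
        name for name, idle_min in transport_timeouts.items()
        if idle_min > session_timeout_min
    ]
```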

What assessors usually check

    • 3.1.11: GPO/MDM profiles for idle session logoff, application session timeout settings, and a test log or screenshot of auto-termination.
    • 3.13.9: VPN/RDP/SSH gateway configuration exports, proxy/LB idle timeout settings, and logs showing idle teardown.

 

Segregation of Duties (People) vs. Separation of Management Functions (Technology) (3.1.4 vs. 3.13.3)

The tl;dr:

    • 3.1.4 (Separation of Duties) is people/process separation – split critical responsibilities so no single person can request, approve, and execute a sensitive action end-to-end.
    • 3.13.3 (Role Separation) is technical/architectural separation – keep the management plane (admin paths/interfaces) isolated from user functionality.

Why they get blended: Both say “separate,” but one separates roles and approvals, the other separates interfaces and pathways in the technology stack.

Build them side-by-side, not on top of each other

    • 3.1.4 — People/process separation
      • Simple SoD matrix (the Requester, Approver, and Executor are all different people; include an independent Verifier where feasible).
      • Workflow rules: No self-approval; dual control for high-risk steps.
      • If staffing is limited, consider using compensating measures (e.g., session recording and next-day peer review).
    • 3.13.3 — Management plane isolation
      • Separate administrator (privileged) identities from daily (non-privileged) identities; no email or web access with administrator accounts.
      • Administrator activity originates from privileged access workstations (PAWs) or hardened AVD sessions, through a PAM/jump host to a management VLAN or restricted admin portal.
      • User subnets cannot reach management interfaces (iLO/DRAC, hypervisors, firewall consoles, directory/cloud admin portals).
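The no-self-approval and independence rules from the SoD matrix can be enforced as a simple ticket-workflow check; the field names are hypothetical.

```python
def sod_violations(ticket: dict) -> list[str]:
    """Flag segregation-of-duties problems in a change ticket: no
    self-approval, and requester/approver/executor must all differ."""
    issues = []
    if ticket["approver"] == ticket["requester"]:
        issues.append("self-approval")
    if ticket["executor"] in (ticket["requester"], ticket["approver"]):
        issues.append("executor not independent")
    return issues
```

Workflow tooling would run a check like this before a ticket can move to the execution stage.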

How they fit together: Use 3.1.4 to ensure that different people handle request, approval, and implementation, and use 3.13.3 to ensure those implementations occur only via separate management paths that are unreachable by standard users.

What assessors usually check

    • 3.1.4: SoD matrix/RACI, ticket workflow screenshots blocking self-approval, samples showing independent approval and verification.
    • 3.13.3: ACLs/firewall rules blocking user-to-management paths, PAM/jump-host enforcement, distinct admin identities, conditional access limiting admin portals to compliant PAWs/AVDs, and SIEM logs for administrator activity.

 

Organization Media on External Systems vs. Removable Media Inside the Boundary (3.1.21 vs. 3.8.7)

The tl;dr:

    • 3.1.21 (Portable Storage Use) restricts or prohibits using organization-controlled portable storage on external systems (home PCs, supplier laptops, kiosks – anything that is outside your control).
    • 3.8.7 (Removable Media) governs how removable media is handled inside your environment – the internal rules, tooling, restriction, encryption, and scanning for USB/SD/DVD on in-scope systems.

Why they’re confused: Both mention removable/portable media. The key difference is where the media is used: outside your organization’s boundary (3.1.21) versus inside your boundary (3.8.7).

Set the lines clearly

    • 3.1.21 Outside use is off-limits
      • Policy: organization-controlled media shall not be connected to external systems; exceptions require CISO approval and documented controls.
      • Bake the rule into acceptable use policies, supplier MSAs, and awareness training.
    • 3.8.7 Inside use is tightly controlled
      • Block USB storage by default with device-control/MDM; allow exceptions for defined roles with logging.
      • Require encryption (e.g., BitLocker To Go) for any approved write access; scan media upon insertion or write.
      • In VDI/AVD, disable or tightly control USB redirection to session hosts; prefer managed file exchange (e.g., SFTP) over “sneakernet.”
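The default-deny posture for removable media boils down to a small decision rule; the role names and the encryption flag here are illustrative assumptions, standing in for what device-control/MDM tooling enforces.

```python
def usb_write_allowed(user_role: str, device_encrypted: bool,
                      allowed_roles: set[str]) -> bool:
    """Default-deny USB storage: write access only for approved roles,
    and only to encrypted media (e.g., BitLocker To Go)."""
    return user_role in allowed_roles and device_encrypted
```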

How they fit together: 3.1.21 keeps your media off untrusted endpoints. 3.8.7 defines the rules inside your boundary. Even if you grant internal exceptions, that media still isn’t allowed on external systems.

What assessors usually check

    • 3.1.21: Policy language, supplier/contractor clauses, awareness materials, and attestations.
    • 3.8.7: MDM/group policy/device-control settings, encryption requirements, logs of blocked/allowed events, periodic review evidence.

 

Implementing Distinctions, Not Duplicates: From Look-Alikes to Clear Lines

NIST SP 800-171 is designed to be non-duplicative. When two controls sound similar, it’s almost always because they cover adjacent but different concerns:

  • Operations vs. corrections: 3.7.1 governs how maintenance is performed; 3.14.1 governs what must be corrected and on what timeline.
  • Scanner loop vs. full flaw program: 3.11.3 is the remediation half of vulnerability management, while 3.14.1 encompasses the broader flaw lifecycle, spanning patches, firmware, signatures, configuration defects, and more.
  • Lock vs. termination (user state): 3.1.10 locks the session after inactivity (work continues, content is shielded); 3.1.11 terminates the session if inactivity persists (fresh login required).
  • User state vs. transport: 3.1.11 ends the user session, while 3.13.9 closes the network connection to prevent idle, open ports from lingering.
  • People vs. pathways: 3.1.4 separates who can request/approve/execute; 3.13.3 separates where administration can occur in the architecture.
  • Outside vs. inside the boundary: 3.1.21 restricts organizational media on external systems; 3.8.7 governs internal media usage.

Get these splits right, and your program gets easier to operate. You’ll write tighter policies, run cleaner workflows, and set controls that actually reinforce each other instead of overlapping awkwardly.

 

Practical next steps for OSCs

  1. Publish simple, clear standards for each requirement covered here, including scope, roles, key settings, and validation steps.
  2. Link related activities so they reinforce each other (e.g., remediation tickets link to maintenance tickets; admin actions originate from PAWs via PAM).
  3. Demonstrate behavior by keeping simple artifacts (scan deltas, version outputs, session logs, and blocked attempts) to show what actually happens.
  4. Train on the nuances – brief enablement for administrators and requesters on “session vs. transport,” “people vs. pathways,” and “outside vs. inside” pays off in fewer exceptions and smoother assessments.
  5. Sanity-check your stack – for environments using virtual desktops or remote gateways, align idle timers across the desktop platform and gateways; disable USB redirection by default; require encrypted media for any exception; route admin work through PAM to management segments; and link remediation to maintenance.

If two controls sound similar, assume they are not the same, find the split, and build to it. That mindset keeps your program crisp, credible, and compliant without the wasted effort that comes from mashing distinct requirements together.

 

Book a meeting to get CMMC certified with Redspin: