Cybersecurity for Research Universities: Grant-Compliant, Research-Velocity-Safe
Petronella Technology Group builds CUI enclaves, private AI clusters, and export-control guardrails that satisfy CMMC Level 2, NIST SP 800-171 r3, FERPA, HIPAA, and ITAR obligations without turning your principal investigators into a help-desk ticket queue. Raleigh-based. On-site across North Carolina. National delivery for cloud-delivered controls.
Five regulatory regimes. One grant portfolio. Zero margin for confusion.
A single mid-size research university in North Carolina routinely carries Department of Defense contracts flowing Controlled Unclassified Information, National Institutes of Health awards touching protected health information, National Science Foundation work with human-subject data, export-controlled engineering projects under ITAR and EAR, and a student body generating FERPA-protected records every semester. Any one of these regimes is its own audit. Stacked together, they create the defining research-IT problem: the same researcher, the same laptop, the same Slurm job, the same cloud tenant can cross classification boundaries between lunch and a 3 p.m. lab meeting.
The EDUCAUSE 2024 Top 10 IT Issues report puts cybersecurity and privacy at the top of the list for higher education for the sixth straight year. Per the same report, institutions are simultaneously being asked to democratize AI for researchers and to harden the environment against nation-state actors who have explicitly targeted U.S. research universities. The REN-ISAC quarterly threat briefings throughout 2024 and 2025 flagged sustained credential-harvesting campaigns against research faculty, targeted deepfake phishing at grant administrators, and ransomware crews staging in shared HPC environments. The Cybersecurity and Infrastructure Security Agency added higher education to its list of priority sectors during the same window.
Ad-hoc CUI handling no longer clears a CMMC Level 2 assessment. The Department of Defense rule that codifies CMMC 2.0 requires assessment evidence that survives a third-party review. Per the Department of Education's Federal Student Aid cybersecurity enforcement guidance, FERPA violations now trigger enforcement with the same teeth as HIPAA. NIST Special Publication 800-171 Revision 3, final as of May 2024, rewrote the control families that most academic CUI environments rely on. The old "we think we are compliant" posture no longer satisfies anybody.
Petronella Technology Group entered the research-IT conversation because the infrastructure we already run for defense contractors, healthcare systems, and financial services (private AI clusters, managed CUI enclaves, 24/7 monitored endpoints) was already adjacent to what a CISO or Research IT Director at a regional research university needs. The overlap is not coincidence. It is the same control DNA with academic freedom as an additional design constraint.
The enclave model, not the whole campus
Most research universities do not need to make the entire institution CMMC Level 2 compliant. They need a defensible CUI enclave where the controls are demonstrably in place, the boundary is auditable, and research workflows can move in and out without leaking. The starting architecture question is never "how do we lock everything down"; it is "where is CUI today, where could it leak tomorrow, and what boundary do we draw so that the answer stays stable for three years?"
Find the data before you scope the enclave
Grant award letters, data-use agreements, statements of work, and sponsor portals each describe a different piece of the CUI picture. We walk the lifecycle for each funded award: ingress from the sponsor, transformation inside research workflows, storage during the award, retention after closeout. The enclave scope comes from the map, not from a generic template.
Documentation the assessor can actually score
A System Security Plan written for NIST SP 800-171 r3 is not a marketing document. It is an evidence map. We write the SSP with the assessor's scoring guide open next to us, cross-reference each control family, and build the Plan of Action and Milestones (POA&M) so the remediations are real work with real owners and real dates.
Readiness before the C3PAO arrives
Most institutions want a dry run before a formal assessment. Our team runs a full pre-assessment against the 110 practices, flags the gaps that will cost you points, and documents the compensating controls. When the C3PAO arrives for the certified Level 2 assessment, nobody is surprised. That is the goal.
On-prem AI infrastructure without the data-egress exposure
Research universities are being asked to provide AI compute to faculty and graduate students at the same time that the cloud frontier labs are training on any prompt that is not explicitly walled off. For awards touching CUI, export-controlled research, or human-subject data, the answer is rarely "ship it to a public API." It is a private cluster with the right scheduler, the right isolation primitives, and the right monitoring.
Secure GPU scheduling on Slurm and Kubernetes
Multi-tenant GPU clusters need access control that survives a graduate-student turnover cycle. We stand up Slurm with queue-level ACLs tied to institutional identity, Kubernetes namespaces mapped to research groups, and per-job audit logging that is writable but not deletable by the scheduler account. Job containers run with network policies that restrict egress to data-sources pre-approved for the award.
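The queue-level ACL rule above can be sketched in a few lines. This is an illustrative model, not a real Slurm plugin: the partition names, award groups, and mapping structure are all placeholders for what a deployment would derive from institutional identity.

```python
# Hypothetical sketch: a job is admitted to a partition only when the
# submitting user's institutional groups include a group authorized for
# that partition. Names below are illustrative placeholders.
PARTITION_ACLS = {
    "cui-gpu": {"award-cui-group"},      # enclave partition, award-gated
    "open-gpu": {"all-researchers"},     # unrestricted partition
}

def may_submit(user_groups: set[str], partition: str) -> bool:
    """True if any of the user's groups is authorized for the partition."""
    allowed = PARTITION_ACLS.get(partition, set())
    return bool(allowed & user_groups)

# A member of the CUI award group reaches the enclave queue; others do not.
assert may_submit({"award-cui-group", "all-researchers"}, "cui-gpu")
assert not may_submit({"all-researchers"}, "cui-gpu")
```

In production the same predicate lives in the scheduler configuration (Slurm partition ACLs, Kubernetes RBAC) rather than application code; the point is that authorization derives from institutional identity, not per-cluster user lists.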
MIG partitioning for sensitive workloads
On NVIDIA data-center GPUs that support Multi-Instance GPU, we carve the device into isolated slices so that a CUI job and a non-CUI job never share cache lines or memory bandwidth. The audit trail shows which principal investigator, which award, and which container ran in which MIG slice at which time. The evidence is produced by the platform rather than assembled after the fact.
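The platform-produced audit trail described above amounts to one structured record per job placement. A minimal sketch, with illustrative field names, of the record a scheduler prolog might emit into append-only storage:

```python
from datetime import datetime, timezone

def mig_audit_record(pi: str, award: str, container: str, mig_uuid: str) -> dict:
    """Build one audit row tying a job to the MIG slice it ran in.
    Field names are illustrative; a real deployment emits this from the
    scheduler prolog into storage the scheduler account cannot delete from."""
    return {
        "pi": pi,
        "award": award,
        "container": container,
        "mig_instance": mig_uuid,
        "started_utc": datetime.now(timezone.utc).isoformat(),
    }

rec = mig_audit_record("pi_example", "AWARD-EXAMPLE-001",
                       "train:v3", "MIG-GPU-0/1/0")
assert rec["mig_instance"] == "MIG-GPU-0/1/0"
```

Because the record is produced at placement time by the platform itself, the evidence exists before anyone asks for it, which is the property assessors actually score.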
Local-flash data residency and encrypted staging
Training data never leaves the cluster's encrypted local flash for CUI or ITAR workloads. Staging from institutional storage happens through a single audited mount point with integrity verification. Model outputs are scanned before export for classification markings and deemed-export indicators. The pipeline is boring on purpose.
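The pre-export scan for classification markings can be as simple as a pattern pass over outbound artifacts. The marking list below is a placeholder; a real pipeline uses the institution's marking guide and routes hits to a human reviewer rather than deciding automatically.

```python
import re

# Illustrative pre-export scan: flag outputs carrying CUI banner markings
# or export-control keywords before they leave the enclave. Patterns are
# placeholders, not the institution's actual marking guide.
MARKING_PATTERNS = [
    re.compile(r"\bCUI\b"),
    re.compile(r"\bCONTROLLED UNCLASSIFIED INFORMATION\b", re.IGNORECASE),
    re.compile(r"\bITAR\b"),
    re.compile(r"\bEXPORT CONTROLLED\b", re.IGNORECASE),
]

def flagged_markings(text: str) -> list[str]:
    """Return the marking patterns found in an outbound artifact."""
    return [p.pattern for p in MARKING_PATTERNS if p.search(text)]

assert flagged_markings("Results summary, no markings") == []
assert flagged_markings("Distribution limited: ITAR controlled") != []
```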
NVIDIA Elite Partner Channel sourcing
We source DGX, HGX, and RTX PRO systems through the NVIDIA Elite Partner Channel and handle rack-and-stack, power and cooling coordination with facilities, and the InfiniBand or NVLink fabric build. Our vendor relationships run through NVIDIA Elite Partners rather than any direct factory arrangement, and we pass that advantage through to you honestly.
See the private LLM platform for the software layer that typically rides on top of these clusters, and the NVIDIA infrastructure page for the specific hardware options.
Deemed-export exposure is a systems problem, not a paperwork problem
When foreign nationals have access to technology controlled under the International Traffic in Arms Regulations or the Export Administration Regulations, the access itself can be a "deemed export" requiring a license. For research universities with international graduate students, postdocs, and visiting scholars, this is an everyday reality, not an edge case. The question is how the access is controlled at the system layer so that compliance is enforceable instead of aspirational.
Identity mapped to export status
Export-controlled project shares and repositories are gated by group membership that reflects the export-license or Technology Control Plan status of each member. New arrivals cannot be added until the compliance office confirms the paperwork. Revocations propagate immediately across the storage layer, the version-control layer, and the compute scheduler.
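The deny-until-confirmed rule above reduces to one invariant: group membership is the intersection of what the research group requests and what the compliance office has confirmed. A minimal sketch, with illustrative names:

```python
# Sketch of the deny-until-confirmed rule: membership in an export-
# controlled group is derived from the compliance office's roster, never
# set directly. Identities and roster are illustrative.
def effective_members(requested: set[str],
                      confirmed_by_compliance: set[str]) -> set[str]:
    """Only identities on the compliance roster make it into the group;
    removal from the roster revokes access on the next sync."""
    return requested & confirmed_by_compliance

roster = {"alice", "bob"}                 # TCP paperwork on file
assert effective_members({"alice", "bob", "new_postdoc"}, roster) == {"alice", "bob"}
roster.discard("bob")                     # license lapsed: revocation propagates
assert effective_members({"alice", "bob"}, roster) == {"alice"}
```

Because storage, version control, and the scheduler all consume the same derived group, a single roster change revokes access everywhere at once.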
VLANs that match your Technology Control Plan
The Technology Control Plan on file with your export compliance office defines who can access which technology. We build the VLAN topology, firewall rules, and wireless segmentation so the network layer matches the plan. Lab workstations, cluster front-ends, and storage targets all live inside the segment that maps to the authorized group.
Private git hosting with export audit
We stand up on-prem Gitea or GitLab instances for export-controlled code, with Section 889 vendor review completed, entity-list screening wired into the identity provider, and branch protection configured so merges cannot happen without a reviewer whose export status is on record. The audit log shows every push, pull, and permission change.
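The branch-protection condition above is a simple predicate: a merge proceeds only when at least one approving reviewer's export status is on record. A sketch of the check, which in practice runs as a server-side hook or required status check; names are placeholders:

```python
# Illustrative pre-merge gate: at least one approver must appear in the
# export-compliance reviewer roster. Not a real Gitea/GitLab API.
def merge_permitted(approvals: list[str], cleared_reviewers: set[str]) -> bool:
    return any(reviewer in cleared_reviewers for reviewer in approvals)

assert merge_permitted(["reviewer_on_record"], {"reviewer_on_record"})
assert not merge_permitted(["visiting_scholar"], {"reviewer_on_record"})
```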
REDCap, IRB workflows, and the boundary between PHI and research data
Research that touches protected health information is governed by the HIPAA Privacy and Security Rules, the Common Rule for human-subjects research, the institution's Institutional Review Board, and, increasingly, sponsor data-sharing policies that do not always agree with each other. The IT environment has to honor all of them at once.
REDCap hardening. The Research Electronic Data Capture platform is used widely for clinical and translational research data entry. The default deployment is not HIPAA-ready. We harden the LAMP or containerized stack, enable the audit and two-factor modules, configure data access groups so IRB protocol boundaries show up in the software, and lock the export channels so de-identification review happens before data leaves the system.
IRB-aware access governance. Every approved protocol defines a research team, a data scope, and a retention schedule. We tie the approval workflow in the IRB system to the identity provider, so access is granted when a protocol is approved and revoked when it is closed. The audit evidence is produced by the platform on demand rather than by a research coordinator assembling a spreadsheet the week before an OHRP site visit.
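The protocol-driven access rule reduces to three conditions checked together: the protocol is approved, the user is on the approved team, and the approval has not expired. A minimal sketch, assuming a simple protocol record pulled from the IRB system; fields are illustrative:

```python
from datetime import date

def has_access(user: str, protocol: dict, today: date) -> bool:
    """Access exists only while the protocol is approved and unexpired;
    closure or expiry revokes it on the next identity sync."""
    return (
        protocol["status"] == "approved"
        and user in protocol["team"]
        and today <= protocol["expires"]
    )

proto = {"status": "approved", "team": {"coordinator", "pi"},
         "expires": date(2026, 6, 30)}
assert has_access("pi", proto, date(2026, 1, 15))
proto["status"] = "closed"                # IRB closes the protocol
assert not has_access("pi", proto, date(2026, 1, 15))
```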
De-identification review on outbound data. Data leaving the clinical environment for secondary analysis, publication, or data-sharing commitments runs through a de-identification pipeline configured against the HIPAA Safe Harbor method or a documented Expert Determination, whichever matches your institution's policy. The review step produces signed evidence that the outbound dataset matches the approved method.
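A deliberately small sketch of a Safe Harbor-style scrub, covering only three of the eighteen identifier classes (SSN-like patterns, phone numbers, dates). A production pipeline covers all eighteen, handles far more formats than these regexes, and emits the signed review record described above; the patterns here are illustrative, not exhaustive.

```python
import re

# Replacement rules for three identifier classes. Order matters: the
# stricter SSN pattern runs before the phone pattern.
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),
]

def scrub(text: str) -> str:
    """Replace matched identifiers with class tokens."""
    for pattern, token in RULES:
        text = pattern.sub(token, text)
    return text

assert scrub("Seen 3/14/2024, call 919-555-0100") == "Seen [DATE], call [PHONE]"
```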
Student records, alumni segmentation, and the research crossover
The Family Educational Rights and Privacy Act governs student education records and the directory information that can be released without consent. For research universities the FERPA picture is complicated by institutional research datasets that mix student records with research data, by learning-analytics platforms that share data with third-party vendors, and by alumni engagement systems that retain records for decades.
We audit the access map across the student information system, the learning management system, institutional research warehouses, and the downstream platforms that consume extracts. We separate alumni and development data from current-student FERPA scope so the marketing side of the university does not accidentally touch records it has no lawful access to. And we document the legitimate educational interest determinations so that when a parent, a student, or a state attorney general asks the FERPA question, the answer is already written down.
Six steps from first call to annual re-attestation
Discovery and grant-portfolio triage
We inventory the active research awards, the data classifications each one triggers, and the current state of controls. The deliverable is a prioritized map of which awards need enclave-grade handling, which need baseline hardening, and which are already in good shape.
CUI scoping and boundary design
For awards that require a CUI boundary we design the enclave: identity, network, storage, compute, monitoring. The boundary is drawn so it can hold for three years without becoming unusable for researchers. The design goes through a joint review with your Research IT, sponsored programs, and security offices before any build starts.
Enclave build and integration
We build the enclave: GPU cluster, secure workstations, managed storage, monitoring, identity integration with your campus SSO, export-aware group structures, and logging into your SIEM. Research workflows are migrated one award at a time so the science never stops.
Training, SSP, and POAM
We run the required security awareness and role-based training for researchers, admins, and principal investigators. We author the System Security Plan and Plan of Action and Milestones against NIST SP 800-171 r3. Every control family has a named owner and a named evidence artifact.
24/7 monitoring and incident response
The enclave is monitored by our security operations with AI-assisted triage and human analyst escalation. Research-specific detections (deemed-export indicators, abnormal classifier exports, Slurm-job anomalies) are tuned on top of the standard EDR and network baseline. Incident response is 24/7, covered by written runbooks, and rehearsed.
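One of the research-specific detections mentioned above can be sketched as a baseline-relative threshold: flag a compute job that moves far more data out than jobs on the same award usually do. The threshold, factor, and field names are illustrative; real tuning happens per award against historical telemetry.

```python
# Illustrative egress-anomaly detection layered on the EDR baseline:
# flag a job whose outbound volume exceeds a multiple of the award's
# historical per-job baseline. Factor of 10 is a placeholder.
def egress_anomaly(bytes_out: int, baseline_bytes: int,
                   factor: float = 10.0) -> bool:
    """True when a job's outbound transfer is anomalously large."""
    return bytes_out > baseline_bytes * factor

assert not egress_anomaly(bytes_out=2_000_000, baseline_bytes=1_000_000)
assert egress_anomaly(bytes_out=50_000_000, baseline_bytes=1_000_000)
```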
Annual re-attestation and continuous improvement
The environment does not sit still, and neither does the threat. Annual re-attestation covers the SSP, POAM, control evidence refresh, and a tabletop exercise with the research and compliance teams. When new awards land or when a regulation changes (NIST SP 800-171 r4, CMMC scoping changes, state privacy laws), we update the scope rather than starting over.
Raleigh based. Regulated-industry native. 24 years of the same people answering the phone.
Our office at 5540 Centerview Drive in Raleigh is 10 minutes from NC State, 30 minutes from Duke and UNC, and inside the Research Triangle Park service radius. When an auditor arrives on site or a hardware issue requires hands on a rack, we drive.
Petronella Technology Group is a CMMC-AB Registered Provider Organization (RPO number 1449), listed at cyberab.org. The entire delivery team holds CMMC-RP credentials. That is the credential that actually matters when a DOD prime asks who authored your SSP.
Craig Petronella is a Digital Forensics Examiner, DFE number 604180, with CCNA and CWNE credentials on top. When an incident crosses from cybersecurity into forensics evidence handling, the same firm that built your enclave also runs the investigation.
We run private AI cluster deployments for defense contractors, healthcare systems, and regulated enterprises. The research-university use case is not a one-off for us. The playbook is the same stack, adapted for academic-freedom constraints and grant-compliance evidence.
Our security operations team pairs machine-speed triage with credentialed analysts. Alerts that matter reach a human inside minutes. Alerts that do not are closed with evidence that the closure happened. Research IT teams do not need another firehose; they need a filter.
Twenty-four years, the same leadership, the same phone number. Vendors come and go. Our clients keep renewing because the team that signed the first contract is still the team that answers the call.
Looking for the deployed-stack view?
This page covers the buyer-identity angle for research universities. If you want the exact technical stack, architecture patterns, and audit-evidence production detail, see the deliverable view.
See the stack we deploy for regulated industries →
FAQ
How do you handle mixed CUI, PHI, and FERPA data inside a single grant portfolio?
What CMMC level do we actually need?
Can students use personal laptops on research networks?
How do you support international collaborators and visiting scholars?
What about export-controlled code repositories and data hosting?
Do you work with REN-ISAC and sector ISAOs?
What does a typical engagement cost for a regional research university?
How fast can you stand up a CUI enclave for an award that just landed?
Your next grant award should not put you in front of an auditor unprepared.
Talk to a research-IT engineer about your award portfolio, your CUI scope, and your AI infrastructure plans. First call is a conversation, not a pitch.
Related pillars: Cybersecurity services · CMMC compliance · NIST SP 800-171 · Private LLM platform · NVIDIA AI infrastructure · AI hardware · Industries we serve