MSc Cybersecurity — Robert Gordon University
Security & Threat Modelling — Enterprise Case Study
Performed a socio-technical security assessment of a real enterprise system — modelling assets, user personas, data flows, and trust boundaries. Conducted structured threat modelling and risk analysis, producing actionable recommendations to reduce human error and strengthen security posture.
Overview
This MSc project applied structured threat modelling to a real enterprise case study — treating security not as a purely technical problem but as a socio-technical one, where human behaviour, system design, and organisational processes interact to create risk. The output was a set of practical, prioritised security recommendations grounded in an evidence-based threat model.
Approach
The assessment followed a structured socio-technical methodology:
1. System Characterisation
- Identified all system assets: data stores, processing components, communication channels
- Mapped user personas and their interaction patterns with the system
- Documented data flows across trust boundaries
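The characterisation step above lends itself to a simple machine-readable model. The sketch below is a hypothetical illustration of that structure (the asset names are invented, not drawn from the case study): assets carry a trust zone, and a data flow can report whether it crosses a boundary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    name: str        # a data store, processing component, or channel endpoint
    trust_zone: str  # the trust zone the asset lives in

@dataclass(frozen=True)
class DataFlow:
    source: Asset
    destination: Asset

    def crosses_trust_boundary(self) -> bool:
        # A flow crosses a trust boundary when its endpoints
        # sit in different trust zones
        return self.source.trust_zone != self.destination.trust_zone

# Hypothetical example assets, for illustration only
portal = Asset("web_portal", "dmz")
customer_db = Asset("customer_db", "internal")

flow = DataFlow(portal, customer_db)
```

Flows flagged by `crosses_trust_boundary()` are exactly the crossings examined in the trust boundary analysis that follows.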
2. Trust Boundary Analysis
- Identified all points where data crosses from one trust zone to another
- Assessed the controls present at each boundary
- Identified boundary crossings with insufficient or absent controls
3. Threat Identification
- Applied STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) across all identified assets and data flows
- Assessed the role of human factors — usability friction driving insecure workarounds, insufficient training, inadequate feedback mechanisms
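Systematic STRIDE application amounts to pairing every system element with every threat category, then confirming or dismissing each pair through analysis. A minimal sketch of that enumeration, with invented element names standing in for the case-study assets:

```python
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

def enumerate_candidate_threats(elements):
    """Cross every system element with every STRIDE category.

    Each (element, category) pair is a candidate threat that an
    analyst then confirms, refines, or rules out."""
    return [(element, category) for element in elements for category in STRIDE]

# Hypothetical elements, for illustration only
candidates = enumerate_candidate_threats(
    ["web_portal", "customer_db", "portal->db data flow"]
)
# 3 elements x 6 categories = 18 candidate threats to triage
```

The value of the exhaustive cross-product is that nothing is skipped by intuition; human-factor analysis then supplements it with threats (like insecure workarounds) that a purely element-based enumeration cannot surface.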
4. Risk Analysis
- Assessed likelihood and impact for each identified threat
- Prioritised threats by risk severity
- Identified existing controls and their effectiveness against each threat
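The prioritisation above can be sketched with a simple ordinal likelihood-times-impact model. The example threats and ratings below are hypothetical placeholders, not findings from the case study:

```python
def risk_score(likelihood: int, impact: int) -> int:
    # Simple ordinal model: both factors rated 1 (low) to 5 (high)
    return likelihood * impact

# Hypothetical threats, for illustration only
threats = [
    {"threat": "credential sharing to bypass slow login", "likelihood": 4, "impact": 3},
    {"threat": "injection on legacy endpoint",            "likelihood": 2, "impact": 5},
    {"threat": "unencrypted backup disclosure",           "likelihood": 1, "impact": 4},
]

ranked = sorted(
    threats,
    key=lambda t: risk_score(t["likelihood"], t["impact"]),
    reverse=True,
)
```

Note how a human-factor threat with moderate impact but high likelihood can outrank a classic technical vulnerability, which is precisely the pattern the assessment's findings describe.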
Key Findings
The assessment found that much of the risk stemmed not from technical vulnerabilities but from usability issues driving insecure user behaviour — a pattern common in enterprise systems where security controls are implemented without adequate consideration of how real users interact with them under time pressure.
Recommendations Produced
Recommendations addressed both technical and human-factor dimensions:
- Technical: specific control improvements for identified high-risk data flows and trust boundary crossings
- Usability: design changes to reduce friction at security-critical points, removing the incentive for users to adopt insecure workarounds
- Organisational: training and awareness recommendations targeting identified human-factor risks
Key Learnings
Threat modelling reveals that most enterprise security failures are not the result of sophisticated attacks bypassing strong controls — they are the result of predictable human behaviour in response to poorly designed systems and processes. A threat model that only considers technical attack vectors misses the majority of real-world risk. Socio-technical analysis is not a soft add-on to security assessment; it is a core component of understanding where risk actually lives in a system.