CyberSecurity
DevSecOps
The software project lifecycle has evolved from the waterfall model to agile, and the Scaled Agile Framework (SAFe) is now commonly used.
To embed security practices within the software development lifecycle, the term DevSecOps emerged around 2015, promoted by different entities such as Gartner, OWASP, and the NSA.
DevSecOps places the "Sec" between Development and Operations to cover all required security activities within the software development lifecycle.
Here are the key DevSecOps practices summarized:
Shift Left Security: Integrate security early and throughout development, from planning, analysis, design, and implementation through deployment, and even after deployment via operations monitoring.
We will discuss the details hereunder to clarify how security requirements are managed during the SDLC.
Automated Security Testing
Use tools for static (SAST), dynamic (DAST), and software composition analysis (SCA).
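As a minimal sketch for a Python codebase (assuming the open-source Bandit SAST scanner is installed), a CI step can run the scan and fail the job on findings:

```python
# Minimal sketch: run the Bandit SAST scanner over the source tree in a CI step.
# Assumes a Python codebase under ./src and that bandit is installed (pip install bandit).
import subprocess
import sys

def run_sast(source_dir: str = "src") -> int:
    # -r: scan recursively; Bandit exits non-zero when it reports issues.
    result = subprocess.run(["bandit", "-r", source_dir], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    # Propagate the scanner's exit code so the CI pipeline fails on findings.
    sys.exit(run_sast())
```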
Threat Modeling
Identify risks before coding using frameworks such as STRIDE, OWASP, or MITRE; this helps surface potential threats in your solution early.
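For illustration, the outcome of a STRIDE session can be captured as a lightweight, reviewable artifact; the components, threats, and mitigations below are hypothetical:

```python
# Minimal sketch of a STRIDE threat-modeling record for a hypothetical solution.
# Component names and mitigations are illustrative, not prescriptive.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service", "Elevation of Privilege"]

threat_model = {
    "login-api": {
        "Spoofing": "Enforce MFA and short-lived tokens",
        "Information Disclosure": "TLS everywhere, no credentials in logs",
    },
    "payments-service": {
        "Tampering": "Sign and validate all payment messages",
        "Repudiation": "Immutable audit trail for every transaction",
    },
}

# Simple completeness check: flag STRIDE categories nobody has considered yet.
for component, threats in threat_model.items():
    missing = [t for t in STRIDE if t not in threats]
    print(f"{component}: unaddressed categories -> {missing}")
```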
Secrets Management
Prevent hardcoded credentials with vaults and scanning.
Use vaults (e.g., HashiCorp Vault, AWS Secrets Manager).
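A minimal sketch of retrieving a credential at runtime instead of hardcoding it, assuming AWS Secrets Manager and the boto3 SDK (the secret name is hypothetical):

```python
# Minimal sketch: fetch a credential at runtime instead of hardcoding it.
# Assumes AWS Secrets Manager, the boto3 SDK, and a hypothetical secret named "prod/db-password".
import boto3

def get_db_password(secret_id: str = "prod/db-password") -> str:
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]  # never log or persist this value

if __name__ == "__main__":
    password = get_db_password()
    # Use the secret immediately (e.g., to open a DB connection); do not print it.
```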
Dependency Scanning: Continuously check third-party libraries for vulnerabilities.
Snyk or OWASP Dependency-Check for small enterprises
JFrog Xray or Sonatype Nexus IQ for large enterprises
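For a Python project, a minimal sketch using the open-source pip-audit tool in a CI step (assuming a requirements.txt file) could look like this:

```python
# Minimal sketch: scan third-party Python dependencies for known vulnerabilities.
# Assumes pip-audit is installed (pip install pip-audit) and a requirements.txt exists.
import subprocess
import sys

def scan_dependencies(requirements: str = "requirements.txt") -> int:
    # pip-audit exits non-zero when vulnerable packages are found.
    result = subprocess.run(["pip-audit", "-r", requirements], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_dependencies())
```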
Security as Code: Define policies and configurations in code (e.g., OPA, Terraform); this practice intersects with several of the others listed here.
Policy Enforcement in CI/CD
- Block builds/releases if security checks fail.
- Example: Enforce quality gates in SonarQube (e.g., no critical bugs or vulnerabilities allowed), as sketched below.
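A minimal sketch of such a gate, assuming a SonarQube server and its api/qualitygates/project_status endpoint; the server URL, project key, and token are placeholders:

```python
# Minimal sketch: break the pipeline when the SonarQube quality gate is not "OK".
# Server URL, project key, and token are placeholders; assumes the requests library.
import os
import sys
import requests

SONAR_URL = os.environ.get("SONAR_URL", "https://sonarqube.example.com")
PROJECT_KEY = os.environ.get("SONAR_PROJECT_KEY", "my-service")
TOKEN = os.environ["SONAR_TOKEN"]  # user token; passed as the basic-auth username

def quality_gate_passed() -> bool:
    response = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": PROJECT_KEY},
        auth=(TOKEN, ""),
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["projectStatus"]["status"] == "OK"

if __name__ == "__main__":
    if not quality_gate_passed():
        print("Quality gate failed: blocking the release.")
        sys.exit(1)
    print("Quality gate passed.")
```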
Runtime Security Monitoring
WAF: Web Application Firewall
Protects solutions from web-layer attacks
Sample: Blocks/alerts on web attacks like SQLi, XSS
Tools: ModSecurity, AWS WAF, Cloudflare WAF
SIEM: Security Information and Event Management
Collects, analyzes, and correlates security logs from multiple sources (a minimal log-forwarding sketch follows this section)
Tools: Splunk, IBM QRadar, Elastic Security, Azure Sentinel
IDS: Intrusion Detection System
Detects and responds to threats.
Tools: Snort, Suricata (NIDS), OSSEC (HIDS).
Focuses on all network activity, a broader scope than the WAF's web-traffic focus
Typically sits behind the WAF in the architecture, after the load balancer
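A minimal sketch of feeding application security events into a SIEM over syslog (the collector address is a placeholder; most SIEMs accept syslog input):

```python
# Minimal sketch: forward security-relevant application events to a SIEM's syslog collector.
# The collector address is a placeholder; SIEMs such as Splunk, QRadar, or Elastic can ingest syslog.
import logging
import logging.handlers

siem_handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
siem_handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

security_log = logging.getLogger("security")
security_log.setLevel(logging.INFO)
security_log.addHandler(siem_handler)

# Example event: a failed login attempt the SIEM can correlate with other sources.
security_log.warning("failed_login user=%s source_ip=%s", "alice", "203.0.113.7")
```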
Zero Trust Principles
Enforce least privilege and continuous authentication.
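A minimal sketch of the "never trust, always verify" idea, assuming JWT bearer tokens and the PyJWT library; the signing key and the scope claim are illustrative:

```python
# Minimal sketch: verify identity and authorization on every request instead of
# trusting the network perimeter. Assumes JWT bearer tokens and the PyJWT library;
# the signing key and the "scope" claim are placeholders.
import jwt  # PyJWT

SIGNING_KEY = "replace-with-key-from-a-vault"  # never hardcode keys in real code

def authorize_request(token: str, required_scope: str) -> bool:
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False  # expired, tampered, or wrongly signed token: deny
    # Least privilege: the token must explicitly grant the scope for this operation.
    return required_scope in claims.get("scope", "").split()

# Every endpoint re-checks the token; nothing is trusted just because it is "inside" the network.
```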
Security Training
Educate developers on secure coding and common vulnerabilities.
Early Security Integration
Embed security checks early in development.
Sample: Use SAST tools like SonarQube integrated into the IDE or CI pipeline.
Reduced Risk
Continuously detect vulnerabilities to prevent surprises.
Sample: Weekly vulnerability reports generated by OWASP Dependency-Check.
Improved Compliance
Automate audit trails and enforce security policies.
Sample: Use OPA/Gatekeeper to enforce Kubernetes security policies automatically.
Cost Efficiency
Fix security issues early to reduce expensive fixes later.
Sample: Automated unit tests that include security assertions to catch issues upfront.
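As a minimal sketch (the validator and tests below are hypothetical), such security assertions in a unit test suite might look like:

```python
# Minimal sketch: unit tests asserting security behaviour of a hypothetical input validator.
import re
import unittest

def is_safe_filename(name: str) -> bool:
    # Reject path traversal and anything outside a conservative whitelist.
    return bool(re.fullmatch(r"[A-Za-z0-9._-]{1,64}", name)) and ".." not in name

class SecurityAssertions(unittest.TestCase):
    def test_rejects_path_traversal(self):
        self.assertFalse(is_safe_filename("../../etc/passwd"))

    def test_rejects_null_bytes_and_separators(self):
        self.assertFalse(is_safe_filename("report\x00.pdf"))
        self.assertFalse(is_safe_filename("a/b.pdf"))

    def test_accepts_normal_names(self):
        self.assertTrue(is_safe_filename("report_2024-Q1.pdf"))

if __name__ == "__main__":
    unittest.main()
```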
Higher Quality Code
Detect and fix security bugs during coding.
Sample: Use ESLint security plugins combined with peer code reviews.
Automated Governance
Enforce security policies as code with automated pipeline gates.
Sample: Fail builds automatically on high severity security issues using GitLab CI security gates.
Continuous Monitoring
Detect threats and anomalies in real time.
Sample: Deploy Falco for runtime threat detection in Kubernetes or OpenShift clusters.
5.1 Definition: Shift Left Security in Agile Engineering is the proactive integration of security practices, tooling, and expertise into the earliest stages of agile development, starting from persona definition, user story formulation, architecture, detailed design and coding, so that security is embedded into engineering workflows.
This approach ensures that security considerations evolve in parallel with functional/non-functional requirements, enabling iterative detection, remediation, and prevention of vulnerabilities as part of routine sprint activities, rather than as a separate post-development phase.
5.2 Persona definition: Hacker Focus
During this phase, we treat the hacker as a persona of our solution, with clear objectives mapped to the solution's potential vulnerabilities and the business capabilities they target.
At this point, we have the following practices:
Persona Identification: Omar G, the hacker
Persona values:
Steal sensitive data
Steal assets
Impact reputation
Content crawling
Persona Characteristics:
Tech expert
Patient
Multi-vector capabilities across OS, infrastructure, database, and application (development) layers
5.3 User story Formulation
During this phase, we formulate the security-related user stories, both functional and non-functional, considering the CIA triad mentioned in lecture 01.
As a brief recap of the CIA triad, we generate user stories for each of the three aspects:
Confidentiality: Ensuring only the right people can see the right data — protect secrets from prying eyes through encryption, access control, and classification.
Integrity: Making sure data and systems remain correct, untampered, and trustworthy — protect against unauthorized changes using hashing, digital signatures, and validation.
Availability: Keeping systems and data accessible when needed — prevent downtime with redundancy, failover, and resilient design.
5.4 Business Governance Model
To govern the solution's security model, you should define a concrete classification of transactions so that the proper security controls can be applied.
The golden rule here is: scan your operation types.
During this activity, you identify and classify each operation within your solution and assign the proper security control; this should be aligned with the business team when designing the user experience, to balance usability and security requirements.
Sample transaction classification:
Authentication: Require OTP, CAPTCHA
Payment: Require OTP
Assets transfer: Require OTP
Data crawling: Require CAPTCHA
Sensitive data exposure: Require OTP
Financial reports: Require OTP
Note: The governance model above can also be applied within a contract as a security governance model.
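A minimal sketch of how the classification above could be encoded and checked in application code (the enforcement hook is hypothetical; a real system would wire this into the API gateway or backend):

```python
# Minimal sketch: the transaction classification above expressed as code,
# so the required controls can be enforced consistently. The enforcement hook
# is hypothetical.
REQUIRED_CONTROLS = {
    "authentication":          {"OTP", "CAPTCHA"},
    "payment":                 {"OTP"},
    "assets_transfer":         {"OTP"},
    "data_crawling":           {"CAPTCHA"},
    "sensitive_data_exposure": {"OTP"},
    "financial_reports":       {"OTP"},
}

def controls_satisfied(transaction_type: str, presented_controls: set[str]) -> bool:
    required = REQUIRED_CONTROLS.get(transaction_type, set())
    return required.issubset(presented_controls)

# Example: a payment request that has passed OTP verification is allowed.
assert controls_satisfied("payment", {"OTP"})
assert not controls_satisfied("assets_transfer", {"CAPTCHA"})
```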
5.5 User story samples with full lifecycle impact
5.6 Business requirements tradeoff
Definition: Requirements trade-offs happen when fulfilling one requirement negatively impacts another.
Examples of security contradictions:
Strong security controls may reduce usability (e.g., MFA slowing onboarding).
Strict input validation can slightly degrade performance.
Root cause: Conflicting priorities between business needs, technical limitations, and compliance obligations.
Resolution process:
Elicit requirements via workshops with both business and technical stakeholders.
Perform risk assessment to quantify the impact of relaxing or strengthening each requirement (see the worked example after this list).
Apply security-by-design and defense in depth to mitigate risks without removing protections.
Define measurable acceptance criteria to ensure the chosen trade-off is testable.
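As a small worked example of the risk-assessment step (all options and scores below are illustrative), each trade-off option can be scored as likelihood × impact:

```python
# Minimal sketch: quantify trade-off options as risk = likelihood x impact (1-5 scales).
# The options and scores are illustrative, not prescriptive.
options = {
    "Mandatory MFA at onboarding":       {"likelihood": 2, "impact": 5, "usability_cost": 3},
    "MFA only for high-risk operations": {"likelihood": 3, "impact": 5, "usability_cost": 1},
    "No MFA (password only)":            {"likelihood": 5, "impact": 5, "usability_cost": 0},
}

for name, option in options.items():
    residual_risk = option["likelihood"] * option["impact"]
    print(f"{name}: residual risk={residual_risk}, usability cost={option['usability_cost']}")

# The team then picks the option whose residual risk is acceptable at the lowest usability cost,
# and turns that decision into measurable acceptance criteria.
```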
Fuzz Testing Process
1. Target System Identification
Purpose: Define the scope of testing to avoid unnecessary or unauthorized activity.
Actions:
Determine whether the target is an application, API, network service, or embedded software.
Document version, environment, and dependencies.
Confirm test permissions and approvals (security & legal compliance).
2. Input Identification
Purpose: Locate all points where external data enters the system.
Actions:
Review technical documentation and architecture diagrams.
Enumerate APIs, file parsers, network ports, form fields, configuration files, or command-line parameters.
Prioritize high-risk inputs based on sensitivity and exposure.
3. Generate Fuzzed Data
Purpose: Simulate unexpected or malicious inputs.
Actions:
Use fuzzing frameworks (e.g., AFL, libFuzzer, Peach Fuzzer).
Create malformed data (broken formats, invalid headers, encoding issues).
Add random data for unpredictability.
Include boundary cases (max length, zero length, extreme values).
4. Execute Fuzzed Data
Purpose: Test system resilience under unexpected conditions.
Actions:
Inject fuzzed inputs into the target via identified input vectors.
Control the execution pace to avoid denial-of-service from excessive load (unless intentional).
Automate execution for repeatability.
5. Observe System Behavior
Purpose: Detect abnormal responses indicating vulnerabilities.
Actions:
Monitor CPU, memory, and I/O usage.
Watch for crashes, hangs, timeouts, or incorrect outputs.
Enable debugging, logging, and instrumentation for deep insights.
6. Log Detected Defects
Purpose: Ensure every issue is traceable and reproducible.
Actions:
Record input payloads that caused anomalies.
Capture logs, stack traces, and system state snapshots.
Classify severity and potential exploitability.
Share findings with developers for remediation and retesting.
Sample Fuzz Testing Scripts
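As a sample script, here is a minimal sketch of a mutation-based fuzzer for an HTTP JSON endpoint; the target URL and fields are placeholders, and real campaigns would rely on coverage-guided frameworks such as AFL or libFuzzer:

```python
# Minimal sketch: a simple mutation-based fuzzer for an HTTP JSON endpoint.
# The target URL and field names are placeholders; only fuzz systems you are authorized to test.
import random
import string
import requests

TARGET = "https://test-env.example.com/api/orders"   # placeholder test-environment endpoint
SEED_PAYLOAD = {"quantity": 1, "note": "hello"}

def mutate():
    """Produce malformed values: wrong types, extreme lengths, odd encodings, boundary cases."""
    choices = [
        None,
        "",
        "A" * random.choice([1, 255, 65536]),                      # length boundaries
        "".join(random.choices(string.printable, k=64)),            # random noise
        -1, 0, 2**31, 2**63,                                        # numeric boundaries
        "'; DROP TABLE orders;--", "<script>alert(1)</script>",     # classic injection strings
        "\x00\xff\xfe", "%s%s%s%n",
    ]
    return random.choice(choices)

def run_campaign(iterations: int = 200) -> None:
    for i in range(iterations):
        payload = dict(SEED_PAYLOAD)
        field = random.choice(list(payload))
        payload[field] = mutate()
        try:
            resp = requests.post(TARGET, json=payload, timeout=5)
            # 5xx responses or hangs are candidate defects: log the exact input for reproduction.
            if resp.status_code >= 500:
                print(f"[defect] iteration={i} payload={payload!r} status={resp.status_code}")
        except requests.RequestException as exc:
            print(f"[defect] iteration={i} payload={payload!r} error={exc}")

if __name__ == "__main__":
    run_campaign()
```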
Chaos Testing (Chaos Engineering) is the discipline of deliberately injecting failures into your system to verify whether it can withstand them and continue delivering service. Instead of assuming “all good,” you stress the system the way real-world conditions would — outages, latency, spikes — and learn its breaking points before your customers do.
Resilience validation → Systems are complex; one broken component can cascade. Chaos testing ensures your architecture is designed for graceful degradation.
Confidence building → Teams sleep better knowing the system has already survived fire drills.
Culture shift → Moves organizations from reactive firefighting to proactive resilience engineering.
Define steady state → Measure what “normal” looks like (e.g., requests/sec, error rate, latency).
Form hypotheses → Predict: If service X dies, the system should still serve 80% of traffic.
Inject controlled failure → Kill pods, throttle CPU, add network jitter, disable DNS, or break an external API.
Start small (blast radius) → Begin with a single container, not the whole cluster. Scale experiments gradually.
Run in production (carefully) → Staging ≠ production. Real resilience is tested live, but with guardrails.
Automate & integrate → Use tools (e.g., Gremlin, Chaos Mesh, LitmusChaos) and add chaos tests to CI/CD.
Observe deeply → Collect logs, metrics, traces — resilience without observability is blind chaos.
Prioritize safety → Rollback plans, monitoring alerts, and team alignment before pulling the plug.
Continuous learning → Document findings, adjust architecture, and iterate.
Build resilience culture → Encourage teams to embrace failure as a teacher, not an enemy.
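A minimal sketch of a small-blast-radius experiment in the spirit of tools like Chaos Mesh or LitmusChaos, assuming a Kubernetes cluster and the official kubernetes Python client (the namespace and label are hypothetical):

```python
# Minimal sketch: a small-blast-radius chaos experiment that deletes ONE pod of one service
# and relies on the Deployment/ReplicaSet to recover it. Namespace and label are hypothetical;
# run only with team alignment, monitoring in place, and a rollback plan.
import random
from kubernetes import client, config

NAMESPACE = "staging"
LABEL_SELECTOR = "app=checkout-service"

def kill_one_pod() -> None:
    config.load_kube_config()          # or config.load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR).items
    if not pods:
        print("No matching pods; nothing to do.")
        return
    victim = random.choice(pods)
    print(f"Deleting pod {victim.metadata.name}; steady-state metrics should stay within thresholds.")
    v1.delete_namespaced_pod(victim.metadata.name, NAMESPACE)

if __name__ == "__main__":
    kill_one_pod()
```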
Netflix → Built Chaos Monkey to randomly kill production instances. This grew into the Simian Army.
Amazon → Regularly shuts down services regionally to test multi-region failover.
Financial services → Simulate spikes in trading activity or payment gateway downtime to ensure graceful recovery.
Summary Rule (Dr. Ghoniem style):
Dr. Ghoniem Lawaty
Tech Evangelist @TechHuB Egypt