Home

  • How to get started in cybersecurity

    I’ve been in cybersecurity for 25+ years, and the most popular question I get is my recommendation for how people can get started.

    Here are six things that have impacted my career and helped me grow as a security person and a human being.

    1. Get a solid understanding of systems and networks. Systems and networks are the foundation for everything we do in security. If you want to be better at security, you must have a foundation in TCP/IP, both on the theoretical side and on the application side. The easiest way to get this experience is to become a systems administrator. The lessons I learned as a sysadmin allow me to speak about things in security, such as DNS, from hands-on experience rather than theory alone. Since I’ve had my hands on a DNS server and configured zones, I truly understand the security challenges of DNS.
    2. If you don’t have VirtualBox installed on your laptop, do it now. Virtual machines are your friend. You can configure different machines (Windows, Linux, etc.) with virtualization and connect them. You can practice with the best security distributions (Kali, Web Security Dojo, etc.) without getting arrested! Learn to use VirtualBox (open source, by the way) and build VMs to test things and learn how they work. Have a system administrator mindset.
    3. Read like crazy. In the security business, things are always changing. Whether new technologies (zero trust is the hot thing now) or new techniques, this industry is not stagnant. To get ahead in any industry, you must continue to grow and learn. Become an avid reader, and learn from the books you read. Even though college is over, take notes about the things that catch you in the book, and act upon them. Do not think you must limit yourself to non-fiction. Sprinkle fiction in as well to expand your mind.
    4. Learn to code in a minimum of one language. Code is the foundation of everything in security. Learn at least one object-oriented programming language proficiently. You’ll be amazed at how you can apply the knowledge of a single language to any other language.
    5. Take advantage of the training resources available on the Internet. There are many free security courses and training opportunities from universities (Stanford and MIT), YouTube, where all the local conferences are archived, and Codecademy. Take advantage of the content that is out there and learn from it. Pick a specific topic and focus on it for a month. If you know nothing about JavaScript, then learn it. You do not need the proficiency of someone who builds web applications daily, but the knowledge can be applied when the situation arises.
    6. Network, and not in a cheesy walk-around-and-hand-out-business-cards way. Make friends in the security industry. Do this on Twitter or Mastodon. Do not use Twitter as just a news feed. Respond to security people and get into conversations. The worst thing that happens is that they ignore you. Go to conferences, and don’t just stand in the back of the room like you are at a middle school dance. Introduce yourself to people, talk before the sessions start, and make yourself a part of the community. You’ll benefit greatly from the relationships you establish with real security people.
  • Security utility or what we all really want

    On a recent podcast episode of the Security Table, the gang and I discussed the LastPass breach and the impact of security products as utilities. We spent time unpacking the concept of a security utility.

    A security utility is like a municipal utility, say, your water service. You have expectations of your water service. You expect that water is available on the line as long as you regularly pay your bill. When you turn on the faucet or start the washing machine, you expect water to come out. A security utility is much the same — a security service that you rely upon to enhance your security stance and one that you expect to always work. You can also think of it as a product or service that, if it fails, results in the breakdown of your approach to securing your digital footprint.

    This discussion caused me to think deeper about the expectations of a security utility. It came down to three primary categories:

    1. Secure by default and in every situation.
      • When purchasing a security utility, expect that its creators have thought about security from every angle and in every situation. In the example of a password manager, expect that passwords will be securely stored in the cloud so that only the owner can access the passwords. Secured in such a way that the provider cannot even access the passwords.
    2. Simple to install/administer.
      • A security utility should be simple. Simple is generally more secure than complex, but in this case, with a security utility, end-users don’t want to spend time “figuring the service out.” This leads to the final category.
    3. Always works without any thought.
      • A security utility should always work. Consumers don’t want to think about how a security product or service will work; they expect it will always work. That is the beauty of the solution — you don’t have to think about it; it is always there, always protecting you.

    All of this is to say, I had a personal experience where utility, and to some degree security utility, came into play. I built my home Wi-Fi network using Ubiquiti gear a few years ago. I had a wireless controller, six access points distributed around the property, and a PoE switch that fed the access points. What I also had was a significant amount of complexity. I added a 48-port PoE switch to bring in additional wired connections, which broke the setup. The network was down a few times over a month, and I spent hours troubleshooting a phantom problem where devices would appear and disappear. When they disappeared, they stopped functioning but would reappear minutes later as if nothing had happened.

    I decided to adopt a utility approach, so I ripped out all this complex gear and replaced it with a mesh Wi-Fi solution. Instead of multiple layers of network and security devices, my Wi-Fi is now the gateway to my home network. I now have a solution that is secure by default, simple to work with, and always works, with no troubleshooting by me.

    For folks who build products, embrace the idea of security utility: anything you build should strive to become a security utility, easy to use by those who gain value from it, with minimal effort required to make it all work.

  • Looking back and forward on a twenty-five-year career in cybersecurity

    I got into security almost by accident. After graduating from university, my wife and I moved to Northern Virginia in 1997. There, I attended a job fair at a hotel. Standing in line to meet with a large government contractor, I noticed a room to my left, off the hallway. I saw a guy sitting, typing on a laptop, and talking to nobody. I thought, “Huh, this line will be here when I get back; let me see what this guy is up to.” I walked up to him and struck up a conversation. And just like that, I met Mike Weidner.

    During the conversation, Mike explained how they were looking for a system administrator, and I began the interview process then and there. After some follow-on interviews at their offices at Boone Blvd in Tysons Corner, Virginia, I was offered the job. Keep in mind that I had no idea what a “security” company did, but there I was.

    The next few years were incredible, as I was transformed from a system administrator into a security engineer. I stood on the shoulders of giants in the industry, learning from Marv Schaefer, Gary Grossman, Chuck Pfleeger, Victoria Thompson, Stan Wisseman, Bill Wilson, and many others. They taught me the true meaning of threats and how threats manifest in different pieces of a system. I worked alongside people who have grown into giants in the industry — Doug Landoll, Diann Carpenter, Jeff Williams, and Klayton Monroe. They pushed me to keep up and learn everything thrown my way.

    My first big takeaway from my Arca experience is the importance of pouring into the next generation. I look for opportunities to mentor newer security folks every chance I get. I’ve made this a priority since my days at Arca. It’s not like I came up with this on my own — I’ve reflected on what I was taught by all the Arca industry giants — pour into others, as the return is great.

    As the Internet bubble was in full swing, Arca was acquired not once but twice. The first acquisition didn’t stick, as the acquiring company had some issues, and we landed as the new security consulting arm of Exodus Communications. Exodus was the company that everyone used, though most had no idea. Exodus, at its peak, had 44 data centers worldwide, and three out of every four clicks on the Internet went through our data centers.

    I started as a Senior Security Consultant and migrated into Incident Response. Exodus brought together a group of ex-FBI agents and people like me who had been using computers for decades. We investigated many major breaches and worked on industry-wide security events under the guise of the Cyber Attack Tiger Team, or CATT. When the Internet bubble burst, so did Exodus, and I was back on the move.

    I had a few different jobs across Arca and Exodus, all without ever leaving the company. My takeaway from this time was to reinvent yourself often. I went from consulting to Incident Response and learned a whole new discipline. I leaned into all the Exodus CATT team members could teach me — learning from trained investigators from the FBI and other intelligence agencies. Reinvent and never stop learning. Our field is so large that we must all continue to learn new things.

    After Exodus, I had a brief and uneventful stint at a Government Contractor. From there, I landed at Cisco and worked on Common Criteria and FIPS 140 certifications for five years. After leaving certifications and joining an inward-facing product security team, I was challenged to bring threat modeling to the whole engineering organization. I dove deep into threat modeling, grasping how to perform it. I was the Chief Security Advocate at Cisco for five years, and one of the things I did there was work with Erick Lee to define requirements for Cisco’s threat modeling tool (which Erick did all the coding for). Afterward, I worked toward distributing the tool to engineers across the company. We saw success in our efforts as threat modeling became a prominent piece of Cisco’s Secure Development Lifecycle (CSDL).

    After a few years, I left Cisco to start Security Journey. First, I built a product that teaches developers and product-adjacent people the foundational, intermediate, and advanced facets of application security. Then, I led Security Journey to an exit in 2022. (If you are wondering, in my next chapter as CEO of Kerr Ventures, I’ll split my time between startup investing/advising, consulting, and incubating my next idea(s).) I’ve written all my learnings from Security Journey in a series of yearly posts that you’ll find on the Kerr Secure Blog.

    When I ponder all the different job functions and opportunities that I had, I realize that while the technical side of security is important, it’s the easier part of what I’ve accomplished. The soft skills of leadership are things that I developed by watching excellent leaders lead. From my managers at Arca to Tom Sweeney at Cisco, I learned how to lead by watching others lead well. I’ll never forget my favorite Tom Sweeney quote — “Judge yourself not by the number of people you manage, but by how many managers you create.” This embodied Tom’s philosophy — pour into people.

    Back on the security side, as I think about everything I’ve seen on my personal security journey, it feels like we’ve come so far, yet we have so much further to go. In my time, we’ve gone from a client-server desktop-focused world to fully networked, containerized, and cloud-native delivery. The technology we rely upon has revolutionized how we deliver IT and applications, yet we still have threats. The threat landscape has changed over time as new attackers enter the scene, new attack scenarios are dreamed up and implemented, and the number of interconnected devices has exponentially grown. At the end of the day, it still comes back to threats. The threats are consistent across the decades, and the need to understand them and mitigate them continues to be a priority.

    In my career around application security, I’ve watched the industry go from waterfall to Agile to DevOps. When I ponder the application security impacts, we are doing the same activities with DevOps that we did with waterfall, just at an increased pace. Threat modeling is important in all methodologies, just like SAST, DAST, SCA, CVA, and RASP should be incorporated into every application security program today. While some things change, some things stay the same.

    I often wondered in the early days of my career if we would run out of threats and put ourselves out of jobs by solving all the security challenges. I thought there was a chance in those early days, but as the years went by, I understood that I’d retire from this industry someday, and the threats would still exist. They would look different than when I started, but there will be plenty of room for engineers to continue to tackle the newest threats that impact the human beings of the Internet.

    We still need more people in our industry. We can argue about how many people we need, but we can all agree that we need more people. I don’t see myself moving away from the world of security. I dream of being like my friend Brook Schoenfield when I grow up, continuing to share my knowledge and experience across our industry. Brook calls himself the “Elder Statesman of AppSec.” Someday, I hope to be invited to that group. I feel like I still have much more to learn.

  • The Security Champion Framework

    After the release of the Threat Modeling Manifesto, which was a gigantic success, both as a collaborative working group amongst fifteen Threat Modeling experts, and as the seminal work defining the essence of threat modeling, Marc French, Adam Shostack, and I talked about what we could do next to improve the world of application security.

    We identified a need for a document that helps people new to Security Champions build a program. We created a new working group to produce a work similar to the Threat Modeling Manifesto for Security Champions.

    We waffled between creating a Security Champion Manifesto, a framework, a book, or a series of blog posts. Ultimately, the Champion group lost steam due to circumstances that took various team members away from the project, including me, at the time.

    From the beginning of the effort, I had envisioned that the output should be a framework and a maturity model for Champion programs. My vision was a document that would capture various maturity levels across the most important pillars of a program.

    My thoughts crystallized in various talks I delivered in 2022. I started at RSA with a talk entitled “Elite Security Champions Build Strong Security Culture in a DevSecOps World,” a recording of which can be found on YouTube: https://youtu.be/9gVM93a1H1I. I delivered a refined version for the ISC2 Security Congress, where I fleshed out the initial categories of the Security Champion Framework.

    The framework categories are based on my experience running a large-scale Champion program at Cisco from 2011-2016 and consulting and advising various Champions programs as a consultant from 2017-2022. I’ve collected feedback from other practitioners and program builders. I want the framework to be larger than my experience and capture the experience of experts from around the globe.

    Using the words of the framework itself, “The Security Champion framework exists as a measuring stick and a roadmap. As a measuring stick, the framework allows leaders to measure how well their champions program performs. As a roadmap, the leader can use the measurements as input and build a plan to improve their program by applying updates towards a higher framework level.”

    Five high-level areas divide the framework, with one to four sub-areas within each area.

    Planning: the activities needed to scope and build a strategy.
    People: recruiting, retaining, capturing commitment, and onboarding new champions.
    Marketing: the branding of the program and communication plans.
    Execution: the program pillars, coaching, education, and globalization efforts.
    Measurement: metrics for demonstrating the value generated by the program.

    The Framework is released under a Creative Commons ShareAlike 4.0 license. We are accepting PRs as feedback and additions to the framework. Please dive deeply into the framework, put it into use, and let us know of anything we missed: https://github.com/edgeroute/security-champion-framework

  • Saying goodbye to Security Journey

    Dear Security Journey,

    As I prepare to depart the company, I’ve had time to reflect on what Security Journey means and why Deb and I built it the way we did. We started in 2016 with me as a traveling security consultant and moved into a product company with eighty customers before the acquisition.

    We started Security Journey to change how the industry approaches security training. We focused on providing an experience that changes security culture rather than checking a training box. We built something that our customers needed, and we based it on real-world experience.

    Security Journey was born from all I learned at Cisco about creating an excellent educational experience. It all starts with the content, whether video or hands-on. The content must drive everything you do now and into the future. The content’s quality sets Security Journey apart from the competition. Of course, the platform is essential, as the administrators need features to execute their programs. Still, the content makes Security Journey stand apart, and each learning unit represents a small step toward changing security culture.

    Our approach has been to create the most excellent technical content and boil topics down to the most critical pieces a developer needs to know. So I am excited to see where the content engine goes in the future.

    Fight for our Customers; from Customer Success to Sales to Product to Marketing to Engineering, keep the Customer at the center of everything you do. Partner with them, learn from them, and do what’s right for them.

    In my mind, I’ve completed the task I set out to achieve: build a product that changes security culture. Now I leave the future to you as Security Journey team members. Take this product further than I ever imagined; use your creativity, knowledge, and skills to continue to revolutionize the industry. I have faith in you. I’ll be cheering you on from a distance.

    As a Board Member, I’m available to do anything I can to help you succeed. So feel free to reach out if there is anything that I can do to help you achieve your mission.

    Thanks,
    Chris Romeo
    Co-Founder of Security Journey

  • Thirty-one random #AppSec Thoughts

    Over October 2022, for NCSAM, I shared thirty-one random AppSec thoughts. For some of these thoughts, I’ve pointed to other writings I’ve prepared covering the topics more in-depth.

    1. Shift left and shift right = Secure Development Lifecycle. What’s old is new again.
    2. With all the attention on API security, we often overlook the OWASP API Security Top Ten.
    3. The OWASP Proactive Controls is the answer to how to fix/avoid the issues of the OWASP Top Ten.
    4. #ThreatModeling is analyzing representations of a system to uncover the security and privacy challenges that exist.
    5. Why do we have so many 4-letter acronyms in #AppSec? SAST, DAST, IAST, and RASP, oh my!
    6. Security culture is measured by what your developers do with a security problem when nobody is looking.
    7. Security champions are a force multiplier for your security team.
    8. Imagine a future where all developers are security enlightened.
    9. Everyone is a security person — no matter your functional role within the organization, you own a piece of the security solution.
    10. Security culture eats strategy for breakfast.
    11. People, process, tools, and GOVERNANCE. Governance is the piece that everyone always forgets.
    12. Security should never be a gatekeeper — security should open doors, not shut them.
    13. Regardless of how much effort you put into breaking, security is no better until the builders engage.
    14. OWASP is a treasure trove of security resources.
    15. Break the build for vulnerable open-source and third-party components, but provide a filtering mechanism to allow builds to progress when there is no known fix.
    16. Shift {left, right, outwards} – just start.
    17. The Sec in #DevOps is silent.
    18. GitHub is a terrible place to store secrets.
    19. Teach the developers the underlying principles of the tools, and then watch how the tools magnify #AppSec.
    20. Security requires the ability to sell and market. The best security people can explain their idea well, share the value prop, and communicate.
    21. Developers take pride in their craft — they want to create more secure code — you must show them the way.
    22. Pipelines are the best way to represent a #DevSecOps build pipeline.
    23. Developers must understand all the component pieces of the build pipeline. Developers are smart — they’ll provide feedback on those pieces and help tune the tools.
    24. Launching a new security tool does not mean we enable every policy — increasing the fidelity of the security tool results in developer buy-in.
    25. Security people must learn how to code. The resources on the Internet are vast, and the excuses for why you don’t need to code are few.
    26. The DevSecOps Maturity Model (DSOMM) is an assessment tool for your DevSecOps and a builder of roadmaps.
    27. As a security professional, drop the no; try “yes, if.” Be a partner and not a roadblock.
    28. Threat modeling uncovers design-related issues, and the best threat modeling tool is the human brain.
    29. Guard rails are a better strategy than roadblocks. Provide limits, and encourage creativity within the boundaries.
    30. At the end of the day, #AppSec is a people-based solution, with support from the process and the tools.
    31. We have too many data streams in an #AppSec program; look for tools to correlate and consolidate the streams into developer-usable results.
  • OWASP Proactive Controls: the answer to the OWASP Top Ten

    The OWASP Proactive Controls is one of the best-kept secrets of the OWASP universe. Everyone knows the OWASP Top Ten as the top application security risks, updated every few years. The OWASP Proactive Controls is the answer to the OWASP Top Ten. Proactive Controls is a catalog of available security controls that counter one or many of the top ten.

    For example, Injection is a famous top ten item, having lived within the OWASP Top Ten since its inception. One still-prevalent category of Injection is SQL Injection. The counter to SQL injection from the Proactive Controls is “C3: Secure Database Access,” along with other controls. C3 prescribes secure queries, configuration, authentication, and communication for database transactions. These techniques work together to prevent data loss due to SQL Injection.

    Here is a map between the OWASP Top Ten 2021 and the Proactive Controls updated in 2018.

    OWASP Top Ten 2021 → Proactive Controls 2018
    A01:2021-Broken Access Control → C7: Enforce Access Controls
    A02:2021-Cryptographic Failures → C2: Leverage Security Frameworks and Libraries; C8: Protect Data Everywhere
    A03:2021-Injection → C3: Secure Database Access; C4: Encode and Escape Data; C5: Validate All Inputs
    A04:2021-Insecure Design → No mapping
    A05:2021-Security Misconfiguration → No mapping
    A06:2021-Vulnerable and Outdated Components → C2: Leverage Security Frameworks and Libraries
    A07:2021-Identification and Authentication Failures → C6: Implement Digital Identity
    A08:2021-Software and Data Integrity Failures → C2: Leverage Security Frameworks and Libraries; C4: Encode and Escape Data; C5: Validate All Inputs
    A09:2021-Security Logging and Monitoring Failures → C9: Implement Security Logging and Monitoring; C10: Handle All Errors and Exceptions
    A10:2021-Server-Side Request Forgery → C5: Validate All Inputs

    Let’s explore each of the OWASP Top Ten, discussing how the pieces of the Proactive Controls mitigate the defined application security risk.

    A01 Broken Access Control

    Broken Access Control is when an application does not correctly implement a policy that controls what objects a given subject can access within the application. An object is a resource defined in terms of the attributes it possesses, the operations it performs or that are performed on it, and its relationship with other objects. A subject is an individual, process, or device that causes information to flow among objects or changes the system state. The access control or authorization policy mediates which objects a given subject can access. In the worst cases, authorization is forgotten and never implemented.

    The Proactive Controls offer C7: Enforce Access Controls, which describes Discretionary, Mandatory, Role-based, and Attribute-based access control strategies. The document also provides a solid list of design principles for access control.

    Action: Attribute-based access control (ABAC) is the most mature and capable of all the strategies. Use ABAC as your template, and apply the access control design principles.
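
    As a hedged sketch of how an attribute-based policy can look in code, deny by default and allow an action only when an explicit rule matches. The attribute names and rules below are my illustrations, not part of the Proactive Controls:

```python
# A deny-by-default attribute-based access control (ABAC) check.
# Attribute names and rules are illustrative assumptions, not a standard.

def abac_allow(subject: dict, obj: dict, action: str) -> bool:
    """Allow only when an explicit rule matches; otherwise deny."""
    rules = [
        # Owners may read and update their own records.
        lambda s, o, a: a in ("read", "update") and s["id"] == o["owner_id"],
        # Auditors may read records in their own region.
        lambda s, o, a: a == "read"
        and s.get("dept") == "audit"
        and s.get("region") == o.get("region"),
    ]
    return any(rule(subject, obj, action) for rule in rules)

alice = {"id": 1, "dept": "engineering", "region": "us"}
record = {"owner_id": 1, "region": "us"}

print(abac_allow(alice, record, "update"))  # True: subject owns the record
print(abac_allow(alice, record, "delete"))  # False: no rule grants delete
```

    The deny-by-default shape is the important part; real systems express the rules in a policy engine rather than inline lambdas.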

    A02 Cryptographic Failures

    Cryptographic failures are breakdowns in the use of cryptography within an application, stemming from the use of broken or risky crypto algorithms, hard-coded (default) passwords, or insufficient entropy (randomness). A broken crypto algorithm has a flaw within its implementation that weakens the resulting encryption. A risky crypto algorithm may be one that was created years ago, and the speed of modern computing has caught up with it, making it breakable with modern computing power. A hard-coded or default password is a single password added to the source code and deployed wherever the application is executing. With a default password, if attackers learn of the password, they can access all running instances of the application. Insufficient entropy is when crypto algorithms do not have enough randomness as input, resulting in encrypted output that could be weaker than intended.

    The Proactive Controls offer two solutions for cryptographic failures: C2: Leverage Security Frameworks and Libraries and C8: Protect Data Everywhere. C2 prescribes the use of known good security frameworks and libraries. The same holds true for the source of cryptographic algorithms and implementations. Find a reliable, trusted source for your cryptographic functions, and rely upon that implementation to deliver your application’s cryptographic needs. C8 calls for data protection everywhere, both in transit between your application and users and while resting in a database.

    Action: Find a solid, trusted crypto library for your application and encrypt sensitive data in transit and at rest.
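
    To make that concrete, here is a minimal sketch using Python’s standard library, assuming PBKDF2 is an acceptable choice for password storage; the iteration count and salt size are common defaults, not a prescription from the Proactive Controls:

```python
import hashlib
import secrets

# Hedged sketch: PBKDF2-HMAC-SHA256 from a vetted library (the standard
# library), with a random salt from the OS CSPRNG to avoid insufficient
# entropy, instead of a homegrown scheme.

def hash_password(password: str, iterations: int = 600_000):
    salt = secrets.token_bytes(16)  # 128-bit random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids timing side channels.
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```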

    A03 Injection

    An injection occurs when improperly validated input is sent to a command interpreter. The input is interpreted as a command and processed, performing an action under the attacker’s control. Injection-style attacks come in many flavors, from the most popular, SQL injection, to command, LDAP, and ORM injection.

    Proactive Controls has a plethora of options to deal with injection. C3: Secure Database Access was tailor-made to counter SQL injection. C3 includes securing the queries with prepared statements and using object-relational mapping libraries to isolate the raw creation of SQL queries. Secure configuration, authentication, and communication protect the database server and the database’s connection parameters. C4: Encode and Escape Data protects any data that is reflected back to the client, and C5: Validate All Inputs ensures that the input that comes inbound is validated using various rules.

    Action: Validate all input, and avoid exposing any input directly to an interpreter.
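
    A small sketch of the prepared-statement idea from C3, using Python’s built-in sqlite3 module (the table and payload are illustrative):

```python
import sqlite3

# Parameterized queries bind user input as data, never as SQL text,
# which is the heart of C3: Secure Database Access.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Unsafe pattern (never do this): building SQL by string concatenation
# would let the payload above match every row in the table.
# unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"

# Safe pattern: the ? placeholder treats the payload as a literal name.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matches no user instead of every user
```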

    A04 Insecure Design

    Insecure design focuses on design and architectural flaws. Many future vulnerabilities can be prevented by thinking about and designing for security earlier in the software development life cycle (SDLC).

    There is no specific mapping from the Proactive Controls for Insecure Design. The Top Ten calls for more threat modeling, secure design patterns, and reference architectures. Threat modeling analyzes a system representation to mitigate security and privacy issues early in the life cycle. Secure design patterns and reference architectures provide a positive, secure pattern that developers can use to build new features.

    Action: Threat model everything in sight and use secure design patterns and reference architectures as the foundation for anything new.

    A05 Security Misconfiguration

    Security misconfiguration is when an important step to secure an application or system is skipped intentionally or forgotten. Examples of security misconfiguration include missing appropriate security hardening, unnecessary features being enabled or installed, default accounts being enabled and unchanged, error handling bleeding information, disabling security features, or setting application frameworks to insecure settings.

    There is no specific mapping from the Proactive Controls for Security Misconfiguration. The Top Ten calls for a repeatable hardening procedure as the foundation of how to counter security misconfiguration. C2: Leveraging Security Frameworks and Libraries could result in choosing frameworks with secure default configurations, which would counter various security misconfigurations.

    Action: Harden everything within your environment, and look for frameworks with secure defaults.

    A06 Vulnerable and Outdated Components

    The world of software is made up of various libraries and frameworks. Developers write only a small amount of custom code, relying upon these open-source components to deliver the necessary functionality. Vulnerable and outdated components are older versions of those libraries and frameworks with known security vulnerabilities.

    An application could have vulnerable and outdated components due to a lack of updating dependencies. A component, in this case, was added at some point in the past, and the developers do not have a mechanism to check for security problems and update their software components. Sometimes developers unwittingly download components that ship with known security issues.

    While the Proactive Controls don’t have direct instructions to update dependencies often, there is a close match with C2: Leverage Security Frameworks and Libraries. The critical thing to remember is that when you use frameworks and libraries, you must instill a plan to check for known vulnerabilities and update those dependencies. Dependencies link to other dependencies, which results in an application relying upon thousands of packages with limited visibility for the developer into all those packages. Libraries and frameworks age like milk — they quickly curdle and need to be replaced with newer versions.

    Action: If you use a framework or library, check for known vulns and update often!

    A07 Identification and Authentication Failures

    Identification and authentication failures occur when an application cannot correctly resolve the subject attempting to gain access to an information service or properly verify the proof presented as validation of the entity. This issue manifests as a lack of MFA, allowing brute force-style attacks, exposing session identifiers, and allowing weak or default passwords.

    The Proactive Controls prescribes C6: Implement Digital Identity, which ties into the advanced authentication requirements presented in NIST 800-63b. Guidance is also provided about cookies, tokens, and the use of JWT.

    Action: Implement the highest level you can support from NIST 800-63b.
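
    As a small illustration of the 800-63b direction, the sketch below stores passwords with a memory-hard hash (scrypt) and a per-user random salt, and verifies them with a constant-time comparison. The cost parameters are illustrative, not a tuned recommendation.

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

# Minimal sketch: memory-hard password hashing with a random per-user
# salt, verified in constant time. Cost parameters are illustrative.

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Hash a password with scrypt; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, expected)
```

    The constant-time comparison matters: a naive `==` on digests can leak timing information to an attacker probing the authentication endpoint.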

    A08 Software and Data Integrity Failures

    Software and data integrity failures include issues that do not protect against integrity failures in software creation and runtime data exchange between entities. One example of a failure involves using untrusted software in a build pipeline to generate a software release. Another example is insecure deserialization, where an application receives an object from another entity and does not properly validate that object, resulting in an attack being loosed upon the application that received the object.

    Use the Proactive Controls for validation and encoding (C4: Encode and Escape Data and C5: Validate All Inputs) to prevent insecure deserialization. Properly encode output, and prefer simple data formats such as JSON over native object serialization. To protect the inputs in a build pipeline, rely upon C2: Leverage Security Frameworks and Libraries to ensure that you use the best possible components available, and enact methods to verify that you have the released version of the software and not something an attacker has tampered with.

    Action: Enhance integrity by validating the software you use, and avoid native object serialization; instead, use JSON as a text format for inter-system data exchange.
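
    As one way to apply this, the sketch below exchanges data as plain JSON and validates the parsed structure before use, rejecting anything that does not match the expected shape. The field names are illustrative.

```python
import json

# Minimal sketch: receive inter-system data as JSON text (never as a
# native serialized object like pickle) and validate the parsed
# structure. The expected field names are illustrative.
EXPECTED_FIELDS = {"user_id": int, "action": str}

def parse_message(raw: str) -> dict:
    """Parse and validate a message; reject unexpected shapes."""
    data = json.loads(raw)
    if set(data) != set(EXPECTED_FIELDS):
        raise ValueError("unexpected message shape")
    for field, kind in EXPECTED_FIELDS.items():
        if not isinstance(data[field], kind):
            raise ValueError(f"bad type for field {field!r}")
    return data
```

    Because JSON carries only data and no executable object state, the worst a malicious sender can do is fail validation, rather than trigger code execution during deserialization.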

    A09 Security Logging and Monitoring Failures

    Logging is the practice of storing a protected audit trail that allows an operator to reconstruct the actions of any subject or object that performs an action or has an action performed against it. Monitoring is the review of security events generated by a system to detect whether an attack has occurred or is currently occurring. A failure occurs when either of these activities is not performed correctly.

    The Proactive Controls provide C9: Implement Security Logging and Monitoring, with direct guidance on successful logging and on avoiding pitfalls that could turn the logging system itself into an attack surface. C10: Handle All Errors and Exceptions provides additional context for extending applications to surface error conditions, giving your logging and monitoring constructs visibility into the application.

    Action: Just log and monitor; it may be near the bottom of the list, but it is the only way to know what actions have been performed against your system.
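
    One concrete pitfall worth showing is log injection. The sketch below neutralizes CR/LF characters in user-controlled values before they reach the security log, so an attacker cannot forge extra log lines. The logger and event names are illustrative.

```python
import logging

# Minimal sketch: log security events while escaping CR/LF in
# user-controlled values to prevent log injection (forged log lines).
# The logger name and event format are illustrative.
logger = logging.getLogger("security")

def sanitize(value: str) -> str:
    """Escape newline characters so one event stays on one log line."""
    return value.replace("\r", "\\r").replace("\n", "\\n")

def log_auth_failure(username: str, source_ip: str) -> None:
    """Record a failed authentication attempt as a single audit line."""
    logger.warning(
        "auth_failure user=%s ip=%s", sanitize(username), sanitize(source_ip)
    )
```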

    A10 Server-Side Request Forgery (SSRF)

    A Server-Side Request Forgery (SSRF) occurs when an application is used as a proxy to access local or internal resources, bypassing the security controls that protect against external access.

    While the Proactive Controls do not specifically address SSRF, C5: Validate All Inputs includes the validation of URL parameters, which are the root of most SSRF vulnerabilities.

    Action: Validate any URLs allowed as input, and filter out any SSRF conditions.
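
    A minimal validation sketch might look like the following: allow only http(s), reject literal private, loopback, and link-local addresses, and require an allowlisted host. The allowlist is illustrative, and a production check would also need to handle DNS resolution (rebinding) and redirects.

```python
import ipaddress
from urllib.parse import urlparse

# Minimal sketch: validate a user-supplied URL before fetching it
# server-side. The host allowlist is illustrative. Note: this does not
# resolve DNS, so it cannot catch rebinding attacks on its own.
ALLOWED_HOSTS = {"images.example.com"}

def is_safe_url(url: str) -> bool:
    """True only for http(s) URLs pointing at an allowlisted host."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parts.hostname)
        # Literal IPs targeting internal ranges are classic SSRF probes.
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    except ValueError:
        pass  # hostname is a name, not a literal IP
    return parts.hostname in ALLOWED_HOSTS
```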

    Conclusion

    While the current OWASP Proactive Controls do not match up perfectly with the OWASP Top Ten for 2021, they do a fair job of advising on controls to add to your applications to mitigate the dangers the Top Ten describes.

    Ironically, the only Proactive Control that does not line up with one of the OWASP Top Ten 2021 items is C1: Define Security Requirements. C1 describes security requirements, points to the OWASP Application Security Verification Standard (ASVS) as a source, and describes a path for implementing security requirements. Proper security requirements can assist in limiting the blast radius of the OWASP Top Ten. Shifting left is a real thing, marketing term or not, and considering requirements early in the process of building something new lessens the impact of the OWASP Top Ten.

    References

  • The best threat modeling representation

    The Threat Modeling Manifesto defines threat modeling as “analyzing representations of a system to highlight concerns about security and privacy characteristics.” A representation is the foundation of threat modeling. It is the item that threat modelers use to capture the essence of the thing they are modeling. Without a solid representation, it is challenging to draw robust security or privacy conclusions.

    A representation takes many different forms:

    • Data flow diagram — a process diagram that simplifies in-depth flowcharting using a limited set of elements including process, data flow, data store, external entity, and trust boundary.
    • Attack tree — a conceptual diagram showing how an asset or target might be attacked.
    • Swim lane diagram — visually distinguishes job sharing and responsibilities for sub-processes of a business process.
    • Pseudo-code — a plain language description of the steps that code will eventually perform.
    • Napkin — the simplest form of representation could be a sketch scribbled on a napkin over a lovely lunch meeting.

    The takeaway is that while a representation can take many different forms, all representations have the same function. A representation exists to help the threat modeling team unlock the best possible set of threats and mitigations for whatever the representation represents.
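
    To make the idea concrete, a data flow diagram can even be captured as plain data. The sketch below models elements with trust zones and surfaces the flows that cross a trust boundary, which is where threats tend to concentrate. All element and zone names are illustrative, and this is far simpler than a real tool such as PyTM.

```python
from dataclasses import dataclass
from typing import List

# Minimal sketch: a data flow diagram as plain data, with a helper
# that finds flows crossing a trust boundary. Names are illustrative.

@dataclass(frozen=True)
class Element:
    name: str
    zone: str  # trust zone, e.g. "internet" or "internal"

@dataclass(frozen=True)
class DataFlow:
    source: Element
    sink: Element
    label: str

def boundary_crossings(flows: List[DataFlow]) -> List[DataFlow]:
    """Return the flows whose endpoints sit in different trust zones."""
    return [f for f in flows if f.source.zone != f.sink.zone]

# Example: a browser talking to a web app backed by a database.
browser = Element("Browser", "internet")
webapp = Element("Web App", "internal")
db = Element("Database", "internal")
flows = [
    DataFlow(browser, webapp, "HTTP request"),
    DataFlow(webapp, db, "SQL query"),
]
```

    Running `boundary_crossings(flows)` on the example surfaces only the HTTP request, the one flow that crosses from the internet into the internal zone.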

    Many threat modelers participated in a poll on LinkedIn, answering the question, “When threat modeling, what do you use to create a representation?”

    As you see, data flow diagrams are the most used representation. Avi Douglen and Izar Tarandach, well-known leaders in the world of threat modeling, espouse data flow diagrams’ use. Each of them also adds other representations to their explanations. Swim lanes and Wardley maps for Avi and Python code (PyTM) for Izar.

    DFDs most often for new models, lately been using swimlanes more (especially if devs already have them). I am also trying out / experimenting with a focused Wardley map as well, not really confident in this yet but it feels really powerful.

    Avi Douglen

    Whichever works best for the owners of the system to better express it via the model. If it is me, then probably DFDs. Or Python code!

    Izar Tarandach

    Steve Springett summarizes the benefits of multiple representations in his descriptions of how he uses DFDs and Attack Trees to best understand and discover the threats within the things he models.

    Why just one? DFDs and attack trees are commonly used in my threat models. In my experience, the DFD can inform the Attack Tree since the DFD will have all the assets and processes that can be attacked. The Attack Tree can then identify things in the DFD that were previously marked as out of scope and we can reevaluate if that truly is the case. But I always start out with a DFD.

    Steve Springett

    The takeaway here is that with threat modeling, there is no best representation. Use the representation that makes the most sense, and look for opportunities to add additional representations to your process. Expand your mind and your representations to make even better threat models. Don’t be afraid to use a new representation. The best threat modeling exercises a creative mind; it is part art and part science.

  • Threat modeling and a lack of tools

    I’ve been threat modeling for quite some time. I could argue that I started threat modeling at a high level in 1997, when I started my first job working for one of the first security consulting firms, Arca Systems. Threat modeling was a crucial part of everything we did, but we didn’t call it threat modeling or treat it as a separate process. It was simply how we were taught to think.

    I taught myself the formal threat modeling processes I eventually rolled out at a large tech company. I started with STRIDE and used that as the foundation for how I taught people the threat modeling process. We took that process and turned it into a tool that helped walk engineers through the process of threat modeling.

    The core principle of the tool was that it should allow an engineer to be the expert in the thing they were building, but not in security. The tool asked the engineer to create a representation, tag that model with attributes, and then consider the threats/develop mitigations. The tool led the engineers through the process.

    The tool’s goal was to put itself out of business over time. The tool was not designed to always be how threat models were performed. The hope was that engineers would learn the process without needing the tool to facilitate threat modeling.

    Watching this tool be utilized over time, I came to believe that tools could enhance any threat modeling program. I still hold true to what we wrote in the Threat Modeling Manifesto, “People and collaboration over processes, methodologies, and tools.”

    Humans must learn how to do threat modeling manually before adopting a technological solution. This is the same circumstance as when we code up a solution to a manual problem. If we do not find a manual solution to the challenge, we could end up coding something that misses the mark.

    When I asked the question about tooling usage within the world of threat modeling on LinkedIn, I was surprised to learn that most threat modelers are not using any tools. They are using a manual process. This leads me to ask the question, “why?”

    Is the lack of tool usage a statement about the current maturity of threat modeling in our industry or a comment about a deficiency in the available tools? We have multiple options in the commercial space for tooling, as well as PyTM and Threat Dragon from the open-source community. In my experience, it’s a combination of both.

  • The Hybrid Approach to Threat Modeling

    In the pursuit of studying the AppSec person and program in the wild, today’s research unpacks the voluntary-versus-mandatory debate on threat modeling.

    Different organizations take different approaches to implementing threat modeling. This is a crucial decision in the life of the AppSec program. The existing engineering and organizational culture have a definite impact on the choice. It is best not to cross the culture with this decision, as that will likely spell defeat for your new undertaking from the beginning.

    A LinkedIn poll suggests that most respondents approach threat modeling as voluntary but encouraged.

    LI Post: https://www.linkedin.com/feed/update/urn:li:activity:6975812523346341888/

    The Mandatory Approach

    Some make threat modeling a mandatory gate and prevent developers from moving a new feature into production if threat modeling is not performed. The status of the threat model is monitored in some way, and if the model has not been uploaded or attached to a ticket, the feature cannot progress through the pipeline.

    The mandatory approach requires a clear and concise process. As you are forcing teams to perform threat modeling, they will push back heavily if you do not clearly define what it is they must complete.

    Strengths of mandatory

    • Whatever you decree is unilaterally applied to all features.
    • Because everything is threat modeled, model creation expands beyond security to developers and the product adjacent.

    Weaknesses of mandatory

    • If the team is forced to perform a threat model, they may consider it a compliance artifact and put little effort into it.
    • The volume of threat models makes governance and review more difficult.

    The Voluntary Approach

    The other side of the coin is voluntary, where threat modeling is encouraged but does not act as a blocker for a new feature. With voluntary, developers and product adjacent may represent your security champions, with an existing passion and drive for security.

    Strengths of voluntary

    • Those that threat model have a desire to threat model.
    • Models will be of higher quality because voluntary effort is applied.

    Weaknesses of voluntary

    • Threat models may not be completed for crucial features.
    • Security vulnerabilities resulting from design issues could slip into production with no consideration.

    Not doing threat modeling at all

    Eight percent of respondents to the survey admitted to not doing any threat modeling. Yet threat modeling returns far more value than the effort it requires, and a culture where developers and the product adjacent understand and think about security is priceless.

    Strengths

    • Saving resources and limiting disagreements about workload between security and engineering.

    Weaknesses

    • Missing out on design-related issues can translate into vulnerabilities in production.
    • Design-related issues could be at the subsystem level (think authentication or authorization), resulting in total system/application compromise.

    The Hybrid Approach

    The Hybrid Approach was uncovered and shared by Julie Davila, CTO, Federal at Sophos, on the poll. She described the Sophos threat modeling approach as initially mandatory but only a gate if the new code or integration warrants it.

    Sophos focuses on empowering engineering teams via security champions to do much of the triaging and “0 to 80” work for scale. The AppSec team is generally involved with every “first” threat model, and then teams are expected to update it autonomously, with the ability to phone for help if needed. The engineering leaders endorse the shared responsibility model, so the effort is not purely bottom-up.

    According to Julie, threat modeling on every code or infrastructure change is neither practical nor ideal. The reasons to update a model, broadly speaking, include:

    • New data flow (having a solid data flow diagram is essential)
    • New infrastructure (e.g., an app suddenly starts using AWS SNS)
    • Changes to the type of data going through a system (e.g., a new feature introduces customer PII data)
    • Changes to encryption
    • Changes to prior assumptions surrounding a security framework, centralized logging, etc.

    Conclusion

    After considering the options in the poll, the hybrid approach is the best path forward for an organization. Hybrid combines the best of both worlds, keeping the necessary threat modeling control for new features while not pushing product teams into threat modeling as busywork. Make the hybrid approach the foundation of threat modeling in your application security program.