Home

  • The Hybrid Approach to Threat Modeling

    In the pursuit of studying the AppSec person and program in the wild, today’s research unpacks the voluntary-versus-mandatory debate on threat modeling.

    Different organizations take different approaches to implementing threat modeling. This is a crucial decision in the life of the AppSec program. The existing engineering and organizational culture has a definite impact on the choice. It is best not to cross the culture with this decision, as that will likely spell defeat for your new undertaking from the beginning.

    A LinkedIn poll suggests that most respondents approach threat modeling as voluntary but encouraged.

    LinkedIn post: https://www.linkedin.com/feed/update/urn:li:activity:6975812523346341888/

    The Mandatory Approach

    Some make threat modeling a mandatory gate and prevent developers from moving a new feature into production if threat modeling is not performed. The status of the threat model is monitored in some way, and if the model has not been uploaded or attached to a ticket, the feature cannot progress through the pipeline.

    The mandatory approach requires a clear and concise process. As you are forcing teams to perform threat modeling, they will push back heavily if you do not clearly define what it is they must complete.
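
    For illustration, here is a minimal sketch of what such a pipeline gate could look like. The ticket-system endpoint and the “threat-model” attachment label are hypothetical placeholders, not references to any particular tool.

    ```python
    # Hypothetical pipeline gate: fail the build unless the feature's ticket
    # has a threat model attached. Endpoint and label are placeholders.
    import sys
    import requests

    TICKET_API = "https://tickets.example.com/api/issues"  # hypothetical endpoint

    def has_threat_model(ticket_id: str) -> bool:
        resp = requests.get(f"{TICKET_API}/{ticket_id}/attachments", timeout=10)
        resp.raise_for_status()
        return any(a.get("label") == "threat-model" for a in resp.json())

    if __name__ == "__main__":
        ticket = sys.argv[1]
        if not has_threat_model(ticket):
            print(f"Ticket {ticket}: no threat model attached; blocking promotion.")
            sys.exit(1)  # non-zero exit fails the pipeline stage
        print(f"Ticket {ticket}: threat model found; gate passed.")
    ```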

    Strengths of mandatory

    • Whatever you decree is applied uniformly to all features.
    • Because everything is threat modeled, model creation expands beyond the security team to developers and product-adjacent roles.

    Weaknesses of mandatory

    • If the team is forced to perform a threat model, they may consider it a compliance artifact and put little effort into it.
    • The volume of threat models makes governance and review more difficult.

    The Voluntary Approach

    The other side of the coin is voluntary, where threat modeling is encouraged but does not act as a blocker for a new feature. With voluntary, the developers and product-adjacent roles who step up may become your security champions, bringing an existing passion and drive for security.

    Strengths of voluntary

    • Those who threat model have a genuine desire to threat model.
    • Models tend to be of higher quality because the effort behind them is voluntary.

    Weaknesses of voluntary

    • Threat models may not be completed for crucial features.
    • Security vulnerabilities resulting from design issues could slip into production with no consideration.

    Not Doing Threat Modeling at All

    Eight percent of respondents to the poll admitted to not doing any threat modeling. Yet threat modeling returns far more value than the effort it requires. Gaining a culture where developers and product-adjacent roles understand and think about security is priceless.

    Strengths

    • Saving resources and limiting disagreements about workload between security and engineering.

    Weaknesses

    • Missing out on design-related issues can translate into vulnerabilities in production.
    • Design-related issues could be at the subsystem level (think authentication or authorization), resulting in total system/application compromise.

    The Hybrid Approach

    The Hybrid Approach was uncovered and shared by Julie Davila, CTO, Federal at Sophos, in response to the poll. She described the Sophos threat modeling approach as initially mandatory, but remaining a gate only if the new code or integration warrants it.

    Sophos focuses on empowering engineering teams via security champions to do much of the triaging and “0 to 80” work for scale. The AppSec team is generally involved with every “first” threat model, and then teams are expected to update it autonomously, with the ability to phone for help if needed. The engineering leaders endorse the shared responsibility model, so the approach is not purely bottom-up.

    According to Julie, threat modeling every code or infrastructure change is neither practical nor ideal. The reasons to update a model, broadly speaking, include the following (a sketch of these triggers as a decision rule follows the list):

    • New data flow (having a solid data flow diagram is essential)
    • New infrastructure (e.g., an app suddenly starts using AWS SNS)
    • Changes to the type of data going through a system (e.g., a new feature introduces customer PII data)
    • Changes to encryption
    • Changes to prior assumptions surrounding a security framework, centralized logging, etc.
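
    Expressed as code, these triggers can become a simple decision rule. The sketch below is illustrative only; the ChangeSet fields are assumptions, not part of the Sophos process.

    ```python
    # Illustrative decision rule for when a threat model needs an update.
    from dataclasses import dataclass

    @dataclass
    class ChangeSet:
        new_data_flow: bool = False
        new_infrastructure: bool = False          # e.g., the app starts using AWS SNS
        data_classification_change: bool = False  # e.g., a feature adds customer PII
        encryption_change: bool = False
        assumption_change: bool = False           # security framework, central logging, etc.

    def needs_threat_model_update(change: ChangeSet) -> bool:
        """Return True when any of the broad triggers applies."""
        return any((
            change.new_data_flow,
            change.new_infrastructure,
            change.data_classification_change,
            change.encryption_change,
            change.assumption_change,
        ))

    # Example: a feature that introduces customer PII warrants an update.
    assert needs_threat_model_update(ChangeSet(data_classification_change=True))
    ```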

    Conclusion

    After considering the options on the poll, the best path forward for an organization is the hybrid approach. Hybrid combines the best of both worlds: it retains the necessary threat modeling control for new features but does not push product teams into threat modeling as busywork. Therefore, hybrid threat modeling is the best path forward for your application security program.

  • AppSec, We Have a Problem: Not Everyone Knows How to Code

    Application and software security professionals have a single-focus job description: “help developers write more secure code to limit vulnerabilities.” Therefore, everything an AppSec team does should tie back to this single core statement.

    AppSec teams deploy security tools into build pipelines to discover coding flaws and potential vulnerabilities so that developers can fix those issues before production. AppSec teams teach developers about security, including facets of secure coding principles for the developer’s specific tech stack. AppSec teams lead and facilitate threat modeling sessions to identify business logic issues that could hamper security if those issues are allowed to reach production. All AppSec and Software Security is focused on helping developers write more secure code to limit vulnerabilities.

    As an application and software security professional, can you function without fundamental knowledge of a single coding language? As background, consider “Why cybersecurity pros need to learn how to code.” This post argues that coding knowledge benefits various roles within cybersecurity: AppSec Lead, SOC Analyst / Threat Hunter, Auditor, Pen Tester, and CISO. It concludes that everyone in cybersecurity benefits from learning how to code, listing out the coding value proposition for each role.

    Out of a desire to learn more about the application security community, a LinkedIn poll asked AppSec people, “how many development (coding) languages are you fluent in?” The answers surprised me a bit.

    No language fluency

    At first blush, the thought of application security professionals having no knowledge of at least one object-oriented language is astounding. But the picture begins to change if we consider how people make their way into cybersecurity.

    Not every person makes their way to cybersecurity through a Computer Science background. A student cannot complete a CS degree without learning a few object-oriented programming languages. But what if you arrive at cybersecurity through a system administration background? You can administer many systems and networks without coding. You may learn scripting to make your job easier, but is it true object-oriented coding?

    Entering cybersecurity through a different door is not a lifelong excuse for not learning how to code. On the contrary, the value and influence of learning a coding language are priceless when working with developers.

    Imagine a situation where you drop off your car for service, and the mechanic explains that she has a teacher working with her for the day who will instruct her on your repair because of its complexity. The teacher happens to be grabbing a cup of coffee in the waiting room, and you ask how many cars they have mastered this repair on. If the teacher says zero, you will have severe doubts about the safety of your vehicle when you leave that service appointment. The same thing applies to coding: you must have domain-level experience in the subject you are trying to influence.

    If you find yourself in this boat, know there are inexpensive opportunities to learn that first coding language. From online coding sites to a formal school to pairing with an existing developer, you’ll find that if you desire, you’ll have the opportunity to learn.

    1-2 language fluency

    The largest category of findings was those AppSec professionals fluent in one to two languages. This is encouraging because fluency in a single language prepares you to extend to multiple languages. While each language has its own nuances and details to learn and understand, having a fundamental knowledge of one object-oriented language allows you to apply that knowledge to many other languages.

    Reading code and advising on design are the two primary efforts that an application security professional must undertake. Knowledge of that single language prepares you to read code in other languages and ask questions of your developer friends, questions based on knowledge and not ignorance.

    I think software architecture and cross-boundary knowledge is more important than one language, per se. What tends to happen when you cross architectural boundaries you run into multiple languages. Concepts are the same across languages, only different syntax. Solid programming concepts are more important than one language I would argue. Pattern recognition over being an expert in one language is more important.

    Tony Vargas

    3-5+ language fluency

    This 3-5+ group is just showing off. Just kidding. It is excellent that we have application security professionals who have embraced the skills their development populations utilize. People with this much subject matter expertise make a difference when working with their developers. Those developers cannot look at these folks and say, “you don’t understand what I do.”

    Language fluency does not mean the ability to code without using any references or resources. Using Google and other sites to help remember the syntax of a new language is not a weakness; it’s a superpower. Developers use resources all the time to solve a specific problem. AppSec professionals can use the same resources for success.

    Beyond language fluency, the best application security professionals understand the entire development lifecycle. From source code control systems to issue trackers and build pipelines, the best of the best understand all that developers have to deal with and can advise and roll up their sleeves alongside the development teams.

    Conclusion

    If you’re an application security professional who doesn’t know how to code yet, do not feel shame after reading this article. Be excited to expand your mind as a lifelong learner and pick up that first coding language. Python is a great place to start, given the depth and breadth of available information, tutorials, and videos online.

    Take this article as a charge to learn what your developers know so that you can achieve the core application security mission — “help developers write more secure code to limit vulnerabilities.”

  • OWASP Top 10 2021: 7 action items for app sec teams

    In the world of application security, the OWASP Top 10 2021 is the most famous—or infamous—of documents. Loved by most and hated by a few, this foundational document is the first thing people on most application security programs try to assimilate and conquer.

    With the OWASP Top 10 2021, application security teams certainly have work to do. But if you embrace it, your app sec team will get better. Here are seven things practitioners can take action on from the new OWASP Top 10.

    1. Item No. 10 is just as crucial as item No. 1

    When the new Top 10 was released, some looked at the list and questioned the order. Is A01, “Broken Access Control,” more of an issue than A10, “Server-Side Request Forgery” (SSRF)? The simple answer is not to get hung up on the order of things on the list. If you have an SSRF in your Internet-facing web application, that issue trumps everything else you’re facing.

    Takeaway: The order of issues on the top 10 is not the important thing; deal with the highest-risk issues that stand in front of you.

    2. Define your mitigation guidance specific to the needs and wants of your organization

    Your primary focus must be on mitigating defined issues. Your goal is to eliminate whole classes of flaws over time.

    The OWASP Top 10 has some guidance on mitigation/prevention, but it’s not actionable. As a random example, from “Broken Access Control”: “Rate limit API and controller access to minimize the harm from automated attack tooling.” Rate limiting is a good goal but a tough user story/requirement to hand to a developer.
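
    As a hedged illustration of turning that guidance into something developer-ready, here is a minimal fixed-window rate limiter. The window size, threshold, and in-memory storage are assumptions standing in for an organizational policy; a production service would typically use a shared store such as Redis.

    ```python
    # Minimal fixed-window rate limiter keyed by client ID (illustrative policy).
    import time
    from collections import defaultdict

    WINDOW_SECONDS = 60
    MAX_REQUESTS_PER_WINDOW = 100  # example organizational threshold

    _counters = defaultdict(int)

    def allow_request(client_id: str) -> bool:
        # Count requests per client within the current time window.
        window = int(time.time()) // WINDOW_SECONDS
        _counters[(client_id, window)] += 1
        return _counters[(client_id, window)] <= MAX_REQUESTS_PER_WINDOW

    # Usage: reject with HTTP 429 whenever allow_request(client_id) is False.
    ```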

    Takeaway: Mitigate issues using organizationally specific prevention and mitigation steps.

    3. Think of A04, ‘Insecure Design,’ as floating above all the other items

    Insecure design is the root of the other nine items on the list. A cryptographic failure started as an answer to a user story where someone said, “This crypto key size is good enough.” If a proper threat model had been performed for the new crypto feature before coding, the issue more than likely would have been discovered.

    Among the available tools and technologies that could eliminate vulnerabilities, threat modeling is the only discipline that could impact every item on the Top 10 list.

    Takeaway: Implement threat modeling as a solution for all 10 items on the list.

    4. Incorporate other OWASP projects to solidify your program’s foundation

    Since the list’s inception, people have called the Top 10 a standard. The OWASP team pushed back against that for years, but it has finally accepted the label and added a section called “How to use the OWASP Top 10 as a standard.” The title is a bit tongue-in-cheek, though, since the section states once again that the list is an awareness document and then points to the other projects that help you form a solid program foundation.

    The Application Security Verification Standard (ASVS) is pointed to as the verifiable standard and is the recommended choice for the depth you need to run a program.

    The Proactive Controls, which tries to answer the question of how to fix the Top 10 issues, is now under revision to match up with the new Top 10.

    Takeaway: The Top 10 is flashy, but it’s a mile wide and an inch deep. ASVS, together with Proactive Controls and the Top 10, is a mile wide and a mile deep.

    5. Teach the CWEs

    Common Weakness Enumerations have been part of the Top 10 since at least 2017. This year the CWEs are more front and center, and a wider distribution of CWEs was considered in the team’s analysis. As you present the new Top 10 to your developers, take them back to the foundational CWE nature of each issue.

    Takeaway: Ensure that developers have perspective on how CWEs work and how they can use them to understand and mitigate issues.

    6. Follow the resources

    The resource lists found within the Top 10 are a hidden treasure of application security goodness. As an example, “Broken Access Control” offers pointers to Proactive Controls, ASVS, OWASP Testing Guide, and OWASP Cheat Sheets. The mappings align with specific areas in those other documents that assist the program in dealing with the issue. Following the resources can show you how to transform your products and applications on an issue-by-issue basis.

    Takeaway: Go beyond the surface of each item on the list and take your teams deep into understanding, testing, and mitigating the issues.

    7. SSRF is a hidden danger that almost nobody understands

    Remember when cross-site request forgery (CSRF) first arrived on the scene? It was a challenging class of issues to explain because it had multiple moving parts. SSRF is now in the same boat. Ask 10 application security people what SSRF is and how to mitigate it and you’ll get a widely varied selection of answers and levels of understanding.

    Takeaway: Given the deadly nature of a single SSRF issue, it’s a good idea to invest in increasing understanding and mitigations.

    Unlock the potential

    While the OWASP Top 10 is seen as a “standard,” it requires more effort by you, the practitioner, to unlock its true potential. Lists of preventions and a few examples are great, but they are not a holistic approach to application security.

    Use the OWASP Top 10 for what it was initially designed for: awareness. Use it to teach your team the top issues they must understand. And look to all the other OWASP resources to fill in the gaps.

  • 6 ways to develop a security culture from top to bottom

    “We don’t need security.”

    With our modern dependence on technology, nobody would dare to make this statement. Everyone knows how crucial security is and how it must be embedded into everything an organization does. A simple glance at the news provides details on the data breach of the day tied to an application security vulnerability. Take a stroll to the Information Security department and you’ll hear about the latest blunder an employee made that resulted in lost data. Security is widespread and mainstream, but security culture has not kept pace with the threat landscape.

    Tim Ferriss shared his definition of culture as “what happens when people are left to their own devices.” This applies to security culture if we inject “with security” into that definition: Security culture is what happens with security when people are left to their own devices. Do they make the right choices when faced with whether to click on a link? Do they know the steps that must be performed to ensure that a new product or offering is secure prior to ship?

    Building a healthy security culture 

    An organization’s security culture requires care and feeding. It is not something that grows in a positive way organically. You must invest in security culture. A sustainable security culture is bigger than just a single event. When a security culture is sustainable, it transforms security from a one-time event into a lifecycle that generates security returns forever.

    Sustainable security culture has four defining features. First, it is deliberate and disruptive. The primary goal of a security culture is to foster change and better security, so it must be disruptive to the organization and deliberate with a set of actions to foster the change. Second, it is engaging and fun. People want to participate in a security culture that is enjoyable and a challenge. Third, it is rewarding. For people to invest their time and effort, they need to understand what they will get in return. Fourth, it provides a return on investment. The reason anyone does security is to improve an offering and lower vulnerabilities; we must return a multiple of the effort invested.

    A strong security culture not only interacts with the day-to-day procedures but also defines how security influences the things that your organization provides to others. Those offerings may be products, services, or solutions, but they must have security applied to all parts and pieces. A sustainable security culture is persistent. It is not a once-a-year event but embedded in everything you do.

    Why does an organization need a security culture? The primary answer is something that deep down we all know. In any system, humans are always the weakest link. Security culture is primarily for humans, not for computers. The computers do exactly what we tell them to do. The challenge is with the humans, who click on things they receive in email and believe what anyone tells them. Humans need a framework for understanding what the right thing is for security. In general, humans within your organization want to do the right thing—they just need to be taught.

    Luckily, wherever an organization sits on the security culture spectrum, there are things that can be done to make the culture better.

    1. Instill the concept that security belongs to everyone

    Many organizations have the opinion that the security department is responsible for security. Sustainable security culture requires that everyone in the organization is all in. Everyone must feel like a security person. This is a security culture for everyone. Security belongs to everyone, from the executive staff to the lobby ambassadors. Everyone owns a piece of the company’s security solution and security culture.

    Samantha Davison, security program manager at Uber, says, “At Uber, we are trying to change our employees’ security stories. By creating programs catered to region, department, and role, our people understand that security is part of their story and our culture.” This is an example of a company that truly believes that security belongs to everyone and bakes security into everything they do.

    You can achieve this “all in” mentality by incorporating security at the highest levels into your vision and mission. People look to these things to understand what they should focus on. Update your vision or organizational objective to clearly articulate that security is non-negotiable. Speak about the importance of security from the highest levels. This does not mean just the people who have security in their title (CISO, CSO), but also from other C-level execs all the way down to individual managers.

    2. Focus on awareness and beyond

    Security awareness is the process of teaching your entire team basic lessons about security. You must level set each person’s ability to judge threats before asking them to understand the depth of the threats. Security awareness has gotten a bad rap because of the mechanisms used to deliver it. Posters and in-person reviews can be boring, but they do not have to be. Add some creativity to your awareness efforts.

    On top of general awareness is a need for application security knowledge. Application security awareness is for the developers and testers within the organization. In your organization, they may sit within IT, or they may be the engineering function. AppSec awareness is teaching the more advanced lessons that staff need to know to build secure products and services.

    Awareness is an ongoing activity, so never pass up a good crisis. Bad things are going to happen to your organization, and many times they will be tied directly to a security problem. Grow your security culture with these teachable moments. Do not try to hide them under the rug, but instead use them as an example of how the team can get better.

    Accountability before awareness is crazy. People want to do the right thing, so show them through an awareness program and then hold them accountable for the decisions they make after gaining the knowledge.

    3. If you do not have a secure development lifecycle, get one now

    A secure development lifecycle (SDL) is foundational to sustainable security culture. An SDL is the process and activities that your organization agrees to perform for each software or system release. It includes things like security requirements, threat modeling, and security testing activities. SDL answers the how for your security culture. It is sustainable security culture in action.

    Customers across industries are starting to demand the crazy idea that organizations have an SDL and follow it. If you do not have an SDL at this juncture, Microsoft has released most of the details about its SDL free of charge. The lineage of many industry SDL programs traces back to the Microsoft program.

    A reasonable place for the SDL to live is within a product security office. If you do not have a product security office, think seriously about investing in one. This office sits within engineering and provides central resources to deploy the pieces of your security culture. While we do not want the entire organization to farm off security to the product security office, think of this office as a consultancy to teach engineering about the depths of security.

    4. Reward and recognize those people that do the right thing for security

    Look for opportunities to celebrate success. When someone goes through the mandatory security awareness program and completes it successfully, give them a high-five or something more substantial. A simple cash reward of $100 is a huge motivator for people and will cause them to remember the security lesson that provided the money. They also will be quick to tell five co-workers they received cash for learning, and those five will jump into the training quickly. If you are shuddering at the idea of giving away $100 per employee, stop being so cheap and count the cost. The return on investment on preventing just a single data breach greatly outweighs the $100 spent.

    The other side of reward is security advancement. Provide opportunities for team members to grow into dedicated security roles through advancement. Make security a career choice within your organization. Put your money where your mouth is. If you say security is important, prove it by providing growth potential for those with a passion for security.

    A final step is to provide an opportunity to earn an advanced degree in security. Many universities now offer a master’s degree in cybersecurity. If you can’t find one nearby, create your own. In my previous job, I worked with a large university in California to tailor a degree program that supported the company’s security culture. Once again, put your money where your mouth is and sponsor the first group of students. It sends a positive message to the entire organization.

    5. Build a security community

    The security community is the backbone of sustainable security culture. The community provides connections between people across the organization. The security community assists in bringing everyone together against the common problem and eliminates an “us versus them” mentality.

    The security community is achieved by understanding the different security interest levels within the organization: advocates, the security-aware, and sponsors. Security advocates are those people with a down-home passion for making things secure. These are the leaders within your community. The security-aware are not as passionate but realize they need to contribute to making security better. The sponsors are those from management who help to shape the security direction. Gather all of these folks together into a special interest group focused on security.

    The security community can manifest as one-on-one mentoring and weekly or monthly meetings to discuss the latest security issues. It can even become a yearly conference, where the best and brightest from the organization have a chance to share their knowledge and skills on a big stage.

    6. Make security fun and engaging

    Last, but certainly not least, is fun. For far too long people have associated security with boring training or someone saying no all the time. To cement a sustainable security culture, build fun and engagement into all the process parts. If you have specific security training, ensure that it is not a boring voice-over PowerPoint presentation. If you engage your community through events, do not be afraid to laugh and goof around some. In my previous role, at each monthly security community event, we started the meeting off with a game of security trivia with a different security category each month. We did hackers in the movies one month and security news in another. This is just an example of how to bring fun and engagement into the process.

    Uber’s Davison offers:

    “Security can be so much more than PowerPoints and videos. Pick a fun theme and parody it—we did Game of Thrones. Give gamification a try. Throw a phishing writing workshop and have your employees write a phishing email for the company. The options are endless when you start to think outside the box.”

    What kind of security culture do you have?

    Of course, every organization has a security culture. If they say they don’t, they are either lying or afraid to admit they have a bad security culture. The good news is that any security culture can positively change how the organization approaches security. But culture change takes time, so don’t expect the members of your organization to become overnight pen-testing ninjas who write secure code in their sleep. With the right process and attitude, you’ll get there.

  • Secure Development Lifecycle: The essential guide to safe software pipelines

    Customers demand secure products out of the box, so security should be a top priority and top of mind for everyone. But without a standard approach to security, it is almost impossible to deliver on customers’ expectations.

    That’s where the Secure Development Lifecycle (SDL) comes in.

    SDL is a process. If you look at the many SDLs that exist across industries, you’ll find that most include the same basic security phases and activities. They may have different names for the pieces, but everyone follows roughly the same process.

    Here’s an essential guide to placing security front and center.

    Defining the Secure Development Lifecycle

    In its simplest form, the SDL is a process that standardizes security best practices across a range of products and/or applications. It captures industry-standard security activities, packaging them so they may be easily implemented. The SDL consists of several phases, which I will explain in more detail below.

    The SDL was unleashed from within the walls of Microsoft, as a response to the famous Bill Gates memo of January 2002. In it, Gates laid out the requirement to build security into Microsoft’s products. He admitted that due to various virus and malware outbreaks, Microsoft had to embed security if it was to be taken seriously in the marketplace.

    This resulted in the Microsoft Trustworthy Computing endeavor, out of which the idea of SDL was born. Microsoft made the SDL mandatory in 2004, and a cottage industry was unleashed. Many other companies, including Cisco, Adobe, and Aetna, have since adopted Microsoft’s SDL processes or created their own. And Microsoft has been gracious over the years in sharing its SDL successes with other companies and releasing many of its materials and tools as open source.

    The problems the SDL solves

    The lack of a standard approach to securing products causes problems. For one thing, vulnerabilities run rampant in shipped products. The triage and response needed to deal with them are major resource sinks. As a result, developers spend too much time fixing code they wrote in the past and not enough time focusing on the future.

    The second problem is that developers tend to repeat the same security mistakes, each time expecting a different response (which is the definition of insanity). The third issue is that problems are found at release or after deployment, beyond the reasonable time when the problems could be mitigated in an inexpensive manner.

    Finally, without a security standard customers have no assurance that a given product is secure. A single product considered for purchase may be one of the good ones, or it might be terrible from a security perspective. Without an SDL, there is no product security parity across the company. And without a standard process, some product teams ignore security altogether.

    People, processes, and technology

    The SDL is a process with different phases containing security activities, and it sits inside the classic people-process-technology triangle. The SDL forms the process portion.

    The people portion includes both the central security team that governs and updates the process and the product or development teams that perform security activities. The technology portion consists of tools that assist in finding vulnerabilities in source code or discovering vulnerabilities in a running instance of the product or application.

    The SDL is methodology-neutral. Security activities fit within any product development methodology, whether waterfall, agile, or DevOps. Methodology differences show up in the cadence of security activities.

    The SDL was developed during the time of waterfall, so it is usually portrayed as a linear process that begins with requirements and ends with the release. When the SDL is extended to agile, some security activities get integrated into the normal sprint schedule, while others are pursued out-of-band. With DevOps, activities are embedded into the build pipeline using automation, while additional activities happen outside the pipeline.

    An SDL is divided into phases that tie closely into the waterfall approach. The standard approach to SDL includes requirements, design, implementation, test, and release/response.

    The requirements phase

    In the requirements phase, best practices for security are integrated into a product. These practices may come from industry standards or be based on responses to problems that have occurred in the past.

    Requirements exist to define the functional security requirements implemented in the product and include all the activities of the SDL. They are used as an enforcement point to ensure that all pieces are properly considered.

    Requirements may take the classic form, stating that the product or application must, may, or should do something. One example might be that the product must enforce a minimum password length of eight characters.

    In the agile world, requirements are expressed as user stories. These stories contain the same information as do the requirements, but security functionality is written from the user’s perspective.
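
    For illustration, the password-length example above can be written as a checkable function plus a test, so the requirement doubles as an enforcement point in the build. This is a minimal sketch, not a complete password policy.

    ```python
    # The classic requirement: enforce a minimum password length of eight characters.
    MIN_PASSWORD_LENGTH = 8

    def password_meets_length_policy(password: str) -> bool:
        return len(password) >= MIN_PASSWORD_LENGTH

    def test_password_length_policy():
        # The requirement, expressed as assertions the pipeline can run.
        assert not password_meets_length_policy("short")
        assert password_meets_length_policy("longenough")
    ```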

    The design phase

    The design phase of the SDL consists of activities that occur (hopefully) prior to writing code. Secure design is about quantifying an architecture (for a single feature or the entire product) and then searching for problems. The secure design could occur in a formal document or on a napkin.

    With many systems, the plane is in the air as the wings are being designed, but the SDL can survive even this craziness. The key is to use threat modeling.

    Threat modeling is the process of thinking through how a feature or system will be attacked, and then mitigating those future attacks in the design before writing the code. Threat modeling is akin to perceiving crimes prior to their occurrence, as in the 2002 movie Minority Report.

    A solid threat model understands a feature’s or product’s attack surface, then defines the most likely attacks that will occur across those interfaces. A threat model is only as good as the mitigations it contains to fix the problems. But it is crucial to identify security issues early in the process.

    Implementation or coding

    The next phase is the implementation or writing of secure code. The SDL contains a few things programmers must do to ensure that their code has the best chance of being secure. The process involves a mixture of standards and automated tools.

    On the standards front, a solid SDL defines a secure coding guide (such as those published by SEI CERT for C, C++, and Java) that defines what is expected and provides guidance for when developers hit a specific issue and need insight.

    Implementation tools include static application security testing (SAST) and dynamic application security testing (DAST) software. SAST is like a spell-checker for code, identifying potential vulnerabilities in the source code. SAST runs against a nightly build or may be integrated into your IDE. It may find and open new bugs in the bug management system nightly or prompt the developer to pause while coding to fix a problem in real time.

    DAST checks the application’s runtime instantiation. It spiders through an application to find all possible interfaces and then attempts to exploit common vulnerabilities in the application. These tools are primarily used on web interfaces.
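
    As one hedged example of wiring a SAST tool into a nightly build, the sketch below shells out to Bandit, an open-source scanner for Python source. The source directory and the break-the-build policy are assumptions your pipeline would define; Bandit exits non-zero when it reports findings, which is what fails the stage.

    ```python
    # Nightly SAST gate: scan the source tree and fail the build on findings.
    import subprocess
    import sys

    def run_sast(source_dir: str = "src") -> int:
        # Bandit scans Python source recursively and returns a non-zero
        # exit code when it reports findings.
        return subprocess.run(["bandit", "-r", source_dir]).returncode

    if __name__ == "__main__":
        sys.exit(run_sast())
    ```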

    The test phase

    Formal test activities include security functional test plans, vulnerability scanning, and penetration testing. Vulnerability scanning uses industry-standard tools to determine if any system-level vulnerabilities exist within the application or product.

    Penetration testing involves testers attempting to work around the security protections in a given application and exploit them. Pen testing stretches the product and exposes it to testing scenarios that automated tools cannot replicate. Pen testing is resource-intensive, so it’s usually not performed for every release.

    The final phase: Release/response

    Release occurs when all the security activities are confirmed against the final build and the software is sent to customers (or made available for download). The response is the interface for external customers and security researchers to report security problems in products.

    Part of the response should include a product security-incident response team that focuses on triaging and communicating product vulnerabilities, both individual bugs and those that will require industry-wide collaboration (e.g., Heartbleed, Bash bug, etc.).

    Other security activities are also crucial for the success of an SDL. These include security champions, bug bounties, and education and training.

    Think differently, think secure

    The Secure Development Lifecycle is a different way to build products; it places security front and center during the product or application development process.

    From requirements to design, coding to testing, the SDL strives to build security into a product or application at every step in the development process. A modern application company cannot survive without getting serious about security, and the way to get serious is to integrate an SDL into your everyday work.

  • Why OWASP’s Threat Dragon will change the game on threat modeling

    Threat modeling has always been a dream of mine. Not that I sit around and dream of threat modeling all day, but I dream of embedding a process of security threat modeling within an entire development organization.

    Threat modeling, the process of discovering potential security vulnerabilities in a design and eliminating those vulnerabilities before writing any code, fits best during the stage of planning and designing a new feature. When threat modeling is firing on all cylinders, an organization is creating more secure software.

    What if I told you that you already know how to threat model and that you threat-model every day? Think about when you left the house this morning. You closed the door behind you and you began to threat-model the area around you. You heard cars rushing by on the street, exceeding the speed limit. Threat. You heard a dog barking from the direction where you needed to walk. Threat. The sun was beating down on you. Threat. You threat-model all the time as you consider how these different events could damage your person. Threat modeling technology is just applying these same principles to software.

    Threat modeling is a state of mind

    I’ve found in my 20-plus-year career in security that threat modeling is more than just a tool; it’s a state of mind. Threat modeling is most impactful when it moves from a development process to a developer’s state of mind. In the beginning, developers use the process to assist in understanding the steps and repeating the results.

    As developers become more proficient with threat modeling, the security light bulb goes on over their heads, and their thinking changes. They choose more secure design options without thinking about it. Tooling is important because it lays the foundation of how to perform the threat modeling process and makes it available to a large group of people simultaneously.

    The challenge with teaching an entire organization to threat model has been the lack of decent, simple tools that streamline the process and are genuinely usable, until now. Threat modeling is not a new concept. Microsoft pioneered this idea within its SDL years ago, including the development of the STRIDE methodology, which drives threat modeling.

    Microsoft even created the first tool on the market and has updated it a few times over the years. It’s not a bad tool, but it only runs on Windows and focuses its use cases on Windows services and Azure cloud solutions. This is a deal breaker for most companies that want to adopt an enterprise approach to threat modeling. With all the diversity of OSes and platforms, plus mobile, a web-based solution is needed.

    Over the last several years, a cottage industry of threat modeling consultants and purchasable tools has sprouted up. The challenge with threat modeling consultants is that most of the ones I have encountered do not understand how to tailor threat modeling to a given enterprise. They teach a single, one-size-fits-all process. This approach makes developers mad because it does not directly apply to the software they build.

    I’ve examined the other tools on the market, and my complaint with them all is that they are too complex. For true enterprise adoption of threat modeling, any tooling that drives the process must be easy to learn and use.

    Enter the Threat Dragon

    As an industry, we are in luck, because there is a new open-source tool, just released to alpha, called the OWASP Threat Dragon. OWASP Threat Dragon is web-based and easy to use and adopt. This tool is the first real open-source product that can be used to make threat modeling a reality in all organizations.

    Mike Goodwin is the author of Threat Dragon. Here are his three primary objectives for this tool.

    1. Provide a great user experience that is simple to use.

    To be adopted across any industry, the UX has to be great.

    2. The tool will contain a threat/mitigation rule engine.

    This is important because the rule engine is how you make a threat model useful to a developer who has no knowledge of security. You have the developer draw a picture of something he knows (his feature), and then use the rule engine to “detect” potential vulnerabilities and suggest them to the developer. The rule engine teaches the developer how to detect security problems in design.

    3. Integrate Threat Dragon with other development tools.

    The aim is to provide developers with a cohesive solution for secure design and code. When you first visit the Threat Dragon page, you’ll notice that the only authentication option currently available is GitHub. This is because Threat Dragon is designed to store your threat models with your existing GitHub projects. The idea is that threat models are stored close to the final code so they can be considered when creating new features or updating an existing feature.

    How you can get started with Threat Dragon

    Your first step as a Threat Dragon modeler is to create a threat diagram. That’s just a simple data flow diagram that shows how information moves from the external side (userland) of an application into its internals. The threat diagram is kept simple by providing only five shapes for drawing your feature: process, data store, actor, data flow, and trust boundary. We can thank Microsoft for defining this simple set of shapes in its Threat Modeling Tool.

    After you are happy with your diagram, you begin the process of identifying threats. Because this tool is still in alpha, the rule engine has not yet been coded. This does not stop this tool from being useful, though. STRIDE is a fundamental set of possible threats (Spoofing, Tampering, Repudiation, Info Disclosure, Denial of Service, and Elevation of Privilege). Even in its current state, you can create threats from those categories on your threat diagram. In the future, the rule engine will do much of this heavy lifting for you directly.
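
    For illustration only, the sketch below records one diagram element with manually attached STRIDE threats, approximating what you capture in Threat Dragon today and what the rule engine would eventually suggest automatically. The element and threat descriptions are invented.

    ```python
    # A threat diagram element with manually attached STRIDE threats (illustrative).
    STRIDE = (
        "Spoofing", "Tampering", "Repudiation",
        "Information Disclosure", "Denial of Service", "Elevation of Privilege",
    )

    login_data_flow = {
        "element": "data flow",
        "name": "Browser -> Login service",
        "crosses_trust_boundary": True,
        "threats": [
            {"category": "Spoofing",
             "description": "Attacker replays a stolen session token"},
            {"category": "Information Disclosure",
             "description": "Credentials sent over plain HTTP"},
        ],
    }

    # Every attached threat should come from a STRIDE category.
    assert all(t["category"] in STRIDE for t in login_data_flow["threats"])
    ```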

    Threat modeling gets real

    OWASP Threat Dragon is in its infancy, but it has the makings of a powerful tool that is still easy enough to teach to an entire army of developers. Threat Dragon is poised to quickly overtake the industry as the best possible choice for threat modeling. With the release of the OWASP Threat Dragon, there is now a threat modeling tool that can be adapted to any industry.

    I look forward to the opportunity to roll this tool out across an entire organization and make my dream come true.

  • A security practitioner’s guide to software obsolescence

    Unlike wine and cheese, software does not get better with age—in fact, its security strength decreases over time. This is because of software obsolescence.

    The problem is more significant than any other software security issue because it includes all the other liabilities. Take the OWASP Top 10 as an example. The list contains the most prevalent application security risks, and one (A9) is “using components with known vulnerabilities.”

    And those components can introduce every other risk on the OWASP Top 10, including injection (A1), broken authentication (A2), and sensitive data exposure (A3). 

    Could a piece of third-party software be so old that it no longer attracts the attention of attackers? This occurred with Heartbleed: some products were not vulnerable because they were running a version of OpenSSL so old that it did not include the heartbeat code.

    But antique libraries are a double-edged sword, because who knows what other vulnerabilities lie within such an early version of the software?

    This is a third-party software world, with software consisting of libraries that are strung together into a solution. The software supply chain is the journey your application goes through—it includes all the components you rely upon to build your solution.

    The 2019 State of the Software Supply Chain report by Sonatype exposes the depth of the problem. The report noted that each day, on average, there are 21,448 new open-source releases, with the average enterprise downloading 313,000 open-source components per year.

    The depth of the software obsolescence problem comes into focus with Java and JavaScript. One in 10 Java component downloads has known vulnerabilities at the time of download. JavaScript weighs in with a whopping 51% of components that have weaknesses.

    The downloading of known vulnerable components represents a severe challenge to the security of any application. It is difficult enough to detect a component vulnerability after you have deployed. If you start with a known vulnerable component during development, you are setting yourself up for an imminent software security failure. It’s not a matter of if, but when.

    Another report, on the state of open-source security by Snyk, is a continuation of the definition of the same problem. It found that 37% of open-source developers skip security testing, and the median time from when a vulnerability is found to when it’s fixed is two years.

    The most popular Docker images contain, on average, 30 vulnerable system libraries. Software obsolescence is not going away. Here are the top tools and approaches for tackling it.

    Culture comes first: It’s a journey

    Security culture dictates the emphasis given to security and is just as crucial for an open-source development project as it is for the team building out your application.

    From the open-source dev team perspective, as an industry, we must improve the general security knowledge of those writing the components. We must reach the point where 100% of developers feel responsible for the security of the elements they create.

    But impacting the open-source dev team is only a small portion of the solution. Many open-source groups do fix their vulnerabilities in a timely fashion, but the older, vulnerable versions of those libraries are still downloaded and used by your development teams.

    From your development team’s perspective, you must build a culture where each developer embraces software obsolescence and is continuously on the lookout. We must reach an era in which developers believe that even though they did not write the code, any vulnerabilities introduced are a reflection of their application and craftsmanship. 

    Exercise your codebase

    A secure development lifecycle must address software obsolescence and provide an organization-wide mandate and expectation for updating software. A stance of “the build breaks if vulnerable components are included” must exist to prevent software obsolescence.

    In a DevOps world, where software releases can occur hundreds of times per day, there is an advantage in that there is a daily exercise of a codebase. Tools can detect known vulnerable components and break the build until resolution.

    For those shops that are still trudging along at a slower pace, you need a process to exercise your codebase on a weekly cadence at a minimum to determine if any vulnerable components exist.

    Break your builds the right way

    Both commercial and open-source technology exists that you can use to scan codebases and detect whether any known vulnerable software components exist within them. Source code repositories are gaining features that hunt for vulnerable parts in your code as it sits in the repository, and open-source tools are focusing on this problem.

    Here are a few examples of tools and solutions for various languages and environments:

    • Dependency-Check is a software composition analysis utility that identifies project dependencies and checks if there are any known, publicly disclosed vulnerabilities. Currently, the tool supports Java and .NET; additional experimental support exists for Ruby, Node.js, and Python, and there’s limited support for C/C++ build systems (autoconf and cmake).
    • OWASP’s Dependency-Track provides an enterprise-scale solution and uses Dependency-Check as a source of input. Dependency-Track is an intelligent software supply chain component analysis platform that allows organizations to identify and reduce risk from the use of third-party and open-source components. Dependency-Track monitors component usage across all versions of every application in its portfolio to proactively identify risk across an organization.
    • GitHub provides an integrated service called security alerts for vulnerable dependencies. When GitHub discovers or receives the notification of a new vulnerability, it identifies public repositories (and private repositories that have opted into vulnerability detection) that use the affected version of the dependency. Then it sends a security alert to repository maintainers and generates an automated security fix.
    • Bundler-audit is a patch-level verification tool for the Ruby language. It checks the bundler package management system for vulnerable versions of gems in Gemfile.lock and for insecure gem sources.
    • NPM audit scans a Node.js project for vulnerabilities and automatically installs any compatible updates for vulnerable dependencies. When you execute npm audit, it submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities.

    The process with all of these tool options is to install them and ensure that they break the build whenever they detect a vulnerable component, then force developers through your standard process to update the affected components.
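
    As one concrete instance of that break-the-build step, the sketch below wraps npm audit; the high-severity threshold is an assumed policy choice. npm audit exits non-zero when vulnerabilities at or above the chosen level exist, which is what fails the build.

    ```python
    # Build step: fail when npm audit finds high-severity (or worse) issues.
    import subprocess
    import sys

    def audit_dependencies(audit_level: str = "high") -> int:
        result = subprocess.run(["npm", "audit", f"--audit-level={audit_level}"])
        return result.returncode

    if __name__ == "__main__":
        code = audit_dependencies()
        if code != 0:
            print("Vulnerable dependencies found; failing the build.")
        sys.exit(code)
    ```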

    The OWASP Component Analysis page lists other commercial and open-source tools.

    Craftsmanship needed here

    The problem of software obsolescence is not going away, so change the culture, deploy the tools, and embed those tools into your process. Break the build when you detect a problem component. While the software build that breaks may be your own, that break may prevent the next significant front-page data breach.

    While no problem is ever easy, with hard work you can eradicate vulnerable parts. A focus on craftsmanship and the right set of tools will prepare your teams for success.

  • A primer on secure DevOps: Why DevSecOps matters

    In my 20-plus years in the world of security, I have seen trends come and go, but I’ve never seen anything as disruptive to the entire technology ecosystem as DevOps, often described as a methodology to build software fast and connect development and operations.

    Gone are the days of tossing a build over the wall and hoping that it works in production. Now development and operations are joined together as one in DevOps matrimony. DevOps accelerates the velocity with which products are deployed to customers. However, the catch with DevOps is that it moves fast, and security must move faster to keep up and make an impact.

    I’ve spent the past few months on a journey trying to understand how security fits into the world of DevOps. What I’ve discovered is that most of the “experts” out there are just reiterating the statement that mixing DevOps with security is a good thing, without telling you how to actually do it. I’m scratching my own itch by diving deeply into the world of security with DevOps to figure out what this means and how to achieve it.

    Here are actionable findings on the processes and tools for security with DevOps, or DevSecOps, while leaving the principles of security to others.

    Get your head in the game, appsec

    In the good old days, products were built under the waterfall process. With waterfall, the release cycle was measured in years, so the security process could take almost as long as it wanted. With the onset of agile development, things got speedier. Agile time is measured in weeks instead of years, and people stand up at meetings. The people can still implement the security process with agile because the pace is just slow enough. Face it, DevOps is here to stay, and it is not getting any slower. Application security must speed up to keep pace with the speed of business. Security automation is in charge under DevOps.

    DevOps is agile on steroids, but without all the people. The people are still involved under DevOps, just in a different capacity. The people are not the process: the pipeline, the set of phases and tools that the code follows to reach deployment, defines the process. The phases include build, test, and deployment. Build automation includes the tools needed to grab the code and compile it. Test executes the automated test cases, while deployment drops the build into its final destination. The people monitor the process and respond to process failures.

    Why all things continuous matter

    A perspective on DevOps begins with all things continuous. Continuous Integration (CI) is the principle that code changes are checked into the source code repository in small batches. With each check-in, the build system automatically checks out the latest version of code and goes through the build process. If the code that is checked in “breaks the build,” your changes get backed out and you get to figure out what caused the breakage.

    Continuous delivery and deployment are principles for how the results of testing are reviewed and how the system decides what to do with the build. With continuous delivery, a set of tests is run and, if the code passes, the build moves to a staging environment.

    This is the point at which a human jumps into the process and manually makes the decision to push the new code into production. Continuous deployment is similar to delivery, except that testing is automated, as is the decision to push to production. There are no human beings in the build pipeline, so now you see why security must be automated, just like everything else in DevOps.

    The secret to secure DevOps: It’s in the code

    There are two foundational principles you must embrace for the success of DevOps and security: security as code and infrastructure as code (IaC). Security as code refers to the building of security into the tools that exist in the DevOps pipeline. This means automation over manual processes. It means the use of static analysis tools that check the portions of code that have been changed, versus scanning the entire code base.
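
    Here is a minimal sketch of that changed-files approach: use git to list what a branch touched and hand only those files to the scanner. The base branch name is an assumption, and Bandit stands in for whatever SAST tool your pipeline uses.

    ```python
    # Scan only the Python files changed on this branch, not the whole code base.
    import subprocess

    def changed_python_files(base: str = "origin/main") -> list:
        out = subprocess.run(
            ["git", "diff", "--name-only", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [f for f in out.splitlines() if f.endswith(".py")]

    def scan_changed_files() -> int:
        files = changed_python_files()
        if not files:
            return 0  # nothing to scan; pass the stage
        return subprocess.run(["bandit", *files]).returncode
    ```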

    IaC defines the set of DevOps tools used to set up and update infrastructure components. Examples include Ansible, Chef, and Puppet. Gone are the days of system administrators spending time fixing problems on a system. With IaC, if a system has a problem, it is disintegrated, and a new one (or two) is created to fill the spot.

    Security is a cultural thing and a people problem

    In any process or methodology, people create vulnerabilities. Luckily, DevOps is also a culture thing. Teams do DevOps and live and breathe the culture behind it. The hinge to success for DevOps security lies in changing the underlying DevOps culture to embrace security—with no exceptions. As with any other methodology, security must be built into DevOps.

    Name confusion

    There’s massive confusion across the security community as to what to call security in DevOps. People call it DevSecOps, SecDevOps, DevOpsSec, and even rugged DevOps. How can we have so many different terms to describe the exact same thing?

    This gives us a hint as to the disconnect that exists within security in DevOps. It's still the wild west. There is no standard that defines security for DevOps, and the chances of a standard ever developing are small, because different organizations are doing things their own way and can't even agree on a standard name. And while there is a standard for the secure development lifecycle (ISO/IEC 27034-1), few organizations are ever validated against it.

    DevOps + security is goodness

    Each of these terms refers to the same exact thing: the principles of how you apply security to DevOps. The term DevSecOps appears to be seizing the day and mind share across Twitter and at conferences. DevSecOps is the current “movement,” with its own website and a manifesto.

    DevOps had the attention of the security community from almost the start—or at least from its infancy. DevOps + security is not easier than security for waterfall or agile, but it isn’t any more difficult. It is just different, and a heck of a lot faster.

  • OWASP API Security Top 10: Get your dev team up to speed

    Marc Andreessen famously stated in 2011 that “software is eating the world.” Now, in 2019, application programming interfaces (APIs) serve as the backbone of modern software, and they keep on devouring everything in their path, from microservices to single-page applications and mobile apps to the Internet of Things. APIs drive everything in the web world.

    But if software is eating the world, then security, or the lack thereof, is eating the software. Hence the need for OWASP's API Security Top 10.

    Erez Yalon, one of the project leaders for the OWASP API Security Top 10 and director of security research at Checkmarx, has this to say about the state and prevalence of APIs:

     “APIs build today’s software-driven world. … APIs enable developers to write data-driven and flexible applications that all end users and organizations desire.”

    But APIs are also creating a rapidly growing attack surface that isn’t widely understood and is often entirely overlooked by developers and architects. A recent report suggests that by 2022, API abuses will be responsible for most data breaches within enterprise web applications. Additional research found that while 70% of enterprises cite APIs as crucial to digital transformation, securing them is a top challenge.

    APIs are attractive to attackers because they remove the complexity of the many different front-end frameworks. In other words, APIs provide a standard interface on which to focus an attack.

    Here is a rundown of the new OWASP Top 10, with developer actions included for each item.

    Broken object-level authorization

    This is not an issue unique to APIs but is common in applications that are built in any language and perform authorization. APIs exacerbate this issue because the server component does not adequately track the client’s state.

    This allows an attacker to modify an object’s ID value and access other objects. The result is that attackers can access data that they are not authorized to access.

    Developer actions:

    • Examine/threat model your implementation of authorization policies and determine if an attacker can access items purely by knowing or guessing the object’s ID value.
    • Marry random and unpredictable object ID values with a robust access control policy implementation.
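
    A minimal sketch of both actions above, assuming a Flask endpoint with a hypothetical data store and user lookup:

    ```python
    # Sketch: unpredictable IDs plus an explicit ownership check (BOLA defense).
    # The in-memory store and current_user() helper are hypothetical.
    import uuid
    from flask import Flask, abort

    app = Flask(__name__)
    INVOICES = {}  # id -> {"owner": ..., "data": ...}

    def create_invoice(owner: str, data: dict) -> str:
        invoice_id = str(uuid.uuid4())   # random, unguessable object ID
        INVOICES[invoice_id] = {"owner": owner, "data": data}
        return invoice_id

    @app.route("/invoices/<invoice_id>")
    def get_invoice(invoice_id: str):
        invoice = INVOICES.get(invoice_id)
        # Never rely on the ID being secret: always verify ownership too.
        if invoice is None or invoice["owner"] != current_user():
            abort(404)                   # don't reveal whether the ID exists
        return invoice["data"]

    def current_user() -> str:
        return "alice"                   # placeholder for real session lookup
    ```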

    Broken authentication

    Broken authentication is another legacy top 10 issue (found in the OWASP Top 10 for web applications). APIs suffer from the same authentication attacks, such as credential stuffing (where attackers try typical username/password combos in many locations) and brute force (where an endpoint with no restrictions lets attackers try every possible username/password combination).

    One of the most significant issues with authentication in APIs is a total lack of it, or selective authentication, where it’s not uniform across a collection of API endpoints.

    Developer actions:

    • Confirm a standard approach to authentication that is uniform across all of your API endpoints.
    • Review the authentication requirements within the Application Security Verification Standard (ASVS) and apply these requirements to your authentication implementation.
    • Ensure that you have a strong business requirement before you expose an unauthenticated API endpoint to the public Internet.
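
    Here is one way the first and third actions might look, sketched as a deny-by-default Flask hook; the token check and the allowlist are placeholders:

    ```python
    # Sketch: uniform, deny-by-default authentication across every endpoint.
    # The token verifier and the public-path allowlist are hypothetical.
    from flask import Flask, abort, request

    app = Flask(__name__)
    PUBLIC_PATHS = {"/health"}            # the only unauthenticated endpoints

    @app.before_request
    def require_auth():
        if request.path in PUBLIC_PATHS:
            return                        # explicitly allowed without auth
        token = request.headers.get("Authorization", "")
        if not verify_token(token):       # one shared verifier, not per-route logic
            abort(401)

    def verify_token(token: str) -> bool:
        return token == "Bearer demo"     # stand-in for real signature validation
    ```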

    Excessive data exposure

    APIs are in the business of disclosing data to clients; that’s why they exist. When you design an API, you determine who your clients are, and what information you will serve up for them.

    Excessive data exposure happens when you don’t implement filtering correctly and end up sending more information than you should to a client.

    Developer actions:

    • Trace through/threat model the flow of data from the endpoint to the client, and consider whether you have proper filtering in place.
    • Perform all filtering on the server side, not on the client. If you filter on the client, an attacker can turn the filter off and receive all of the information.
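
    A small sketch of server-side allowlist filtering; the model and field names are illustrative:

    ```python
    # Sketch: server-side filtering via an explicit response schema.
    # Only fields named here ever leave the server; everything else is dropped.
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        email: str
        password_hash: str    # internal-only
        is_admin: bool        # internal-only

    PUBLIC_FIELDS = ("id", "email")

    def to_public(user: User) -> dict:
        # Allowlist serialization: new internal fields stay private by default.
        return {name: getattr(user, name) for name in PUBLIC_FIELDS}

    print(to_public(User(1, "a@example.com", "hash", True)))
    # {'id': 1, 'email': 'a@example.com'}
    ```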

    Lack of resources and rate limiting

    APIs are served by a finite set of resources, and attackers can abuse an API by consuming all of them, making the information service unavailable for legitimate users. Attackers consume resources both through correct usage of an API (uploading many images, each of which generates multiple thumbnails and uses lots of CPU and memory) and by adjusting API parameters to bypass filtering on the back end.

    Developer actions:

    • Analyze/threat model your design to determine whether you have proper rate-limiting controls in place.
    • Consider the OWASP Automated Threat Handbook as a knowledge source for the many bots that are using your precious computing resources.
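
    As one illustration of a rate-limiting control, here is a minimal token-bucket sketch; the capacity and refill numbers are arbitrary, not recommendations:

    ```python
    # Sketch: a token-bucket rate limiter, one bucket per client key.
    # Capacity and refill rate are illustrative numbers only.
    import time

    class TokenBucket:
        def __init__(self, capacity: int = 10, refill_per_sec: float = 1.0):
            self.capacity = capacity
            self.refill_per_sec = refill_per_sec
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_sec)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False        # caller should answer HTTP 429

    buckets: dict[str, TokenBucket] = {}
    def check(client_id: str) -> bool:
        return buckets.setdefault(client_id, TokenBucket()).allow()
    ```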

    Broken function-level authorization

    Function-level authorization policies are a terrible idea, but they happen. With function-level authorization, you're creating an individual micro-authorization policy that applies to a single function.

    What could go wrong with creating a unique strategy for every possible service? Complexity breeds vulnerability, and having separate policies for functions increases complexity tenfold. Disaster can ensue.

    Developer actions:

    • Use a standard approach to authorization that is uniform and set to deny by default; avoid function-level authorization.
    • Keep authorization simple; technology is already complicated enough. Securing something simple is hard; securing something complex is impossible.
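
    A sketch of the first action: one central, deny-by-default policy table instead of per-function micro-policies. The roles and actions are hypothetical:

    ```python
    # Sketch: a single deny-by-default authorization check, reviewed in one
    # place, instead of ad hoc per-function policies. Names are hypothetical.
    import functools

    POLICY = {
        "delete_user": {"admin"},
        "view_report": {"admin", "analyst"},
    }

    def authorize(action: str):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(user_role: str, *args, **kwargs):
                allowed = POLICY.get(action, set())   # unknown action -> empty set
                if user_role not in allowed:          # deny by default
                    raise PermissionError(f"{user_role} may not {action}")
                return fn(user_role, *args, **kwargs)
            return wrapper
        return decorator

    @authorize("delete_user")
    def delete_user(user_role: str, target: str) -> None:
        print(f"deleted {target}")

    delete_user("admin", "bob")     # allowed
    # delete_user("guest", "bob")   # raises PermissionError
    ```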

    Mass assignment

    Mass assignment occurs when an API inadvertently exposes internal variables or objects. An attacker can craft an API request that provides values for an internal variable or object. If the endpoint does not correctly filter out those internal-only data structures, an external call may update an internal-only value.

    Developer actions:

    • Avoid exposing the internal variable or object names as input.
    • Whitelist the properties that the client can update.
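
    A minimal sketch of the whitelisting action; the record layout and field names are illustrative:

    ```python
    # Sketch: defeat mass assignment by allowlisting updatable fields.
    ALLOWED_UPDATES = {"display_name", "bio"}   # client may change only these

    def apply_update(record: dict, payload: dict) -> dict:
        # Internal fields like "is_admin" can never be set from the outside,
        # no matter what the request body contains.
        for key, value in payload.items():
            if key in ALLOWED_UPDATES:
                record[key] = value
        return record

    user = {"display_name": "Ann", "bio": "", "is_admin": False}
    apply_update(user, {"bio": "hello", "is_admin": True})  # is_admin ignored
    print(user)   # {'display_name': 'Ann', 'bio': 'hello', 'is_admin': False}
    ```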

    Security misconfiguration

    A security misconfiguration is a setting that could have been adjusted to lock down an API but wasn’t. Security misconfigurations include neglecting security patches on the underlying application server or host systems, allowing all HTTP verbs, missing Transport Layer Security (TLS), missing security headers or Cross-Origin Resource Sharing (CORS) policy, and enabling excessive information flow in stack traces or error messages.

    Developer actions:

    • Perform a repeatable hardening process against your API, as you would with any other host or infrastructure system.
    • Test your entire stack for security misconfigurations using scanning tools and human reviews.
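
    Two of the listed fixes, sketched in Flask for illustration: restricting HTTP verbs and adding baseline security headers. The header values are starting points, not a hardening standard:

    ```python
    # Sketch: two easy misconfiguration fixes in one Flask app.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/items", methods=["GET", "POST"])   # all other verbs -> 405
    def items():
        return {"items": []}

    @app.after_request
    def security_headers(resp):
        # Illustrative baseline headers; tune to your application.
        resp.headers["Strict-Transport-Security"] = "max-age=63072000"
        resp.headers["X-Content-Type-Options"] = "nosniff"
        resp.headers["Content-Security-Policy"] = "default-src 'none'"
        return resp
    ```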

    Injection attacks

    Classic injection attacks such as SQL, LDAP, XML, and command injection are the most prevalent application security risks for web applications.

    Developer actions:

    • Perform input validation via whitelisting for all input.
    • Use a parameterized interface for all inbound API requests.
    • Review the filtering logic to limit the number of records returned.
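
    A minimal sketch of the second and third actions using Python's built-in sqlite3; the schema is illustrative:

    ```python
    # Sketch: a parameterized query. User input is bound as data, never
    # concatenated into the SQL string, so it cannot change the query shape.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('ann', 'admin')")

    attacker_input = "ann' OR '1'='1"
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ? LIMIT 10",  # LIMIT bounds results
        (attacker_input,),                                  # bound parameter
    ).fetchall()
    print(rows)   # [] -- the injection attempt matches nothing
    ```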

    Improper asset management

    Improper asset management stems from a lack of version control for API hierarchies. APIs go through a lifecycle just like any other software, and versions of the API reach the end-of-life state. Older versions of APIs suffer from vulnerabilities eradicated from newer releases.

    Proper asset management calls for tracking where API versions live and retiring versions to limit legacy security vulnerabilities.

    Developer actions:

    • Inventory all APIs, including environments such as production, staging, test, and development. You can’t secure what you cannot find.
    • Perform a security review of all APIs, focusing on the standardization of function.
    • Stack rank your APIs by risk level and improve the security functions of the riskiest items on the list.

    Insufficient logging and monitoring

    Logging and monitoring are crucial for deducing what happened when things go wrong. Yet they always fall to the end of any security list, because they are reactive, even though everyone knows they're essential.

    Developer actions:

    • Use a standard format for logging across all APIs; this makes life easier for incident response in the future.
    • Monitor your API endpoints across all phases (production, stage, test, dev). React to security issues identified within your API.
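
    A sketch of the first action: one JSON log format shared across services, so incident responders can parse every API the same way. The field names are illustrative, not a standard:

    ```python
    # Sketch: a shared structured-logging format for all APIs.
    import json, logging, time

    class JsonFormatter(logging.Formatter):
        def format(self, record: logging.LogRecord) -> str:
            return json.dumps({
                "ts": time.time(),
                "level": record.levelname,
                "service": "billing-api",      # placeholder service name
                "event": record.getMessage(),
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger("api")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.warning("auth_failure user=anonymous endpoint=/invoices")
    ```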

    How the API Security Top 10 Project started

    Yalon and Inon Shkedy, a security consultant at Tangent Logic, created this project to educate those involved in API development and maintenance: developers, designers, architects, managers, and organizations.

    Many different roles within an organization must understand how to secure APIs, and API security is more than just a code-level activity. It requires design and development working in tandem.

    Here is their perspective:

    “One of the biggest challenges when it comes to API security—or any security, for that matter—is awareness. The different ways of protecting APIs require an understanding of the actual threats facing modern applications, which is where we recognized a bit of a gap.

    “We launched the OWASP API Security Top 10 list to inform developers and security professionals about the top issues that are impacting API-based applications. Where APIs exist in nearly every form, prioritizing their security is of utmost importance, and the API Security Top 10 list looks to drive awareness and attention when it comes to their implementation.”

    Don’t let APIs eat your software’s security

    APIs, just like software, are eating the world. The OWASP API Security Top 10 is a must-have, must-understand awareness document for any developers working with APIs.

    While the issues identified are not new and in many ways are not unique, APIs are the window to your organization and, ultimately, your data. If you ignore the security of APIs, it’s only a matter of time before your data will be breached.

  • The 3 most crucial security behaviors in DevSecOps

    What if I told you that you could change the security posture of your entire DevOps team without ever documenting a single line of a process? It’s hard to imagine that’s possible, but it is. Security behaviors take the place of the process and change how the developer approaches security decisions.

    In part one of this series, “A primer on secure DevOps: Why DevSecOps matters,” I discussed the importance of DevOps embracing security within its structure. The next logical question is, how do you transform a DevOps team into an army of security people? The answer is by modifying security behaviors.

    People are the true drivers of application security, and in the world of DevOps, people move fast. DevOps people are not allergic to process, but in my experience, DevOps is more about the build pipeline and automation than process. People believe that process slows everything down. But if you embed security change into everyone on the DevOps team using security behaviors, you’ll empower everyone as a security person.

    The three core security behaviors you need to instill include threat modeling, code review, and red teaming. Each behavior is highly dependent on human beings. Tools are available to support each behavior, but the primary delivery agent is the human brain. Each behavior requires learning and practice. These are not things that a development team will do without direction.

    Threat modeling

    Security behavior: Consider the security impact of each design decision, and think like the attacker.

    Desired outcome: Choose the design decision that protects the confidentiality and integrity of your customer’s data.

    Metrics to measure efficacy: How many issues are you detecting and fixing prior to committing the code? And does the security light bulb turn on when the developer sees the impact of finding the weaknesses in the design?

    Threat modeling is about examining a design (or even code, if code is your design) to understand where security weaknesses exist. Threat modeling pinpoints how an attacker will attack your design, and highlights the places most likely to come under attack. With a threat model, you attack your product on paper and fix those problems early in your development process.

    Many DevOps practitioners approach the design phase with agile-colored glasses. They design in terms of user stories or features and focus on getting the feature to build and operate. Code takes the place of traditional design time activities. This is a challenge because security can be left behind when your primary focus is to get code running.

    After the developer has applied threat modeling behavior and considered security for each design decision, they can embed security directly into their decisions, and move toward a more secure option every time.

    How to make it a habit: Show developers how to create a threat model, and quickly move to threat modeling an active design on which they are working. Move quickly from the theoretical to the real-world feature.

    Security code review

    Security behavior: Detect security flaws in another person’s code.

    Desired outcome: Find the errors in the code that could be exploited if they reach production.

    Metrics to measure efficacy: How many security issues are you able to detect and fix prior to a build, prior to promoting from test to production, or within a specific period of time?

    A code review is a critique of another developer’s code by searching for problems. A security code review is a bit more refined. It’s deeper than just looking for logical flaws. The practitioner must understand the common types of flaws (OWASP Top 10 for Web Apps or Buffer Overflows for C), how to detect them, and how to fix them. Many teams are already doing code reviews, but the developers are not knowledgeable about security, and they’re unable to find security flaws.

    Strong DevOps teams use their infrastructure to force a code review with each check-in to the main line. I've heard of teams that use GitHub's built-in functionality to promote a change only if another engineer on the team has given a "+1," indicating that they reviewed and approved the change.

    Static analysis tools offer a way to scan code changes and perform automated code reviews. These tools should not replace the human connection during your code review. Static analysis alone can’t find all the problems in your code. Knowledgeable humans can detect logic problems that tools aren’t smart enough to find. But do use static analysis tools to enable a more effective code review.

    How to make it a habit: Force a security code review as a requirement of the code commit process. Require a security +1 for each check-in. Teach your developers the fundamental security lessons of their languages, and how to find those issues in code. Finally, make static analysis tools available as part of your security tool package.

    Red teaming

    Security behavior: Attack your code with the same ferocity the bad people will apply to it when it reaches production.

    Desired outcome: Uncover flaws using active testing, fix those flaws, and push the fixes to production as fast as possible.

    Metrics to measure efficacy: How many legitimate issues are found and fixed because of red teaming efforts within a set amount of time?

    The idea of red teaming began within the military, as a way for a group of people to imagine alternative situations and then plan how to respond. Within the context of security and DevOps, a red team refers to the idea of having people who take on the persona of an attacker and attempt to compromise the code.

    Enacting such behavior means everyone on the team is always watching for some part of the product to compromise. Some teams approach red teaming by having people spend a portion of their time doing security testing, while others can justify having a dedicated red team resource that’s always attacking the code.

    The key to red team security behavior success is that nothing is ever off limits. When the code reaches production, attackers shouldn’t consider anything to be out of bounds. People enacting the red teaming behavior must be given the freedom to try any type of attack, regardless of the potential outcome. As a word of caution, you can always point the red team resources to a staging version of the pipeline to protect your production instances. The point is to never say “that could not happen” or “nobody would ever attack that way”. If your team can think it up, then so can others.

    As with the use of static analysis tools in code review, red teaming can incorporate dynamic analysis tools that scan for web application vulnerabilities, as well as for missing network and infrastructure patches. These tools do not replace the knowledge of humans, but they can find some of the easiest issues quickly.

    How to make it a habit: Instill the idea that your code will be attacked, and provide the time and tools for everyone to spend some amount of time attacking the code.

    Why security behavior matters

    The traditional path to embracing security has historically focused on the process. You list a series of steps and expect everyone to follow those steps to ensure a secure solution. The challenge with that process is that it breeds compliance, which means that someone improves security because they are forced to do so, not because they want the system to be more secure. Compliance provides some benefits, but it will never be as good as having developers change the way they think and embrace a security mindset. With compliance, people put forth the minimum amount of effort to check the box, and that results in minimal security gains.

    To keep up with the pace of DevOps and mix in security, you need to approach things differently. You should leave behind the security process, and embrace the idea of security behaviors. If you can change security behavior, then any time your people reach a decision point, their programmed response for better security will kick in.

    The idea for a set of lightweight and scalable security behaviors hit me while performing an application security assessment for a startup. The company had a mature DevOps process, and I soon realized that traditional application security practices were not going to work in its environment. A security behavior focuses on the lightest touch points, while still having an impact on security, and is the foundation of a true security culture change for a DevOps environment.

    How to set the tone for security behavior

    A good way to embed these behaviors within your team is to educate team members about the behavior, and then quickly move to its practical application. Encourage the activities and reward the team for completing them. The idea is to reinforce the positive behavior with the goal of evolving the security behavior into a habit.

    True security culture change is reached when the behaviors begin to transform into habits. A security habit is just a security behavior that has been practiced over and over and has become ingrained in the way the developer thinks.