
Simon Bérubé podcast: DevSecOps as a performance driver

For this first episode of 2021, we are pleased to introduce our Jedi Master of DevSecOps.

Simon Bérubé, Senior Director–Product Delivery Strategy at In Fidem, is responsible for incorporating DevSecOps best practices within teams to accelerate value delivery and minimize waste due to scarce resources.

Simon is like a multifunctional Swiss Army knife: he started his career as a developer before becoming a cybersecurity expert and cloud services product owner.

His experience enables him to fully understand the different business dynamics and skillfully incorporate security into the three main pillars of DevOps delivery: agility, cloud computing, and automation.

Simon agreed to demystify the DevSecOps concept by answering some of our questions for our podcast INTRASEC, In Fidem’s cybersecurity channel. In today’s episode, he explains how to enforce security at all levels and for all environments by leveraging automated tools to accelerate your performance.

So far, your career has covered three building blocks of DevOps – agility, cloud computing and automation – and now you’re adding a fourth, security. Can you tell us more?

DevOps is about accelerating value delivery and, by the same token, improving quality, since quality has a direct impact on our brand and on how clients perceive us. We’re tired of ideas that take two and a half years to develop and that, once delivered, don’t meet the customer’s needs.

We want to react quickly to market needs and that’s why we’re going to use DevOps.

Inevitably, when you talk about quality and think about data leaks or ransomware demands, it becomes clear that quality must also encompass security.

To do so, we have to make sure security is considered not at the end, but from the start, while building the features. In other words, it has to be integrated at the design stage.

When you think about DevOps, you normally think about how you’re going to build your software, your architectures, how to automate… So people expect to have relatively robust and secure software. However, if we understand correctly what you are saying, this is not the case. Why is software not always secure with “normal” DevOps?

In development, we often think in functional terms. You focus on delivering value to the client and so you quickly switch to “solution” mode. You start thinking: “Okay, the customer wants this, so we’ll take this solution or that product or change this function in order to meet their needs.” But you need to take a step back and ask yourself the right questions: what data will be manipulated? Is it sensitive data? What’s the tolerance for public access to such data? Can it be altered without jeopardizing it?

So the idea is to bring the three security pillars — confidentiality, integrity, and availability — into the thought process regarding the functionalities we need to build.

And once these questions are answered, if we have two weeks, we’ll implement the level of security we can [rather than perfect security]. This forces us to better assess the risk we are willing to accept for each feature we develop.

How do you bring security into DevOps, especially in an organization that hasn’t incorporated these considerations into its approach?

There are many ways of going about this. If you’re in an organization that has security experts, you can bring them into the sprints and have them participate in the delivery of each increment, which has a dissemination effect.

At first, it’ll be up to the security expert to incorporate it. But after a few weeks, the team will develop the same reflexes and gain some experience. If you don’t have an in-house expert, you can always hire consultants who will help the team acquire these skills.

The organization must also be willing to do this. Security programs are often initiated from the top of the organization, which has to support the teams while they grow and get organized to deliver secure software. Because let’s face it, if your manager is always telling you to hurry up because the project was promised for yesterday, you understand, as I do, that it will be difficult to get the security elements in place.

In other words, management must be willing to include security in the quality of the product.

You have hit on a rather important issue. One of the greatest impediments to the adoption of cybersecurity practices in development is pressure from management—whether voluntary or involuntary—which will find excuses for not implementing them. Does DevOps, a practice that appears to be geared heavily toward value delivery, provide a better way to include security considerations or does that remain a cost you have to be willing to cover?

Right off the bat, the biggest change we need to make is to take security as a cost and turn it into value. For our clients and partners, security is increasingly seen as a source of value. In the market, more and more partners are also requesting information on development practices to ensure that they are secure.

So you really need to put these security considerations on the same level as other quality criteria.

Obviously, the system still needs to mature. On your first sprint, your first increment isn’t expected to be as secure as the US Department of Defense’s.

However, you need a security-oriented approach right off the bat and must ask yourself how you can build things as securely as possible. You also have to say how vulnerable your components are.

For example, let’s say I know my component is not secure. Was it because of time constraints or was the level of complexity too great for me to deliver in two weeks?

In a security-based approach for a service, you could say that your application is secure if you have a certain level of tolerance for confidentiality, integrity or availability.

Can you tell us about the “degree of security” that will be advertised with a service? For instance, how open are people to these subtleties, and how capable are they of understanding them?

That’s a very good question. Not all the time. People tend to believe that it won’t happen to them.

But if you look at it from an organizational point of view, your client is likely to come from within. And when you have an internal client, you can tell them to start using the service to give feedback and validate needs. This allows them to be involved and comment on their experience, which is very valuable.

Externally, if you look at how quickly people accept the terms of use on websites, applications, and so on, it shows that they don’t read them.

Having said that, there’s no one-size-fits-all approach and it really depends on the application. It depends on the type of data that is being manipulated, among other things. In the end, it’s still a risk assessment exercise.

How do you validate security? What practices are used?

You can use various approaches.

One of the first is to take a white box approach. In other words, you have to validate that the application works as expected. For example, that a user is authenticated before being allowed to access a certain section. This way, we can include tests in the code to verify that authentication is performed, similar to what we would do for security functional tests.
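A white-box security check like the one Simon describes can be written as an ordinary functional test. This is a minimal sketch; the view function and status codes are illustrative, not from the interview.

```python
# White-box security test sketch: we know the application's internals,
# so we assert that a protected section refuses unauthenticated access
# before any business logic runs. Names here are hypothetical.

def view_reports(user: dict) -> int:
    """Return an HTTP-style status code for a protected /reports section."""
    if not user.get("authenticated"):
        return 401  # deny access up front
    return 200

def test_reports_requires_authentication():
    assert view_reports({"authenticated": False}) == 401
    assert view_reports({"authenticated": True}) == 200

test_reports_requires_authentication()
print("auth tests passed")
```

Because it is just another test, it runs in the same suite as the functional tests and fails the build the moment someone removes the authentication check.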

There’s also a more black box approach where you don’t really know what you’re looking for, but you have bots that, in a controlled environment, try to misuse your application like a hacker would.

So there are plenty of known methods and these bots are programmed to test current and past attacks. They will simulate the attacks and give a report to the team.
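The black-box “bot” idea can be sketched as a loop that throws known attack payloads at an endpoint treated as opaque and collects a report. The payload list and the stand-in endpoint below are illustrative assumptions, not a real scanner.

```python
# Black-box attack-bot sketch: payloads drawn from known attack patterns
# are sent to the application without knowledge of its internals, and
# anything not neutralized is collected into a report for the team.

KNOWN_ATTACK_PAYLOADS = [
    "' OR '1'='1",                 # classic SQL injection probe
    "<script>alert(1)</script>",   # reflected XSS probe
    "../../etc/passwd",            # path traversal probe
]

def handle_request(param: str) -> str:
    # Stand-in for the deployed application: echoes input after a
    # deliberately incomplete sanitization step.
    return param.replace("<script>", "")

def run_attack_bot() -> list[str]:
    findings = []
    for payload in KNOWN_ATTACK_PAYLOADS:
        response = handle_request(payload)
        if payload in response or "etc/passwd" in response:
            findings.append(f"payload not neutralized: {payload!r}")
    return findings

for finding in run_attack_bot():
    print(finding)
```

Real tools such as dynamic application security scanners work on the same principle, with much larger catalogues of current and past attacks, and run in a controlled environment rather than production.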

Both methods are very much in line with the DevOps philosophy because what we’re trying to do is fail fast. We want to have the failures as close to the source as possible.

This is in contrast to more traditional development environments where you plan, you develop, you create test environments, you perform manual or non-manual tests and, in the end, you discover errors.

What we want with DevSecOps is that at 3 p.m., developers submit their code and at 4 p.m., they are already warned of any vulnerability because all the tests have been done automatically. So the next day, they’re going to start fixing that vulnerability.

What’s even closer than development is bringing these problems back to when we design as a team. This means asking ourselves—when thinking about a new feature—how will the data flow through the feature, what are the vectors that an attacker could use to compromise the data, the integrity of the system, and so on.

So if you can catch the problems at the design stage, it’s a win-win because you’re as close as possible to the source.

Otherwise, there are also tools that can be put in the developer’s interface and that scan the code they are writing and warn them if they are using a risky pattern.
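The kind of check such an editor plug-in performs can be sketched as a scan for risky patterns in the code being written. The rule list below is illustrative; real linters ship far larger rulesets.

```python
import re

# Sketch of an in-editor risky-pattern scanner: each rule maps a regex
# to a warning shown to the developer as they type. Rules are examples,
# not an exhaustive or authoritative set.

RISKY_PATTERNS = {
    r"\beval\(": "eval() on dynamic input is a code-injection risk",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"\bmd5\(": "MD5 is broken for security purposes",
}

def scan_source(source: str) -> list[str]:
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                warnings.append(f"line {lineno}: {message}")
    return warnings

snippet = "requests.get(url, verify=False)\nresult = eval(user_input)"
for warning in scan_source(snippet):
    print(warning)
```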

In short, during the design phase, you have to determine what information you’re going to obtain and manage, whether it can be modified or not and by whom. And for that, you can do functional tests. Once that’s done, you can also add tools that will do a closer inspection, that can identify vulnerabilities, and so on. And finally, plug-ins can be put into our development environment that will prevent us from making stupid mistakes.

That’s right. There are two main concepts here: static analysis and dynamic analysis.

Some tests are more static and will analyze the structure of the code, while others are more dynamic and will test scenarios and features on your solution once deployed.

It’s kind of like what the offensive security people do, but they’re busy, so they can’t come and test your solution all the time.

So the solution is to automate some of that testing and integrate it into your pipelines. Then the role of the offensive security team will be to validate this pipeline, check its reliability, see if it would be possible to push code into production that bypasses all the verifications, and so on.

And like that, you gain speed because you’ve automated some of the penetration testing, but you keep the option of bringing in experts to test and see if your chain is as strong as you think it is.

How much can code analysis tools be used off the shelf and how much do they need to be configured for our specific environment? If we use a plug-in, we expect it to come with a list of best practices to test, and at the same time, we need to configure it. What level of configuration should we expect?

You have to realize that in DevOps, your environments deploy from the source code; you’re not waiting for a system administrator to get a virtual machine going. So you’re going to write code to specify the characteristics of the machines you want. And then you’ll have to write a test to make sure they meet the hardening guidelines for an operating system or web server, and so on.
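Because the machine characteristics live in code, the hardening check itself can be a test against that code. This is a hedged sketch: the spec fields and guideline values are illustrative, not a real benchmark (CIS-style hardening guides define the actual values).

```python
# Infrastructure-as-code hardening test sketch: the machine is declared
# as data, and a test asserts it meets the hardening guidelines before
# anything is deployed. Fields and thresholds below are hypothetical.

web_server_spec = {
    "ssh_permit_root_login": False,
    "tls_min_version": "1.2",
    "open_ports": [443],
}

def check_hardening(spec: dict) -> list[str]:
    violations = []
    if spec.get("ssh_permit_root_login", True):
        violations.append("root SSH login must be disabled")
    if spec.get("tls_min_version", "1.0") < "1.2":
        violations.append("TLS below 1.2 is not allowed")
    for port in spec.get("open_ports", []):
        if port != 443:
            violations.append(f"unexpected open port: {port}")
    return violations

assert check_hardening(web_server_spec) == []
print("hardening checks passed")
```

Run in the pipeline, such a test fails the deployment the moment a machine definition drifts from the guideline, instead of waiting for an audit to notice.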

In short, you have to write tests for your own code and also configure the automated scanners; tools that require zero input are hard to come by!

When [pen testing] experts come to test DevOps pipelines, what errors do they typically see? Any low-hanging fruit they need to watch out for?

An operating system version or a language that’s too old and hasn’t been updated because “it still works, I’m not touching it”.

At some point, these versions become obsolete and known vulnerabilities are published. If you don’t update them, they can be the perfect access point for hackers. In fact, that’s exactly what attackers scan for.

You also have to consider credentials. Sometimes there are passwords left in the source code. This is not a good DevOps practice. Only bots should know these secrets.
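Catching those leftover passwords is exactly what a pre-commit secret scan does. Here is a minimal sketch; the two patterns are illustrative assumptions, and real scanners ship much larger rulesets.

```python
import re

# Secret-scanning sketch: flag hard-coded credentials before they reach
# the repository. Patterns below are examples only.

SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
]

def find_secrets(source: str) -> list[int]:
    """Return the line numbers that appear to contain a secret."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits

code = "db_password = 'hunter2'\nhost = 'db.internal'"
print(find_secrets(code))  # the hard-coded password on line 1 is flagged
```

Hooked into the pipeline, a non-empty result blocks the commit, so the secret ends up in a vault the bots read from instead of in the source code.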

This also serves as a reminder of the importance of multifactor authentication because even if an attacker finds passwords, they will still lack the key to authenticate on your behalf.

If we understand correctly, DevSecOps is a continuous process. And if you’re brought in as an expert, you’re going to do an analysis. Can you describe the steps?

The first thing is to look within the organization. For example, you may have a team that’s trying to do agile delivery but isn’t delivering on its commitments or its promises. In such cases, they may need an agile coach to help with the actual delivery.

If you have a team that’s already mature, maybe security is the missing piece. In that case, you’ll have to identify the shortcomings and determine the issues and dissatisfactions with your team from a business perspective. Because if you need outside help, it means that somewhere along the line your team is not meeting the requirements.

Specifically in the case of security, what we do is we go see the teams and assess their security knowledge, and then look at the business security program. As I mentioned at the beginning, the organization wants to make the business safe, so we will look at the program. Are there clearly defined objectives?

If there are any shortfalls in the program, we will work on the security program. Are there security training plans for the teams? Can the team get ongoing training or does it have to deliver features 40 hours a week? If so, we’ll work towards adding more training so that the developers can improve their skills gradually and continuously.

And once that’s taken care of, we might focus on the tools. Do we have the right tools to identify all the scenarios? Maybe there are gaps in the tools. Maybe we’ll create proofs of concept, and so on.

So, if we understand correctly, the technical questions come at the end…

Yes. Because security should be seen as a source of value, not a cost. But let’s face it, security comes at a cost.

It requires expert work, training, and tools, to name a few. It’s important to implement this security according to the risk. If security abuses have no impact, maybe we’ll just decide not to invest in them. However, there are places where you can’t let your guard down and that’s what you should be focusing on.

So it’s important to understand the laws, security programs and delivery standards that the company is subject to, and apply them to the teams. You need to look at the outcome and say, “Oh, we’re missing an expert to meet our requirements” or “I don’t have the tool to meet those requirements yet.”

So once you’ve identified these requirements, you can bring them into the delivery pipeline, because finalizing them takes a great deal of effort.

And we’re trying to avoid developing something unnecessary, which would be a waste.

So to recap: when we do a risk analysis as an organization, there are obligations to comply with. And even if you have to meet those obligations, DevSecOps, in a way, tries to turn such cost into value because you have stronger development practices that are still focused on delivering features to clients.

That’s right.

You can find the full interview (in French) on Ausha, Apple Podcasts, Spotify, Google Podcasts, and Podcast Addict.
