Cultural Issues with Product Security in Modern Tech
Although it's debatable whether most tech employees truly "care" about product security and the security of customer data, I haven't been able to get anyone to tell me in writing that they don't. Verbal claims aren't generally very useful in court, so I'll assume most people do care about security to some degree. After all, we've all seen the headlines about how much various companies have had to pay as a result of a data breach, hack, or other information security incident.
As a software security engineer and tech lead responsible for various layers of software in complex systems, as well as an administrator of a successful bug bounty program, I've come across situations that unfortunately cause great damage to customers and businesses alike. I've divided this post into two major camps. If you are part of one camp, I suggest you read the other camp's guidelines as well.
Problems brought on by Product Security professionals
Lack of understanding of constraints placed on Developers
By far the biggest misunderstanding I've seen in security engineering and research teams is a lack of understanding of the constraints placed on developers. Many security professionals have never worked professionally (or even non-professionally) as software engineers, and thus don't fully understand things from a software engineer's point of view: process-related things like how work is split up and assigned, why certain tickets are prioritized over others, who on the team even determines priority, and why some tickets take longer to get to. As security professionals, we often assume that because we can code, we understand everything about being an active software engineer at our organization. This is simply not the case: most of the time, they are two distinct jobs, and if nothing else, there are always team differences.
Poor understanding and conveyance of business impact
This issue is especially pervasive in bug bounty programs, and it occurs precisely because vulnerability classification and severity are not the same as vulnerability impact. For example, one of the most severe classes of exploit is "Remote Code Execution." An example of such an attack would be a security researcher finding a vulnerability in a server that, when exploited, allows them to run any code of their choosing on that server. Obviously, in many cases this would mean they could take over the server or do other bad things.
Yet sideloading an app onto a rooted Android phone, where that app attacks the memory of another app on the phone and causes it to run arbitrary code, could also be classified as "Remote Code Execution." The impact here is vastly different.
In the server example, a researcher in Austin, Texas can take control of your server in a building in Sydney, Australia in just a few seconds. That server is serving content to millions of users worldwide, and if it is compromised, brand reputation and customer data could be at risk at a large scale.
In the Android phone example, an attacker needs:
1) Physical access to the mobile phone
2) A rooted phone, which violates warranties and is an "at-risk" action to begin with
3) A malicious app already installed on the victim's phone
4) And after all of that, the worst outcome is that the attacker gets the personal information of one or two users.
See the difference in business impact? The Android attack may actually be harder to pull off: it requires more time and skill, physical access to the device for a period of time, and a rooted phone. It also yields a malicious hacker far less power in many cases, and so it is less likely to even occur in the first place.
The problem is that both of these are "Remote Code Execution" and would receive a similarly high severity rating, probably in the 8-9/10 range.
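To make the distinction concrete, here is a deliberately simplified scoring sketch. This is a toy model of my own (not the real CVSS formula, and the halving weight is an arbitrary assumption) showing how two findings with the same raw severity diverge once attack prerequisites are factored in:

```python
def real_world_risk(severity, prerequisites):
    """Discount a raw 0-10 severity score by its attack prerequisites.

    Toy model: each prerequisite halves the effective likelihood
    that the attack occurs in the real world.
    """
    likelihood = 0.5 ** len(prerequisites)
    return round(severity * likelihood, 2)

# Remote, unauthenticated server takeover: no prerequisites.
server_rce = real_world_risk(9.0, prerequisites=[])

# Same severity rating, but three real-world hurdles first.
android_rce = real_world_risk(9.0, prerequisites=[
    "physical access to the device",
    "device is rooted",
    "malicious app already installed",
])

print(server_rce)   # 9.0
print(android_rce)  # 1.12
```

The exact numbers don't matter; the point is that a report's effective priority should fall as its list of preconditions grows, even when the raw classification stays the same.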
What many security researchers do is exploit this high severity rating to persuade, and sometimes even trick, naive businesspeople into believing a vulnerability is worse than it is, in order to get recognition, shame businesses, or, more often, get paid more bounty money. For this reason, many businesses are choosing to ignore, or take with a grain of salt, any submission from a researcher that does not demonstrate real business impact. This erodes businesses' trust in the security research community and can even harm relationships with well-respected researchers. The story of the boy who cried wolf comes to mind: if the security research community keeps crying wolf over nothing, then when a real wolf arrives, the business may not act fast enough to save itself and its users from a nasty attack.
Of course, impact is subjective and depends on the victim: a random Android user with no money in the bank is much different from a billionaire accessing his or her bank account via that device.
Purely Theoretical Arguments
"An attacker could...." [insert some 15-step process here] to attack a user. In my experience, this type of vulnerability report immediately puts many businesses on the defensive and begs the question "Ok, but how likely is this to happen in the real world?"
"Could this happen outside of a lab?"
"Could the average malicious attacker perform this attack?"
Security researchers should aim to answer these questions immediately in their initial reports, so that businesses do not have to chase them down to ask.
On any given software product, there are tons of potential vulnerabilities that attackers theoretically could exploit, but many times, they simply don't. This could be because not enough people are looking hard enough, because the benefit of performing the attack isn't worth the risk, and so on. Many security professionals have a difficult time turning "could" into a real-world threat in their write-ups, and this costs businesses and researchers time and stress.
No consideration of business priority
I've often heard researchers slam businesses for "not caring" enough about security. This may or may not actually be the case. Many businesses have large workloads for a relatively small number of employees and face all kinds of resource constraints (time, money, headcount) when releasing products. Researchers often fail to consider these constraints and assume that security should always be the #1 priority on every business's list at all times. This makes no sense, because businesses are run by marketing, sales, management, legal, procurement, ecommerce, operations, finance, HR, UX, analytics, and many more teams. Yes, Product Security is one of those teams, but that means by definition that Product Security is only one slice of the pie chart. Unfortunately, in the minds of many security researchers, the entire pie chart is 100% security, and that's simply not how businesses operate, or even can operate.
Problems brought on by Product teams
Getting defensive too quickly
Product Security is part of your "team." They, like you, work for the company. This means you are all on the same side, and reported security flaws should not be taken as personal attacks on your work.
Not understanding that security is a specialty
Realize that the security of modern tech products is so complex that it has become a specialty: it has its own training and education, toolsets, processes, and workforce. There is a reason for this, and it's crucial to realize as a developer, product manager, or QA tester that security expertise should not be expected of you. For the same reason that a family care physician is not expected to be a foot expert, software engineers and product managers are not expected to be security experts. The Product Security team is support staff and consultants who help you achieve a more secure product. It is totally normal to accidentally write insecure code or architect a system with some security flaws in it; when Product Security points these out, do not be hard on yourself. Security is a separate job from software engineering and product management.
Trying to fight security rather than work with it
Some people have had a bad experience or two with security researchers and/or engineers, possibly even for good reason... after all, the first half of this article is plenty long. Maybe they finished a commit or PR on a Friday only to spend four more hours redoing the code because the Product Security team complained. Perhaps one particular Product Security engineer seems to constantly pick apart their code. This can create an unhealthy "us vs. them" mentality, and many engineers and product team members respond by pushing security-related tickets aside, treating security as an obstacle rather than part of the end product. The reality is that security is a set of product features and should be treated as such. In modern tech, shipping a secure product to consumers is the joint responsibility of the product, engineering, and security teams. The argument "We don't have the resources/time for that" can be spun around and used to acquire more resources from upper management. As I noted earlier, while some people may brush security aside in practice, making a solid case that "We don't have the resources to properly secure this module/system, which puts the business and its consumers at risk" (even better if in writing) may actually help get the resources to do it right. Instead, developers and product managers sometimes decide internally that security is a pain in the neck and/or not a feasible goal, and passively commit insecure code into their systems.
If nothing else, I hope this article has given you, the reader, no matter which team you are on, some perspective on the overall goals of Product Security and the challenges that product, engineering, and security teams face. In my experience in this field, and on this Earth in general, ignorance can breed misunderstanding and even resentment and hostility. My goal with this article is to help mitigate that by shedding some light on the critical business issues affecting modern Product Security in tech. At the end of the day, we're all in this together to give our users the best experience and "bang for their buck," which in turn helps make the business successful and profitable.