Software engineering, responsibility, and ownership
It is sometimes easy for technology experts to think about computer security in terms of building technology that allows them (the experts) to be secure. However, we need to ask how secure all of a technology's users will be, not how secure its most skilled users can possibly be. (The same applies to usability, but I think that's more uniformly understood.) We need to design systems in which everybody, not just technology experts, can be secure. Software that is secure only in the hands of experts risks increasing inequality between an elite who can use it securely and a larger population who cannot.
We don't, for example, want to switch to a cryptocurrency in which only the most technically sophisticated 1% of people are capable of securing their wealth from theft (and the other 99% have no recourse when their money is stolen).
Likewise, we don't want to create new and avoidable differences in people's employability based on their ability to use the technology we create to maintain the confidentiality or integrity of data, or on their ability to avoid having their lives disrupted by security vulnerabilities in connected (IoT) devices.
If we can build software that is usable and secure for as many of its users as possible, we can avoid creating a new driver of inequality. It's a form of inequality that would favor us, the designers of the technology, but it's still better for society as a whole if we avoid it. And we'd be avoiding inequality in the best way possible: by improving the productivity of the bulk of the population, bringing them up to the level of the technological elite.