- cross-posted to:
- privacy@lemmy.ml
- humanrights@lemmy.sdf.org
It sounds like someone got ahold of a six-year-old copy of Google's risk register. Based on my reading, the article actually shows that Google has a robust process for identifying, prioritizing, and resolving risks that are surfaced internally. That's not only necessary for an organization of their size, it's also indicative of a risk culture that incentivizes self-reporting.
In contrast, I'd point to an organization like Boeing, which has recently been shown to incentivize the opposite: prioritizing throughput over safety.
If the author had found issues that were identified 6+ years ago and were still present in the environment, that might be cause for alarm. But, per the reporting, when a bug, misconfiguration, or other risk is identified internally, Google takes steps to resolve it, and does so at a pace commensurate with the level of risk the issue poses to the business.
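For what it's worth, here's a minimal sketch of the triage logic a risk register encodes (Python, with made-up severity labels and SLA numbers; real programs tune both). The point is that each finding gets a due date commensurate with its severity, and the alarming signal would be entries still open long past their due date, not the existence of the register itself:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical severity -> remediation SLA mapping (days); illustrative only.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 365}

@dataclass
class RiskEntry:
    title: str
    severity: str          # one of the SLA_DAYS keys
    reported: date
    resolved: bool = False

    @property
    def due(self) -> date:
        # Resolution pace is commensurate with the level of risk.
        return self.reported + timedelta(days=SLA_DAYS[self.severity])

def overdue(register: list[RiskEntry], today: date) -> list[RiskEntry]:
    # The cause-for-alarm case: findings identified long ago, still open.
    return [r for r in register if not r.resolved and today > r.due]

register = [
    RiskEntry("Misconfigured ACL on internal tool", "high", date(2018, 3, 1), resolved=True),
    RiskEntry("Verbose logging of user emails", "medium", date(2018, 6, 1), resolved=True),
]
print(overdue(register, date.today()))  # [] -> the innocuous outcome the reporting describes
```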
Bottom line: while I have no doubt the author of this article was well-intentioned, their lack of experience in information security and risk management shows, and the article ultimately raises a number of questions that turn out to have innocuous answers.
In my unpopular opinion, most of the information collected is unnecessary and shouldn't be allowed in the first place. Then there would be nothing to leak. Of course that wouldn't make big tech more profit, so who cares…
Thanks.
I will say, it's a soft paywall / email gate / whatever this is called… and I did happily give them an anonymous email address that forwards to my own. 404 is killing it! Independent, and they break great tech stories.
Testing out stuff in production can be dangerous.