diff --git a/SECURITY.md b/SECURITY.md
index 87a16f17538..ac3a97e7f63 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -229,10 +229,10 @@ and
 [the Google OSS VRP rules](https://bughunters.google.com/about/rules/6521337925468160/google-open-source-software-vulnerability-reward-program-rules)).
 
 **Note**: Although the iterative exploration of TF API via fuzzing
-`tf.raw_ops.xxx` symbols is the best way to uncover code weakeness, please bear
-in mind that this is not a typical usecase that has security implications. It is
+`tf.raw_ops.xxx` symbols is the best way to uncover code weakness, please bear
+in mind that this is not a typical use case that has security implications. It is
 better to try to translate the vulnerability to something that can be exploited
-during training or inference of a model (i.e., build a model tha when given a
+during training or inference of a model (i.e., build a model that when given a
 specific input would produce unwanted behavior). Alternatively, if the
 TensorFlow API is only used in ancillary tooling, consider the environment
 where the tool would run. For example, if `saved_model_cli` tool would crash on
@@ -254,7 +254,7 @@ In addition, please include the following information along with your report:
 * Your name and affiliation (if any).
 * A description of the technical details of the vulnerabilities. It is very
   important to let us know how we can reproduce your findings.
-* A minimal example of the vulnerabity.
+* A minimal example of the vulnerability.
 * An explanation of who can exploit this vulnerability, and what they gain when
   doing so -- write an attack scenario. This will help us evaluate your report
   quickly, especially if the issue is complex.
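
To make the "translate it to training or inference of a model" guidance above more concrete, here is a minimal sketch of what such a proof of concept could look like. It is illustrative only and not part of the policy text: `tf.raw_ops.Relu` is a harmless placeholder for whichever raw op the fuzzing uncovered, and the `SuspectModule` class name and `/tmp/suspect_model` path are made up for the example.

```python
# Hypothetical sketch: wrap a suspect raw op inside a SavedModel so the
# issue would trigger during inference of a model that a victim loads.
# `tf.raw_ops.Relu`, `SuspectModule`, and the path are placeholders.
import tensorflow as tf


class SuspectModule(tf.Module):
    """Exposes the op under suspicion behind an inference entry point."""

    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
    def __call__(self, x):
        # Replace this call with the raw op, attributes, and shapes that
        # trigger the weakness found via fuzzing.
        return tf.raw_ops.Relu(features=x)


# "Attacker" side: build and export the model.
module = SuspectModule()
tf.saved_model.save(module, "/tmp/suspect_model")

# "Victim" side: loading the model and running it on a crafted input is all
# that is needed to reach the suspect code path.
restored = tf.saved_model.load("/tmp/suspect_model")
print(restored(tf.constant([1.0, -2.0])))
```

The same exported artifact can also be used to demonstrate the ancillary-tooling scenario mentioned above, for example by pointing `saved_model_cli` at it (e.g. `saved_model_cli show --dir /tmp/suspect_model --all`) and describing the environment in which that tool would realistically be run on untrusted models.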