content: source track: update source threats for draft spec #1236 (base: main)
Conversation
Copilot reviewed 1 out of 1 changed files in this pull request and generated no suggestions.
Comments skipped due to low confidence (2)
docs/spec/draft/threats.md:86
- [nitpick] The word 'SHOULD' should not be in all caps unless it is a specific requirement in a specification. Consider changing it to 'should'.
Trustworthiness scales with transparency, and consumers SHOULD push on their vendors to follow transparency best-practices.
docs/spec/draft/threats.md:86
- [nitpick] The phrase 'Trustworthiness scales with transparency' is unclear. Consider rephrasing it to 'Trustworthiness increases with transparency'.
Trustworthiness scales with transparency, and consumers SHOULD push on their vendors to follow transparency best-practices.
Just the one comment, otherwise this looks great!
this isn't a complete review because I wasn't sure if more changes are coming, so I'm submitting this for visibility for now.
docs/spec/draft/threats.md (outdated)
> <details><summary>Software producer intentionally submits bad code</summary>
> *Mitigation:*
> This kind of attack cannot be directly mitigated through SLSA controls.
> Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.
Suggested change:
- Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.
+ Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.
Also, can we point to some specific ways to use scorecard here? I think we want to be careful with what we say can help with something like the xz attack...
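For reference, one concrete way a consumer might run Scorecard from the command line. This is a sketch: the repository name is a placeholder, and the install path and check names are best-effort recollections that should be verified against the Scorecard README.

```sh
# Install the Scorecard CLI (one option; see the project README for alternatives)
go install github.com/ossf/scorecard/v5@latest

# Run the checks most relevant to source threats against a placeholder repo
scorecard --repo=github.com/example-org/example-project \
  --checks=Branch-Protection,Code-Review,Contributors
```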
Diff under review:
  *Example:* Adversary directly pushes a change to a git repo's `main` branch.
- Solution: Configure GitHub's "branch protection" feature to require pull request reviews on the `main` branch.
+ Solution: The producer can configure branch protection rules on the `main` branch.
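As a minimal sketch of what that configuration could look like with the GitHub CLI. The repository name is a placeholder, and the JSON fields follow my reading of GitHub's branch-protection REST API, so they should be double-checked against the current docs.

```sh
# Require at least one approving PR review before anything lands on main
# (example-org/example-repo is a placeholder)
gh api --method PUT repos/example-org/example-repo/branches/main/protection \
  --input - <<'EOF'
{
  "required_pull_request_reviews": { "required_approving_review_count": 1 },
  "required_status_checks": null,
  "enforce_admins": true,
  "restrictions": null
}
EOF
```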
I wonder if this can lean on the source track and point to the documented change management process piece for setting guardrails on submitting changes.
I think in many ways this section is duplicative of other sections.
We could just say:
*Threat*: source threats!
*Mitigation*: SLSA source track! <url>
@adityasaky do you think we should just cut all these scenarios and redirect to the source track?
Note that this page discusses lots of threats to the build and doesn't just defer to the SLSA Build Track.
The general purpose of this page, AIUI, was to centralize all the various threats in one place and direct folks elsewhere as needed.
Potentially then most of these examples should link to relevant sections of the relevant tracks. Maybe we'll take it as a todo.
> *Mitigation:*
> This kind of attack cannot be directly mitigated through SLSA controls.
> Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.
> Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.
"Transparency" is unfortunately an overloaded word in our world. WDYM specifically in this context?
oof, that's a good callout.
I propose that by "transparency" I mean:
- open source
- open build
- open policies
- published "definitions of correctness."
I'd be inclined to remove the sentence altogether. Yes, transparency (visibility into the process that creates the software) is one way of increasing trustworthiness, but there are other vectors too. "Does this organization have something to lose if they do a bad thing? Do I think that's enough of an incentive for them not to do the bad thing?"
Additionally, I don't think "Transparency best-practices" are well defined anywhere (or even vaguely agreed upon...).
Perhaps the easiest way forward is
- to focus on the submission of bad source (so open builds don't apply here) and assume the other parts of the chain are protected (at the very least they're covered by other threats on this page).
- to be a bit more hand-wavy since this is a squishy subject:
**Mitigation**: Users must establish some basis to trust the organization they are consuming software from. That basis may be 1) that the code is open sourced **and** has a sufficiently large user-base that makes it more likely for malicious changes to be detected, 2) that the organization has sufficient incentives (legal, reputational, etc.) to dissuade it from making malicious changes. Ultimately this is a judgement call with no straightforward answer.
@TomHennen I think SLSA tooling gives you a way to unilaterally improve trust when the source and builds are open source. This is possible independently of whether or not the producer has something to lose.
I agree though that this section needs to be reduced.
I left two suggestions. wdyt @trishankatdatadog @TomHennen ?
Left a comment down there...
> *Mitigation:*
> This kind of attack cannot be directly mitigated through SLSA controls.
> Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it.
> Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.
If we merge this before 1.1 goes out, will that make it harder to remove this from the 1.1 release? Have we decided we want to include this in the 1.1 release (fine with me at this point)? @lehors thoughts?
Diff under review:
  be nice to resolve. For example, compromised developer credentials - is that (A)
  or (B)?
  -->
+ *Threat:* A producer intentionally creates a malicious revision with the intent of harming their consumers.
@TomHennen what if we update this to cover both:
Suggested change:
- *Threat:* A producer intentionally creates a malicious revision with the intent of harming their consumers.
+ *Threat:* A producer intentionally creates a malicious revision (or a VSA issuer intentionally creates a malicious attestation) with the intent of harming their consumers.
The mitigation for both is clearer: you cannot.
> Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.
> When transparency is not possible, consumers may choose not to consume the artifact, or may require additional evidence of correctness from a trusted third-party.
> Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.
> For example, a consumer may choose to only consume source artifacts from repositories that have a high score on the OSSF Scorecard.
Suggested change:
  Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices.
  When transparency is not possible, consumers may choose not to consume the artifact, or may require additional evidence of correctness from a trusted third-party.
  Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but do not fully remove it.
  For example, a consumer may choose to only consume source artifacts from repositories that have a high score on the OSSF Scorecard.
+ Ultimately, consumers must decide which producers are trusted to produce artifacts and which issuers are trusted to produce VSAs.
We could redirect from here over to "verifying platforms".
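To make the "high score" gating idea concrete, a sketch of what a consumer-side check could look like. The 7.0 threshold is arbitrary, and the `.score` JSON path is an assumption that should be confirmed against Scorecard's JSON output schema.

```sh
# Reject a (placeholder) source repo whose aggregate Scorecard score
# falls below a chosen threshold; '.score' is assumed to be the
# aggregate field in Scorecard's JSON output.
score=$(scorecard --repo=github.com/example-org/example-project --format=json \
  | jq '.score')
if ! awk -v s="$score" 'BEGIN { exit !(s >= 7.0) }'; then
  echo "Scorecard score $score is below threshold; not consuming source" >&2
  exit 1
fi
```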
Sorry for the late reply. Why do consumers even need to know which producers are trusted to produce artifacts (source code, I imagine, in this case)? Couldn't they just get away with the Source Track VSA from the issuer, and call it a day so long as its subjects/outputs (e.g., source code) match the inputs to the next step (e.g., build)?
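Roughly, that subject-matching check could look like the sketch below. The DSSE/in-toto field names (`payload`, `subject[0].digest.gitCommit`) and the file name follow my reading of the attestation formats and are assumptions to verify against the VSA spec; real verification must also check the VSA's signature and the issuer's identity first.

```sh
# Git revision the build is about to consume
built_rev=$(git rev-parse HEAD)

# Subject digest claimed by the (already signature-verified) source-track VSA;
# 'source-vsa.dsse.json' and the jq paths are illustrative assumptions
vsa_rev=$(jq -r '.payload' source-vsa.dsse.json | base64 -d \
  | jq -r '.subject[0].digest.gitCommit')

# The VSA only vouches for this exact revision
if [ "$built_rev" != "$vsa_rev" ]; then
  echo "VSA subject $vsa_rev does not match build input $built_rev" >&2
  exit 1
fi
```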
> **Source integrity:** Ensure that source revisions contain only changes submitted by
> authorized contributors according to the process defined by the software producer and
> that source revisions are not modified as they pass between development stages.
this wording aligns better with the build integrity section below.
> **TODO:** More producer threats? Perhaps the attack to xz where a malicious
> contributor gained enhanced privileges through social engineering?
> -->
we could add another section here like
*threat*: A member of the producer's organization acts against the intent of the producer.
We could say this is when "the producer UNintentionally ships bad code."
Diff under review:
- *Mitigation:* Only the version that is actually reviewed is the one that is
- approved. Any intermediate revisions don't count as being reviewed.
+ *Mitigation:* The producer declares that only the final delta is considered approved.
+ Intermediate revisions don't count as being reviewed and are not added to the protected context (such as the `main` branch).
Suggested change:
- Intermediate revisions don't count as being reviewed and are not added to the protected context (such as the `main` branch).
+ In this configuration, intermediate revisions are not considered to be approved and are not added to the protected context (e.g. the `main` branch).
+ With git version control systems this is called a "squash" merge strategy.
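For readers unfamiliar with the term, a minimal illustration with placeholder branch names:

```sh
# Fold every commit from the feature branch into a single new commit on main;
# the intermediate revisions never enter main's (protected) history.
git checkout main
git merge --squash feature-branch
git commit -m "Add feature X (squashed from feature-branch)"
```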
Diff under review:
@@ -267,8 +225,10 @@
  does not accept this because the version X is not considered reviewed.

  *Threat:* Two trusted persons collude to author and approve a bad change.

  *Mitigation:* This threat is not currently addressed by SLSA. We use "two
  trusted persons" as a proxy for "intent of the software producer".
I do not think we should use "two trusted persons" as a proxy for producer intent. We should probably say "the documented change process for the source" if we must, but I prefer just "the intent".
Force-pushed from 421ff2b to 97094b5.
@TomHennen! Potentially I am off base -- is 1.1 going to be different from the /draft folder?
I'm glad @TomHennen remembers that we need to get 1.1 finalized. I was going to bring that up myself looking at this PR. Unfortunately, we don't have a very clean way of working on several versions in parallel. The plan was to only work in the draft folder and, when all 1.1 issues have been addressed, to create a new folder for 1.1, selecting (manually) from the draft what is relevant. Of course, this is really only practical if the differences are fairly coarse (like whole files to be left out). If the content of some files starts changing in ways that are only relevant for the next version, this operation becomes quite impractical. If we want to go ahead and merge this, I will need to create the 1.1 folder first, and from then on every change made for 1.1 will have to be made in both folders (1.1 and draft). Not ideal but doable.
I wonder if at this point we can just wait and include the source track (and the build track if it's ready). We've waited this long, and there don't seem to be that many outstanding issues elsewhere. By trying to separate them now we'd probably just cause ourselves more work for less gain. WDYT?
You mean to skip 1.1? That would solve the problem of course. It's true that the more time goes by, the less relevant it is.
Not so much "skip" as include the source track in 1.1. New tracks are supposed to go in minor releases.
We had a long, long chat about tracks, threats pages, etc. at yesterday's meeting. Do we still like this PR or do we want to simplify it some? I have no opinion at this time.
fixes: #1178
The threats.md file is intended to be a one-stop summary of all the threats mitigated by SLSA.
From this page, we should plan to redirect to other parts of the documentation.