dotnet workload restore runtime jumps from 25 seconds to 4 minutes when nuget feed is AzDO instead of nuget.org #43870

Comments
Triage: @AArnott does that feed exist, and does it work with dotnet restore but not with workloads? Can you try adding --ignore-failed-sources? Something is not configured right with that feed, and my guess is that the error case is not failing fast.
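For anyone following along, the suggested check is a one-liner; this assumes the flag works for the workload commands the same way it does for dotnet restore:

```sh
# Treat unreachable package sources as warnings instead of hard failures,
# to test whether a misbehaving feed is what's slowing the restore down.
dotnet workload restore --ignore-failed-sources
```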
@marcpopMSFT Yes, the feed exists, and the
My nuget.config only has one source specified, with
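For reference, a quick way to confirm which sources NuGet actually resolves for the repo (including any inherited from user- or machine-level config files):

```sh
# Lists every package source in effect for the current directory,
# so you can verify only the single AzDO feed is in play.
dotnet nuget list source
```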
@nkolev92 @aortiz-msft are there logs we can collect to see the time spent in nuget for this? Have you heard of any discrepancy between nuget.org and AzDO?
Since we're talking about dotnet workload restore, I'd need some pointers to that code so we can figure out which logs might be available. Normally that would be on informational verbosity, assuming the logger is passed down.
I'd generally expect nuget.org to be faster than AzDO. I don't know if that fully explains the 25s-to-4-min jump, as that'd be driven by the number of calls made, the packages downloaded, and their size. The logs posted here are full of lines saying the service index wasn't loaded; that's probably what I'd look into first: why there are so many failures fetching the service index resource. Another thing I'd look into is whether the SourceRepository object is shared across all of these workload calls.
Probably not. Most msbuild args aren't copied to the inner invocation of msbuild, sadly.
@AArnott you could set the msbuild args as properties and they should get picked up by the inner invocation. Let us know if you have more info for us to dig into based on nkolev92's reply. We probably need to take a look at improving how we report nuget errors and data, since we're a pass-through in a lot of cases and it's hard to track down.
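A sketch of what that might look like; whether workload restore forwards -p: style properties to the inner invocation the same way other dotnet commands do is an assumption here, and the property name is purely illustrative:

```sh
# Hypothetical example: pass the value as an MSBuild property (-p:) rather
# than a raw command-line switch, so the inner MSBuild invocation can see it.
dotnet workload restore -p:SomeProperty=SomeValue
```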
Also, you say you're set up for auth. Do you get the unable-to-load message on all attempts to hit AzDO? I wonder if auth is not configured correctly (reminder that secure feeds don't work well with workloads on Mac in particular, because of limitations of the credential manager extension).
@AArnott any update on whether you were seeing the unable-to-load message whenever hitting AzDO? Can you try enabling nuget logging and try on Windows? Set NUGET_PLUGIN_ENABLE_LOG to true and NUGET_PLUGIN_LOG_DIRECTORY_PATH to an available path.
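For reference, enabling that plugin logging on a Unix-like agent looks roughly like this (the log directory path is arbitrary):

```sh
# Enable NuGet credential-plugin logging, then reproduce the slow restore.
export NUGET_PLUGIN_ENABLE_LOG=true
export NUGET_PLUGIN_LOG_DIRECTORY_PATH=/tmp/nuget-plugin-logs
mkdir -p "$NUGET_PLUGIN_LOG_DIRECTORY_PATH"
dotnet workload restore
```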
I'm not familiar with how to do this. I have filed other bugs about
Definitely not. Ultimately it works, as attested to by the fact that the build succeeds, and I'm on agents that have no local nuget cache to begin with, so nuget packages (and presumably workloads) must have been downloaded (from AzDO).
Yes, I'll try that and get back to you.
@nkolev92 what should we be looking for in those logs?
I quickly looked at the plugin logs, focusing on the timings, to see if acquiring a token is taking long or anything like that.
@nkolev92 so how do we determine what is taking the time? If it's not in the nuget logs, where might it be? Is the time on our side, and do we need our own perf trace from the SDK?
We haven't looked at NuGet-specific logs yet, so I think we need to go back to the original idea and run with diagnostic verbosity. When I run dotnet workload restore --verbosity diag locally, I do see a lot of log messages:
Each HTTP request will be logged, and it'll tell us if some are taking longer. @AArnott can you try that?
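A minimal way to capture that output for comparison; the grep pattern is a guess at the shape of the HTTP log lines and may need adjusting to the actual format:

```sh
# Run with diagnostic verbosity and keep the full log for later analysis.
dotnet workload restore --verbosity diag 2>&1 | tee workload-restore.log
# Pull out the HTTP request lines to compare per-request timings.
grep -Ei "GET https|OK https" workload-restore.log
```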
It's hard to test this, because #45447 is now blocking me. :( |
I've found a workaround for that, and now all the agents are at least running again except for the mac, which is failing with:
This, despite the fact that the immediately preceding Azure Pipelines task is the NuGet authenticate task, which authenticates this feed:
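One general mitigation for non-interactive AzDO feed auth on macOS/Linux agents is to install the Azure Artifacts Credential Provider; this is a common workaround for headless auth failures, not a confirmed fix for this issue:

```sh
# Installs the Azure Artifacts Credential Provider into ~/.nuget/plugins
# so headless dotnet/NuGet invocations can authenticate to AzDO feeds.
sh -c "$(curl -fsSL https://aka.ms/install-artifacts-credprovider.sh)"
```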
Can you add
I am still blocked on those logs from the mac. But here are the logs from linux, which in this run show it took 4.5 minutes, compared to the 27-second baseline. The increased time I attribute to the switch from nuget.org to Azure Artifacts feeds and (likely) the diagnostic level of the logs.
@nkolev92 anything in the new logs that would help?
Looking at the logs, they match the 4.5 mins Andrew is talking about.
There are ~166 HTTP requests. There seem to be distinct steps in the process. Workload manifests download:
The total time is just over 3s, but there are plenty of HTTP requests that take ~400ms each. Then, for the workload, 34 different packs are installed. Installing pack Microsoft.Android.Sdk.Linux version 35.0.24...
Notable: 5s for this one nupkg to be downloaded. Then it's: Installing pack Microsoft.Android.Sdk.Linux version 34.0.145...
Going through the logs, we can see that the installation of these workloads happens sequentially, and the download times of the nupkgs are:
By the time they're done, we've spent 2.5 mins there (2024-12-13T03:24:25.4920842Z). tl;dr: the majority of the time is spent downloading the nupkgs for the packs (guessing they're large files; I don't have access to check the size). This is just the difference between AzDO & nuget.org.
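To sanity-check that theory, the raw download speed of a single nupkg can be measured directly against each feed. The URL below is a placeholder for a real flat-container package URL, and a private feed would also need an auth header:

```sh
# Time one nupkg download and report total seconds and bytes transferred.
curl -o /dev/null -s -w "total: %{time_total}s, size: %{size_download} bytes\n" \
  "https://pkgs.dev.azure.com/ORG/_packaging/FEED/nuget/v3/flat2/PKG/VER/PKG.VER.nupkg"
```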
Describe the bug
Frequent pipeline failures due to nuget.org not responding to package restore requests drove me to consider using a private Azure Artifacts feed instead of nuget.org in nuget.config.
But this change has an unexpected side effect: the time to run dotnet workload restore jumps dramatically. The macOS agent is the only one to display the following errors, which it does only after the change:
Eventually the work completes (successfully, I guess, because the rest of the build succeeds).
This is repeatable. And the macOS agent has the authentication necessary to pull from the private feed.
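For reproduction, the measurement being compared is simply the wall-clock time of the restore on a clean agent (no local NuGet cache), once with nuget.org as the sole source in nuget.config and once with the Azure Artifacts feed:

```sh
# Wall-clock the restore; ~25 seconds against nuget.org vs ~4 minutes against AzDO.
time dotnet workload restore
```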
Further technical details