javascript: npm for web packages #2
@justinfagnani I think, especially since we're talking about it over here, that it would be very interesting to imagine what a bespoke package manager just for Polymer would be like. We've talked recently about having what we call an assets-specific package manager, though, and it looks something like this:
So, looking at a Polymer example, based on its install guide -- and assuming this package manager is integrated directly into npm:
```html
<head>
  <script src="/assets/webcomponentsjs/webcomponents-loader.js"></script>
  <link rel="import" href="/assets/polymer/polymer.html">
  <link rel="import" href="/assets/app-layout/app-layout.html">
</head>
<body>
  <app-header reveals>
    <app-toolbar>
      <!-- stuff -->
    </app-toolbar>
  </app-header>
  <app-drawer id="drawer" swipe-open></app-drawer>
</body>
```
I would like to think this feels right for your needs, and is not very different from the Bower workflow -- except you can host everything on a registry, and you can also have build steps before publish using the usual lifecycle scripts. N.B.: this doesn't have to mean adding a couple of subcommands to npm itself, if you're not comfortable with that. As far as your suggestions go, let me address them real quick:
|
I was thinking about this sort of thing a bit earlier today, particularly in the realm of browsers and ES6 modules and the need for those to be flat dependencies without Node's auto-magic handling of the node_modules folder. A similar, but not identical, use case is Cordova plugins distributing native source code for Swift/C#/ObjectiveC/Java in their npm packages and needing to ensure their plugins have a flat install. Getting two copies of the same code there results in duplicate symbol linking errors coming out of Xcode, and no JS developer (or any developer for that matter) wants to see those. What you're proposing looks good 👍 |
I want to make another note about effort levels required for this: @mikesherov, with some help from npm CLI folks, is currently working on throwing together cipm. This is relevant and significant because a big chunk of that has involved extracting a bunch of code from npm into standalone libraries, and gluing them together. More, higher-level libraries might come out of this as we approach 1.0, and those will all reduce the effort needed when writing a whole new package manager from scratch. The goal here is that someone could toss together their own package manager using production-ready/production-proven tools like pacote in a single weekend. The main missing piece once cipm is done is the Molinillo port because cipm doesn't have any actual tree-building capabilities, so we won't have that extracted. The rest is just finding pieces in cipm that work well together on their own and turning those into single-call libraries. Once all that's in place, though... people can pick and choose what to get creative with ^_^ -- that's why I think having this sort of thing embedded directly into |
We wouldn't want to have our own package manager, because that would be a barrier between Polymer and all the other front-end code out there. Polymer is just a library and has no special requirements beyond any other front-end package that uses web-compliant modules: imports are done by path, which leads to needing a flat package layout. If we did have our own package manager, what would a project with Angular + Polymer do? Use npm or polymer-pm? Same with React, Vue, vanilla JS, etc. What would happen if all the frameworks each had their own package manager? Yarn at that point is a much better option because it's not specific to any particular library, and we may build up flat installs as standard practice for web development over time. @zkat I'm curious about this point:
If flat were opt-in and done on a per-target or per-dependency basis, how would that break existing contracts? Right now flattening and deduplication must be done as a post-installation step, so the community is ad hoc layering this contract on top of what npm provides.
I was hoping to learn more about the idea for assets/, since I've only heard a little bit through @notwaldorf so far. It sounded like assets/ would be flat, so does that clash with the point about not supporting flat installations above, or is there some other interpretation here? |
I was going to say "I think @zkat's question was purely hypothetical; I don't think she was actually proposing a separate package manager" but then I read her entire comment and now I'm not so sure 😄
Personally I'd love to see something where the installation fails in case of conflicts that can't be resolved. That's what pretty much every other major ecosystem from the past 20 years has chosen to do, even in cases where "the npm way" would have worked. I know that forcing a flat package list is not "the npm way", but I also don't want 10 different versions of left-pad in my app. Currently, package maintainers can just ignore unresolvable conflicts since npm happily allows them. If the problem were more obvious, I think more developers would be open to upgrading their dependencies more often (or, for unresponsive developers, having their projects forked to upgrade them). |
@zkat I'm curious about
Is this to avoid dep hell? I would think if I were working on a plugin or component for a thing, I'd want to declare which versions it's compatible with, and flat resolution as you describe, with attempting to install a dependency's assets, seems like it would do that. Otherwise, it may need a |
**Word of Warning: this is textbook tl;dr**

So this got really long and I'm sorry, and I hope you'll forgive me for dropping a text bomb on you. I think it's safe for at least some people here to skim most of this, and the rest is direct responses to people's questions, so just search for your name and you'll see my response to that, specifically. This topic involves a lot of detail and tricky corner cases, and since we're talking about the constraints we're working with here, I think it's best to at least try to have this level of detail in the conversation.

**The Rationale for Flattening**

Before I get into it, I want to talk a bit about why we're having this conversation, and make sure that I understand the concerns involved. This is both for my benefit (in case I missed something), and for the sake of folks reading this thread. A lot of this is probably obvious to people, but I think by putting this into these terms, we'll be able to look at individual decisions and say "ok this solves X and Y but not Z" and such. Sorry in advance for it being so long, but I think we'll benefit from the detail:
In summary:
Again for the folks in the back: Polymer absolutely requires 2 and 3, but 1 is a "nice-to-have" that happens to fall out of this. Therefore, I'd like to focus on ways to solve those two concerns, rather than treating tree flattening as the only solution to bundle size reduction (or, tbh, a significant one).

**Project-specific Package Managers**
The suggestion here is, of course, to have a standardized way that all of these can work -- tl;dr: it was an example, and my answer was not premised on Polymer doing that.

**Opt-in Flat Dependencies**
Keep in mind that I'm talking specifically about a command-line option.

**The Idea behind `assets/`**
|
Since I was pinged, I'll offer my opinion!

**Abstract**

Simply stated, bundle size and where things live on disk is really a bundler/resolver concern, not a package manager concern.

**How did we get here?**

The only reason we even think it's a PM concern is because both npm and Yarn are built by default to know about Node's resolver. And they are only built this way because almost all packages on npm are Node programs. As a consequence, we have Browserify, Webpack, etc., and they have adopted the Node resolver algorithm as the default way of bundling/resolving. As stated before, this breaks fragile singletons (like jQuery etc.). However, this is the ultimate minority case. Yes, jQuery use is widespread, but it is not specified as a dep in many packages. And in most cases, peerDeps solve this problem. But, we still need to solve this.

**Solution: Resolution / Bundling is a bundler concern.**

I personally have interconnected personal/company projects and have seen this problem. I have two DOM libs, both of which express deps on seemingly incompatible versions of jQuery, and I then have both as deps of my main application. So, what do I do? Well, seeing as I know I need jQuery 1.x in my application, I express that exact version as a dependency of my application, which guarantees it'll be installed at the top level of `node_modules`. Now, this may seem less than ideal, having to configure both Webpack and package.json to achieve this, but the alternatives have footguns:
**Conclusion**

Let bundlers bundle; let PMs install packages. Ok, now I'm going to stop talking before Sean Larkin's Webpack radar catches this convo. |
i wanna absorb and share more thoughts on this later, but i'll drop a quick note now to say that i think this is a promising approach |
I don't have time yet for a full response to the previous books :) but I did want to drop a quick comment on bundling being the solution to flat/resolving: we strongly believe that bundling should be an optional optimization used for production, but not necessary during development. Polymer-only projects have always worked directly from an installation with no build step. Which reminds me of another feature that I'd like to have out of a package manager (which I know isn't going to go over well already, but hey): name resolution on installation. The names of modules are specific to the package manager they're published to. To me it makes sense that a web package manager do name resolution on installation so that the installed package folder is loadable by browsers. Also, it's a bit question-begging to claim that we can't have flat installs because we don't have flat installs. There are other JS package managers that install flat, like Bower, and while npm "won", I don't think we can say it's because of nested packages vs. other network effects. Nested packages also have much less negative implications in Node than on the web, and npm is still a Node-focused package manager, so multiple versions being common on the web may just be a historical artifact of how web projects migrated to npm, where there were already a lot of packages. |
Then Polymer should have a package manager specific to its resolution algorithm. You're saying there's no build step, but package install is a build step. If you want your package manager to also do name resolution, what it sounds like to me is that you want a postinstall script to run that handles the specific "flat" resolution algorithm.
I don't personally view multiple versions as an artifact. IMO, multiple versions are a strength of the JavaScript ecosystem that specifically avoids dependency hell, and I consider bundle size to be a production optimization and "I need to be the only copy of jQuery" to be the true artifact. |
It should be clarified that this is not Polymer's resolution algorithm. This is driven by how web browsers resolve paths almost universally.
It's fair to say that installing a project and its dependencies counts as a build step. For the purposes of discussion, it's probably best to distinguish the broader concept of "build" step from source transformations. In typical cases, Polymer projects can be installed and run in dev without source transformations because they rely on the package resolving affordances that are readily available in a browser.
Here we should distinguish two hells: dependency hell and version resolution hell. NPM has successfully avoided version resolution hell, but with a notable set of tradeoffs that magnify dependency hell. In many environments it is considered virtuous to require one canonical version of each package. |
I thought this was about named module resolution (as opposed to path resolution), right? Excuse me if I am mistaken, but |
Yes, I was talking about npm doing node module resolution on installation, so that installed packages are ready to be consumed by the browser, which obviously doesn't do node module resolution. I think this is within the realm of responsibility of a package manager, since the names themselves are specific to the package manager. The transform would be from |
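As a sketch of what such an install-time transform could look like (hypothetical helper functions -- this is not npm's behavior; it assumes a flat layout where every package is a sibling directory, and the simplified convention that package `foo` has its entry module at `foo/foo.js`, where a real tool would read package.json's `main`/`module` fields):

```javascript
// Hypothetical install-time specifier rewriter: turn bare module names
// into relative paths a browser can load directly.
function rewriteSpecifier(specifier) {
  // Relative and absolute specifiers are already browser-loadable.
  if (specifier.startsWith('./') || specifier.startsWith('../') || specifier.startsWith('/')) {
    return specifier;
  }
  // Bare specifier: point at the sibling package's assumed entry module.
  return `../${specifier}/${specifier}.js`;
}

// Rewrite the specifiers in all static import statements of a source file.
function rewriteImports(source) {
  // Naive regex over import statements; a real tool would use a JS parser.
  return source.replace(/(import\s[^'"]*['"])([^'"]+)(['"])/g,
    (match, pre, spec, post) => pre + rewriteSpecifier(spec) + post);
}
```

For example, `rewriteImports("import { html } from 'lit-html';")` would yield an import the browser can resolve against a flat sibling layout.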
If it's of any interest, I built flatn a while ago, especially for the purpose of using npm packages in the browser, for bundling them in a sane way with Electron, and in combination with SystemJS and lively.modules. flatn can use one or multiple "package collection directories"; those hold the dependency packages in a package-name/version structure. It then knows about "package directories" and "package development directories" that directly point to packages. It indexes all of that, and when resolving a name from one package to another, it figures out the right version and path and pulls the dependency in. This has the advantage that even though the dir structure is flat, multiple versions of the same package for different deps still work. The index can be updated at runtime if new packages should dynamically be added. flatn itself is currently nodejs-only; its output is understood, however, by lively.modules for in-browser usage. When using it to export a package index, general in-browser usage should be possible. Currently flatn doesn't work on Windows, and there are other things that can be improved. It's on my long-term plan to work on that, but in case there is wider interest in it I can focus on that more. |
The names are specific to the registry and the resolver, not the pm. Yarn and npm share names if they are both using the npm registry. Also, I can make names mean something different by tweaking my resolver (Webpack).
"resolves to" according to the node algo specifically. This sounds exactly like what a bundler/resolver does. You just want it to run postinstall rather than perpetually. As soon as you're inspecting package.json for a "main" entry, youve gone away from the "package resolving affordances provided by the browser", no? I may be missing a key understanding about browsers ability to resolve names. Am I?
I'm unsure why directory structure transformations are preferred to source transformation. With all that said, if we end up with a flattening directive in package.json, IMO, it should satisfy several requirements:
This way, if I introduce depC, which requires jQuery 2.x, I get a warning, because I'm unsure whether depC really can use jQuery 3.x. Also, if depA updates to jQuery 2.x, I don't know whether it also works with 3.x. So the resolutions field needs to know which packages it conflict-resolved, and for which version ranges.
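A hypothetical shape for such a field (invented for illustration -- neither npm nor Yarn supports range-aware resolutions like this; the version numbers are made up) might record both the winning version and the ranges it overrode:

```json
{
  "resolutions": {
    "jquery": {
      "version": "3.2.1",
      "resolved": {
        "depA": "^1.9.0",
        "depB": "^2.2.0"
      }
    }
  }
}
```

With the overridden ranges recorded, the installer could warn when a new package appears, or when a dependency's declared range changes, outside of what was originally conflict-resolved.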
|
We're having terminology conflict I guess. By package manager I mean the complete system. i'll try to use "registry" from now on.
Yes, because this is the resolution algorithm that names in npm packages assume right now.
The browser has no ability to resolve names - it only loads by path. This is why I want a package manager for the web to resolve on install, so that projects are loadable by browsers out-of-the-box after install.
Not following. I don't think I suggested directory structure transformations, just rewriting import specifiers to be web-compatible by resolving names.
Cool, thanks for thinking this through!
We're not just concerned with "highlanders", but with packages that reference other packages by path. In that case you're not necessarily saying that the package needs to be a singleton, but that its dependencies need to be installed to known locations (siblings being the simplest known location). For instance, all of Polymer's packages published to npm now are web-compatible out-of-the-box, and import their dependencies with relative sibling paths.
This sounds great. It would be nice if Yarn's resolutions worked this way.
Agreed. So back to this:
Is there anything we can do to help this along, anything from more motivating use-cases, gathering other interested parties, to defining semantics, or contributing code? I would love to not have to be tied to Yarn, and maybe get some convergence for web-compatible installs. |
Thanks for clarifying. However, I'm still confused, because it seems like different folks are asking for different things. Let me ask you specifically, @justinfagnani, which of the two ways you want this to work. Ignoring the conceptual split between a bundler/resolver and a PM, you could have a program that either:
IMO, the first choice is clearly superior, and you could always layer on aggressive flattening at the rewriter level... almost no connection to what's on disk nor represents what actually gets shipped to the user. Please let me know where the above falls apart for you so I can empathize better. Thanks! |
This is my take: Node.js modules and Web modules (for lack of a better name) are essentially two different languages. They have different resolution semantics, different language capabilities, and should be installed in different locations. Node.js modules are installed in `node_modules/`. Web modules are installed in `assets/`. Web modules and Node.js modules are best thought of as different languages. They can't interoperate without software assistance (Node.js modules would need webpack-type help), they have different mechanisms for loading dependencies, etc. |
@iarna, where does this leave the class of modules that are useful in both node and browser contexts, or the increasingly common isomorphic/universal JavaScript approach to rendering web app UI? |
@graynorton The differences in package loader semantics means that they can't actually be cross-loadable if they have dependencies, without a build step. If you have a build step you can pretty much do what you want. I mean, with build steps we have CJS in the browser today. If the ESM loader spec is ever finished and implemented then that'll be the right solution, but without that I don't see a path forward. (The Node.js ecosystem's behavior is pretty well locked in and no one is likely to change it in the foreseeable future. 500,000 modules is a LOT of inertia.) |
One thing to keep in mind: bower resolutions quickly became an unmaintainable mess on large projects. I believe lock files should massively alleviate some of that pain, but whatever solution is chosen should definitely consider the maintenance burden of resolutions. |
@ChadKillingsworth, CLI args like Does that make sense, or did I miss something? |
Or you can imagine |
@justinfagnani, btw, I'm definitely interested in solving this problem in npm despite the fact that I push back hard. It helps define the problem for me. |
@mikesherov Yes that makes sense - but doesn't alleviate the concern. My lock file comment was more a nod to storing the resolved version in the lock file instead of in the equivalent of bower.json. With bower, I saw a lot of CI builds with the I too am extremely interested in solving the problem. Any valid solution to me would require:
|
In npm's case, also, I personally would not imagine them to be transitive; they'd be similar to a lockfile in that regard, in that they are really only used for applications, not libs. Libs would use liberal semver ranges to express greater tolerance and flexibility, whereas resolutions are for "leave me alone, I know what I'm doing". Unsure what you mean by maintainable resolutions. Can you elaborate? |
BTW, it's worth clarifying that I'm not speaking on behalf of the npm team (despite saying things like "in npm's case"). |
Yeah bower did that too - but I still saw CI builds use the
In an ideal world, I agree. But I rarely see that level of care and attention to library versions, so in practice this didn't actually work out in my experience. |
@bmeck The real point of Number 3 is that you be able to consume modules written for whatever environment, as long as they are actually ESM. The premise of the demo is to show what it might be like to have the installer, at install time, implement Node.js loader semantics and translate that into something browser-compatible. (The actual implementation in the demo is… as stupid as possible.) |
Let me know if the meeting is still desirable. |
Hope to see what y'all come up with! The idea of an |
@iarna @zkat This is Kevin from the Polymer team. Sorry we let this thread go stale, it's that time of year. We've been discussing the npm assets proposal a lot recently and really like the direction and would like to help move the conversation/POC forward. We'd be up for joining a meeting/conference call if that's the next step -- happy to organize the logistics if that helps. FWIW, I did a little hacking on the assetize POC to implement a couple missing features noted in the comments ("loading dirs and having it find |
I really like the idea of having frontend/assets dependencies (flat) and backend/node dependencies separate. Awesome work :)
what do you think? |
@justinfagnani I'm around |
Relevant update from the web loading side of things: We just published a proposal to support bare specifiers on the web: https://github.com/domenic/package-name-maps This is done via a file that maps names to URLs. One goal is that package managers could manage this file during installation. |
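For a rough sense of the shape (a sketch only -- the proposal's exact format was still in flux, and it later evolved into import maps keyed by `imports`; the package and paths here are illustrative):

```json
{
  "imports": {
    "lit-html": "/node_modules/lit-html/lit-html.js",
    "lit-html/": "/node_modules/lit-html/"
  }
}
```

A package manager that wrote this file at install time would let `import 'lit-html'` work in the browser with no source rewriting at all.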
@justinfagnani has there been progress on such a postinstall script? if so it's either well hidden or I'm just blind 🙈 |
friendly ghost from the past is taking a look and wondering if anything changed... |
I've been following the journey of ES6 modules for many years, and it seems like things are coming full circle back to the AMD pattern in RequireJS. I do not want to sound like a fan-boi and whine about the way things used to be. However, I think there are things that AMD got right that should be considered for this discussion:

**Module IDs are not file paths**

I died a little bit when I first saw the ES6 import syntax that let you do something like:
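The sort of path-based import in question looks like this (a reconstructed example -- the file names are invented):

```js
// Importing a module by raw file path:
import { someHelper } from './lib/utils/helpers.js';
```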
Allowing this meant that people immediately thought that it's just as simple as pointing to a file on disk, when the reality is that once you get into packaging and dependencies, there's no way you can just put a raw filepath in the import directive. They never should have allowed this from the beginning. To put a point on this, is it obvious that there's a problem trying to share code like this:
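To make the sharing problem concrete, here is a reconstructed illustration (package and file names invented) of a module reaching across packages by raw path:

```js
// Inside my-element.js -- this import only works if some-dep happens to be
// installed exactly two directories up, as a sibling package:
import { helper } from '../../some-dep/src/helper.js';
```

The moment this file is consumed from a project with a different install layout, the path no longer resolves.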
**The 'ceremony' of module declaration**

Another thing that I think AMD got right was that modules should be bound by some lexical scope in code, not the bounds of a file. Critics dismissed this as cumbersome ceremony:
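The AMD "ceremony" being referred to (a representative example, not the original snippet from this comment):

```js
// An AMD module: dependencies and the module body are explicit, and the
// module is bounded by the define() call, not by the file.
define(['jquery', 'underscore'], function ($, _) {
  return {
    render: function (el) {
      $(el).text(_.escape('hello'));
    }
  };
});
```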
So, the approach was cast aside, and now we have .mjs file extensions. Who really 'won' here? And does anyone else notice the resemblance between the module definition syntax used in AMD and the dynamic imports in ES6?
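Side by side, the resemblance looks something like this (illustrative module names):

```js
// AMD: module IDs plus a callback that receives the loaded modules.
require(['myModule'], function (myModule) {
  myModule.render(document.body);
});

// ES6 dynamic import: a specifier plus a promise that resolves to the module.
import('myModule').then(function (myModule) {
  myModule.render(document.body);
});
```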
**Loader configuration was part of the solution**

So, the above points were a bit of a rant, and I apologize, but to get to the crux of this particular thread: RequireJS tried to solve the problem of different dependency module resolution via configuration. Consider the case described earlier in the thread where Module A wants someLib 1.9 and Module B wants 2.3, but your main application wants to use the 3.0 version because it is cool. In the RequireJS world, you'd set up paths to locate modules, and then maps to resolve special dependency conflicts in module loading. Example:
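A reconstructed RequireJS configuration for the scenario above (the paths and module names are invented; `paths` and `map` are RequireJS's real mechanisms):

```js
requirejs.config({
  paths: {
    'someLib-1.9': 'lib/someLib-1.9',
    'someLib-2.3': 'lib/someLib-2.3',
    'someLib-3.0': 'lib/someLib-3.0'
  },
  map: {
    // Every module gets 3.0 by default...
    '*': { 'someLib': 'someLib-3.0' },
    // ...except the two modules pinned to the versions they were written for.
    'moduleA': { 'someLib': 'someLib-1.9' },
    'moduleB': { 'someLib': 'someLib-2.3' }
  }
});
```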
What made this difficult is that all of this was manually curated, and the developer working at the root level of the application needed to understand the entire dependency graph to know which modules needed to be directed to which physical module location. There was never any official tooling to support this; Node came along and 'won', and so we're left with a dependency resolution algorithm that is ill-suited for the web. But, as others have pointed out: we do have package.json, which declares the 'intended' module dependencies, and package-lock.json, which tells you where the actual dependencies are. It seems that this information could be used to generate the appropriate loader config so that the right modules are loaded for the proper dependencies. I'm hoping that the enthusiasm that this thread had at the beginning of the year can continue to carry through to a full-blown implementation. So, to echo @daKmoR, please continue to push this effort forward. Having a standardized module dependency declaration syntax that would enable an 'add dependency, refresh map, reload browser' workflow for web applications would be really, really awesome. |
It's worth noting that:
Is not a file path in ES6. This kind of specifier, one that begins with a module name, is currently undefined behavior in the specification. Various implementations have taken different routes with this. My expectation is that resolution will be addressed in two parts. First, something like package name maps will give browsers a way to statically map names to resources. Second, loader hooks will allow customization of how mapping of names to resources occurs, and control where and how resources are loaded (e.g., from a bundle, from the network, from some entirely novel source). Your observation regarding lock-files' similarity to mappings is one we (at npm) have had too. The first iteration of tink used a variation on the package name maps proposal. (It now works directly from the lock-file.) And I expect that generating maps directly from package managers will be a central feature in the future. This also vastly simplifies the life of bundlers, as they can just read the map instead of having to walk the node_modules folders themselves. |
Thanks for the feedback @iarna, and I've corrected my previous post to put the Package name maps sound like a great idea, and solve the issue where the browser can't poll a webserver to locate resources: the resources will be known up front, and the dynamic loader can decide which order to pull them in. Hope it materializes soon! |
To clarify, to make it be a file path you have to lead with `/`, `./`, or `../`.
That's another difference with AMD module IDs which I think should be considered here: the module references are always module IDs, and the 'relative path' in a module identifier is relative to the current module. Let me give a concrete example: you have a top-level package 'myPackage', with 3 sub-paths in it: Common, ModuleA, ModuleB. Within ModuleA, you have a file modAService (module ID: ModuleA/modAService) which wants to use something in the package's Common:
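A reconstructed sketch of that module (the original snippet is not preserved here; the `util` API is invented):

```js
// Module ID: 'ModuleA/modAService'. The '../Common/util' reference is a
// module ID relative to the current module ID -- it resolves to
// 'Common/util', not to a file path on disk.
define(['../Common/util'], function (util) {
  return {
    doWork: function () {
      return util.format('working');
    }
  };
});
```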
Likewise, within a module inside ModuleB
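A matching reconstructed sketch for ModuleB:

```js
// Module ID: 'ModuleB/modBService'. The same relative ID resolves to the
// same 'Common/util' module, so both sub-modules share one copy.
define(['../Common/util'], function (util) {
  return {
    report: function () {
      return util.format('reporting');
    }
  };
});
```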
When someone wants to use something out of 'myPackage', they could set up a dependency in their application on 'myPackage' (npm install myPackage) and then get sub-modules from it via:
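For example (a reconstructed sketch; the sub-module ID is invented):

```js
// The consumer addresses sub-modules by ID, rooted at the package name:
require(['myPackage/ModuleA/modAService'], function (modAService) {
  modAService.doWork();
});
```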
So, I don't want to derail the discussion about package maps and the problem that they'd solve, but just wanted to understand what these maps are doing: mapping the module identifier to a resource. The reason why I don't think of the |
In what way do you think this isn't how es6 bare identifiers would work? That's how they work in all existing implementations, afaik… |
Right, bare identifiers would work this way; I was just wishing that the only identifiers supported were bare identifiers, forcing the issue that you need to map identifiers to resources via some standardized configuration which would be interpreted either in the browser or by bundlers, etc. For me, I've avoided implementing ES6-style modules until bare identifiers are supported in the standard... hence I really want to see this feature moved through the standardization process as quickly as possible. But the use of URLs/file paths is so common; that ship has sorta sailed. Maybe the 'best practice' will evolve to be 'always use bare identifiers', and in the future they'll drop support for URL-based identifiers. Without bare identifiers + mapping, how would dependency conflicts be resolved, or de-duplication of dependencies be handled? |
@notwaldorf and @zkat might have some more context here, but I heard that npm was thinking about ways to make things better for web development. I'd love to hear what ideas are being discussed and contribute some of Polymer's use-cases and requirements if that's helpful.
Polymer's in the middle of a transition to npm from Bower. Mostly things are great except a few complications that arise out of our requirement to publish modules that work natively on the web.
Currently we're recommending the use of Yarn because it supports flat installations. We need flat installs because web-standard JavaScript modules can only import other modules by path. This means that to import a dependency, even cross-package, we need to know exactly where that module lives relative to the importer in filesystem/URL space. Thus Yarn and:

```sh
yarn install --flat
```

This mostly works, except that many tools can't be properly installed flat. To work around that, we've been structuring projects with a `tools/` subfolder with its own package.json. Then we need to wire up scripts so that tools are installed when working on the top-level project, `npm run` works well, etc.

There's also a problem with just client-side packages when some have version conflicts among dependencies and can't be installed flat. While our packages really do need to be flat, not every package does, but flat in Yarn applies to an entire node_modules folder.
A few things could possibly help here:

1. Multiple installation targets. npm installs `dependencies` and `devDependencies`. Multiple targets would allow us to put client-side dependencies in a separate configuration (which would hopefully be flat installed), so that tools, server-side dependencies, etc. would be installed separately. Ex: we could define a `webDependencies` stanza which installed its dependencies to a `web_modules` folder.
2. Allowing `flat` on a per-dependency basis.

Those are just a few ideas we've had; I'm sure there are a lot of other ways to slice this problem. We're very motivated to help, not only to make our users' experience better, but to help create a relatively standard way to install web-compatible packages so that Polymer isn't going off on some unique workflow.
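A sketch of what the `webDependencies` idea might look like in package.json (a purely hypothetical stanza with invented version numbers -- npm has no such field):

```json
{
  "name": "my-app",
  "dependencies": {
    "express": "^4.16.0"
  },
  "devDependencies": {
    "eslint": "^4.0.0"
  },
  "webDependencies": {
    "@polymer/polymer": "^3.0.0",
    "lit-html": "^1.0.0"
  }
}
```

Under this sketch, `dependencies` and `devDependencies` would keep installing nested into node_modules as usual, while everything under `webDependencies` would be installed flat into a `web_modules/` folder that the browser can load by path.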