Try to compress better than pngcrush
#444
Comments
Using many tools can approach state-of-the-art. It'd be instructive to try FileOptimizer any way you can — it's the best I've found in the last decade or so, & almost certainly can do even better. |
Thanks, but it's Windows-only. I'm a Linux user. I'm just trying to remove |
Optimum PNG compression is always time-consuming. If you're optimizing filesize for web use, maybe consider WebP or JPEG XL? E.g., the large PNG from your example link above quite quickly becomes a 43 638 670 B lossless WebP. |
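For illustration, a minimal sketch of that kind of conversion, assuming the stock `cwebp` (libwebp) and `cjxl` (libjxl) encoders are installed; the filenames are placeholders:

```sh
# Lossless WebP: -z 9 selects the slowest/strongest lossless preset
cwebp -lossless -z 9 input.png -o output.webp

# Lossless JPEG XL: -d 0 requests mathematically lossless output, -e 9 is high effort
cjxl input.png output.jxl -d 0 -e 9
```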
Also, FO runs fine on ReactOS, which, in turn, runs well on Linux via VirtualBox 5. |
One can also look at File Optimizer's source code as linked from #440 (comment). It looks to me like a big shell script that happens to be written in C++: it works by running other programs on the file to be optimized. One could find the part of the linked |
This would be true if many of the programs it runs weren't Windows-only binaries themselves. In simpler cases, that's actually quite doable, but the PNG toolchain is the most complex & has many such components. |
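For whatever subset of that chain does have Linux builds, the same "run several optimizers, keep the smallest result" idea can be sketched roughly as follows (tool choice and flags here are illustrative, not what FileOptimizer actually runs; it assumes `oxipng` and `zopflipng` are on the PATH):

```sh
#!/bin/sh
# Run two PNG optimizers on copies of the input and keep whichever output is smaller.
set -e
in="$1"
cp "$in" try_oxipng.png
oxipng -o max --strip safe try_oxipng.png   # oxipng optimizes the copy in place
zopflipng -y "$in" try_zopfli.png           # zopflipng writes a separate output file
smallest=$(ls -S try_oxipng.png try_zopfli.png | tail -n 1)
cp "$smallest" "${in%.png}.min.png"
```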
Running already-max-optimized PNGs (optimized with 6.0.1) through the new 7.0.0, I am seeing an additional 2% to 28% decrease. Unfortunately, retesting against your example, I get worse results.
Rerunning the 6.0.1 output through 7.0.0 provides zero new file savings.
|
Maybe try again w/ original input (to avoid local minima) &/or against recent v8.x, while tweaking options further? (This is mostly why I tend to shun PNG altogether nowadays in favor of something more modern, or fire up FO if it's something serious & unavoidable.) |
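Concretely, something along these lines, starting from the untouched original download rather than the 6.0.1/7.0.0 output (filenames are placeholders, and the exact flags may differ slightly between oxipng versions):

```sh
# Max-effort oxipng pass with Zopfli for the final deflate step
oxipng -o max -Z --strip safe --out oxipng_out.png original.png

# pngcrush at brute-force settings on the same original, for comparison
pngcrush -brute -reduce original.png pngcrush_out.png

ls -l oxipng_out.png pngcrush_out.png
```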
It's a libdeflate problem, see ebiggers/libdeflate#241 |
The master branch of libdeflate now compresses the IDAT of |
The improvements were released in libdeflate v1.17. This project uses the Rust bindings https://github.com/adamkewley/libdeflater, so I guess you'll need to first get libdeflater to update libdeflate, then update libdeflater in this project. |
@shssoichiro Is there an ETA for when this updated library will be available in an OxiPNG release? |
@shssoichiro friendly ping to update to the latest libdeflater :) |
Hi all! I have provided a patch with an updated Cargo.toml, where libdeflater and Zopfli are updated. Here is the gist for the test, and you can see that libdeflater 0.13.0 makes a huge difference.
All tests were made with the same file the OP was mentioning. |
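For anyone who wants to reproduce that locally against a checkout carrying the patched Cargo.toml, one possible sequence (the input filename is a placeholder):

```sh
# Build oxipng against the bumped libdeflater, then compare sizes before/after
cargo build --release
cp original.png optimized.png
./target/release/oxipng -o max --strip safe optimized.png
stat -c %s original.png optimized.png
```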
This is great! Thanks for y'all's help! I'll close this when it lands in a release. |
This has landed in #525. |
Still hoping for a full release & associated binaries. |
Alternatively, one could modify the GitHub Actions to create and upload the binaries for each workflow run. |
A PR for this, if possible? 🙇🏾♂️ |
… and more (#534)

As commented in issues #444 and #518, there is some user interest in distributing binaries for each unstable commit and in targeting ARM64 platforms. Personally, I think both suggestions are useful for the project: uploading binary artifacts for each commit might help interested users catch regressions and give feedback earlier, and powerful ARM64 platforms are becoming increasingly popular, with some cloud services (e.g., Amazon EC2, Azure VMs, Oracle Cloud) offering cheaper plans for this hardware, in addition to the well-known push for ARM by Apple with their custom M1 chips.

These changes make ARM64 a first-class target in CI. Because the public GitHub Actions runners can only be hosted on x64 for now, I resorted to cross-compilation, [Debian's multiarch](https://elinux.org/images/d/d8/Multiarch_and_Why_You_Should_Care-_Running%2C_Installing_and_Crossbuilding_With_Multiple_Architectures.pdf), and QEMU to build, fetch ARM64 C library dependencies, and run tests, respectively. When the CI workflow finishes, a release CLI binary artifact is now uploaded, which can be downloaded from the workflow run page on the GitHub web interface.

In addition, these changes introduce some cleanup and miscellaneous improvements to the CI workflow:

- Tests are run using [`nextest`](https://nexte.st/) instead of `cargo test`, which substantially speeds up their execution. (On my development workstation, `cargo test --release` takes around 10.67 s, while `cargo nextest run --release` takes around 6.02 s.)
- The dependencies on unmaintained `actions-rs` actions were dropped in favor of running Cargo commands directly, or using `giraffate/clippy-action` for pretty inline Clippy annotations. This gets rid of the deprecation warnings on each workflow run.
- Most CI steps now run with a nightly Rust toolchain, which allows taking advantage of the latest Clippy lints and codegen improvements. In my experience, when not relying on specific nightly features or compiler internals, Rust does a pretty good job of making it possible to rely on a rolling-release compiler for CI, as breakage is extremely rare and thus offset by the improved features.
- The MSRV check was moved to a separate job with fewer steps, so that it takes less of a toll on total workflow run minutes.

## Pending tasks

- [x] Generate universal macOS binaries with `lipo` (i.e., containing both `aarch64` and `x64` code)
- [x] Tirelessly fix the stupid errors that tend to happen when deploying a new CI workflow for the first time
- [x] Think about what to do with the `deploy.yml` workflow. Should it fetch artifacts from the CI job instead of building them again?
- [x] Maybe bring back 32-bit Windows binaries. Are they actually useful for somebody, or just a way to remember the good old days?

---------

Co-authored-by: Josh Holmer <[email protected]>
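Once per-run artifacts are uploaded, they can also be fetched without the web UI, e.g. with the GitHub CLI; the artifact name below is a guess and depends on what the workflow actually calls it:

```sh
# List recent CI runs, then pull the binary artifact from a chosen run
gh run list --repo shssoichiro/oxipng --limit 5
gh run download <run-id> --repo shssoichiro/oxipng --name <artifact-name>
```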
Now released in v9. |
Currently `pngcrush` at max settings always compresses a little more than `oxipng` at max settings. Source used is “Cosmic Cliffs” in the Carina Nebula (NIRCam and MIRI Composite Image) from webbtelescope.org.