Releases: withcatai/node-llama-cpp

v3.5.0

31 Jan 01:09
63a1066

3.5.0 (2025-01-31)

Bug Fixes

  • add missing Jinja features for DeepSeek (#425) (6e4bf3d)

Shipped with llama.cpp release b4600

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.4.3

30 Jan 22:52
6e4bf3d

3.4.3 (2025-01-30)


Shipped with llama.cpp release b4599


v3.4.2

27 Jan 19:17
314d7e8

3.4.2 (2025-01-27)


Shipped with llama.cpp release b4567


v3.4.1

23 Jan 19:30
86e1bee

3.4.1 (2025-01-23)


Shipped with llama.cpp release b4529


v3.4.0

08 Jan 00:33
d1b4416

3.4.0 (2025-01-08)

Bug Fixes

  • check for Rosetta usage on macOS x64 when using the inspect gpu command (#405) (632a7bf)
  • detect running under Rosetta on Apple Silicon and show an error message instead of crashing (#405) (632a7bf)
  • switch from "nextTick" to "nextCycle" for the default batch dispatcher (#405) (632a7bf)
  • remove deprecated CLS token (#405) (632a7bf)
  • pipe error logs in inspect gpu command (#405) (632a7bf)

Shipped with llama.cpp release b4435


v3.3.2

27 Dec 22:18
e2c5c3f

3.3.2 (2024-12-27)


Shipped with llama.cpp release b4291


v3.3.1

09 Dec 00:33
6a54163

3.3.1 (2024-12-09)

Bug Fixes

  • align embedding input with WPM vocabulary type models (#393) (28c7984)

Shipped with llama.cpp release b4291


v3.3.0

02 Dec 20:32
4d387de

3.3.0 (2024-12-02)

Bug Fixes

  • improve binary compatibility testing on Electron apps (#386) (97abbca)
  • too many abort signal listeners (#386) (97abbca)
  • adjust the log level of some lower-level logs (#386) (97abbca)
  • context window missing response during generation under specific extreme conditions (#386) (97abbca)
  • adapt to breaking llama.cpp changes (#386) (97abbca)
  • automatically resolve the "compiler is out of heap space" CUDA build error (#386) (97abbca)

Features

  • Llama 3.2 3B function calling support (#386) (97abbca)
  • use llama.cpp backend registry for GPUs instead of custom implementations (#386) (97abbca)
  • getLlama: build: "try" option (#386) (97abbca)
  • init command: --model flag (#386) (97abbca)
  • JSON Schema grammar: array prefixItems, minItems, maxItems support (#388) (4d387de)
  • JSON Schema grammar: object additionalProperties, minProperties, maxProperties support (#388) (4d387de)
  • JSON Schema grammar: string minLength, maxLength, format support (#388) (4d387de)
  • JSON Schema grammar: improve inferred types (#388) (4d387de)
  • function calling: params description support (#388) (4d387de)
  • function calling: document JSON Schema type properties on Functionary chat function types (#388) (4d387de)
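The expanded JSON Schema grammar support listed above can be sketched with a small schema. This is an illustrative example, not from the release notes: the field names are hypothetical, while the keywords themselves (`prefixItems`, `minItems`, `maxItems`, `additionalProperties`, `minLength`, `maxLength`, `format`) are standard JSON Schema.

```typescript
// A JSON Schema exercising the grammar keywords added in 3.3.0:
// array prefixItems/minItems/maxItems, object additionalProperties,
// and string minLength/maxLength/format.
const responseSchema = {
    type: "object",
    properties: {
        // string constraints: length bounds and a format
        username: {type: "string", minLength: 3, maxLength: 16},
        email: {type: "string", format: "email"},
        // array constraints: fixed-position item types and a size range
        coordinates: {
            type: "array",
            prefixItems: [{type: "number"}, {type: "number"}],
            minItems: 2,
            maxItems: 2
        }
    },
    required: ["username", "coordinates"],
    // object constraint: reject keys not listed in `properties`
    additionalProperties: false
} as const;
```

A schema like this can then be turned into a grammar (per the node-llama-cpp documentation, via `llama.createGrammarForJsonSchema(...)`) so that generation is constrained to the described shape.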

Shipped with llama.cpp release b4234


v3.2.0

31 Oct 01:39
6405ee9

3.2.0 (2024-10-31)

Bug Fixes

  • Electron crash with some models on macOS when not using Metal (#375) (ea12dc5)
  • adapt to llama.cpp breaking changes (#375) (ea12dc5)
  • support rejectattr in Jinja templates (#376) (ea12dc5)
  • build warning on macOS (#377) (6405ee9)


Shipped with llama.cpp release b3995


v3.1.1

06 Oct 20:32
8145c94

3.1.1 (2024-10-06)

Features

  • minor: reference common classes on the Llama instance (#360) (8145c94)

Shipped with llama.cpp release b3889
