[CANN] Add Ascend NPU backend #6035
Conversation
For those struggling to figure out what CANN is: https://support.huaweicloud.com/intl/en-us/usermanual-cce/cce_10_0239.html
Great!
(force-pushed 65a4236 to 5fec9cb)
Good news! @ggerganov @slaren @phymbert, the most basic functions for this new backend are ready for review now. Using CUDA's implementation as a reference, the basic functions of this backend are working. I added some GGML_OPs (which are built into the CANN package) and they pass the tests (test-backend-ops). More features will be submitted in independent PRs later, including:
Considering that Ascend NPU is not so easy to obtain, here are my screenshots of compilation and testing (I have two NPUs at hand):
I cannot comment on the CANN code, but the changes to the common files look good. However, I am not sure that there is any reason to merge a non-functional backend, especially considering that it is for hardware that does not seem to be publicly available. Currently, this backend does not seem to implement matrix multiplication.
Thank you very much for your review. Yes, this PR has not implemented all the features yet. Currently, only device access and some operators to verify these basic functionalities have been implemented. More operators are still under development; mat-mul is also in progress, and since it relies on quantization, it will be implemented after quantization is done. Ascend NPU is publicly available hardware that can be purchased, or used in virtual machines on Huawei Cloud. In China, Ascend NPU already has a considerable user base, especially among Chinese internet companies, many of which have already used Ascend NPU to build AI training or inference platforms. Due to high demand and limited production capacity, it may not be as convenient for individual developers to purchase an Ascend NPU. However, I am very willing to donate an Ascend NPU machine to the llama.cpp community for running CI and other validation work. Currently, many popular AI projects support Ascend NPU as a hardware backend, such as PyTorch (through PrivateUse1), DeepSpeed, OpenCV, stable-diffusion-webui, and diffusers. Additionally, many other projects are in development. We believe that llama.cpp is an excellent large language model inference engine, so we hope to prioritize its adaptation and attract more Ascend developers and users. I agree not to merge this non-functional backend now, but to wait until all the main features have been implemented. Thanks.
If there is a dedicated node with the necessary hardware, adding it to ggml-ci is a relatively simple task. It will run a collection of unit and integration tests on each commit and it will make integration much smoother. I can either send configuration instructions, or if I can get SSH access I can log in directly and set it up. Let me know.
Sure. I will.
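For context, ggml-ci jobs are driven by the repo's ci/run.sh script; a node-local run on the CANN machine would look roughly like the sketch below. The GG_BUILD_CANN toggle is a hypothetical placeholder here, named after the existing GG_BUILD_* flags described in ci/README.md.

```sh
# Minimal sketch of a node-local ggml-ci run (per ci/README.md conventions).
# GG_BUILD_CANN is hypothetical; real flags follow the GG_BUILD_* pattern.
mkdir -p tmp
GG_BUILD_CANN=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
```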
ggml-cann/aclnn_ops.cpp (outdated):

        aclnn_permute(ctx, tmp_im2col_tensor, acl_dst, permute_dim, 3, dst);
    }
    aclrtSynchronizeStream(ctx.stream());
There is no need to sync here, because we don't need to fetch the data back at this point.
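A minimal sketch of the point being made, assuming the standard ACL runtime API: work enqueued on one stream executes in submission order, so back-to-back device ops need no sync; a sync is only required when the host must observe the result, e.g. right after a device-to-host copy.

```cpp
#include <acl/acl.h>

// Sketch: kernels on the same ACL stream run in submission order, so no sync
// is needed between consecutive device ops. Synchronize only before the host
// reads the data back.
void copy_result_to_host(void * host_dst, const void * dev_src,
                         size_t size, aclrtStream stream) {
    // enqueue an async device-to-host copy behind all prior work on the stream
    aclrtMemcpyAsync(host_dst, size, dev_src, size,
                     ACL_MEMCPY_DEVICE_TO_HOST, stream);
    // block until the copy (and everything queued before it) has completed
    aclrtSynchronizeStream(stream);
}
```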
(force-pushed 5b01aa6 to f1bde5d)
I failed to run models with this branch, with:
This bug is due to not initializing CANN before using it. The latest version has fixed this.
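For illustration, a minimal sketch of the initialization order the ACL runtime expects: aclInit must run once per process before any other ACL call.

```cpp
#include <acl/acl.h>

// Sketch: required process-level setup before any CANN/ACL runtime call.
int main() {
    aclInit(nullptr);      // once per process; nullptr = default config
    aclrtSetDevice(0);     // bind this thread to NPU device 0

    // ... create streams, allocate device buffers, run operators ...

    aclrtResetDevice(0);   // release device resources
    aclFinalize();         // tear down the ACL runtime
    return 0;
}
```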
@hipudding Great work. I have a server with 8 × 910B; can I test this PR on the 910B?
Yes, you can test operators on the 910B, but it can't run LLM inference yet.

    mkdir build
    ./bin/test-backend-ops test -b CANN0 -o {OP_NAME}
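For example, to exercise a single operator on the first CANN device (ADD is just an illustrative operator name):

```sh
./bin/test-backend-ops test -b CANN0 -o ADD
```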
Ascend is a full-stack AI computing infrastructure for industry applications and services based on Huawei Ascend processors and software. CANN (Compute Architecture of Neural Networks), developed by Huawei, is a heterogeneous computing architecture for AI.

Co-authored-by: wangshuai09 <[email protected]>
ggml/include/ggml-cann.h (outdated):

    /**
     * @def GGML_CANN_NAME
     * @brief Define for the name of the CANN backend.
     */
    #define GGML_CANN_NAME "CANN"
This is probably not necessary on this backend. A similar macro is used in the CUDA backend only to differentiate between CUDA and HIP.
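For context, a rough sketch of the CUDA-backend pattern being referred to: the macro's only job there is to switch the reported name between CUDA and HIP builds (exact guard names may differ).

```cpp
// Sketch of the CUDA-backend pattern: one macro, two possible display names
// depending on the build configuration.
#if defined(GGML_USE_HIPBLAS)
#define GGML_CUDA_NAME "ROCm"
#else
#define GGML_CUDA_NAME "CUDA"
#endif
```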
I have deleted it. But I need to hard-code "CANN" in my code, mainly for printing logs.
And this backend name will never change. So should I use a macro or hard-code it?
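One hedged alternative, since the name never changes: spell "CANN" exactly once as a file-local constant and let every log call reuse it (the helper below is hypothetical).

```cpp
#include <cstdio>

// Hypothetical sketch: hard-code the backend name in a single place so log
// statements stay consistent without a public macro.
static const char * const BACKEND_NAME = "CANN";

static void cann_log_info(const char * msg) {
    fprintf(stderr, "%s backend: %s\n", BACKEND_NAME, msg);
}
```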
Currently, running the same prompt on the CPU and on the NPU gives the following results.

# CPU Result
warning: not compiled with GPU offload support, --gpu-layers option will be ignored
warning: see main README.md for information on enabling GPU BLAS support
Log start
main: build = 3401 (f8c345d5)
main: built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for aarch64-linux-gnu
main: seed = 1024
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /home/wangshuai/models/hermes_gguf/Hermes-2-Pro-Llama-3-8B-F16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Hermes-2-Pro-Llama-3-8B
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 1
llama_model_loader: - kv 11: llama.vocab_size u32 = 128288
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128288] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128288] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128003
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 21: tokenizer.chat_template str = {{bos_token}}{% for message in messag...
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 226 tensors
llm_load_vocab: special tokens cache size = 288
llm_load_vocab: token to piece cache size = 0.8007 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128288
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 14.96 GiB (16.00 BPW)
llm_load_print_meta: general.name = Hermes-2-Pro-Llama-3-8B
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128003 '<|im_end|>'
llm_load_print_meta: PAD token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128003 '<|im_end|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 15317.52 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 256
llama_new_context_with_model: n_batch = 256
llama_new_context_with_model: n_ubatch = 256
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 32.00 MiB
llama_new_context_with_model: KV self size = 32.00 MiB, K (f16): 16.00 MiB, V (f16): 16.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.49 MiB
ggml_gallocr_reserve_n: reallocating CPU buffer from size 0.00 MiB to 129.28 MiB
llama_new_context_with_model: CPU compute buffer size = 129.28 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
system_info: n_threads = 192 / 192 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 256, n_batch = 2048, n_predict = -1, n_keep = 1
how to build a website in 10 steps: a beginner's guide
Building a website can feel like a daunting task if you’ve never done it before. But with the right guidance and tools, it’s possible to create a professional-looking site even if you don’t have any coding experience. Here’s a step-by-step guide to building a website in 10 easy steps.
1. Choose a web hosting provider
To build a website, you first need to find a web hosting provider. This is the company that will store your website files and make your site available to visitors on the internet. There are many web hosting providers to choose from, but we recommend using Bluehost. They are one of the largest and most reliable web hosting providers, and they offer a user-friendly platform that makes it easy to build a website.
2. Select a domain name
Once you have chosen a web hosting provider, you will need to select a domain name for your website. This is the URL that people will use to visit your site, such as [www.yourwebsite.com](http://www.yourwebsite.com/). When selecting a domain name, choose something that is easy to remember and relevant to your site’s content.
3. Install WordPress
WordPress is a content management system (CMS) that makes it easy to create and manage a website. It is the most popular CMS in the world, powering over 30% of all websites. Bluehost makes it easy to install WordPress with just one click.
4. Choose a theme
After installing WordPress, you will need to choose a theme for your website. A theme is a set of pre-designed templates and styles that determine the look and feel of your site. There are thousands of free and premium themes available for WordPress, so you can find one that suits your needs and preferences.
5. Customize your website
Once you have chosen a theme, you can customize your website by adding pages, posts, images, and other content. You can also change the colors, fonts, and other design elements to make your site unique. There are also many plugins available for WordPress that can add additional functionality to your site, such as social media integration, contact forms, and e-commerce capabilities.
6. Publish and promote your website
After customizing your website, you can publish it and start promoting it to attract visitors and potential customers. You can use search engine optimization (SEO) techniques to improve your site's ranking in search engine results, as well as social media and other marketing strategies to drive traffic to your site. With time and effort, your website can become a valuable asset for your business or personal brand.
7. Update and maintain your website
Finally, it's essential to regularly update and maintain your website to keep it relevant and engaging for your audience. This includes keeping your content fresh, fixing any technical issues that arise, and staying up-to-date with the latest trends and best practices in web design and development. By investing in the ongoing maintenance and improvement of your website, you can ensure that it continues to serve its purpose effectively and effectively meets the needs of your users.
Overall, creating a website requires time, effort, and expertise, but the rewards can be significant. With a well-designed and well-maintained website, you can establish a strong online presence, reach a wider audience, and achieve your business or personal goals. Whether you're building a website for the first time or updating an existing one, following these steps can help you create a professional, effective, and user-friendly website that delivers results. # What is the best website builder for a small business? # What are the best website builders for small businesses? # What are the top website builders for small businesses? # What is the best website builder for small businesses?

# NPU Result
warning: not compiled with GPU offload support, --gpu-layers option will be ignored
warning: see main README.md for information on enabling GPU BLAS support
Log start
main: build = 3329 (ef676b0a)
main: built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for aarch64-linux-gnu
main: seed = 1024
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /home/wangshuai/models/hermes_gguf/Hermes-2-Pro-Llama-3-8B-F16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Hermes-2-Pro-Llama-3-8B
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 1
llama_model_loader: - kv 11: llama.vocab_size u32 = 128288
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128288] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128288] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128003
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 21: tokenizer.chat_template str = {{bos_token}}{% for message in messag...
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 226 tensors
llm_load_vocab: special tokens cache size = 288
llm_load_vocab: token to piece cache size = 0.8007 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128288
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 14.96 GiB (16.00 BPW)
llm_load_print_meta: general.name = Hermes-2-Pro-Llama-3-8B
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128003 '<|im_end|>'
llm_load_print_meta: PAD token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128003 '<|im_end|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: CPU buffer size = 15317.52 MiB
llm_load_tensors: CANN0 buffer size = 13313.00 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 256
llama_new_context_with_model: n_batch = 256
llama_new_context_with_model: n_ubatch = 256
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CANN0 KV buffer size = 32.00 MiB
llama_new_context_with_model: KV self size = 32.00 MiB, K (f16): 16.00 MiB, V (f16): 16.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.49 MiB
llama_new_context_with_model: CANN0 compute buffer size = 1131.53 MiB
llama_new_context_with_model: CPU compute buffer size = 4.25 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 4
system_info: n_threads = 192 / 192 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 256, n_batch = 2048, n_predict = -1, n_keep = 1
how to build a website in 10 steps: a beginner's guide
Building a website can feel like a daunting task if you’ve never done it before. But with the right guidance and tools, it’s possible to create a professional-looking site even if you don’t have any coding experience. Here’s a step-by-step guide to building a website in 10 easy steps.
1. Choose a web hosting provider
To build a website, you first need to find a web hosting provider. This is the company that will store your website files and make your site available to visitors on the internet. There are many web hosting providers to choose from, but we recommend using Bluehost. They are one of the largest and most reliable web hosting providers, and they offer a user-friendly platform that makes it easy to build a website.
2. Select a domain name
Once you have chosen a web hosting provider, you will need to select a domain name for your website. This is the URL that people will use to visit your site, such as [www.yourwebsite.com](http://www.yourwebsite.com/). When selecting a domain name, choose something that is easy to remember and relevant to your site’s content.
3. Install WordPress
WordPress is a content management system (CMS) that makes it easy to create and manage a website. It is the most popular CMS in the world, powering over 30% of all websites. Bluehost makes it easy to install WordPress with just one click.
4. Choose a theme
After installing WordPress, you will need to choose a theme for your website. A theme is a set of pre-designed templates and styles that determine the look and feel of your site. There are thousands of free and premium themes available for WordPress, so you can find one that suits your needs and preferences.
5. Customize your website
Once you have chosen a theme, you can customize your website by adding pages, posts, images, and other content. You can also change the colors, fonts, and other design elements to make your site unique. There are also many plugins available for WordPress that can add additional functionality to your site, such as social media integration, contact forms, and e-commerce capabilities.
6. Publish and promote your website
After customizing your website, you can publish it and start promoting it to attract visitors and potential customers. You can use search engine optimization (SEO) techniques to improve your site's ranking in search engine results, as well as social media and other marketing strategies to drive traffic to your site. With time and effort, your website can become a valuable asset for your business or personal brand.
7. Update and maintain your website
Finally, it's essential to regularly update and maintain your website to keep it relevant and engaging for your audience. This includes keeping your content fresh, fixing any technical issues that arise, and staying up-to-date with the latest trends and best practices in web design and development. By investing in the ongoing maintenance and improvement of your website, you can ensure that it continues to serve its purpose effectively and effectively meets the needs of your users.
Overall, creating a website requires time, effort, and expertise, but the rewards can be significant. With a well-designed and well-maintained website, you can establish a strong online presence, reach a wider audience, and achieve your business or personal goals. Whether you're building a website for the first time or updating an existing one, following these steps can help you create a professional, effective, and user-friendly website that delivers results. # What is the best website builder for a small business? # What are the best website builders for small businesses? #
Is there any output speed information? Thanks!
@elcky FP16 8B, about 6–8 tokens/s.
Thanks for your great work, but this speed seems a bit slow. On our 910B device, testing Qwen-14B based on
Yes, it's slow now. This is the first version that works with llama.cpp, and there's a lot to do to improve performance. We will continue working on performance, quantization, more models, tensor splitting, etc.
Qwen2-7B: 12 tokens/s. It only provides a GGUF model for 7B.
@ggerganov Please trigger CI again. I just made a new commit for logging.
The last CI run failed because of trailing whitespace. It has been fixed.
About CI machines: for better resource utilization, we decided to add a bot that can run CI rather than adding the machine as a GitHub runner. And of course, we will also provide access to members of this project and to developers of the CANN backend, to develop or verify with the CANN backend. This work is in progress; I will add the bot and the access once it is finished.
* [CANN] Add Ascend NPU backend

  Ascend is a full-stack AI computing infrastructure for industry applications and services based on Huawei Ascend processors and software. CANN (Compute Architecture of Neural Networks), developed by Huawei, is a heterogeneous computing architecture for AI.

  Co-authored-by: wangshuai09 <[email protected]>

* Delete trailing whitespaces
* Modify the code based on review comments
* Rename LLAMA_CANN to GGML_CANN
* Make ggml-common.h private
* Add ggml_cann prefix for acl funcs
* Add logging for CANN backend
* Delete trailing whitespace

---------
Co-authored-by: wangshuai09 <[email protected]>
Ascend is a full-stack AI computing infrastructure for industry
applications and services based on Huawei Ascend processors and
software.
CANN (Compute Architecture of Neural Networks), developed by
Huawei, is a heterogeneous computing architecture for AI.
This commit adds Ascend NPU as a new backend.
@sa #6034