4 files changed, +20 -18 lines changed
@@ -28,8 +28,9 @@ frontends and hardware backends.
 
 ## Contribute
 
-If you'd like to contribute to XLA, review [How to Contribute](CONTRIBUTING.md)
-and then see the [developer guide](docs/developer_guide.md).
+If you'd like to contribute to XLA, review
+[How to Contribute](docs/contributing.md) and then see the
+[developer guide](docs/developer_guide.md).
 
 ## Contacts
 
@@ -44,4 +45,3 @@ and then see the [developer guide](docs/developer_guide.md).
 While under TensorFlow governance, all community spaces for SIG OpenXLA are
 subject to the
 [TensorFlow Code of Conduct](https://github.com/tensorflow/tensorflow/blob/master/CODE_OF_CONDUCT.md).
-
 
 This document describes how to build XLA components.
 
-If you did not clone the XLA repository or install Bazel, please check out the
-"Get started" section of the README document.
+If you did not clone the XLA repository or install Bazel, check out the initial
+sections of the [XLA Developer Guide](developer_guide.md).
 
 ## Linux
 
@@ -66,7 +66,7 @@ docker exec xla_gpu bazel build --test_output=all --spawn_strategy=sandboxed //x
 ```
 
 For more details regarding
-[TensorFlow's GPU docker images you can check out this document.](https://www.tensorflow.org/install/source#gpu_support_3)
+[TensorFlow's GPU docker images you can check out this document.](https://www.tensorflow.org/install/source#gpu_support_2)
 
 You can build XLA targets with GPU support without Docker as well. Configure and
 build targets using the following commands:
@@ -78,4 +78,4 @@ bazel build --test_output=all --spawn_strategy=sandboxed //xla/...
 ```
 
 For more details regarding
-[hermetic CUDA you can check out this document.](docs/hermetic_cuda.md)
+[hermetic CUDA you can check out this document.](hermetic_cuda.md)
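Taken together, the non-Docker GPU build flow referenced in these hunks reduces to a configure step followed by the `bazel build` invocation shown above. The sketch below only prints the commands rather than running them (a real build needs Bazel and a CUDA toolchain); the `configure.py` invocation is an assumption, since the exact configure command is elided between the hunks in this diff:

```shell
# Dry-run sketch of the non-Docker GPU build flow. Commands are echoed,
# not executed. The configure flags are an assumption -- check the
# configure script in the XLA repo for the real interface.
set -eu
CONFIGURE="./configure.py --backend=CUDA"   # assumed flag name
BUILD="bazel build --test_output=all --spawn_strategy=sandboxed //xla/..."
printf '%s\n' "$CONFIGURE" "$BUILD"
```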
@@ -4,7 +4,7 @@ This guide shows you how to get started developing the XLA project.
 
 Before you begin, complete the following prerequisites:
 
-1. Go to [CONTRIBUTING.md](../CONTRIBUTING.md) and review the contribution
+1. Go to [Contributing page](contributing.md) and review the contribution
    process.
 2. If you haven't already done so, sign the
    [Contributor License Agreement](https://cla.developers.google.com/about).
@@ -19,14 +19,20 @@ the repository, and create a pull request.
 
 1. Create a fork of the [XLA repository](https://github.com/openxla/xla).
 2. Clone your fork of the repo, replacing `<USER>` with your GitHub username:
-   ```sh
-   git clone https://github.com/<USER>/xla.git
-   ```
+   <pre class="devsite-click-to-copy">
+   <code class="devsite-terminal">
+   git clone https://github.com/<USER>/xla.git
+   </code>
+   </pre>
+
 3. Change into the `xla` directory: `cd xla`
+
 4. Configure the remote upstream repo:
-   ```sh
-   git remote add upstream https://github.com/openxla/xla.git
-   ```
+   <pre class="devsite-click-to-copy">
+   <code class="devsite-terminal">
+   git remote add upstream https://github.com/openxla/xla.git
+   </code>
+   </pre>
 
 ## Set up an environment
 
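Stripped of the devsite markup, the fork-setup steps in the hunk above amount to the following shell sequence. This sketch only prints the commands (a real run needs `git` and network access), and `your-github-username` is a stand-in for the `<USER>` placeholder:

```shell
# Dry-run of the fork-setup steps: clone your fork, enter it, and add the
# canonical repo as the "upstream" remote. Commands are echoed, not run.
set -eu
USER_NAME="your-github-username"   # stand-in for the <USER> placeholder
ORIGIN="https://github.com/${USER_NAME}/xla.git"
UPSTREAM="https://github.com/openxla/xla.git"
printf '%s\n' \
  "git clone ${ORIGIN}" \
  "cd xla" \
  "git remote add upstream ${UPSTREAM}"
```

With `upstream` configured, you can later sync your fork via `git fetch upstream` followed by a merge or rebase onto `upstream/main`.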
@@ -4,10 +4,6 @@ XLA (Accelerated Linear Algebra) is an open-source compiler for machine
 learning. The XLA compiler takes models from popular frameworks such as PyTorch,
 TensorFlow, and JAX, and optimizes the models for high-performance execution
 across different hardware platforms including GPUs, CPUs, and ML accelerators.
-For example, in a
-[BERT MLPerf submission](https://blog.tensorflow.org/2020/07/tensorflow-2-mlperf-submissions.html),
-using XLA with 8 Volta V100 GPUs achieved a ~7x performance improvement and ~5x
-batch-size improvement compared to the same GPUs without XLA.
 
 As a part of the OpenXLA project, XLA is built collaboratively by
 industry-leading ML hardware and software companies, including