Commit

forum link (#805)
xiaotinghe authored May 8, 2021
1 parent e3d093b commit 52ff6ee
Showing 15 changed files with 27 additions and 27 deletions.
4 changes: 2 additions & 2 deletions chapter_computational-performance/async-computation.md
Original file line number Diff line number Diff line change
@@ -208,9 +208,9 @@ The simplified interactions between the Python frontend thread and the C++ backend thread can be summarized as follows:
:end_tab:

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/361)
+[Discussions](https://discuss.d2l.ai/t/2792)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/2564)
+[Discussions](https://discuss.d2l.ai/t/2791)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_computational-performance/auto-parallelism.md
@@ -181,9 +181,9 @@ with d2l.Benchmark('Run on GPU1 and copy to CPU'):
1. Design computation tasks with more complex data dependencies, and run experiments to see whether you can obtain the correct results while improving performance.
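The exercise above concerns overlapping independent computations while respecting data dependencies. A minimal CPU-only sketch of the idea, using only Python's standard library rather than the chapter's d2l/GPU setup (the tasks and values here are made up for illustration):

```python
# Two independent tasks can run concurrently; a dependent task must
# wait for both of its inputs before it can start.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    a = pool.submit(square, 3)  # independent task 1
    b = pool.submit(square, 4)  # independent task 2
    # dependent task: consumes both results, so it runs only after a and b finish
    c = a.result() + b.result()

print(c)  # 25
```

The same dependency reasoning underlies automatic parallelism in the frameworks: operators without data dependencies are free to execute concurrently.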

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/362)
+[Discussions](https://discuss.d2l.ai/t/2795)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1681)
+[Discussions](https://discuss.d2l.ai/t/2794)
:end_tab:
2 changes: 1 addition & 1 deletion chapter_computational-performance/hardware.md
@@ -214,4 +214,4 @@ GPU memory bandwidth requirements are even higher, since GPUs have far more processing units than CPUs
1. Look at the performance numbers for the Turing T4 GPU. Why does the performance only double when going from FP16 to INT8 and INT4?
1. How long would a round trip from San Francisco to Amsterdam take for a network packet? Hint: you can assume the distance is 10,000 km.
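The round-trip exercise above reduces to back-of-envelope arithmetic. One assumption not stated in the exercise: signals in optical fiber propagate at roughly 2×10⁸ m/s (about two thirds of the speed of light in vacuum). A sketch under that assumption:

```python
# Lower-bound estimate of the round-trip time for the exercise above.
# Assumption (not from the exercise): ~2e8 m/s propagation speed in fiber.
distance_m = 10_000 * 1000        # one-way distance: 10,000 km in meters
fiber_speed = 2e8                 # m/s, assumed signal speed in fiber
one_way_s = distance_m / fiber_speed
round_trip_ms = 2 * one_way_s * 1000
print(f"{round_trip_ms:.0f} ms")  # ~100 ms, ignoring routing and queuing delays
```

Real packets take longer: routers, queuing, and non-geodesic cable paths all add latency on top of this physical lower bound.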

-[Discussions](https://discuss.d2l.ai/t/363)
+[Discussions](https://discuss.d2l.ai/t/2798)
6 changes: 3 additions & 3 deletions chapter_computational-performance/hybridize.md
@@ -380,13 +380,13 @@ net(x)
:end_tab:

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/360)
+[Discussions](https://discuss.d2l.ai/t/2789)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/2490)
+[Discussions](https://discuss.d2l.ai/t/2788)
:end_tab:

:begin_tab:`tensorflow`
-[Discussions](https://discuss.d2l.ai/t/2492)
+[Discussions](https://discuss.d2l.ai/t/2787)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_computational-performance/multiple-gpus-concise.md
@@ -264,9 +264,9 @@ train(net, num_gpus=2, batch_size=512, lr=0.2)
:end_tab:

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/365)
+[Discussions](https://discuss.d2l.ai/t/2804)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1403)
+[Discussions](https://discuss.d2l.ai/t/2803)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_computational-performance/multiple-gpus.md
@@ -357,9 +357,9 @@ train(num_gpus=2, batch_size=256, lr=0.2)
1. Implement multi-GPU computation of the test accuracy.

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/364)
+[Discussions](https://discuss.d2l.ai/t/2801)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1669)
+[Discussions](https://discuss.d2l.ai/t/2800)
:end_tab:
2 changes: 1 addition & 1 deletion chapter_computational-performance/parameterserver.md
@@ -98,4 +98,4 @@ $$\mathbf{g}_{i} = \sum_{k \in \text{workers}} \sum_{j \in \text{GPUs}} \mathbf{
1. Can asynchronous communication be allowed (while computation is still ongoing)? How does it affect performance?
1. What should we do if we lose a server during a long-running computation? How can we design a fault-tolerance mechanism that avoids restarting the computation from scratch?
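The hunk context above shows the aggregate gradient summed over all workers and all GPUs. A toy pure-Python sketch of that double sum (the worker/GPU names and gradient values below are made up for illustration):

```python
# Toy parameter-server aggregation: sum per-GPU gradients over every
# worker and every GPU, as in the double sum in the hunk context above.

# grads[worker][gpu] = gradient vector computed on that worker's GPU
grads = {
    "worker0": {"gpu0": [1.0, 2.0], "gpu1": [0.5, 0.5]},
    "worker1": {"gpu0": [2.0, 1.0], "gpu1": [0.5, 1.5]},
}

def aggregate(grads):
    # infer the gradient dimensionality from any one entry
    dim = len(next(iter(next(iter(grads.values())).values())))
    total = [0.0] * dim
    for worker in grads.values():       # sum over workers k
        for g in worker.values():       # sum over GPUs j
            for i, v in enumerate(g):
                total[i] += v
    return total

print(aggregate(grads))  # [4.0, 5.0]
```

In a real parameter server this sum is computed incrementally as gradient shards arrive over the network, rather than in one pass over an in-memory dict.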

-[Discussions](https://discuss.d2l.ai/t/366)
+[Discussions](https://discuss.d2l.ai/t/2807)
2 changes: 1 addition & 1 deletion chapter_recurrent-modern/beam-search.md
@@ -71,4 +71,4 @@ $$ \frac{1}{L^\alpha} \log P(y_1, \ldots, y_{L}) = \frac{1}{L^\alpha} \sum_{t'=1
1. Apply beam search to the machine translation problem in :numref:`sec_seq2seq`. How does the beam width affect the results and the prediction speed?
1. In :numref:`sec_rnn_scratch`, we used a language model to generate text continuing a user-provided prefix. Which search strategy does it use? Can you improve on it?
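The hunk context above shows beam search's length-normalized score: the sequence log-probability divided by $L^\alpha$. A minimal sketch of why the normalization matters (the token probabilities and $\alpha=0.75$ below are made-up illustration values, not from the chapter):

```python
# Length-normalized sequence score: (1 / L^alpha) * sum of log-probabilities.
import math

def normalized_score(token_probs, alpha=0.75):
    L = len(token_probs)
    return sum(math.log(p) for p in token_probs) / (L ** alpha)

short = [0.5, 0.5]             # 2-token candidate
long = [0.5, 0.5, 0.9, 0.9]    # 4-token candidate with confident extra tokens

# Raw log-probability always penalizes longer sequences (more factors < 1);
# dividing by L^alpha softens that bias toward short outputs.
print(normalized_score(short) < normalized_score(long))  # True
```

Without the `L ** alpha` divisor the short candidate would win here, since its raw log-probability is higher.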

-[Discussions](https://discuss.d2l.ai/t/338)
+[Discussions](https://discuss.d2l.ai/t/2786)
4 changes: 2 additions & 2 deletions chapter_recurrent-modern/bi-rnn.md
@@ -166,9 +166,9 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
1. Polysemy is common in natural language. For example, the word "bank" has different meanings in "i went to the bank to deposit cash" and "i went to the bank to sit down". How can we design a neural network model that, given a context sequence and a word, returns a vector representation of that word in context? Which type of neural architecture is better suited to handling polysemy?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/339)
+[Discussions](https://discuss.d2l.ai/t/2774)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1059)
+[Discussions](https://discuss.d2l.ai/t/2773)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_recurrent-modern/deep-rnn.md
@@ -97,9 +97,9 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
4. When modeling text, should we combine sources from different authors? Why is this a good idea? What can go wrong?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/340)
+[Discussions](https://discuss.d2l.ai/t/2771)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1058)
+[Discussions](https://discuss.d2l.ai/t/2770)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_recurrent-modern/encoder-decoder.md
@@ -121,9 +121,9 @@ class EncoderDecoder(nn.Module):
1. Besides machine translation, can you think of another application where the encoder-decoder architecture would apply?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/341)
+[Discussions](https://discuss.d2l.ai/t/2780)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1061)
+[Discussions](https://discuss.d2l.ai/t/2779)
:end_tab:
2 changes: 1 addition & 1 deletion chapter_recurrent-modern/gru.md
@@ -241,7 +241,7 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
1. What happens if you implement only parts of a GRU, e.g., with only a reset gate or only an update gate?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/342)
+[Discussions](https://discuss.d2l.ai/t/2764)
:end_tab:

:begin_tab:`pytorch`
4 changes: 2 additions & 2 deletions chapter_recurrent-modern/lstm.md
@@ -255,9 +255,9 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
1. Implement an LSTM model for time series prediction rather than character sequence prediction.

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/343)
+[Discussions](https://discuss.d2l.ai/t/2766)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1057)
+[Discussions](https://discuss.d2l.ai/t/2768)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_recurrent-modern/machine-translation-and-dataset.md
@@ -202,9 +202,9 @@ for X, X_valid_len, Y, Y_valid_len in train_iter:
1. Text in some languages (e.g., Chinese and Japanese) has no word boundary indicators (e.g., spaces). Is word-level tokenization still a good idea in such cases? Why or why not?
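The tokenization exercise above can be made concrete with a two-line experiment; the sample sentence below is a made-up illustration, not from the chapter's dataset:

```python
# Chinese text has no spaces, so whitespace "word" tokenization yields one
# giant token; character-level tokenization is a common fallback.
text = "我去银行存钱"          # "I went to the bank to deposit money"
word_tokens = text.split()     # no spaces to split on
char_tokens = list(text)

print(word_tokens)   # ['我去银行存钱']
print(char_tokens)   # ['我', '去', '银', '行', '存', '钱']
```

In practice, such languages are handled with character-level vocabularies, dedicated word segmenters, or learned subword schemes rather than whitespace splitting.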

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/344)
+[Discussions](https://discuss.d2l.ai/t/2777)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1060)
+[Discussions](https://discuss.d2l.ai/t/2776)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_recurrent-modern/seq2seq.md
@@ -550,9 +550,9 @@ for eng, fra in zip(engs, fras):
1. Are there other ways to design the output layer of the decoder?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/345)
+[Discussions](https://discuss.d2l.ai/t/2783)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1062)
+[Discussions](https://discuss.d2l.ai/t/2782)
:end_tab:
