From 52ff6ee209b4c9a4027cd05514b5d7da98e3fecd Mon Sep 17 00:00:00 2001
From: xiaotinghe
Date: Sun, 9 May 2021 04:39:26 +0800
Subject: [PATCH] forum link (#805)

---
 chapter_computational-performance/async-computation.md | 4 ++--
 chapter_computational-performance/auto-parallelism.md | 4 ++--
 chapter_computational-performance/hardware.md | 2 +-
 chapter_computational-performance/hybridize.md | 6 +++---
 chapter_computational-performance/multiple-gpus-concise.md | 4 ++--
 chapter_computational-performance/multiple-gpus.md | 4 ++--
 chapter_computational-performance/parameterserver.md | 2 +-
 chapter_recurrent-modern/beam-search.md | 2 +-
 chapter_recurrent-modern/bi-rnn.md | 4 ++--
 chapter_recurrent-modern/deep-rnn.md | 4 ++--
 chapter_recurrent-modern/encoder-decoder.md | 4 ++--
 chapter_recurrent-modern/gru.md | 2 +-
 chapter_recurrent-modern/lstm.md | 4 ++--
 chapter_recurrent-modern/machine-translation-and-dataset.md | 4 ++--
 chapter_recurrent-modern/seq2seq.md | 4 ++--
 15 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/chapter_computational-performance/async-computation.md b/chapter_computational-performance/async-computation.md
index 432dcd985..b80597790 100644
--- a/chapter_computational-performance/async-computation.md
+++ b/chapter_computational-performance/async-computation.md
@@ -208,9 +208,9 @@ The simplified interaction between the Python frontend thread and the C++ backend thread can be summarized as follows:
:end_tab:

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/361)
+[Discussions](https://discuss.d2l.ai/t/2792)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/2564)
+[Discussions](https://discuss.d2l.ai/t/2791)
:end_tab:
diff --git a/chapter_computational-performance/auto-parallelism.md b/chapter_computational-performance/auto-parallelism.md
index d1c16b167..0a1ec4b96 100644
--- a/chapter_computational-performance/auto-parallelism.md
+++ b/chapter_computational-performance/auto-parallelism.md
@@ -181,9 +181,9 @@ with d2l.Benchmark('Run on GPU1 and copy to CPU'):
1. Design computational tasks that involve more complex data dependencies, and run experiments to see whether you can improve performance while obtaining the correct results.

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/362)
+[Discussions](https://discuss.d2l.ai/t/2795)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1681)
+[Discussions](https://discuss.d2l.ai/t/2794)
:end_tab:
diff --git a/chapter_computational-performance/hardware.md b/chapter_computational-performance/hardware.md
index 7cd5f9f8d..08cdf75bd 100644
--- a/chapter_computational-performance/hardware.md
+++ b/chapter_computational-performance/hardware.md
@@ -214,4 +214,4 @@ GPU memory has even higher bandwidth requirements, since GPUs have far more processing units than CPUs
1. Look at the performance numbers for the Turing T4 GPU. Why does the performance only double when going from FP16 to INT8 and INT4?
1. 
How long does a network packet take for a round trip from San Francisco to Amsterdam? Hint: you can assume the distance is 10,000 km.

-[Discussions](https://discuss.d2l.ai/t/363)
+[Discussions](https://discuss.d2l.ai/t/2798)
diff --git a/chapter_computational-performance/hybridize.md b/chapter_computational-performance/hybridize.md
index 637632b1a..dc8c9947a 100644
--- a/chapter_computational-performance/hybridize.md
+++ b/chapter_computational-performance/hybridize.md
@@ -380,13 +380,13 @@ net(x)
:end_tab:

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/360)
+[Discussions](https://discuss.d2l.ai/t/2789)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/2490)
+[Discussions](https://discuss.d2l.ai/t/2788)
:end_tab:

:begin_tab:`tensorflow`
-[Discussions](https://discuss.d2l.ai/t/2492)
+[Discussions](https://discuss.d2l.ai/t/2787)
:end_tab:
diff --git a/chapter_computational-performance/multiple-gpus-concise.md b/chapter_computational-performance/multiple-gpus-concise.md
index ede65944c..c8f7b58a4 100644
--- a/chapter_computational-performance/multiple-gpus-concise.md
+++ b/chapter_computational-performance/multiple-gpus-concise.md
@@ -264,9 +264,9 @@ train(net, num_gpus=2, batch_size=512, lr=0.2)
:end_tab:

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/365)
+[Discussions](https://discuss.d2l.ai/t/2804)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1403)
+[Discussions](https://discuss.d2l.ai/t/2803)
:end_tab:
diff --git a/chapter_computational-performance/multiple-gpus.md b/chapter_computational-performance/multiple-gpus.md
index d911856f4..5246cb06a 100644
--- a/chapter_computational-performance/multiple-gpus.md
+++ b/chapter_computational-performance/multiple-gpus.md
@@ -357,9 +357,9 @@ train(num_gpus=2, batch_size=256, lr=0.2)
1. Implement multi-GPU computation of the test accuracy.

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/364)
+[Discussions](https://discuss.d2l.ai/t/2801)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1669)
+[Discussions](https://discuss.d2l.ai/t/2800)
:end_tab:
diff --git a/chapter_computational-performance/parameterserver.md b/chapter_computational-performance/parameterserver.md
index b8aa9b924..284035f33 100644
--- a/chapter_computational-performance/parameterserver.md
+++ b/chapter_computational-performance/parameterserver.md
@@ -98,4 +98,4 @@ $$\mathbf{g}_{i} = \sum_{k \in \text{workers}} \sum_{j \in \text{GPUs}} \mathbf{
1. Can we allow asynchronous communication (while computation is still ongoing)? How does it affect performance?
1. What if we lose a server during a long-running computation? How can we design a fault-tolerance mechanism so that we avoid restarting the computation from scratch?

-[Discussions](https://discuss.d2l.ai/t/366)
+[Discussions](https://discuss.d2l.ai/t/2807)
diff --git a/chapter_recurrent-modern/beam-search.md b/chapter_recurrent-modern/beam-search.md
index 13be01b4c..7e6826e6f 100644
--- a/chapter_recurrent-modern/beam-search.md
+++ b/chapter_recurrent-modern/beam-search.md
@@ -71,4 +71,4 @@ $$ \frac{1}{L^\alpha} \log P(y_1, \ldots, y_{L}) = \frac{1}{L^\alpha} \sum_{t'=1
1. Apply beam search to the machine translation problem in :numref:`sec_seq2seq`. How does the beam width affect the results and the prediction speed?
1. In :numref:`sec_rnn_scratch`, we used a language model to generate text following a user-provided prefix. Which search strategy does it use? Can you improve on it?

-[Discussions](https://discuss.d2l.ai/t/338)
\ No newline at end of file
+[Discussions](https://discuss.d2l.ai/t/2786)
\ No newline at end of file
diff --git a/chapter_recurrent-modern/bi-rnn.md b/chapter_recurrent-modern/bi-rnn.md
index 0fe479c42..8c976b809 100644
--- a/chapter_recurrent-modern/bi-rnn.md
+++ b/chapter_recurrent-modern/bi-rnn.md
@@ -166,9 +166,9 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
1. 
Polysemy is common in natural languages. For example, the word "bank" has different meanings in "i went to the bank to deposit cash" and "i went to the bank to sit down". How can we design a neural network model that, given a context sequence and a word, returns a vector representation of the word in that context? Which type of neural architecture is better suited to handling polysemy?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/339)
+[Discussions](https://discuss.d2l.ai/t/2774)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1059)
+[Discussions](https://discuss.d2l.ai/t/2773)
:end_tab:
diff --git a/chapter_recurrent-modern/deep-rnn.md b/chapter_recurrent-modern/deep-rnn.md
index 77de616f8..126e2d20b 100644
--- a/chapter_recurrent-modern/deep-rnn.md
+++ b/chapter_recurrent-modern/deep-rnn.md
@@ -97,9 +97,9 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
4. When modeling text, should we combine sources by different authors? Why is this a good idea? What can go wrong?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/340)
+[Discussions](https://discuss.d2l.ai/t/2771)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1058)
+[Discussions](https://discuss.d2l.ai/t/2770)
:end_tab:
diff --git a/chapter_recurrent-modern/encoder-decoder.md b/chapter_recurrent-modern/encoder-decoder.md
index 86d57b082..96bfa99d4 100644
--- a/chapter_recurrent-modern/encoder-decoder.md
+++ b/chapter_recurrent-modern/encoder-decoder.md
@@ -121,9 +121,9 @@ class EncoderDecoder(nn.Module):
1. Besides machine translation, can you think of another application where the encoder-decoder architecture applies?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/341)
+[Discussions](https://discuss.d2l.ai/t/2780)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1061)
+[Discussions](https://discuss.d2l.ai/t/2779)
:end_tab:
diff --git a/chapter_recurrent-modern/gru.md b/chapter_recurrent-modern/gru.md
index d4f7a4285..f6de1de49 100644
--- a/chapter_recurrent-modern/gru.md
+++ b/chapter_recurrent-modern/gru.md
@@ -241,7 +241,7 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
1. What happens if you implement only parts of a gated recurrent unit, e.g., with only a reset gate or only an update gate?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/342)
+[Discussions](https://discuss.d2l.ai/t/2764)
:end_tab:

:begin_tab:`pytorch`
diff --git a/chapter_recurrent-modern/lstm.md b/chapter_recurrent-modern/lstm.md
index c43b32128..110312b95 100644
--- a/chapter_recurrent-modern/lstm.md
+++ b/chapter_recurrent-modern/lstm.md
@@ -255,9 +255,9 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, device)
1. Implement an LSTM model for time series prediction rather than character sequence prediction.

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/343)
+[Discussions](https://discuss.d2l.ai/t/2766)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1057)
+[Discussions](https://discuss.d2l.ai/t/2768)
:end_tab:
diff --git a/chapter_recurrent-modern/machine-translation-and-dataset.md b/chapter_recurrent-modern/machine-translation-and-dataset.md
index 53f8c7e8f..c9fe6fc36 100644
--- a/chapter_recurrent-modern/machine-translation-and-dataset.md
+++ b/chapter_recurrent-modern/machine-translation-and-dataset.md
@@ -202,9 +202,9 @@ for X, X_valid_len, Y, Y_valid_len in train_iter:
1. Text in some languages (e.g., Chinese and Japanese) has no word boundary indicators (e.g., spaces). Is word-level tokenization still a good idea in such cases? Why or why not?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/344)
+[Discussions](https://discuss.d2l.ai/t/2777)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1060)
+[Discussions](https://discuss.d2l.ai/t/2776)
:end_tab:
\ No newline at end of file
diff --git a/chapter_recurrent-modern/seq2seq.md b/chapter_recurrent-modern/seq2seq.md
index 65a8888e1..5b309ea2d 100644
--- a/chapter_recurrent-modern/seq2seq.md
+++ b/chapter_recurrent-modern/seq2seq.md
@@ -550,9 +550,9 @@ for eng, fra in zip(engs, fras):
1. 
Are there other ways to design the output layer of the decoder?

:begin_tab:`mxnet`
-[Discussions](https://discuss.d2l.ai/t/345)
+[Discussions](https://discuss.d2l.ai/t/2783)
:end_tab:

:begin_tab:`pytorch`
-[Discussions](https://discuss.d2l.ai/t/1062)
+[Discussions](https://discuss.d2l.ai/t/2782)
:end_tab:
\ No newline at end of file