# Set up TPCH-hadoop-hive environment

Yi Li · 2020-09-07
<content type="html"><![CDATA[<h2 id="set-up-containers-that-can-ping-each-other-even-cross-different-host"><a href="#set-up-containers-that-can-ping-each-other-even-cross-different-host" class="headerlink" title="set up containers that can ping each other even cross different host"></a>set up containers that can ping each other even cross different host</h2><p>mainly reference: <a href="https://github.com/JuntaoLiu01/Hadoop-Hive" target="_blank" rel="noopener">tpch-hadoop-hive</a></p><ul><li>Build centos-ssh images</li></ul><p>centos-ssh Dockerfile</p><figure class="highlight dockerfile"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">FROM</span> centos</span><br><span class="line"><span class="keyword">MAINTAINER</span> <span class="string">'yili'</span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> yum install -y openssh-server sudo</span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> sed -i <span class="string">'s/UsePAM yes/UsePAM no/g'</span> /etc/ssh/sshd_config</span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> yum install -y openssh-clients</span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> <span class="built_in">echo</span> <span class="string">"root:root"</span> | chpasswd</span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> <span class="built_in">echo</span> <span class="string">"root ALL=(ALL) ALL"</span> >> /etc/sudoers</span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key</span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key</span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> mkdir /var/run/sshd</span></span><br><span class="line"><span class="keyword">EXPOSE</span> <span class="number">22</span></span><br><span class="line"><span class="keyword">CMD</span><span class="bash"> [<span class="string">"/usr/sbin/sshd"</span>, <span class="string">"-D"</span>]</span></span><br></pre></td></tr></table></figure><figure class="highlight armasm"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="symbol">docker</span> <span class="keyword">build </span>-t centos-ssh:latest .</span><br></pre></td></tr></table></figure><ul><li>Build centos-hadoop images</li></ul><p>centos-hadoop dockerfile, prepare <em>jdk</em> and <em>hadoop</em> file first</p><p> <a href="https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html" target="_blank" rel="noopener">oracle-jdk-downlaod</a></p><p> <a href="https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/" target="_blank" rel="noopener">hadoop download</a></p><figure class="highlight dockerfile"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span 
class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">FROM</span> centos-ssh</span><br><span class="line"><span class="keyword">ADD</span><span class="bash"> jdk-8u161-linux-x64.tar.gz /usr/<span class="built_in">local</span>/ <span class="comment">#you should register oracle first </span></span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> mv /usr/<span class="built_in">local</span>/jdk1.8.0_161 /usr/<span class="built_in">local</span>/jdk1.8 <span class="comment">#modify for jdk version</span></span></span><br><span class="line"><span class="keyword">ENV</span> JAVA_HOME /usr/local/jdk1.<span class="number">8</span></span><br><span class="line"><span class="keyword">ENV</span> PATH $JAVA_HOME/bin:$PATH</span><br><span class="line"><span class="keyword">ADD</span><span class="bash"> hadoop-2.7.3.tar.gz /usr/<span class="built_in">local</span></span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> mv /usr/<span class="built_in">local</span>/hadoop-2.7.3 /usr/<span class="built_in">local</span>/hadoop</span></span><br><span class="line"><span class="keyword">ENV</span> HADOOP_HOME /usr/local/hadoop</span><br><span class="line"><span class="keyword">ENV</span> PATH $HADOOP_HOME/bin:$PATH </span><br><span class="line"></span><br><span class="line">or </span><br><span class="line"></span><br><span class="line"><span class="keyword">FROM</span> centos-ssh</span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> yum install -y java-1.8.0-openjdk <span class="comment">#using open-jdk</span></span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> yum install -y java-1.8.0-openjdk-devel.x86_64 <span class="comment">#for install jps tools</span></span></span><br><span class="line"><span class="keyword">ENV</span> JAVA_HOME /usr/lib/jvm/java-<span class="number">1.8</span>.<span class="number">0</span>-openjdk-<span class="number">1.8</span>.<span class="number">0.242</span>.b08-<span class="number">0.1</span>.al7.x86_64/jre <span class="comment">#modify for open-jdk version </span></span><br><span class="line"><span class="keyword">ENV</span> PATH $JAVA_HOME/bin:$PATH</span><br><span class="line"><span class="keyword">ADD</span><span class="bash"> hadoop-2.7.3.tar.gz /usr/<span class="built_in">local</span></span></span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> mv /usr/<span class="built_in">local</span>/hadoop-2.7.3 /usr/<span class="built_in">local</span>/hadoop</span></span><br><span class="line"><span class="keyword">ENV</span> HADOOP_HOME /usr/local/hadoop</span><br><span class="line"><span class="keyword">ENV</span> PATH $HADOOP_HOME/bin:$PATH</span><br><span class="line"><span class="keyword">RUN</span><span class="bash"> yum install -y net-tools <span class="comment">#for testing network 
communication</span></span></span><br></pre></td></tr></table></figure><figure class="highlight armasm"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="symbol">docker</span> <span class="keyword">build </span>-t centos-hadoop:latest .</span><br></pre></td></tr></table></figure><ul><li>Build centos-hadoop-hive image</li></ul><p><a href="https://archive.apache.org/dist/hadoop/core/hadoop-2.1.1-beta/" target="_blank" rel="noopener">hive-download</a><br><a href="https://dev.mysql.com/downloads/connector/j/" target="_blank" rel="noopener">mysql-connector-download</a></p><figure class="highlight crystal"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">FROM centos-<span class="symbol">hadoop:</span>latest</span><br><span class="line">ADD apache-hive-<span class="number">2.1</span>.<span class="number">1</span>-bin.tar.gz /usr/local/</span><br><span class="line">RUN mv /usr/local/apache-hive-<span class="number">2.1</span>.<span class="number">1</span>-bin/ <span class="regexp">/usr/local</span><span class="regexp">/hive/</span></span><br><span class="line">ADD mysql-connector-java-<span class="number">5.1</span>.<span class="number">46</span>-bin.jar /usr/local/hive/<span class="class"><span class="keyword">lib</span>/ <span class="comment">#hive-metadata-mysql-connector</span></span></span><br><span class="line">ENV HIVE_HOME /usr/local/hive</span><br><span class="line">ENV PATH $HIVE_HOME/<span class="symbol">bin:</span>$PATH</span><br></pre></td></tr></table></figure><ul><li>Build centos-hadoop-hive-tpch images</li></ul><p><a href="http://www.tpc.org/tpch/default5.asp" target="_blank" rel="noopener">TPCH</a> is a benchmark of database<br><a href="http://www.tpc.org/tpc_documents_current_versions/current_specifications5.asp" target="_blank" rel="noopener">TPCH-degen-downlaod</a><br><a href="https://issues.apache.org/jira/browse/HIVE-600" target="_blank" rel="noopener">TPCH-hive-query-script-download</a></p><p>For dbgen, download and unzip</p><figure class="highlight cpp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">vi dbgen/makefile</span><br><span class="line"></span><br><span class="line"> CC = GCC </span><br><span class="line"> DATABASE = SQLSERVER</span><br><span class="line"> MACHINE=LINUX</span><br><span class="line"> WORKLOAD = TPCH</span><br></pre></td></tr></table></figure><figure class="highlight cpp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">vi dbgen tpcd.h</span><br><span class="line"></span><br><span class="line"> <span class="meta">#<span class="meta-keyword">define</span> GEN_QUERY_PLAN <span class="meta-string">"EXPLAIN;"</span></span></span><br><span class="line"> <span class="meta">#<span class="meta-keyword">define</span> START_TRAN <span 
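The makefile edits can also be scripted instead of done by hand; a sketch, assuming the stock `makefile.suite` template that ships with dbgen 2.18.0 (the `tpcd.h` defines above still need editing in its SQLSERVER section):

```sh
cd /bigdata/tpch-gen/dbgen
cp makefile.suite makefile                       # derive a makefile from the shipped template
sed -i 's/^CC *=.*/CC      = gcc/'        makefile
sed -i 's/^DATABASE *=.*/DATABASE= SQLSERVER/' makefile
sed -i 's/^MACHINE *=.*/MACHINE = LINUX/'      makefile
sed -i 's/^WORKLOAD *=.*/WORKLOAD = TPCH/'     makefile
make                                             # builds the dbgen and qgen binaries
```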
class="meta-string">"START TRANSACTION;\n"</span></span></span><br><span class="line"> <span class="meta">#<span class="meta-keyword">define</span> END_TRAN <span class="meta-string">"COMMIT;\n"</span></span></span><br><span class="line"> <span class="meta">#<span class="meta-keyword">define</span> SET_OUTPUT <span class="meta-string">""</span></span></span><br><span class="line"> <span class="meta">#<span class="meta-keyword">define</span> SET_ROWCOUNT <span class="meta-string">"limit %d;\n"</span></span></span><br><span class="line"> <span class="meta">#<span class="meta-keyword">define</span> SET_DBASE <span class="meta-string">"use %s;\n"</span></span></span><br></pre></td></tr></table></figure><p>Now we continuouslly edit and add these cmdlines to the centos-hadoop-hive-tpch dockerfile </p><figure class="highlight routeros"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="builtin-name">RUN</span> mkdir -p /bigdata</span><br><span class="line"><span class="builtin-name">ADD</span> TPC-H_on_Hive_2009-08-14.tar.gz /bigdata/ #hive script</span><br><span class="line"><span class="builtin-name">RUN</span> mkdir -p /bigdata/tpch-gen/</span><br><span class="line"><span class="builtin-name">ADD</span> 2.18.0_rc2 /bigdata/tpch-gen/ #dbgen script</span><br></pre></td></tr></table></figure><figure class="highlight armasm"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="symbol">docker</span> <span class="keyword">bulild </span>-t centos-hadoop-tpch:latest .</span><br></pre></td></tr></table></figure><h2 id="hadoop-environment-setup"><a href="#hadoop-environment-setup" class="headerlink" title="hadoop environment setup"></a>hadoop environment setup</h2><ul><li>Cross host communication by ssh-keygen and ssh-copy-id</li></ul><p>for docker, we use a docker-swarm to build <em>my-attachable-overlay</em> network<br>for pouch, we use a flannel+etcd service to build overlay network</p><p>for docker, you should clarify my-attachable-overlay netwrok, we deploy different containers on differnt servers, you can also put them in the same server if you can.</p><figure class="highlight brainfuck"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">server1</span></span><br><span class="line"><span class="comment">docker</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">hadoop0</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">hadoop0</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">net=my</span><span class="literal">-</span><span class="comment">attachable</span><span class="literal">-</span><span 
class="comment">overlay</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">8</span> <span class="literal">-</span><span class="comment">m</span> <span class="comment">16g</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="literal">-</span><span class="comment">p</span> <span class="comment">50070:50070</span> <span class="literal">-</span><span class="comment">p</span> <span class="comment">8088:8088</span> <span class="comment">centos</span><span class="literal">-</span><span class="comment">hadoop</span><span class="literal">-</span><span class="comment">hive</span><span class="literal">-</span><span class="comment">tpch:latest</span></span><br><span class="line"><span class="comment"></span></span><br><span class="line"><span class="comment">server2</span></span><br><span class="line"><span class="comment">docker</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">hadoop1</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">hadoop1</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">net=my</span><span class="literal">-</span><span class="comment">attachable</span><span class="literal">-</span><span class="comment">overlay</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">8</span> <span class="literal">-</span><span class="comment">m</span> <span class="comment">16g</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="comment">centos</span><span class="literal">-</span><span class="comment">hadoop:latest</span> </span><br><span class="line"></span><br><span class="line"><span class="comment">server3</span></span><br><span class="line"><span class="comment">docker</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">hadoop2</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">hadoop2</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">net=my</span><span class="literal">-</span><span class="comment">attachable</span><span class="literal">-</span><span class="comment">overlay</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">8</span> <span class="literal">-</span><span class="comment">m</span> 
<span class="comment">16g</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="comment">centos</span><span class="literal">-</span><span class="comment">hadoop:latest</span></span><br><span class="line"><span class="comment"></span></span><br><span class="line"><span class="comment">server4</span></span><br><span class="line"><span class="comment">docker</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">hadoop3</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">hadoop3</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">net=my</span><span class="literal">-</span><span class="comment">attachable</span><span class="literal">-</span><span class="comment">overlay</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">8</span> <span class="literal">-</span><span class="comment">m</span> <span class="comment">16g</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="comment">centos</span><span class="literal">-</span><span class="comment">hadoop:latest</span></span><br></pre></td></tr></table></figure><p>for pouch, no need to specify network since all under flannel net, using defalut bridge</p><figure class="highlight brainfuck"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">server1</span></span><br><span class="line"><span class="comment">pouch</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">hadoop0</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">hadoop0</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">8</span> <span class="literal">-</span><span class="comment">m</span> <span class="comment">16g</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="literal">-</span><span class="comment">p</span> <span class="comment">50070:50070</span> <span class="literal">-</span><span class="comment">p</span> <span class="comment">8088:8088</span> <span 
class="comment">centos</span><span class="literal">-</span><span class="comment">hadoop</span><span class="literal">-</span><span class="comment">hive</span><span class="literal">-</span><span class="comment">tpch:latest</span></span><br><span class="line"><span class="comment"></span></span><br><span class="line"><span class="comment">server2</span></span><br><span class="line"><span class="comment">pouch</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">hadoop1</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">hadoop1</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">8</span> <span class="literal">-</span><span class="comment">m</span> <span class="comment">16g</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="comment">centos</span><span class="literal">-</span><span class="comment">hadoop:latest</span> </span><br><span class="line"></span><br><span class="line"><span class="comment">server3</span></span><br><span class="line"><span class="comment">pouch</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">hadoop2</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">hadoop2</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">8</span> <span class="literal">-</span><span class="comment">m</span> <span class="comment">16g</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="comment">centos</span><span class="literal">-</span><span class="comment">hadoop:latest</span></span><br><span class="line"><span class="comment"></span></span><br><span class="line"><span class="comment">server4</span></span><br><span class="line"><span class="comment">pouch</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">hadoop3</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">hadoop3</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">8</span> <span class="literal">-</span><span 
class="comment">m</span> <span class="comment">16g</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="comment">centos</span><span class="literal">-</span><span class="comment">hadoop:latest</span></span><br></pre></td></tr></table></figure><p>for each container, set its host and ips, if you don’t know its ip,</p><figure class="highlight gradle"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker <span class="keyword">inspect</span> conatiner_name|<span class="keyword">grep</span> Addr</span><br></pre></td></tr></table></figure><p>for each container</p><figure class="highlight lsl"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">vi etc/hosts</span><br><span class="line"> <span class="number">172.18</span><span class="number">.0</span><span class="number">.2</span> hadoop0</span><br><span class="line"> <span class="number">172.18</span><span class="number">.0</span><span class="number">.3</span> hadoop1</span><br><span class="line"> <span class="number">172.18</span><span class="number">.0</span><span class="number">.4</span> hadoop2</span><br><span class="line"> <span class="number">172.18</span><span class="number">.0</span><span class="number">.5</span> hadoop3</span><br></pre></td></tr></table></figure><p>set up loginin without password</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">##hadoop0</span></span><br><span class="line"><span class="built_in">cd</span> ~</span><br><span class="line">mkdir .ssh</span><br><span class="line"><span class="built_in">cd</span> .ssh</span><br><span class="line">ssh-keygen -t rsa</span><br><span class="line">ssh-copy-id -i localhost</span><br><span class="line">ssh-copy-id -i hadoop0</span><br><span class="line">ssh-copy-id -i hadoop1</span><br><span class="line">ssh-copy-id -i hadoop2</span><br><span class="line"><span class="comment">##hadoop1</span></span><br><span class="line"><span class="built_in">cd</span> ~</span><br><span class="line"><span class="built_in">cd</span> .ssh</span><br><span class="line">ssh-keygen -t rsa</span><br><span class="line">ssh-copy-id -i localhost</span><br><span class="line">ssh-copy-id -i hadoop1</span><br><span class="line"><span class="comment">##hadoop2</span></span><br><span class="line"><span class="built_in">cd</span> 
- Configure Hadoop:

```sh
cd /usr/local/hadoop/etc/hadoop/
```

vi slaves

```
hadoop1
hadoop2
hadoop3
```

vi core-site.xml

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop0:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>
```
class="name">configuration</span>></span></span><br></pre></td></tr></table></figure><p>set larger mapred.child.java.opts, otherwise query will fail</p><p>vi mapred-site.xml</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="tag"><<span class="name">configuration</span>></span></span><br><span class="line"><span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>mapreduce.framework.name<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>yarn<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"><span class="tag"></<span class="name">property</span>></span></span><br><span class="line"><span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>mapred.child.java.opts<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>-Xmx3072m<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"><span class="tag"></<span class="name">property</span>></span></span><br><span class="line"><span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>mapreduce.map.memory.mb<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>4096<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"><span class="tag"></<span class="name">property</span>></span></span><br><span class="line"><span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>mapreduce.reduce.memory.mb<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>4096<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"><span class="tag"></<span class="name">property</span>></span></span><br><span class="line"><span class="tag"></<span class="name">configuration</span>></span></span><br></pre></td></tr></table></figure><p>vi hdfs-site.xml</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span 
class="tag"><<span class="name">configuration</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>dfs.replication<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>3<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"> <span class="tag"></<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>dfs.permissions<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>false<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"> <span class="tag"></<span class="name">property</span>></span></span><br><span class="line"><span class="tag"></<span class="name">configuration</span>></span></span><br></pre></td></tr></table></figure><p>vi yarn-site.xml</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line"><span class="tag"><<span class="name">configuration</span>></span></span><br><span class="line"><span class="comment"><!-- Site specific YARN configuration properties --></span></span><br><span class="line"> <span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>yarn.nodemanager.aux-services<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>mapreduce_shuffle<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"> <span class="tag"></<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>yarn.log-aggregation-enable<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>true<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"> <span class="tag"></<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span 
class="name">description</span>></span>The hostname of the RM.<span class="tag"></<span class="name">description</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>yarn.resourcemanager.hostname<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>hadoop0<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"> <span class="tag"></<span class="name">property</span>></span></span><br><span class="line"></span><br><span class="line"> <span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>yarn.nodemanager.vmem-check-enabled<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>false<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">description</span>></span>Whether virtual memory limits will be enforced for containers<span class="tag"></<span class="name">description</span>></span></span><br><span class="line"> <span class="tag"></<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>yarn.nodemanager.vmem-pmem-ratio<span class="tag"></<span class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>4<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">description</span>></span>Ratio between virtual memory to physical memory when setting memory limits for containers<span class="tag"></<span class="name">description</span>></span></span><br><span class="line"> <span class="tag"></<span class="name">property</span>></span></span><br><span class="line"><span class="tag"></<span class="name">configuration</span>></span></span><br></pre></td></tr></table></figure><p><a href="https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_yarn_tuning.html" target="_blank" rel="noopener">other configuration references</a></p><p>edit hadoop-env.sh, set larger heapsize and client memory size</p><p>vi hadoop-env.sh</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># The maximum amount of heap to use, in MB. 
- Useful commands to check Hadoop and HDFS status

Check unhealthy node status: [http://hadoop0-ip:8088/cluster/nodes/unhealthy](http://hadoop0-ip:8088/cluster/nodes/unhealthy)
Hadoop overview: [http://hadoop0-ip:8088/](http://hadoop0-ip:8088/) and [http://hadoop0-ip:50070/](http://hadoop0-ip:50070/)

Check the current DFS status:

```sh
hdfs dfsadmin -report
```

- Start Hadoop; it will fail if the cluster IDs are not unified:

```sh
cd /usr/local/hadoop/
sbin/start-all.sh
```

- Format HDFS and restart everything. On the running namenode:

```sh
hdfs namenode -format
```

The most important steps (a scripted version follows below):

```sh
0. master: sbin/stop-all.sh
1. slaves: rm -rf /usr/local/hadoop/tmp   # clean HDFS data
2. master: rm -rf /usr/local/hadoop/tmp   # clean HDFS metadata
3. master: hdfs namenode -format          # clean namenode data and generate a new cluster ID
4. master: scp -rq /usr/local/hadoop root@slaves:/usr/local   # copy to the slaves (hadoop1,2,3): both tmp (includes the HDFS cluster ID) and conf (the xml files)
5. master: sbin/start-all.sh
```
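Run from the master, the whole reset collapses into one script; a hedged sketch (hypothetical `reset-hdfs.sh`, assuming passwordless ssh to hadoop1-3 is already working):

```sh
#!/bin/sh
# reset-hdfs.sh: wipe HDFS state and re-seed the slaves with a fresh cluster ID
HADOOP=/usr/local/hadoop
$HADOOP/sbin/stop-all.sh
for slave in hadoop1 hadoop2 hadoop3; do
  ssh root@"$slave" "rm -rf $HADOOP/tmp"     # clean HDFS data on each slave
done
rm -rf $HADOOP/tmp                           # clean HDFS metadata on the master
$HADOOP/bin/hdfs namenode -format -force     # generate a new cluster ID without prompting
for slave in hadoop1 hadoop2 hadoop3; do
  scp -rq $HADOOP root@"$slave":/usr/local   # ship tmp (cluster ID) and conf to the slaves
done
$HADOOP/sbin/start-all.sh
```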
class="built_in">local</span> #scp <span class="keyword">to</span> slaves(hadoop1,<span class="number">2</span>,<span class="number">3</span>), both tmp(include hdfs cluster id) <span class="literal">and</span> conf(<span class="built_in">xml</span> files)</span><br><span class="line"><span class="number">5.</span> master: sbin/start<span class="params">-all.sh</span></span><br></pre></td></tr></table></figure><p>check current jps-namenode</p><figure class="highlight lsl"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">#jps</span><br><span class="line"><span class="number">56546</span> Jps</span><br><span class="line"><span class="number">5762</span> RunJar</span><br><span class="line"><span class="number">4262</span> ResourceManager</span><br><span class="line"><span class="number">3866</span> NameNode</span><br><span class="line"><span class="number">4078</span> SecondaryNameNode</span><br></pre></td></tr></table></figure><p>check current jps-datanode</p><figure class="highlight lsl"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">#jps</span><br><span class="line"><span class="number">883</span> DataNode</span><br><span class="line"><span class="number">997</span> NodeManager</span><br><span class="line"><span class="number">95716</span> Jps</span><br></pre></td></tr></table></figure><h2 id="hive-environment-setup"><a href="#hive-environment-setup" class="headerlink" title="hive environment setup"></a>hive environment setup</h2><ul><li>configure hive <figure class="highlight stata"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">cd</span> /usr/<span class="keyword">local</span>/hive/<span class="keyword">conf</span></span><br></pre></td></tr></table></figure></li></ul><p>init these config files</p><figure class="highlight stylus"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">cp hive-env<span class="selector-class">.sh</span><span class="selector-class">.template</span> hive-env.sh</span><br><span class="line">cp hive-default<span class="selector-class">.xml</span><span class="selector-class">.template</span> hive-site.xml</span><br><span class="line">cp hive-log4j2<span class="selector-class">.properties</span><span class="selector-class">.template</span> hive-log4j2.properties</span><br><span class="line">cp hive-exec-log4j2<span class="selector-class">.properties</span><span class="selector-class">.template</span> hive-exec-log4j2.properties</span><br></pre></td></tr></table></figure><p>vi hive-site.xml</p><figure class="highlight puppet"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">change</span> {<span class="literal">system</span>:java.io.tmpdir} <span class="keyword">to</span> /home/hadoop/hive/tmp</span><br><span class="line"><span class="keyword">change</span> {<span class="literal">system</span>:user.<span 
class="literal">name</span>} <span class="keyword">to</span> {user.<span class="literal">name</span>}</span><br></pre></td></tr></table></figure><p>vi hive-env.sh</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">export</span> JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64</span><br><span class="line"><span class="built_in">export</span> HADOOP_HOME=/usr/<span class="built_in">local</span>/hadoop</span><br><span class="line"><span class="built_in">export</span> HIVE_HOME=/usr/<span class="built_in">local</span>/hive</span><br><span class="line"><span class="built_in">export</span> HIVE_CONF_DIR=/usr/<span class="built_in">local</span>/hive/conf</span><br></pre></td></tr></table></figure><ul><li>mysql</li></ul><p>if docker</p><figure class="highlight routeros"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker <span class="builtin-name">run</span> --name mysql -e <span class="attribute">MYSQL_ROOT_PASSWORD</span>=111111 <span class="attribute">--net</span>=my-attachable-overlay -d mysql:latest</span><br></pre></td></tr></table></figure><p>if pouch </p><figure class="highlight routeros"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">pouch <span class="builtin-name">run</span> --name mysql -e <span class="attribute">MYSQL_ROOT_PASSWORD</span>=111111 -d mysql:latest</span><br></pre></td></tr></table></figure><p>in mysql</p><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">docker exec -it mysql bash</span><br><span class="line">mysql -u root -p</span><br><span class="line"> </span><br><span class="line"><span class="comment">#in mysql </span></span><br><span class="line"><span class="keyword">create</span> <span class="keyword">database</span> metastore;</span><br></pre></td></tr></table></figure><figure class="highlight lsl"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">docker inspect mysql|grep Addr</span><br><span class="line"></span><br><span class="line"><span class="number">10.0</span><span class="number">.0</span><span class="number">.9</span></span><br></pre></td></tr></table></figure><p>configure mysql connector address,useSSL=False,allowPublicKeyRetrieval=True,separete by “&”</p><p>vi /usr/local/hive/conf/hive-site.yaml</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="tag"><<span class="name">property</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">name</span>></span>javax.jdo.option.ConnectionURL<span class="tag"></<span 
class="name">name</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">value</span>></span>jdbc:mysql://10.0.0.9:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false&amp;allowPublicKeyRetrieval=true<span class="tag"></<span class="name">value</span>></span></span><br><span class="line"> <span class="tag"><<span class="name">description</span>></span></span><br><span class="line"> JDBC connect string for a JDBC metastore.</span><br><span class="line"> To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.</span><br><span class="line"> For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.</span><br><span class="line"> <span class="tag"></<span class="name">description</span>></span></span><br><span class="line"> <span class="tag"></<span class="name">property</span>></span></span><br></pre></td></tr></table></figure><p>for hive</p><figure class="highlight 1c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">schematool -dbType mysql -initSchema</span><br><span class="line">hive --service metastore <span class="meta">&</span></span><br></pre></td></tr></table></figure><p>Now, hive environment is ready</p><h2 id="tpch-dbgen-and-execute-hive-queries"><a href="#tpch-dbgen-and-execute-hive-queries" class="headerlink" title="tpch-dbgen and execute hive queries"></a>tpch-dbgen and execute hive queries</h2><ul><li>generate data hdfs to 3 datanodes<br>generate data<figure class="highlight verilog"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">cd /bigdata/tpch-gen/dbgen</span><br><span class="line">./dbgen -s <span class="number">10</span> # <span class="keyword">generate</span> <span class="number">10</span>G scale data, <span class="keyword">and</span> it <span class="keyword">use</span> <span class="keyword">generate</span> <span class="number">8</span> tables.</span><br><span class="line"></span><br><span class="line">mv /bigdata/tpch-gen/dbgen<span class="comment">/*.tbl /bigdata/TPC-H_on_Hive/data/ #move the data to TPC-H_on_Hive/data folder</span></span><br></pre></td></tr></table></figure></li></ul><p>copy data to hdfs </p><p>vi /bigdata/TPC-H_on_Hive/data/tpch_prepare_data.sh</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/</span><br><span class="line"></span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/customer</span><br><span class="line"><span 
class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/lineitem</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/nation</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/orders</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/part</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/partsupp</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/region</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -mkdir /tpch/supplier</span><br><span class="line"></span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -copyFromLocal customer.tbl /tpch/customer/</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -copyFromLocal lineitem.tbl /tpch/lineitem/</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -copyFromLocal nation.tbl /tpch/nation/</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -copyFromLocal orders.tbl /tpch/orders/</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -copyFromLocal part.tbl /tpch/part/</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -copyFromLocal partsupp.tbl /tpch/partsupp/</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -copyFromLocal region.tbl /tpch/region/</span><br><span class="line"><span class="variable">$HADOOP_HOME</span>/bin/hadoop fs -copyFromLocal supplier.tbl /tpch/supplier/</span><br></pre></td></tr></table></figure><p>check the hdfs status</p><figure class="highlight ebnf"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="attribute">hdfs dfsadmin -report</span></span><br></pre></td></tr></table></figure><p>now you can check the dfs used capabilities:</p><figure class="highlight ldif"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span 
class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br></pre></td><td class="code"><pre><span class="line"><span class="attribute">Configured Capacity</span>: 126421438464 (117.74 GB)</span><br><span class="line"><span class="attribute">Present Capacity</span>: 90621472768 (84.40 GB)</span><br><span class="line"><span class="attribute">DFS Remaining</span>: 50902175744 (47.41 GB)</span><br><span class="line"><span class="attribute">DFS Used</span>: 39719297024 (36.99 GB)</span><br><span class="line"><span class="attribute">DFS Used%</span>: 43.83%</span><br><span class="line"><span class="attribute">Under replicated blocks</span>: 6</span><br><span class="line"><span class="attribute">Blocks with corrupt replicas</span>: 0</span><br><span class="line"><span class="attribute">Missing blocks</span>: 0</span><br><span class="line"><span class="attribute">Missing blocks (with replication factor 1)</span>: 0</span><br><span class="line"></span><br><span class="line"><span class="literal">-------------------------------------------------</span></span><br><span class="line"><span class="attribute">Live datanodes (3):</span></span><br><span class="line"><span class="attribute"></span></span><br><span class="line"><span class="attribute">Name</span>: 10.0.0.7:50010 (hadoop3.my-attachable-overlaynet)</span><br><span class="line"><span class="attribute">Hostname</span>: hadoop3</span><br><span class="line"><span class="attribute">Decommission Status </span>: Normal</span><br><span class="line"><span class="attribute">Configured Capacity</span>: 42140479488 (39.25 GB)</span><br><span class="line"><span class="attribute">DFS Used</span>: 13239783424 (12.33 GB)</span><br><span class="line"><span class="attribute">Non DFS Used</span>: 11855196160 (11.04 GB)</span><br><span class="line"><span class="attribute">DFS Remaining</span>: 17045499904 (15.87 GB)</span><br><span class="line"><span class="attribute">DFS Used%</span>: 31.42%</span><br><span class="line"><span class="attribute">DFS Remaining%</span>: 40.45%</span><br><span class="line"><span class="attribute">Configured Cache Capacity</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Used</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Remaining</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Used%</span>: 100.00%</span><br><span class="line"><span class="attribute">Cache Remaining%</span>: 0.00%</span><br><span class="line"><span class="attribute">Xceivers</span>: 1</span><br><span class="line"><span class="attribute">Last contact</span>: Mon Sep 07 05:57:10 UTC 2020</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="attribute">Name</span>: 10.0.0.5:50010 (hadoop2.my-attachable-overlaynet)</span><br><span class="line"><span class="attribute">Hostname</span>: 
hadoop2</span><br><span class="line"><span class="attribute">Decommission Status </span>: Normal</span><br><span class="line"><span class="attribute">Configured Capacity</span>: 42140479488 (39.25 GB)</span><br><span class="line"><span class="attribute">DFS Used</span>: 13239767040 (12.33 GB)</span><br><span class="line"><span class="attribute">Non DFS Used</span>: 11703111680 (10.90 GB)</span><br><span class="line"><span class="attribute">DFS Remaining</span>: 17197600768 (16.02 GB)</span><br><span class="line"><span class="attribute">DFS Used%</span>: 31.42%</span><br><span class="line"><span class="attribute">DFS Remaining%</span>: 40.81%</span><br><span class="line"><span class="attribute">Configured Cache Capacity</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Used</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Remaining</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Used%</span>: 100.00%</span><br><span class="line"><span class="attribute">Cache Remaining%</span>: 0.00%</span><br><span class="line"><span class="attribute">Xceivers</span>: 1</span><br><span class="line"><span class="attribute">Last contact</span>: Mon Sep 07 05:57:10 UTC 2020</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="attribute">Name</span>: 10.0.0.18:50010 (hadoop1.my-attachable-overlaynet)</span><br><span class="line"><span class="attribute">Hostname</span>: hadoop1</span><br><span class="line"><span class="attribute">Decommission Status </span>: Normal</span><br><span class="line"><span class="attribute">Configured Capacity</span>: 42140479488 (39.25 GB)</span><br><span class="line"><span class="attribute">DFS Used</span>: 13239746560 (12.33 GB)</span><br><span class="line"><span class="attribute">Non DFS Used</span>: 12241657856 (11.40 GB)</span><br><span class="line"><span class="attribute">DFS Remaining</span>: 16659075072 (15.51 GB)</span><br><span class="line"><span class="attribute">DFS Used%</span>: 31.42%</span><br><span class="line"><span class="attribute">DFS Remaining%</span>: 39.53%</span><br><span class="line"><span class="attribute">Configured Cache Capacity</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Used</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Remaining</span>: 0 (0 B)</span><br><span class="line"><span class="attribute">Cache Used%</span>: 100.00%</span><br><span class="line"><span class="attribute">Cache Remaining%</span>: 0.00%</span><br><span class="line"><span class="attribute">Xceivers</span>: 1</span><br><span class="line"><span class="attribute">Last contact</span>: Mon Sep 07 05:57:10 UTC 2020</span><br></pre></td></tr></table></figure><ul><li>run tpch-hive script<br>you can exec queries one by one,e.g. 
<figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="selector-tag">hive</span> <span class="selector-tag">-f</span> <span class="selector-tag">q15_top_supplier</span><span class="selector-class">.hive</span></span><br></pre></td></tr></table></figure></li></ul><figure class="highlight stylus"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">cd /bigdata/TPC-H_on_Hive/tpch</span><br><span class="line">ls</span><br><span class="line"></span><br><span class="line">q10_returned_item<span class="selector-class">.hive</span> q15_top_supplier<span class="selector-class">.hive</span> q1_pricing_summary_report<span class="selector-class">.hive</span> q3_shipping_priority<span class="selector-class">.hive</span> q8_national_market_share.hive</span><br><span class="line">q11_important_stock<span class="selector-class">.hive</span> q16_parts_supplier_relationship<span class="selector-class">.hive</span> q20_potential_part_promotion<span class="selector-class">.hive</span> q4_order_priority<span class="selector-class">.hive</span> q9_product_type_profit.hive</span><br><span class="line">q12_shipping<span class="selector-class">.hive</span> q17_small_quantity_order_revenue<span class="selector-class">.hive</span> q21_suppliers_who_kept_orders_waiting<span class="selector-class">.hive</span> q5_local_supplier_volume.hive</span><br><span class="line">q13_customer_distribution<span class="selector-class">.hive</span> q18_large_volume_customer<span class="selector-class">.hive</span> q22_global_sales_opportunity<span class="selector-class">.hive</span> q6_forecast_revenue_change.hive</span><br><span class="line">q14_promotion_effect<span class="selector-class">.hive</span> q19_discounted_revenue<span class="selector-class">.hive</span> q2_minimum_cost_supplier<span class="selector-class">.hive</span> q7_volume_shipping.hive</span><br></pre></td></tr></table></figure><p>or you can sequentialy execute one by one</p><p>cat benchmark.conf</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span 
class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="meta">#!/usr/bin/env bash</span></span><br><span class="line"></span><br><span class="line">BASE_DIR=<span class="string">"/bigdata/TPC-H_on_Hive/"</span></span><br><span class="line"></span><br><span class="line">TIME_CMD=<span class="string">"/usr/bin/time -f Time:%e"</span></span><br><span class="line"></span><br><span class="line">NUM_OF_TRIALS=1</span><br><span class="line"></span><br><span class="line">LOG_FILE=<span class="string">"/bigdata/TPC-H_on_Hive/benchmark.log"</span></span><br><span class="line"></span><br><span class="line">LOG_DIR=<span class="string">"<span class="variable">$BASE_DIR</span>/logs"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># hadoop</span></span><br><span class="line">HADOOP_CMD=<span class="string">"<span class="variable">$HADOOP_HOME</span>/bin/hadoop"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># hive</span></span><br><span class="line">HIVE_CMD=<span class="string">"<span class="variable">$HIVE_HOME</span>/bin/hive"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># hive tpch queries</span></span><br><span class="line"><span class="comment"># hive all benchmark queries</span></span><br><span class="line">HIVE_TPCH_QUERIES_ALL=( \</span><br><span class="line"> <span class="string">"tpch/q1_pricing_summary_report.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q2_minimum_cost_supplier.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q3_shipping_priority.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q4_order_priority.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q5_local_supplier_volume.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q6_forecast_revenue_change.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q7_volume_shipping.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q8_national_market_share.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q9_product_type_profit.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q10_returned_item.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q11_important_stock.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q12_shipping.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q13_customer_distribution.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q14_promotion_effect.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q15_top_supplier.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q16_parts_supplier_relationship.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q17_small_quantity_order_revenue.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q18_large_volume_customer.hive"</span> \</span><br><span class="line"> <span class="string">"/tpch/q19_discounted_revenue.hive"</span> \</span><br><span 
class="line"> <span class="string">"tpch/q20_potential_part_promotion.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q21_suppliers_who_kept_orders_waiting.hive"</span> \</span><br><span class="line"> <span class="string">"tpch/q22_global_sales_opportunity.hive"</span> \</span><br><span class="line">)</span><br></pre></td></tr></table></figure><p>cat tpch_benchmark.sh</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#!/usr/bin/env bash</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># set up configurations</span></span><br><span class="line"><span class="built_in">source</span> /bigdata/TPC-H_on_Hive/benchmark.conf;</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> [ -e <span class="string">"<span class="variable">$LOG_FILE</span>"</span> ]; <span class="keyword">then</span></span><br><span class="line"> timestamp=`date <span class="string">"+%F-%R"</span> --reference=<span class="variable">$LOG_FILE</span>`</span><br><span class="line"> backupFile=<span class="string">"<span class="variable">$LOG_FILE</span>.<span class="variable">$timestamp</span>"</span></span><br><span class="line"> mv <span class="variable">$LOG_FILE</span> <span class="variable">$LOG_DIR</span>/<span class="variable">$backupFile</span></span><br><span class="line"><span class="keyword">fi</span></span><br><span class="line"></span><br><span class="line"><span class="built_in">echo</span> <span class="string">""</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">"***********************************************"</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">"* PC-H benchmark on Hive *"</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">"***********************************************"</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">" "</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">"Running Hive from <span class="variable">$HIVE_HOME</span>"</span> | tee -a <span class="variable">$LOG_FILE</span></span><br><span class="line"><span 
class="built_in">echo</span> <span class="string">"Running Hadoop from <span class="variable">$HADOOP_HOME</span>"</span> | tee -a <span class="variable">$LOG_FILE</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">"See <span class="variable">$LOG_FILE</span> for more details of query errors."</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">""</span></span><br><span class="line"></span><br><span class="line">trial=0</span><br><span class="line"><span class="keyword">while</span> [ <span class="variable">$trial</span> -lt <span class="variable">$NUM_OF_TRIALS</span> ]; <span class="keyword">do</span></span><br><span class="line"> trial=`expr <span class="variable">$trial</span> + 1`</span><br><span class="line"> <span class="built_in">echo</span> <span class="string">"Executing Trial #<span class="variable">$trial</span> of <span class="variable">$NUM_OF_TRIALS</span> trial(s)..."</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">for</span> query <span class="keyword">in</span> <span class="variable">${HIVE_TPCH_QUERIES_ALL[@]}</span>; <span class="keyword">do</span></span><br><span class="line"> <span class="built_in">echo</span> <span class="string">"Running Hive query: <span class="variable">$query</span>"</span> | tee -a <span class="variable">$LOG_FILE</span></span><br><span class="line"> <span class="variable">$TIME_CMD</span> <span class="variable">$HIVE_CMD</span> -f <span class="variable">$BASE_DIR</span>/<span class="variable">$query</span> 2>&1 | tee -a <span class="variable">$LOG_FILE</span> | grep <span class="string">'^Time:'</span></span><br><span class="line"> returncode=<span class="variable">${PIPESTATUS[0]}</span></span><br><span class="line"> <span class="keyword">if</span> [ <span class="variable">$returncode</span> -ne 0 ]; <span class="keyword">then</span></span><br><span class="line"> <span class="built_in">echo</span> <span class="string">"ABOVE QUERY FAILED:<span class="variable">$returncode</span>"</span></span><br><span class="line"> <span class="keyword">fi</span></span><br><span class="line"> <span class="keyword">done</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">done</span> <span class="comment"># TRIAL</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">"***********************************************"</span></span><br><span class="line"><span class="built_in">echo</span> <span class="string">""</span></span><br></pre></td></tr></table></figure><p>stop_tpch_benchmark.sh</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#! /bin/bash</span></span><br><span class="line"><span class="built_in">kill</span> $(ps aux |grep <span class="string">'[t]pch'</span> | awk <span class="string">'{print $2}'</span>)</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<h2 id="set-up-containers-that-can-ping-each-other-even-cross-different-host"><a href="#set-up-containers-that-can-ping-each-other-even-cros
</summary>
<category term="etcd" scheme="http://yoursite.com/tags/etcd/"/>
<category term="flannel" scheme="http://yoursite.com/tags/flannel/"/>
<category term="overlay" scheme="http://yoursite.com/tags/overlay/"/>
<category term="pouch" scheme="http://yoursite.com/tags/pouch/"/>
<category term="docker" scheme="http://yoursite.com/tags/docker/"/>
<category term="container" scheme="http://yoursite.com/tags/container/"/>
<category term="TPCH" scheme="http://yoursite.com/tags/TPCH/"/>
<category term="hadoop" scheme="http://yoursite.com/tags/hadoop/"/>
<category term="hive" scheme="http://yoursite.com/tags/hive/"/>
<category term="mysql" scheme="http://yoursite.com/tags/mysql/"/>
<category term="jdk" scheme="http://yoursite.com/tags/jdk/"/>
<category term="workload" scheme="http://yoursite.com/tags/workload/"/>
</entry>
<entry>
<title>Kata Container — VM with container APIs</title>
<link href="http://yoursite.com/2020/09/04/kata-container/"/>
<id>http://yoursite.com/2020/09/04/kata-container/</id>
<published>2020-09-04T07:24:23.000Z</published>
<updated>2020-09-04T08:09:41.168Z</updated>
<content type="html"><![CDATA[<h2 id="kata-environment"><a href="#kata-environment" class="headerlink" title="kata environment"></a>kata environment</h2><ul><li><p>install kata on centos7</p><figure class="highlight elixir"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">source /etc/os-release</span><br><span class="line"></span><br><span class="line">sudo yum -y install yum-utils</span><br><span class="line"></span><br><span class="line">sudo -E yum-config-manager --add-repo <span class="string">"http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/CentOS_7/home:katacontainers:releases:${ARCH}:${BRANCH}.repo"</span></span><br><span class="line"></span><br><span class="line">e.g.</span><br><span class="line"></span><br><span class="line">sudo -E yum-config-manager --add-repo <span class="symbol">http:</span>/<span class="regexp">/download.opensuse.org/repositories</span><span class="regexp">/home:/katacontainers</span><span class="symbol">:/releases</span><span class="symbol">:/x86_64</span><span class="symbol">:/master/CentOS_7/home</span><span class="symbol">:katacontainers</span><span class="symbol">:releases</span><span class="symbol">:x86_64</span><span class="symbol">:master</span>.repo</span><br><span class="line"></span><br><span class="line">sudo -E yum -y install kata-runtime kata-proxy kata-shim</span><br></pre></td></tr></table></figure></li><li><p>check kata status</p><figure class="highlight routeros"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">kata-runtime kata-check</span><br><span class="line"></span><br><span class="line">System is capable of running Kata Containers</span><br><span class="line">System can currently create Kata Containers</span><br></pre></td></tr></table></figure></li><li><p>configure kata</p></li></ul><figure class="highlight groovy"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kata-<span class="string">path:</span> <span class="regexp">/usr/</span>bin/kata-runtime</span><br><span class="line">kata-<span class="string">config:</span> <span class="regexp">/usr/</span>share<span class="regexp">/defaults/</span>kata-containers/configuration.toml</span><br></pre></td></tr></table></figure><p><em>Sandbox_cgroup_only -> true</em> <a href="https://github.com/kata-containers/documentation/blob/74ebc0945ed33817f851c12a0dbf0c37632d161a/design/host-cgroups.md" target="_blank" rel="noopener">Host cgroup management<br>-include kata-qemu into kata cgroup taskset path</a></p><p><em>default_vcpus = -1</em> <a href="https://github.com/kata-containers/documentation/blob/89120e8d8a6429bd631126c9bf32cefb17cd5652/design/vcpu-handling.md" target="_blank" rel="noopener">to let cpuset-cpus work</a></p><ul><li>running docker/pouch with kata,<figure class="highlight dts"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td 
class="code"><pre><span class="line">vi <span class="meta-keyword">/etc/</span>docker/daemon.json </span><br><span class="line">vi <span class="meta-keyword">/etc/</span>pouch/config.json</span><br></pre></td></tr></table></figure></li></ul><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line">#docker daemon</span><br><span class="line">{</span><br><span class="line"> <span class="attr">"runtimes"</span>: {</span><br><span class="line"> <span class="attr">"kata-qemu"</span>: {</span><br><span class="line"> <span class="attr">"path"</span>: <span class="string">"/usr/bin/kata-runtime"</span></span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">#pouch daemon</span><br><span class="line">{</span><br><span class="line"> <span class="attr">"volume-config"</span>: {},</span><br><span class="line"> <span class="attr">"network-config"</span>: {</span><br><span class="line"> <span class="attr">"bridge-config"</span>: {</span><br><span class="line"> <span class="attr">"bip"</span>: <span class="string">"192.168.38.1/24"</span>,</span><br><span class="line"> <span class="attr">"default-gateway"</span>: <span class="string">"192.168.38.1"</span>,</span><br><span class="line"> <span class="attr">"iptables"</span>: <span class="literal">false</span>,</span><br><span class="line"> <span class="attr">"ipforward"</span>: <span class="literal">false</span>,</span><br><span class="line"> <span class="attr">"userland-proxy"</span>: <span class="literal">false</span></span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> <span class="attr">"cri-config"</span>: {},</span><br><span class="line"> <span class="attr">"TLS"</span>: {},</span><br><span class="line"> <span class="attr">"default-log-config"</span>: {},</span><br><span class="line"> <span class="attr">"registry-service"</span>: {},</span><br><span class="line"> <span class="attr">"add-runtime"</span>: {</span><br><span class="line"> <span class="attr">"kata-qemu"</span>: {</span><br><span class="line"> <span class="attr">"path"</span>: <span class="string">"/opt/kata/bin/kata-runtime"</span>,</span><br><span class="line"> <span class="attr">"runtimeArgs"</span>: <span class="literal">null</span></span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>check current 
runtime<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">docker info</span><br><span class="line">...</span><br><span class="line">runtimes runc,kata-qemu</span><br><span class="line">...</span><br></pre></td></tr></table></figure></li></ul><h2 id="kata-usage"><a href="#kata-usage" class="headerlink" title="kata usage"></a>kata usage</h2><ul><li><p>set the runtime when starting a container; the default is runc</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker run -d --name name1 --runtime kata-qemu images1 cmd1</span><br></pre></td></tr></table></figure></li><li><p>the default is 1 vCPU and 2g of VM memory, so set the resources when starting a kata container</p></li></ul><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">docker run -d --name name1 --runtime kata-qemu --cpus 8 -m 16g images1 cmd1</span><br><span class="line"></span><br><span class="line">or</span><br><span class="line"></span><br><span class="line">docker run -d --name name1 --runtime kata-qemu --cpu-period 100000 --cpu-quota 800000 -m 16g images1 cmd1</span><br></pre></td></tr></table></figure><ul><li>if you want to use --cpuset-cpus, you should also set --cpus or --cpu-quota first, otherwise core pinning will fail</li></ul><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker run -d --name name1 --runtime kata-qemu --cpus 8 --cpuset-cpus 7-15 -m 16g images1 cmd1</span><br></pre></td></tr></table></figure><ul><li>if you want to check that core pinning really works, you can look inside the container<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">docker exec -it name1 nproc</span><br><span class="line"></span><br><span class="line">8</span><br><span class="line"></span><br><span class="line">docker exec -it name1 cat /proc/self/status | grep Cpus_allowed_list</span><br><span class="line"></span><br><span class="line">7-15</span><br></pre></td></tr></table></figure></li></ul>
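<p>A quick way to convince yourself that a kata container is really backed by its own VM is to compare kernel versions (a sketch, assuming a busybox image is available locally):</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"># kernel on the host</span><br><span class="line">uname -r</span><br><span class="line"></span><br><span class="line"># kernel inside the kata guest: expect a different (guest) kernel version</span><br><span class="line">docker run --rm --runtime kata-qemu busybox uname -r</span><br></pre></td></tr></table></figure>]]></content>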
<summary type="html">
<h2 id="kata-environment"><a href="#kata-environment" class="headerlink" title="kata environment"></a>kata environment</h2><ul>
<li><p>insta
</summary>
<category term="pouch" scheme="http://yoursite.com/tags/pouch/"/>
<category term="docker" scheme="http://yoursite.com/tags/docker/"/>
<category term="kata" scheme="http://yoursite.com/tags/kata/"/>
<category term="conatiner" scheme="http://yoursite.com/tags/conatiner/"/>
<category term="perf" scheme="http://yoursite.com/tags/perf/"/>
<category term="cgroup" scheme="http://yoursite.com/tags/cgroup/"/>
<category term="cycles" scheme="http://yoursite.com/tags/cycles/"/>
<category term="cpuset" scheme="http://yoursite.com/tags/cpuset/"/>
</entry>
<entry>
<title>How to use perf-stat to collect events in both runc and runv environments</title>
<link href="http://yoursite.com/2020/09/04/perf-stat/"/>
<id>http://yoursite.com/2020/09/04/perf-stat/</id>
<published>2020-09-04T05:24:23.000Z</published>
<updated>2020-09-04T07:59:21.732Z</updated>
<content type="html"><![CDATA[<h2 id="How-to-use-perf-stat"><a href="#How-to-use-perf-stat" class="headerlink" title="How to use perf stat"></a>How to use perf stat</h2><ul><li>basic usage <a href="https://man7.org/linux/man-pages/man1/perf-stat.1.html" target="_blank" rel="noopener">man-perf</a></li></ul><figure class="highlight fsharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">perf stat [-e <EVENT> | --event=EVENT] [-a] <command></span><br><span class="line">perf stat [-e <EVENT> | --event=EVENT] [-a] — <command> <span class="meta">[<options>]</span></span><br><span class="line">perf stat [-e <EVENT> | --event=EVENT] [-a] record [-o file] — <command> <span class="meta">[<options>]</span></span><br></pre></td></tr></table></figure><ul><li><p>examples</p><ul><li><p>specify a PMU counter, and output every 5 seconds</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf <span class="built_in">stat</span> -e cycles -I 5000</span><br></pre></td></tr></table></figure></li><li><p>output to a file, and setting sep to ‘,’</p><figure class="highlight stylus"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e cycles -o perf<span class="selector-class">.csv</span> -x , -I <span class="number">5000</span></span><br></pre></td></tr></table></figure></li><li><p>if you want more details please use -vv</p><figure class="highlight stylus"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e cycles -o perf<span class="selector-class">.csv</span> -x , -I <span class="number">5000</span> -vv</span><br></pre></td></tr></table></figure></li><li><p>specify pid</p><figure class="highlight stylus"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e cycles -o perf<span class="selector-class">.csv</span> -x , -I <span class="number">5000</span> -<span class="selector-tag">p</span> <span class="number">50082</span></span><br></pre></td></tr></table></figure></li><li><p>specify docker runc cgroup, also should with system mode “-a”</p><figure class="highlight llvm"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e cycles -o perf.csv -<span class="keyword">x</span> , -I <span class="number">5000</span> -G docker/b<span class="number">3</span>f<span class="number">9e0421</span>d<span class="number">084</span>a<span class="number">3</span>b<span class="number">02</span><span class="keyword">c</span><span class="number">811791214e932395</span>ed<span class="number">463560495</span><span class="keyword">ccc</span><span class="number">31e7</span>ce<span class="number">7</span>ead<span class="number">889</span>ac -a</span><br></pre></td></tr></table></figure></li><li><p>specify pouch runc cgroup, also should with system mode “-a”</p><figure class="highlight llvm"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e cycles -o perf.csv -<span class="keyword">x</span> , -I <span class="number">5000</span> -G <span class="keyword">default</span>/b<span class="number">3</span>f<span 
class="number">9e0421</span>d<span class="number">084</span>a<span class="number">3</span>b<span class="number">02</span><span class="keyword">c</span><span class="number">811791214e932395</span>ed<span class="number">463560495</span><span class="keyword">ccc</span><span class="number">31e7</span>ce<span class="number">7</span>ead<span class="number">889</span>ac -a</span><br></pre></td></tr></table></figure></li><li><p>specify docker runv cgroup, also should with system mode “-a”</p><figure class="highlight llvm"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e cycles -o perf.csv -<span class="keyword">x</span> , -I <span class="number">5000</span> -G docker/kata_b<span class="number">3</span>f<span class="number">9e0421</span>d<span class="number">084</span>a<span class="number">3</span>b<span class="number">02</span><span class="keyword">c</span><span class="number">811791214e932395</span>ed<span class="number">463560495</span><span class="keyword">ccc</span><span class="number">31e7</span>ce<span class="number">7</span>ead<span class="number">889</span>ac -a</span><br></pre></td></tr></table></figure></li></ul><p><em>check /sys/fs/cgroup/per_event/docker/xxx for cgroup path</em></p></li></ul><ul><li><p>no-aggr mode, so you can check details of every hyperthreading cores, this <strong>no-aggr</strong> shouldn’t work with cgroup or pid</p><figure class="highlight stylus"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e cycles,instructions -o perf<span class="selector-class">.csv</span> -x , -I <span class="number">5000</span> -<span class="selector-tag">a</span> --no-aggr</span><br></pre></td></tr></table></figure></li><li><p>for multi-plexing using an event group(3 fix counters + 4 other counters,no more than 4, otherwise a new envent group,the fix counters should be rewrite),e.g. 
-e {e1,e2,e3} -G c1</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e '{inst_retired.any,cpu_clk_unhalted.thread,cpu_clk_unhalted.ref_tsc,longest_lat_cache.miss,offcore_requests_outstanding.l3_miss_demand_data_rd,offcore_requests.l3_miss_demand_data_rd}' -o perf.csv -x , -I 5000 -G docker/b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac -a</span><br></pre></td></tr></table></figure><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">perf stat -e '{inst_retired.any,cpu_clk_unhalted.thread,cpu_clk_unhalted.ref_tsc,longest_lat_cache.miss,offcore_requests_outstanding.l3_miss_demand_data_rd,offcore_requests.l3_miss_demand_data_rd},{inst_retired.any,cpu_clk_unhalted.thread,cpu_clk_unhalted.ref_tsc,dtlb_load_misses.walk_pending,dtlb_store_misses.walk_pending,itlb_misses.walk_pending:GH,ept.walk_pending}' -o perf.csv -x , -I 5000 -G docker/b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac -a</span><br></pre></td></tr></table></figure></li><li><p>if you want to collect more than one cgroup, repeat the event groups once per cgroup and name each cgroup once per event group, e.g. -e eg1,eg2,eg1,eg2 -G c1,c1,c2,c2, or e.g. 
-e eg1,eg2,eg3,eg1,eg2,eg3,eg1,eg2,eg3 -G c1,c1,c1,c2,c2,c2,c3,c3,c3</p><figure class="highlight autohotkey"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br></pre></td><td class="code"><pre><span class="line"><span class="symbol"> sudo perf stat -e '{inst_retired.any:GH,cpu_clk_unhalted.thread:GH,cpu_clk_unhalted.ref_tsc:GH,longest_lat_cache.miss:GH,offcore_requests_outstanding.l3_miss_demand_data_rd:GH,offcore_requests.l3_miss_demand_data_rd:</span>GH},</span><br><span class="line"><span class="symbol">{inst_retired.any:GH,cpu_clk_unhalted.thread:GH,cpu_clk_unhalted.ref_tsc:GH,dtlb_load_misses.walk_pending:GH,dtlb_store_misses.walk_pending:GH,itlb_misses.walk_pending:GH,ept.walk_pending:</span>GH},</span><br><span class="line"><span class="symbol">{inst_retired.any:GH,cpu_clk_unhalted.thread:GH,cpu_clk_unhalted.ref_tsc:GH,longest_lat_cache.miss:GH,offcore_requests_outstanding.l3_miss_demand_data_rd:GH,offcore_requests.l3_miss_demand_data_rd:</span>GH},</span><br><span class="line"><span class="symbol">{inst_retired.any:GH,cpu_clk_unhalted.thread:GH,cpu_clk_unhalted.ref_tsc:GH,dtlb_load_misses.walk_pending:GH,dtlb_store_misses.walk_pending:GH,itlb_misses.walk_pending:GH,ept.walk_pending:</span>GH}' </span><br><span class="line">-G </span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span 
class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_6cb9fc1fdc013fbb9320e9051681f1f62f8b4367e5290ea372f134b17574dbf4</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span>,</span><br><span class="line">docker/kat<span class="built_in">a_b3f9e0421d084a3b02c811791214e932395ed463560495ccc31e7ce7ead889ac</span></span><br><span class="line"></span><br><span class="line"> -<span class="literal">a</span> -I <span class="number">5000</span></span><br></pre></td></tr></table></figure><h2 id="The-differnce-between-perf-kvm-stat-and-perf-stat"><a href="#The-differnce-between-perf-kvm-stat-and-perf-stat" class="headerlink" title="The differnce between perf kvm stat and perf stat"></a>The differnce between <strong>perf kvm stat</strong> and <strong>perf stat</strong></h2><p><a href="https://man7.org/linux/man-pages/man2/perf_event_open.2.html" target="_blank" rel="noopener">perf_event_open manual </a></p><figure class="highlight cpp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span 
class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">struct</span> <span class="title">perf_event_attr</span> {</span></span><br><span class="line"> __u32 type; <span class="comment">/* Type of event */</span></span><br><span class="line"> __u32 size; <span class="comment">/* Size of attribute structure */</span></span><br><span class="line"> __u64 config; <span class="comment">/* Type-specific configuration */</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">union</span> {</span><br><span class="line"> __u64 sample_period; <span class="comment">/* Period of sampling */</span></span><br><span class="line"> __u64 sample_freq; <span class="comment">/* Frequency of sampling */</span></span><br><span class="line"> };</span><br><span class="line"></span><br><span class="line"> __u64 sample_type; <span class="comment">/* Specifies values included in sample */</span></span><br><span class="line"> __u64 read_format; <span class="comment">/* Specifies values returned in read */</span></span><br><span class="line"></span><br><span class="line"> __u64 disabled : <span class="number">1</span>, <span class="comment">/* off by default */</span></span><br><span class="line"> inherit : <span class="number">1</span>, <span class="comment">/* children inherit it */</span></span><br><span class="line"> pinned : <span class="number">1</span>, <span class="comment">/* must always be on PMU */</span></span><br><span class="line"> exclusive : <span class="number">1</span>, <span class="comment">/* only group on PMU */</span></span><br><span class="line"> exclude_user : <span class="number">1</span>, <span class="comment">/* don't count user */</span></span><br><span class="line"> exclude_kernel : <span class="number">1</span>, <span class="comment">/* don't count kernel */</span></span><br><span class="line"> exclude_hv : <span class="number">1</span>, <span class="comment">/* don't count hypervisor */</span></span><br><span class="line"> exclude_idle : <span class="number">1</span>, <span class="comment">/* don't count when idle */</span></span><br><span class="line"> mmap : <span class="number">1</span>, <span class="comment">/* include mmap data */</span></span><br><span class="line"> comm : <span class="number">1</span>, <span class="comment">/* include comm data */</span></span><br><span class="line"> freq : <span class="number">1</span>, <span class="comment">/* use freq, not period */</span></span><br><span class="line"> inherit_stat : <span class="number">1</span>, <span class="comment">/* per task counts */</span></span><br><span class="line"> enable_on_exec : <span class="number">1</span>, <span 
class="comment">/* next exec enables */</span></span><br><span class="line"> task : <span class="number">1</span>, <span class="comment">/* trace fork/exit */</span></span><br><span class="line"> watermark : <span class="number">1</span>, <span class="comment">/* wakeup_watermark */</span></span><br><span class="line"> precise_ip : <span class="number">2</span>, <span class="comment">/* skid constraint */</span></span><br><span class="line"> mmap_data : <span class="number">1</span>, <span class="comment">/* non-exec mmap data */</span></span><br><span class="line"> sample_id_all : <span class="number">1</span>, <span class="comment">/* sample_type all events */</span></span><br><span class="line"> exclude_host : <span class="number">1</span>, <span class="comment">/* don't count in host */</span></span><br><span class="line"> exclude_guest : <span class="number">1</span>, <span class="comment">/* don't count in guest */</span></span><br><span class="line"> exclude_callchain_kernel : <span class="number">1</span>,</span><br><span class="line"> <span class="comment">/* exclude kernel callchains */</span></span><br><span class="line"> exclude_callchain_user : <span class="number">1</span>,</span><br><span class="line"> <span class="comment">/* exclude user callchains */</span></span><br><span class="line"> mmap2 : <span class="number">1</span>, <span class="comment">/* include mmap with inode data */</span></span><br><span class="line"> comm_exec : <span class="number">1</span>, <span class="comment">/* flag comm events that are</span></span><br><span class="line"><span class="comment"> due to exec */</span></span><br><span class="line"> use_clockid : <span class="number">1</span>, <span class="comment">/* use clockid for time fields */</span></span><br><span class="line"> context_switch : <span class="number">1</span>, <span class="comment">/* context switch data */</span></span><br><span class="line"></span><br><span class="line"> __reserved_1 : <span class="number">37</span>;</span><br></pre></td></tr></table></figure></li></ul><blockquote><pre><code>exclude_host (since Linux 3.2) When conducting measurements that include processes running VM instances (i.e., have executed a KVM_RUN ioctl(2)), only mea‐ sure events happening inside a guest instance. This is only meaningful outside the guests; this setting does not change counts gathered inside of a guest. Currently, this function‐ ality is x86 only.</code></pre></blockquote><blockquote><pre><code>exclude_guest (since Linux 3.2) When conducting measurements that include processes running VM instances (i.e., have executed a KVM_RUN ioctl(2)), do not measure events happening inside guest instances. This is only meaningful outside the guests; this setting does not change counts gathered inside of a guest. 
Currently, this functionality is x86 only.</code></pre></blockquote><p>If the containers are running in a runv environment, for example <em>kata-qemu</em>, pay special attention to the <em>perf_event_attr</em> attributes <em>exclude_guest</em> and <em>exclude_host</em>.</p><p>Some examples:</p><ul><li>perf stat excluding the guest OS (the default): the cycles value is about 20*10^6, which is incorrect; see perf_stat_exclude_guest.log</li></ul><figure class="highlight"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line">#perf stat -e cycles -a -I 1000 -vv -C 1 -p 54927 sleep 5</span><br><span class="line">Using CPUID GenuineIntel-6-55</span><br><span class="line">intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch</span><br><span class="line">------------------------------------------------------------</span><br><span class="line">perf_event_attr:</span><br><span class="line"> size 112</span><br><span class="line"> sample_type IDENTIFIER</span><br><span class="line"> read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING</span><br><span class="line"> disabled 1</span><br><span class="line"> inherit 1</span><br><span class="line"> exclude_guest 1</span><br><span class="line">------------------------------------------------------------</span><br><span class="line">sys_perf_event_open: pid 54927 cpu -1 group_fd -1 flags 0x8 = 3</span><br><span class="line">sys_perf_event_open: pid 54928 cpu -1 group_fd -1 flags 0x8 = 4</span><br><span class="line">sys_perf_event_open: pid 54931 cpu -1 group_fd -1 flags 0x8 = 5</span><br><span class="line">sys_perf_event_open: pid 54932 cpu -1 group_fd -1 flags 0x8 = 7</span><br><span class="line">sys_perf_event_open: pid 54957 cpu -1 group_fd -1 flags 0x8 = 8</span><br><span class="line">cycles: 0: 163568 54163 54163</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 10762722 548684965 548684965</span><br><span class="line">cycles: 0: 9167118 450593782 450593782</span><br><span class="line">cycles: 20093408 999332910 999332910</span><br><span class="line"># time counts unit events</span><br><span class="line"> 1.000140819 20,093,408 cycles</span><br><span class="line">cycles: 0: 329527 107629 107629</span><br><span class="line">cycles: 0: 0 0 0</span><br><span
class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 19858838 980411641 980411641</span><br><span class="line">cycles: 0: 20321440 1018679246 1018679246</span><br><span class="line">cycles: 20416397 999865606 999865606</span><br><span class="line"> 2.000282256 20,416,397 cycles</span><br><span class="line">cycles: 0: 489050 159391 159391</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 30871376 1534625512 1534625512</span><br><span class="line">cycles: 0: 28979494 1463296617 1463296617</span><br><span class="line">cycles: 19830115 998883004 998883004</span><br></pre></td></tr></table></figure><ul><li>perf stat include guest, the cycle value close to 3200* 10^6, which is correct</li></ul><figure class="highlight"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line"># perf stat -e cycles:GH -a -I 1000 -vv -C 1 -p 54927 sleep 5</span><br><span class="line">Using CPUID GenuineIntel-6-55</span><br><span class="line">intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch</span><br><span class="line">------------------------------------------------------------</span><br><span class="line">perf_event_attr:</span><br><span class="line"> size 112</span><br><span class="line"> sample_type IDENTIFIER</span><br><span class="line"> read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING</span><br><span class="line"> disabled 1</span><br><span class="line"> inherit 1</span><br><span class="line">------------------------------------------------------------</span><br><span class="line">sys_perf_event_open: pid 54927 cpu -1 group_fd -1 flags 0x8 = 3</span><br><span class="line">sys_perf_event_open: pid 54928 cpu -1 group_fd -1 flags 0x8 = 4</span><br><span class="line">sys_perf_event_open: pid 54931 cpu -1 group_fd -1 flags 0x8 = 5</span><br><span class="line">sys_perf_event_open: pid 54932 cpu -1 group_fd -1 flags 0x8 = 7</span><br><span class="line">sys_perf_event_open: pid 54957 cpu -1 group_fd -1 flags 0x8 = 8</span><br><span class="line">cycles:GH: 0: 150607 49705 49705</span><br><span class="line">cycles:GH: 0: 0 0 0</span><br><span class="line">cycles:GH: 0: 0 0 0</span><br><span class="line">cycles:GH: 0: 1851603233 578636477 578636477</span><br><span class="line">cycles:GH: 0: 1349466048 421718356 
421718356</span><br><span class="line">cycles:GH: 3201219888 1000404538 1000404538</span><br><span class="line"># time counts unit events</span><br><span class="line"> 1.000141460 3,201,219,888 cycles:GH</span><br><span class="line">cycles:GH: 0: 318672 103892 103892</span><br><span class="line">cycles:GH: 0: 0 0 0</span><br><span class="line">cycles:GH: 0: 0 0 0</span><br><span class="line">cycles:GH: 0: 3186808454 995900872 995900872</span><br><span class="line">cycles:GH: 0: 3210871482 1003417209 1003417209</span><br><span class="line">cycles:GH: 3196778720 999017435 999017435</span><br><span class="line"> 2.000294863 3,196,778,720 cycles:GH</span><br><span class="line">cycles:GH: 0: 473344 153881 153881</span><br><span class="line">cycles:GH: 0: 0 0 0</span><br><span class="line">cycles:GH: 0: 0 0 0</span><br><span class="line">cycles:GH: 0: 4914634282 1535858571 1535858571</span><br><span class="line">cycles:GH: 0: 4681607691 1463029297 1463029297</span><br><span class="line">cycles:GH: 3198716709 999619776 999619776</span><br></pre></td></tr></table></figure><ul><li>perf kvm stat excluding the host: the cycles value is close to 3180*10^6, which is incorrect because it does not include the counts from the host side</li></ul><figure class="highlight"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line"># perf kvm stat -e cycles -a -I 1000 -vv -C 1 -p 54927 sleep 5</span><br><span class="line">Using CPUID GenuineIntel-6-55</span><br><span class="line">intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch</span><br><span class="line">------------------------------------------------------------</span><br><span class="line">perf_event_attr:</span><br><span class="line"> size 112</span><br><span class="line"> sample_type IDENTIFIER</span><br><span class="line"> read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING</span><br><span class="line"> disabled 1</span><br><span class="line"> inherit 1</span><br><span class="line"> exclude_host 1</span><br><span class="line">------------------------------------------------------------</span><br><span class="line">sys_perf_event_open: pid 54927 cpu -1 group_fd -1 flags 0x8 = 3</span><br><span class="line">sys_perf_event_open: pid 54928 cpu -1 group_fd -1 flags 0x8 = 4</span><br><span class="line">sys_perf_event_open: pid 54931 cpu -1
group_fd -1 flags 0x8 = 5</span><br><span class="line">sys_perf_event_open: pid 54932 cpu -1 group_fd -1 flags 0x8 = 7</span><br><span class="line">sys_perf_event_open: pid 54957 cpu -1 group_fd -1 flags 0x8 = 8</span><br><span class="line">cycles: 0: 0 50959 50959</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 1876040767 589985556 589985556</span><br><span class="line">cycles: 0: 1302552550 409661083 409661083</span><br><span class="line">cycles: 3178593317 999697598 999697598</span><br><span class="line"># time counts unit events</span><br><span class="line"> 1.000144669 3,178,593,317 cycles</span><br><span class="line">cycles: 0: 0 104610 104610</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 3258715492 1024938302 1024938302</span><br><span class="line">cycles: 0: 3099070031 974456233 974456233</span><br><span class="line">cycles: 3179192206 999801547 999801547</span><br><span class="line"> 2.000287034 3,179,192,206 cycles</span><br><span class="line">cycles: 0: 0 154490 154490</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 5007308322 1574833169 1574833169</span><br><span class="line">cycles: 0: 4529377308 1424182546 1424182546</span><br><span class="line">cycles: 3178900107 999671060 999671060</span><br></pre></td></tr></table></figure><ul><li>perf kvm stat including the host: the cycles value is close to 3200*10^6, equal to perf stat with the guest included</li></ul><figure class="highlight"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line"># perf kvm --host --guest stat -e cycles -a -I 1000 -vv -C 1 -p 54927 sleep 5</span><br><span class="line">Using CPUID GenuineIntel-6-55</span><br><span class="line">intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch</span><br><span class="line">------------------------------------------------------------</span><br><span class="line">perf_event_attr:</span><br><span class="line"> size 112</span><br><span class="line"> sample_type IDENTIFIER</span><br><span class="line"> read_format TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING</span><br><span class="line"> disabled 1</span><br><span class="line"> inherit
1</span><br><span class="line">------------------------------------------------------------</span><br><span class="line">sys_perf_event_open: pid 54927 cpu -1 group_fd -1 flags 0x8 = 3</span><br><span class="line">sys_perf_event_open: pid 54928 cpu -1 group_fd -1 flags 0x8 = 4</span><br><span class="line">sys_perf_event_open: pid 54931 cpu -1 group_fd -1 flags 0x8 = 5</span><br><span class="line">sys_perf_event_open: pid 54932 cpu -1 group_fd -1 flags 0x8 = 7</span><br><span class="line">sys_perf_event_open: pid 54957 cpu -1 group_fd -1 flags 0x8 = 8</span><br><span class="line">cycles: 0: 156465 51514 51514</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 1697781991 530571505 530571505</span><br><span class="line">cycles: 0: 1501254716 469153401 469153401</span><br><span class="line">cycles: 3199193172 999776420 999776420</span><br><span class="line"># time counts unit events</span><br><span class="line"> 1.000134636 3,199,193,172 cycles</span><br><span class="line">cycles: 0: 324906 105797 105797</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 3069636439 959290212 959290212</span><br><span class="line">cycles: 0: 3328363931 1040133163 1040133163</span><br><span class="line">cycles: 3199132104 999752752 999752752</span><br><span class="line"> 2.000277464 3,199,132,104 cycles</span><br><span class="line">cycles: 0: 482160 156477 156477</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 0 0 0</span><br><span class="line">cycles: 0: 4697639841 1468050819 1468050819</span><br><span class="line">cycles: 0: 4898815012 1530907460 1530907460</span><br><span class="line">cycles: 3198611737 999585584 999585584</span><br></pre></td></tr></table></figure><p>So, if you would rather not use <em>perf kvm stat --host</em> in a runv environment, you can use <em>perf stat -e event_name:GH</em> instead; the results should be the same.<br>If the containers are running in a runc environment, don’t worry about any of this: just use <strong>perf stat</strong>.</p><h2 id="sar-is-also-a-good-tool-to-collect-performance"><a href="#sar-is-also-a-good-tool-to-collect-performance" class="headerlink" title="sar is also a good tool to collect performance data"></a>sar is also a good tool to collect performance data</h2><p>For utilization it gives the same per-CPU view as perf with --no-aggr.</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sar -P ALL -o sar.data</span><br></pre></td></tr></table></figure><p>Transfer the binary data into readable, Unix-format data:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sadf -d -P ALL sar.data > sar.csv</span><br></pre></td></tr></table></figure>
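<p>As a quick reference, the :G and :H event modifiers are the command-line way to flip the exclude_host/exclude_guest bits shown above. A minimal sketch, assuming a QEMU/KVM process with the hypothetical pid 12345:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"># guest-side cycles only (sets exclude_host)</span><br><span class="line">perf stat -e cycles:G -p 12345 sleep 5</span><br><span class="line"># host-side cycles only (sets exclude_guest)</span><br><span class="line">perf stat -e cycles:H -p 12345 sleep 5</span><br><span class="line"># both sides, same totals as perf kvm --host --guest stat</span><br><span class="line">perf stat -e cycles:GH -p 12345 sleep 5</span><br></pre></td></tr></table></figure>]]></content>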
<summary type="html">
<h2 id="How-to-use-perf-stat"><a href="#How-to-use-perf-stat" class="headerlink" title="How to use perf stat"></a>How to use perf stat</h2><
</summary>
<category term="pouch" scheme="http://yoursite.com/tags/pouch/"/>
<category term="docker" scheme="http://yoursite.com/tags/docker/"/>
<category term="kata" scheme="http://yoursite.com/tags/kata/"/>
<category term="conatiner" scheme="http://yoursite.com/tags/conatiner/"/>
<category term="perf" scheme="http://yoursite.com/tags/perf/"/>
<category term="sar" scheme="http://yoursite.com/tags/sar/"/>
<category term="cgroup" scheme="http://yoursite.com/tags/cgroup/"/>
<category term="cycles" scheme="http://yoursite.com/tags/cycles/"/>
</entry>
<entry>
<title>Cross host container communication by swarm</title>
<link href="http://yoursite.com/2020/09/04/container-communication-coss-host-by-swarm/"/>
<id>http://yoursite.com/2020/09/04/container-communication-coss-host-by-swarm/</id>
<published>2020-09-04T01:54:13.000Z</published>
<updated>2020-09-07T01:18:09.666Z</updated>
<content type="html"><![CDATA[<ul><li>start swarm service on server 1<figure class="highlight ebnf"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="attribute">docker swarm init</span></span><br></pre></td></tr></table></figure></li></ul><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="string">docker</span> <span class="string">info</span> </span><br><span class="line"><span class="string">...</span></span><br><span class="line"><span class="attr">Swarm:</span> <span class="string">active</span></span><br><span class="line"><span class="attr"> NodeID:</span> <span class="string">mjlxz2xwr8n4nd8pr4abs90cf</span></span><br><span class="line"> <span class="string">Is</span> <span class="attr">Manager:</span> <span class="literal">true</span></span><br><span class="line"><span class="attr"> ClusterID:</span> <span class="string">mlbjnejhau2knc9octau5idnn</span></span><br><span class="line"><span class="attr"> Managers:</span> <span class="number">1</span></span><br><span class="line"><span class="attr"> Nodes:</span> <span class="number">4</span></span><br><span class="line"><span class="attr"> Orchestration:</span></span><br><span class="line"> <span class="string">Task</span> <span class="string">History</span> <span class="string">Retention</span> <span class="attr">Limit:</span> <span class="number">5</span></span><br><span class="line"><span class="attr"> Raft:</span></span><br><span class="line"> <span class="string">Snapshot</span> <span class="attr">Interval:</span> <span class="number">10000</span></span><br><span class="line"> <span class="string">Number</span> <span class="string">of</span> <span class="string">Old</span> <span class="string">Snapshots</span> <span class="string">to</span> <span class="attr">Retain:</span> <span class="number">0</span></span><br><span class="line"> <span class="string">Heartbeat</span> <span class="attr">Tick:</span> <span class="number">1</span></span><br><span class="line"> <span class="string">Election</span> <span class="attr">Tick:</span> <span class="number">10</span></span><br><span class="line"><span class="string">...</span></span><br></pre></td></tr></table></figure><ul><li>create an <strong>overaly</strong> network</li></ul><p>To create an overlay network for use with swarm services, use a command like the following:</p><figure class="highlight routeros"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ docker<span class="built_in"> network </span>create -d overlay my-overlay</span><br></pre></td></tr></table></figure><p>To create an overlay network which can be used by swarm services or standalone containers to communicate with other standalone containers running on other Docker daemons, add the <em>–attachable</em> flag:</p><figure class="highlight routeros"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">$ docker<span class="built_in"> network </span>create -d overlay --attachable </span><br><span class="line">my-attachable-overlay</span><br></pre></td></tr></table></figure><h2 id="other-servers"><a href="#other-servers" class="headerlink" title="other servers"></a>other servers</h2><ul><li>join the swarm<figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="selector-tag">docker</span> <span class="selector-tag">swarm</span> <span class="selector-tag">join</span> <span class="selector-tag">--token</span> <span class="selector-tag">SWMTKN-1-3vyjwjv6inlxr0wv1mmibbwzo5qt6cmultg2losufxd0nbi6z5-7veb80yv1im6ii8oma5q1k1s4</span> 172<span class="selector-class">.16</span><span class="selector-class">.190</span><span class="selector-class">.79</span><span class="selector-pseudo">:2377</span></span><br></pre></td></tr></table></figure></li></ul><p>then you can find server2 automatically join the overlay net</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">NETWORK ID NAME DRIVER SCOPE</span><br><span class="line">780463da98af bridge bridge <span class="built_in">local</span></span><br><span class="line">d0e6aefdfb8a docker_gwbridge bridge <span class="built_in">local</span></span><br><span class="line">d45984c66fd5 host host <span class="built_in">local</span></span><br><span class="line">jaf4kx0gku62 ingress overlay swarm</span><br><span class="line">20855b96e0b1 none null <span class="built_in">local</span></span><br><span class="line">6qmeomevipgt my-attachable-overlay overlay swarm</span><br></pre></td></tr></table></figure><p>and you can check in server1</p><figure class="highlight css"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"> <span class="selector-tag">docker</span> <span class="selector-tag">node</span> <span class="selector-tag">ls</span></span><br><span class="line"><span class="selector-tag">ID</span> <span class="selector-tag">HOSTNAME</span> <span class="selector-tag">STATUS</span> <span class="selector-tag">AVAILABILITY</span> <span class="selector-tag">MANAGER</span> <span class="selector-tag">STATUS</span> <span class="selector-tag">ENGINE</span> <span class="selector-tag">VERSION</span></span><br><span class="line"><span class="selector-tag">mjlxz2xwr8n4nd8pr4abs90cf</span> * <span class="selector-tag">iZbp1bzbzd1dmznk9xxxxxx</span> <span class="selector-tag">Ready</span> <span class="selector-tag">Active</span> <span class="selector-tag">Leader</span> 18<span class="selector-class">.06</span><span class="selector-class">.0-ce</span></span><br><span class="line"><span class="selector-tag">uf041tg0obzprppredut27bgg</span> <span class="selector-tag">iZbp1bzbzd1dmznk9xxxxxx</span> <span class="selector-tag">Ready</span> <span class="selector-tag">Active</span> 18<span class="selector-class">.06</span><span class="selector-class">.0-ce</span></span><br><span 
class="line"><span class="selector-tag">t3380b55qfen3fdtoj10x8lcf</span> <span class="selector-tag">iZbp1bzbzd1dmznk9xxxxxx</span> <span class="selector-tag">Ready</span> <span class="selector-tag">Active</span> 18<span class="selector-class">.06</span><span class="selector-class">.0-ce</span></span><br><span class="line"><span class="selector-tag">gzzalp8thajd0w55rrm5pvv3z</span> <span class="selector-tag">iZbp1bzbzd1dmznk9xxxxxx</span> <span class="selector-tag">Ready</span> <span class="selector-tag">Active</span> 18<span class="selector-class">.06</span><span class="selector-class">.0-ce</span></span><br></pre></td></tr></table></figure><ul><li>start container with the overlay net on server2</li></ul><figure class="highlight brainfuck"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">docker</span> <span class="comment">run</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">name</span> <span class="comment">test1</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">hostname</span> <span class="comment">name1</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">runtime</span> <span class="comment">runc</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">net=my</span><span class="literal">-</span><span class="comment">attachable</span><span class="literal">-</span><span class="comment">overlay</span></span><br><span class="line"><span class="comment"></span> <span class="literal">-</span><span class="literal">-</span><span class="comment">ulimit</span> <span class="comment">nofile=102400:102400</span> <span class="literal">-</span><span class="literal">-</span><span class="comment">cpus</span> <span class="comment">30</span> <span class="literal">-</span><span class="comment">d</span> <span class="literal">-</span><span class="comment">P</span> <span class="literal">-</span><span class="comment">p</span> <span class="comment">50070:50070</span> <span class="literal">-</span><span class="comment">p</span> <span class="comment">8088:8088</span> <span class="comment">docker</span><span class="literal">-</span><span class="comment">images1</span></span><br></pre></td></tr></table></figure><p>Now, containers on servers2 can access containers on server1</p><ul><li><p>server2 can also leave swarm node </p><figure class="highlight ebnf"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="attribute">docker swarm leave</span></span><br></pre></td></tr></table></figure></li><li><p>when alll the nodes leave swarm node,server 1 can stop/remove swarm</p><figure class="highlight ebnf"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="attribute">docker swarm leave</span></span><br></pre></td></tr></table></figure></li></ul><h3 id="reference"><a href="#reference" class="headerlink" title="reference"></a>reference</h3><ul><li><p><a href="https://docs.docker.com/engine/reference/commandline/swarm_init/" target="_blank" rel="noopener">docker swarm doc</a></p></li><li><p><a href="https://docs.docker.com/network/overlay/" target="_blank" rel="noopener">docker network doc</a></p></li></ul>]]></content>
<summary type="html">
<ul>
<li>start swarm service on server 1<figure class="highlight ebnf"><table><tr><td class="gutter"><pre><span class="line">1</span><br></p
</summary>
<category term="etcd" scheme="http://yoursite.com/tags/etcd/"/>
<category term="flannel" scheme="http://yoursite.com/tags/flannel/"/>
<category term="overlay" scheme="http://yoursite.com/tags/overlay/"/>
<category term="docker" scheme="http://yoursite.com/tags/docker/"/>
<category term="swarm" scheme="http://yoursite.com/tags/swarm/"/>
<category term="container" scheme="http://yoursite.com/tags/container/"/>
</entry>
<entry>
<title>Cross host container communication by flannel</title>
<link href="http://yoursite.com/2020/09/02/container-communication-corss-host-by-flannel/"/>
<id>http://yoursite.com/2020/09/02/container-communication-corss-host-by-flannel/</id>
<published>2020-09-02T07:54:13.000Z</published>
<updated>2020-09-04T07:23:35.514Z</updated>
<content type="html"><![CDATA[<p><a href="https://github.com/alibaba/pouch" target="_blank" rel="noopener">pouch container</a> doesn’t support <strong>swarm</strong> mode, so you can’t use <strong>docker swarm</strong> advantages to build <strong>overlay</strong> network if you lxc environment is pouch or sth else</p><ul><li><p>install flannel on each server</p><figure class="highlight cmake"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">yum <span class="keyword">install</span> flannel -y</span><br></pre></td></tr></table></figure></li><li><p>config flannel etcd url</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">vi /etc/sysconfig/flanneld</span><br></pre></td></tr></table></figure></li></ul><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># Flanneld configuration options</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># etcd url location. Point this to the server where etcd runs</span></span><br><span class="line"><span class="string">FLANNEL_ETCD_ENDPOINTS="http://172.16.190.73:2379"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># etcd config key. This is the configuration key that flannel queries</span></span><br><span class="line"><span class="comment"># For address range assignment</span></span><br><span class="line"><span class="string">FLANNEL_ETCD_PREFIX="/atomic.io/network"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># Any additional options that you want to pass</span></span><br><span class="line"><span class="comment">#FLANNEL_OPTIONS=""</span></span><br></pre></td></tr></table></figure><ul><li>remeber to set the container ip range on etcd server</li></ul><figure class="highlight javascript"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">etcdctl <span class="keyword">set</span> /atomic.io/network/config '{ <span class="string">"Network"</span>: <span class="string">"192.168.0.0/16"</span> }<span class="string">'</span></span><br></pre></td></tr></table></figure><p> or</p><figure class="highlight javascript"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker exec yl-etcd etcdctl <span class="keyword">set</span> /atomic.io/network/config '{ <span class="string">"Network"</span>: <span class="string">"192.168.0.0/16"</span> }<span class="string">'</span></span><br></pre></td></tr></table></figure><ul><li><p>start service</p><figure class="highlight routeros"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">systemctl <span class="builtin-name">enable</span> flanneld</span><br><span class="line">systemctl start flanneld</span><br></pre></td></tr></table></figure></li><li><p>config pouch or docker, update 
<p>pouch doesn’t support swarm, so we use the pouch container manager as the example</p><p>start pouchd or dockerd</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">source /run/flannel/subnet.env</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">cat /run/flannel/subnet.env</span><br><span class="line">...</span><br><span class="line">FLANNEL_NETWORK=192.168.0.0/16</span><br><span class="line">FLANNEL_SUBNET=192.168.38.1/24</span><br><span class="line">FLANNEL_MTU=1472</span><br><span class="line">FLANNEL_IPMASQ=false</span><br><span class="line">...</span><br></pre></td></tr></table></figure><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">systemctl stop pouch</span><br><span class="line"></span><br><span class="line">pouchd --bip=<span class="variable">${FLANNEL_SUBNET}</span> --mtu=<span class="variable">${FLANNEL_MTU}</span> --default-gateway 192.168.38.1 &</span><br><span class="line"></span><br><span class="line">pouchd --bip=<span class="variable">${FLANNEL_SUBNET}</span> --mtu=<span class="variable">${FLANNEL_MTU}</span> --default-gateway 192.168.65.1 &</span><br></pre></td></tr></table></figure><p>(each host uses the gateway of its own flannel subnet, hence the two different addresses above)</p><p>or you can update pouchd or dockerd <a href="https://github.com/alibaba/pouch/blob/master/docs/commandline/pouch_updatedaemon.md" target="_blank" rel="noopener">update pouch daemon</a></p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">pouch updatedaemon --debug=<span class="literal">true</span> --bip=<span class="variable">${FLANNEL_SUBNET}</span> --mtu=<span class="variable">${FLANNEL_MTU}</span> --default-gateway 192.168.38.1 </span><br><span class="line"></span><br><span class="line">pouch updatedaemon --bip 192.168.38.1/24 --default-gateway 192.168.38.1 --offline=<span class="literal">true</span></span><br></pre></td></tr></table></figure><p>or you can update the default config and restart pouch</p><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br>
class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">{</span><br><span class="line"> <span class="attr">"volume-config"</span>: {},</span><br><span class="line"> <span class="attr">"network-config"</span>: {</span><br><span class="line"> <span class="attr">"bridge-config"</span>: {</span><br><span class="line"> <span class="attr">"bip"</span>: <span class="string">"192.168.38.1/24"</span>,</span><br><span class="line"> <span class="attr">"default-gateway"</span>: <span class="string">"192.168.38.1"</span>,</span><br><span class="line"> <span class="attr">"iptables"</span>: <span class="literal">false</span>,</span><br><span class="line"> <span class="attr">"ipforward"</span>: <span class="literal">false</span>,</span><br><span class="line"> <span class="attr">"userland-proxy"</span>: <span class="literal">false</span></span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> <span class="attr">"cri-config"</span>: {},</span><br><span class="line"> <span class="attr">"TLS"</span>: {},</span><br><span class="line"> <span class="attr">"default-log-config"</span>: {},</span><br><span class="line"> <span class="attr">"registry-service"</span>: {},</span><br><span class="line"> <span class="attr">"add-runtime"</span>: {</span><br><span class="line"> <span class="attr">"runv"</span>: {</span><br><span class="line"> <span class="attr">"path"</span>: <span class="string">"/opt/kata/bin/kata-runtime"</span>,</span><br><span class="line"> <span class="attr">"runtimeArgs"</span>: <span class="literal">null</span></span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>for each host, update ip tables input and output rules</li></ul><figure class="highlight tp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">iptables -<span class="keyword">P</span> INPUT <span class="keyword">ACC</span>EPT</span><br><span class="line">iptables -<span class="keyword">P</span> FORWARD <span class="keyword">ACC</span>EPT</span><br><span class="line">iptables -F</span><br><span class="line">iptables -L -n</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<p><a href="https://github.com/alibaba/pouch" target="_blank" rel="noopener">pouch container</a> doesn’t support <strong>swarm</strong> mode
</summary>
<category term="etcd" scheme="http://yoursite.com/tags/etcd/"/>
<category term="flannel" scheme="http://yoursite.com/tags/flannel/"/>
<category term="overlay" scheme="http://yoursite.com/tags/overlay/"/>
<category term="pouch" scheme="http://yoursite.com/tags/pouch/"/>
<category term="docker" scheme="http://yoursite.com/tags/docker/"/>
<category term="container" scheme="http://yoursite.com/tags/container/"/>
</entry>
<entry>
<title>etcd</title>
<link href="http://yoursite.com/2020/09/02/etcd/"/>
<id>http://yoursite.com/2020/09/02/etcd/</id>
<published>2020-09-02T07:29:24.000Z</published>
<updated>2020-09-04T01:26:14.444Z</updated>
<content type="html"><![CDATA[<h1 id="Start-a-etcd"><a href="#Start-a-etcd" class="headerlink" title="Start a etcd"></a>Start a etcd</h1><p>clean data</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">rm -rf /tmp/etcd-data.tmp</span><br></pre></td></tr></table></figure><p>deploy etcd</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">docker run \</span><br><span class="line"> -d \</span><br><span class="line"> -p 2379:2379 \</span><br><span class="line"> -p 2380:2380 \</span><br><span class="line"> -p 4001:4001 \</span><br><span class="line"> -p 7001:7001 \</span><br><span class="line"> -v /tmp/etcd-data.tmp:/etcd-data \</span><br><span class="line"> --name yl-etcd \</span><br><span class="line"> elcolio/etcd:latest \</span><br><span class="line"> --name s1 --data-dir /etcd-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379 --listen-peer-urls http://0.0.0.0:2380 --initial-advertise-peer-urls http://0.0.0.0:2380 --initial-cluster s1=http://0.0.0.0:2380 --initial-cluster-token tkn --initial-cluster-state new</span><br></pre></td></tr></table></figure><p>set some key-value store, and test get</p><figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">docker <span class="built_in">exec</span> yl-etcd etcdctl <span class="built_in">set</span> /atomic.io/network/config <span class="string">'{ "Network": "192.168.0.0/16" }'</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line">docker <span class="built_in">exec</span> yl-etcd etcdctl get /atomic.io/network/config</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<h1 id="Start-a-etcd"><a href="#Start-a-etcd" class="headerlink" title="Start a etcd"></a>Start a etcd</h1><p>clean data</p>
<figure class
</summary>
<category term="etcd" scheme="http://yoursite.com/tags/etcd/"/>
<category term="flannel" scheme="http://yoursite.com/tags/flannel/"/>
</entry>
<entry>
<title>avx512</title>
<link href="http://yoursite.com/2019/06/21/avx512/"/>
<id>http://yoursite.com/2019/06/21/avx512/</id>
<published>2019-06-21T05:31:00.000Z</published>
<updated>2020-09-02T07:35:03.942Z</updated>
<content type="html"><![CDATA[<p>dasds</p><p>scdscds</p><p>xscsd</p>]]></content>
<summary type="html">
<p>dasds</p>
<p>scdscds</p>
<p>xscsd</p>
</summary>
</entry>
<entry>
<title>kaggle</title>
<link href="http://yoursite.com/2019/06/21/kaggle/"/>
<id>http://yoursite.com/2019/06/21/kaggle/</id>
<published>2019-06-21T05:30:47.000Z</published>
<updated>2019-06-21T05:30:47.787Z</updated>
<summary type="html">
</summary>
</entry>
<entry>
<title>git</title>
<link href="http://yoursite.com/2019/06/21/git/"/>
<id>http://yoursite.com/2019/06/21/git/</id>
<published>2019-06-21T05:29:35.000Z</published>
<updated>2019-06-21T05:29:35.487Z</updated>
<summary type="html">
</summary>
</entry>
<entry>
<title>docker</title>
<link href="http://yoursite.com/2019/06/21/docker/"/>
<id>http://yoursite.com/2019/06/21/docker/</id>
<published>2019-06-21T05:29:09.000Z</published>
<updated>2019-06-21T05:29:09.132Z</updated>
<summary type="html">
</summary>
</entry>
<entry>
<title>xgboost</title>
<link href="http://yoursite.com/2019/06/21/xgboost/"/>
<id>http://yoursite.com/2019/06/21/xgboost/</id>
<published>2019-06-21T05:22:25.000Z</published>
<updated>2019-06-21T05:24:56.653Z</updated>
<summary type="html">
</summary>
<category term="boosting" scheme="http://yoursite.com/tags/boosting/"/>
</entry>
<entry>
<title>LR</title>
<link href="http://yoursite.com/2019/06/21/LR/"/>
<id>http://yoursite.com/2019/06/21/LR/</id>
<published>2019-06-21T05:21:42.000Z</published>
<updated>2019-06-21T05:23:59.986Z</updated>
<summary type="html">
</summary>
<category term="ML" scheme="http://yoursite.com/tags/ML/"/>
</entry>
<entry>
<title>IF</title>
<link href="http://yoursite.com/2019/06/21/IF/"/>
<id>http://yoursite.com/2019/06/21/IF/</id>
<published>2019-06-21T05:21:28.000Z</published>
<updated>2019-06-21T05:23:45.733Z</updated>
<summary type="html">
</summary>
<category term="Isolate Forest" scheme="http://yoursite.com/tags/Isolate-Forest/"/>
<category term="Desion Tree" scheme="http://yoursite.com/tags/Desion-Tree/"/>
</entry>
<entry>
<title>gdbt</title>
<link href="http://yoursite.com/2019/06/21/gdbt/"/>
<id>http://yoursite.com/2019/06/21/gdbt/</id>
<published>2019-06-21T05:21:13.000Z</published>
<updated>2019-06-27T06:10:07.686Z</updated>
<content type="html"><![CDATA[<p>csdcdcsddsdcs<br><a href="www.abc.com">ddd</a></p>]]></content>
<summary type="html">
<p>csdcdcsddsdcs<br><a href="www.abc.com">ddd</a></p>
</summary>
<category term="Desicion Tree" scheme="http://yoursite.com/tags/Desicion-Tree/"/>
</entry>
<entry>
<title>RF</title>
<link href="http://yoursite.com/2019/06/21/RF/"/>
<id>http://yoursite.com/2019/06/21/RF/</id>
<published>2019-06-21T05:20:17.000Z</published>
<updated>2019-06-21T05:24:39.514Z</updated>
<summary type="html">
</summary>
<category term="ML" scheme="http://yoursite.com/tags/ML/"/>
<category term="boosting" scheme="http://yoursite.com/tags/boosting/"/>
</entry>
<entry>
<title>LSTM</title>
<link href="http://yoursite.com/2019/06/21/LSTM/"/>
<id>http://yoursite.com/2019/06/21/LSTM/</id>
<published>2019-06-21T05:19:19.000Z</published>
<updated>2019-06-21T05:24:16.267Z</updated>
<summary type="html">
</summary>
<category term="ML" scheme="http://yoursite.com/tags/ML/"/>
<category term="NLP" scheme="http://yoursite.com/tags/NLP/"/>
</entry>
<entry>
<title>EM</title>
<link href="http://yoursite.com/2019/06/21/EM/"/>
<id>http://yoursite.com/2019/06/21/EM/</id>
<published>2019-06-21T05:18:59.000Z</published>
<updated>2019-06-21T05:22:53.411Z</updated>
<summary type="html">
</summary>
<category term="ML" scheme="http://yoursite.com/tags/ML/"/>
<category term="Bayesian" scheme="http://yoursite.com/tags/Bayesian/"/>
</entry>
<entry>
<title>prometheus</title>
<link href="http://yoursite.com/2019/06/21/prometheus/"/>
<id>http://yoursite.com/2019/06/21/prometheus/</id>
<published>2019-06-21T05:14:21.000Z</published>
<updated>2019-06-24T03:03:10.943Z</updated>
<content type="html"><![CDATA[<h2 id="Prometheus-start-by-docker"><a href="#Prometheus-start-by-docker" class="headerlink" title="Prometheus start by docker"></a>Prometheus start by docker</h2><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">docker run -p 9090:9090 --name prometheus -d -v /home/li/Downloads/prom/Prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus</span><br></pre></td></tr></table></figure><h2 id="设置mount"><a href="#设置mount" class="headerlink" title="设置mount"></a>设置mount</h2><p> -v 将local <em>Promethus.yml</em> 和 <em>/etc/prometheus/prometheus.yml</em> 文件挂载</p><p> 这样可以custom configure local pro.yml </p><ul><li><p>for example 设置</p><p> 一个 target 来源为json file<br> <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config" target="_blank" rel="noopener">https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config</a></p></li></ul><ul><li>for example 设置static<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br></pre></td><td class="code"><pre><span class="line"> │ File: Prometheus.yml</span><br><span class="line">───────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────</span><br><span class="line"> 1 │ # my global config</span><br><span class="line"> 2 │ global:</span><br><span class="line"> 3 │ scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.</span><br><span class="line"> 4 │ evaluation_interval: 15s # Evaluate rules every 15 seconds. 
The default is every 1 minute.</span><br><span class="line"> # scrape_timeout is set to the global default (10s).</span><br><span class="line"></span><br><span class="line"> # Alertmanager configuration</span><br><span class="line"> alerting:</span><br><span class="line"> alertmanagers:</span><br><span class="line"> - static_configs:</span><br><span class="line"> - targets:</span><br><span class="line"> # - alertmanager:9093</span><br><span class="line"></span><br><span class="line"> # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.</span><br><span class="line"> rule_files:</span><br><span class="line"> # - "first_rules.yml"</span><br><span class="line"> # - "second_rules.yml"</span><br><span class="line"></span><br><span class="line"> # A scrape configuration containing exactly one endpoint to scrape:</span><br><span class="line"> # Here it's Prometheus itself.</span><br><span class="line"> scrape_configs:</span><br><span class="line"> # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.</span><br><span class="line"> - job_name: 'prometheusyl'</span><br><span class="line"></span><br><span class="line"> # metrics_path defaults to '/metrics'</span><br><span class="line"> # scheme defaults to 'http'.</span><br><span class="line"></span><br><span class="line"> static_configs: </span><br><span class="line"> - targets: ['10.239.157.138:7071']</span><br><span class="line"> - job_name: 'test-prometheus-from-yl' #1. first set the job name</span><br><span class="line"></span><br><span class="line"> metrics_path: / #2. the metrics path can be customized</span><br><span class="line"></span><br><span class="line"> static_configs: #3. the config method, e.g. static or file</span><br><span class="line"> - targets: ['10.239.157.141:8001'] #4. the scrape target source, i.e. the host:port where metrics are exposed over http</span><br><span class="line"></span><br><span class="line"> - job_name: 'test_node_exporter'</span><br><span class="line"></span><br><span class="line"> static_configs:</span><br><span class="line"> - targets: ['10.239.157.129:9100']</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure></li></ul>
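<p>After editing the local file, it can be validated before restarting the container; a minimal sketch, assuming the promtool binary shipped inside the prom/prometheus image:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># exits non-zero and reports the offending line if the config is invalid</span><br><span class="line">docker run --rm -v /home/li/Downloads/prom/Prometheus.yml:/prometheus.yml --entrypoint promtool prom/prometheus check config /prometheus.yml</span><br></pre></td></tr></table></figure>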
<h2 id="Exporte-local-metric-to-http-xxx-port-by-official-Node-Exporter-of-Prometheus"><a href="#Exporte-local-metric-to-http-xxx-port-by-official-Node-Exporter-of-Prometheus" class="headerlink" title="Export local metrics to http:xxx:port by the official Node Exporter of Prometheus"></a>Export local metrics to http:xxx:port by the official Node Exporter of Prometheus</h2><blockquote><p>OWCA is able to export metrics to file acceptable by node_exporter (with “textfile” collector enabled) – because we haven’t released owca-kafka-consumer yet (and we’re blocked because it has http endpoint and python server is not suitable for production usage), so before we publish it (that probably require rewrite to go and start new SDL process), the official way to getting data to Promethes is to use “LogStorage(overwrite=True, outputfile_name=’metrics.prom’)” with node_exporter before Kubecon (ps. Kafka is still there for customer that need that) </p></blockquote><p><a href="https://github.com/prometheus/node_exporter" target="_blank" rel="noopener">https://github.com/prometheus/node_exporter</a></p><p>Make sure to upgrade the Go version to 1.12.x first (the build context below shows go1.12.4), otherwise the build may fail.</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">go get github.com/prometheus/node_exporter</span><br><span class="line">cd ${GOPATH-$HOME/go}/src/github.com/prometheus/node_exporter</span><br><span class="line">make</span><br><span class="line">./node_exporter <flags></span><br></pre></td></tr></table></figure><p>After the build finishes:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">./node_exporter --collector.textfile.directory /home/li/myfolder/</span><br></pre></td></tr></table></figure><p>for example</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">li@yl-machine:~/Go/src/github.com/prometheus/node_exporter$ ./node_exporter --collector.textfile.directory /home/li/Go/test_node</span><br></pre></td></tr></table></figure><p>Here the collector flag is set to textfile, which lets node_exporter load files from the local disk.</p><p>for example:</p><ul><li><p>The Poland team’s OWCA writes monitoring metrics via <em>LogStorage(overwrite=True, outputfile_name=’metrics.prom’)</em>; find the local path where metrics.prom is stored, set <em>collector.textfile.directory</em> to that path, and the data OWCA exposes is pushed continuously into the Prometheus DB</p></li><li><p>according to the official examples</p><ul><li><p>To atomically push completion time for a cron job:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">echo my_batch_job_completion_time $(date +%s) > /path/to/directory/my_batch_job.prom.$$</span><br><span class="line">mv /path/to/directory/my_batch_job.prom.$$ /path/to/directory/my_batch_job.prom</span><br></pre></td></tr></table></figure></li><li><p>To statically set roles for a machine using labels:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">echo 'role{role="application_server"} 1' > /path/to/directory/role.prom.$$</span><br><span class="line">mv /path/to/directory/role.prom.$$ /path/to/directory/role.prom</span><br></pre></td></tr></table></figure></li></ul></li><li><p>if we save role.prom to the test_node folder, then</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">li@yl-machine:~/Go/src/github.com/prometheus/node_exporter$ ./node_exporter --collector.textfile.directory /home/li/Go/test_node</span><br></pre></td></tr></table></figure><p>Output:</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span
class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br></pre></td><td class="code"><pre><span class="line">INFO[0000] Starting node_exporter (version=0.17.0, branch=master, revision=4e5c4d464fa67e9cdfd9858d2151bc99603b2bff) source="node_expor</span><br><span class="line">INFO[0000] Build context (go=go1.12.4, user=li@yl-machine, date=20190418-09:08:01) source="node_exporter.go:157"</span><br><span class="line">INFO[0000] Enabled collectors: source="node_exporter.go:97"</span><br><span class="line">INFO[0000] - arp source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - bcache source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - bonding source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - conntrack source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - cpu source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - cpufreq source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - diskstats source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - edac source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - entropy source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - filefd source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - filesystem source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - hwmon source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - infiniband source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - ipvs source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - loadavg source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - mdadm source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - meminfo source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - netclass source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - netdev source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - netstat source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - nfs source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - nfsd source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - sockstat source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - stat source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - textfile source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - time source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - timex source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - uname source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - vmstat source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - xfs source="node_exporter.go:104"</span><br><span class="line">INFO[0000] - zfs 
source="node_exporter.go:104"</span><br><span class="line">INFO[0000] Listening on :9100 source="node_exporter.go:170"</span><br></pre></td></tr></table></figure></li></ul><p>now open the brower 可以看出在9100端口 <a href="http://10.239.157.129:9100/metrics" target="_blank" rel="noopener">http://10.239.157.129:9100/metrics</a>, 在browser可看到之前写入的role metric 已经从本地prom file expose to http server</p><pre><code><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">process_virtual_memory_max_bytes -1</span><br><span class="line"># HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.</span><br><span class="line"># TYPE promhttp_metric_handler_requests_in_flight gauge</span><br><span class="line">promhttp_metric_handler_requests_in_flight 1</span><br><span class="line"># HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.</span><br><span class="line"># TYPE promhttp_metric_handler_requests_total counter</span><br><span class="line">promhttp_metric_handler_requests_total{code="200"} 2</span><br><span class="line">promhttp_metric_handler_requests_total{code="500"} 0</span><br><span class="line">promhttp_metric_handler_requests_total{code="503"} 0</span><br><span class="line"># HELP role Metric read from /home/li/Go/test_node/role.prom</span><br><span class="line"># TYPE role untyped</span><br><span class="line">role{role="application_server"} 1</span><br></pre></td></tr></table></figure></code></pre><p>这时候我们promtheous的configure file prometheus.yml 添加新的job-<em>target</em> 为<a href="http://10.239.157.129:9100" target="_blank" rel="noopener">http://10.239.157.129:9100</a><br> 就可以在Promethus的默认端口9090看到metric</p><pre><code>http://10.239.157.129:9090/graph</code></pre><h2 id="Prometheus-usage"><a href="#Prometheus-usage" class="headerlink" title="Prometheus usage"></a>Prometheus usage</h2><p>for example:</p><ul><li>the address <a href="http://10.239.157.129:9090/graph" target="_blank" rel="noopener">http://10.239.157.129:9090/graph</a></li><li>see config <a href="http://10.239.157.129:9090/config" target="_blank" rel="noopener">http://10.239.157.129:9090/config</a></li><li>see target <a href="http://10.239.157.129:9090/targets" target="_blank" rel="noopener">http://10.239.157.129:9090/targets</a></li><li>show graph <a href="http://10.239.157.129:9090/graph" target="_blank" rel="noopener">http://10.239.157.129:9090/graph</a></li></ul><h2 id="Query-Promethus-data"><a href="#Query-Promethus-data" class="headerlink" title="Query Promethus data"></a>Query Promethus data</h2><ul><li>query all metric names</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">response = requests.get('http://100.64.176.12:9090/api/v1/label/__name__/values')</span><br></pre></td></tr></table></figure><ul><li>query 1h data of matric</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">res = 
<h2 id="Prometheus-usage"><a href="#Prometheus-usage" class="headerlink" title="Prometheus usage"></a>Prometheus usage</h2><p>for example:</p><ul><li>the address <a href="http://10.239.157.129:9090/graph" target="_blank" rel="noopener">http://10.239.157.129:9090/graph</a></li><li>see config <a href="http://10.239.157.129:9090/config" target="_blank" rel="noopener">http://10.239.157.129:9090/config</a></li><li>see target <a href="http://10.239.157.129:9090/targets" target="_blank" rel="noopener">http://10.239.157.129:9090/targets</a></li><li>show graph <a href="http://10.239.157.129:9090/graph" target="_blank" rel="noopener">http://10.239.157.129:9090/graph</a></li></ul><h2 id="Query-Prometheus-data"><a href="#Query-Prometheus-data" class="headerlink" title="Query Prometheus data"></a>Query Prometheus data</h2><ul><li>query all metric names</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">response = requests.get('http://100.64.176.12:9090/api/v1/label/__name__/values')</span><br></pre></td></tr></table></figure><ul><li>query 1h of data for a metric</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">res = requests.get('http://100.64.176.12:9090/api/v1/query',params={'query': metrixName+'[1h]'})</span><br></pre></td></tr></table></figure><ul><li>get the label series of a specific metric</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">res = requests.get('http://100.64.176.12:9090/api/v1/series',params={'match[]': metrixName})</span><br></pre></td></tr></table></figure><ul><li>query 1h of data for a specific metric with specific label options</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">res = requests.get('http://100.64.176.12:9090/api/v1/query',params={'query': metrixName+'{'+lablename+'="'+labelvalue+'"}'+'[1h]'})</span><br></pre></td></tr></table></figure><ul><li>query with the <em>step</em> parameter: if (end-start)/step > 11000, the query exceeds the <em>maximum resolution</em> of 11,000 points (<a href="https://github.com/prometheus/prometheus/blob/91d7175eaac18b00e370965f3a8186cc40bf9f55/web/api/v1/api.go" target="_blank" rel="noopener">prometheus/web/api/v1/api.go</a>) and Prometheus will fail<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">url = url + '/api/v1/query_range'</span><br><span class="line">params = {'query': query, 'start': start, 'end': end, 'step': step}</span><br><span class="line">res = requests.get(url, params=params, timeout=timeout)</span><br></pre></td></tr></table></figure></li></ul><h2 id="Show-prometheus-in-Grafana"><a href="#Show-prometheus-in-Grafana" class="headerlink" title="Show Prometheus in Grafana"></a>Show Prometheus in Grafana</h2><p>In the Grafana data source settings, add the URL of the Prometheus server.</p>
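<p>Tying the query API examples together, here is a small Python sketch (not from the original post) of a range query that widens <em>step</em> to stay under the 11,000-point cap; the server address is the one used above and the metric name is just a common node_exporter metric:</p><pre><code>import time
import requests

PROM = 'http://100.64.176.12:9090'   # Prometheus server from the examples above
MAX_POINTS = 11000                   # maximum resolution enforced in api.go

def query_range(query, start, end, step):
    # Widen the step if the requested resolution would exceed the cap.
    if (end - start) / step > MAX_POINTS:
        step = (end - start) / MAX_POINTS
    params = {'query': query, 'start': start, 'end': end, 'step': step}
    res = requests.get(PROM + '/api/v1/query_range', params=params, timeout=10)
    res.raise_for_status()
    return res.json()['data']['result']

end = time.time()
print(query_range('node_load1', end - 3600, end, 15))  # last hour of load average</code></pre>]]></content>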
<summary type="html">
<h2 id="Prometheus-start-by-docker"><a href="#Prometheus-start-by-docker" class="headerlink" title="Prometheus start by docker"></a>Promethe
</summary>
<category term="database" scheme="http://yoursite.com/tags/database/"/>
<category term="prometheus" scheme="http://yoursite.com/tags/prometheus/"/>
<category term="grafana" scheme="http://yoursite.com/tags/grafana/"/>
<category term="time-series" scheme="http://yoursite.com/tags/time-series/"/>
</entry>
<entry>
<title>Setting SSL for Zookeeper</title>
<link href="http://yoursite.com/2019/06/21/zookeeper-ssl/"/>
<id>http://yoursite.com/2019/06/21/zookeeper-ssl/</id>
<published>2019-06-21T05:13:59.000Z</published>
<updated>2019-06-21T09:38:36.131Z</updated>
<content type="html"><![CDATA[<h3 id="Reference-Links"><a href="#Reference-Links" class="headerlink" title="Reference Links"></a>Reference Links</h3><ul><li>Download from: <a href="https://archive.apache.org/dist/zookeeper/" target="_blank" rel="noopener">https://archive.apache.org/dist/zookeeper/</a>, stable version <strong>not</strong> support ssl, so please download <em>zookeeper-3.5.0-alpha/</em></li><li>Zookeeper working with Kazoo: <a href="https://medium.com/@md.tsai/%E7%82%BA-zookeeper-%E8%A8%AD%E5%AE%9A%E5%8A%A0%E5%AF%86%E9%80%A3%E7%B7%9A-11702097f859" target="_blank" rel="noopener">https://medium.com/@md.tsai/%E7%82%BA-zookeeper-%E8%A8%AD%E5%AE%9A%E5%8A%A0%E5%AF%86%E9%80%A3%E7%B7%9A-11702097f859</a>, mainly refer this page</li><li>Official guide: <a href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide" target="_blank" rel="noopener">https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide</a></li></ul><h3 id="Details-and-steps"><a href="#Details-and-steps" class="headerlink" title="Details and steps"></a>Details and steps</h3><h3 id="zookeeper-configure"><a href="#zookeeper-configure" class="headerlink" title="zookeeper configure"></a>zookeeper configure</h3><ul><li><p>start zookeeper <em>tar -zxvf zookeeper.tar.gz</em>, check status of zookeeper by <em>bin/zkServer.sh status</em>, copy <em>conf/zoo.sample.cfg</em> to <em>conf/zoo.cfg</em>,zookeeper will auto read this file to start</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/zkServer.sh start</span><br></pre></td></tr></table></figure></li><li><p>test the connection of zookeeper works</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/zkCli.sh -server localhost:2181</span><br></pre></td></tr></table></figure></li></ul><h3 id="zookeeper-with-SSL"><a href="#zookeeper-with-SSL" class="headerlink" title="zookeeper with SSL"></a>zookeeper with SSL</h3><ul><li><p>setting env variable under <em>bin/zkEnv.sh</em></p><ul><li><p>comment out the origin line <em>export SERVER_JVMFLAGS ….</em></p></li><li><p>add this for the new line - for the zookeeper <strong>server</strong></p> <figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">ZK_SERVER_SSL=<span class="string">"-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory</span></span><br><span class="line"><span class="string">-Dzookeeper.ssl.keyStore.location=/home/li/Documents/zoo-ssl/ssl-conf/keystore.jks </span></span><br><span class="line"><span class="string">-Dzookeeper.ssl.keyStore.password=yourpass</span></span><br><span class="line"><span class="string">-Dzookeeper.ssl.trustStore.location=/home/li/Documents/zoo-ssl/ssl-conf/keystore.jks</span></span><br><span class="line"><span class="string">-Dzookeeper.ssl.trustStore.password=yourpass"</span></span><br><span class="line"><span class="built_in">export</span> SERVER_JVMFLAGS=<span class="string">"-Xmx<span class="variable">${ZK_SERVER_HEAP}</span>m <span class="variable">$ZK_SERVER_SSL</span> <span 
class="variable">$SERVER_JVMFLAGS</span>"</span></span><br></pre></td></tr></table></figure></li><li><p>comment out the origin line <em>export CLIENT_JVMFLAGS …</em></p></li><li><p>add this for the new line - for the zookeeper <strong>client</strong></p> <figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">ZK_CLIENT_SSL=<span class="string">"-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory</span></span><br><span class="line"><span class="string">-Dzookeeper.ssl.keyStore.location=/home/li/Documents/zoo-ssl/ssl-conf/keystore.jks </span></span><br><span class="line"><span class="string">-Dzookeeper.ssl.keyStore.password=yourpass</span></span><br><span class="line"><span class="string">-Dzookeeper.ssl.trustStore.location=/home/li/Documents/zoo-ssl/ssl-conf/keystore.jks</span></span><br><span class="line"><span class="string">-Dzookeeper.ssl.trustStore.password=yourpass"</span></span><br><span class="line"><span class="built_in">export</span> CLIENT_JVMFLAGS=<span class="string">"-Xmx<span class="variable">${ZK_CLIENT_HEAP}</span>m <span class="variable">$ZK_CLIENT_SSL</span> <span class="variable">$CLIENT_JVMFLAGS</span>"</span></span><br></pre></td></tr></table></figure></li></ul></li><li><p>add a <em>secureClientPort</em> for <em>conf/zoo.cfg</em></p><pre><code><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">secureClientPort=2281</span><br></pre></td></tr></table></figure></code></pre></li><li><p>restart the zookeeper by</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/zkServer.sh restart</span><br></pre></td></tr></table></figure></li><li><p>test the connection by</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/zkCli.sh -server 127.0.0.1:2281</span><br></pre></td></tr></table></figure><p> now we use the <strong>secureClientPort</strong> 2281</p></li></ul><h3 id="generate-keys-of-jks-and-pem-format"><a href="#generate-keys-of-jks-and-pem-format" class="headerlink" title="generate keys of jks and pem format"></a>generate keys of jks and pem format</h3><ul><li><p>generate keystore.jks example</p> <figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">keytool -genkeypair -<span class="built_in">alias</span> name1 -keyalg RSA -keysize 2048 -keypass password -storepass password -validity 9999 -keystore keystore.jks -ext SAN=DNS:localhost,IP:127.0.0.1</span><br><span class="line"></span><br><span class="line">keytool -genkeypair -<span class="built_in">alias</span> zookeeper-ssl -keyalg RSA -keysize 2048 -keypass aaaaaa -storepass yourpass -validity yourpass -keystore keystore.jks -ext SAN=DNS:localhost,IP:127.0.0.1</span><br></pre></td></tr></table></figure><p> or enter password with interation</p> <figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">keytool 
-genkey -keyalg RSA -<span class="built_in">alias</span> zookeeper-ssl -keystore keystore.jks -validity 9999 -keysize 2048</span><br></pre></td></tr></table></figure><p> You can then use ZooKeeper SSL with this key. The <em>keypass</em> and <em>storepass</em> should be the same; otherwise the storepass will be used.</p></li><li><p>Since Kazoo doesn’t support JKS, we need to convert x.jks to x.pkcs12, and then to x.pem. Convert x.jks to x.pkcs12 by</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">keytool -importkeystore -srckeystore keystore.jks -destkeystore keystore.p12 -srcalias zookeeper-ssl -srcstoretype jks -deststoretype pkcs12</span><br></pre></td></tr></table></figure><p> the source password is the password of x.jks; pay attention to the <em>srcalias</em></p><p> convert x.pkcs12 to x.pem by</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">openssl pkcs12 -in keystore.p12 -out keystore.pem</span><br></pre></td></tr></table></figure></li><li><p>Kazoo: Kazoo implements the ZooKeeper protocol in pure Python, so you don’t need any Python ZooKeeper C bindings installed.</p><p> <a href="https://kazoo.readthedocs.io/en/latest/api/client.html" target="_blank" rel="noopener">https://kazoo.readthedocs.io/en/latest/api/client.html</a></p></li></ul><pre><code><figure class="highlight py"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">zk = KazooClient(hosts=<span class="string">'IP:SEC_PORT'</span>, use_ssl=<span class="literal">True</span>, verify_certs=<span class="literal">False</span>, keyfile=<span class="string">'path/to/keystore.pem'</span>, certfile=<span class="string">'path/to/keystore.pem'</span>, keyfile_password=<span class="string">"password"</span>)</span><br></pre></td></tr></table></figure></code></pre>
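<p>For completeness, a minimal end-to-end sketch with Kazoo; the host, port, paths, and password are the placeholders used above, and <em>verify_certs=False</em> disables certificate verification, so only use it for testing:</p><pre><code>from kazoo.client import KazooClient

# Connect to the secureClientPort configured above (2281).
zk = KazooClient(hosts='127.0.0.1:2281', use_ssl=True, verify_certs=False,
                 keyfile='path/to/keystore.pem', certfile='path/to/keystore.pem',
                 keyfile_password='password')
zk.start()
zk.ensure_path('/ssl-test')             # create the znode if it does not exist
zk.set('/ssl-test', b'hello over TLS')  # write some data over the TLS connection
data, stat = zk.get('/ssl-test')
print(data)                             # b'hello over TLS'
zk.stop()</code></pre>]]></content>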
<summary type="html">
<h3 id="Reference-Links"><a href="#Reference-Links" class="headerlink" title="Reference Links"></a>Reference Links</h3><ul>
<li>Download fro
</summary>
<category term="zookeeper" scheme="http://yoursite.com/tags/zookeeper/"/>
<category term="ssl" scheme="http://yoursite.com/tags/ssl/"/>
</entry>
<entry>
<title>kafka</title>
<link href="http://yoursite.com/2019/06/21/kafka/"/>
<id>http://yoursite.com/2019/06/21/kafka/</id>
<published>2019-06-21T05:13:49.000Z</published>
<updated>2019-06-25T03:08:53.838Z</updated>
<content type="html"><![CDATA[<h2 id="What-is-Kafaka"><a href="#What-is-Kafaka" class="headerlink" title="What is Kafaka"></a>What is Kafaka</h2><blockquote><p>Apache Kafka is an open-source stream-processing software platform developed by the Apache Software Foundation, written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. Its storage layer is essentially a “massively scalable pub/sub message queue architected as a distributed transaction log,” making it highly valuable for enterprise infrastructures to process streaming data. Additionally, Kafka connects to external systems (for data import/export) via Kafka Connect and provides Kafka Streams, a Java stream processing library. - wikipedia</p></blockquote><blockquote><p>先从数据库说起。<br>我们都知道,数据库中的数据,只要应用程序员不主动删除,就可以任意次读写,多少次都行。 数据库还对外提供了很漂亮的接口——SQL ——让程序员操作数据。<br>但是数据库不擅长做<em>通知</em>(人家也不是干这种事的): 例如,程序A向数据库插入了一条数据, 然后程序B想知道这次数据更新,然后做点事情。<br>这种”通知”的事情,一种办法是用轮询实现, 程序B不断地查数据库,看看有没有新数据的到来, 但是这种方法效率很低。<br>更直接的办法是让应用程序之间直接交互,例如程序A调用程序B的RESTful API。<br>但问题是程序B如果暂时不可用,程序A就会比较悲催,怎么办呢?等一会儿再试? 如果程序B还不行,那就循环再试。 调用方的责任太大。<br>于是<em>消息队列(MQ)</em>就出现了,程序A把数据往消息队列中一扔,完事走人,程序B想什么时候读就什么时候读,极其灵活。<br>所以MQ的重要功能就是<em>解耦</em>,让两个系统可以独立运行,异步操作,互不影响。<br>MQ还有一个好处就是允许程序A疯狂地向其中放消息,程序B 可以慢悠悠地处理,这就起到了<em>消峰</em>的效果。<br>可是传统的MQ也有问题,通常情况下,一个消息确认被读取以后,就会被删除。 如果来了一个新的程序C,也想读之前的消息,或者说之前一段时间的消息,传统MQ表示无能无力。<br>能不能把数据库的特点和MQ的特点结合起来呢?<br>消息可以<em>持久化</em>,让多个程序都可以读取,并且还支持发布-订阅这种模式。<br>Kafka出现了,它也是一个消息队列,但是它能保存很长一段时间的消息(因为在硬盘上),队列中每个消息都有一个编号1,2,3,4…. ,这样就支持多个程序来读取。<br>只要记录下每个程序都读到了哪个<em>编号</em>, 这个程序可以断开和Kafka的连接,这个程序可以崩溃,下一次就可以接着读。<br>新的消费者程序可以随意加入读取,不影响其他消费者程序, 是不是很爽?<br>例如:程序B读到了编号为3的消息, 程序C读到了编号为5的消息, 这时候来了一个新的程序D,可以从头开始读。<br>这其实和数据库复制有点像:Kafka维护者“主数据库”, 每个消费者程序都是“从数据库”, 只要记住编号,消息都可以从“主数据库”复制到“从数据库”。<br>当然,Kafka做的远不止于此,它还充分利用硬盘顺序化读取速度快的特性,再加上分区,备份等高可用特性, 一个高吞吐量的分布式发布订阅消息系统就诞生了。 —— <a href="https://mp.weixin.qq.com/s/ghFDVMCacgYuTcG5klxTiw?" target="_blank" rel="noopener">来源</a></p></blockquote><blockquote><p>Every instance of Kafka that is responsible for message exchange is called a Broker. – <a href="https://towardsdatascience.com/getting-started-with-apache-kafka-in-python-604b3250aa05" target="_blank" rel="noopener">source</a><br>每一个kafka实例(或者说每台kafka服务器节点)就是一个broker,一个broker可以有多个topic. 
–<a href="https://www.jianshu.com/p/1b657ac52f89" target="_blank" rel="noopener">来源</a></p></blockquote><h2 id="Kafka-Usage"><a href="#Kafka-Usage" class="headerlink" title="Kafka Usage"></a>Kafka Usage</h2><p><a href="https://kafka.apache.org/quickstart" target="_blank" rel="noopener">QuickStart</a></p><ul><li><p>start the server</p><ul><li><p>first start the zookeeper</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/zookeeper-server-start.sh config/zookeeper.properties</span><br></pre></td></tr></table></figure></li><li><p>then start the kafka</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-server-start.sh config/server.properties</span><br></pre></td></tr></table></figure></li></ul></li><li><p>stop the server</p><ul><li><p>first stop the kafka</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-server-stop.sh config/server.properties</span><br></pre></td></tr></table></figure></li><li><p>then start the zookeeper</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/zookeeper-server-stop.sh config/zookeeper.properties</span><br></pre></td></tr></table></figure><p>和start顺序相反</p></li></ul></li><li><p>new a topic </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic topic1</span><br></pre></td></tr></table></figure></li><li><p>list all the topic</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-topics.sh --list --bootstrap-server localhost:9092</span><br></pre></td></tr></table></figure></li><li><p>describe topics</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-topics.sh --zookeeper localhost:2181 --describe</span><br><span class="line">or</span><br><span class="line">bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe</span><br></pre></td></tr></table></figure></li><li><p>start a producer </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic1</span><br></pre></td></tr></table></figure></li><li><p>start a consumer, read data from beginning</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic1 --from-beginning</span><br></pre></td></tr></table></figure></li><li><p>describe all the consumers</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 
--list</span><br></pre></td></tr></table></figure></li><li><p>describe broker</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic topic1</span><br></pre></td></tr></table></figure></li><li><p>delete consumer group</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group console-consumer-44036</span><br></pre></td></tr></table></figure></li><li><p>show offset of current group</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group console-consumer-59213 --describe</span><br></pre></td></tr></table></figure></li><li><p>produce under broker</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic1</span><br></pre></td></tr></table></figure></li></ul><h2 id="Experiment-on-kafka-retention-time-offset-and-group"><a href="#Experiment-on-kafka-retention-time-offset-and-group" class="headerlink" title="Experiment on kafka retention time, offset, and group"></a>Experiment on kafka retention time, offset, and group</h2><p>the retention time of Kafaka <em>Message</em> is configurable, the default retention of kafka is 7 days, <em>log.retention.hours = 168</em>, the priority of retention.ms, retention.minutes,retention.hours decrease progressively.</p><h3 id="summary"><a href="#summary" class="headerlink" title="summary"></a>summary</h3><p>During retention period, the message stored in disks, every consumer group can consume the message, if the retention period expired, neither consumer groups can consumer the message, the offset point to the end of the message queue.</p><p>During the retention period, if one message consumed by groupA, if groupA want to consum this message, it can’t, but if groupB did’t consume this message before, groupB can consume this message.</p><p>If a new message produced, the “log end offset” will plus one.</p><p>The current offset varies by differnt consumers, the consumer can user the <em>seek</em> function to consume any offset of messages, after the retention period, the current offset will point to the end, until new message comes.</p><h3 id="Experiment"><a href="#Experiment" class="headerlink" title="Experiment"></a>Experiment</h3><p>first we create a new topic time3min</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic time3min</span><br></pre></td></tr></table></figure><p>then, alter the default retention time of time3min to 180000ms</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">:~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-configs.sh --zookeeper localhost:2181 --alter --entity-name time3min --entity-type topics --add-config 
retention.ms=180000</span><br></pre></td></tr></table></figure><p>then we start a producer, produce some messages</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-producer.sh --broker-list localhost:9092 --topic time3min</span><br><span class="line">>1</span><br><span class="line">>2</span><br><span class="line">>3</span><br><span class="line">>4</span><br><span class="line">>5</span><br><span class="line">>6</span><br><span class="line">>7</span><br></pre></td></tr></table></figure><p>within 3 min period, we start a consumer, set time to eraliest by <em>–from-beginning</em></p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">:~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic time3min --from-beginning</span><br><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">Processed a total of 7 messages</span><br></pre></td></tr></table></figure><p>after 3 mins, we create a new consumer, now we can’t consume any message</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">:~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic time3min --from-beginning</span><br><span class="line">Processed a total of 0 messages</span><br></pre></td></tr></table></figure><p>then we produce some new message</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-producer.sh --broker-list localhost:9092 --topic time3min</span><br><span class="line">>8-1</span><br><span class="line">>8-2</span><br><span class="line">>8-3</span><br></pre></td></tr></table></figure><p>then we start a new consumer, we can consume messages. 
but even though we set --from-beginning, it can only consume the messages produced within the last 3 minutes</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic time3min --from-beginning</span><br><span class="line">8-1</span><br><span class="line">8-2</span><br><span class="line">8-3</span><br></pre></td></tr></table></figure><p>After the 3-minute period, we can’t consume</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic time3min --from-beginning</span><br><span class="line">Processed a total of 0 messages</span><br></pre></td></tr></table></figure><p>Then we continue to produce messages</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-producer.sh --broker-list localhost:9092 --topic time3min</span><br><span class="line">>9-1</span><br><span class="line">>9-2</span><br><span class="line">>9-3</span><br></pre></td></tr></table></figure><p>Within 3 minutes, we use group yi1 to consume the data</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic time3min --from-beginning --group yi1</span><br><span class="line">9-1</span><br><span class="line">9-2</span><br><span class="line">9-3</span><br><span class="line">Processed a total of 3 messages</span><br></pre></td></tr></table></figure><p>Within 3 minutes, we use group yi1 again to consume the data</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic time3min --from-beginning --group yi1</span><br><span class="line">Processed a total of 0 messages</span><br></pre></td></tr></table></figure><p>Then we use a new consumer group, and we can consume the messages</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic time3min --from-beginning --group yi2</span><br><span class="line">9-1</span><br><span class="line">9-2</span><br><span class="line">9-3</span><br><span class="line">^CProcessed a total of 3 messages</span><br></pre></td></tr></table></figure><p>After 3 minutes, we use a new group yi3 to consume messages; there are no messages</p><figure
class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">li@yl-machine:~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic time3min --from-beginning --group yi3</span><br><span class="line">^CProcessed a total of 0 messages</span><br></pre></td></tr></table></figure><p>we can check the offset of yi3</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">:~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group yi3 --describe</span><br><span class="line">Consumer group 'yi3' has no active members.</span><br><span class="line"></span><br><span class="line">TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID</span><br><span class="line">time3min 0 15 15 0 -</span><br></pre></td></tr></table></figure><p>we use the producer to produce two new message, then check the offset of yi3</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group yi3 --describe</span><br><span class="line">Consumer group 'yi3' has no active members.</span><br><span class="line"></span><br><span class="line">TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID</span><br><span class="line">time3min 0 15 17 2 - - -</span><br></pre></td></tr></table></figure><p>when yi3 consume this 2 messages, offset point to end</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-console-consumer.sh –bootstrap-server localhost:9092 --topic time3min --from-beginning --group yi3</span><br><span class="line">99</span><br><span class="line">999</span><br><span class="line">^CProcessed a total of 2 messages</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">~/Downloads/kafka_2.11-2.0.0/bin$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group yi3 –describe</span><br><span class="line"></span><br><span class="line">Consumer group 'yi3' has no active members.</span><br><span class="line"></span><br><span class="line">TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID</span><br><span class="line">time3min 0 17 17 0 - - -</span><br></pre></td></tr></table></figure><h2 id="Message-ACK-and-Load-Balance"><a href="#Message-ACK-and-Load-Balance" class="headerlink" title="Message ACK and Load Balance"></a>Message ACK and Load Balance</h2><p><a href="https://www.jianshu.com/p/1b657ac52f89" target="_blank" 
rel="noopener">https://www.jianshu.com/p/1b657ac52f89</a></p><p><a href="https://kafka.apache.org/documentation/" target="_blank" rel="noopener">https://kafka.apache.org/documentation/</a></p>]]></content>
<summary type="html">
<h2 id="What-is-Kafaka"><a href="#What-is-Kafaka" class="headerlink" title="What is Kafaka"></a>What is Kafaka</h2><blockquote>
<p>Apache Ka
</summary>
<category term="kafka" scheme="http://yoursite.com/tags/kafka/"/>
<category term="message queue" scheme="http://yoursite.com/tags/message-queue/"/>
<category term="database" scheme="http://yoursite.com/tags/database/"/>
</entry>
</feed>