## Terraform AWS Cloud Control provider reaches general availability
https://www.infoq.com/news/2024/06/hashicorp-aws-cloud-control/
AWS Cloud Control Terraform Provider Enables Quicker Access to AWS Features
Jun 03, 2024
HashiCorp has moved the AWS Cloud Control (AWSCC) provider to general availability. The AWSCC provider is automatically generated from the Cloud Control API published by AWS, meaning new AWS features can be supported in Terraform as soon as they are released. Originally released in 2021 as a tech preview, the move to version 1.0 includes several new features, including sample configurations and improved schema-level documentation.
AWSCC is built on top of the AWS Cloud Control API. The Cloud Control API provides CRUDL (create, read, update, delete, and list) operations to use with AWS cloud resources. Any resource type published to the CloudFormation Public Registry has a standard JSON schema that can be used with this API.
As part of this release, there are now over 270 resources with sample configurations. For example, awscc_ec2_key_pair allows for specifying a key pair to use with an EC2 instance. An existing key pair can be specified in the PublicKeyMaterial property; omitting that property will generate a new key pair.
resource "awscc_ec2_key_pair" "example" {
key_name = "example"
public_key_material = ""
tags = [{
key = "Modified By"
value = "AWSCC"
}]
}
In addition, more than 75 resources now have improved attribute-level documentation: detailed descriptions of each attribute, context about how it is used, and its accepted values.
The AWSCC is not meant as a replacement for the standard AWS provider. As noted by Aurora Chun, Product Marketing Manager at HashiCorp, "using the AWSCC and AWS providers together equips developers with a large catalog of resources across established and new AWS services." The providers can be used in conjunction to provision resources:
# Use the AWS provider to provision an S3 bucket
resource "aws_s3_bucket" "example" {
bucket_prefix = "example"
}
# Use the AWSCC provider to provision an Amazon Personalize dataset
resource "awscc_personalize_dataset" "interactions" {
...
dataset_import_job = {
data_source = {
data_location = aws_s3_bucket.interactions_import.bucket
}
}
}
The AWSCC provider is generated from the latest CloudFormation schemas and releases weekly with all new services added to the Cloud Control API. There are some resources in the CloudFormation schema that are not compatible with the AWSCC provider. A full list of these can be found on GitHub.
Within Azure, the AzAPI provider enables similar support for the Azure ARM (Azure Resource Manager) REST APIs. For GCP there isn't an equivalent Terraform provider, but CloudGraph provides a similar API experience to AWS Cloud Control. CloudGraph has support for AWS, Azure, GCP, and Kubernetes.
The Terraform AWS Cloud Control provider is available for download now from the Terraform Registry. The AWSCC provider requires Terraform CLI version 1.0.7 or higher. The source code for the provider is available on GitHub and is licensed under the MPL-2.0 license. Additional information can be found within the provider document and the tutorial.
## Traefik 3.0 Reverse Proxy Rolls Out With Major Enhancements
https://linuxiac.com/traefik-3-0-reverse-proxy/
Traefik 3.0, a cloud-native HTTP reverse proxy and load balancer, brings stable HTTP/3 support, OpenTelemetry & Wasm integration, and more.
Yes, there are easier-to-use solutions in the world of reverse proxies, like Nginx Proxy Manager or Caddy, for example. However, when we talk about an enterprise that is tightly integrated with the needs of DevOps and Kubernetes professionals, Traefik is the name that comes out on top.
Traefik 3.0 also extends its observability features, incorporating OpenTelemetry to provide state-of-the-art tooling for metrics and tracing, supporting a seamless transition from older systems like OpenCensus and OpenTracing.
The new release also brings several Kubernetes-related updates, including support for cross-namespace references in Gateway API and the ability to handle middleware in filters for better traffic management. Other Kubernetes enhancements include the addition of the Gateway status address and the removal of deprecated APIs.
Traefik is often compared to Envoy.
## DevOps: How Container Networking Works: a Docker Bridge Network From Scratch
https://labs.iximiuz.com/tutorials/container-networking-from-scratch
## DevOps Git 101
https://www.youtube.com/watch?v=aolI_Rz0ZqY
https://gitbutler.com/
## /bin/pash parallel shells
Data Parallel Shell scripting
https://github.com/binpash
PaSh aims at the correct and automated parallelization of POSIX shell scripts. Broadly, PaSh includes three components: (1) a compiler that, given as input a POSIX shell script, emits a POSIX shell script that includes explicit data-parallel fragments for which PaSh has deemed such parallelization semantics-preserving, (2) a set of PaSh-related runtime primitives for supporting the execution of the parallel script fragments, available in the PATH as normal commands, and (3) a crowd-sourced library of annotations characterizing several properties of common Unix/Linux commands relevant to parallelization.
# 1000+ DevOps Bash Scripts (AWS, GCP, Kubernetes, ...) [[{101]]
* AWS, GCP, Kubernetes, Docker, CI/CD, APIs, SQL, PostgreSQL, MySQL,
Hive, Impala, Kafka, Hadoop, Jenkins, GitHub, GitLab, BitBucket,
Azure DevOps, TeamCity, Spotify, MP3, LDAP, Code/Build Linting, pkg
mgmt for Linux, Mac, Python, Perl, Ruby, NodeJS, Golang, Advanced
dotfiles: .bashrc, .vimrc, .gitconfig, .screenrc, tmux..
* <https://github.com/HariSekhon/DevOps-Bash-tools>
_____________________________________
# spell-checking.sh
cat f1.md f2.md |
tr A-Z a-z |
tr -cs A-Za-z '\n' |
sort |
uniq |
comm -13 dict.txt - > out
cat out | wc -l | sed 's/$/ misspelled words!/'
E.g.:
$ ./demo-spell.sh # no parallel
$ $PASH_TOP/pa.sh -w 2 -d 1 --log_file pash.log demo-spell.sh # 2x parallelism
## DevOps Git: merging at scale:
* GitHub uses a merge queue to ship hundreds of changes every day
* <https://github.blog/2024-03-06-how-github-uses-merge-queue-to-ship-hundreds-of-changes-every-day/>
## Enhancing Istio Operations with Kong Istio Gateway
* <https://thecloudblog.net/post/enhancing-istio-operations-with-kong-istio-gateway/>
## DevOps Grafana Loki 3.0 Released with Native OpenTelemetry Support
* <https://linuxiac.com/grafana-loki-3-0-released-with-native-opentelemetry-support/>
Grafana Loki 3.0 Released with Native OpenTelemetry Support
OpenTelemetry is a set of tools, APIs, and SDKs used to collect, analyze, and export telemetry data from software applications and services. Native support improves the log ingestion and querying experience.
## Container Networking [[{containerization.networking.101]]
* <https://jvns.ca/blog/2016/12/22/container-networking/> By Julia Evans
> """ There are a lot of different ways you can network containers
> together, and the documentation on the internet about how it works is
> often pretty bad. I got really confused about all of this, so I'm
> going to try to explain what it all is in laymen's terms. """
> ... *what even is container networking?*
> .. you have two main options for running apps:
> 1. run app in host network namespace. (normal networking)
> "host_ip":"app_port"
> 2. run the program in its own *network namespace*:
> It turns out that this problem of how to connect two programs in
> containers together has a ton of different solutions. [[{doc_has.keypoint}]]
1. "every container gets an IP". (k8s requirement)
```
| "172.16.0.1:8080" // Tomcat container app 1
| "172.16.0.2:5432" // PostgreSQL container app 1
| "172.17.0.1:8080" // Tomcat container app 2
| ...
| └───────┬───────┘
| any other program in the cluster will target those IP:port
| Instead of single-IP:"many ports" we have "many IPs":"some ports"
```
Q: How to get many IPs in a single host?
- Host IP: 172.9.9.9
- Container private IP: 10.4.4.4
- To route from 10.4.4.4 to 172.9.9.9:
1. Alt1: Configure Linux routing tables
```
| $ sudo ip route add 10.4.4.0/24 via 172.23.1.1 dev eth0
```
2. Alt2: Use AWS VPC Route tables
3. Alt3: Use Azure ...
2. Encapsulating to other networks:
```
| LOCAL NETWORK REMOTE NETWORK
| (encapsulation)
| IP: 10.4.4.4 IP: 172.9.9.9
| TCP stuff (extra wrapper stuff)
| HTTP stuff IP: 10.4.4.4
| TCP stuff
| HTTP stuff
```
2 different ways of doing encapsulation:
1. "ip-in-ip": add extra IP-header on top "current" IP header.
```
| MAC: 11:11:11:11:11:11
| IP: 172.9.9.9
| IP: 10.4.4.4
| TCP stuff
| HTTP stuff
| Ex:
| $ sudo ip tunnel add mytun mode ipip \ <·· Create tunnel "mytun"
| remote 172.9.9.9 local 10.4.4.4 ttl 255
| sudo ifconfig mytun 10.42.1.1
| $ sudo route add -net 10.42.2.0/24 dev mytun <·· set up a route table
| $ sudo route list
```
2. "vxlan": take whole packet (including the MAC address) and wrap
it inside a UDP packet. Ex:
```
| MAC address: 11:11:11:11:11:11
| IP: 172.9.9.9
| UDP port 8472 (the "vxlan port")
| MAC address: ab:cd:ef:12:34:56
| IP: 10.4.4.4
| TCP port 80
| HTTP stuff
```
* Every container networking "thing" runs some kind of daemon program
on every box which is in charge of adding routes to the route table
for automatic route configuration. Alternatives include [[{doc_has.keypoint}]]
1. Alt1: routes are in etcd cluster, and program talks to the
etcd cluster to figure out which routes to set.
2. Alt2: use BGP protocol to gossip to each other about routes,
and a daemon (BIRD) that listens for BGP messages on
every box.
* Q: How does that packet actually end up getting to your container program?
1. bridge networking
1. Docker/... creates fake (virtual) network interfaces for every
single one of your containers with a given IP address.
2. The fake interfaces are bridges to a real one.
2. Flannel:
- Supports vxlan (encapsulate all packets) and
host-gw (just set route table entries, no encapsulation)
- The daemon that sets the routes gets them *from an etcd cluster*.
3. Calico:
- Supports ip-in-ip encapsulation and
"regular" mode, (just set route table entries, no encaps.)
- The daemon that sets the routes gets them *using BGP messages*
from other hosts. (etcd is not used for distributing routes).
[[containerization.networking.101}]]
## Testcontainers: [[{qa.testing,dev_language.java,qa,PM.TODO]]
* <https://www.testcontainers.org/#who-is-using-testcontainers>
* Testcontainers is a Java library that supports JUnit tests,
providing lightweight, throwaway instances of common databases,
Selenium web browsers, or anything else that can run in a Docker
container.
- Testcontainers make the following kinds of tests easier:
- Data access layer integration tests: use a containerized instance
of a MySQL, PostgreSQL or Oracle database to test your data access
layer code for complete compatibility, but without requiring complex
setup on developers' machines and safe in the knowledge that your
tests will always start with a known DB state. Any other database
type that can be containerized can also be used.
- Application integration tests: for running your application in a
short-lived test mode with dependencies, such as databases, message
queues or web servers.
- UI/Acceptance tests: use containerized web browsers, compatible
with Selenium, for conducting automated UI tests. Each test can get a
fresh instance of the browser, with no browser state, plugin
variations or automated browser upgrades to worry about. And you get
a video recording of each test session, or just each session where
tests failed.
- Much more!
Testing Modules
- Databases
JDBC, R2DBC, Cassandra, CockroachDB, Couchbase, Clickhouse,
DB2, Dynalite, InfluxDB, MariaDB, MongoDB, MS SQL Server, MySQL,
Neo4j, Oracle-XE, OrientDB, Postgres, Presto
- Docker Compose Module
- Elasticsearch container
- Kafka Containers
- Localstack Module
- Mockserver Module
- Nginx Module
- Apache Pulsar Module
- RabbitMQ Module
- Solr Container
- Toxiproxy Module
- Hashicorp Vault Module
- Webdriver Containers
Who is using Testcontainers?
- ZeroTurnaround - Testing of the Java Agents, micro-services, Selenium browser automation
- Zipkin - MySQL and Cassandra testing
- Apache Gora - CouchDB testing
- Apache James - LDAP and Cassandra integration testing
- StreamSets - LDAP, MySQL Vault, MongoDB, Redis integration testing
- Playtika - Kafka, Couchbase, MariaDB, Redis, Neo4j, Aerospike, MemSQL
- JetBrains - Testing of the TeamCity plugin for HashiCorp Vault
- Plumbr - Integration testing of data processing pipeline micro-services
- Streamlio - Integration and Chaos Testing of our fast data platform based on Apache Pulsar, Apache BookKeeper and Apache Heron.
- Spring Session - Redis, PostgreSQL, MySQL and MariaDB integration testing
- Apache Camel - Testing Camel against native services such as Consul, Etcd and so on
- Infinispan - Testing the Infinispan Server as well as integration tests with databases, LDAP and KeyCloak
- Instana - Testing agents and stream processing backends
- eBay Marketing - Testing for MySQL, Cassandra, Redis, Couchbase, Kafka, etc.
- Skyscanner - Integration testing against HTTP service mocks and various data stores
- Neo4j-OGM - Testing new, reactive client implementations
- Lightbend - Testing Alpakka Kafka and support in Alpakka Kafka Testkit
- Zalando SE - Testing core business services
- Europace AG - Integration testing for databases and micro services
- Micronaut Data - Testing of Micronaut Data JDBC, a database access toolkit
- Vert.x SQL Client - Testing with PostgreSQL, MySQL, MariaDB, SQL Server, etc.
- JHipster - Couchbase and Cassandra integration testing
- wescale - Integration testing against HTTP service mocks and various data stores
- Marquez - PostgreSQL integration testing
- Transferwise - Integration testing for different RDBMS, kafka and micro services
- XWiki - Testing XWiki under all supported configurations
- Apache SkyWalking - End-to-end testing of the Apache SkyWalking,
and plugin tests of its subproject, Apache SkyWalking Python, and of
its eco-system built by the community, like SkyAPM NodeJS Agent
- jOOQ - Integration testing all of jOOQ with a variety of RDBMS
[[}]]
## docker-compose: dev vs prod [[{]]
https://stackoverflow.com/questions/60604539/how-to-use-docker-in-the-development-phase-of-a-devops-life-cycle/60780840#60780840
Modify your Compose file for production
[[}]]
## CRIU.org: Container Live Migration [[{]]
<https://criu.org/Main_Page>
CRIU: project to implement checkpoint/restore functionality for Linux.
Checkpoint/Restore In Userspace, or CRIU (pronounced kree-oo, IPA:
/krɪʊ/, Russian: криу), is a Linux software. It can freeze a
running container (or an individual application) and checkpoint its
state to disk. The data saved can be used to restore the application
and run it exactly as it was during the time of the freeze. Using
this functionality, application or container live migration,
snapshots, remote debugging, and many other things are now possible.
Used for example to bootstrap JVMs in millisecs (vs secs) [[performance,dev_stack.java]]
</JAVA/java_map.html#?jvm_app_checkpoint>
[[}]]
## ContainerCoreInterceptor: [[{troubleshooting,PM.TODO]]
https://github.com/AmadeusITGroup/ContainerCoreInterceptor
Core_interceptor can be used to handle core dumps in a dockerized environment.
It listens on the local docker daemon socket for events. When it
receives a die event it checks if the dead container produced any
core dump or java heap dump.
[[}]]
# KVM Kata containers: [[{PM.TODO]]
<https://katacontainers.io/>
- Security: Runs in a dedicated kernel, providing isolation of
network, I/O and memory and can utilize hardware-enforced isolation
with virtualization VT extensions.
- Compatibility: Supports industry standards including OCI container
format, Kubernetes CRI interface, as well as legacy virtualization
technologies.
- Performance: Delivers consistent performance as standard Linux
containers; increased isolation without the performance tax of
standard virtual machines.
- Simplicity: Eliminates the requirement for nesting containers
inside full blown virtual machines; standard interfaces make it easy
to plug in and get started.
[[}]]
## avoid "sudo" docker [[{containerization.docker]]
$ sudo usermod -a -G docker "myUser"
$ newgrp docker    # take new group without re-login
[[}]]
## https://github.com/dbohdan/structured-text-tools/blob/master/sql-based.md
https://github.com/dbohdan/structured-text-tools#sql-based-tools
## GraphDash: web-based dashboard built on graphs and their metadata. [[{]]
https://github.com/AmadeusITGroup/GraphDash
[[}]]
## Use libguestfs to manage virtual machine disk images [[{]]
https://www.redhat.com/sysadmin/libguestfs-manage-vm
[[}]]
## workflow-cps-global-lib-http-plugin [[{jenkins.jenkinsfile]]
https://github.com/AmadeusITGroup/workflow-cps-global-lib-http-plugin
Jenkins normally retrieves shared libraries through an SCM, such as Git.
The goal of this plugin is to provide another way to retrieve
shared libraries via the @Library declaration in a Jenkinsfile:
over HTTP, from an artifact repository.
This is a way to separate two concerns: source code (SCM) and built
artefacts (binaries). Built artefacts are immutable, tagged and often
stored on a different kind of infrastructure. Since pipelines can be
used to make production loads, it makes sense to host the libraries
on a server with a production-level SLA, for example. You can also
make sure that your artefact repository is close to your pipelines
and shares the same SLA. Having your Jenkins and your artefact
repository close limits latency and network issues.
[[}]]
## GIT: Part 3: Context from commits [[{]]
https://alexwlchan.net/a-plumbers-guide-to-git/3-context-from-commits/
[[}]]
## https://github.blog/author/dstolee/ [[{git,scalability]]
Git’s database internals V: scalability
This fifth and final part of our blog series exploring Git's
internals shows several strategies for scaling your Git repositories
that match related database sharding techniques.
[[}]]
## 4 lines of code to improve your Ansible play [[{ansible,qa.billion_dollar_mistake]]
With a tiny bit of effort, you can help the next person by not just
mapping the safe path but leaving warnings about the dangers
[[}]]
## https://docs.docker.com/network/bridge/
## Gerrit: Git server with voting mechanism [[{]]
https://docs.google.com/presentation/d/1C73UgQdzZDw0gzpaEqIC6SPujZJhqamyqO1XOHjH-uk/edit#slide=id.g4d6c16487b_1_844
- Submit Type / Submit Strategy:
- FAST_FORWARD_ONLY:
Submit fails if fast-forward is not possible.
- MERGE_IF_NECESSARY:
If fast-forward is not possible, a merge commit is created.
- REBASE_IF_NECESSARY:
If fast-forward is not possible, the current patch set is automatically
rebased (creates a new patch set which is submitted).
- MERGE_ALWAYS:
A merge commit is always created, even if fast-forward is possible.
- REBASE_ALWAYS:
The current patch set is always rebased, even if fast-forward is possible.
For all rebased commits some additional footers will be added (Reviewed-On, Reviewed-By, Tested-By).
- CHERRY_PICK:
The change is cherry-picked. This ignores change dependencies.
For all cherry-picked commits some additional footers will be added
(Reviewed-On, Reviewed-By, Tested-By).
- ALLOW CONTENT MERGES:
whether Gerrit should do a content merge when the same files have been touched in both branches
[[}]]
## Building rootless containers for JavaScript front ends [[{containerization.security.101]]
https://developers.redhat.com/blog/2021/03/04/building-rootless-containers-for-javascript-front-ends/?sc_cid=7013a000002vsMVAAY
[[}]]
## Git: What's new in 2.31
https://github.blog/2021-03-15-highlights-from-git-2-31/
## https://space.sh, Server apps and automation in a nutshell [[{PM.low_code]]
Very, very non-intrusive: to manage servers remotely, Space
SSHes into those servers to run your tasks, never uploads anything
to the server, and has no dependencies other than a POSIX shell
(ash/dash/bash 3.4+).
- Used as the base for simplenetes.
[[}]]
## dolt: Git + SQL!!!
https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-add.md
https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt.md
https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-blame.md
https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-branch.md
https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-checkout.md
https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-commit.md
- Dolt is a SQL database that you can fork, clone, branch, merge, push
and pull just like a git repository. Connect to Dolt just like any
MySQL database to run queries or update the data using SQL commands.
Use the command line interface to import CSV files, commit your
changes, push them to a remote, or merge your teammate's changes.
## Fetch gitignore boilerplates.
https://github.com/tldr-pages/tldr/blob/master/pages/common/gibo.md
More info: https://github.com/simonwhitaker/gibo.
## https://github.com/tldr-pages/tldr/tree/master/pages/common/git-*.md
git-stage git-am.md git-annex.md git-annotate.md git-annotate
git-apply.md git-apply git-archive.md git-archive git-bisect.md
git-blame.md git-branch.md git-bugreport.md git-bugreport
git-bundle.md git-bundle git-cat-file.md git-check-attr.md
git-check-attr git-check-ignore.md git-check-mailmap.md
git-check-mailmap git-check-ref-format.md git-check-ref-format
git-checkout-index.md git-checkout-index git-checkout.md
git-cherry-pick.md git-cherry-pick git-cherry.md git-cherry
git-clean.md git-clone.md git-clone git-column.md git-column
git-commit-graph.md git-commit-graph git-commit-tree.md git-commit.md
git-commit git-config.md git-count-objects.md git-count-objects
git-credential.md git-credential git-describe.md git-diff.md git-diff
git-difftool.md git-difftool git-fetch.md git-flow.md
git-for-each-repo.md git-for-each-repo git-format-patch.md
git-fsck.md git-gc.md git-grep.md git-grep git-help.md git-ignore.md
git-imerge.md git-init.md git-instaweb.md git-lfs.md git-log.md
git-ls-files.md git-ls-files git-ls-remote.md git-ls-remote
git-ls-tree.md git-maintenance.md git-maintenance git-merge.md
git-merge git-mergetool.md git-mergetool git-mv.md git-notes.md
git-pr.md git-pr git-prune.md git-pull.md git-push.md git-rebase.md
git-reflog.md git-reflog git-remote.md git-remote git-repack.md
git-replace.md git-request-pull.md git-reset.md git-restore.md
git-restore git-rev-list.md git-rev-parse.md git-revert.md git-rm.md
git-send-email.md git-shortlog.md git-show-branch.md git-show-branch
git-show-ref.md git-show-ref git-show.md git-sizer.md git-stage.md
git-stage git-stash.md git-stash git-status.md git-stripspace.md
git-stripspace git-submodule.md git-subtree.md git-svn.md
git-switch.md git-switch git-tag.md git-update-index.md
git-update-index git-update-ref.md git-var.md git-var git-worktree.md
git.md
## Analyze nginx configuration files.
https://github.com/yandex/gixy.
https://github.com/tldr-pages/tldr/blob/master/pages/common/gixy.md
## Render markdown on terminal.
https://github.com/charmbracelet/glow
## Gnomon: pipeline utility to prepend timestamp information to another command's output. [[{101]]
https://github.com/paypal/gnomon
https://github.com/tldr-pages/tldr/blob/master/pages/common/gnomon.md
Useful for long-running processes where you'd like a historical
record of what's taking so long. [[}]]
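gnomon itself is a Node.js tool and may not be installed everywhere; the core idea — prefixing each line flowing through a pipe with a timestamp — can be approximated in plain shell. The ts_lines function below is a made-up name for illustration, not part of gnomon (which additionally shows per-line elapsed time and highlights slow lines):

```shell
#!/usr/bin/env bash
# Minimal stand-in for gnomon: prefix every line read from stdin
# with the current wall-clock time.
ts_lines() {
  while IFS= read -r line; do
    printf '[%s] %s\n' "$(date +%H:%M:%S)" "$line"
  done
}

# Usage: pipe any long-running command through it, e.g.:
printf 'step one\nstep two\n' | ts_lines
```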
## gource: [[{]]
Renders an animated tree diagram of Git, SVN, Mercurial and Bazaar repositories.
https://gource.io/
Shows files and directories being created and modified.
[[}]]
## Molecule helps test Ansible roles.
https://github.com/tldr-pages/tldr/blob/master/pages/common/molecule.md
More information: https://molecule.readthedocs.io.
## Git-big: cli extension for managing Write Once Read Many (WORM) files. [[{git,scalability]]
https://github.com/vertexai/git-big
$ git big init ← Init repo
$ git big add bigfile.iso ← Add big file, sha256 hash generated&recorded in the index
$ git big status
→ ...
→[ W C ] 993328d6 bigfile.iso
| | └-- Depot KO
| └---- Cache OK
└------ Working dir OK
$ cat .gitbig
{
"files": { "bigfile.iso": "e99f32a..." },
"version": 1
}
$ ls -l bigfile.iso ← original big file is now a symlink
... bigfile.iso -> .gitbig-anchors/99/33/e99f32a...
└─────────────┬────────────────┘
Final file is read-only
$ git big push ←*Push pending big files to depot*
# We can see the big file has been archived in the depot
$ git big status
→ ...
→ [ W C D ] 993328d6 bigfile.iso
| | └-- Depot (Remote repo) OK
| └---- Cache OK
└------ Working dir OK
$ git commit -m "Add bigfile.iso" ←*Commit changes*
...
$ git push origin master ← push upstream
In another machine:
$ git clone ...
$ cd repo
$ git big status
→ [ D ] 993328d6 bigfile.iso
| | └-- Depot (Remote repo) OK ← Only in depot after clone
| └---- Cache KO
└------ Working dir KO
$ git big pull
Pulling object: e99f32a...
$ ls -l $(readlink bigfile.iso)
-r--r--r-- ... .gitbig-anchors/99/33/e99f32a...
$ git big status
...
→ [ W C D ] 993328d6 bigfile.iso
| | └-- Depot (Remote repo) OK
| └---- Cache OK ← Cache and working dir OK after pull
└------ Working dir OK
[[}]]
## set -o pipefail bash flag [[{bash,qa.error_control]]
https://linuxtect.com/make-bash-shell-safe-with-set-euxo-pipefail/
By default, if a command in a pipe fails, the pipe continues to
execute and the pipeline reports only the last command's exit status;
set -o pipefail makes the whole pipeline fail instead.
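A minimal bash sketch of the difference (the subshells only isolate the option setting):

```shell
#!/usr/bin/env bash
# Exit status of `false | true` WITHOUT pipefail: only the last command
# (true) counts, so the failure of `false` is silently ignored.
no_pf=$(bash -c 'false | true; echo $?')
echo "without pipefail: $no_pf"    # → 0

# WITH pipefail the pipeline's status is the last non-zero status of
# any command in it, so the same pipeline now reports failure.
pf=$(bash -c 'set -o pipefail; false | true; echo $?')
echo "with pipefail: $pf"          # → 1
```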
[[}]]
## volatile overlay mounts and containers: [[{containerization]]
https://www.redhat.com/sysadmin/container-volatile-overlay-mounts
Recent versions of Podman, Buildah, and CRI-O have started to take
advantage of a new kernel feature, volatile overlay mounts. This
feature allows you to mount an overlay file system with a flag that
tells it not to sync to the disk.
https://sysadmin.prod.acquia-sites.com/sysadmin/overlay-mounts
Speed up container builds with overlay mounts
How Podman can speed up builds for multiple distributions by sharing the host's metadata.
Overlay mounts help to address a challenge we run into when we have
several containers on a single host. The basic problem is that every
time you run dnf or yum inside a container, the container downloads
and processes the metadata of all the repositories. To address this,
we added an advanced volume mount to allow all of the containers to
share the host's metadata. This approach avoids repeating the
download and processing for each container. I previously wrote a blog
post introducing the concept of overlay mounts inside of builds.
[[}]]
## Git merge strategies: [[{]]
https://git-scm.com/docs/merge-strategies
resolve: can only resolve two heads (the current branch and one other pulled branch) using a 3-way merge algorithm.
recursive:
ours:
theirs:
patience:
diff-algorithm=[patience|minimal|histogram|myers]
ignore-space-change
ignore-all-space
ignore-space-at-eol
ignore-cr-at-eol
renormalize
no-renormalize
no-renames
find-renames[=n]
rename-threshold=n
subtree[=path]
octopus
subtree
- git merge strategies: (From Bitbucket UI)
- Merge commit --no-ff
Always create a new merge commit and update the target branch to it,
even if the source branch is already up to date with the target
branch.
- Fast-forward --ff
If the source branch is out of date with the target branch, create a
merge commit. Otherwise, update the target branch to the latest
commit on the source branch.
- Fast-forward only --ff-only
If the source branch is out of date with the target branch, reject
the merge request. Otherwise, update the target branch to the latest
commit on the source branch.
- Rebase and merge, rebase + merge --no-ff
Rebase commits from the source branch onto the target branch,
creating a new non-merge commit for each incoming commit, and create
a merge commit to update the target branch.
- Rebase and fast-forward, rebase + merge --ff-only
Rebase commits from the source branch onto the target branch,
creating a new non-merge commit for each incoming commit, and
fast-forward the target branch with the resulting commits.
- Squash, --squash
Combine all commits into one new non-merge commit on the target branch.
- Squash, fast-forward only, --squash --ff-only
If the source branch is out of date with the target branch, reject
the merge request. Otherwise, combine all commits into one new
non-merge commit on the target branch.
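The Bitbucket strategies above map onto plain `git merge` flags (`--no-ff`, `--ff-only`, `--squash`); a throwaway-repo sketch of `--ff-only` fast-forwarding when the target branch has not diverged:

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name  demo
echo one > f && git add f && git commit -qm one
git checkout -qb feature
echo two >> f && git commit -qam two   # feature is 1 commit ahead
git checkout -q -                      # target branch, not diverged
git merge --ff-only feature            # fast-forwards, no merge commit
git rev-parse HEAD feature             # both lines print the same hash
```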
[[}]]
## GitLab CI to publish HTML pages: [[{]]
https://roneo.org/en/framagit-render-html/
You can render HTML using the GitLab CI. This doc was written for
Framagit, the GitLab instance of the French non-profit Framasoft,
which uses GitLab Pages. You just need to adapt the path.
A service called GitHack seems to offer the same, though I haven't
tested it.
[[}]]
## bash "$-" read-variable: [[{]]
· Expands to the set of single-letter options enabled in the current shell.
  ex.: an output containing "im" means the following options are enabled:
  m - monitor (job control): (set -m),
      REF: https://unix.stackexchange.com/questions/196603/can-someone-explain-in-detail-what-set-m-does
  i - interactive shell:
INTERACTIVE=0
case "$-" in
*i*)
SUDO_OPTS=""
;;
*)
SUDO_OPTS="--non-interactive" # fail-fast if sudo user is not passwordless
;;
esac
sudo ${SUDO_OPTS} ...
[[}]]
## Resizing containers with the Device Mapper: [[{containerization.image,storage,PM.TODO]]
<http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/>
[[}]]
## Rootless Docker: [[{containerization.docker,qa,containerization.security,PM.TODO]]
<https://docs.docker.com/engine/security/rootless/>
[[}]]
## Show image change history: [[{containerization.image.build]]
   $ docker history clock:1.0
[[}]]
## Commit image modifications [[{containerization.image.build]]
   (Discouraged most of the time; modify the Dockerfile instead)
   host-mach $ docker run -it ubuntu bash            # Boot up existing image
   container # apt-get install ...                   # Apply changes to running instance
   host-mach $ docker diff   $(docker ps -lq)        # Show changes done in running container
   host-mach $ docker commit $(docker ps -lq) figlet # Commit changes as new image "figlet"
   host-mach $ docker run -it figlet                 # Boot new image instance
[[}]]
## Selenium Browser test automation [[{ci/cd,qa,testing,selenium,web,_PM.low_code,PM.TODO]]
See also QAWolf (low-code browser test automation).
[[}]]
## Packaging Apps: [[{containerization.image.build,InfraAsCode.pulumi,doc_has.comparative,]]
[[dev_stack.kubernetes.ballerina,dev_stack.metaparticle,dev_language.java]]
<https://www.infoq.com/articles/metaparticle-pulumi-ballerina/>
- A comparison of approaches for packaging applications for Docker and Kubernetes.
• Metaparticle:
- Looks to be discontinued (last update in github 2020-06-25)
- provides a standard library to create cloud native apps directly deployable on k8s
supporting (2018-07-24) Java, .NET core, Javascript (NodeJS), Go, Python and Ruby.
• Pulumi: Aims to define Infra-as-code (vs "silly" YAML files).
""" It is going to DevOps what React did to web development """ (according to their authors)
- Web service. WARN: Potential vendor lock-in (account registration in pulumi.io needed)
- Focused on Infra-as-code.
- Support JS, Typescript, Python, Go on AWS, Azure, GCP and k8s (multi-cloud).
• Ballerina: language to generate k8s + Istio YAMLs.
- first-class support for APIs, distributed transactions, circuit-breakers, stream processing,
data-access, JSON, XML, gRPC, and many other integration challenges.
- Ballerina compiler understands the architecture around it with microservices directly
deployable into Docker or Kubernetes by auto generating Docker images and YAML's.
- https://v1-0.ballerina.io/learn/by-example/
- WARN : It uses its own language (vs Java, Go, ...)
[[}]]
## Bash: Search&Replace with regexs:
https://stackoverflow.com/questions/13043344/search-and-replace-in-bash-using-regular-expressions
   hello=ho02123ware38384you443d34o3434ingtod38384day
   re='(.*)[0-9]+(.*)'              # greedy: each pass strips the last run of digits
   while [[ $hello =~ $re ]]; do    # loop until no digits remain
       hello=${BASH_REMATCH[1]}${BASH_REMATCH[2]}
   done
   echo "$hello"                    # → howareyoudoingtodday
## https://stackoverflow.com/questions/19758915/keeping-a-branch-up-to-date-with-master [[{git.101}]]
## DevOps pipelines DONT's:
https://jamesjoshuahill.github.io/talk/2018/12/06/how-not-to-build-a-pipeline/
## git update branches:
https://jamesjoshuahill.github.io/note/2015/02/07/is-your-branch-up-to-date/ [[{Git.101]]
Nearly everything you do with git happens on your machine. Don’t
take my word for it. Turn off your wifi and see how many git commands
you can run. You’ll see fetch, pull and push fail without a
connection to your remote, but try the other commands you can think
of: status, commit, checkout, cherry-pick, merge, rebase, diff, log
and see how many times git tells you that you’re up-to-date. How
can you be up-to-date if you’re disconnected?
....
When you run git fetch origin the list of branches and commit history
is downloaded from GitHub and synchronised into the clone on your
machine. Doing a fetch won’t affect your local branches, so it’s
one of the safest git commands you can run. You can fetch as much as
you like.
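A local sketch of the point above (temp paths, no network): fetch downloads the remote's new commits but leaves your local branch untouched:

```shell
set -e
cd "$(mktemp -d)"
git init -q origin && (cd origin && git config user.email demo@example.com &&
  git config user.name demo && echo a > f && git add f && git commit -qm c1)
git clone -q origin work
(cd origin && echo b >> f && git commit -qam c2)   # "remote" moves ahead
cd work
git fetch -q origin                      # safe: no local branch is touched
git rev-list --count HEAD..origin/HEAD   # → 1 (one new commit, not merged yet)
```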
[[}]]
## An Interview With Linus Torvalds: Linux and Git
https://www.tag1consulting.com/blog/interview-linus-torvalds-linux-and-git
## DevOps, ansible: what ansible is not [[{ansible.101]]
https://www.linkedin.com/pulse/ansible-what-marcel-koert/ [[}]]
## https://www.30secondsofcode.org/git/p/1
30 secs recipes:
· How does Git's fast-forward mode work?
· Prints a list of all local branches sorted by date.
· Prints a list of all merged local branches.
· Delete merged branches
· Deletes all local merged branches.
· Create a git commit with a different date
· Purge a file from history
· Completely purges a file from history.
· View a visual graph of the repository
· Disables the default fast forwarding on merge commits.
· Prints a list of lost files and commits.
· ...
## 5 tips for configuring virtualenvs with Ansible Tower
https://www.redhat.com/sysadmin/virtualenvs-ansible-tower
## How to Create Your Own Repositories for Packages
https://www.percona.com/blog/2020/01/02/how-to-create-your-own-repositories-for-packages/
## This is how a #GitOps pipeline looks like.
1) The user changes the code in the Git repository.
2) A container image gets created and pushed to the container registry.
3) The config updater records the new image version.
4) Once a user creates a pull request to merge to a different branch,
   it deploys to the concerned branch.
5) Then it tests whether everything is good.
6) Once it is all good, the reviewer will be able to merge it.
7) After the merge, it goes to the test branch.
8) Once you create a pull request, it will deploy to that test branch.
Popular GitOps tools to try while working on GitOps workflows:
• Flux         : created in 2016 by Weaveworks. A GitOps operator for
                 your Kubernetes cluster.
• ArgoCD       : also a GitOps operator, but with a web user interface.
• Jenkins X    : a CI/CD solution for Kubernetes clusters, different
                 from classic Jenkins.
• WKSctl       : a GitOps tool that uses Git commits to manage the
                 Kubernetes cluster.
• Gitkube      : ideal for development; uses git push to build and
                 deploy Docker images on a Kubernetes cluster.
• Helm Operator: an open-source Kubernetes operator to manage Helm
                 chart releases declaratively.
Know more about GitOps: http://bit.ly/393ahpv
https://geekflare.com/gitops-introduction/
## Chuletario (Spanish blog; lit. "cheat sheet of potions and recipes"): Monitoring uninterruptible system calls.
http://chuletario.blogspot.com/2011/05/monitoring-uninterruptible-system-calls.html?m=1
## DevSecOps: Image scanning in your pipelines using quay.io scanner
https://www.redhat.com/sysadmin/using-quayio-scanner
## git-pw: [[{]]
<http://jk.ozlabs.org/projects/patchwork/>
<https://www.collabora.com/news-and-blog/blog/2019/04/18/quick-hack-git-pw/>
- git-pw requires patchwork v2.0, since it uses the
new REST API and other improvements, such as understanding
the difference between patches, series and cover letters,
to know exactly what to try and apply.
- python-based tool that integrates git and patchwork.
$ pip install --user git-pw
$ git config pw.server https://patchwork.kernel.org/api/1.1
$ git config pw.token YOUR_USER_TOKEN_HERE
*Daily work example: finding and applying series*
- Alternative 1: Manually
- We could use patchwork web UI search engine for it.
- Go to "linux-rockchip" project
- click on _"Show patches with" to access the filter menu.
- filter by submitter.
- Alternative 2: git-pw (REST API wrapper)
- $ git-pw --project linux-rockchip series list "dynamically"
→ ID Date Name Version Submitter
→ 95139 a day ago Add support ... 3 Gaël PORTAY
→ 93875 3 days ago Add support ... 2 Gaël PORTAY
→ 3039 8 months ago Add support ... 1 Enric Balletbo i Serra
- Get some more info:
$ git-pw series show 95139
→ Property Value
→ ID 95139
→ Date 2019-03-21T23:14:35
→ Name Add support for drm/rockchip to dynamically control the DDR frequency.
→ URL https://patchwork.kernel.org/project/linux-rockchip/list/?series=95139
→ Submitter Gaël PORTAY
→ Project Rockchip SoC list
→ Version 3
→ Received 5 of 5
→ Complete True
→ Cover 10864561 [v3,0/5] Add support ....
→ Patches 10864575 [v3,1/5] devfreq: rockchip-dfi: Move GRF definitions to a common place.
→ 10864579 [v3,2/5] : devfreq: rk3399_dmc: Add rockchip, pmu phandle.
→ 10864589 [v3,3/5] devfreq: rk3399_dmc: Pass ODT and auto power down parameters to TF-A.
→ 10864591 [v3,4/5] arm64: dts: rk3399: Add dfi and dmc nodes.
→ 10864585 [v3,5/5] arm64: dts: rockchip: Enable dmc and dfi nodes on gru.
- Applying the entire series (or at least trying to):
$ git-pw series apply 95139
^^^^^^^^^^^^^^^^^^^^^^^^^^^
fetch all the patches in the series, and apply them in the right order.
[[}]]
## SaST-scan [[{devops.security.101]]
https://github.com/AppThreat/sast-scan
This repo builds appthreat/sast-scan (and
quay.io/appthreat/sast-scan), a container image with a number of
bundled open-source static analysis security testing (SAST) tools.
This is like a Swiss Army knife for DevSecOps engineers.
- Features
- No messy configuration and no server required
  - Scanning is performed directly in the CI and is extremely quick. A full scan often takes only a couple of minutes
- Gorgeous HTML reports that you can proudly share with your colleagues, and the security team
- Automatic exit code 1 (build breaker) with critical and high vulnerabilities
- There are a number of small things that will bring a smile to any DevOps team
  Bundled tools (language/area → tools):
ansible ansible-lint
apex pmd
aws cfn-lint, cfn_nag
bash shellcheck
bom cdxgen
credscan gitleaks
depscan dep-scan
go gosec, staticcheck
java cdxgen, gradle, find-sec-bugs, pmd
jsp pmd
json jq, jsondiff, jsonschema
kotlin detekt
kubernetes kube-score
nodejs cdxgen, NodeJsScan, eslint, yarn
puppet puppet-lint
plsql pmd
python bandit, cdxgen, pipenv
ruby cyclonedx-ruby
rust cdxgen, cargo-audit
terraform tfsec
Visual Force (vf) pmd
Apache Velocity (vm) pmd
yaml yamllint
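A typical invocation per the project README (image tag and flags may have changed since; `--type` selects which language scanners run):

```shell
# Scan the current directory with the bundled Python tools; reports are
# written under the "reports" sub-directory of the workspace.
docker run --rm \
  -e "WORKSPACE=${PWD}" \
  -v "${PWD}:/app" \
  appthreat/sast-scan scan --src /app --type python
```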
[[}]]
## <http://alblue.bandlem.com/2011/11/git-tip-of-week-git-notes.html>
## Managing "many-branches" Git projects:
- sync all local/remote branches:
https://stackoverflow.com/questions/27157166/sync-all-branches-with-git
## gitbase: Query Git with SQL [[{]]
https://opensource.com/article/18/11/gitbase
[[}]]
## online SSH Certificate Authority [[{]]
https://github.com/smallstep/certificates
An online SSH Certificate Authority
· Delegate SSH authentication to step-ca by using SSH
certificates instead of public keys and authorized_keys files
· For user certificates, connect SSH to your single sign-on
provider, to improve security with short-lived certificates and MFA
(or other security policies) via any OAuth OIDC provider.
· For host certificates, improve security, eliminate TOFU
warnings, and set up automated host certificate renewal
[[}]]
## https://docs.ipfs.io/how-to/host-git-style-repo/ [[{]]
serve a read-only Git repository through the IPFS network.
end result: git cloneable url served through IPFS!
1) git clone --bare [email protected]/myrepo <·· --bare: don't create working tree, just .git object store
2) cd myrepo
3) git update-server-info <·· Add metadata information to .git/info and .git/objects/info
in order to help clients discover what references and packs
the server has. (needed for HTTP -vs ssh -)
4) mv objects/pack/*.pack . <·· Optional, unpack "large packfile" into its individual
git unpack-objects < *.pack objects, allowing IPFS to deduplicate objects if
rm -f *.pack objects/pack/* the Git repository is duplicated 2+ times
(at this point the repository is ready to be served)
5) $ ipfs add -r . <·· add current repo to ipfs
...
added QmX679gmfyaRkKMvPA4WGNWXj9PtpvKWGPgtXaF18etC95 . <- Hash identifying the directory in IPFS
- Test setup: -----------------------------
$ cd "some_new_and_clean_path"
$ REPO_HASH="QmX679gmfya..."
$ git clone http://${REPO_HASH}.ipfs.localhost:8080/ myrepo <·· Cloning git from IPFS!!!!
   - See also: https://dev.to/woss/part-1-rehosting-git-repositories-on-ipfs-23bf
     a truly distributed way of hosting a git repository
     AT A SPECIFIC REVISION, TAG, OR BRANCH.
[[}]]
## Top 10 container guides for sysadmins | Enable Sysadmin
https://www.redhat.com/sysadmin/containers-articles-2021
## bash: Parse Arguments in Bash Scripts With getopts
https://ostechnix.com/parse-arguments-in-bash-scripts-using-getopts/
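A minimal getopts sketch (the flag names `-u`, `-p`, `-v` and the `parse_args` helper are made up for illustration):

```shell
#!/usr/bin/env bash
# parse_args: demo of getopts parsing -u <user>, -p <port>, -v (verbose).
parse_args() {
  local user="" port="22" verbose="no"
  local OPTIND=1 opt            # reset OPTIND so the function is re-entrant
  while getopts ":u:p:v" opt; do  # leading ":" = silent error reporting
    case "$opt" in
      u) user="$OPTARG" ;;
      p) port="$OPTARG" ;;
      v) verbose="yes" ;;
      \?) echo "unknown option: -$OPTARG" >&2; return 1 ;;
      :)  echo "option -$OPTARG needs a value" >&2; return 1 ;;
    esac
  done
  shift $((OPTIND - 1))         # remaining args are positional
  echo "user=$user port=$port verbose=$verbose rest=$*"
}

parse_args -u alice -v host1    # → user=alice port=22 verbose=yes rest=host1
```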
## Dockerfile Linter Hadolint Brings Fixes and Improvements, and Support for ARM64 Binaries
https://www.infoq.com/news/2022/04/hadolint-dockerfile-linter/
## How to Use S3 as a Private Git Repository
https://fancybeans.com/2012/08/24/how-to-use-s3-as-a-private-git-repository/
Basically, use git for local commands that manipulate the local
repository (adding, committing, merging) and jgit for any
interactions that involve sending or receiving data from the S3
bucket.
https://github.com/bgahagan/git-remote-s3
Push and pull git repos to/from an s3 bucket. Uses gpg to encrypt the
repo contents (but not branch names!) before sending to s3.
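With git-remote-s3 installed, the bucket is addressed directly as a remote URL (bucket and repo names here are made up; gpg and AWS credentials must be configured):

```shell
# Add an S3 bucket as a git remote and push to it.
git remote add s3 s3://my-bucket/my-repo
git push s3 main                    # contents are gpg-encrypted before upload
git clone s3://my-bucket/my-repo    # clone elsewhere via the same helper
```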
https://www.petekeen.net/hosting-private-git-repositories-with-gitolite
Step 1: Install Gitolite
Gitolite is a system for managing git repositories using git itself