name,ring,quadrant,isNew,description
Four key metrics,Adopt,Techniques,FALSE,"<p>To measure software delivery performance, more and more organizations are turning to the <strong>four key metrics</strong> as defined by the DORA research program: change lead time, deployment frequency, mean time to restore (MTTR) and change fail percentage. This research and its statistical analysis have shown a clear link between high delivery performance and these metrics; they provide a great leading indicator for how a team, or even a whole delivery organization, is doing.</p>
<p>We're still big proponents of these metrics, but we've also learned some lessons since we first started monitoring them. And we're increasingly seeing misguided measurement approaches with tools that help teams measure these metrics based purely on their continuous delivery (CD) pipelines. In particular when it comes to the stability metrics (MTTR and change fail percentage), CD pipeline data alone doesn't provide enough information to determine what a deployment failure with real user impact is. Stability metrics only make sense if they include data about real incidents that degrade service for the users.</p>
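<p>As a minimal sketch of the measurement itself (the types and field names below are hypothetical, not from any DORA tooling), the throughput and stability calculations are straightforward once deployments are linked to real incidents:</p>
<pre>
interface Deployment {
  deployedAt: Date;
  causedIncident: boolean; // linked to a real, user-impacting incident
}

// Deployment frequency: deployments per week over the sample window.
function deploymentsPerWeek(deployments: Deployment[], weeks: number): number {
  return deployments.length / weeks;
}

// Change fail percentage: share of deployments that degraded service.
function changeFailPercentage(deployments: Deployment[]): number {
  const failed = deployments.filter((d) => d.causedIncident).length;
  return (failed / deployments.length) * 100;
}
</pre>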
<p>And as with all metrics, we recommend always keeping in mind the ultimate intention behind a measurement and using it to reflect and learn. For example, before spending weeks building sophisticated dashboard tooling, consider just regularly taking the DORA quick check in team retrospectives. This gives the team the opportunity to reflect on which capabilities it could work on to improve its metrics, which can be much more effective than overdetailed out-of-the-box tooling.</p>"
Platform engineering product teams,Trial,Techniques,FALSE,"<p>We continue to see <strong>platform engineering product teams</strong> as a sensible default with the key insight being that they're just another product team, albeit one focused on internal platform customers. Thus it is critical to have clearly defined customers and products while using the same engineering disciplines and ways of working as any other (externally focused) product team; platform teams aren't special in this regard. We strongly caution against just renaming existing internal teams platform teams while leaving ways of working and organizational structures unchanged. We're still big fans of using concepts from Team Topologies as we think about how best to organize platform teams. We consider platform engineering product teams to be a standard approach and a significant enabler for high-performing IT.</p>"
Zero trust architecture,Assess,Techniques,FALSE,"<p>We keep hearing about enterprises finding their security badly compromised due to an overreliance on the <strong>secure</strong> network perimeter. Once this external perimeter is breached, internal systems prove to be poorly protected with attackers quickly and easily able to deploy automated data extraction tools and ransomware attacks that all too often remain undetected for long periods. This leads us to recommend <strong>zero trust architecture</strong> (ZTA) as a now sensible default.</p>
<p>ZTA is a paradigm shift in security architecture and strategy. It’s based on the assumption that a network perimeter is no longer representative of a secure boundary and no implicit trust should be granted to users or services based solely on their physical or network location. The number of resources, tools and platforms available to implement aspects of ZTA keeps growing and includes enforcing policies as code based on the least privilege and as-granular-as-possible principles and continuous monitoring and automated mitigation of threats; using service mesh to enforce security control application-to-service and service-to-service; implementing binary attestation to verify the origin of the binaries; and including secure enclaves in addition to traditional encryption to enforce the three pillars of data security: in transit, at rest and in memory. For introductions to the topic, consult the NIST ZTA publication and Google's white paper on BeyondProd.</p>"
API as a Product,Adopt,Techniques,FALSE,"Companies have wholeheartedly embraced APIs as a way to expose business capabilities to both external and internal developers. APIs promise the ability to experiment quickly with new business ideas by recombining core capabilities. But what differentiates an API from an ordinary enterprise integration service? One difference lies in treating APIs as a product, even when the consumer is an internal system or fellow developer. Teams that build APIs should understand the needs of their customers and make the product compelling to them. Usability testing and UX research can lead to a better design and understanding of the API usage patterns and help bring a product mindset to APIs. APIs, like products, should be actively maintained and supported, and easy to use. They should have an owner who advocates for the customer and strives for continual improvement. In our experience, product orientation is the missing ingredient that makes the difference between ordinary enterprise integration and an agile business built on a platform of APIs."
Data mesh,Assess,Techniques,FALSE,"<p>Increasingly, we see a mismatch between what data-driven organizations want to achieve and what the current data architectures and organizational structures allow. Organizations want to embed data-driven decision-making, machine learning and analytics into many aspects of their products and services and how they operate internally; essentially they want to augment every aspect of their operational landscape with data-driven intelligence. Yet, we still have a ways to go before we can embed analytical data, access to it and how it is managed into the business domains and operations. Today, every aspect of managing analytical data is externalized outside of the operational business domains to the data team and to the data management monoliths: data lakes and data warehouses. <strong>Data mesh</strong> is a decentralized sociotechnical approach to remove the dichotomy of analytical data and business operation. Its objective is to embed sharing and using analytical data into each operational business domain and close the gap between the operational and analytical planes. It's founded on four principles: domain data ownership, data as a product, self-serve data platform and computational federated governance.</p>
<p>Our teams have been implementing the data mesh architecture; they've created new architectural abstractions such as the data product quantum to encapsulate the code, data and policy as an autonomous unit of analytical data sharing embedded into operational domains; and they've built self-serve data platform capabilities to manage the lifecycle of data product quanta in a declarative manner as described in <em>Data Mesh</em>. Despite our technical advances, we're still experiencing friction using the existing technologies in a data mesh topology, not to mention the resistance of business domains to embrace sharing and using data as a first-class responsibility in some organizations.</p>"
Architectural fitness functions,Trial,Techniques,FALSE,"Borrowed from evolutionary computing, a fitness function is used to summarize how close a given design solution is to achieving the set aims. When defining an evolutionary algorithm, the designer seeks a ‘better’ algorithm; the fitness function defines what ‘better’ means in this context. An architectural fitness function, as defined in Building Evolutionary Architectures, provides an objective integrity assessment of some architectural characteristics, which may encompass existing verification criteria, such as unit testing, metrics, monitors, and so on. We believe architects can communicate, validate and preserve architectural characteristics in an automated, continual manner, which is the key to building evolutionary architectures.
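
As a minimal sketch in TypeScript (the directory names are invented for illustration), a fitness function can be an ordinary automated check in the build, for example asserting that domain code never depends on infrastructure code:
<pre>
import * as fs from 'fs';
import * as path from 'path';

// Recursively collect source files under a directory.
function sourceFiles(dir: string): string[] {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) =>
    entry.isDirectory()
      ? sourceFiles(path.join(dir, entry.name))
      : [path.join(dir, entry.name)]
  );
}

// The fitness function: fail the build if the dependency rule is broken.
const offenders = sourceFiles('src/domain').filter((file) =>
  fs.readFileSync(file, 'utf8').includes('/infrastructure/')
);
if (offenders.length > 0) {
  throw new Error('domain must not depend on infrastructure: ' + offenders.join(', '));
}
</pre>"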
Architecture decision records,Adopt,Techniques,FALSE,"Much documentation can be replaced with highly readable code and tests. In a world of evolutionary architecture, however, it's important to record certain design decisions for the benefit of future team members as well as for external oversight. Lightweight Architecture Decision Records is a technique for capturing important architectural decisions along with their context and consequences. We recommend storing these details in source control, instead of a wiki or website, as then they can provide a record that remains in sync with the code itself. For most projects, we see no reason why you wouldn't want to use this technique.
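
A lightweight ADR is typically one short text file per decision, stored alongside the code; a commonly used structure (after Michael Nygard's template; the title below is a made-up example) looks like this:
<pre>
# 12. Use server-side sessions

Status: Accepted

Context: What forces and constraints motivate this decision?

Decision: The change we are making, stated in full sentences.

Consequences: What becomes easier or harder as a result?
</pre>"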
Micro frontends,Trial,Techniques,FALSE,"We've seen significant benefits from introducing microservices, which have allowed teams to scale the delivery of independently deployed and maintained services. Unfortunately, we've also seen many teams create a front-end monolith — a large, entangled browser application that sits on top of the back-end services — largely neutralizing the benefits of microservices. Micro frontends have continued to gain in popularity since they were first introduced. We've seen many teams adopt some form of this architecture as a way to manage the complexity of multiple developers and teams contributing to the same user experience. In June of last year, one of the originators of this technique published an introductory article that serves as a reference for micro frontends. It shows how this style can be implemented using various web programming mechanisms and builds out an example application using React.js. We're confident this style will grow in popularity as larger organizations try to decompose UI development across multiple teams.
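
One simple composition technique from that article, sketched in TypeScript (the global function name and route are invented): each micro frontend ships an independently deployed bundle that exposes a mount function, and a thin container application decides which one to mount.
<pre>
export {};

declare global {
  interface Window {
    // Registered by the orders team's independently deployed bundle.
    mountOrdersApp?: (el: HTMLElement) => void;
  }
}

const container = document.getElementById('app');
if (container && window.location.pathname.startsWith('/orders')) {
  window.mountOrdersApp?.(container);
}
</pre>"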
Remote mob programming,Trial,Techniques,FALSE,"<p>We continue to see many teams working and collaborating remotely; for these teams <strong>remote mob programming</strong> is a technique that is well worth trying. Remote mob programming allows teams to quickly mob around an issue or piece of code without the physical constraints of only being able to fit so many people around a pairing station. Teams can quickly collaborate on an issue or piece of code using their video conferencing tool of choice without having to connect to a big display, book a physical meeting room or find a whiteboard.</p>"
Peer review equals pull request,Hold,Techniques,FALSE,"<p>Some organizations seem to think <strong>peer review equals pull request</strong>; they've taken the view that the only way to achieve a peer review of code is via a pull request. We've seen this approach create significant team bottlenecks as well as significantly degrade the quality of feedback as overloaded reviewers begin to simply reject requests. Although the argument could be made that this is one way to demonstrate code review regulatory compliance, one of our clients was told this was invalid since there was no evidence the code was actually read by anyone prior to acceptance. Pull requests are only one way to manage the code review workflow; we urge people to consider other approaches, especially where there is a need to coach and pass on feedback carefully.</p>"
Production data in test environments,Hold,Techniques,FALSE,"<p>We continue to perceive <strong>production data in test environments</strong> as an area for concern. Firstly, many examples of this have resulted in reputational damage, for example, where an incorrect alert has been sent from a test system to an entire client population. Secondly, the level of security, specifically around protection of private data, tends to be less for test systems. There is little point in having elaborate controls around access to production data if that data is copied to a test database that can be accessed by every developer and QA. Although you <em>can</em> obfuscate the data, this tends to be applied only to specific fields, for example, credit card numbers. Finally, copying production data to test systems can break privacy laws, for example, where test systems are hosted or accessed from a different country or region. This last scenario is especially problematic with complex cloud deployments. Fake data is a safer approach, and tools exist to help in its creation. We do recognize there are reasons for <em>specific</em> elements of production data to be copied, for example, in the reproduction of bugs or for training of specific ML models. Here our advice is to proceed with caution.</p>"
Diagrams as code,Trial,Techniques,FALSE,"We're seeing more and more tools that enable you to create software architecture and other diagrams as code. There are benefits to using these tools over the heavier alternatives, including easy version control and the ability to generate the DSLs from many sources. Tools in this space that we like include Diagrams, Structurizr DSL, AsciiDoctor Diagram and stables such as WebSequenceDiagrams, PlantUML and the venerable Graphviz. It's also fairly simple to generate your own SVG these days, so don't rule out quickly writing your own tool either. One of our authors wrote a small Ruby script to quickly create SVGs, for example.
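
In the spirit of that Ruby script, a TypeScript sketch (the service names are invented) that emits Graphviz DOT, which the dot CLI can render to SVG, shows how little is needed:
<pre>
// Emit a diagram definition as text; keep it in version control and
// render it in the build, e.g.: dot -Tsvg architecture.dot
const dot = [
  'digraph architecture {',
  '  web_app -> api_gateway;',
  '  api_gateway -> orders_service;',
  '  orders_service -> orders_db;',
  '}',
].join('\n');

console.log(dot);
</pre>"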
Serverless architectures,Adopt,Techniques,FALSE,"The use of serverless architecture has very quickly become an accepted approach for organizations deploying cloud applications, with a plethora of choices available for deployment. Even traditionally conservative organizations are making partial use of some serverless technologies. Most of the discussion goes to Functions as a Service (e.g., AWS Lambda, Google Cloud Functions, Azure Functions) while the appropriate patterns for use are still emerging. Deploying serverless functions undeniably removes the nontrivial effort that traditionally goes into server and OS configuration and orchestration. Serverless functions, however, are not a fit for every requirement. At this stage, you must be prepared to fall back to deploying containers or even server instances for specific requirements. Meanwhile, the other components of a serverless architecture, such as Backend as a Service, have become almost a default choice.
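
As an illustration, a minimal Function-as-a-Service handler in TypeScript for AWS Lambda (typings from the aws-lambda package; the query parameter and response shape are invented), with no server or OS to configure:
<pre>
import { APIGatewayProxyEvent } from 'aws-lambda';

// The platform provisions, scales and patches the runtime; the team
// ships only this function.
export const handler = async (event: APIGatewayProxyEvent) => {
  const name = event.queryStringParameters?.name ?? 'world';
  return { statusCode: 200, body: JSON.stringify({ hello: name }) };
};
</pre>"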
Dremio,Adopt,Platforms,FALSE,"Dremio is a cloud data lake engine that powers interactive queries against cloud data lake storage. With Dremio, you don't have to manage data pipelines in order to extract and transform data into a separate data warehouse for predictable performance. Dremio creates virtual data sets from data ingested into a data lake and provides a uniform view to consumers. Presto popularized the technique of separating storage from the compute layer, and Dremio takes it further by improving performance and optimizing the cost of operation."
DataHub,Trial,Platforms,FALSE,"DataHub is a next-generation platform that addresses data discoverability via an extensible metadata system. Instead of crawling and pulling metadata, DataHub adopts a push-based model in which individual components of the data ecosystem publish metadata via an API or a stream to the central platform. This push-based integration shifts ownership from the central entity to individual teams, making them accountable for their metadata. As more and more companies try to become data-driven, having a system that helps with data discovery and with understanding data quality and lineage is critical, and we recommend you trial DataHub in that capacity."
Overambitious API Gateways,Hold,Platforms,FALSE,"We want to call out the recurring anti-pattern of overambitious API gateways. In keeping with our principle of <strong>smart endpoints, dumb pipes</strong>, business logic should not be implemented at the API gateway layer."
Cosmos DB,Adopt,Platforms,FALSE,"Cosmos DB is Microsoft's globally distributed, multimodel database service, which became generally available earlier this year. While most modern NoSQL databases offer tunable consistency, Cosmos DB makes it a first-class citizen and offers five different consistency models. It's worth highlighting that it also supports multiple models — key value, document, column family and graph — all of which map to its internal data model, called atom-record-sequence (ARS). One interesting aspect of Cosmos DB is that it offers service level agreements (SLAs) on its latency, throughput, consistency and availability. With its wide range of applicability, it has set a high standard for other cloud vendors to match."
Stemma,Assess,Platforms,FALSE,"Stemma is a fully managed data catalog, powered by the leading open-source data catalog Amundsen. Stemma goes further and introduces features to support data governance, helps teams understand the impact of data changes, and supports data mesh patterns."
UiPath,Adopt,Platforms,FALSE,"UiPath offers an end-to-end automation platform, combining the leading robotic process automation (RPA) solution with a full suite of capabilities such as AI, process mining and cloud tooling, enabling organizations to rapidly scale digital business operations."
NoSQL,Adopt,Tools,FALSE,"NoSQL is about scale, massive datasets, cloud data, social network data, data in buckets and data in graphs; in short, a range of use cases for which traditional SQL databases may not be the optimal choice. Unravelling NoSQL and trying to explain what it is and why you should or should not be interested in it is difficult, as the term covers a wide range of technologies, data architectures and priorities and represents as much a movement or a school of thought as it does any particular technology. Types of NoSQL technologies include key-value, column and object stores as well as document, graph and XML databases."
Terraform,Adopt,Tools,FALSE,"Terraform is rapidly becoming a de facto choice for creating and managing cloud infrastructures by writing declarative definitions. The configuration of the servers instantiated by Terraform is usually left to Puppet, Chef or Ansible. We like Terraform because the syntax of its files is quite readable and because it supports a number of cloud providers while making no attempt to provide an artificial abstraction across those providers. The active community will add support for the latest features from most cloud providers. Following our first, more cautious, mention of Terraform almost two years ago, it has seen continued development and has evolved into a stable product with a good ecosystem that has proven its value in our projects. The issue with state file management can now be sidestepped by using what Terraform calls a remote state backend. We've successfully used AWS S3 for that purpose.
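
As a sketch of the remote state idea, here it is expressed with CDK for Terraform's TypeScript bindings (the bucket and key names are invented; a plain HCL backend block is the more common form):
<pre>
import { App, TerraformStack, S3Backend } from 'cdktf';
import { Construct } from 'constructs';

class ProdStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // State lives in S3 rather than on developer machines, so the
    // whole team shares one source of truth.
    new S3Backend(this, {
      bucket: 'my-team-terraform-state',
      key: 'prod/terraform.tfstate',
      region: 'eu-west-1',
    });
  }
}

const app = new App();
new ProdStack(app, 'prod');
app.synth();
</pre>"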
ConfigCat,Trial,Tools,FALSE,"If you're looking for a service to support dynamic feature toggles (and bear in mind that simple feature toggles work well too), check out ConfigCat. We'd describe it as being like LaunchDarkly, but cheaper and a bit less fancy, and we find that it does most of what we need. ConfigCat supports simple feature toggles, user segmentation and A/B testing and has a generous free tier for low-volume use cases or those just starting out.
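
A sketch of a dynamic toggle check with the ConfigCat JS SDK (the key and flag name are placeholders; method names are from the SDK as we recall them, so check the current docs for your version):
<pre>
import * as configcat from 'configcat-js';

const client = configcat.getClient('YOUR-CONFIGCAT-SDK-KEY');

// Per-user targeting: segmentation rules are edited in the dashboard,
// not in code.
async function isNewDashboardOn(userId: string) {
  const user = new configcat.User(userId);
  return client.getValueAsync('isNewDashboardEnabled', false, user);
}
</pre>"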
Cypress,Adopt,Tools,FALSE,"Cypress is still a favorite among our teams where developers manage end-to-end tests themselves, as part of a healthy test pyramid, of course. We decided to call it out again in this Radar because recent versions of Cypress have added support for Firefox, and we strongly suggest testing on multiple browsers. The dominance of Chrome and Chromium-based browsers has led to a worrying trend of teams seemingly only testing with Chrome, which can lead to nasty surprises.
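
A minimal spec (the page and selectors are invented) that can run unchanged against both browsers:
<pre>
// npx cypress run --browser chrome
// npx cypress run --browser firefox
describe('sign in', () => {
  it('shows the dashboard after signing in', () => {
    cy.visit('/login');
    cy.get('[data-test=username]').type('alice');
    cy.get('[data-test=password]').type('correct-horse-battery-staple');
    cy.get('[data-test=submit]').click();
    cy.contains('Dashboard');
  });
});
</pre>"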
Pactflow,Assess,Tools,FALSE,"<p>For organizations with larger and more complex API ecosystems, especially those who are already using Pact, we think it's worth assessing whether <strong>Pactflow</strong> could be useful. Pactflow manages the workflow and continuous deployment of tests written in Pact, lowering the barrier to consumer-driven contract testing. The complexity of coordination between multiple producers and various disparate consumers can become prohibitive. We've seen some teams invest significant effort in hand-crafting solutions to this problem and think it's worth assessing whether Pactflow can look after this for you.</p>
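<p>For context, the kind of consumer test Pactflow manages looks roughly like this with the Pact JS DSL (the service, state and field names are invented):</p>
<pre>
import { Pact } from '@pact-foundation/pact';

const provider = new Pact({ consumer: 'OrderUI', provider: 'OrderService' });

async function defineContract() {
  await provider.setup();
  await provider.addInteraction({
    state: 'an order with id 42 exists',
    uponReceiving: 'a request for order 42',
    withRequest: { method: 'GET', path: '/orders/42' },
    willRespondWith: { status: 200, body: { id: 42, status: 'shipped' } },
  });
  // ...exercise the consumer's HTTP client against the mock, then:
  await provider.verify();
  await provider.finalize();
}
</pre>"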
SonarQube,Adopt,Tools,FALSE,"SonarQube allows us to measure and understand the evolution of code quality in our projects. With SonarQube, we can get quick insight into the condition of our code. It analyzes many languages and provides numerous static analysis rules. We also use SonarQube for static application security testing (SAST), which scans our code for potential security vulnerabilities and is an essential element of our secure software development lifecycle."
MassTransit,Trial,Languages & Frameworks,FALSE,"MassTransit is a free, open-source distributed application framework for .NET. It makes it easy to create applications and services that leverage message-based, loosely coupled asynchronous communication for higher availability, reliability and scalability."
Polly,Trial,Languages & Frameworks,FALSE,"Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.
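
Polly itself is a C# library; purely to illustrate the kind of policy it encapsulates, here is a retry-with-exponential-backoff sketch in TypeScript (not Polly's API):
<pre>
async function withRetry(action: () => unknown, retries = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await action();
    } catch (err) {
      if (attempt >= retries) throw err;
      // Back off between attempts: 400ms, 800ms, 1600ms...
      await new Promise((r) => setTimeout(r, 200 * 2 ** attempt));
    }
  }
}
</pre>
Polly expresses the same idea declaratively and lets policies compose, e.g., a retry wrapped inside a circuit breaker."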
.NET 5,Assess,Languages & Frameworks,FALSE,".NET 5 represents a significant step forward in bringing .NET Core and .NET Framework into a single platform. Organizations should start to develop a strategy to migrate their development environments — a fragmented mix of frameworks depending on the deployment target — to a single version of .NET 5 or 6 when it becomes available. The advantage of this approach will be a common development platform regardless of the intended environment: Windows, Linux, cross-platform mobile devices (via Xamarin) or the browser (using Blazor). While polyglot development will remain the preferred approach for companies with the engineering culture to support it, others will find it more efficient to standardize on a single platform for .NET development. For now, we want to keep this in the Assess ring to see how well the final unified framework performs in .NET 6."
Angular,Adopt,Languages & Frameworks,FALSE,"Angular is a TypeScript-based front-end framework for building robust, dynamic web applications. It is developed by Google and supported by an enormous community, which makes it reliable and makes help easy to find when needed. Its component system helps keep code well organized and highly reusable, which developers always appreciate. In addition, TypeScript keeps everything more readable and maintainable.
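
A minimal component as a sketch (the names are invented), showing the reusable unit the component system is built from:
<pre>
import { Component } from '@angular/core';

@Component({
  selector: 'app-greeting',
  template: 'Hello, {{ name }}!',
})
export class GreetingComponent {
  name = 'Angular';
}
</pre>"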
TypeScript,Adopt,Languages & Frameworks,FALSE,"TypeScript is an open-source language and a superset of JavaScript. The TypeScript compiler can transpile code to various versions of ECMAScript, starting from ES3. This lets features introduced in newer versions of ECMAScript work in older browsers without extra hand-written polyfills.
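
For example, with a target of ES5 in tsconfig.json, tsc checks the types below at compile time and rewrites newer syntax such as the nullish coalescing operator into equivalent ES5:
<pre>
interface User {
  name: string;
  nickname?: string; // optional, verified at compile time
}

function greet(user: User): string {
  return 'Hello, ' + (user.nickname ?? user.name);
}

console.log(greet({ name: 'Grace' }));
</pre>"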
Entity Framework Core,Adopt,Languages & Frameworks,FALSE,"Entity Framework Core is a modern object-database mapper for .NET. It supports LINQ queries, change tracking, updates, and schema migrations. EF Core works with many databases, including SQL Database (on-premises and Azure), SQLite, MySQL, PostgreSQL, and Azure Cosmos DB."
Mock Service Worker,Trial,Languages & Frameworks,FALSE,"Web applications, especially those for internal use in enterprises, are usually written in two parts. The user interface and some business logic run in the web browser while business logic, authorization and persistence run on a server. These two halves normally communicate via JSON over HTTP. The endpoints shouldn't be mistaken for a real API; they're simply an implementation detail of an application that is split across two run-time environments. At the same time, they provide a valid seam to test the pieces individually. When testing the JavaScript part, the server side can be stubbed and mocked at the network level by a tool such as Mountebank. Mock Service Worker offers an alternative approach of intercepting requests in the browser. This simplifies manual tests as well. Like Mountebank, Mock Service Worker is run outside the browser as a Node.js process for testing network interactions. In addition to REST interactions, it mocks GraphQL APIs — a bonus because GraphQL can be complex to mock manually at the network level.
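
A browser-side handler sketch (the endpoint and payload are invented; the rest/setupWorker API is as of MSW 1.x):
<pre>
import { setupWorker, rest } from 'msw';

// Requests to /api/user never leave the browser; the Service Worker
// answers them with this canned response.
const worker = setupWorker(
  rest.get('/api/user', (req, res, ctx) =>
    res(ctx.status(200), ctx.json({ name: 'Ada', role: 'admin' }))
  )
);

worker.start();
</pre>"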
dbt (data build tool),Adopt,Tools,FALSE,"dbt is a transformation workflow that lets teams quickly and collaboratively deploy analytics code following software engineering best practices like modularity, portability, CI/CD, and documentation. Now anyone who knows SQL can build production-grade data pipelines. "
Soda-sql,Trial,Tools,FALSE,"Soda-sql provides data testing, monitoring and profiling for SQL-accessible data."
Great Expectations,Assess,Tools,FALSE,"Great Expectations (https://greatexpectations.io/) is a shared, open standard for data quality. It helps data teams eliminate pipeline debt through data testing, documentation and profiling. It can be used alongside dbt, for example."
Metabase,Trial,Platforms,FALSE,"Metabase is a data reporting tool that provides a simple way to create dashboards and perform data analysis."
Airbyte,Assess,Tools,FALSE,"Airbyte is a web-based, open-source data integration (ELT) tool for moving data from sources such as APIs and databases into warehouses and lakes."