
expass-5-Anita-CRUD-etc.

expass5.md, by Anita Varøy

MapReduce is a distributed algorithm used to process and analyse large amounts of data across multiple computers. It enables efficient processing of large datasets by distributing tasks across multiple nodes in a distributed system: the map phase organizes the data and the reduce phase aggregates the results, so complex data operations can be performed quickly. This supported both the analysis and the structured collection of large datasets, and it also increases the reliability of the experiment. MapReduce leverages parallel processing to improve performance, particularly with large datasets.

The results can be interpreted in several ways. If the experiment analysed text frequencies, the results show how often certain words or phrases appear, giving a picture of the most dominant concepts. If MapReduce is used to analyse time-series data, the aggregated collection can reveal patterns over time, which can be interpreted to understand growth, decline, or specific events within the dataset. MapReduce could also be used to uncover patterns between user behaviour and system performance, which could be useful for optimizations.

Getting the database reachable on localhost with SHA-256 authentication and my IP meant I had to start the mongod process myself. This was not easy, but I fixed that problem too.

CRUD stands for Create, Read, Update, and Delete, and refers to the basic operations on stored data. Quick overview of the CRUD operations:

- Create: adding new data to the system or database, e.g. inserting a new record or creating a new user in an application.
- Read: retrieving or fetching data from the system, e.g. searching for a specific record or fetching information about a user.
- Update: modifying existing data in the system, e.g. changing an existing record such as a user's email address.
- Delete: removing data from the system, e.g. deleting a record from a database or removing a user.
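The four operations above can be sketched in a few lines of plain Python, using an in-memory dict as a stand-in for a database collection (the record fields and names here are illustrative, not taken from the experiment):

```python
# Minimal in-memory sketch of the four CRUD operations.
# A dict maps a generated id to each stored record.
users = {}
next_id = 1

def create(record):
    """Create: add a new record and return its id."""
    global next_id
    user_id = next_id
    users[user_id] = dict(record)
    next_id += 1
    return user_id

def read(user_id):
    """Read: fetch a record by id (None if it does not exist)."""
    return users.get(user_id)

def update(user_id, changes):
    """Update: modify fields of an existing record."""
    users[user_id].update(changes)

def delete(user_id):
    """Delete: remove the record entirely."""
    users.pop(user_id, None)

uid = create({"name": "Anita", "email": "anita@example.com"})
update(uid, {"email": "anita@uni.example"})
print(read(uid))
delete(uid)
print(read(uid))  # None
```

In a real MongoDB application the same four verbs map onto collection methods such as `insert_one`, `find_one`, `update_one`, and `delete_one`.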
CRUD operations are commonly implemented in web applications that interact with relational databases like MySQL and PostgreSQL, or NoSQL databases like MongoDB. In many applications, these operations are exposed through an API (such as REST or GraphQL).

Experiment 2: Aggregation

Aggregation operations process data records and return calculated results. They group values from multiple documents together and can perform various operations on the grouped data to return a single result. MongoDB provides three ways to perform aggregation: the aggregation pipeline, map-reduce, and single-purpose aggregation methods. This experiment focuses on map-reduce.

The purpose of the MapReduce operation implemented in Experiment 5 was to aggregate data efficiently in two phases: a map phase and a reduce phase. In the map phase, the data is broken down into smaller, more manageable pieces, and each piece is emitted as a key-value pair. In the reduce phase, these key-value pairs are collected and combined to produce an aggregated result.

The MapReduce operation implemented in Experiment 5 was particularly useful because it:

- Handled large datasets: it divided the data into smaller fragments that were processed in parallel, increasing efficiency.
- Facilitated aggregated analysis: through the reduce phase, we could easily extract insights from the datasets that would otherwise have taken much more time and resources with traditional methods.
- Simplified complex data processing: the operation reduced the complexity of data processing by letting us focus on how the data should be split and recombined, making the code easier to write and maintain.

The results from the MapReduce operation gave me an aggregated collection that summarized the datasets.
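The two phases described above can be demonstrated with a small, self-contained word-count in pure Python (no MongoDB or cluster required); the sample documents are made up for illustration:

```python
# Sketch of the two MapReduce phases on a word-frequency task.
from collections import defaultdict

def map_phase(document):
    """Map: break the input into (word, 1) key-value pairs."""
    return [(word, 1) for word in document.lower().split()]

def shuffle(pairs):
    """Group all emitted values by key before reduction."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: combine the grouped values into one result per key."""
    return sum(values)

documents = ["the map phase emits pairs", "the reduce phase sums pairs"]
pairs = [p for doc in documents for p in map_phase(doc)]
counts = {k: reduce_phase(k, v) for k, v in shuffle(pairs).items()}
print(counts["the"], counts["pairs"])  # 2 2
```

In a real deployment each call to `map_phase` would run on a different node and the shuffle step would move pairs across the network, but the key-value structure is the same.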
