diff --git a/Extremem/README.md b/Extremem/README.md
index 06b8dbb..cd49ebd 100644
--- a/Extremem/README.md
+++ b/Extremem/README.md
@@ -24,6 +24,55 @@
 This produces the Jar file extremem.jar in the src/main/java subdirectory.
 
 Command-line arguments configure Extremem to represent different combinations of workload requirements. Various command-line arguments are described below, in approximate chronological order as to their relevance during execution of each Extremem simulation. In each case, the sample argument display sets the argument to its default value. Though not described explicitly in the descriptions of each argument below, larger data sizes generally result in lower cache locality, which will generally decrease workoad throughput.
 
+### *-dFastAndFurious=false*
+
+In the default Extremem configuration, the shared Customers and Products in-memory databases are each protected by a global
+synchronization lock which allows multiple readers and a single writer. Multiple customers can read from these databases
+concurrently. Each time a server thread replaces customers or products, a write-lock is required, causing all customer threads
+to wait until the server thread has finished its changes to the database. With the high transaction rates required to represent
+allocations in excess of 2 GB per second, significant synchronization contention has been observed. This flag changes the
+synchronization protocol. The FastAndFurious mode of operation replaces the global multiple-reader-single-writer lock with
+a larger number of smaller-context locks. Locks that protect much smaller scopes are held for much shorter
+time frames, improving parallel access to shared data structures.
+The large majority of these smaller-context locks should normally be uncontended
+because the contexts are so small that collisions by multiple threads on the same small contexts are normally
+rare. This mode of operation is identified as "furious" because it allows false positives and false
+negatives. During the process of replacing products, the indexes might report a match to a product that no longer exists.
+Likewise, the indexes may not recognize a match for a product that has been newly added but is not yet indexed. This mode
+of operation properly uses synchronization to assure coherency of data structures. The default value of the FastAndFurious flag
+is false, preserving compatibility with the original Extremem mode of operation. While configuring FastAndFurious=true allows
+Extremem to simulate higher allocation rates with less interference from synchronization contention, disabling FastAndFurious
+may reveal different weaknesses in particular GC approaches. In particular, interference from synchronization causes allocations
+to be more bursty. While a single server thread locks indexes in order to replace products or customers, multiple customer
+threads that would normally be allocating are idle, waiting for the server thread to release its exclusive lock. When the server
+thread releases its lock, these customer threads resume execution and allocate at rates much higher than normal because they
+have fallen behind their intended execution schedule. This causes a burst of allocation, making it difficult for the GC
+scheduling heuristic to predict when the allocation pool will become depleted. If the heuristic is late to trigger the start
+of GC, it is likely that the allocation pool will become exhausted before the GC replenishes it, resulting in a degenerate
+stop-the-world GC pause.
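+
+The sketch below is illustrative only; it is not benchmark source, and the class and field names are
+invented for this example. It contrasts the two locking disciplines:
+
+```java
+import java.util.HashMap;
+import java.util.concurrent.locks.ReadWriteLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+class LockingSketch {
+  private final HashMap<Long, String> product_map = new HashMap<>();
+  private final ReadWriteLock global_lock = new ReentrantReadWriteLock();
+
+  // Default mode: every reader passes through one global read-write lock,
+  // and a writer stalls all readers for the full duration of its update.
+  String lookupDefault(long id) {
+    global_lock.readLock().lock();
+    try {
+      return product_map.get(id);
+    } finally {
+      global_lock.readLock().unlock();
+    }
+  }
+
+  // FastAndFurious mode: lock only the one small structure being consulted,
+  // and hold that lock only for the duration of a single operation.
+  String lookupFastAndFurious(long id) {
+    synchronized (product_map) {
+      return product_map.get(id);
+    }
+  }
+}
+```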
+
+### *-dPhasedUpdates=false*
+
+In the default Extremem configuration, the shared Customers and Products in-memory databases are each protected by a global
+synchronization lock which allows multiple readers and a single writer. Multiple customers can read from these databases
+concurrently. Each time a server thread replaces customers or products, a write-lock is required, causing all customer threads
+to wait until the server thread has finished its changes to the database. With the high transaction rates required to represent
+allocations in excess of 2 GB per second, significant synchronization contention has been observed. This flag changes the
+synchronization protocol. The PhasedUpdates mode of operation causes all intended changes to the shared database to be placed
+into a change log. The change log is processed by a single thread running continuously in the background. This thread
+copies the existing database, applies all changes present in the change log, then replaces the old database with
+the new one. In this mode of operation, the current database is a read-only data structure requiring no synchronization
+for access. A synchronized method is used to obtain access to the most current version of the shared database.
+Server threads synchronize only for the purpose of placing intended changes into the change log. The PhasedUpdates
+and FastAndFurious options are mutually exclusive. The thread that rebuilds the database does not run if the change
+log is empty.
+
+### *-dPhasedUpdateInterval=1m*
+
+When PhasedUpdates is true, a dedicated background thread alternates between rebuilding the Customers and Products
+databases. Each time it finishes rebuilding one database, it waits for PhasedUpdateInterval before it begins
+to rebuild the other.
+
 ### *-dInitializationDelay=50ms*
 
 It is important to complete all initialization of all global data structures before beginning to execute the experimental workload threads. If
@@ -175,33 +224,6 @@
 java -jar src/main/java/extremem.jar \
      -dCustomerPeriod=12s -dCustomerThinkTime=8s -dSimulationDuration=20m
 ```
 
-### *-dFastAndFurious=false*
-
-In the default Extremem configuration, the shared Customers and Products in-memory databases are each protected by a global
-synchronization lock which allows multiple readers and a single writer. Multiple customers can read from these databases
-concurrently. Each time a server thread replaces customers or products, a write-lock is required, causing all customer threads
-to wait until the server thread has finished its changes to the database. With the high transaction rates required to represent
-allocations in excess of 2 GB per second, significant synchronization contention has been observed. This flag changes the
-synchronization protocol. The FastAndFurious mode of operation replaces the global multiple-reader-single-writer lock with
-a larger number of smaller-context locks. Locks that protect much smaller scopes are held for much shorter
-time frames, improving parallel access to shared data structures.
-The large majority of these smaller-context locks should normally be uncontended
-because the contexts are so small that collisions by multiple threads on the same small contexts is normally
-rare. This mode of operation is identified as ``furious'' because it allows false positives and false
-negatives. During the process of replacing products, the indexes might report a match to a product that no longer exists.
-Likewise, the indexes may not recognize a match for a product that has been newly added but is not yet indexed. This mode
-of operation properly uses synchronization to assure coherency of data structures. The default value of the FastAndFurious flag
-is false, preserving compatibility with the original Extremem mode of operation. While configuring FastAndFurious=true allows
-Extremem to simulate higher allocation rates with less interference from synchronization contention, disabling FastAndFurious
-may reveal different weaknesses in particular GC approaches. In particular, interference from synchronization causes allocations
-to be more bursty. While a single server thread locks indexes in order to replace products or customers, multiple customer
-threads that would normally be allocating are idle, waiting for the server thread to releases its exclusive lock. When the server
-thread releases its lock, these customer threads resume execution and allocate at rates much higher than normal because they
-have fallen behind their intended execution schedule. This causes a burst of allocation, making it difficult for the GC
-scheduling heuristic to predict when the allocation pool will become depleted. If the heuristic is late to trigger the start
-of GC, it is likely that the allocation pool will become exhausted before the GC replenishes it, resulting in a degenerated
-stop-the-world GC pause.
-
 ## Interpreting Results
 
 The report displays response times for each of the various distinct operations that are performed by the Extremem workload. The average response times give an approximation of overall performance. A lower average response time corresponds to improved throughput.
diff --git a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Bootstrap.java b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Bootstrap.java
index c83ef22..fb54165 100644
--- a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Bootstrap.java
+++ b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Bootstrap.java
@@ -12,6 +12,7 @@ public class Bootstrap extends ExtrememThread {
   }
 
   public void runExtreme() {
+    UpdateThread update_thread;
     CustomerThread[] customer_threads;
     ServerThread[] server_threads;
 
@@ -269,7 +270,8 @@ public void runExtreme() {
       server_threads[i].start(); // will wait for first release
     }
     staggered_start.garbageFootprint(this);
-
+    staggered_start = null;
+
     staggered_customer_replacement.garbageFootprint(this);
     staggered_customer_replacement = null;
 
@@ -286,7 +288,17 @@ public void runExtreme() {
     if (product_replacement_stagger != null)
       product_replacement_stagger.garbageFootprint(this);
     product_replacement_stagger = null;
-
+
+    if (config.PhasedUpdates()) {
+      staggered_start = start_time.addRelative(this, config.PhasedUpdateInterval());
+      update_thread = new UpdateThread(config, randomLong(), all_products, all_customers, staggered_start, end_time);
+      update_thread.start(); // will wait for first release
+      staggered_start.garbageFootprint(this);
+      staggered_start = null;
+    } else {
+      update_thread = null;
+    }
+
     now = AbsoluteTime.now(this);
     if (config.ReportCSV()) {
       s = Long.toString(now.microseconds());
@@ -311,7 +323,6 @@ public void runExtreme() {
     now = null;
 
     Trace.msg(2, "Joining with customer threads");
-    // Each thread will terminate when the end_time is reached.
     for (int i = 0; i < config.CustomerThreads(); i++) {
       try {
@@ -322,7 +333,6 @@ public void runExtreme() {
     }
 
     Trace.msg(2, "Joining with server threads");
-
     for (int i = 0; i < config.ServerThreads(); i++) {
       try {
         server_threads[i].join();
@@ -330,7 +340,20 @@ public void runExtreme() {
         i--; // just try it again
       }
     }
-
+
+    if (update_thread != null) {
+      Trace.msg(2, "Joining with update thread");
+      boolean retry = false;
+      do {
+        try {
+          update_thread.join();
+          retry = false;
+        } catch (InterruptedException x) {
+          retry = true;
+        }
+      } while (retry);
+    }
+
     Trace.msg(2, "Program simulation has ended");
     all_products.report(this);
     all_customers.report(this);
diff --git a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Configuration.java b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Configuration.java
index 25b812e..a509adc 100644
--- a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Configuration.java
+++ b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Configuration.java
@@ -38,6 +38,7 @@ class Configuration {
   static final boolean DefaultReportIndividualThreads = false;
   static final boolean DefaultReportCSV = false;
   static final boolean DefaultFastAndFurious = false;
+  static final boolean DefaultPhasedUpdates = false;
 
   static final int DefaultDictionarySize = 25000;
   static final String DefaultDictionaryFile = "/usr/share/dict/words";
@@ -75,6 +76,8 @@ class Configuration {
   static final int DefaultProductReplacementPeriodSeconds = 90;
   static final int DefaultProductReplacementCount = 64;
 
+  static final int DefaultPhasedUpdateIntervalSeconds = 60;
+
   static final long DefaultInitializationDelayMillis = 50;
 
   static final long DefaultDurationMinutes = 10;
@@ -96,6 +99,7 @@ class Configuration {
   private int ServerThreads;
 
   private boolean FastAndFurious;
+  private boolean PhasedUpdates;
   private boolean ReportIndividualThreads;
   private boolean ReportCSV;
@@ -112,6 +116,8 @@ class Configuration {
   private RelativeTime ServerPeriod;
   private RelativeTime BrowsingExpiration;
 
+  private RelativeTime PhasedUpdateInterval;
+
   // Multiple concurrent Server threads execute with the same period,
   // with different stagger values.
   private RelativeTime CustomerReplacementPeriod;
@@ -200,6 +206,7 @@ void initialize(ExtrememThread t) {
     RandomSeed = DefaultRandomSeed;
 
     FastAndFurious = DefaultFastAndFurious;
+    PhasedUpdates = DefaultPhasedUpdates;
 
     SimulationDuration = new RelativeTime(t, DefaultDurationMinutes * 60, 0);
     SimulationDuration.changeLifeSpan(t, LifeSpan.NearlyForever);
@@ -222,6 +229,9 @@ void initialize(ExtrememThread t) {
     ServerPeriod = rt.addMillis(t, DefaultServerPeriodMilliseconds);
     ServerPeriod.changeLifeSpan(t, LifeSpan.NearlyForever);
 
+    PhasedUpdateInterval = rt.addSeconds(t, DefaultPhasedUpdateIntervalSeconds);
+    PhasedUpdateInterval.changeLifeSpan(t, LifeSpan.NearlyForever);
+
     CustomerReplacementPeriod = (
       rt.addSeconds(t, DefaultCustomerReplacementPeriodSeconds));
     CustomerReplacementPeriod.changeLifeSpan(t, LifeSpan.NearlyForever);
@@ -257,6 +267,7 @@ void initialize(ExtrememThread t) {
 
   private static String[] boolean_patterns = {
     "FastAndFurious",
+    "PhasedUpdates",
     "ReportCSV",
     "ReportIndividualThreads",
   };
@@ -292,6 +303,7 @@ void initialize(ExtrememThread t) {
     "CustomerReplacementPeriod",
     "CustomerThinkTime",
     "InitializationDelay",
+    "PhasedUpdateInterval",
     "ProductReplacementPeriod",
     "ServerPeriod",
    "SimulationDuration",
@@ -362,11 +374,16 @@ else if (booleanString.equals("true"))
           break;
         }
       case 1:
+        if (keyword.equals("PhasedUpdates")) {
+          PhasedUpdates = b;
+          break;
+        }
+      case 2:
         if (keyword.equals("ReportCSV")) {
           ReportCSV = b;
           break;
         }
-      case 2:
+      case 3:
         if (keyword.equals("ReportIndividualThreads")) {
           ReportIndividualThreads = b;
           break;
@@ -585,20 +602,27 @@ else if ((i + 2 == timeString.length()) &&
           break;
         }
       case 5:
+        if (keyword.equals("PhasedUpdateInterval")) {
+          PhasedUpdateInterval.garbageFootprint(t);
+          PhasedUpdateInterval = new RelativeTime(t, secs, nanos);
+          PhasedUpdateInterval.changeLifeSpan(t, LifeSpan.NearlyForever);
+          break;
+        }
+      case 6:
         if (keyword.equals("ProductReplacementPeriod")) {
           ProductReplacementPeriod.garbageFootprint(t);
           ProductReplacementPeriod = new RelativeTime(t, secs, nanos);
           ProductReplacementPeriod.changeLifeSpan(t, LifeSpan.NearlyForever);
           break;
         }
-      case 6:
+      case 7:
         if (keyword.equals("ServerPeriod")) {
           ServerPeriod.garbageFootprint(t);
           ServerPeriod = new RelativeTime(t, secs, nanos);
           ServerPeriod.changeLifeSpan(t, LifeSpan.NearlyForever);
           break;
         }
-      case 7:
+      case 8:
         if (keyword.equals("SimulationDuration")) {
           SimulationDuration.garbageFootprint(t);
           SimulationDuration = new RelativeTime(t, secs, nanos);
@@ -632,6 +656,9 @@ private boolean sufficientVocabulary(int vocab_size, int num_words,
   private void assureConfiguration(ExtrememThread t) {
     // Ignore memory allocation accounting along early termination paths.
 
+    if (PhasedUpdates && FastAndFurious)
+      usage("Only one of PhasedUpdates or FastAndFurious can be true");
+
     if (DictionarySize < 1)
       usage("DictionarySize must be greater or equal to 1");
@@ -738,6 +765,10 @@ boolean FastAndFurious() {
     return FastAndFurious;
   }
 
+  boolean PhasedUpdates() {
+    return PhasedUpdates;
+  }
+
   int MaxArrayLength() {
     return MaxArrayLength;
   }
@@ -852,6 +883,10 @@ RelativeTime ProductReplacementPeriod() {
     return ProductReplacementPeriod;
   }
 
+  RelativeTime PhasedUpdateInterval() {
+    return PhasedUpdateInterval;
+  }
+
   // Dictionary services
   String arbitraryWord(ExtrememThread t) {
     return dictionary.arbitraryWord(t);
@@ -873,9 +908,20 @@ void dumpCSV(ExtrememThread t) {
                   ReportIndividualThreads? "true": "false");
     Report.output("ReportCSV,",
"true": "false"); - Report.output(); + Report.output("Simulation configuration"); + Report.output("FastAndFurious,", + FastAndFurious? "true": "false"); + Report.output("PhasedUpdates,", + PhasedUpdates? "true": "false"); + Report.output(); + s = Long.toString(PhasedUpdateInterval.microseconds()); + l = s.length(); + Util.ephemeralString(t, l); + Report.output("PhasedUpdateInterval,", s); + Util.abandonEphemeralString(t, l); + s = Integer.toString(RandomSeed); l = s.length(); Util.ephemeralString(t, l); @@ -1081,6 +1127,16 @@ void dump(ExtrememThread t) { Report.output(); Report.output("Simulation configuration"); + Report.output(" Fine-grain locking of data base (FastAndFurious): ", FastAndFurious? "true": "false"); + Report.output(" Rebuild data base in phases (PhasedUpdates): ", PhasedUpdates? "true": "false"); + Report.output(); + s = PhasedUpdateInterval.toString(); + l = s.length(); + Util.ephemeralString(t, l); + Report.output(" Time between data rebuild (PhasedUpdateInterval): ", s); + Util.abandonEphemeralString(t, l); + + s = Integer.toString(RandomSeed); l = s.length(); Util.ephemeralString(t, l); diff --git a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Customers.java b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Customers.java index eb441b3..614a922 100644 --- a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Customers.java +++ b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Customers.java @@ -9,13 +9,90 @@ * Keep track of all currently active customers. */ class Customers extends ExtrememObject { + static class ChangeLogNode { + private Customer replacement_customer; + private int replacement_index; + private ChangeLogNode next; + + ChangeLogNode(int index, Customer customer) { + this.replacement_index = index; + this.replacement_customer = customer; + this.next = null; + } + + int index() { + return replacement_index; + } + + Customer customer() { + return replacement_customer; + } + } + + static class ChangeLog { + ChangeLogNode head, tail; + + ChangeLog() { + head = tail = null; + } + + synchronized private void addToEnd(ChangeLogNode node) { + if (head == null) { + head = tail = node; + } else { + tail.next = node; + tail = node; + } + } + + void append(int index, Customer customer) { + ChangeLogNode new_node = new ChangeLogNode(index, customer); + addToEnd(new_node); + } + + // Returns null if ChangeLog is empty. + synchronized ChangeLogNode pull() { + ChangeLogNode result = head; + if (head == tail) { + // This handles case where head == tail == null already. Overwriting with null is cheaper than testing and branching + // over for special case. 
+        head = tail = null;
+      } else {
+        head = head.next;
+      }
+      return result;
+    }
+  }
+
+  static class CurrentCustomersData {
+    final private Arraylet<String> customer_names;
+    final private HashMap<String, Customer> customer_map;
+
+    CurrentCustomersData(Arraylet<String> customer_names, HashMap<String, Customer> customer_map) {
+      this.customer_names = customer_names;
+      this.customer_map = customer_map;
+    }
+
+    Arraylet<String> customerNames() {
+      return customer_names;
+    }
+
+    HashMap<String, Customer> customerMap() {
+      return customer_map;
+    }
+  }
+
+  // The change_log is only used if config.PhasedUpdates
+  final ChangeLog change_log;
+
   static final float DefaultLoadFactor = 0.75f;
 
   final private ConcurrencyControl cc;
   final private Configuration config;
 
   // was final private String[] customer_names;
-  final private Arraylet<String> customer_names;
-  final private HashMap<String, Customer> customer_map;
+
+  private Arraylet<String> customer_names;
+  private HashMap<String, Customer> customer_map;
 
   private int cbhs = 0; // cumulative browsing history size.
@@ -28,6 +105,12 @@ class Customers extends ExtrememObject {
     MemoryLog log = t.memoryLog();
     Polarity Grow = Polarity.Expand;
 
+    if (config.PhasedUpdates()) {
+      change_log = new ChangeLog();
+    } else {
+      change_log = null;
+    }
+
     // Account for cc, config, customer_names, customer_map
     log.accumulate(ls, MemoryFlavor.ObjectReference, Grow, 4);
     // Account for long cncl, next_customer_no; int cbhs
@@ -115,9 +198,27 @@ private String randomDistinctName (ExtrememThread t) {
     } while (true);
   }
 
+  // In PhasedUpdates mode of operation, the database updater thread invokes this service to update customer_names
+  // and customer_map each time it rebuilds the Customers database.
+  synchronized void establishUpdatedDataBase(ExtrememThread t, Arraylet<String> customer_names,
+                                             HashMap<String, Customer> customer_map) {
+    this.customer_names = customer_names;
+    this.customer_map = customer_map;
+  }
+
+  synchronized CurrentCustomersData getCurrentData() {
+    return new CurrentCustomersData(customer_names, customer_map);
+  }
+
   Customer selectRandomCustomer(ExtrememThread t) {
     Customer result;
-    if (config.FastAndFurious()) {
+    if (config.PhasedUpdates()) {
+      // no synchronization necessary here
+      int index = t.randomUnsignedInt() % config.NumCustomers();
+      CurrentCustomersData frozen_state = getCurrentData();
+      String name = frozen_state.customerNames().get(index);
+      result = frozen_state.customerMap().get(name);
+    } else if (config.FastAndFurious()) {
       int index = t.randomUnsignedInt() % config.NumCustomers();
       synchronized (customer_names) {
         String name = customer_names.get(index);
@@ -139,6 +240,62 @@ Customer controlledSelectRandomCustomer(ExtrememThread t) {
     return c;
   }
 
+  // For PhasedUpdates mode of operation
+  void replaceRandomCustomerPhasedUpdates(ExtrememThread t) {
+    String new_customer_name = randomDistinctName(t);
+    long new_customer_no;
+    synchronized (this) {
+      new_customer_no = next_customer_no++;
+    }
+    Customer new_customer = new Customer(t, LifeSpan.NearlyForever, new_customer_name, new_customer_no);
+    int replacement_index = t.randomUnsignedInt() % config.NumCustomers();
+    change_log.append(replacement_index, new_customer);
+
+    // Memory accounting is not implemented for PhasedUpdates mode
+  }
+
+  // Rebuild the Customers database from the change_log. Return the number of customers changed.
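+  // The rebuild proceeds in three steps: (1) copy the current customer_names and customer_map into
+  // fresh structures, (2) drain the change_log, applying each replacement to the copies (a replacement
+  // is skipped only if its new name collides with a name that is already present), and (3) publish the
+  // copies with a single call to establishUpdatedDataBase().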
+  long rebuildCustomersPhasedUpdates(ExtrememThread t) {
+    Arraylet<String> new_customer_names;
+    HashMap<String, Customer> new_customer_map;
+    int num_customers = config.NumCustomers();
+    int capacity = Util.computeHashCapacity(num_customers, DefaultLoadFactor, Util.InitialHashMapArraySize);
+    long tally = 0;
+
+    LifeSpan ls = this.intendedLifeSpan();
+    new_customer_names = new Arraylet<String>(t, ls, config.MaxArrayLength(), num_customers);
+    new_customer_map = new HashMap<String, Customer>(capacity, DefaultLoadFactor);
+
+    // First, copy the existing database
+    for (int i = 0; i < num_customers; i++) {
+      String customer_name = customer_names.get(i);
+      new_customer_names.set(i, customer_name);
+      new_customer_map.put(customer_name, customer_map.get(customer_name));
+    }
+
+    // Then, modify the database according to instructions in the change log.
+    ChangeLogNode change;
+    while ((change = change_log.pull()) != null) {
+      tally++;
+      int replacement_index = change.index();
+      Customer replacement_customer = change.customer();
+      String replacement_customer_name = replacement_customer.name();
+      if (new_customer_map.get(replacement_customer_name) == null) {
+        String obsolete_name = new_customer_names.get(replacement_index);
+        new_customer_map.remove(obsolete_name);
+        new_customer_names.set(replacement_index, replacement_customer_name);
+        new_customer_map.put(replacement_customer_name, replacement_customer);
+        // Don't bother to expire the old customer or expunge it from save-for-later queues. That will happen when the
+        // expiration time is reached, at which time the object will become garbage.
+      }
+      // else, in the very unlikely event that this new name is redundant with an existing name, skip the
+      // customer replacement request.
+    }
+    establishUpdatedDataBase(t, new_customer_names, new_customer_map);
+    return tally;
+  }
+
   void replaceRandomCustomer(ExtrememThread t) {
     if (config.FastAndFurious()) {
       String new_customer_name = randomDistinctName(t);
diff --git a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Products.java b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Products.java
index eabb789..7bb5de2 100644
--- a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Products.java
+++ b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/Products.java
@@ -9,6 +9,100 @@
  * Keep track of all currently active products.
  */
 class Products extends ExtrememObject {
+  static class ChangeLogNode {
+    private Product replacement_product;
+    private int replacement_index;
+    private ChangeLogNode next;
+
+    ChangeLogNode(int index, Product product) {
+      this.replacement_index = index;
+      this.replacement_product = product;
+      this.next = null;
+    }
+
+    int index() {
+      return replacement_index;
+    }
+
+    Product product() {
+      return replacement_product;
+    }
+  }
+
+  static class ChangeLog {
+    ChangeLogNode head, tail;
+
+    ChangeLog() {
+      head = tail = null;
+    }
+
+    synchronized private void addToEnd(ChangeLogNode node) {
+      if (head == null) {
+        head = tail = node;
+      } else {
+        tail.next = node;
+        tail = node;
+      }
+    }
+
+    void append(int index, Product product) {
+      ChangeLogNode new_node = new ChangeLogNode(index, product);
+      addToEnd(new_node);
+    }
+
+    // Returns null if ChangeLog is empty.
+    synchronized ChangeLogNode pull() {
+      ChangeLogNode result = head;
+      if (head == tail) {
+        // This handles case where head == tail == null already. Overwriting with null is cheaper than testing and branching
+        // over for special case.
+        head = tail = null;
+      } else {
+        head = head.next;
+      }
+      return result;
+    }
+  }
+
+  static class CurrentProductsData {
+    final private ArrayletOflong product_ids;
+    // Map unique product id to Product
+    final private TreeMap<Long, Product> product_map;
+    // Map keywords found in product name to product id
+    final private TreeMap<String, ExtrememHashSet<Long>> name_index;
+    // Map keywords found in product description to product id
+    final private TreeMap<String, ExtrememHashSet<Long>> description_index;
+
+    CurrentProductsData(ArrayletOflong product_ids, TreeMap<Long, Product> product_map,
+                        TreeMap<String, ExtrememHashSet<Long>> name_index,
+                        TreeMap<String, ExtrememHashSet<Long>> description_index) {
+
+      this.product_ids = product_ids;
+      this.product_map = product_map;
+      this.name_index = name_index;
+      this.description_index = description_index;
+    }
+
+    ArrayletOflong productIds() {
+      return product_ids;
+    }
+
+    TreeMap<Long, Product> productMap() {
+      return product_map;
+    }
+
+    TreeMap<String, ExtrememHashSet<Long>> nameIndex() {
+      return name_index;
+    }
+
+    TreeMap<String, ExtrememHashSet<Long>> descriptionIndex() {
+      return description_index;
+    }
+  }
+
+  // The change_log is only used if config.PhasedUpdates
+  final ChangeLog change_log;
+
   private static final float DefaultLoadFactor = 0.75f;
 
   /* Concurrency control:
@@ -115,7 +209,6 @@ class Products extends ExtrememObject {
   private long npi; // next product id
 
-  // was private long[] product_ids;
   private ArrayletOflong product_ids;
 
   private ConcurrencyControl cc;
@@ -131,6 +224,12 @@ class Products extends ExtrememObject {
   Products (ExtrememThread t, LifeSpan ls, Configuration config) {
     super(t, ls);
 
+    if (config.PhasedUpdates()) {
+      change_log = new ChangeLog();
+    } else {
+      change_log = null;
+    }
+
     MemoryLog log = t.memoryLog();
     MemoryLog garbage = t.garbageLog();
     Polarity Grow = Polarity.Expand;
@@ -209,6 +308,33 @@ class Products extends ExtrememObject {
     // accounted during construction of description_index.
   }
 
+  // In PhasedUpdates mode of operation, the database updater thread invokes this service to update product_ids,
+  // product_map, name_index, and description_index each time it rebuilds the Products database.
+  synchronized void establishUpdatedDataBase(ExtrememThread t, ArrayletOflong product_ids, TreeMap<Long, Product> product_map,
+                                             TreeMap<String, ExtrememHashSet<Long>> name_index,
+                                             TreeMap<String, ExtrememHashSet<Long>> description_index) {
+    this.product_ids = product_ids;
+    this.product_map = product_map;
+    this.name_index = name_index;
+    this.description_index = description_index;
+  }
+
+  // In PhasedUpdates mode of operation, every CustomerThread invokes getUpdatedDataBase() before each customer transaction.
+  // This assures that no incoherent changes to the database are seen in the middle of the customer transaction.
+  synchronized CurrentProductsData getUpdatedDataBase() {
+    if (config.PhasedUpdates()) {
+      return new CurrentProductsData(product_ids, product_map, name_index, description_index);
+    } else {
+      throw new IllegalStateException("Only update database in PhasedUpdates mode of operation");
+    }
+  }
+
+  // For PhasedUpdates mode of operation
+  Product fetchProductByIndexPhasedUpdates(ExtrememThread t, int index, CurrentProductsData current) {
+    long id = current.productIds().get(index);
+    return current.productMap().get(id);
+  }
+
   Product fetchProductByIndex(ExtrememThread t, int index) {
     if (config.FastAndFurious()) {
       long id;
@@ -237,6 +363,17 @@ Product controlledFetchProductByIndex(ExtrememThread t, int index) {
     return product_map.get(id);
   }
 
+  // The Product result returned is only an approximation of which Product will be replaced.
+  // In the case that the change_log holds multiple changes to the same product, this replacement will change
+  // the product that is in the change_log rather than the one that is in the current database.
+  Product replaceArbitraryProductPhasedUpdates(ExtrememThread t, Product new_product) {
+    int index = t.randomUnsignedInt() % product_ids.length();
+    change_log.append(index, new_product);
+    long old_id = product_ids.get(index);
+    Product removed_product_approximation = product_map.get(old_id);
+    return removed_product_approximation;
+  }
+
   Product replaceArbitraryProduct(ExtrememThread t, Product new_product) {
     if (config.FastAndFurious()) {
       long old_id;
@@ -336,10 +473,14 @@ public void replaceRandomProduct(ExtrememThread t) {
     long new_id = nextUniqId ();
     Product new_product = new Product(t, LifeSpan.NearlyForever, new_id, name, description);
-    Product old_product = replaceArbitraryProduct (t, new_product);
+    if (config.PhasedUpdates()) {
+      replaceArbitraryProductPhasedUpdates(t, new_product);
-    Trace.msg(4, "old product: ", old_product.name(),
-              " replaced with new product: ", new_product.name());
+    } else {
+      Product old_product = replaceArbitraryProduct (t, new_product);
+      Trace.msg(4, "old product: ", old_product.name(),
+                " replaced with new product: ", new_product.name());
+    }
 
     // Note that there is a race between when keyword searches are
     // performed and when products are looked up. For example, a
@@ -349,8 +490,85 @@
     // This race is handled elsewhere.
   }
 
+  Product[] lookupProductsMatchingAllPhasedUpdates(ExtrememThread t, String[] keywords, CurrentProductsData current) {
+    ExtrememHashSet<Product> intersection = new ExtrememHashSet<Product>(t, LifeSpan.Ephemeral);
+    for (int i = 0; i < keywords.length; i++) {
+      String keyword = keywords[i];
+      if (i == 0) {
+        ExtrememHashSet<Long> matched_ids;
+        matched_ids = current.nameIndex().get(keyword);
+        if (matched_ids != null) {
+          Util.createEphemeralHashSetIterator(t);
+          for (Long id: matched_ids) {
+            addToSetIfAvailable(t, intersection, id);
+          }
+          Util.abandonEphemeralHashSetIterator(t);
+        }
+        matched_ids = current.descriptionIndex().get(keyword);
+        if (matched_ids != null) {
+          Util.createEphemeralHashSetIterator(t);
+          for (Long id: matched_ids) {
+            addToSetIfAvailable(t, intersection, id);
+          }
+          Util.abandonEphemeralHashSetIterator(t);
+        }
+      } else {
+        ExtrememHashSet<Long> matched_ids;
+        ExtrememHashSet<Product> new_matches = new ExtrememHashSet<Product>(t, LifeSpan.Ephemeral);
+        matched_ids = current.nameIndex().get(keyword);
+        if (matched_ids != null) {
+          Util.createEphemeralHashSetIterator(t);
+          for (Long id: matched_ids) {
+            addToSetIfAvailable(t, new_matches, id);
+          }
+          Util.abandonEphemeralHashSetIterator(t);
+        }
+        matched_ids = current.descriptionIndex().get(keyword);
+        if (matched_ids != null) {
+          Util.createEphemeralHashSetIterator(t);
+          for (Long id: matched_ids) {
+            addToSetIfAvailable(t, new_matches, id);
+          }
+          Util.abandonEphemeralHashSetIterator(t);
+        }
+        ExtrememHashSet<Product> remove_set = new ExtrememHashSet<Product>(t, LifeSpan.Ephemeral);
+        Util.createEphemeralHashSetIterator(t);
+        for (Product p: intersection) {
+          if (!new_matches.contains(p)) {
+            remove_set.add(t, p);
+          }
+        }
+        Util.abandonEphemeralHashSetIterator(t);
+        new_matches.garbageFootprint(t);
+        Util.createEphemeralHashSetIterator(t);
+        for (Product p: remove_set) {
+          intersection.remove(t, p);
+        }
+        Util.abandonEphemeralHashSetIterator(t);
+        remove_set.garbageFootprint(t);
+        if (intersection.size() == 0) {
+          Util.ephemeralReferenceArray(t, 0);
+          // Returning an array with no entries.
+          return new Product[0];
+        }
+      }
+    }
+    Product[] result = new Product[intersection.size()];
+    Util.ephemeralReferenceArray(t, result.length);
+    int j = 0;
+    Util.createEphemeralHashSetIterator(t);
+    for (Product p: intersection)
+      result[j++] = p;
+    Util.abandonEphemeralHashSetIterator(t);
+    intersection.garbageFootprint(t);
+    return result;
+  }
+
   Product[] lookupProductsMatchingAll(ExtrememThread t, String [] keywords) {
-    if (config.FastAndFurious()) {
+    if (config.PhasedUpdates()) {
+      CurrentProductsData all_products_currently = getUpdatedDataBase();
+      return lookupProductsMatchingAllPhasedUpdates(t, keywords, all_products_currently);
+    } else if (config.FastAndFurious()) {
       ExtrememHashSet<Product> intersection = new ExtrememHashSet<Product>(t, LifeSpan.Ephemeral);
       for (int i = 0; i < keywords.length; i++) {
         String keyword = keywords[i];
@@ -446,9 +664,44 @@ Product[] lookupProductsMatchingAll(ExtrememThread t, String [] keywords) {
     }
   }
 
-  Product[] lookupProductsMatchingAny(ExtrememThread t,
-                                      String [] keywords) {
-    if (config.FastAndFurious()) {
+  Product[] lookupProductsMatchingAnyPhasedUpdates(ExtrememThread t, String [] keywords, CurrentProductsData products) {
+    ExtrememHashSet<Product> accumulator = new ExtrememHashSet<Product>(t, LifeSpan.Ephemeral);
+    for (int i = 0; i < keywords.length; i++) {
+      String keyword = keywords[i];
+      ExtrememHashSet<Long> matched_ids = products.nameIndex().get(keyword);
+      if (matched_ids != null) {
+        Util.createEphemeralHashSetIterator(t);
+        for (Long id: matched_ids) {
+          addToSetIfAvailable(t, accumulator, id);
+        }
+        Util.abandonEphemeralHashSetIterator(t);
+      }
+      matched_ids = products.descriptionIndex().get(keyword);
+      if (matched_ids != null) {
+        Util.createEphemeralHashSetIterator(t);
+        for (Long id: matched_ids) {
+          addToSetIfAvailable(t, accumulator, id);
+        }
+        Util.abandonEphemeralHashSetIterator(t);
+      }
+    }
+    Product[] result = new Product[accumulator.size()];
+    Util.ephemeralReferenceArray(t, result.length);
+    int j = 0;
+    Util.createEphemeralHashSetIterator(t);
+    for (Product p: accumulator)
+      result[j++] = p;
+    Util.abandonEphemeralHashSetIterator(t);
+    accumulator.garbageFootprint(t);
+    return result;
+  }
+
+  Product[] lookupProductsMatchingAny(ExtrememThread t, String [] keywords) {
+    if (config.PhasedUpdates()) {
+      CurrentProductsData all_products_currently = getUpdatedDataBase();
+      return lookupProductsMatchingAnyPhasedUpdates(t, keywords, all_products_currently);
+    } else if (config.FastAndFurious()) {
       ExtrememHashSet<Product> accumulator = new ExtrememHashSet<Product>(t, LifeSpan.Ephemeral);
       for (int i = 0; i < keywords.length; i++) {
         String keyword = keywords[i];
@@ -495,6 +748,54 @@ Product[] lookupProductsMatchingAny(ExtrememThread t,
     }
   }
 
+  // Rebuild the Products database from the change_log. Return the number of products actually replaced.
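+  // The rebuild mirrors rebuildCustomersPhasedUpdates(): copy product_ids and product_map into fresh
+  // structures, drain the change_log into those copies, rebuild the name and description keyword
+  // indexes from the surviving products, then publish everything with a single call to
+  // establishUpdatedDataBase().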
+  long rebuildProductsPhasedUpdates(ExtrememThread t) {
+    ArrayletOflong new_product_ids;
+    TreeMap<Long, Product> new_product_map;
+    TreeMap<String, ExtrememHashSet<Long>> new_name_index;
+    TreeMap<String, ExtrememHashSet<Long>> new_description_index;
+    long tally = 0;
+
+    LifeSpan ls = this.intendedLifeSpan();
+    int num_products = config.NumProducts();
+    new_product_ids = new ArrayletOflong(t, ls, config.MaxArrayLength(), num_products);
+    new_product_map = new TreeMap<Long, Product>();
+    new_name_index = new TreeMap<String, ExtrememHashSet<Long>>();
+    new_description_index = new TreeMap<String, ExtrememHashSet<Long>>();
+
+    // First, copy the existing database
+    for (int i = 0; i < num_products; i++) {
+      long product_id = product_ids.get(i);
+      Product product = product_map.get(product_id);
+      new_product_ids.set(i, product_id);
+      new_product_map.put(product_id, product);
+    }
+
+    // Then, modify the database according to content of the change log.
+    ChangeLogNode change;
+    while ((change = change_log.pull()) != null) {
+      int replacement_index = change.index();
+      // Consult new_product_ids rather than product_ids so that a second change to the same index
+      // removes the product installed by the earlier change.
+      long replacement_product_id = new_product_ids.get(replacement_index);
+      tally++;
+
+      new_product_map.remove(replacement_product_id);
+
+      Product new_product = change.product();
+      long new_product_id = new_product.id();
+      new_product_ids.set(replacement_index, new_product_id);
+      new_product_map.put(new_product_id, new_product);
+    }
+
+    // Now, build the replacement indexes
+    for (int i = 0; i < num_products; i++) {
+      long product_id = new_product_ids.get(i);
+      Product product = new_product_map.get(product_id);
+      addToIndicesPhasedUpdates(t, product, new_name_index, new_description_index);
+    }
+    establishUpdatedDataBase(t, new_product_ids, new_product_map, new_name_index, new_description_index);
+    return tally;
+  }
+
   // Memory footprint may change as certain product names and
   // descriptions are replaced.
   void tallyMemory (MemoryLog log, LifeSpan ls, Polarity p) {
@@ -820,6 +1121,37 @@ private ExtrememHashSet<Long> getSetAtIndex(
     return set;
   }
 
+  // Thread does not hold exclusion lock and does not require it. Memory accounting is not fully implemented.
+  private void addStringToIndexPhasedUpdates(ExtrememThread t, long id, boolean is_name_index, String s,
+                                             TreeMap<String, ExtrememHashSet<Long>> index) {
+    LifeSpan ls = this.intendedLifeSpan();
+    // Assume first characters of s not equal to space
+    for (int start = 0; start < s.length(); start = skipSpaces(s, start)) {
+      int end = skipNonSpaces(s, start);
+      String word = s.substring(start, end);
+      start = end;
+
+      ExtrememHashSet<Long> set = index.get(word);
+      if (set == null) {
+        // Only the single updater thread modifies this index, so no synchronization is required here.
+        set = new ExtrememHashSet<Long>(t, ls);
+        index.put(word, set);
+      }
+      // id gets auto-boxed to Long
+      set.add(t, id);
+    }
+  }
+
+
   // Thread does not hold exclusion lock.
   // This method accounts for memory required to autobox id, and to
   // create or expand the ExtrememHashSet, as appropriate.
@@ -943,11 +1275,21 @@ private ExtrememHashSet<Long> getSetAtIndex(
     }
   }
 
+  private void addToIndicesPhasedUpdates(ExtrememThread t, Product p,
+                                         TreeMap<String, ExtrememHashSet<Long>> name_map,
+                                         TreeMap<String, ExtrememHashSet<Long>> desc_map) {
+    long id = p.id ();
+
+    // Memory accounting not implemented for PhasedUpdates
+    addStringToIndexPhasedUpdates(t, id, true, p.name(), name_map);
+    addStringToIndexPhasedUpdates(t, id, false, p.description(), desc_map);
+  }
+
   // Thread does not hold exclusion lock.
   private void addToIndicesFastAndFurious(ExtrememThread t, Product p) {
     long id = p.id ();
-    MemoryLog log = t.memoryLog();
+    // Memory accounting not implemented for FastAndFurious
     addStringToIndexFastAndFurious(t, id, true, p.name(), name_index);
     addStringToIndexFastAndFurious(t, id, false, p.description(), description_index);
   }
diff --git a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/ServerThread.java b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/ServerThread.java
index d092a83..7e01ccf 100644
--- a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/ServerThread.java
+++ b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/ServerThread.java
@@ -172,8 +172,13 @@ public void runExtreme() {
         break;
       case 2:
         if (next_release_time.compare(customer_replacement_time) >= 0) {
-          for (int i = config.CustomerReplacementCount(); i > 0; i--)
-            all_customers.replaceRandomCustomer(this);
+          if (config.PhasedUpdates()) {
+            for (int i = config.CustomerReplacementCount(); i > 0; i--)
+              all_customers.replaceRandomCustomerPhasedUpdates(this);
+          } else {
+            for (int i = config.CustomerReplacementCount(); i > 0; i--)
+              all_customers.replaceRandomCustomer(this);
+          }
           customer_replacement_time.garbageFootprint(this);
           customer_replacement_time = (
diff --git a/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/UpdateThread.java b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/UpdateThread.java
new file mode 100644
index 0000000..8aa568a
--- /dev/null
+++ b/Extremem/src/main/java/com/amazon/corretto/benchmark/extremem/UpdateThread.java
@@ -0,0 +1,304 @@
+// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+// SPDX-License-Identifier: Apache-2.0
+
+package com.amazon.corretto.benchmark.extremem;
+
+class UpdateThread extends ExtrememThread {
+  // There are two attention points: (0) Rebuild Customers, (1) Rebuild Products
+  final static int TotalAttentionPoints = 2;
+
+  // Identifies the point of attention for the next release of this update thread.
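+  // Attention point 0 rebuilds the Customers database and attention point 1 rebuilds the Products
+  // database; the thread visits the attention points round-robin, waiting PhasedUpdateInterval
+  // between consecutive rebuilds.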
+  private int attention;
+
+  private final Configuration config;
+  private final Products all_products;
+  private final Customers all_customers;
+  private AbsoluteTime next_release_time;
+  private final AbsoluteTime end_simulation_time;
+
+  private long customers_rebuild_count = 0;
+  private long replaced_customers_min = 0;
+  private long replaced_customers_max = 0;
+  private long replaced_customers_total = 0;
+  private long replaced_customers_micros_min = 0;
+  private long replaced_customers_micros_max = 0;
+  private long replaced_customers_micros_total = 0;
+
+  private long products_rebuild_count = 0;
+  private long replaced_products_min = 0;
+  private long replaced_products_max = 0;
+  private long replaced_products_total = 0;
+  private long replaced_products_micros_min = 0;
+  private long replaced_products_micros_max = 0;
+  private long replaced_products_micros_total = 0;
+
+  // private final MemoryLog alloc_accumulator;
+  // private final MemoryLog garbage_accumulator;
+
+  UpdateThread(Configuration config, long random_seed, Products all_products, Customers all_customers,
+               AbsoluteTime first_release, AbsoluteTime end_simulation) {
+    super (config, random_seed);
+    final Polarity Grow = Polarity.Expand;
+    final MemoryLog log = this.memoryLog();
+    final MemoryLog garbage = this.garbageLog();
+
+    this.attention = 0;
+    this.config = config;
+
+    this.setLabel("PhasedUpdaterThread");
+    // Util.convertEphemeralString(this, LifeSpan.NearlyForever, label.length());
+
+    this.all_customers = all_customers;
+    this.all_products = all_products;
+
+    // Replaced after every rebuild; the release period is PhasedUpdateInterval.
+    this.next_release_time = new AbsoluteTime(this, first_release);
+    this.next_release_time.changeLifeSpan(this, LifeSpan.TransientShort);
+
+    this.end_simulation_time = end_simulation;
+
+    // this.accumulator = accumulator;
+    // this.alloc_accumulator = alloc_accumulator;
+    // this.garbage_accumulator = garbage_accumulator;
+
+    // Account for reference fields label, all_products,
+    // all_customers, sales_queue, browsing_queue,
+    // end_simulation_time, history, accumulator, alloc_accumulator,
+    // garbage_accumulator, next_release_time,
+    // customer_replacement_time, product_replacement_time
+    // log.accumulate(LifeSpan.NearlyForever,
+    //                MemoryFlavor.ObjectReference, Grow, 13);
+    // Account for int field attention.
+    // log.accumulate(LifeSpan.NearlyForever,
+    //                MemoryFlavor.ObjectRSB, Grow, Util.SizeOfInt);
+  }
+
+  public void runExtreme() {
+    long customers_rebuild_count = 0;
+    long replaced_customers_min = 0;
+    long replaced_customers_max = 0;
+    long replaced_customers_total = 0;
+    long replaced_customers_micros_min = 0;
+    long replaced_customers_micros_max = 0;
+    long replaced_customers_micros_total = 0;
+
+    long products_rebuild_count = 0;
+    long replaced_products_min = 0;
+    long replaced_products_max = 0;
+    long replaced_products_total = 0;
+    long replaced_products_micros_min = 0;
+    long replaced_products_micros_max = 0;
+    long replaced_products_micros_total = 0;
+
+    while (true) {
+      // If the simulation will have ended before we wake up, don't
+      // even bother to sleep.
+      if (next_release_time.compare(end_simulation_time) >= 0)
+        break;
+
+      AbsoluteTime now = next_release_time.sleep(this);
+      AbsoluteTime after = now;
+
+      RelativeTime delta;
+      long duration; // microseconds
+
+      // In an earlier implementation, termination of the thread was
+      // determined by comparing next_release_time against
+      // end_simulation_time. In the case that the thread falls
+      // hopelessly behind schedule, the thread "never" terminates.
+      if (now.compare(end_simulation_time) >= 0)
+        break;
+
+      Trace.msg(4, "PhasedUpdaterThread processing with attention: ", Integer.toString(attention));
+
+      switch (attention) {
+        case 0:
+          // Update the Customers database
+          long customers_replaced = all_customers.rebuildCustomersPhasedUpdates(this);
+          after = AbsoluteTime.now(this);
+          delta = after.difference(this, now);
+          duration = delta.microseconds();
+          // now.garbageFootprint();
+          // delta.garbageFootprint();
+          if (customers_rebuild_count++ == 0) {
+            replaced_customers_min = replaced_customers_max = replaced_customers_total = customers_replaced;
+            replaced_customers_micros_min = replaced_customers_micros_max = replaced_customers_micros_total = duration;
+          } else {
+            replaced_customers_total += customers_replaced;
+            if (customers_replaced < replaced_customers_min) {
+              replaced_customers_min = customers_replaced;
+            } else if (customers_replaced > replaced_customers_max) {
+              replaced_customers_max = customers_replaced;
+            }
+            replaced_customers_micros_total += duration;
+            if (duration < replaced_customers_micros_min) {
+              replaced_customers_micros_min = duration;
+            } else if (duration > replaced_customers_micros_max) {
+              replaced_customers_micros_max = duration;
+            }
+          }
+          break;
+        case 1:
+          // Update the Products database
+          long products_replaced = all_products.rebuildProductsPhasedUpdates(this);
+          after = AbsoluteTime.now(this);
+          delta = after.difference(this, now);
+          duration = delta.microseconds();
+          // now.garbageFootprint();
+          // delta.garbageFootprint();
+          if (products_rebuild_count++ == 0) {
+            replaced_products_min = replaced_products_max = replaced_products_total = products_replaced;
+            replaced_products_micros_min = replaced_products_micros_max = replaced_products_micros_total = duration;
+          } else {
+            replaced_products_total += products_replaced;
+            if (products_replaced < replaced_products_min) {
+              replaced_products_min = products_replaced;
+            } else if (products_replaced > replaced_products_max) {
+              replaced_products_max = products_replaced;
+            }
+            replaced_products_micros_total += duration;
+            if (duration < replaced_products_micros_min) {
+              replaced_products_micros_min = duration;
+            } else if (duration > replaced_products_micros_max) {
+              replaced_products_micros_max = duration;
+            }
+          }
+          break;
+        default:
+          assert (false): "Unhandled attention point in PhasedUpdaterThread";
+      }
+      if (attention-- == 0)
+        attention = TotalAttentionPoints - 1;
+
+      // next_release_time.garbageFootprint(this);
+      next_release_time = after.addRelative(this, config.PhasedUpdateInterval());
+      // after.garbageFootprint(this);
+      next_release_time.changeLifeSpan(this, LifeSpan.TransientShort);
+    }
+    Trace.msg(2, "Updater ", label, " terminating. Time is up.");
Time is up."); + + updateReport(customers_rebuild_count, replaced_customers_min, replaced_customers_max, replaced_customers_total, + replaced_customers_micros_min, replaced_customers_micros_max, replaced_customers_micros_total, + products_rebuild_count, replaced_products_min, replaced_products_max, replaced_products_total, + replaced_products_micros_min, replaced_products_micros_max, replaced_products_micros_total); + + this.report(this); + } + + synchronized void updateReport(long customers_rebuild_count, long replaced_customers_min, long replaced_customers_max, + long replaced_customers_total, long replaced_customers_micros_min, + long replaced_customers_micros_max, long replaced_customers_micros_total, + long products_rebuild_count, long replaced_products_min, long replaced_products_max, + long replaced_products_total, long replaced_products_micros_min, + long replaced_products_micros_max, long replaced_products_micros_total) { + + this.customers_rebuild_count = customers_rebuild_count; + this.replaced_customers_min = replaced_customers_min; + this.replaced_customers_max = replaced_customers_max; + this.replaced_customers_total = replaced_customers_total; + this.replaced_customers_micros_min = replaced_customers_micros_min; + this.replaced_customers_micros_max = replaced_customers_micros_max; + this.replaced_customers_micros_total = replaced_customers_micros_total; + this.products_rebuild_count = products_rebuild_count; + this.replaced_products_min = replaced_products_min; + this.replaced_products_max = replaced_products_max; + this.replaced_products_total = replaced_products_total; + this.replaced_products_micros_min = replaced_products_micros_min; + this.replaced_products_micros_max = replaced_products_micros_max; + this.replaced_products_micros_total = replaced_products_micros_total; + } + + /* Every subclass overrides this method if its size differs from the size of its superclass. 
+   */
+  void tallyMemory(MemoryLog log, LifeSpan ls, Polarity p) {
+    super.tallyMemory(log, ls, p);
+
+    // Memory accounting not implemented
+  }
+
+  void report(ExtrememThread t) {
+    Report.acquireReportLock();
+    Report.output();
+    if (config.ReportCSV()) {
+      Report.output("PhasedUpdater Thread report");
+      String s = Long.toString(customers_rebuild_count);
+      Report.output("Customer rebuild executions, ", s);
+      s = Long.toString(replaced_customers_total);
+      Report.output("Total replaced customers, ", s);
+      s = Long.toString(replaced_customers_min);
+      Report.output("Minimum replacements per execution, ", s);
+      s = Long.toString(replaced_customers_max);
+      Report.output("Maximum replacements per execution, ", s);
+      double average = ((double) replaced_customers_total) / customers_rebuild_count;
+      s = Double.toString(average);
+      Report.output("Average replacements per execution, ", s);
+      s = Long.toString(replaced_customers_micros_min);
+      Report.output("Minimum execution time (us), ", s);
+      s = Long.toString(replaced_customers_micros_max);
+      Report.output("Maximum execution time (us), ", s);
+      average = ((double) replaced_customers_micros_total) / customers_rebuild_count;
+      s = Double.toString(average);
+      Report.output("Average execution time (us), ", s);
+
+      s = Long.toString(products_rebuild_count);
+      Report.output("Products rebuild executions, ", s);
+      s = Long.toString(replaced_products_total);
+      Report.output("Total replaced products, ", s);
+      s = Long.toString(replaced_products_min);
+      Report.output("Minimum replacements per execution, ", s);
+      s = Long.toString(replaced_products_max);
+      Report.output("Maximum replacements per execution, ", s);
+      average = ((double) replaced_products_total) / products_rebuild_count;
+      s = Double.toString(average);
+      Report.output("Average replacements per execution, ", s);
+      s = Long.toString(replaced_products_micros_min);
+      Report.output("Minimum execution time (us), ", s);
+      s = Long.toString(replaced_products_micros_max);
+      Report.output("Maximum execution time (us), ", s);
+      average = ((double) replaced_products_micros_total) / products_rebuild_count;
+      s = Double.toString(average);
+      Report.output("Average execution time (us), ", s);
+    } else {
+      Report.output("PhasedUpdater Thread report");
+      String s = Long.toString(customers_rebuild_count);
+      Report.output("  Customer rebuild executions: ", s);
+      s = Long.toString(replaced_customers_total);
+      Report.output("  Total replaced customers: ", s);
+      s = Long.toString(replaced_customers_min);
+      Report.output("  Minimum replacements per execution: ", s);
+      s = Long.toString(replaced_customers_max);
+      Report.output("  Maximum replacements per execution: ", s);
+      double average = ((double) replaced_customers_total) / customers_rebuild_count;
+      s = Double.toString(average);
+      Report.output("  Average replacements per execution: ", s);
+      s = Long.toString(replaced_customers_micros_min);
+      Report.output("  Minimum execution time (us): ", s);
+      s = Long.toString(replaced_customers_micros_max);
+      Report.output("  Maximum execution time (us): ", s);
+      average = ((double) replaced_customers_micros_total) / customers_rebuild_count;
+      s = Double.toString(average);
+      Report.output("  Average execution time (us): ", s);
+
+      s = Long.toString(products_rebuild_count);
+      Report.output("  Products rebuild executions: ", s);
+      s = Long.toString(replaced_products_total);
+      Report.output("  Total replaced products: ", s);
+      s = Long.toString(replaced_products_min);
+      Report.output("  Minimum replacements per execution: ", s);
+      s = Long.toString(replaced_products_max);
+      Report.output("  Maximum replacements per execution: ", s);
+      average = ((double) replaced_products_total) / products_rebuild_count;
+      s = Double.toString(average);
+      Report.output("  Average replacements per execution: ", s);
+      s = Long.toString(replaced_products_micros_min);
+      Report.output("  Minimum execution time (us): ", s);
+      s = Long.toString(replaced_products_micros_max);
+      Report.output("  Maximum execution time (us): ", s);
+      average = ((double) replaced_products_micros_total) / products_rebuild_count;
+      s = Double.toString(average);
+      Report.output("  Average execution time (us): ", s);
+    }
+    Report.output();
+    Report.releaseReportLock();
+  }
+}