
Background Thoughts

Let's see if we can organize the various TODO items into a roadmap, to help sort out what needs to be done from what could be done and what should be done.

Requests to add more metrics come in fairly regularly. Right now it appears better to continue to focus on cleaning up the metric_fu code, fixing it, making it 'pluggable', and limiting file/class responsibilities and knowledge of each other's internals. Actually, it would probably behoove us to remove more metrics.

What metrics to include is really a question about what purpose metric_fu serves and what its 'core functionality' should be. Should we replace churn with turbulence? Should we replace cane with rubocop? Should we add brakeman? Should metrics execute code, as simplecov or mutant do?

And we need to improve our guidelines on how to understand the metrics we generate.

Draft Roadmap

Plugin System / Modularity

These will almost necessarily require clarifying the internal program interfaces.

  1. Each metric configures itself and adds itself to a list of configured plugins #91
  2. Rearrange classes and files, and narrow each class's public API
  3. Plugins can use an Environment object to check for ruby version, engine, capabilities, os, etc.
  4. Reduce class responsibilities. Commenting the top of each class with its responsibilities should help
  5. Make a proper metric grapher subclass
  6. Make a proper individual metric configuration subclass
  7. Make a proper formatter object
  8. Make a proper metric runner object
  9. Get rid of how MetricFu and MetricFu::Configuration reach into each other back and forth.
  10. Tease apart how Hotspots are calculated and compared.
    • metric, problem, score, weight, line, file, trend?, suggestion?, problem_type (file, class, method)
  11. Logic to enable or skip metrics
  12. Consider making a linter to validate metric plugins, like ActiveModel::Lint::Tests, so that external metric_fu plugins can easily test themselves (see the sketch below)
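
For item 12, a metric-plugin linter might look roughly like the sketch below. It is only a sketch: the module name and the :emit/:analyze/:to_h expectations are assumptions based on the generator interface described further down.

module MetricFu
  # Hypothetical lint mixin, in the spirit of ActiveModel::Lint::Tests.
  # A plugin's own test suite includes it and sets @metric in setup;
  # the assertions check the interface metric_fu would expect.
  module Lint
    def test_responds_to_emit
      assert @metric.respond_to?(:emit), "metric must implement :emit"
    end

    def test_responds_to_analyze
      assert @metric.respond_to?(:analyze), "metric must implement :analyze"
    end

    def test_responds_to_to_h
      assert @metric.respond_to?(:to_h), "metric must implement :to_h"
    end
  end
end

# Usage in an external plugin's tests (minitest shown):
#
#   class MyMetricTest < Minitest::Test
#     include MetricFu::Lint
#     def setup
#       @metric = MyMetric.new
#     end
#   end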

Enhancements

  1. Switch from rake stats to code_metrics
  2. Add Formatters to enable output customization
  3. Be able to run metric tools from metric_fu without shelling out
  4. Make it easier to whitelist metrics when running from the command line (CLI)
  5. Surface configurable options. (Right now you basically need to look in configuration.rb for symbols)
  • environment: rails? mri? jruby? ripper? cruise_control? osx?
  • logging: log_level, output destination
  • io: directories( code_dirs, base_dir, scratch_dir, output_dir, data_dir, roo_dir, template_dir)
  • reporting: template classes, selected template, link_prefix, syntax_highlighting, open report in browser?, graph engine
  • formatters: available and enabled
  • metrics: enabled metrics and config
  • any library extensions
  • error behavior
  6. Add a configurable Logger to all output streams, with a log_level, e.g. debug, info, warn, fatal
  7. Be able to specify folders to run against rather than just app and lib
  8. Be able to generate historical metrics for e.g. gem releases (tagged with the appropriate date). The file format should be sequential by date, e.g. yyyymmdd-sequencenumber-fingerprint.yml, where the fingerprint method can be configured. Possibly use git for-each-ref over tags
  9. Have an ErrorHandler with hooks, failure messages, and the ability to output debug info to paste into issues
    • Also have ExitCodes.
  10. Environment class that knows the ruby version, engine, ripper support, any errors, log level, operating system, and artifact/output directory (see the sketch after this list)
  11. Be able to set the run date on the metric reports so that we can generate historical metrics. see #8
  12. Identify trends, make suggestions?
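
The Environment object from item 10 could start out as small as the following sketch; the method names are illustrative, not an existing metric_fu API.

require "rbconfig"

module MetricFu
  # Sketch of an Environment object that answers the questions plugins
  # and the configuration keep asking ad hoc today.
  class Environment
    def ruby_version
      RUBY_VERSION
    end

    def engine
      defined?(RUBY_ENGINE) ? RUBY_ENGINE : "ruby"
    end

    def mri?
      engine == "ruby"
    end

    def jruby?
      engine == "jruby"
    end

    def supports_ripper?
      require "ripper"
      true
    rescue LoadError
      false
    end

    def rails?
      File.exist?("config/environment.rb")
    end

    def os
      RbConfig::CONFIG["host_os"]
    end
  end
end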

Analysis

  1. Make the Location and LineNumber classes work
  2. Clarify hotspot weighting (one possible reading is sketched below)
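
On item 2, one possible reading of 'weighting' is shown below; this is an assumption about how the math could work, not a description of the current hotspot code.

# Each metric contributes a score for a file/class/method; a configurable
# weight per metric scales it, and the hotspot score is the weighted sum.
WEIGHTS = { reek: 1.0, flog: 0.5, churn: 2.0 }

def hotspot_score(scores_by_metric)
  scores_by_metric.sum { |metric, score| WEIGHTS.fetch(metric, 1.0) * score }
end

hotspot_score(reek: 3, flog: 40.2, churn: 5) # => 33.1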

Testing

  1. Test against a dummy rails app.
  2. Remove / Modify Devver code from the generators/hotspots_spec and base/hotspot_analyzer_spec
  3. Add (pending) tests
  4. Remove useless tests

Reporting

  1. Make the graphs prettier. See turbulence.
  2. Make the hotspots page prettier
  3. For HTML pages
  • use pjax to make it faster
  • add more links between reports to make exploration easier

Documentation

  1. Understand and explain how each metric can be used, purpose of metric. e.g. complexity, duplication, smells, coverage, security, style, documentation
  2. Use YARD docs, remove outdated comments (example below)
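
For the YARD docs, the target format for a method like Configuration#configure_metric (listed under Important classes below) would be along these lines; the wording is illustrative.

# Configures options for a single metric.
#
# @param metric_name [Symbol] the metric to configure, e.g. :flog
# @param options [Hash] options passed through to the underlying tool
# @return [void]
def configure_metric(metric_name, options = {})
  # ...
end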

The pieces of metric_fu

For each metric

  • Configuration
  • Runner
  • Formatter
  • Analyzer
  • Grapher
  • Reporter (uses a template)

Code parser / analyzer / s-expression processor to get metric location and line number
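
As a starting point for that parser, Ripper from the standard library can already map a line number back to the nearest enclosing method definition. A minimal sketch (the class and method names here are illustrative, not the planned parser.rb API):

require "ripper"

# Walk Ripper's s-expression collecting method definitions and their
# starting line numbers, then answer "which method was defined most
# recently at or before this line?" -- a rough stand-in for "the method
# containing this line". Only plain `def` nodes are handled, for brevity.
class LineToMethod
  def initialize(source)
    @defs = [] # [line, method_name] pairs
    walk(Ripper.sexp(source))
  end

  def method_at(line)
    candidates = @defs.select { |def_line, _| def_line <= line }
    candidates.empty? ? nil : candidates.max_by(&:first).last
  end

  private

  def walk(node)
    return unless node.is_a?(Array)
    if node[0] == :def && node[1].is_a?(Array) && node[1][0] == :@ident
      name, (line, _col) = node[1][1], node[1][2]
      @defs << [line, name]
    end
    node.each { |child| walk(child) }
  end
end

source = <<-RUBY
class Foo
  def bar
    1 + 1
  end
end
RUBY
LineToMethod.new(source).method_at(3) # => "bar"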

(Desired) Execution flow:

  1. At initialization, load ~/.metrics and/or a local .metrics file; see how Pry does it

  2. Configure which metrics to run and their settings (see #91), e.g.

module MetricFu
  def self.configure
    # metric_configs is the list of per-metric Config objects registered below
    metric_configs.each do |metric_config|
      metric_config.configure # excludes mri, rcov, etc.
    end
  end
end
MetricFu.configure

Each metric configures itself e.g.

module MetricFu
  class Configuration
    attr_accessor :configs
    def add_config(config)
      self.configs = (Array(configs) << config).uniq
    end
    class Config
      # Encapsulates the configuration options for each metric
      # and configures MetricFu for that metric
      # Usage:
      #   metric :flog
      #   metric_options {}
      #   graph :flog
      #   graph_engine :bluff
      #   graph_engine_options :bluff
      # Add config instance to the MetricFu::Configuration instance
      def initialize
        MetricFu::Configuration.run do |config|
          config.add_config(self)
        end
      end
      # Configures the metric; can be modified in subclasses which call super
      def configure
        do_stuff unless skip_metric # do_stuff is a placeholder for the metric-specific setup
      end
      def skip_metric; false; end
    end
  end
end
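
A concrete metric would then subclass Config, roughly like the sketch below. FlogConfig, its options, and the skip condition are all illustrative, not the existing flog configuration.

module MetricFu
  class Configuration
    # Hypothetical per-metric subclass of the Config sketch above.
    class FlogConfig < Config
      def configure
        return if skip_metric
        @metric         = :flog
        @metric_options = { dirs_to_flog: %w[app lib] }
        @graph          = :flog
        @graph_engine   = :bluff
      end

      # Example of a metric opting out: skip flog where no Ruby parser
      # is available.
      def skip_metric
        require "ripper"
        false
      rescue LoadError
        true
      end
    end
  end
end

# Instantiating is enough to register, since Config#initialize adds
# the instance to the configuration:
MetricFu::Configuration::FlogConfig.new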

To consider: Pry distinguishes between activated plugins and enabled plugins. When a plugin is activated, its libraries are required. Whether it is used is determined by whether it is enabled.

  3. Run each metric (hotspots always runs last; it requires the input of the other metrics)
  • normalize/format its output, and cache it. e.g.
    • class methods: MetricFu::bar, MetricFu.bar
    • instance methods MetricFu#bar, metric_fu.bar
    • class constants: MetricFu::Bar
  • Analyze the output and cache the results
  4. Prepare a report for each metric, including a graph if so configured. This uses historical runs of metric_fu

  5. Optionally open a browser window, see launchy
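
Opening the report with launchy is a one-liner; the output path below is an assumption (metric_fu has historically written its HTML report under tmp/metric_fu/output).

require "launchy"

report = File.expand_path("tmp/metric_fu/output/index.html")
Launchy.open("file://#{report}") if File.exist?(report)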

Important classes in MetricFu

  • MetricFu::Generator (see the sketch after this list)
    • :emit should take care of running the metric tool and gathering its output.
    • :analyze should take care of manipulating the output from :emit and making it possible to store it in a programmatic way.
    • :to_h should provide a hash representation of the output from analyze ready to be serialized into yaml at some point.
  • MetricFu::Configuration
    • :add_metric(metric_name_as_symbol)
    • :add_graph(metric_name_as_symbol)
    • :configure_metric(metric_name_as_symbol, options_hash)
  • MetricFu::Result
    • :add
  • MetricFu::Reporter
    • :notify sends messages to configured formatters
  • MetricFu::Template
    • :result
  • MetricFu::Grapher
  • MetricFu::Graph
    • :add(graph_type, graph_engine)
    • :generate
  • MetricFu::Hotspot
    • :generate_records
  • MetricFu::HotspotAnalyzer
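
To make the :emit / :analyze / :to_h contract concrete, a generator for a made-up "line count" metric could look like this sketch; the class name and tool invocation are placeholders, not a planned metric.

module MetricFu
  # Illustrative Generator subclass showing the contract described above.
  class LineCount < Generator
    # :emit runs the underlying tool (here, plain `wc`) and keeps its raw output.
    def emit
      files = Dir["lib/**/*.rb"]
      @output = `wc -l #{files.join(' ')}`
    end

    # :analyze turns the raw output into plain Ruby data.
    def analyze
      @counts = @output.lines.reject { |line| line.include?("total") }.map do |line|
        count, path = line.split
        { file: path, lines: count.to_i }
      end
    end

    # :to_h exposes the analyzed data, ready to be serialized to YAML.
    def to_h
      { line_count: @counts }
    end
  end
end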

Prospective new folder structure

Also see the config PR and grapher PR

 lib/
   metric_fu.rb
   metric_fu_requires.rb
   metric_fu/
     configuration.rb # contains MetricConfig,
       # which contains all the configuration info for each metric,
       # is in a collection of configured metrics in the Configuration,
       # has a :configure method that sets the metric_name, metric_conf, if graphed, when skipped/activated
       # e.g. MetricFu::ChurnConfig
     metric.rb # was generator; the interface for running, analyzing, formatting, and outputting metrics
     metrics/
       MetricFu::Churn < MetricFu::Metric
         churn/
           config.rb # MetricFu::ChurnConfig
         hotspots/
           config.rb
           analyzer/
     runner.rb # the generator :emit base used by each metric
     formatter.rb
     formatter/
       html.rb
       html/
         templates/
         graphs/
           graph/grapher.rb
       yaml.rb
     reporter.rb # notifies configured formatters about various events
     environment.rb # contains info about the ruby environment, e.g. rails? mri? supports_ripper? 1.9.2?
     file_system_operations.rb # ensures directories exist, knows code_dirs
     logger.rb
     cli.rb
       cli/etc.
     parser.rb # has helpers that can get the class and method name for a line number
     run.rb / controller.rb # coordinates configuring, running, formatting, and outputting metrics
 tasks/
   metric_fu.rake
 .metrics

Part of doing this is keeping the Configuration object from creating methods and instance variables everywhere. Each metric can configure itself and add itself to metric_fu (which allows for plugins).