Roadmap
Let's see if we can organize the various TODO items into a roadmap, to help sort out what needs to be done from what could be done and what should be done.
There are fairly regular requests to add more metrics. Right now it appears better to continue focusing on cleaning up the metric_fu code, fixing it, making it 'pluggable', and limiting each file's and class's knowledge of the others' internals. Actually, it would probably behoove us to remove more metrics.
Which metrics to include is a big question about what purpose MetricFu serves, and what its 'core functionality' should be. Should we replace churn with turbulence? Should we replace cane with rubocop? Should we add brakeman? Should metrics execute code, as simplecov or mutant do?
We also need to improve guidance on how to understand the metrics we generate.
These changes will almost necessarily require clarifying the internal program interfaces:
- Each metric configures itself and adds itself to a list of configured plugins #91
- Rearrange classes and files, and narrow each class's public API
- Plugins can use an Environment object to check for ruby version, engine, capabilities, os, etc.
- Reduce class responsibilities. Commenting the top of each class with its responsibilities should help
- Make a proper metric grapher subclass
- Make a proper individual metric configuration subclass
- Make a proper formatter object
- Make a proper metric runner object
- Get rid of how MetricFu and MetricFu::Configuration reach into each other back and forth.
- Tease apart how Hotspots are calculated and compared.
- metric, problem, score, weight, line, file, trend? suggestion?, problem_type (file, class, method)
- Logic to enable or skip metrics
- Consider making a linter to validate metric plugins, like ActiveModel::Lint::Tests so that external metric_fu plugins can easily test themselves.
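By analogy with ActiveModel::Lint::Tests, such a linter might be a mixin of assertions that a plugin's own test suite can include. A hypothetical sketch: the `MetricFu::Lint` module and the methods it checks (`name`, `run`, `to_h`) are assumptions for illustration, not the real API.

```ruby
# Hypothetical sketch of a lint mixin for metric_fu plugins, loosely
# modeled on ActiveModel::Lint::Tests. The MetricFu::Lint module and
# the methods it checks are assumptions, not the actual API.
module MetricFu
  module Lint
    # Raises unless the plugin responds to the minimal metric interface.
    def assert_metric_plugin(plugin)
      [:name, :run, :to_h].each do |method|
        unless plugin.respond_to?(method)
          raise "#{plugin.class} must respond to ##{method}"
        end
      end
      true
    end
  end
end

# A well-behaved plugin passes the lint check:
class MyMetric
  def name; :my_metric; end
  def run;  {};         end
  def to_h; {};         end
end

include MetricFu::Lint
assert_metric_plugin(MyMetric.new) # => true
```

An external plugin could then prove its conformance by including the mixin in its own test suite.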
- Switch from `rake stats` to code_metrics
- Add Formatters to enable output customization
- Be able to run metric tools from metric_fu without shelling out
- Make it easier to whitelist metrics when running from the command line (CLI)
- Surface configurable options. (Right now you basically need to look in configuration.rb for symbols)
- environment: rails? mri? jruby? ripper? cruise_control? osx?
- logging: log_level, output destination
- io: directories( code_dirs, base_dir, scratch_dir, output_dir, data_dir, roo_dir, template_dir)
- reporting: template classes, selected template, link_prefix, syntax_highlighting, open report in browser?, graph engine
- formatters: available and enabled
- metrics: enabled metrics and config
- any library extensions
- error behavior
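Once surfaced, those options might read like the following `.metrics` file. This is only an illustration: the option names below mirror the categories above, and only `MetricFu::Configuration.run` and `configure_metric` appear elsewhere in this document; everything else is an assumption.

```ruby
# Illustrative .metrics file; option names are assumptions drawn
# from the categories above, not metric_fu's actual configuration API.
MetricFu::Configuration.run do |config|
  config.log_level       = :warn
  config.output_dir      = "tmp/metric_fu/output"
  config.open_in_browser = false
  config.formatters      = [:html, :yaml]
  config.configure_metric(:flog, continue: true)
end
```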
- Add a configurable Logger to all output streams, with a `log_level` (e.g. debug, info, warn, fatal)
- Be able to specify folders to run against rather than just app and lib
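The configurable-logger idea above can be sketched with Ruby's stdlib Logger; the `MetricFu.logger` accessor here is an assumption for illustration.

```ruby
require "logger"

# Minimal sketch of a configurable logger for all output streams.
# The MetricFu.logger accessor is an assumption, not the real API.
module MetricFu
  def self.logger
    @logger ||= Logger.new($stdout).tap do |log|
      # log_level would come from configuration: DEBUG, INFO, WARN, FATAL
      log.level = Logger::WARN
    end
  end
end

MetricFu.logger.debug("suppressed at WARN level")
MetricFu.logger.warn("this message is shown")
```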
- Be able to generate historical metrics, e.g. for gem releases (tagged with the appropriate date). The file format should be sequential by date, e.g. `yyyymmdd-sequencenumber-fingerprint.yml`, where the fingerprint method can be configured. Possibly use `git for-each-ref` over tags
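The proposed file naming could be sketched as below. The fingerprint method is meant to be configurable, so the SHA1-of-data choice here is only an assumption, as is the helper name.

```ruby
require "digest"

# Sketch of the proposed yyyymmdd-sequencenumber-fingerprint.yml naming
# scheme. Using SHA1 of the serialized data as the fingerprint is an
# assumption; the roadmap says the fingerprint method is configurable.
def historical_filename(date, sequence, data)
  fingerprint = Digest::SHA1.hexdigest(data)[0, 8]
  format("%s-%03d-%s.yml", date.strftime("%Y%m%d"), sequence, fingerprint)
end

historical_filename(Time.utc(2013, 5, 1), 1, "metrics dump")
# e.g. "20130501-001-<8 hex chars>.yml"
```

Sorting these names lexicographically then orders the reports chronologically, which is what the history graphs need.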
- Have an ErrorHandler with hooks, failure messages, and the ability to output debug info to paste into issues
- Also have ExitCodes.
- Environment class that knows the ruby version, engine, ripper support, any errors, log level, operating system, artifact / output directory
- Be able to set run date on the metric reports so that we can generate historical metrics. see #8
- Identify trends, make suggestions?
- Make the Location and LineNumber classes work
- Clarify hotspot weighting
- Test against a dummy rails app.
- Remove / Modify Devver code from generators/hotspots_spec and base/hotspot_analyzer_spec
- Add (pending) tests
- Remove useless tests
- Make the graphs prettier. See turbulence.
- Make the hotspots page prettier
- For HTML pages
- use pjax to make it faster
- add more links between reports to make exploration easier
- Understand and explain how each metric can be used, purpose of metric. e.g. complexity, duplication, smells, coverage, security, style, documentation
- Use yard doc, remove outdated comments
For each metric:
- Configuration
- Runner
- Formatter
- Analyzer
- Grapher
- Reporter (uses a template)
- Code parser / analyzer / s-expression processor to get the metric's location and line number
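As a sketch of what such a parser could do, Ripper's s-expressions can already map method definitions to line numbers. This only handles the simple `def` node shape, `[:def, [:@ident, name, [line, col]], ...]`; a real parser would cover classes, modules, singleton methods, and so on.

```ruby
require "ripper"

# Walk Ripper's s-expression tree and record the starting line of each
# method definition. Only plain :def nodes are handled in this sketch.
def method_lines(source)
  methods = {}
  walk = lambda do |node|
    if node.is_a?(Array)
      if node[0] == :def && node[1].is_a?(Array) && node[1][0] == :@ident
        name, (line, _col) = node[1][1], node[1][2]
        methods[name] = line
      end
      node.each { |child| walk.call(child) }
    end
  end
  walk.call(Ripper.sexp(source))
  methods
end

src = <<~RUBY
  class Foo
    def bar
      42
    end

    def baz; end
  end
RUBY

p method_lines(src) # => {"bar"=>2, "baz"=>6}
```

The inverse lookup (given a line number, find the enclosing class and method) is what the hotspots report needs, and would walk the same tree while tracking the enclosing scope.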
- At initialization, load `~/.metrics` and/or a local `.metrics` (see how pry does it)
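Loading those files might look like the following sketch, in the spirit of how pry loads `~/.pryrc`; the `load_user_config` helper and its signature are assumptions.

```ruby
# Sketch of loading a global and then a project-local .metrics file at
# initialization. The load_user_config helper name is an assumption.
def load_user_config(dirs = [Dir.home, Dir.pwd])
  dirs.each do |dir|
    path = File.join(dir, ".metrics")
    # Project-local settings load last, so they win over global ones.
    load path if File.exist?(path)
  end
end
```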
- Configure which metrics to run and their settings. See #91, e.g.
```ruby
module MetricFu
  def self.configure
    metric_configs.each do |metric_config|
      metric_config.configure # excludes mri, rcov, etc.
    end
  end
end

MetricFu.configure
```
Each metric configures itself, e.g.

```ruby
module MetricFu
  class Configuration
    attr_accessor :configs

    def add_config(config)
      self.configs = (Array(configs) << config).uniq
    end

    class Config
      # Encapsulates the configuration options for each metric
      # and configures MetricFu for that metric.
      # Usage:
      #   metric :flog
      #   metric_options {}
      #   graph :flog
      #   graph_engine :bluff
      #   graph_engine_options :bluff

      # Adds this config instance to the MetricFu::Configuration instance
      def initialize
        MetricFu::Configuration.run do |config|
          config.add_config(self)
        end
      end

      # Configures the metric; can be modified in subclasses which call super
      def configure
        do_stuff unless skip_metric
      end

      def skip_metric; false; end
    end
  end
end
```
To consider: Pry distinguishes between activated plugins and enabled plugins. When a plugin is activated, its libraries are required. Whether it is used is determined by whether it is enabled.
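That distinction could be sketched like so; every class and method name here is an assumption for illustration, not Pry's or metric_fu's API.

```ruby
# Sketch of the activated/enabled distinction borrowed from Pry:
# activating a plugin requires its libraries; a separate "enabled"
# flag decides whether it actually runs. All names are assumptions.
class Plugin
  attr_reader :name

  def initialize(name, requires: [])
    @name = name
    @requires = requires # library names to require on activation
    @activated = false
    @enabled = true
  end

  # Loads the plugin's libraries and marks it activated.
  def activate!
    @requires.each { |lib| require lib }
    @activated = true
  end

  def activated?; @activated; end
  def enabled?;   @enabled;   end
  def disable!;   @enabled = false; end
end

flog = Plugin.new(:flog)
flog.activate!
flog.disable!
flog.activated? # => true  (libraries are loaded)
flog.enabled?   # => false (so it will not run)
```

Keeping the two flags separate means a user can turn a metric off without metric_fu having to unload anything.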
- Run each metric (hotspots always runs last. It requires the input of the other metrics)
- normalize/format its output, and cache it. e.g.
- class methods: MetricFu::bar, MetricFu.bar
- instance methods MetricFu#bar, metric_fu.bar
- class constants: MetricFu::Bar
- Analyze the output and cache the results
- Prepare a report for each metric, including a graph if so configured. This uses historical runs of metric_fu
- Optionally open a browser window (see launchy)
- MetricFu::Generator
- :emit should take care of running the metric tool and gathering its output.
- :analyze should take care of manipulating the output from :emit and making it possible to store it in a programmatic way.
- :to_h should provide a hash representation of the output from analyze ready to be serialized into yaml at some point.
- MetricFu::Configuration
- :add_metric(metric_name_as_symbol)
- :add_graph(metric_name_as_symbol)
- :configure_metric(metric_name_as_symbol, options_hash)
- MetricFu::Result
- :add
- MetricFu::Reporter
- :notify sends messages to configured formatters
- MetricFu::Template
- :result
- MetricFu::Grapher
- MetricFu::Graph
- :add(graph_type, graph_engine)
- :generate
- MetricFu::Hotspot
- :generate_records
- MetricFu::HotspotAnalyzer
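The `:emit` / `:analyze` / `:to_h` flow described above might look like the following skeleton for a single metric. The `FakeChurn` class and its canned output are fabrications for illustration; only the three method names come from the interface sketch above.

```ruby
require "yaml"

# Skeleton of the Generator interface: :emit runs the metric tool and
# gathers its output, :analyze massages that output into structured data,
# and :to_h yields a hash ready to be serialized into YAML.
class Generator
  def generate_report
    emit
    analyze
    to_h
  end
end

# A fake metric: real subclasses would shell out to (or invoke) a tool.
class FakeChurn < Generator
  def emit
    # Canned output standing in for the churn tool's "file,times_changed" lines.
    @output = "lib/metric_fu.rb,5\nlib/metric_fu/churn.rb,3"
  end

  def analyze
    @changes = @output.lines.map do |line|
      file, count = line.strip.split(",")
      { file: file, times_changed: count.to_i }
    end
  end

  def to_h
    { churn: { changes: @changes } }
  end
end

report = FakeChurn.new.generate_report
puts report.to_yaml
```

Because every metric returns a plain hash, the formatters and the hotspot analyzer can consume them uniformly.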
Also see the config PR and grapher PR
```
lib/
  metric_fu.rb
  metric_fu_requires.rb
  metric_fu/
    configuration.rb  # contains MetricConfig,
                      # which holds all the configuration info for each metric,
                      # lives in a collection of configured metrics in the Configuration,
                      # and has a :configure method that sets the metric_name,
                      # metric_conf, whether graphed, and when skipped/activated
                      # e.g. MetricFu::ChurnConfig
    metric.rb         # was generator; the interface for running, analyzing,
                      # formatting, and outputting metrics
    metrics/
      churn/          # MetricFu::Churn < MetricFu::Metric
        config.rb     # MetricFu::ChurnConfig
      hotspots/
        config.rb
        analyzer/
    runner.rb         # === generator :emit base used by each metric
    formatter.rb
    formatter/
      html.rb
      html/
        templates/
          graphs/
      graph/grapher.rb
      yaml.rb
    reporter.rb       # notifies configured formatters about various events
    environment.rb    # contains info about the ruby environment,
                      # e.g. rails? mri? supports_ripper? 192?
    file_system_operations.rb  # ensures directories exist, knows code_dirs
    logger.rb
    cli.rb
    cli/etc.
    parser.rb         # has helpers that can get the class and method name
                      # for a line number
    run.rb / controller.rb     # coordinates configuring, running, formatting,
                               # and outputting metrics
  tasks/
    metric_fu.rake
.metrics
```
Part of doing this is ensuring the Configuration object does not create methods and instance variables everywhere. Each metric can configure itself and add itself to metric_fu (which allows for plugins).