diff --git a/.travis.yml b/.travis.yml
index 79036ed06..2afa73f55 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -24,7 +24,7 @@ matrix:
rvm: 2.5.1
script:
- bundle install --with documentation
- - bundle exec rake spec:docs_uptodate
+ - bundle exec rake yard:master:uptodate
- name: MRI 2.4.4
rvm: 2.4.4
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a1afafcb8..13d02b730 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,7 +2,7 @@
concurrent-ruby:
-* [Promises](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Promises.html)
+* [Promises](http://ruby-concurrency.github.io/concurrent-ruby/1.1.0/Concurrent/Promises.html)
are moved from `concurrent-ruby-edge` to `concurrent-ruby`
* Add support for TruffleRuby
* (#734) Fix Array/Hash/Set construction broken on TruffleRuby
diff --git a/LICENSE.txt b/LICENSE.txt
deleted file mode 100644
index 47474f192..000000000
--- a/LICENSE.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-Copyright (c) Jerry D'Antonio -- released under the MIT license.
-
-http://www.opensource.org/licenses/mit-license.php
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in
-all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE.
diff --git a/README.md b/README.md
index aef815544..66a494f19 100644
--- a/README.md
+++ b/README.md
@@ -93,7 +93,7 @@ We also have a [IRC (gitter)](https://gitter.im/ruby-concurrency/concurrent-ruby
Like a Future scheduled for a specific future time.
* [TimerTask](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/TimerTask.html):
A Thread that periodically wakes up to perform work at regular intervals.
-* [Promises Framework](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html):
+* [Promises](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html):
Unified implementation of futures and promises which combines features of previous `Future`,
`Promise`, `IVar`, `Event`, `dataflow`, `Delay`, and (partially) `TimerTask` into a single
framework. It extensively uses the new synchronization layer to make all the features
@@ -186,21 +186,21 @@ Deprecated features are still available and bugs are being fixed, but new featur
* ~~[Future](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Future.html):
An asynchronous operation that produces a value.~~ Replaced by
- [Promises Framework](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
- * ~~[Dataflow](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent.html#dataflow-class_method):
+ [Promises](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
+ * ~~[.dataflow](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent.html#dataflow-class_method):
Built on Futures, Dataflow allows you to create a task that will be scheduled when all of
its data dependencies are available.~~ Replaced by
- [Promises Framework](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
+ [Promises](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
* ~~[Promise](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promise.html): Similar
to Futures, with more features.~~ Replaced by
- [Promises Framework](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
+ [Promises](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
* ~~[Delay](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Delay.html) Lazy evaluation
of a block yielding an immutable result. Based on Clojure's
[delay](https://clojuredocs.org/clojure.core/delay).~~ Replaced by
- [Promises Framework](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
+ [Promises](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
* ~~[IVar](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/IVar.html) Similar to a
"future" but can be manually assigned once, after which it becomes immutable.~~ Replaced by
- [Promises Framework](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
+ [Promises](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html).
### Edge Features
diff --git a/Rakefile b/Rakefile
index 627023731..3385c3e9e 100644
--- a/Rakefile
+++ b/Rakefile
@@ -29,10 +29,12 @@ end
require 'rake_compiler_dock'
namespace :repackage do
- desc '- with Windows fat distributions'
+ desc '* with Windows fat distributions'
task :all do
Dir.chdir(__dir__) do
sh 'bundle package'
+ # needed only if the jar is built outside of docker
+ Rake::Task['lib/concurrent/concurrent_ruby.jar'].invoke
RakeCompilerDock.exec 'support/cross_building.sh'
end
end
@@ -60,19 +62,19 @@ begin
--tag ~notravis ]
namespace :spec do
- desc '- Configured for ci'
+ desc '* Configured for ci'
RSpec::Core::RakeTask.new(:ci) do |t|
t.rspec_opts = [*options].join(' ')
end
- desc '- test packaged and installed gems instead of local files'
- task :installed => :repackage do
+ desc '* test packaged and installed gems instead of local files'
+ task :installed do
Dir.chdir(__dir__) do
sh 'gem install pkg/concurrent-ruby-1.1.0.pre1.gem'
sh 'gem install pkg/concurrent-ruby-ext-1.1.0.pre1.gem' if Concurrent.on_cruby?
sh 'gem install pkg/concurrent-ruby-edge-0.4.0.pre1.gem'
ENV['NO_PATH'] = 'true'
- sh 'bundle install'
+ sh 'bundle update'
sh 'bundle exec rake spec:ci'
end
end
@@ -86,6 +88,8 @@ rescue LoadError => e
puts 'RSpec is not installed, skipping test task definitions: ' + e.message
end
+current_yard_version_name = Concurrent::VERSION.split('.')[0..2].join('.')
+
begin
require 'yard'
require 'md_ruby_eval'
@@ -99,25 +103,45 @@ begin
'--title', 'Concurrent Ruby',
'--template', 'default',
'--template-path', 'yard-template',
- '--default-return', 'undocumented',]
+ '--default-return', 'undocumented']
desc 'Generate YARD Documentation (signpost, master)'
task :yard => ['yard:signpost', 'yard:master']
namespace :yard do
- desc '- eval markdown files'
+ desc '* eval markdown files'
task :eval_md do
Dir.chdir File.join(__dir__, 'docs-source') do
sh 'bundle exec md-ruby-eval --auto'
end
end
+ task :update_readme do
+ Dir.chdir __dir__ do
+ content = File.read(File.join('README.md')).
+ gsub(/\[([\w ]+)\]\(http:\/\/ruby-concurrency\.github\.io\/concurrent-ruby\/master\/.*\)/) do |_|
+ case $1
+ when 'LockFreeLinkedSet'
+ "{Concurrent::Edge::#{$1} #{$1}}"
+ when '.dataflow'
+ '{Concurrent.dataflow Concurrent.dataflow}'
+ when 'thread pool'
+ '{file:thread_pools.md thread pool}'
+ else
+ "{Concurrent::#{$1} #{$1}}"
+ end
+ end
+ File.write 'tmp/README.md', content
+ end
+ end
+
define_yard_task = -> name do
- desc "- of #{name} into subdir #{name}"
+ desc "* of #{name} into subdir #{name}"
YARD::Rake::YardocTask.new(name) do |yard|
yard.options.push(
'--output-dir', "docs/#{name}",
+ '--main', 'tmp/README.md',
*common_yard_options)
yard.files = ['./lib/**/*.rb',
'./lib-edge/**/*.rb',
@@ -125,17 +149,16 @@ begin
'-',
'docs-source/thread_pools.md',
'docs-source/promises.out.md',
- 'README.md',
- 'LICENSE.txt',
+ 'LICENSE.md',
'CHANGELOG.md']
end
- Rake::Task[name].prerequisites.push 'yard:eval_md'
+ Rake::Task[name].prerequisites.push 'yard:eval_md', 'yard:update_readme'
end
- define_yard_task.call(Concurrent::VERSION.split('.')[0..2].join('.'))
- define_yard_task.call('master')
+ define_yard_task.call current_yard_version_name
+ define_yard_task.call 'master'
- desc "- signpost for versions"
+ desc "* signpost for versions"
YARD::Rake::YardocTask.new(:signpost) do |yard|
yard.options.push(
'--output-dir', 'docs',
@@ -143,35 +166,59 @@ begin
*common_yard_options)
yard.files = ['no-lib']
end
- end
- namespace :spec do
- desc '- ensure that generated documentation is matching the source code'
- task :docs_uptodate do
- Dir.chdir(__dir__) do
- begin
- FileUtils.cp_r 'docs', 'docs-copy', verbose: true
- Rake::Task[:yard].invoke
- sh 'diff -r docs/ docs-copy/'
- ensure
- FileUtils.rm_rf 'docs-copy', verbose: true
+ define_uptodate_task = -> name do
+ namespace name do
+ desc "** ensure that #{name} generated documentation is matching the source code"
+ task :uptodate do
+ Dir.chdir(__dir__) do
+ begin
+ FileUtils.cp_r 'docs', 'docs-copy', verbose: true
+ Rake::Task["yard:#{name}"].invoke
+ sh 'diff -r docs/ docs-copy/'
+ ensure
+ FileUtils.rm_rf 'docs-copy', verbose: true
+ end
+ end
end
end
end
+
+ define_uptodate_task.call current_yard_version_name
+ define_uptodate_task.call 'master'
end
rescue LoadError => e
puts 'YARD is not installed, skipping documentation task definitions: ' + e.message
end
+desc 'build, test, and publish the gem'
+task :release => ['release:checks', 'release:build', 'release:test', 'release:publish']
+
namespace :release do
# Depends on environment of @pitr-ch
- mri_version = '2.4.3'
+ mri_version = '2.5.1'
jruby_version = 'jruby-9.1.17.0'
+ task :checks => "yard:#{current_yard_version_name}:uptodate" do
+ Dir.chdir(__dir__) do
+ begin
+ STDOUT.puts "Is this a final release build? (Do git checks?) (y/n)"
+ input = STDIN.gets.strip.downcase
+ end until %w(y n).include?(input)
+ if input == 'y'
+ sh 'test -z "$(git status --porcelain)"'
+ sh 'git fetch'
+ sh 'test $(git show-ref --verify --hash refs/heads/master) = $(git show-ref --verify --hash refs/remotes/github/master)'
+ end
+ end
+ end
+
+ desc '* build all *.gem files necessary for release'
task :build => 'repackage:all'
+ desc '* test actual installed gems instead of cloned repository on MRI and JRuby'
task :test do
Dir.chdir(__dir__) do
old = ENV['RBENV_VERSION']
@@ -190,27 +237,43 @@ namespace :release do
end
end
- task :push do
- Dir.chdir(__dir__) do
- sh 'git fetch'
- sh 'test $(git show-ref --verify --hash refs/heads/master) = $(git show-ref --verify --hash refs/remotes/github/master)'
-
- sh "git tag v#{Concurrent::VERSION}"
- sh "git tag edge-v#{Concurrent::EDGE_VERSION}"
- sh "git push github v#{Concurrent::VERSION} edge-v#{Concurrent::EDGE_VERSION}"
-
- sh "gem push pkg/concurrent-ruby-#{Concurrent::VERSION}.gem"
- sh "gem push pkg/concurrent-ruby-edge-#{Concurrent::EDGE_VERSION}.gem"
- sh "gem push pkg/concurrent-ruby-ext-#{Concurrent::VERSION}.gem"
- sh "gem push pkg/concurrent-ruby-ext-#{Concurrent::VERSION}-x64-mingw32.gem"
- sh "gem push pkg/concurrent-ruby-ext-#{Concurrent::VERSION}-x86-mingw32.gem"
+ desc '* do all nested steps'
+ task :publish => ['publish:ask', 'publish:tag', 'publish:rubygems', 'publish:post_steps']
+
+ namespace :publish do
+ task :ask do
+ begin
+ STDOUT.puts "Do you want to publish? (y/n)"
+ input = STDIN.gets.strip.downcase
+ end until %w(y n).include?(input)
+ raise 'reconsidered' if input == 'n'
end
- end
- task :notify do
- puts 'Manually: create a release on GitHub with relevant changelog part'
- puts 'Manually: send email same as release with relevant changelog part'
- puts 'Manually: update documentation'
- puts ' $ bundle exec rake yard:push'
+ desc '** tag HEAD with current version and push to github'
+ task :tag do
+ Dir.chdir(__dir__) do
+ sh "git tag v#{Concurrent::VERSION}"
+ sh "git tag edge-v#{Concurrent::EDGE_VERSION}"
+ sh "git push github v#{Concurrent::VERSION} edge-v#{Concurrent::EDGE_VERSION}"
+ end
+ end
+
+ desc '** push all *.gem files to rubygems'
+ task :rubygems do
+ Dir.chdir(__dir__) do
+ sh "gem push pkg/concurrent-ruby-#{Concurrent::VERSION}.gem"
+ sh "gem push pkg/concurrent-ruby-edge-#{Concurrent::EDGE_VERSION}.gem"
+ sh "gem push pkg/concurrent-ruby-ext-#{Concurrent::VERSION}.gem"
+ sh "gem push pkg/concurrent-ruby-ext-#{Concurrent::VERSION}-x64-mingw32.gem"
+ sh "gem push pkg/concurrent-ruby-ext-#{Concurrent::VERSION}-x86-mingw32.gem"
+ end
+ end
+
+ desc '** print post release steps'
+ task :post_steps do
+ puts 'Manually: create a release on GitHub with relevant changelog part'
+ puts 'Manually: send email same as release with relevant changelog part'
+ puts 'Manually: tweet'
+ end
end
end
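The `yard:update_readme` task in the Rakefile hunk above rewrites absolute links to the hosted master docs into YARD `{Reference}` syntax before generating documentation. A minimal standalone sketch of that substitution follows; the sample `content` string is invented for illustration, and the character class is widened to `[\w .]+` (the task's own regex uses `[\w ]+`) so that the `.dataflow` link label also matches here:

```ruby
# Sketch of the link-rewriting step: markdown links pointing at the hosted
# master docs become YARD {Reference} syntax, which resolves locally in the
# generated documentation. Sample input is hypothetical.
content = "* [TimerTask](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/TimerTask.html): periodic work\n" \
          "* [.dataflow](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent.html#dataflow-class_method): builds on futures"

rewritten = content.gsub(%r{\[([\w .]+)\]\(http://ruby-concurrency\.github\.io/concurrent-ruby/master/[^)]*\)}) do
  case $1
  when '.dataflow'
    # Method-style reference: no Concurrent:: namespace prefix
    '{Concurrent.dataflow Concurrent.dataflow}'
  else
    "{Concurrent::#{$1} #{$1}}"
  end
end

puts rewritten
# * {Concurrent::TimerTask TimerTask}: periodic work
# * {Concurrent.dataflow Concurrent.dataflow}: builds on futures
```

The special cases mirror the task's `case` branches; anything not matched by a special case falls through to the generic `Concurrent::` namespace form.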
diff --git a/docs-source/dataflow.md b/docs-source/dataflow.md
index 7ffa914dc..a34207ee3 100644
--- a/docs-source/dataflow.md
+++ b/docs-source/dataflow.md
@@ -1,4 +1,4 @@
-Dataflow allows you to create a task that will be scheduled when all of its data dependencies are available. Data dependencies are `Future` values. The dataflow task itself is also a `Future` value, so you can build up a graph of these tasks, each of which is run when all the data and other tasks it depends on are available or completed.
+Data dependencies are `Future` values. The dataflow task itself is also a `Future` value, so you can build up a graph of these tasks, each of which is run when all the data and other tasks it depends on are available or completed.
Our syntax is somewhat related to that of Akka's `flow` and Habanero Java's `DataDrivenFuture`. However unlike Akka we don't schedule a task at all until it is ready to run, and unlike Habanero Java we pass the data values into the task instead of dereferencing them again in the task.
diff --git a/docs-source/signpost.md b/docs-source/signpost.md
index 3cc9282e3..7748b94a4 100644
--- a/docs-source/signpost.md
+++ b/docs-source/signpost.md
@@ -3,4 +3,5 @@
Pick a version:
* [master](./master/index.html)
+* [1.1.0.pre1](./1.1.0/index.html)
* [1.0.5](./1.0.5/index.html)
diff --git a/docs-source/thread_pools.md b/docs-source/thread_pools.md
index f9cb74319..8f4ed20fa 100644
--- a/docs-source/thread_pools.md
+++ b/docs-source/thread_pools.md
@@ -2,7 +2,7 @@
A Thread Pool is an abstraction that you can give a unit of work to, and the work will be executed by one of possibly several threads in the pool. One motivation for using thread pools is the overhead of creating and destroying threads. Creating a pool of reusable worker threads then repeatedly re-using threads from the pool can have huge performance benefits for a long-running application like a service.
-`concurrent-ruby` also offers some higher level abstractions than thread pools. For many problems, you will be better served by using one of these -- if you are thinking of using a thread pool, we especially recommend you look at and understand [Future](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Future.html)s before deciding to use thread pools directly instead. Futures are implemented using thread pools, but offer a higher level abstraction.
+`concurrent-ruby` also offers some higher level abstractions than thread pools. For many problems, you will be better served by using one of these -- if you are thinking of using a thread pool, we especially recommend you look at and understand {Concurrent::Future}s before deciding to use thread pools directly instead. Futures are implemented using thread pools, but offer a higher level abstraction.
But there are some problems for which directly using a thread pool is an appropriate solution. Or, you may wish to make your own thread pool to run Futures on, to be separate or have different characteristics than the global thread pool that Futures run on by default.
@@ -10,7 +10,7 @@ Thread pools are considered 'executors' -- an object you can give a unit of work
## FixedThreadPool
-A [FixedThreadPool](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/FixedThreadPool.html) contains a fixed number of threads. When you give a unit of work to it, an available thread will be used to execute.
+A {Concurrent::FixedThreadPool} contains a fixed number of threads. When you give a unit of work to it, an available thread will be used to execute.
~~~ruby
pool = Concurrent::FixedThreadPool.new(5) # 5 threads
@@ -29,7 +29,7 @@ The `FixedThreadPool` is based on the semantics used in Java for [java.util.conc
## CachedThreadPool
-A [CachedThreadPool](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/CachedThreadPool.html) will create as many threads as necessary for work posted to it. If you post work to a `CachedThreadPool` when all its existing threads are busy, it will create a new thread to execute that work, and then keep that thread cached for future work. Cached threads are reclaimed (destroyed) after they are idle for a while.
+A {Concurrent::CachedThreadPool} will create as many threads as necessary for work posted to it. If you post work to a `CachedThreadPool` when all its existing threads are busy, it will create a new thread to execute that work, and then keep that thread cached for future work. Cached threads are reclaimed (destroyed) after they are idle for a while.
CachedThreadPools typically improve the performance of programs that execute many short-lived asynchronous tasks.
@@ -46,7 +46,7 @@ If you'd like to configure a maximum number of threads, you can use the more gen
## ThreadPoolExecutor
-A [ThreadPoolExecutor](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/ThreadPoolExecutor.html) is a general-purpose thread pool that can be configured to have various behaviors.
+A {Concurrent::ThreadPoolExecutor} is a general-purpose thread pool that can be configured to have various behaviors.
A `ThreadPoolExecutor` will automatically adjust the pool size according to the bounds set by `min-threads` and `max-threads`.
When a new task is submitted and fewer than `min-threads` threads are running, a new thread is created to handle the request, even if other worker threads are idle.
@@ -130,16 +130,16 @@ The `shutdown?` method will return true for a stopped pool, regardless of whethe
There are several other thread pools and executors in the `concurrent-ruby` library. See the API documentation for more information:
- * [CachedThreadPool](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/CachedThreadPool.html)
- * [FixedThreadPool](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/FixedThreadPool.html)
- * [ImmediateExecutor](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/ImmediateExecutor.html)
- * [PerThreadExecutor](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/PerThreadExecutor.html)
- * [SafeTaskExecutor](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/SafeTaskExecutor.html)
- * [SerializedExecution](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/SerializedExecution.html)
- * [SerializedExecutionDelegator](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/SerializedExecutionDelegator.html)
- * [SingleThreadExecutor](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/SingleThreadExecutor.html)
- * [ThreadPoolExecutor](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/ThreadPoolExecutor.html)
- * [TimerSet](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/TimerSet.html)
+ * {Concurrent::CachedThreadPool}
+ * {Concurrent::FixedThreadPool}
+ * {Concurrent::ImmediateExecutor}
+ * {Concurrent::SimpleExecutorService}
+ * {Concurrent::SafeTaskExecutor}
+ * {Concurrent::SerializedExecution}
+ * {Concurrent::SerializedExecutionDelegator}
+ * {Concurrent::SingleThreadExecutor}
+ * {Concurrent::ThreadPoolExecutor}
+ * {Concurrent::TimerSet}
## Global Thread Pools
diff --git a/docs/1.0.5/Atomic.html b/docs/1.0.5/Atomic.html
deleted file mode 100644
index d101f3553..000000000
--- a/docs/1.0.5/Atomic.html
+++ /dev/null
@@ -1,307 +0,0 @@
-
-
-
- This method is part of a private API.
- You should avoid using this method if possible, as it may be removed or be changed in the future.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-21
-22
-23
-24
-25
-
-
-
# File 'lib/concurrent/next2.rb', line 21
-
-defdone(future)# TODO pass in success/value/reason to avoid locking
-# futures could be deleted from blocked_by one by one here, but that would too expensive,
-# it's done once when all are done to free the reference
-resolvableifsynchronize{@countdown}.decrement.zero?
-end
Raised when a lifecycle method (such as stop) is called in an improper
-sequence or when the object is in an inappropriate state.
-
-
-
-
-
-
-
-
-
-
Class.new(StandardError)
-
-
InitializationError =
-
-
-
Raised when an object's methods are called when it has not been
-properly initialized.
-
-
-
-
-
-
-
-
-
-
Class.new(StandardError)
-
-
MaxRestartFrequencyError =
-
-
-
Raised when an object with a start/stop lifecycle has been started an
-excessive number of times. Often used in conjunction with a restart
-policy or strategy.
-
-
-
-
-
-
-
-
-
-
Class.new(StandardError)
-
-
MultipleAssignmentError =
-
-
-
Raised when an attempt is made to modify an immutable object
-(such as an IVar) after its final state has been set.
-
-
-
-
-
-
-
-
-
-
Class.new(StandardError)
-
-
RejectedExecutionError =
-
-
-
Raised by an Executor when it is unable to process a given task,
-possibly because of a reject policy or other internal error.
-
-
-
-
-
-
-
-
-
-
Class.new(StandardError)
-
-
ResourceLimitError =
-
-
-
Raised when any finite resource, such as a lock counter, exceeds its
-maximum limit/threshold.
Dataflow allows you to create a task that will be scheduled when all of its data dependencies are available. Data dependencies are Future values. The dataflow task itself is also a Future value, so you can build up a graph of these tasks, each of which is run when all the data and other tasks it depends on are available or completed.
-
-
Our syntax is somewhat related to that of Akka's flow and Habanero Java's DataDrivenFuture. However unlike Akka we don't schedule a task at all until it is ready to run, and unlike Habanero Java we pass the data values into the task instead of dereferencing them again in the task.
-
-
The theory of dataflow goes back to the 70s. In the terminology of the literature, our implementation is coarse-grained, in that each task can be many instructions, and dynamic in that you can create more tasks within other tasks.
-
-
Example
-
-
A dataflow task is created with the dataflow method, passing in a block.
-
-
task=Concurrent::dataflow{14}
-
-
-
This produces a simple Future value. The task will run immediately, as it has no dependencies. We can also specify Future values that must be available before a task will run. When we do this we get the value of those futures passed to our block.
Using the dataflow method you can build up a directed acyclic graph (DAG) of tasks that depend on each other, and have the tasks run as soon as their dependencies are ready and there is CPU capacity to schedule them. This can help you create a program that uses more of the CPU resources available to you.
-
-
Derivation
-
-
This section describes how we could derive dataflow from other primitives in this library.
-
-
Consider a naive fibonacci calculator.
-
-
deffib(n)
- ifn<2
- n
- else
- fib(n-1)+fib(n-2)
- end
-end
-
-putsfib(14)#=> 377
-
One of the drawbacks of this approach is that all the futures start, and then most of them immediately block on their dependencies. We know that there's no point executing those futures until their dependencies are ready, so let's not execute each future until all their dependencies are ready.
-
-
To do this we'll create an object that counts the number of times it observes a future finishing before it does something - and for us that something will be to execute the next future.
Since we know that the futures the dataflow computation depends on are already going to be available when the future is executed, we might as well pass the values into the block so we don't have to reference the futures inside the block. This allows us to write the dataflow block as straight non-concurrent code without reference to futures.
# File 'lib/concurrent/tvar.rb', line 143
-
-defabort_transaction
- raiseTransaction::AbortError.new
-end
-
-
-
-
-
-
-
-
- + (Object) atomically
-
-
-
-
-
-
-
-
Run a block that reads and writes TVars as a single atomic transaction.
-With respect to the value of TVar objects, the transaction is atomic, in
-that it either happens or it does not, consistent, in that the TVar
-objects involved will never enter an illegal state, and isolated, in that
-transactions never interfere with each other. You may recognise these
-properties from database transactions.
-
-
There are some very important and unusual semantics that you must be aware of:
-
-
-
Most importantly, the block that you pass to atomically may be executed
-more than once. In most cases your code should be free of
-side-effects, except for via TVar.
-
If an exception escapes an atomically block it will abort the transaction.
-
It is undefined behaviour to use callcc or Fiber with atomically.
-
If you create a new thread within an atomically, it will not be part of
-the transaction. Creating a thread counts as a side-effect.
-
-
-
Transactions within transactions are flattened to a single transaction.
# File 'lib/concurrent/tvar.rb', line 89
-
-defatomically
- raiseArgumentError.new('no block given')unlessblock_given?
-
- # Get the current transaction
-
- transaction=Transaction::current
-
- # Are we not already in a transaction (not nested)?
-
- iftransaction.nil?
- # New transaction
-
- begin
- # Retry loop
-
- loopdo
-
- # Create a new transaction
-
- transaction=Transaction.new
- Transaction::current=transaction
-
- # Run the block, aborting on exceptions
-
- begin
- result=yield
- rescueTransaction::AbortError=>e
- transaction.abort
- result=Transaction::ABORTED
- rescue=>e
- transaction.abort
- raisee
- end
- # If we can commit, break out of the loop
-
- ifresult!=Transaction::ABORTED
- iftransaction.commit
- breakresult
- end
- end
- end
- ensure
- # Clear the current transaction
-
- Transaction::current=nil
- end
- else
- # Nested transaction - flatten it and just run the block
-
- yield
- end
-end
Only change this option if you know what you are doing!
-When this is set to true (the default) then at_exit handlers
-will be registered automatically for all thread pools to
-ensure that they are shutdown when the application ends. When
-changed to false, the at_exit handlers will be circumvented
-for all Concurrent Ruby thread pools running within the
-application. Even those created within other gems used by the
-application. This method should never be called from within a
-gem. It should only be used from within the main application.
-And even then it should be used only when necessary.
-
-
-
-
Defines if ALL executors should be auto-terminated with an
-at_exit callback. When set to false it will be the application
-programmer's responsibility to ensure that all thread pools,
-including the global thread pools, are shutdown properly prior to
-application exit.
-
-
-
-
-
-
-
Returns:
-
-
-
-
-
- (Boolean)
-
-
-
- —
-
true when all thread pools will auto-terminate on
-application exit using an at_exit handler; false when no auto-termination
-will occur.
-
-
-
-
-
-
-
-
-
-
-
-
-131
-132
-133
-
-
-
# File 'lib/concurrent/configuration.rb', line 131
-
-defself.auto_terminate_all_executors?
- @@auto_terminate_all_executors.value
-end
Only change this option if you know what you are doing!
-When this is set to true (the default) then at_exit handlers
-will be registered automatically for the global thread pools
-to ensure that they are shutdown when the application ends. When
-changed to false, the at_exit handlers will be circumvented
-for all global thread pools. This method should never be called
-from within a gem. It should only be used from within the main
-application and even then it should be used only when necessary.
-
-
-
-
Defines if global executors should be auto-terminated with an
-at_exit callback. When set to false it will be the application
-programmer's responsibility to ensure that the global thread pools
-are shutdown properly prior to application exit.
-
-
-
-
-
-
-
Returns:
-
-
-
-
-
- (Boolean)
-
-
-
- —
-
true when global thread pools will auto-terminate on
-application exit using an at_exit handler; false when no auto-termination
-will occur.
-
-
-
-
-
-
-
-
-
-
-
-
-87
-88
-89
-
-
-
# File 'lib/concurrent/configuration.rb', line 87
-
-defself.auto_terminate_global_executors?
- @@auto_terminate_global_executors.value
-end
Dataflow allows you to create a task that will be scheduled when all of its data dependencies are available. Data dependencies are Future values. The dataflow task itself is also a Future value, so you can build up a graph of these tasks, each of which is run when all the data and other tasks it depends on are available or completed.
-
-
Our syntax is somewhat related to that of Akka's flow and Habanero Java's DataDrivenFuture. However unlike Akka we don't schedule a task at all until it is ready to run, and unlike Habanero Java we pass the data values into the task instead of dereferencing them again in the task.
-
-
The theory of dataflow goes back to the 70s. In the terminology of the literature, our implementation is coarse-grained, in that each task can be many instructions, and dynamic in that you can create more tasks within other tasks.
-
-
Example
-
-
A dataflow task is created with the dataflow method, passing in a block.
-
-
task=Concurrent::dataflow{14}
-
-
-
This produces a simple Future value. The task will run immediately, as it has no dependencies. We can also specify Future values that must be available before a task will run. When we do this we get the value of those futures passed to our block.
Using the dataflow method you can build up a directed acyclic graph (DAG) of tasks that depend on each other, and have the tasks run as soon as their dependencies are ready and there is CPU capacity to schedule them. This can help you create a program that uses more of the CPU resources available to you.
-
-
Derivation
-
-
This section describes how we could derive dataflow from other primitives in this library.
-
-
Consider a naive fibonacci calculator.
-
-
deffib(n)
- ifn<2
- n
- else
- fib(n-1)+fib(n-2)
- end
-end
-
-putsfib(14)#=> 377
-
One of the drawbacks of this approach is that all the futures start, and then most of them immediately block on their dependencies. We know that there's no point executing those futures until their dependencies are ready, so let's not execute each future until all their dependencies are ready.
-
-
To do this we'll create an object that counts the number of times it observes a future finishing before it does something - and for us that something will be to execute the next future.
Since we know that the futures the dataflow computation depends on are already going to be available when the future is executed, we might as well pass the values into the block so we don't have to reference the futures inside the block. This allows us to write the dataflow block as straight non-concurrent code without reference to futures.
Only change this option if you know what you are doing!
When this is set to true (the default), at_exit handlers
will be registered automatically for all thread pools to
ensure that they are shut down when the application ends. When
changed to false, the at_exit handlers will be circumvented
for all Concurrent Ruby thread pools running within the
application, even those created within other gems used by the
application. This method should never be called from within a
gem. It should only be used from within the main application,
and even then only when necessary.

Defines whether ALL executors should be auto-terminated with an
at_exit callback. When set to false it will be the application
programmer's responsibility to ensure that all thread pools,
including the global thread pools, are shut down properly prior to
application exit.

# File 'lib/concurrent/configuration.rb', line 107

def self.disable_auto_termination_of_all_executors!
  @@auto_terminate_all_executors.make_false
end
Only change this option if you know what you are doing!
When this is set to true (the default), at_exit handlers
will be registered automatically for the global thread pools
to ensure that they are shut down when the application ends. When
changed to false, the at_exit handlers will be circumvented
for all global thread pools. This method should never be called
from within a gem. It should only be used from within the main
application, and even then only when necessary.

Defines whether global executors should be auto-terminated with an
at_exit callback. When set to false it will be the application
programmer's responsibility to ensure that the global thread pools
are shut down properly prior to application exit.

# File 'lib/concurrent/configuration.rb', line 66

def self.disable_auto_termination_of_global_executors!
  @@auto_terminate_global_executors.make_false
end
Time calculations on all platforms and languages are sensitive to
changes to the system clock. To alleviate the potential problems
associated with changing the system clock while an application is running,
most modern operating systems provide a monotonic clock that operates
independently of the system clock. A monotonic clock cannot be used to
determine human-friendly clock times; it is used exclusively
for calculating time intervals. Not all Ruby platforms provide access to an
operating system monotonic clock. On these platforms a pure-Ruby monotonic
clock will be used as a fallback. An operating system monotonic clock is both
faster and more reliable than the pure-Ruby implementation. The pure-Ruby
implementation should be fast and reliable enough for most non-realtime
operations. At this time the common Ruby platforms that provide access to an
operating system monotonic clock are MRI 2.1 and above and JRuby (all versions).
Runs the given block and returns the number of seconds that elapsed.
Returns the current time as tracked by the application monotonic clock.

Returns:

    (Float) — the current monotonic time when since is not given, else
    the elapsed monotonic time between since and the current time
Wait the given number of seconds for the block operation to complete.
Intended to be a simpler and more reliable replacement for the Ruby
standard library Timeout::timeout method.
Perform the given operation asynchronously after the given number of seconds.

Parameters:

    seconds (Fixnum) — the interval in seconds to wait before executing the task
# File 'lib/concurrent/utility/timer.rb', line 15

def timer(seconds, *args, &block)
  raise ArgumentError.new('no block given') unless block_given?
  raise ArgumentError.new('interval must be greater than or equal to zero') if seconds < 0

  Concurrent.configuration.global_timer_set.post(seconds, *args, &block)
  true
end
A ThreadLocalVar is a variable where the value is different for each thread.
Each variable may have a default value, but when you modify the variable only
the current thread will ever see that change.