- When raising `RetryException` with no `method`, use the task decorator's retry method if set (356)
- Add `tiger.get_sizes_for_queues_and_states` (352)
- First version using the automated-release process
- Stop heartbeat thread in case of unhandled exceptions (335)
- Heartbeat threading-related fixes with synchronous worker (333)
- Implement heartbeat with synchronous TaskTiger worker (331)
- Adding synchronous (non-forking) executor (319, 320)
- If possible, retry tasks that fail with "execution not found" (323)
- Option to exit TaskTiger after a certain amount of time (324)
- Purge errored tasks even if task object is not found (310)
- Added `current_serialized_func` property to `TaskTiger` object (296)
- Added support for Redis >= 6.2.7 (268)
- Removed `execute_pipeline` script (284)
- Added typing to more parts of the codebase
- Dropped Python 3.7 support, added Python 3.11 support
- Added CI checks to ensure compatibility on redis-py versions (currently >=3.3.0,<5)
Allow truncating task executions (251)
This version of TaskTiger switches to using the `t:task:<id>:executions_count` Redis key to determine the total number of task executions. In previous versions this was accomplished by obtaining the length of `t:task:<id>:executions`. This change was required to introduce a parameter that enables truncating task execution entries, which is useful for tasks with many retries, where execution entries consume a lot of memory.
This behavior is incompatible with the previous mechanism and requires a migration to populate the task execution counters. Without the migration, the execution counters will behave as though they were reset, which may result in existing tasks retrying more times than they should.
The migration can be executed fully live without concern for data integrity.
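To illustrate why a separate counter is needed, here is a minimal stdlib-only sketch in which a plain list and an integer stand in for the two Redis keys (this is an illustration of the idea, not TaskTiger's actual code):

```python
# A list and an integer stand in for t:task:<id>:executions and
# t:task:<id>:executions_count respectively (no Redis required).
executions = []
executions_count = 0

# Record five executions.
for attempt in range(5):
    executions.append({"attempt": attempt})
    executions_count += 1

# Truncate the stored entries to the two most recent ones.
executions = executions[-2:]

print(len(executions))   # 2 -- a length-based count now undercounts
print(executions_count)  # 5 -- the counter still reports the real total
```

Without the counter, truncation would make a task look like it had fewer executions than it really did, which is exactly the retry-miscounting risk the migration guards against.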
- Upgrade TaskTiger to `0.16.2` if running a version lower than that.
- Call `tasktiger.migrations.migrate_executions_count` with your `TaskTiger` instance, e.g.:
```python
from tasktiger import TaskTiger
from tasktiger.migrations import migrate_executions_count

# Instantiate directly or import from your application module
tiger = TaskTiger(...)

# This could take a while depending on the
# number of failed/retrying tasks you have
migrate_executions_count(tiger)
```
- Upgrade TaskTiger to `0.17.0`. Done!
Import cleanup (258)
Due to a cleanup of imports, some internal TaskTiger objects can no longer be imported from the public modules. This shouldn't cause problems for most users, but it's a good idea to double check that all imports from the TaskTiger package continue to function correctly in your application.
- Prefilter polled queues (242)
- Use SSCAN to prefilter queues in scheduled state (248)
- Add task execution counter (252)
- Add function name to tasktiger done log messages (203)
- Add task args / kwargs to the task_error log statement (215)
- Fix `hard_timeout` in parent process when stored on task function (235)
- Populate `Task.ts` field in `Task.from_id` function (019bf18)
- Add `TaskTiger.would_process_configured_queue()` function (217152d)
- Add `Task.time_last_queued` property getter (6d2285d)
This new version of TaskTiger uses a new locking mechanism: the `Lock` provided by redis-py. It is incompatible with the old locking mechanism we were using, and several core functions in TaskTiger depend on locking to work correctly, so this warrants a careful migration process.
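For context, redis-py's `Lock` acquires by setting a key with a unique token and a TTL, and releases only if the token still matches. A stdlib-only sketch of that token scheme, with a dict standing in for Redis (an illustration, not TaskTiger's actual code):

```python
import time
import uuid

store = {}  # {key: (token, expires_at)} -- a dict standing in for Redis

def acquire(key, ttl):
    # Equivalent of SET key token NX PX ttl: succeeds only if the key
    # is absent or the previous holder's TTL has expired.
    token = uuid.uuid4().hex
    current = store.get(key)
    if current is None or current[1] <= time.monotonic():
        store[key] = (token, time.monotonic() + ttl)
        return token
    return None

def release(key, token):
    # Release only if our token still owns the lock.
    if store.get(key) and store[key][0] == token:
        del store[key]
        return True
    return False

t1 = acquire("t:lock:example", ttl=60)
t2 = acquire("t:lock:example", ttl=60)
print(t1 is not None)  # True: first acquisition succeeds
print(t2 is None)      # True: the lock is contended
print(release("t:lock:example", t1))  # True
```

Because locks from the old mechanism simply time out, waiting a few minutes for them to expire (as the migration steps require) is what makes the switch safe.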
You can perform this migration in two ways: via a live migration, or via a downtime migration. After the migration, there's an optional cleanup step.
Live migration:

- Update your environment to TaskTiger 0.12 as usual.
- Deploy TaskTiger as it is in the commit SHA `cf600449d594ac22e6d8393dc1009a84b52be0c1`. In `pip` parlance, that would be: `-e git+ssh://[email protected]/closeio/tasktiger.git@cf600449d594ac22e6d8393dc1009a84b52be0c1#egg=tasktiger`
- Wait at least 2-3 minutes with it running in production in all your TaskTiger workers. This gives the old locks time to expire, after which the new locks are fully in effect.
- Deploy TaskTiger 0.13. Your system is migrated.
Downtime migration:

- Update your environment to TaskTiger 0.12 as usual.
- Scale your TaskTiger workers down to zero.
- Deploy TaskTiger 0.13. Your system is migrated.
Run the script in `scripts/redis_scan.py` to delete the old lock keys from your Redis instance:

```shell
./scripts/redis_scan.py --host HOST --port PORT --db DB --print --match "t:lock:*" --ttl 300
```
The flags:

- `--host`: The Redis host. Required.
- `--port`: The port the Redis instance is listening on. Defaults to `6379`.
- `--db`: The Redis database. Defaults to `0`.
- `--print`: If you want the script to print which keys it is modifying, use this.
- `--match`: What pattern to look for. If you didn't change the default prefix TaskTiger uses for keys, this will be `t:lock:*`, otherwise it will be `PREFIX:lock:*`. By default, scans all keys.
- `--ttl`: A TTL to set. A TTL of 300 will give you time to undo if you want to halt the migration for whatever reason. (Just call this command again with `--ttl -1`.) By default, does not change keys' TTLs.

Plus, there is:

- `--file`: A log file that will receive the changes made. Defaults to `redis-stats.log` in the current working directory.
- `--delay`: How long, in seconds, to wait between `SCAN` iterations. Defaults to `0.1`.
- Drop support for redis-py 2 (#183)
- Make the `TaskTiger` instance available for the task via global state (#170)
- Support for custom task runners (#175)
- Add ability to configure a poll- vs push-method for task runners to discover new tasks (#176)
- `unique_key` specifies the list of kwargs to use to construct the unique key (#180)
- Ensure task exists in the given queue when retrieving it (#184)
- Clear retried executions from successful periodic tasks (#188)
- Drop support for Python 3.4 and add testing for Python 3.7 (#163)
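The `unique_key` entry above (#180) can be sketched with a stdlib-only illustration of the idea: only the named kwargs feed into the uniqueness key, so tasks that differ in other kwargs collapse into one. The hashing details here are illustrative assumptions, not TaskTiger's exact implementation:

```python
import hashlib
import json

def uniqueness_key(func_name, kwargs, unique_key):
    # Only the kwargs listed in unique_key participate in the key.
    subset = {k: kwargs.get(k) for k in unique_key}
    payload = json.dumps([func_name, subset], sort_keys=True)
    return hashlib.sha1(payload.encode()).hexdigest()

a = uniqueness_key("sync_user", {"user_id": 1, "note": "first"}, ("user_id",))
b = uniqueness_key("sync_user", {"user_id": 1, "note": "second"}, ("user_id",))
print(a == b)  # True: a differing `note` does not queue a second unique task
```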