
Managing the same lock with different handlers #451

Open
arieroos opened this issue Sep 2, 2021 · 8 comments

Comments


arieroos commented Sep 2, 2021

Describe the bug
Sometimes I need to acquire and release the same lock with different objects. I am constrained by a legacy design to do it this way, because acquiring and releasing the lock happen in separate processes. However, if I open a second handle to the same lock key, I can't release the lock from it.

To Reproduce

Python 3.6.5 (default, Mar 25 2020, 10:32:15) 
[GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pottery
>>> import redis
>>> lock = pottery.Redlock(key="test-lock", masters={redis.Redis()}, auto_release_time=40000)
>>> lock.acquire()
True
>>> lock.locked()
33200
>>> lock2 = pottery.Redlock(key="test-lock", masters={redis.Redis()}, auto_release_time=40000)
>>> lock.locked()
23395
>>> lock2.locked()
0
>>> lock2.release()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/arieroos/Workspace/dos/venv/lib/python3.6/site-packages/pottery/redlock.py", line 572, in release
    redis_errors=redis_errors,
pottery.exceptions.ReleaseUnlockedLock: key='redlock:test-lock', masters=[Redis<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>], redis_errors=[]

Expected behavior

Python 3.6.5 (default, Mar 25 2020, 10:32:15) 
[GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pottery
>>> import redis
>>> lock = pottery.Redlock(key="test-lock", masters={redis.Redis()}, auto_release_time=40000)
>>> lock.acquire()
True
>>> lock.locked()
33200
>>> lock2 = pottery.Redlock(key="test-lock", masters={redis.Redis()}, auto_release_time=40000)
>>> lock.locked()
23395
>>> lock2.locked()
13395  # or some integer like that
>>> lock2.release()
# no exception thrown

Environment:

  • OS: macOS (However, I use Linux in production)
  • Python version: 3.5.6

Additional context
Currently I just ignore the exception, since most locks will expire in time anyway; however, that is probably bad practice.


Lessica commented Sep 20, 2021

@brainix, you assign a UUID specific to each Redlock object and identify a lock by its UUID in the Redis scripts:

self._uuid = str(uuid.uuid4())

This is weird.

According to https://redis.io/topics/distlock:

The key is set to a value “my_random_value”. This value must be unique across all clients and all lock requests.

But I think different Redlock objects for the same key should share the same “random_value”.

It’s safest to instantiate a new Redlock object every time you need to protect your resource and to not share Redlock instances across different parts of code. In other words, think of the key as identifying the resource; don’t think of any particular Redlock as identifying the resource.
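The token check quoted from the Redis docs above explains the observed behavior. A toy model may make it concrete — this is not pottery's actual implementation, just a sketch of the release check, with a plain dict standing in for Redis:

```python
import uuid

# Toy model of the Redlock release check: the stored value is the
# acquirer's random token, and release only succeeds when the caller
# presents that same token. A plain dict stands in for Redis.
store = {}

def acquire(key, token):
    if key not in store:
        store[key] = token
        return True
    return False

def release(key, token):
    if store.get(key) == token:
        del store[key]
        return True
    return False  # a different handle's token doesn't match

token1, token2 = str(uuid.uuid4()), str(uuid.uuid4())
acquire("test-lock", token1)         # first handle takes the lock
print(release("test-lock", token2))  # False: second handle can't release
print(release("test-lock", token1))  # True: original token matches
```

Since each Redlock instance generates its own UUID as the token, a second instance on the same key fails this check by design, which is exactly the ReleaseUnlockedLock in the report above.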


belthaZornv commented Oct 1, 2021

Same issue here. You'd expect a distributed lock to be distributed; if you can't check whether a lock is locked from a different handle, then there isn't much use.

@leandrorebelo

Same issue here. It makes no sense that the lock must be released by the same Redlock instance that acquired it. In my opinion, the key and masters should be sufficient to release.

@WeatherControl

Can confirm there is something wrong with the locks: they deadlock for no reason.

In [15]: lock.acquire(blocking=False)
Out[15]: False

In [16]: lock.release()
---------------------------------------------------------------------------
ReleaseUnlockedLock                       Traceback (most recent call last)
<ipython-input-16-5ee12072f478> in <module>
----> 1 lock.release()

/usr/local/lib/python3.8/dist-packages/pottery/redlock.py in release(self, raise_on_redis_errors)
    577
    578         self._check_enough_masters_up(raise_on_redis_errors, redis_errors)
--> 579         raise ReleaseUnlockedLock(
    580             self.key,
    581             self.masters,

ReleaseUnlockedLock: key='redlock:testlock', masters=[Redis<ConnectionPool<Connection<host=*******,port=6379,db=0>>>], redis_errors=[]

In [17]: lock.locked()
Out[17]: 0
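A note on the transcript above: `acquire(blocking=False)` returned `False`, meaning this instance never held the lock, so `release()` raising is arguably the documented contract rather than a bug. A guard pattern avoids the exception; sketched here with the stdlib `threading.Lock` as an analogy (same `acquire(blocking=False)`/`release()` shape), not with pottery itself:

```python
import threading

lock = threading.Lock()

def run_guarded():
    # Same shape as the pottery calls above: acquire(blocking=False)
    # returns False when the lock is already held, and releasing a lock
    # you don't hold raises (RuntimeError here, ReleaseUnlockedLock in
    # pottery). The guard: only release if *this* call acquired.
    acquired = lock.acquire(blocking=False)
    if not acquired:
        return False  # someone else holds it; do NOT call release()
    try:
        pass  # critical section
    finally:
        lock.release()
    return True

print(run_guarded())  # True when uncontended
```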

@sshahar1

That is certainly a problem, and it took me a while to understand why it wasn't working (the UUID creates the uniqueness). I was hoping to use this to build some kind of semaphore; now I wonder if I should implement it myself or just wait for the resource to be released.


sriveros95 commented Oct 25, 2022

I don't know if we have the same issue. We use locks in different services (and in some services they are also used in different threads); we set their key and masters. We use the locks like this:

with the_lock:
    <code we want to execute with lock>

The same key is used between services. It normally works fine (probably when they aren't using it at the same time), but sometimes we get ReleaseUnlockedLock.

I've added some logs to see what was running inside, and it seems the block finishes running. Would that mean another lock acquired and released it while this one was supposed to hold it?

I'll be checking more in depth to add more info, but if someone has a hint of what could be happening, or what I could look for, it would be greatly appreciated.

Update:
Tried a simple test with different services on different machines and worked correctly, will keep looking

Update 2:
Oh, I get it now:
You cannot reuse the same lock variable; you need to initialize it again, so it would have to be (as in the examples, I guess):

with Redlock(the_lock_key):
    <code we want to execute with lock>


Arrow-Li commented Nov 29, 2022

It's weird; I got the same error with the code below.

with Redlock(
    key=f"{func.__module__}:{func.__name__}",
    masters={get_redis_conn()},
    context_manager_blocking=False,
):
    pass  # do something here

Error Info:

Traceback (most recent call last):
  File "/runtime/python3.9/lib/python3.9/site-packages/apscheduler/executors/base.py", line 125, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "/server/./utils/tools.py", line 118, in wrapper
    return func(*args, **kwargs)
  File "/runtime/python3.9/lib/python3.9/site-packages/pottery/redlock.py", line 673, in __exit__
    self.__release()
  File "/runtime/python3.9/lib/python3.9/site-packages/pottery/redlock.py", line 595, in release
    raise ReleaseUnlockedLock(
pottery.exceptions.ReleaseUnlockedLock: ('redlock:***.***:***', frozenset({Redis<ConnectionPool<Connection<host=***,port=6379,db=5>>>}))
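One plausible reading of the traceback above: the work inside the `with` block outlived `auto_release_time`, so the key expired (or was re-acquired by another client) before `__exit__` called `release()`. A toy timeline shows the effect — a dict stands in for Redis, and times are in seconds here rather than pottery's milliseconds:

```python
import time

# Toy model of lock expiry: each entry stores (token, deadline). If the
# critical section runs past the deadline, release() finds the lock no
# longer validly held and fails, which in pottery surfaces as
# ReleaseUnlockedLock.
store = {}

def acquire(key, token, auto_release_time):
    entry = store.get(key)
    if entry is None or entry[1] < time.monotonic():
        store[key] = (token, time.monotonic() + auto_release_time)
        return True
    return False

def release(key, token):
    entry = store.get(key)
    if entry is not None and entry[0] == token and entry[1] >= time.monotonic():
        del store[key]
        return True
    return False

acquire("job", "tok-1", auto_release_time=0.05)
time.sleep(0.1)                 # simulate work that outlives the lock
print(release("job", "tok-1"))  # False: the lock already expired
```

If this is the failure mode, a larger `auto_release_time` (or extending the lock before it expires) would be the fix.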

@lightning-zgc

How about a larger auto_release_time?
