
Figure out why file is not in mongo before an LS all within the skipper adapter tests #34

Open
willhuang85 opened this issue Nov 10, 2018 · 4 comments

Comments

@willhuang85
Owner

willhuang85 commented Nov 10, 2018

Looking at the failing test, the file has not been written to mongo before the ls is performed. This seems to be because the receiver__'s finish has been called before the file has been fully uploaded to mongo.

I believe the tests just don't treat the methods asynchronously. Possibly these tests should not rely on what was done in a previous test.
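
A minimal sketch of what waiting for the upload to finish could look like, assuming the adapter uses MongoDB's GridFSBucket and a standard skipper-style Writable receiver (the names here are illustrative, not the module's actual code):

const { Writable } = require('stream');
const { GridFSBucket } = require('mongodb');

function buildReceiver(bucket) {
  return new Writable({
    objectMode: true,
    write(__newFile, encoding, done) {
      const uploadStream = bucket.openUploadStream(__newFile.fd);

      // Only signal completion once GridFS has flushed the whole file;
      // calling done() earlier lets the receiver finish (and the test run
      // an ls) before the file actually exists in mongo.
      uploadStream.once('error', done);
      uploadStream.once('finish', () => done());

      __newFile.pipe(uploadStream);
    }
  });
}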

@willhuang85 willhuang85 changed the title Figure out why the receiver__ is being close before the __newFile headstream is done piping into mongo Figure out why the receiver__ is being closed before the __newFile headstream is done piping into mongo Nov 10, 2018
@willhuang85 willhuang85 changed the title Figure out why the receiver__ is being closed before the __newFile headstream is done piping into mongo Figure out why the receiver__ is being closed before the __newFile stream is done piping into mongo Nov 10, 2018
@willhuang85 willhuang85 changed the title Figure out why the receiver__ is being closed before the __newFile stream is done piping into mongo Figure out why file is not in mongo before an LS all within the skipper adapter tests Nov 11, 2018
@willhuang85
Owner Author

Create issue in skipper tests project instead

@ajuhos

ajuhos commented Jun 29, 2019

THIS ISSUE MUST BE REOPENED. This bug exists and happens in real life, not just in the test cases. We have to process file uploads with a delay when using skipper-gridfs because of this bug; it does not happen with skipper-azure, so it is indeed a bug in this module.
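
For reference, a rough illustration of the delay workaround described above, as it might look in a Sails controller (the field name, timeout value, and connection URI are assumptions, not a recommendation):

req.file('avatar').upload({
  adapter: require('skipper-gridfs'),
  uri: 'mongodb://localhost:27017/mydb'
}, function (err, uploadedFiles) {
  if (err) return res.serverError(err);
  // The upload callback can fire before the file is actually in GridFS,
  // so wait briefly before doing anything that reads the file back.
  setTimeout(function () {
    return res.json(uploadedFiles);
  }, 500);
});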

@willhuang85 willhuang85 reopened this Jun 30, 2019
@willhuang85
Owner Author

Not sure why the behavior differs between running the tests locally and in Travis. Maybe my computer is slower than Travis, as suggested by it working once a delay is added?

@dmedina2015

dmedina2015 commented Mar 18, 2021

I double-checked the workflow of the .receive() function and compared it to skipper-disk. I noticed that the current code has some event listeners and emitters that are not present in skipper-disk, and this causes done to be called before the right time.
Even if you are not getting an error, you can see that something is wrong by looking at the status field of the response. Even when I added the missing receiver__.emit('writefile', __newFile) event, it never reached the point of telling skipper that the file was saved. It remains in bufferingOrWriting:

[
    {
        "fd": "36e48e96-370f-418a-ac45-8c5017b12c3b.jpg",
        "size": 55241,
        "type": "image/jpeg",
        "filename": "50405223_10156786194446605_3440585807042183168_n.jpg",
        "status": "bufferingOrWriting",
        "field": "avatar"
    }
]

So I did some cleanup in it, leaving only the listeners on the outs__ stream (see the sketch after the output below). At the end of the day, failures or successes in __newFile or receiver__ will impact outs__, so we only have to watch it. After that, the status changed to finished, indicating that the workflow reaches the desired point:

[
    {
        "fd": "36e48e96-370f-418a-ac45-8c5017b12c3b.jpg",
        "size": 55241,
        "type": "image/jpeg",
        "filename": "50405223_10156786194446605_3440585807042183168_n.jpg",
        "status": "finished",
        "field": "avatar"
    }
]
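
A rough sketch of the simplification described above, assuming outs__ is the GridFS upload stream the receiver pipes into (bucket and everything other than outs__, __newFile, receiver__ and the 'writefile' event are illustrative assumptions):

receiver__._write = function (__newFile, encoding, done) {
  const outs__ = bucket.openUploadStream(__newFile.fd);

  // Per the reasoning above, watch only the terminal stream: upstream
  // failures end up affecting outs__, and 'finish' means the bytes are
  // really in GridFS.
  outs__.once('error', done);
  outs__.once('finish', function () {
    // Tell skipper the file has moved past "bufferingOrWriting".
    receiver__.emit('writefile', __newFile);
    done();
  });

  __newFile.pipe(outs__);
};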

Additionally, the code is now compatible with Node 14 & 15, and all tests pass even when using the untouched skipper-adapter-test repo.

PS1: Node 14 and 15 seem to be more memory-hungry than 6 & 8. When running the tests on my laptop (8 GB RAM), they fail intermittently with an ECONNRESET error during the tests with 200 simultaneous clients. On my desktop (16 GB) or in Travis CI, all tests pass 100% of the time.

PS2: I updated PRs #44 and #45 with this code; you can test them.
