Inference on my machine differs from other machines I tested #45
Comments
I further tested this on a Windows machine (CPU). Initially I used pre-processed data from my Ubuntu desktop and it did not work well. Then I performed the data pre-processing on the Windows machine and it worked like a charm. I then copied the data from Windows to Ubuntu and it worked well there too. So it looks like something to do with data pre-processing on my Ubuntu machine. Any hints?
One simple way of seeing what could be going wrong is to check which preprocess files differ from each other. They are JSON, so they should be easy enough to compare.
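A direct textual diff can be misleading here, since two dictionaries with the same tokens may list them in a different order. A minimal sketch of an order-insensitive comparison (the function and file names are hypothetical, not from this repo):

```python
import json

def load_vocab(path):
    """Load a token -> index dictionary from a preprocess JSON file."""
    with open(path) as f:
        return json.load(f)

def compare_vocabs(vocab_a, vocab_b):
    """Compare two token->index dictionaries, ignoring index order.

    Indexes may legitimately differ between machines, so only the
    token sets are compared. Returns (only_in_a, only_in_b, shared_count).
    """
    a_tokens, b_tokens = set(vocab_a), set(vocab_b)
    return a_tokens - b_tokens, b_tokens - a_tokens, len(a_tokens & b_tokens)
```

If both "only in" sets come back empty, the vocabularies match in content and any diff noise is purely ordering.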
Yeah, so basically it comes down to the char and word dictionaries and embeddings. On all the machines where it worked well, the dictionary files follow the same sequence (e.g. the sequence of tokens in the GloVe embeddings).
On my Ubuntu desktop this sequence is totally different. Not that the sequence matters, as long as the indexes are pointing to the right vectors. But because of this, text comparison between the files is not easy.
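The point that "the sequence doesn't matter as long as indexes point to the right vectors" can be checked directly: for every shared token, the row its index selects in one machine's embedding matrix should equal the row its index selects in the other's. A rough sketch, assuming token->index dicts and (vocab_size, dim) NumPy embedding arrays (names are illustrative):

```python
import numpy as np

def embeddings_consistent(vocab_a, emb_a, vocab_b, emb_b, atol=1e-6):
    """Return True if every token present in both vocabularies maps to
    the same embedding vector on both machines, even when its row index
    differs between the two dictionaries."""
    shared = set(vocab_a) & set(vocab_b)
    return all(
        np.allclose(emb_a[vocab_a[tok]], emb_b[vocab_b[tok]], atol=atol)
        for tok in shared
    )
```

If this returns False on the Ubuntu desktop's files, the re-ordering is not benign: some indexes point at the wrong vectors, which would explain the off inferences.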
Hi, I trained the model on AWS (a GPU instance) for 60K steps. I then tested it on several GPU/CPU instances and the results are consistent. But when I deploy it locally on my Ubuntu desktop (CPU only), the inferences are totally off. I tested on an AWS GPU instance (p2.xlarge), an AWS CPU instance (c5d.4xlarge), and also on Colab. All three show consistent answers for a given context and questions. Only on my desktop are the answers way off. Any input as to why this could be happening would help. Thanks!