
[Bug] Abnormal multithreading inference time #2845

Open

xzh929 opened this issue Nov 21, 2024 · 0 comments
xzh929 commented Nov 21, 2024

Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read the FAQ documentation but cannot get the expected help.
  • The bug has not been fixed in the latest version.

Describe the bug

When I use the model SDK for inference with multithreading, the inference time is abnormal: it increases with the number of threads.

Reproduction

```cpp
#include <chrono>
#include <iostream>
#include <memory>
#include <string>
#include <thread>

#include <mmdeploy/detector.hpp>
#include <opencv2/opencv.hpp>

class Model {
 public:
  Model(std::string model_dir);
  void Init(std::string model_dir);
  void Infer(cv::Mat &img);

 private:
  std::shared_ptr<mmdeploy::Detector> model_;
};

void Model::Init(std::string model_dir) {
  mmdeploy::Context context;
  std::string device = "cuda";
  context.Add(mmdeploy::Device(device));
  model_ = std::make_shared<mmdeploy::Detector>(mmdeploy::Model{model_dir}, context);
}

void Model::Infer(cv::Mat &img) {
  auto start = std::chrono::steady_clock::now();
  mmdeploy::Detector::Result dets = model_->Apply(img);
  auto end = std::chrono::steady_clock::now();
  std::cout << "thread id:" << std::this_thread::get_id() << " infer time: "
            << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
            << "ms" << std::endl;
}

Model::Model(std::string model_dir) {
  Init(model_dir);
}

void run(Model model, std::string &img_path) {
  cv::Mat img = cv::imread(img_path);
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < 10; ++i) {
    model.Infer(img);
  }
  auto end = std::chrono::steady_clock::now();
  std::cout << "thread id:" << std::this_thread::get_id() << " total time: "
            << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
            << "ms" << std::endl;
}

int main() {
  std::string model_dir = "./yolov8s_30x0";
  std::string img_path = "1383614316_78.jpg";
  Model model1 = Model(model_dir);
  Model model2 = Model(model_dir);
  Model model3 = Model(model_dir);
  std::thread t1(run, model1, std::ref(img_path));
  std::thread t2(run, model2, std::ref(img_path));
  std::thread t3(run, model3, std::ref(img_path));
  t1.join();
  t2.join();
  t3.join();
  return 0;
}
```

Environment

CUDA 11.8
cudnn 8.9.7.29
TensorRT 8.6.1.6
opencv 4.9.0
mmdeploy 1.3.1

Error traceback

When I use three threads:
thread id:31488 infer time: 64ms
thread id:33456 infer time: 67ms
thread id:32368 infer time: 79ms
thread id:31488 infer time: 35ms
thread id:33456 infer time: 44ms
thread id:32368 infer time: 53ms
thread id:33456 infer time: 37ms
thread id:31488 infer time: 57ms
thread id:32368 infer time: 48ms
thread id:33456 infer time: 37ms
thread id:31488 infer time: 51ms
thread id:32368 infer time: 42ms
thread id:33456 infer time: 57ms
thread id:31488 infer time: 49ms
thread id:32368 infer time: 44ms
thread id:33456 infer time: 40ms
thread id:31488 infer time: 48ms
thread id:32368 infer time: 48ms
thread id:33456 infer time: 47ms
thread id:31488 infer time: 40ms
thread id:32368 infer time: 56ms
thread id:33456 infer time: 50ms
thread id:31488 infer time: 48ms
thread id:32368 infer time: 44ms
thread id:33456 infer time: 51ms
thread id:31488 infer time: 48ms
thread id:32368 infer time: 47ms
thread id:33456 infer time: 49ms
thread id:33456 total time: 488ms
thread id:31488 infer time: 53ms
thread id:31488 total time: 508ms
thread id:32368 infer time: 39ms
thread id:32368 total time: 521ms

When I use 1 thread:
thread id:40164 infer time: 141ms
thread id:40164 infer time: 19ms
thread id:40164 infer time: 17ms
thread id:40164 infer time: 18ms
thread id:40164 infer time: 17ms
thread id:40164 infer time: 17ms
thread id:40164 infer time: 17ms
thread id:40164 infer time: 17ms
thread id:40164 infer time: 18ms
thread id:40164 infer time: 17ms
thread id:40164 total time: 313ms
@xzh929 xzh929 changed the title [Bug] Multithreading inference [Bug] Abnormal multithreading inference time Nov 21, 2024