I'm getting 80-100 ms response times for a very minimal RPC function. This is the example I'm using:
syntax = "proto3";
package hello;
message Message {
string data = 1;
}
service Hello {
rpc hello (Message) returns (Message);
rpc hi (Message) returns (Message);
}
use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4};

use crate::hello_mod::hello_server::HelloServer;
use hello_mod::{hello_server::Hello, Message};
use tonic::{async_trait, transport::Server, Request, Response, Status};

pub mod hello_mod {
    tonic::include_proto!("hello");
}

#[tokio::main]
async fn main() {
    let ip = Ipv4Addr::new(127, 0, 0, 1);
    let port = 9002;
    let address = SocketAddr::V4(SocketAddrV4::new(ip, port));

    Server::builder()
        .add_service(HelloServer::new(HelloService))
        .serve(address)
        .await
        .unwrap();
}

pub struct HelloService;

#[async_trait]
impl Hello for HelloService {
    async fn hello(&self, request: Request<Message>) -> Result<Response<Message>, Status> {
        let data = request.into_inner().data;
        let data = format!("Hello {}", data);
        Ok(Response::new(Message { data }))
    }

    async fn hi(&self, request: Request<Message>) -> Result<Response<Message>, Status> {
        let data = request.into_inner().data;
        let data = format!("Hi {}", data);
        Ok(Response::new(Message { data }))
    }
}
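For completeness: tonic::include_proto!("hello") assumes the proto is compiled at build time with tonic-build. A minimal build.rs for that, assuming the proto file is saved at proto/hello.proto and tonic-build is listed as a build dependency, would look roughly like this:

// build.rs: generate the Rust code that include_proto!("hello") pulls in.
// Assumes the proto above lives at proto/hello.proto (hypothetical path).
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/hello.proto")?;
    Ok(())
}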
I don't understand much of this flamegraph, but I hope it helps.
I've tried with multiple clients, and all of them get their responses in about 100 ms on average over localhost. I don't think 100 ms is normal for such a simple request, is it?
CPU usage seems fine; it only goes up, as expected, when a lot of requests are sent, but the response times go up as well.
It's 100 ms if only one request is sent at a time; if I send 1000 requests in parallel, each response arrives after about 1 second.
What can I do to improve its performance? I did try tweaking some of the Server configuration, but nothing is making a difference.
Or how can I find where the actual bottleneck is?
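For reference, a minimal way to time the calls from a plain tonic client instead of an external tool would look roughly like this (a sketch only; the hello_client module is what tonic-build generates from the proto above, and the address matches the server):

use std::time::Instant;

use hello_mod::{hello_client::HelloClient, Message};

pub mod hello_mod {
    tonic::include_proto!("hello");
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect once up front so connection setup isn't counted in the measurements.
    let mut client = HelloClient::connect("http://127.0.0.1:9002").await?;

    // Time a single request.
    let start = Instant::now();
    client.hello(Message { data: "world".into() }).await?;
    println!("single call: {:?}", start.elapsed());

    // Time 1000 requests sent in parallel over the same connection.
    let start = Instant::now();
    let handles: Vec<_> = (0..1000)
        .map(|i| {
            let mut c = client.clone();
            tokio::spawn(async move { c.hello(Message { data: i.to_string() }).await })
        })
        .collect();
    for handle in handles {
        handle.await??;
    }
    println!("1000 parallel calls finished in {:?}", start.elapsed());

    Ok(())
}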
Thank you
I guess the different clients I've used are the problem. If I use a tonic client against the tonic server, I get sub-millisecond response times, but with any other client or language the responses take around 100 ms. I've tried Postman and dart grpc, but I didn't expect that much of a difference.