gRPC Basics for Rust Developers

Syed Murtza

Engineer


In the world of modern software development, building efficient, scalable, and type-safe distributed systems is a necessity. Whether you’re working on microservices, mobile apps, or backend infrastructure, the way services communicate can make or break your architecture. gRPC is a high-performance, language-agnostic Remote Procedure Call (RPC) framework that’s gaining traction for its speed, efficiency, and versatility. For Rust developers, gRPC offers a particularly compelling combination of performance and safety, thanks to the Tonic library and Rust’s robust ecosystem.

In this blog post, we’ll explore the fundamentals of gRPC, its architecture and how to set it up in Rust. We’ll walk through creating a simple gRPC service, interacting with it from a client, and conclude with why gRPC is a game changer for microservices. Whether you’re new to gRPC or a seasoned Rustacean, this guide will help you get started and understand the power of gRPC in Rust.

What is gRPC and Why It’s Important

What is gRPC?

gRPC is an open-source Remote Procedure Call framework initially developed by Google. It allows you to define services and their methods using Protocol Buffers (protobuf), a language-agnostic binary serialization format. With gRPC, you can call methods on remote servers as if they were local functions, abstracting away the complexities of network communication.

Unlike traditional REST APIs, which typically rely on HTTP/1.1 and JSON, gRPC uses HTTP/2 as its transport protocol and Protocol Buffers for data serialization. This combination makes gRPC faster, more efficient and better suited for modern, high-performance systems.

gRPC Architecture Overview

Before diving into code, let’s understand how gRPC works under the hood. The gRPC architecture can be broken down into several key components:

  1. Protocol Buffers (.proto files):

    • You define your services, methods, and data structures in a .proto file using the Protocol Buffers language.
    • These files are compiled into language specific code (e.g. Rust structs and traits) using the protoc compiler.
  2. Service Definition:

    • A service is a collection of methods (RPCs) that a client can call. For example, a Greeter service might have a SayHello method.
    • Methods specify input and output messages (defined as protobuf messages).
  3. Client-Server Communication:

    • Client Stub: The generated client code (stub) allows the client to call server methods as if they were local functions.
    • Server Skeleton: The generated server code provides a skeleton (interface) that you implement with your business logic.
  4. HTTP/2 Transport:

    • gRPC uses HTTP/2 as its transport layer, enabling multiplexing, streaming, and efficient binary encoding.
    • HTTP/2’s features reduce latency and improve performance compared to HTTP/1.1.
  5. Serialization:

    • Data is serialized into binary Protocol Buffers, sent over the wire, and deserialized on the other side.
    • This process is fast, compact, and cheap to compute compared to text-based formats like JSON, making gRPC ideal for high throughput systems.
  6. Interceptors:

    • gRPC supports middleware (interceptors) for tasks like logging, authentication and metrics, adding flexibility to your services.

Here’s a visual overview of the flow:

Client App --> Client Stub --> HTTP/2 + Protobuf --> Server Skeleton --> Server App
  • Client App: Your application code calling gRPC methods.
  • Client Stub: Generated code that handles serialization and networking.
  • Server Skeleton: Generated code defining the service interface.
  • Server App: Your implementation of the service logic.

With this architecture, gRPC abstracts away the complexities of networking, serialization and error handling, letting you focus on your application logic.
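
To make the serialization step concrete, here is how the binary layout works for a simple string field, hand-encoded in plain Rust with no dependencies. The message shape mirrors the HelloRequest defined later in this post:

```rust
// Protobuf encodes each field as a tag byte (field_number << 3 | wire_type)
// followed by the payload. For field 1 with wire type 2 (length-delimited),
// the tag is 0x0A, then a varint length, then the raw UTF-8 bytes.
// This sketch assumes the name is shorter than 128 bytes, so the length
// fits in a single varint byte.
fn encode_name_field(name: &str) -> Vec<u8> {
    assert!(name.len() < 128);
    let mut buf = vec![0x0A, name.len() as u8];
    buf.extend_from_slice(name.as_bytes());
    buf
}

fn main() {
    let wire = encode_name_field("gRPC");
    // 6 bytes on the wire: tag, length, then "gRPC".
    assert_eq!(wire, [0x0A, 0x04, b'g', b'R', b'P', b'C']);
    // The equivalent JSON, {"name":"gRPC"}, takes 15 bytes before any
    // HTTP/1.1 header overhead.
    println!("{} bytes on the wire", wire.len());
}
```

In practice Prost generates this encoding for you; the point is that the wire format is a compact tag-length-value layout rather than self-describing text.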

Setting Up gRPC in Rust (protoc, Tonic)

To use gRPC in Rust, you’ll need to set up a few tools and libraries. Rust’s gRPC ecosystem is centered around Tonic, a high performance gRPC implementation built on top of Tokio (Rust’s async runtime) and Prost (a Protocol Buffers library). Here’s how to set it up:

Prerequisites

  1. Rust: Ensure you have Rust installed (use rustup if not: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh).
  2. Protocol Buffers Compiler (protoc): Install protoc to compile .proto files:
    • macOS: brew install protobuf
    • Ubuntu: sudo apt install protobuf-compiler
    • Windows: Download from GitHub and add to PATH.

Step 1: Create a New Rust Project

Start by creating a new Rust project:

cargo new helloworld-grpc
cd helloworld-grpc

Step 2: Add Dependencies

Edit Cargo.toml to include Tonic and related dependencies:

[package]
name = "helloworld-grpc"
version = "0.1.0"
edition = "2024"

[dependencies]
tonic = "0.12.3"
prost = "0.13.5"
tokio = { version = "1.44.0", features = ["macros", "rt-multi-thread"] }
tokio-stream = "0.1.17"

[build-dependencies]
tonic-build = "0.12.3"
  • tonic: Core gRPC library for Rust.
  • prost: Protocol Buffers implementation used by Tonic.
  • tokio: Async runtime for running gRPC servers and clients.
  • tokio-stream: Utilities for working with streams (e.g., server-side streaming).
  • tonic-build: Build-time dependency to compile .proto files.

Step 3: Configure Code Generation

To compile .proto files into Rust code, create a build.rs file in the project root:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/helloworld.proto")?;
    Ok(())
}

This tells Cargo to run tonic-build during the build process, generating Rust code from your .proto file.
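
If you later need more control over code generation, tonic-build also exposes a builder-style API. This build.rs variant is a sketch of the same step with the client/server generation toggles (which default to on) spelled out explicitly:

```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Equivalent to the one-liner above; the builder lets you toggle
    // which sides get generated, among other options.
    tonic_build::configure()
        .build_server(true)
        .build_client(true)
        .compile_protos(&["proto/helloworld.proto"], &["proto"])?;
    Ok(())
}
```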

Step 4: Organize Your Project

Create a proto directory to hold your .proto files:

mkdir proto

With this setup, you’re ready to define and implement gRPC services in Rust. Tonic makes it seamless by leveraging Rust’s async/await syntax and type system, ensuring both performance and safety.

Writing a Simple gRPC Service in Rust

Now that our project is set up, let’s create a simple gRPC service called Greeter. The service will have two methods:

  • SayHello: A unary RPC that takes a name and returns a greeting.
  • SayHelloStream: A server-streaming RPC that sends a stream of numbered greetings.

Step 1: Define the Service in proto/helloworld.proto

Create proto/helloworld.proto with the following content:

syntax = "proto3";

package helloworld;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
  rpc SayHelloStream (HelloRequest) returns (stream HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
  • syntax = "proto3": Uses Protocol Buffers version 3.
  • package helloworld: Defines the namespace for generated code.
  • service Greeter: Defines the Greeter service with two methods.
  • message HelloRequest and message HelloReply: Define the input and output data structures.

Step 2: Generate Rust Code

Run a build to generate Rust code from the .proto file:

cargo build

This creates Rust structs, traits, and other code in target/..., accessible via the helloworld module in your Rust code.

Step 3: Implement the Server

Create src/server.rs to implement the Greeter service. We’ll use Tokio for async runtime and Tonic for gRPC:

use tokio::sync::mpsc;
use tonic::{transport::Server, Request, Response, Status};

use hello_world::greeter_server::{Greeter, GreeterServer};
use hello_world::{HelloReply, HelloRequest};

use tokio::time::{sleep, Duration};
use tokio_stream::wrappers::ReceiverStream;

pub mod hello_world {
    tonic::include_proto!("helloworld");
}

#[derive(Debug, Default)]
pub struct MyGreeter {}

#[tonic::async_trait]
impl Greeter for MyGreeter {
    type SayHelloStreamStream = ReceiverStream<Result<HelloReply, Status>>;

    async fn say_hello(
        &self,
        request: Request<HelloRequest>,
    ) -> Result<Response<HelloReply>, Status> {
        println!("Got a request: {:?}", request);
        let name = request.into_inner().name;
        let reply = HelloReply {
            message: format!("Hello {}!", name),
        };

        Ok(Response::new(reply))
    }

    // server-streaming RPC
    async fn say_hello_stream(
        &self,
        request: Request<HelloRequest>,
    ) -> Result<Response<Self::SayHelloStreamStream>, Status> {
        let name = request.into_inner().name;

        let (tx, rx) = mpsc::channel(4);
        
        tokio::spawn(async move {
            let greetings = vec![
                format!("Hello, {}! (1/3)", name),
                format!("Hi again, {}! (2/3)", name),
                format!("Greetings, {}! (3/3)", name),
            ];

            for greeting in greetings {
                if tx.send(Ok(HelloReply { message: greeting })).await.is_err() {
                    break;
                }
                sleep(Duration::from_secs(1)).await;
            }
        });

        Ok(Response::new(ReceiverStream::new(rx)))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = "[::1]:50051".parse()?;
    let greeter = MyGreeter::default();

    println!("Server listening on {}", addr);

    Server::builder()
        .add_service(GreeterServer::new(greeter))
        .serve(addr)
        .await?;

    Ok(())
}

Key Points:

  • tonic::include_proto!("helloworld"): Includes the generated code from helloworld.proto.
  • MyGreeter: A struct implementing the Greeter trait (generated by Tonic).
  • say_hello: Implements the unary RPC, returning a single HelloReply.
  • say_hello_stream: Implements the server-streaming RPC, spawning a task that sends greetings into a Tokio mpsc channel, which is wrapped in a ReceiverStream and returned to the client.
  • Server::builder(): Sets up a Tonic server listening on port 50051.

Step 4: Configure Cargo to Run the Server

Edit Cargo.toml to define the server as a binary:

[[bin]]
name = "server"
path = "src/server.rs"

Now you can run the server:

cargo run --bin server

Output: Server listening on [::1]:50051

Testing Your gRPC Service with grpcurl on macOS

grpcurl is a powerful, curl-like command-line tool for interacting with gRPC servers. If you’re using macOS, you can easily install grpcurl using Homebrew:

brew install grpcurl

Testing a Unary Request

Once you’ve got grpcurl installed, you can test a unary (single request, single response) gRPC method SayHello using the following command:

grpcurl -plaintext -import-path ./proto -proto helloworld.proto -d '{"name": "gRPC"}' '[::1]:50051' helloworld.Greeter/SayHello

You should see a response similar to:

{
  "message": "Hello gRPC!"
}

Testing the Streaming Response

To test the server streaming method SayHelloStream (single request, multiple responses), use the following command:

grpcurl -plaintext -import-path ./proto -proto helloworld.proto -d '{"name": "gRPC Stream"}' '[::1]:50051' helloworld.Greeter/SayHelloStream

You’ll receive a stream of responses like this:

{
  "message": "Hello, gRPC Stream! (1/3)"
}
{
  "message": "Hi again, gRPC Stream! (2/3)"
}
{
  "message": "Greetings, gRPC Stream! (3/3)"
}

This straightforward process lets you efficiently test and validate your gRPC services directly from the command line.

Interacting with the Service from a Client

With the server running, let’s create a Rust client to interact with it. The client will call both SayHello and SayHelloStream, demonstrating unary and streaming RPCs.

Step 1: Implement the Client

Create src/client.rs:

use hello_world::greeter_client::GreeterClient;
use hello_world::HelloRequest;
use tokio_stream::StreamExt;

pub mod hello_world {
    tonic::include_proto!("helloworld");
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client = GreeterClient::connect("http://[::1]:50051").await?;

    // Unary RPC: SayHello
    let request = tonic::Request::new(HelloRequest {
        name: "Alice".into(),
    });

    let response = client.say_hello(request).await?;
    println!("Unary Response: {}", response.into_inner().message);


    // Streaming RPC: SayHelloStream
    let request = tonic::Request::new(HelloRequest {
        name: "Bob".to_string(),
    });
    let mut stream = client.say_hello_stream(request).await?.into_inner();

    while let Some(response) = stream.next().await {
        match response {
            Ok(reply) => {
                println!("Stream Response: {}", reply.message);
            }
            Err(e) => eprintln!("Stream Error: {}", e),
        }
    }

    Ok(())
}

Key Points:

  • GreeterClient::connect: Establishes a connection to the server.
  • say_hello: Calls the unary RPC and prints the response.
  • say_hello_stream: Calls the streaming RPC, processing responses as they arrive.

Step 2: Configure Cargo to Run the Client

Add the client as a binary in Cargo.toml:

[[bin]]
name = "client"
path = "src/client.rs"

Step 3: Run the Client

With the server running in one terminal, open another terminal and run the client:

cargo run --bin client

Output (the streamed replies arrive one per second, over ~3 seconds):

Unary Response: Hello Alice!
Stream Response: Hello, Bob! (1/3)
Stream Response: Hi again, Bob! (2/3)
Stream Response: Greetings, Bob! (3/3)

Advantages of gRPC for Microservices

We’ve walked through setting up a gRPC service in Rust, implementing a server, and interacting with it from a client. Now, let’s reflect on why gRPC is a game changer for microservices, especially in Rust:

  1. Performance:

    • gRPC’s use of HTTP/2 and Protocol Buffers makes it significantly faster than REST APIs, reducing latency and bandwidth usage. This is crucial for microservices, where services often communicate frequently over the network.
    • Rust’s zero-cost abstractions and Tonic’s async implementation ensure that your gRPC servers and clients are as fast as possible, with minimal memory overhead.
  2. Scalability:

    • HTTP/2’s multiplexing allows multiple RPCs to share a single connection, making gRPC ideal for high concurrency environments like microservices.
    • Streaming support enables efficient real-time communication, such as pushing updates or processing large datasets in chunks.
  3. Type Safety:

    • Protocol Buffers enforce a strongly typed schema, catching errors at compile time. This aligns perfectly with Rust’s safety guarantees, reducing runtime bugs and improving reliability in distributed systems.
  4. Language Interoperability:

    • gRPC’s language agnostic nature means your Rust backend can seamlessly communicate with clients or services written in Go, Java, Python, or any other supported language. This is invaluable in a microservices architecture where different teams may use different stacks.
  5. Security:

    • gRPC’s built-in TLS support ensures secure communication, a must for microservices deployed in untrusted environments. Rust’s memory safety adds an extra layer of protection against vulnerabilities.
  6. Developer Productivity:

    • Code generation from .proto files eliminates boilerplate, letting you focus on business logic. For Rust developers, Tonic’s integration with async/await makes writing performant, concurrent services intuitive and safe.

When to Use gRPC in Rust?

gRPC is particularly well-suited for:

  • Microservices: Where services need fast, type-safe, and scalable communication.
  • Real-Time Applications: Such as streaming data, chat apps, or live dashboards.
  • Mobile Backends: Where bandwidth and latency savings are critical (e.g. Android/iOS apps).
  • Polyglot Systems: Where Rust services need to interoperate with other languages.

Final Thoughts

gRPC, combined with Rust, offers a powerful toolkit for building modern, efficient, and safe distributed systems. By leveraging Tonic, Rust developers can create gRPC services that are fast, scalable and secure, with the added benefit of Rust’s compile time safety guarantees. Whether you’re building a microservices architecture, a real-time app, or a mobile backend, gRPC in Rust is a combination that’s hard to beat.

Ready to take your Rust skills to the next level? Try extending this example with TLS, adding more RPC methods, or integrating it with a frontend in another language. The possibilities are endless, and gRPC makes them all efficient and safe.

Happy coding, Rustaceans!
