
Modern gRPC Microservices: REST Gateways, Part 2

By adding a gRPC Gateway to your gRPC services and applying best practices, your services can now be exposed to clients using different platforms and protocols.

By Sriram Panyam · Feb. 08, 24 · Tutorial


As a quick recap, in Part 1:

  • We built a simple gRPC service for managing topics and messages in a chat service (like a very simple version of Zulip, Slack, or Teams).
  • gRPC provided a very easy way to represent the services and operations of this app.
  • We were able to serve (a very rudimentary implementation) from localhost on an arbitrary port (9000 by default) on a custom TCP protocol.
  • We were able to call the methods on these services both via a CLI utility (grpc_cli) as well as through generated clients (via tests).

The advantage of this approach is that any app/site/service can access this running server via a client (we could also generate JS or Swift or Java clients to make these calls in the respective environments).

At a high level, the downsides of this approach are:

  • Network access: Usually, a network request (from an app or a browser client to this service) has to traverse several networks over the internet. Most networks are secured by firewalls that only permit access to specific ports and protocols (80:http, 443:https), and having this custom port (and protocol) whitelisted on every firewall along the way may not be tractable.
  • Discomfort with non-standard tools: Familiarity and comfort with gRPC are still nascent outside the service-building community. For most service consumers, few things are easier and more accessible than HTTP-based tools (cURL, HTTPie, Postman, etc). Similarly, other enterprises/organizations are used to APIs exposed as RESTful endpoints, so having to build/integrate non-HTTP clients imposes a learning curve.

Use a Familiar Cover: gRPC-Gateway

We can have the best of both worlds by placing a proxy in front of our service that translates between gRPC and the familiar REST/HTTP for the outside world. Given the rich plugin ecosystem around gRPC, just such a plugin exists: the gRPC-Gateway. The repo itself contains a very in-depth set of examples and tutorials on how to integrate it into a service. In this guide, we shall apply it to our canonical chat service in small increments.

A very high-level image (courtesy of gRPC-Gateway) shows the final wrapper architecture around our service:

Final wrapper architecture for service

This approach has several benefits:

  1. Interoperability: Clients that need and only support HTTP(s) can now access our service with a familiar facade.
  2. Network support: Most corporate firewalls and networks rarely allow non-HTTP ports. With the gRPC-Gateway, this limitation can be eased as the services are now exposed via an HTTP proxy without any loss in translation.
  3. Client-side support: Today, several client-side libraries already support and enable REST, HTTP, and WebSocket communication with servers. Using the gRPC-Gateway, these existing tools (e.g., cURL, HTTPie, Postman) can be used as-is. Since no custom protocol is exposed beyond the gRPC-Gateway, the complexity of implementing clients for custom protocols is eliminated (e.g., no need to implement a gRPC client generator for Kotlin or Swift to support Android or iOS).
  4. Scalability: Standard HTTP load balancing techniques can be applied by placing a load-balancer in front of the gRPC-Gateway to distribute requests across multiple gRPC service hosts. Building a protocol/service-specific load balancer is not an easy or rewarding task.

Overview

You might have already guessed: protoc plugins again come to the rescue. In our service's Makefile (see Part 1), we generated messages and service stubs for Go using the protoc-gen-go plugin:

protoc --go_out=$OUT_DIR --go_opt=paths=source_relative               \
       --go-grpc_out=$OUT_DIR --go-grpc_opt=paths=source_relative     \
       --proto_path=$PROTO_DIR                                        \
        $PROTO_DIR/onehub/v1/*.proto


A Brief Introduction to Plugins

The magic of protoc is that it does not perform any generation on its own but orchestrates plugins by passing the parsed Abstract Syntax Tree (AST) across them. This is illustrated below:

Protoc orchestrates plugins by passing the parsed Abstract Syntax Tree (AST) across plugins

  • Step 0: Input files (in the above case, onehub/v1/*.proto) are passed to the protoc tool.
  • Step 1: The protoc tool first parses and validates all proto files.
  • Step 2: protoc then invokes each plugin in its list of command-line arguments in turn, passing it a serialized version of all the proto files it has parsed into an AST.
  • Step 3: Each plugin (in this case, go and go-grpc) reads this serialized AST via its stdin. The plugin processes/analyzes these AST representations and generates file artifacts.
    • Note that there does not need to be a 1:1 correspondence between input files (e.g., A.proto, B.proto, C.proto) and the output file artifacts a plugin generates. For example, a plugin may create a "single" unified file artifact encompassing all the information in all the input protos.
    • The plugin writes the generated file artifacts onto its stdout.
  • Step 4: The protoc tool captures each plugin's stdout and, for each generated file artifact, serializes it onto disk.

Questions

  • How does protoc know which plugins to invoke?

Any command line argument to protoc in the format --<pluginname>_out is a plugin indicator with the name "pluginname". In the above example, protoc would have encountered two plugins: go and go-grpc.

  • Where does protoc find the plugin?

protoc uses a convention of finding an executable with the name protoc-gen-<pluginname>. This executable must be found in one of the folders in the $PATH variable. Since plugins are just plain executables, they can be written in any language.

  • How can I serialize/deserialize the AST?

You do not need to work with the AST's wire format directly. protoc ships libraries (in several languages) that plugin executables can use to deserialize ASTs from stdin and serialize generated file artifacts onto stdout.
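
To make this concrete, here is a toy protoc plugin written in Go using the protogen helper library. Nothing about it is part of our service; the plugin name, output file suffix, and report contents are purely illustrative. Installed on your PATH as protoc-gen-report, protoc would invoke it via a --report_out flag.

package main

import (
    "fmt"

    "google.golang.org/protobuf/compiler/protogen"
)

// A toy protoc plugin: for every input proto, it emits a small text report
// listing the services it found. protogen handles reading the serialized
// AST from stdin and writing the generated files to stdout.
func main() {
    protogen.Options{}.Run(func(gen *protogen.Plugin) error {
        for _, f := range gen.Files {
            if !f.Generate {
                continue // skip files that were only imported as dependencies
            }
            out := gen.NewGeneratedFile(f.GeneratedFilenamePrefix+".report.txt", f.GoImportPath)
            out.P(fmt.Sprintf("File: %s", f.Desc.Path()))
            for _, svc := range f.Services {
                out.P("  service: ", svc.GoName)
            }
        }
        return nil
    })
}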

Setup

As you may have guessed (again), our plugins will also need to be installed before they can be invoked by protoc. We shall install the gRPC-Gateway plugins.

For a detailed set of instructions, follow the gRPC-Gateway installation setup. Briefly:

go get \
    github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
    github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
    google.golang.org/protobuf/cmd/protoc-gen-go \
    google.golang.org/grpc/cmd/protoc-gen-go-grpc

# An explicit install after the get is required
go install \
    github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
    github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
    google.golang.org/protobuf/cmd/protoc-gen-go \
    google.golang.org/grpc/cmd/protoc-gen-go-grpc


This will install the following four plugins in your $GOBIN folder:

  • protoc-gen-grpc-gateway - The gRPC Gateway generator
  • protoc-gen-openapiv2 - Swagger/OpenAPI spec generator
  • protoc-gen-go - The Go protobuf message generator
  • protoc-gen-go-grpc - The Go gRPC server stub and client generator

Make sure that your $GOBIN folder is in your $PATH.
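
A quick sanity check for the setup (a sketch assuming a standard Go installation; adjust if your GOBIN points elsewhere):

# Ensure Go's bin folder is on the PATH and all four plugins resolve.
export PATH="$(go env GOPATH)/bin:$PATH"
which protoc-gen-grpc-gateway protoc-gen-openapiv2 protoc-gen-go protoc-gen-go-grpc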

Add Makefile Targets

Assuming you are using the example from Part 1, add an extra target to the Makefile:

gwprotos:
    echo "Generating gRPC Gateway bindings and OpenAPI spec"
    protoc -I . --grpc-gateway_out $(OUT_DIR)                               \
    --grpc-gateway_opt logtostderr=true                                     \
    --grpc-gateway_opt paths=source_relative                                \
    --grpc-gateway_opt generate_unbound_methods=true                        \
    --proto_path=$(PROTO_DIR)/onehub/v1/                                    \
      $(PROTO_DIR)/onehub/v1/*.proto


Notice how the parameters are similar to the ones in Part 1 (when we were generating Go bindings). For each file X.proto, just like with the go and go-grpc plugins, an X.pb.gw.go file is created that contains the HTTP bindings for our service.

Customizing the Generated HTTP Bindings

In the previous section, .pb.gw.go files were created containing default HTTP bindings for our respective services and methods. This is because we had not provided any URL bindings, HTTP verbs (GET, POST, etc.), or parameter mappings. We shall address that shortcoming now by adding custom HTTP annotations to the service's definition.

While all our services have a similar structure, we will look at the Topic service for its HTTP annotations.

Topic service with HTTP annotations:

syntax = "proto3";
import "google/protobuf/field_mask.proto";

option go_package = "github.com/onehub/protos";
package onehub.v1;

import "onehub/v1/models.proto";
import "google/api/annotations.proto";

/**
 * Service for operating on topics
 */
service TopicService {
  /**
   * Create a new topic
   */
  rpc CreateTopic(CreateTopicRequest) returns (CreateTopicResponse) {
    option (google.api.http) = {
      post: "/v1/topics",
      body: "*",
    };
  }

  /**
   * List all topics from a user.
   */
  rpc ListTopics(ListTopicsRequest) returns (ListTopicsResponse) { 
    option (google.api.http) = {
      get: "/v1/topics"
    };
  }

  /**
   * Get a particular topic
   */
  rpc GetTopic(GetTopicRequest) returns (GetTopicResponse) { 
    option (google.api.http) = {
      get: "/v1/topics/{id=*}"
    };
  }

  /**
   * Batch get multiple topics by ID
   */
  rpc GetTopics(GetTopicsRequest) returns (GetTopicsResponse) { 
    option (google.api.http) = {
      get: "/v1/topics:batchGet"
    };
  }

  /**
   * Delete a particular topic
   */
  rpc DeleteTopic(DeleteTopicRequest) returns (DeleteTopicResponse) { 
    option (google.api.http) = {
      delete: "/v1/topics/{id=*}"
    };
  }

  /**
   * Updates specific fields of a topic
   */
  rpc UpdateTopic(UpdateTopicRequest) returns (UpdateTopicResponse) {
    option (google.api.http) = {
      patch: "/v1/topics/{topic.id=*}"
      body: "*"
    };
  }
}

/**
 * Topic creation request object
 */
message CreateTopicRequest {
  /**
   * Topic being updated
   */
  Topic topic = 1;
}

/**
 * Response of a topic creation.
 */
message CreateTopicResponse {
  /**
   * Topic being created
   */
  Topic topic = 1;
}

/**
 * A topic search request. For now, only pagination params are provided.
 */
message ListTopicsRequest {
  /**
   * Instead of an offset, an abstract "page" key is provided that offers
   * an opaque "pointer" into some offset in a result set.
   */
  string page_key = 1;

  /**
   * Number of results to return.
   */
  int32 page_size = 2;
}

/**
 * Response of a topic search/listing.
 */
message ListTopicsResponse {
  /**
   * The list of topics found as part of this response.
   */
  repeated Topic topics = 1;

  /**
   * The key/pointer string that subsequent List requests should pass to
   * continue the pagination.
   */
  string next_page_key = 2;
}

/**
 * Request to get a topic.
 */
message GetTopicRequest {
  /**
   * ID of the topic to be fetched
   */
  string id = 1;
}

/**
 * Topic get response
 */
message GetTopicResponse {
  Topic topic = 1;
}

/**
 * Request to batch get topics
 */
message GetTopicsRequest {
  /**
   * IDs of the topics to be fetched
   */
  repeated string ids = 1;
}

/**
 * Topic batch-get response
 */
message GetTopicsResponse {
  map<string, Topic> topics = 1;
}

/**
 * Request to delete a topic.
 */
message DeleteTopicRequest {
  /**
   * ID of the topic to be deleted.
   */
  string id = 1;
}

/**
 * Topic deletion response
 */
message DeleteTopicResponse {
}

/**
 * The request for (partially) updating a Topic.
 */
message UpdateTopicRequest {
  /**
   * Topic being updated
   */
  Topic topic = 1;

  /**
   * Mask of fields being updated in this Topic to make partial changes.
   */
  google.protobuf.FieldMask update_mask = 2;

  /**
   * IDs of users to be added to this topic.
   */
  repeated string add_users = 3;

  /**
   * IDs of users to be removed from this topic.
   */
  repeated string remove_users = 4;
}

/**
 * The response for (partially) updating a Topic.
 */
message UpdateTopicResponse {
  /**
   * Topic being updated
   */
  Topic topic = 1;
}


Instead of having "empty" method definitions (e.g., rpc MethodName(ReqType) returns (RespType) {}), we are now seeing "annotations" being added inside methods. Any number of annotations can be added, and each annotation is parsed by protoc and passed to all the plugins it invokes. There are tons of annotations that can be passed, covering a bit of everything.

Back to the HTTP bindings: typically, an HTTP annotation has a method (verb), a URL path (with bindings within { and }), and a marking to indicate what the body parameter maps to (for PUT and POST methods).

For example, the CreateTopic method is bound to a POST request on "/v1/topics" with the body (*) corresponding to the JSON representation of the CreateTopicRequest message type; i.e., our request is expected to look like this:

{
  "topic": {... topic object...}
}


Naturally, the response object of this would be the JSON representation of the CreateTopicResponse message.

The other examples in the Topic service, as well as in the other services, are reasonably intuitive. Feel free to read through them for the finer details. Before we move on to implementing the proxy in the next section, we need to regenerate the pb.gw.go files to incorporate these new bindings:

make all


We will now see the following error:

google/api/annotations.proto: File not found.
topics.proto:8:1: Import "google/api/annotations.proto" was not found or had errors.


Unfortunately, there is no "package manager" for protos at present. This void is being filled by an amazing tool: Buf.build (which will be the main topic in Part 3 of this series). In the meantime, we will resolve this by manually copying (shudder) http.proto and annotations.proto.

So, our protos folder will have the following structure:

protos
├── google
│   └── api
│       ├── annotations.proto
│       └── http.proto
└── onehub
    └── v1
        └── topics.proto
        └── messages.proto
        └── ...


However, we will follow a slightly different structure. Instead of copying files to the protos folder, we will create a vendors folder at the root and symlink to it from the protos folder (this symlinking will be taken care of by our Makefile). Our new folder structure is:

onehub
├── Makefile
├── ...
├── vendors
│   ├── google
│   │   └── api
│   │       ├── annotations.proto
│   │       └── http.proto
└── protos
    └── google -> onehub/vendors/google
    └── onehub
        └── v1
            └── topics.proto
            └── messages.proto
            └── ...


Our updated Makefile is shown below.

Makefile for HTTP bindings:


# Some vars to determine go locations etc
GOROOT=$(shell go env GOROOT)
GOPATH=$(HOME)/go
GOBIN=$(GOPATH)/bin

# Evaluates the abs path of the directory where this Makefile resides
SRC_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))

# Where the protos exist
PROTO_DIR:=$(SRC_DIR)/protos

# where we want to generate server stubs, clients etc
OUT_DIR:=$(SRC_DIR)/gen/go

all: createdirs printenv goprotos gwprotos openapiv2 cleanvendors

goprotos:
    echo "Generating GO bindings"
    protoc --go_out=$(OUT_DIR) --go_opt=paths=source_relative              \
       --go-grpc_out=$(OUT_DIR) --go-grpc_opt=paths=source_relative        \
       --proto_path=$(PROTO_DIR)                                                                             \
      $(PROTO_DIR)/onehub/v1/*.proto

gwprotos:
    echo "Generating gRPC Gateway bindings and OpenAPI spec"
    protoc -I . --grpc-gateway_out $(OUT_DIR)               \
        --grpc-gateway_opt logtostderr=true                   \
        --grpc-gateway_opt paths=source_relative              \
        --grpc-gateway_opt generate_unbound_methods=true      \
    --proto_path=$(PROTO_DIR)                                                                                 \
      $(PROTO_DIR)/onehub/v1/*.proto

openapiv2:
    echo "Generating OpenAPI specs"
    protoc -I . --openapiv2_out $(SRC_DIR)/gen/openapiv2      \
    --openapiv2_opt logtostderr=true                    \
    --openapiv2_opt generate_unbound_methods=true           \
    --openapiv2_opt allow_merge=true                    \
    --openapiv2_opt merge_file_name=allservices             \
    --proto_path=$(PROTO_DIR)                                                             \
      $(PROTO_DIR)/onehub/v1/*.proto

printenv:
    @echo MAKEFILE_LIST=$(MAKEFILE_LIST)
    @echo SRC_DIR=$(SRC_DIR)
    @echo PROTO_DIR=$(PROTO_DIR)
    @echo OUT_DIR=$(OUT_DIR)
    @echo GOROOT=$(GOROOT)
    @echo GOPATH=$(GOPATH)
    @echo GOBIN=$(GOBIN)

createdirs:
    rm -Rf $(OUT_DIR)
    mkdir -p $(OUT_DIR)
    mkdir -p $(SRC_DIR)/gen/openapiv2
    cd $(PROTO_DIR) && (                                                                                            \
         if [ ! -d google ]; then ln -s $(SRC_DIR)/vendors/google . ; fi    \
    )

cleanvendors:
    rm -f $(PROTO_DIR)/google


Now running Make should be error-free and result in the updated bindings in the .pb.gw.go files.

Implementing the HTTP Gateway Proxy

Lo and behold, we now have a "proxy" (in the .pb.gw.go files) that translates HTTP requests and converts them into gRPC requests. On the return path, gRPC responses are also translated to HTTP responses. What is now needed is a service that runs an HTTP server that continuously facilitates this translation.

We have now added a startGatewayServer method in cmd/server.go that also starts an HTTP server to do all this back-and-forth translation:

import (
  ... // previous imports

  // new imports
  "context"
  "net/http"
  "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

func startGatewayServer(grpc_addr string, gw_addr string) {

    ctx := context.Background()
    mux := runtime.NewServeMux()

    opts := []grpc.DialOption{grpc.WithInsecure()}
    // Register each server with the mux here
    if err := v1.RegisterTopicServiceHandlerFromEndpoint(ctx, mux, grpc_addr, opts); err != nil {
        log.Fatal(err)
    }
    if err := v1.RegisterMessageServiceHandlerFromEndpoint(ctx, mux, grpc_addr, opts); err != nil {
        log.Fatal(err)
    }

    if err := http.ListenAndServe(gw_addr, mux); err != nil {
        log.Fatal(err)
    }
}

func main() {
    flag.Parse()
    go startGRPCServer(*addr)
    startGatewayServer(*addr, *gw_addr)
}


In this implementation, we created a new runtime.ServeMux and registered each of our gRPC services' handlers using the v1.Register<ServiceName>HandlerFromEndpoint method. This method associates all of the URLs found in the <ServiceName> service's protos to this particular mux. Note how all these handlers are associated with the port on which the gRPC service is already running (port 9000 by default). Finally, the HTTP server is started on its own port (8080 by default).

You might be wondering why we are using the NewServeMux in the github.com/grpc-ecosystem/grpc-gateway/v2/runtime module and not the version in the standard library's net/http module.

This is because the grpc-gateway/v2/runtime module's ServeMux is customized to act specifically as a router for the underlying gRPC services it is fronting. It also accepts a list of ServeMuxOption values that act as middleware, intercepting an HTTP call while it is being converted into a gRPC message for the underlying gRPC service. Such middleware can be used to transparently set extra metadata needed by the gRPC service in a common way. We will see more about this in a future post about gRPC interceptors in this demo service.
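
As a small taste of that, here is a sketch of a ServeMuxOption that forwards a request header to the gRPC service as metadata. The header name and helper function are illustrative, not something the service defines so far:

import (
    "context"
    "net/http"

    "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
    "google.golang.org/grpc/metadata"
)

// newGatewayMux builds a ServeMux that copies a caller-supplied request ID
// header into gRPC metadata so the backing service (and its interceptors)
// can read it.
func newGatewayMux() *runtime.ServeMux {
    return runtime.NewServeMux(
        runtime.WithMetadata(func(ctx context.Context, r *http.Request) metadata.MD {
            return metadata.Pairs("x-request-id", r.Header.Get("X-Request-Id"))
        }),
    )
}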

Generating OpenAPI Specs

Several API consumers seek OpenAPI specs that describe RESTful endpoints (methods, verbs, body payloads, etc.). We can generate an OpenAPI spec file (previously called a Swagger file) that contains information about our service methods along with their HTTP bindings. Add another Makefile target:

openapiv2:
    echo "Generating OpenAPI specs"
    protoc -I . --openapiv2_out $(SRC_DIR)/gen/openapiv2            \
    --openapiv2_opt logtostderr=true                                         \
    --openapiv2_opt generate_unbound_methods=true                     \
    --openapiv2_opt allow_merge=true                                         \
    --openapiv2_opt merge_file_name=allservices                         \
    --proto_path=$(PROTO_DIR)                                             \
      $(PROTO_DIR)/onehub/v1/*.proto


Like all other plugins, the openapiv2 plugin generates one .swagger.json per .proto file. However, this changes the semantics: each Swagger file is treated as its own "endpoint," whereas in our case what we really want is a single endpoint that fronts all the services. In order to obtain a single "merged" Swagger file, we pass the allow_merge=true parameter to the above command. In addition, we also pass the name of the file to be generated (merge_file_name=allservices). This results in a gen/openapiv2/allservices.swagger.json file that can be read, visualized, and tested with SwaggerUI.
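
If you would like the gateway itself to serve this merged spec (handy for pointing SwaggerUI at a live endpoint), the mux can expose it directly. This is only a sketch; the route, helper name, and file location are assumptions based on the layout above:

import (
    "log"
    "net/http"

    "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

// registerOpenAPISpec exposes the merged OpenAPI spec on the gateway mux so
// any HTTP client (or SwaggerUI) can fetch it.
func registerOpenAPISpec(mux *runtime.ServeMux) {
    err := mux.HandlePath("GET", "/openapi.json",
        func(w http.ResponseWriter, r *http.Request, pathParams map[string]string) {
            http.ServeFile(w, r, "gen/openapiv2/allservices.swagger.json")
        })
    if err != nil {
        log.Fatal(err)
    }
}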

Start this new server, and you should see something like this:

onehub % go run cmd/server.go
Starting grpc endpoint on :9000:
Starting grpc gateway server on:  :8080


The additional HTTP gateway is now running on port 8080, which we will query next.

Testing It All Out

Now, instead of making grpc_cli calls, we can issue HTTP calls via the ubiquitous curl command (also make sure you install jq for pretty printing your JSON output):

Create a Topic

% curl -s -d '{"topic": {"name": "First Topic", "creator_id": "user1"}}' localhost:8080/v1/topics | jq
{
  "topic": {
    "createdAt": "2023-07-07T20:53:31.629771Z",
    "updatedAt": "2023-07-07T20:53:31.629771Z",
    "id": "1",
    "creatorId": "user1",
    "name": "First Topic",
    "users": []
  }
}


And another:

% curl -s localhost:8080/v1/topics -d '{"topic": {"name": "Urgent topic", "creator_id": "user2", "users": ["user1", "user2", "user3"]}}' |
 jq
{
  "topic": {
    "createdAt": "2023-07-07T20:56:52.567691Z",
    "updatedAt": "2023-07-07T20:56:52.567691Z",
    "id": "2",
    "creatorId": "user2",
    "name": "Urgent topic",
    "users": [
      "user1",
      "user2",
      "user3"
    ]
  }
}


List All Topics

% curl -s localhost:8080/v1/topics | jq
{
  "topics": [
    {
      "createdAt": "2023-07-07T20:53:31.629771Z",
      "updatedAt": "2023-07-07T20:53:31.629771Z",
      "id": "1",
      "creatorId": "user1",
      "name": "First Topic",
      "users": []
    },
    {
      "createdAt": "2023-07-07T20:56:52.567691Z",
      "updatedAt": "2023-07-07T20:56:52.567691Z",
      "id": "2",
      "creatorId": "user2",
      "name": "Urgent topic",
      "users": [
        "user1",
        "user2",
        "user3"
      ]
    }
  ],
  "nextPageKey": ""
}


Get Topics by IDs

Here, "list" values (e.g., ids) are possibly by repeating them as query parameters:

% curl -s "localhost:8080/v1/topics?ids=1&ids=2" | jq
{
  "topics": [
    {
      "createdAt": "2023-07-07T20:53:31.629771Z",
      "updatedAt": "2023-07-07T20:53:31.629771Z",
      "id": "1",
      "creatorId": "user1",
      "name": "First Topic",
      "users": []
    },
    {
      "createdAt": "2023-07-07T20:56:52.567691Z",
      "updatedAt": "2023-07-07T20:56:52.567691Z",
      "id": "2",
      "creatorId": "user2",
      "name": "Urgent topic",
      "users": [
        "user1",
        "user2",
        "user3"
      ]
    }
  ],
  "nextPageKey": ""
}


Delete a Topic Followed by a Listing

% curl -sX DELETE "localhost:8080/v1/topics/1" | jq
{}
% curl -s "localhost:8080/v1/topics" | jq
{
  "topics": [
    {
      "createdAt": "2023-07-07T20:56:52.567691Z",
      "updatedAt": "2023-07-07T20:56:52.567691Z",
      "id": "2",
      "creatorId": "user2",
      "name": "Urgent topic",
      "users": [
        "user1",
        "user2",
        "user3"
      ]
    }
  ],
  "nextPageKey": ""
}


Best Practices

Separation of Gateway and gRPC Endpoints

In our example, we served the Gateway and gRPC services on their own addresses. Instead, we could have had the Gateway invoke the gRPC service methods directly, i.e., by creating NewTopicService(nil) and invoking methods on it. However, running these two services separately means that other (internal) services can access the gRPC service directly instead of going through the Gateway, as sketched below. This separation of concerns also means the two services can be deployed separately (when on different hosts) instead of requiring a full upgrade of the entire stack.
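
For illustration, here is a sketch of how an internal service might call the gRPC endpoint directly, bypassing the gateway entirely. The import path of the generated package is an assumption based on this project's layout:

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    // Assumed import path for the stubs generated in Part 1.
    v1 "github.com/onehub/gen/go/onehub/v1"
)

// listTopicsDirectly dials the gRPC server (e.g., "localhost:9000") and lists
// topics without going through the HTTP gateway.
func listTopicsDirectly(grpcAddr string) {
    conn, err := grpc.Dial(grpcAddr, grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client := v1.NewTopicServiceClient(conn)
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    resp, err := client.ListTopics(ctx, &v1.ListTopicsRequest{})
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("found %d topics", len(resp.Topics))
}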

HTTPS Instead of HTTP

While in this example the startGatewayServer method started a plain HTTP server, it is highly recommended to serve the gateway over HTTPS for security, preventing man-in-the-middle attacks and protecting clients' data.
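
A minimal sketch of that change, assuming you have a certificate and key on disk (the paths below are placeholders):

import (
    "log"
    "net/http"

    "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

// serveGatewayTLS serves the gateway mux over HTTPS instead of plain HTTP.
// The certificate and key paths are placeholders for your own credentials.
func serveGatewayTLS(gwAddr string, mux *runtime.ServeMux) {
    if err := http.ListenAndServeTLS(gwAddr, "certs/server.crt", "certs/server.key", mux); err != nil {
        log.Fatal(err)
    }
}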

Use of Authentication

This example did not have any authentication built in. However, authentication (authn) and authorization (authz) are very important pillars of any service. The Gateway (and the gRPC service) are no exceptions to this. The use of middleware to handle authn and authz is critical to the gateway. Authentication can be applied with several mechanisms like OAuth2 and JWT to verify users before passing a request to the gRPC service. Alternatively, the tokens could be passed as metadata to the gRPC service, which can perform the validation before processing the request. The use of middleware in the Gateway (and interceptors in the gRPC service) will be shown in Part 4 of this series.

Caching for Improved Performance

Caching improves performance by avoiding database (or other heavy) lookups for data that is frequently accessed (and/or rarely modified). The Gateway server can also cache responses from the gRPC service (with possible expiration timeouts) to reduce the load on the gRPC server and improve response times for clients; a naive sketch follows the note below.

Note: Just like authentication, caching can also be performed at the gRPC server. However, this would not prevent excess calls that may otherwise have been prevented by the gateway service.
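
The sketch below illustrates the idea at the gateway: a naive in-memory TTL cache for GET requests wrapped around the gateway mux. Everything here (the names, the decision to cache every status code, the unbounded map) is illustrative only; a real deployment would bound the cache and honor cache-control headers:

import (
    "bytes"
    "net/http"
    "sync"
    "time"
)

// cachedResponse holds a captured response along with its expiry time.
type cachedResponse struct {
    status  int
    header  http.Header
    body    []byte
    expires time.Time
}

// recorder tees a handler's response so it can be stored in the cache while
// still being written to the real client.
type recorder struct {
    http.ResponseWriter
    status int
    buf    bytes.Buffer
}

func (r *recorder) WriteHeader(status int) {
    r.status = status
    r.ResponseWriter.WriteHeader(status)
}

func (r *recorder) Write(b []byte) (int, error) {
    r.buf.Write(b)
    return r.ResponseWriter.Write(b)
}

// withCache wraps a handler (e.g., the gateway mux) with a naive TTL cache
// keyed on the request URL. Only GET requests are cached.
func withCache(next http.Handler, ttl time.Duration) http.Handler {
    var mu sync.Mutex
    cache := map[string]cachedResponse{}

    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Method != http.MethodGet {
            next.ServeHTTP(w, r)
            return
        }
        key := r.URL.String()

        mu.Lock()
        entry, ok := cache[key]
        mu.Unlock()
        if ok && time.Now().Before(entry.expires) {
            // Serve the cached copy without touching the gRPC service.
            for k, vs := range entry.header {
                for _, v := range vs {
                    w.Header().Add(k, v)
                }
            }
            w.WriteHeader(entry.status)
            w.Write(entry.body)
            return
        }

        rec := &recorder{ResponseWriter: w, status: http.StatusOK}
        next.ServeHTTP(rec, r)

        mu.Lock()
        cache[key] = cachedResponse{
            status:  rec.status,
            header:  w.Header().Clone(),
            body:    rec.buf.Bytes(),
            expires: time.Now().Add(ttl),
        }
        mu.Unlock()
    })
}

It would be wired in by wrapping the mux, e.g., http.ListenAndServe(gw_addr, withCache(mux, 30*time.Second)).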

Using Load Balancers

While also applicable to gRPC servers, HTTP load balancers (in front of the Gateway) enable sharding to improve the scalability and reliability of our services, especially during high-traffic periods.

Conclusion

By adding a gRPC Gateway to your gRPC services and applying best practices, your services can now be exposed to clients using different platforms and protocols. Adhering to best practices also ensures reliability, security, and high performance.

In this article, we have:

  • Seen the benefits of wrapping our services with a Gateway service
  • Added HTTP bindings to an existing set of services
  • Learned the best practices for enacting Gateway services over your gRPC services

In the next post, we will take a small detour and introduce a modern tool for managing gRPC plugins and making it easy to work with them.


Published at DZone with permission of Sriram Panyam. See the original article here.

Opinions expressed by DZone contributors are their own.
