(Part 2) Overengineering for a Startup: How not to use Google Firebase
I was fortunate enough to be selected to speak at the Google I/O Extended Cape Town conference earlier this year. The title of my talk for the conference was “Overengineering for a Startup: How not to use Google Firebase”.
In this talk, I shared some of the learnings from building for a startup: the current tech stack we’re using (the overengineered one alluded to in the title, which definitely comes with its pros and cons), the things I would’ve done differently given the lessons I’ve learned along the way, and a dive into some of the really cool features of Firebase.
The goal of this talk was to help other technical co-founders out there learn from my mistakes and steer them towards leveraging existing technologies to build and iterate quickly. In this series, I share the content of my talk in 2 parts:
- In the previous post, we explored some of the really cool features of Firebase as a product, and how Firebase can help you win as a startup by allowing you to focus on your core value proposition and build quickly.
- In this post, for the nerdier reader, we’ll look at our current tech stack, how I might have “overengineered” when approaching the build for our startup, and the pros and cons of this setup (don’t get me wrong, I still think our tech stack is sick!).
Our Current Setup
The nerd in me knows that our current setup is really cool. It’s got all the bells and whistles. Lekker CI/CD pipelines, scalable backends, some cool auth interceptors and some sleek, beautifully written code (in my unbiased opinion). But the startup founder in me knows that it’s not the best setup for our current phase of development. So let’s take a look at what we’ve got going on.
There are 3 core repositories:
- The `proto` repo - contains all of our protocol buffer definitions, defining our microservice APIs
    - CI/CD pipelines set up to automatically generate packages from these definitions in Go and TypeScript/NPM for use in the implementation and consumption of our APIs
- The `artbeat` repo - our core backend repository containing the implementation of our APIs, using gRPC, written in Go (using the generated Go package for implementation)
    - using Firestore as a database
    - custom authentication interceptors to manage auth
    - CI/CD pipelines to automatically build and deploy our microservices to Google Cloud Run in Development and Production environments
- The `artbeat-frontend` repo - our frontend implementation in React (using the generated TypeScript/NPM package for consumption)
    - Firebase Authentication to access the frontend and make requests to the backend
    - CI/CD pipelines to deploy to Firebase Hosting in Development and Production environments, and preview channels for pull requests
At this point in time, some of you may be asking, “Jason, what on earth is a Protocol Buffer and what is gRPC?” Let’s start off by looking at these 2 technologies before diving into the overall architecture and implementing an example.
What are Protocol Buffers and gRPC?
Protocol Buffers and gRPC give us a different mechanism for building out APIs. Although there is some fancy computational stuff going on under the hood that gives gRPC a performance edge over traditional REST APIs, for most developers that isn’t the number one reason to choose it. The real reason: developer experience and ease of use.
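To make that performance edge a little more concrete: a binary wire format carries only the field values, while JSON repeats field names and punctuation in every message. Here is a rough, stdlib-only sketch of the size difference (this uses a naive fixed-width encoding for illustration, not protobuf’s actual varint-and-tag wire format):

```go
package main

import (
    "bytes"
    "encoding/binary"
    "encoding/json"
    "fmt"
)

// Greeting is a hypothetical message with two small integer fields.
type Greeting struct {
    ID    int32 `json:"id"`
    Count int32 `json:"count"`
}

// encodedSizes returns the number of bytes the same message occupies
// as JSON versus a plain fixed-width binary encoding.
func encodedSizes() (jsonLen, binLen int) {
    g := Greeting{ID: 150, Count: 270}

    // JSON: field names, quotes and braces travel with every message.
    j, err := json.Marshal(g)
    if err != nil {
        panic(err)
    }

    // Binary: only the field values travel (protobuf goes further
    // still, using varints and compact field tags).
    var b bytes.Buffer
    binary.Write(&b, binary.LittleEndian, g.ID)
    binary.Write(&b, binary.LittleEndian, g.Count)

    return len(j), b.Len()
}

func main() {
    j, b := encodedSizes()
    fmt.Printf("JSON: %d bytes, binary: %d bytes\n", j, b)
}
```

The binary form here is 8 bytes (two int32 values) versus over 20 bytes of JSON for the same data, and the gap grows with message size.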
Let’s compare the implementation of a REST API and gRPC to get a feel for the difference in developer experience. We will implement a simple API endpoint that returns a greeting to the caller.
REST
To implement the above REST API in its simplest form in Go might look as follows:
package main

import (
    "fmt"
    "log"
    "net/http"
)

// sayHello generates a greeting message based on the provided name.
func sayHello(name string) string {
    if name == "" {
        name = "World"
    }
    return fmt.Sprintf("Hello, %s!", name)
}

// helloHandler handles the HTTP requests for the /hello endpoint.
func helloHandler(w http.ResponseWriter, r *http.Request) {
    // Get the 'name' query parameter
    name := r.URL.Query().Get("name")

    // Generate the greeting message
    message := sayHello(name)

    // Write the message to the response
    w.WriteHeader(http.StatusOK)
    w.Write([]byte(message))
}

func main() {
    // Handle the /hello endpoint
    http.HandleFunc("/hello", helloHandler)

    // Start the server on port 8080
    log.Println("Server starting on port 8080...")
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Here, we have a single endpoint “/hello” that accepts a “name” parameter in the URL and returns the greeting. There are a couple problems with the above implementation, however. Firstly, there is no API documentation immediately available. Secondly, the request and response types are not defined. We could improve the code to look as follows:
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// SayHelloRequest defines the expected input for the sayHello function.
type SayHelloRequest struct {
    Name string
}

// SayHelloResponse defines the format of the response message.
type SayHelloResponse struct {
    Message string `json:"message"`
}

// sayHello generates a greeting message based on the provided request.
func sayHello(req SayHelloRequest) SayHelloResponse {
    if req.Name == "" {
        req.Name = "World"
    }
    return SayHelloResponse{Message: fmt.Sprintf("Hello, %s!", req.Name)}
}

// helloHandler handles the HTTP requests for the /hello endpoint.
func helloHandler(w http.ResponseWriter, r *http.Request) {
    // Get the 'name' query parameter and create a SayHelloRequest
    req := SayHelloRequest{Name: r.URL.Query().Get("name")}

    // Generate the greeting response
    response := sayHello(req)

    // Set the content type as JSON
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)

    // Encode the response as JSON
    json.NewEncoder(w).Encode(response)
}

func main() {
    // Handle the /hello endpoint
    http.HandleFunc("/hello", helloHandler)

    // Start the server on port 8080
    log.Println("Server starting on port 8080...")
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Now, the request and response types are defined, but there is still no documentation. Furthermore, we can see that the evolution of the code went from implementation to definition. As any good programmer knows, the definition should precede the implementation, not the other way around. By first defining the problem well, and putting in the time to define the API, we ensure that the implementation is well thought out and likely to be more robust. When we dive into the implementation too early and only define things later, we are likely to run into unforeseen problems and constantly be chopping and changing the API. This can lead to breaking changes and a terrible developer experience for anyone using your API (and for your own sanity as the one developing it).
If we did want to add documentation to the above REST API, we could use a tool like Swagger. First, install the go-swagger CLI with `go get -u github.com/go-swagger/go-swagger/cmd/swagger`, and then adjust the code as follows:
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// @title Example API
// @version 1
// @description This is a sample server for a simple API.
// @host localhost:8080
// @BasePath /

// SayHelloRequest defines the expected input for the sayHello function.
// swagger:parameters sayHello
type SayHelloRequest struct {
    // The name of the person to greet.
    // in: query
    // required: false
    Name string `form:"name"`
}

// SayHelloResponse defines the format of the response message.
// swagger:response sayHelloResponse
type SayHelloResponse struct {
    // The greeting message.
    // in: body
    Message string `json:"message"`
}

// sayHello generates a greeting message based on the provided request.
func sayHello(req SayHelloRequest) SayHelloResponse {
    if req.Name == "" {
        req.Name = "World"
    }
    return SayHelloResponse{Message: fmt.Sprintf("Hello, %s!", req.Name)}
}

// helloHandler handles the HTTP requests for the /hello endpoint.
// @Summary Say hello
// @Description say hello to someone
// @ID say-hello
// @Accept json
// @Produce json
// @Param name query string false "Name to say hello to"
// @Success 200 {object} SayHelloResponse
// @Router /hello [get]
func helloHandler(w http.ResponseWriter, r *http.Request) {
    // Get the 'name' query parameter and create a SayHelloRequest
    req := SayHelloRequest{Name: r.URL.Query().Get("name")}

    // Generate the greeting response
    response := sayHello(req)

    // Set the content type as JSON
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)

    // Encode the response as JSON
    json.NewEncoder(w).Encode(response)
}

func main() {
    // Handle the /hello endpoint
    http.HandleFunc("/hello", helloHandler)

    // Start the server on port 8080
    log.Println("Server starting on port 8080...")
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Then run `swagger generate spec -o ./swagger.json --scan-models` to generate the swagger.json.
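The generated swagger.json would contain something along these lines (abridged and illustrative; the exact output depends on which annotations the tool picks up):

```json
{
  "swagger": "2.0",
  "info": { "title": "Example API", "version": "1" },
  "host": "localhost:8080",
  "basePath": "/",
  "paths": {
    "/hello": {
      "get": {
        "operationId": "sayHello",
        "parameters": [
          { "name": "name", "in": "query", "type": "string", "required": false }
        ],
        "responses": {
          "200": { "description": "The greeting message." }
        }
      }
    }
  }
}
```

Notice how much ceremony it took to get here: annotations scattered across the code, an extra tool to install, and an extra generation step to keep in sync.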
gRPC
When we compare this to how the same API would be defined and developed using Protocol Buffers and gRPC, we immediately see a superior developer experience emerging. With gRPC APIs, development is driven first from the definition, as specified in the Protocol Buffer.
Protocol Buffers are what is known as an Interface Definition Language (IDL): they are used to define the interface (or API). For the above API with the SayHello endpoint, the Protocol Buffer would look as follows:
syntax = "proto3";

package hello.v1alpha1;

option go_package = "github.com/jaebrownn/proto-example/protobuf/go/hello/v1alpha1";

// HelloService is a simple service consisting of methods to return greetings
// to the callers of the service.
service HelloService {
  // SayHello returns a greeting message to the caller of the method.
  rpc SayHello (SayHelloRequest) returns (SayHelloResponse) {};
}

// Request message for the SayHello method.
message SayHelloRequest {
  // The name of the person to greet.
  string name = 1;
}

// Response message for the SayHello method.
message SayHelloResponse {
  // The greeting.
  string greeting = 1;
}
It is clear that this file simply defines a service and some types. The service offers one method, SayHello, which expects an input type corresponding to the SayHelloRequest and returns an output of type SayHelloResponse.
From this definition, we can now generate code in a variety of languages. This generated code is known as the “client and server stubs”. The client code is what a client would use to connect to and call the services offered by your gRPC API, and the server stubs are the scaffolding and interfaces necessary to create the server, against which we simply need to implement the business logic and fulfil the interface.
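Code generation is typically driven by protoc with language-specific plugins (in our setup this runs in the proto repo’s CI/CD pipelines). A minimal local invocation for Go might look like this — the file path is illustrative:

```shell
# Install the protoc plugins for Go (protoc itself must already be on your PATH)
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

# Generate the message types and the client/server stubs from the definition
protoc --go_out=. --go-grpc_out=. hello/v1alpha1/hello.proto
```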
Let’s take a high-level look at what this client and server stub look like to better understand how they are used.
Server Stub
The part of the server stub most important for our understanding is as follows:
// HelloServiceServer is the server API for HelloService service.
// All implementations must embed UnimplementedHelloServiceServer
// for forward compatibility
type HelloServiceServer interface {
    // SayHello returns a greeting message to the caller of the method.
    SayHello(context.Context, *SayHelloRequest) (*SayHelloResponse, error)
    mustEmbedUnimplementedHelloServiceServer()
}

// UnimplementedHelloServiceServer must be embedded to have forward compatible implementations.
type UnimplementedHelloServiceServer struct{}

func (UnimplementedHelloServiceServer) SayHello(context.Context, *SayHelloRequest) (*SayHelloResponse, error) {
    return nil, status.Errorf(codes.Unimplemented, "method SayHello not implemented")
}

func (UnimplementedHelloServiceServer) mustEmbedUnimplementedHelloServiceServer() {}
Here, we can see that the server stub is simply an interface against which we need to implement the methods.
An implementation would look as follows:
package main

import (
    "context"

    pb "github.com/jaebrownn/proto-example/protobuf/go/hello/v1alpha1"
)

// HelloServiceServer is an implementation of hello.v1alpha1.HelloServiceServer
type HelloServiceServer struct {
    pb.UnimplementedHelloServiceServer
}

// SayHello returns a greeting message to the caller of the method.
func (s *HelloServiceServer) SayHello(ctx context.Context, req *pb.SayHelloRequest) (*pb.SayHelloResponse, error) {
    return &pb.SayHelloResponse{Greeting: "Hello " + req.Name}, nil
}
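To actually serve this implementation, we still need a main function that registers it with a gRPC server. A minimal sketch, living alongside the implementation above (the port is an arbitrary choice, and `pb` is the generated package):

```go
package main

import (
    "log"
    "net"

    "google.golang.org/grpc"

    pb "github.com/jaebrownn/proto-example/protobuf/go/hello/v1alpha1"
)

func main() {
    // Listen for incoming TCP connections on port 8080
    lis, err := net.Listen("tcp", ":8080")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    // Create a gRPC server and register our HelloServiceServer implementation
    s := grpc.NewServer()
    pb.RegisterHelloServiceServer(s, &HelloServiceServer{})

    log.Println("gRPC server starting on port 8080...")
    if err := s.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
```

`RegisterHelloServiceServer` is generated for us from the proto definition, so wiring up a new service is always the same few lines.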
Client
The client component of the generated code is as follows:
// HelloServiceClient is the client API for HelloService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type HelloServiceClient interface {
    // SayHello returns a greeting message to the caller of the method.
    SayHello(ctx context.Context, in *SayHelloRequest, opts ...grpc.CallOption) (*SayHelloResponse, error)
}

type helloServiceClient struct {
    cc grpc.ClientConnInterface
}

func NewHelloServiceClient(cc grpc.ClientConnInterface) HelloServiceClient {
    return &helloServiceClient{cc}
}

func (c *helloServiceClient) SayHello(ctx context.Context, in *SayHelloRequest, opts ...grpc.CallOption) (*SayHelloResponse, error) {
    out := new(SayHelloResponse)
    err := c.cc.Invoke(ctx, HelloService_SayHello_FullMethodName, in, out, opts...)
    if err != nil {
        return nil, err
    }
    return out, nil
}
This allows us to connect to and use a gRPC service. The `NewHelloServiceClient` function expects a gRPC client connection as input (where the connection contains the necessary security parameters and the endpoint on which the service is being hosted) and returns a structure that allows us to call the methods implemented by the service.
Using the client is thus as simple as importing the Go package that contains the client code, setting up the gRPC connection, instantiating the client, and calling the method as if it were a local function:
package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    pb "github.com/jaebrownn/proto-example/protobuf/go/hello/v1alpha1"
)

func main() {
    // Set up a connection to the server (insecure credentials for local development)
    conn, err := grpc.Dial("localhost:8080", grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("failed to connect: %v", err)
    }
    defer conn.Close()

    // Instantiate the client and call the method as if it were a local function
    client := pb.NewHelloServiceClient(conn)
    res, err := client.SayHello(context.Background(), &pb.SayHelloRequest{Name: "Jason"})
    if err != nil {
        log.Fatalf("SayHello failed: %v", err)
    }
    log.Println(res.GetGreeting())
}
For more information, please refer to the official documentation for Protocol Buffers and gRPC.
Overall architecture
Now that we’ve got a basic understanding of Protocol Buffers and gRPC, let’s dive into the details of how our 3 core repositories work together and look at the CI/CD pipelines and architecture in use.
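Pulling the repository breakdown above together, the high-level flow looks like this:

```
proto repo (protocol buffer definitions)
    │
    ├── CI/CD generates Go package ──────────────► artbeat repo
    │                                              gRPC microservices in Go, Firestore,
    │                                              auth interceptors;
    │                                              CI/CD deploys to Google Cloud Run
    │
    └── CI/CD generates TypeScript/NPM package ──► artbeat-frontend repo
                                                   React app, Firebase Authentication;
                                                   CI/CD deploys to Firebase Hosting
```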
The benefits of this setup
Although there is definitely some overengineering going on here, there are certainly some benefits:
- Resilient to change: APIs are independent of a service provider or Database, flexible for change in the long run
- Highly scalable: each microservice can scale independently
- Greater access control: certain developers can work on the frontend, others can make changes to API definitions and backends
- Clear separation of concerns: improved development speed and focus
- Experience with the stack: increased speed of implementation
The problems of this setup
- Time to get all of these pipelines in place, and time spent on technical overhead instead of on our core value proposition
- A large number of different moving parts that need to be monitored and managed
- Many processes to go through when trying to push out a feature or prototype an idea: define the API, implement it, design to dev, integrate with the frontend
- 90% of the backend is CRUD methods, while a small portion is super custom methods for interacting with third-party providers, setting up webhooks, etc.
How I might have done things differently
Put simply, I would ensure that ALL time is spent on solving the real business problem, getting that feedback loop short and prototyping as quickly as possible. I would achieve this by:
- Having as few pipelines and moving parts to monitor and manage as possible
- Leveraging existing technology and not reinventing the wheel
- Spending as little time as possible on technical overhead: container management, scaling, CI/CD pipelines etc.
The best way I could imagine doing this is using Flutterflow and Firebase.