Remove and abandon integration with Google’s Service Infrastructure APIs.
In late 2024, we did a detailed exploration of Google’s Service Infrastructure APIs. This yielded a website of documentation at serviceio.dev and led to the creation of IO.
IO was originally created to be an alternative to the outdated proxies used by Google’s Cloud Endpoints product. However, IO’s scope quickly expanded to include many features that Service Infrastructure didn’t support, or that improved on it with more modern components.
As detailed at serviceio.dev, Service Infrastructure provides these important API management capabilities:
- Configuration
- Logging and Monitoring
- Quotas and rate limiting
- API key checking
But as IO has developed, our perspective on the value that Service Infrastructure actually provides has changed.
- With its ingress and calling mode features, IO needed configuration that was far outside the scope of Google’s Service Config. To address this, we developed IO’s HCL-based configuration language, which offers richer semantics and is much easier to read and write.
- Service Infrastructure provides logging and monitoring through Google’s proprietary Cloud Logging and Cloud Monitoring APIs. That works, but more recently OpenTelemetry has emerged as a vendor-independent standard that has been easy to integrate with IO. To us, OpenTelemetry is preferable to Cloud Logging and Cloud Monitoring, and we expect this preference to be broadly shared among potential IO users.
- Quotas and rate limiting are useful, but support for them in IO is unfinished and is likely to build on available Envoy building blocks such as the Rate Limit Quota Service. Google’s Service Infrastructure relies on per-request checking (the Envoy docs refer to this as a kind of Global Rate Limiting), while Envoy supports both this and local rate limiting using per-proxy token buckets. It seems better now to address user needs directly by building on Envoy’s powerful ingredients than to try to adapt Google’s existing, less flexible solution.
- After all of this, the only thing left of Service Infrastructure that we might want to use is API key checking. But we don’t need all of Service Infrastructure for this; we can use Google’s API Keys API directly for free! For smaller deployments, we don’t even need an external API: we can configure IO directly to support a set of locally-defined API keys so there’s no requirement to call or manage an external service.
Pros
- Removing support for Service Infrastructure lets us build the API management features that we want without the baggage of existing and sometimes outdated Google practices. We can instead focus on IO’s HCL-based configuration language and develop exactly the configuration that we intend to use (no more, no less).
- Removing the dependency on Google’s APIs makes it clear that IO is usable anywhere.
- By abandoning Service Config and dropping calls to the Service Management APIs, users are no longer required to use Google’s outdated Service Compilation service to compile and manage their API configurations.
Cons
- This reduces opportunities to collaborate with Google and to provide easy alternative proxies to Google Cloud Endpoints users.
- Without the Service Management API, we will need to provide some other mechanism for distributing configuration to remote proxies.