
Envoy Node


This is a library to help you adopt Envoy in your Node.js application.

There are multiple ways to configure Envoy. One convenient way to manage different egress traffic is to route it by hostname (using virtual hosts). By doing so, you can use a single egress port for all your egress dependencies:

static_resources:
  listeners:
  - name: egress_listener
    address:
      socket_address: 
        address: 0.0.0.0
        port_value: 12345
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: AUTO
          use_remote_address: true
          stat_prefix: http.test.egress
          route_config:
            name: egress_route_config
            virtual_hosts:
            - name: foo_service
              domains:
              - foo.service:8888  # Do not miss the port number here
              routes:
              - match:
                  prefix: /
                route:
                  cluster: remote_foo_server
            - name: bar_service
              domains:
              - bar.service:8888  # Do not miss the port number here
              routes:
              - match:
                  prefix: /
                route:
                  cluster: remote_bar_server
          http_filters:
          - name: envoy.router
            config:
              dynamic_stats: true
          tracing:
            operation_name: EGRESS

But this brings a new problem: your code becomes verbose, because every call now involves

  1. routing traffic to 127.0.0.1:12345, where the egress port is listening
  2. setting the host header for each request
  3. propagating the tracing information

This library helps you deal with these things elegantly. For contrast, a hand-rolled egress call without the library might look like the sketch below.
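
A hypothetical sketch using Node's core http module (the tracing header names are the ones listed later in this document; the path, port, and payload are made up for illustration):

const http = require("http");

// a hand-rolled egress call, for illustration only
function callFooThroughEnvoy(req, body, callback) {
  const request = http.request(
    {
      host: "127.0.0.1",          // 1. talk to the local egress listener...
      port: 12345,                //    ...on the egress port
      method: "POST",
      path: "/path/to/rpc",
      headers: {
        host: "foo.service:8888", // 2. the host header drives the virtual host routing
        "content-type": "application/json",
        // 3. propagate the tracing headers by hand
        "x-request-id": req.headers["x-request-id"],
        "x-b3-traceid": req.headers["x-b3-traceid"],
        "x-b3-spanid": req.headers["x-b3-spanid"],
        "x-b3-sampled": req.headers["x-b3-sampled"],
      },
    },
    callback
  );
  request.end(JSON.stringify(body));
}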

First, let's tell the library where the egress port is bound. A recommended way is to set this information on the ingress headers via request_headers_to_add:

request_headers_to_add:
- header:
    key: x-tubi-envoy-egress-port
    value: "12345"
- header:
    key: x-tubi-envoy-egress-addr
    value: 127.0.0.1

You can also set this via the constructor parameters of EnvoyContext, as in the sketch below.
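
A minimal sketch (the port and address values are illustrative) of constructing the context with the egress information passed explicitly instead of read from the ingress headers; the parameter order follows the low-level API section below:

const { EnvoyContext } = require("envoy-node");

// construct the context from the incoming headers (or gRPC metadata),
// overriding the egress port and address explicitly
const context = new EnvoyContext(
  req.headers, // or call.metadata for gRPC
  12345,       // envoyEgressPort
  "127.0.0.1"  // envoyEgressAddr
);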

High level APIs

HTTP

For HTTP, you can create the client like this:

const { EnvoyHttpClient, HttpRetryOn } = require("envoy-node");

async function awesomeAPI(req, res) {
  const client = new EnvoyHttpClient(req.headers);
  const url = `http://foo.service:10080/path/to/rpc`;
  const request = {
    message: "ping",
  };
  const optionalParams = {
    // overall timeout: 1 second
    timeout: 1000,
    // Envoy will retry if the server returns HTTP 409 (for now)
    retryOn: [HttpRetryOn.RETRIABLE_4XX],
    // retry 3 times at most
    maxRetries: 3,
    // each retry will time out in 300 ms
    perTryTimeout: 300,
    // any other headers you want to set
    headers: {
      "x-extra-header-you-want": "value",
    },
  };
  const serializedJsonResponse = await client.post(url, request, optionalParams);
  res.send({ serializedJsonResponse });
  res.end();
}
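
A hypothetical sketch of wiring this handler into an Express app (Express itself is an assumption, as are the path and port; any framework that exposes the incoming request headers works the same way):

const express = require("express");

const app = express();
// mount the handler defined above; path and port are illustrative
app.post("/awesome", awesomeAPI);
app.listen(3000);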

gRPC

For gRPC, you can create the client like this:

General RPC

const grpc = require("grpc");
const { envoyProtoDecorator, GrpcRetryOn } = require("envoy-node");

const PROTO_PATH = __dirname + "/ping.proto";
const Ping = grpc.load(PROTO_PATH).test.Ping;

// the original client will be decorated as a new class
const PingClient = envoyProtoDecorator(Ping);

async function awesomeAPI(call, callback) {
  const client = new PingClient("bar.service:10081", call.metadata);
  const request = {
    message: "ping",
  };
  const optionalParams = {
    // overall timeout: 1 second
    timeout: 1000,
    // Envoy will retry if the server returns DEADLINE_EXCEEDED
    retryOn: [GrpcRetryOn.DEADLINE_EXCEEDED],
    // retry 3 times at most
    maxRetries: 3,
    // each retry will time out in 300 ms
    perTryTimeout: 300,
    // any other headers you want to set
    headers: {
      "x-extra-header-you-want": "value",
    },
  };
  const response = await client.pathToRpc(request, optionalParams);
  callback(undefined, { remoteResponse: response });
}
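
For context, a hypothetical sketch of how a handler like awesomeAPI might be registered on your own gRPC server (the service, method name, and port are assumptions based on the example above):

const grpc = require("grpc");

const PROTO_PATH = __dirname + "/ping.proto";
const Ping = grpc.load(PROTO_PATH).test.Ping;

const server = new grpc.Server();
// awesomeAPI(call, callback) from the example above serves the pathToRpc method
server.addService(Ping.service, { pathToRpc: awesomeAPI });
server.bind("0.0.0.0:8888", grpc.ServerCredentials.createInsecure());
server.start();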

Streaming API

The streaming APIs are also decorated to send the Envoy context. You can also specify the optional params (as the last argument) for features like timeout / retryOn / maxRetries / perTryTimeout provided by Envoy. In the snippets below, innerClient refers to an instance of a decorated client (e.g. new PingClient(...)).

NOTE:

  1. The streaming APIs are not implemented with an async signature.
  2. The optional params (timeout, etc.) are not tested, and Envoy does not document how it handles them for streaming.

Client streaming

const stream = innerClient.clientStream((err, response) => {
  if (err) {
    // error handling
    return;
  }
  console.log("server responses:", response);
});
stream.write({ message: "ping" });
stream.write({ message: "ping again" });
stream.end();

Server streaming

const stream = innerClient.serverStream({ message: "ping" });
stream.on("error", error => {
  // handle error here
});
stream.on("data", (data: any) => {
  console.log("server sent:", data);
});
stream.on("end", () => {
  // ended
});

Bidirectional streaming

const stream = innerClient.bidiStream();
stream.write({ message: "ping" });
stream.write({ message: "ping again" });
stream.on("error", error => {
  // handle error here
});
stream.on("data", (data: any) => {
  console.log("sever sent:", data);
});
stream.on("end", () => {
  stream.end();
});
stream.end();
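
As noted above, the Envoy control params can also be passed as the last argument of the streaming calls; this is untested, and the argument order shown here is an assumption:

// passing the optional params to a streaming call -- a sketch only
const stream = innerClient.serverStream({ message: "ping" }, { timeout: 1000 });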

Low level APIs

If you want more control over your code, you can also use the low-level APIs of this library:

const { envoyFetch, EnvoyContext, EnvoyHttpRequestParams, EnvoyGrpcRequestParams, envoyRequestParamsRefiner } = require("envoy-node");

// ...

const context = new EnvoyContext(
  headerOrMetadata,
  // specify the port if it cannot be determined from
  // - the `x-tubi-envoy-egress-port` header or
  // - the environment variable ENVOY_EGRESS_PORT
  envoyEgressPort,
  // specify the address if it cannot be determined from
  // - the `x-tubi-envoy-egress-addr` header or
  // - the environment variable ENVOY_EGRESS_ADDR
  envoyEgressAddr
);

// for HTTP
const params = new EnvoyHttpRequestParams(context, optionalParams);
envoyFetch(params, url, init /* init like original node-fetch */)
  .then(res => {
    console.log("envoy tells:", res.overloaded, res.upstreamServiceTime);
    return res.json(); // or res.text(), just use it as what node-fetch returned
  })
  .then(/* ... */)

// are you using the request library?
const yourOldRequestParams = {}; /* url or options */
request(envoyRequestParamsRefiner(yourOldRequestParams, context /* or headers, grpc.Metadata */));

// for gRPC
const grpcParams = new EnvoyGrpcRequestParams(context, optionalParams);
const client = new Ping(
  `${context.envoyEgressAddr}:${context.envoyEgressPort}`, // envoy egress address and port
  grpc.credentials.createInsecure()
);
const requestMetadata = grpcParams.assembleRequestMeta();
client.pathToRpc(
  request,
  requestMetadata,
  {
    host: "bar.service:10081"
  },
  (error, response) => {
    // ...
  });

Check out the detailed documentation if needed.

Context store

NOTE: this is an experimental feature; we are still investigating whether there is a memory leak issue.

Do you find it painful to propagate the context information through function call parameters?

If you are using Node.js v8 or later, here is a solution for you:

import { envoyContextStore } from "envoy-node"; // import the store

envoyContextStore.enable(); // call this once, when your application initializes

// for each request, call this exactly once:
envoyContextStore.set(new EnvoyContext(req.headers));

// later, to get the context of the current request, simply:
envoyContextStore.get();

IMPORTANT: due to the implementation, the set method must be called exactly once per request; otherwise you will get an incorrect context. Please check the documentation for more details. (TBD: we are working on a blog post covering the details.)
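
A minimal sketch, assuming an Express-style app, of calling set exactly once per incoming request and reading the context later down the call chain:

const { envoyContextStore, EnvoyContext, EnvoyHttpRequestParams } = require("envoy-node");

// hypothetical Express middleware: exactly one set() per incoming request
app.use((req, res, next) => {
  envoyContextStore.set(new EnvoyContext(req.headers));
  next();
});

// anywhere downstream in the same request's async execution chain:
function somewhereDeepInYourCode() {
  const context = envoyContextStore.get();
  // e.g. feed it into the low-level request params described above
  return new EnvoyHttpRequestParams(context, { timeout: 1000 });
}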

For dev and test, or migrating to Envoy

When developing the application, you probably do not have Envoy running and may want to call the services directly:

Either:

new EnvoyContext({
  meta: grpcMetadata_Or_HttpHeader,

  /**
   * For dev or test environments, we usually don't have Envoy running. Setting
   * directMode = true will make all the traffic be sent directly.
   * If you set directMode to true, envoyManagedHosts will be ignored and set to an empty set.
   */
  directMode: true,

  /**
   * To make it easier to migrate services to Envoy step by step, we can route traffic
   * to Envoy only for the services that have been migrated. Fill this set with the
   * migrated services.
   * This field defaults to `undefined`, which means all traffic will be routed to Envoy.
   * If this field is `undefined`, this library will also try to read it from the
   * `x-tubi-envoy-managed-host` header. You can set it in the Envoy config, like this:
   *
   * ```yaml
   * request_headers_to_add:
   * - header:
   *     key: x-tubi-envoy-managed-host
   *     value: hostname:12345
   * - header:
   *     key: x-tubi-envoy-managed-host
   *     value: foo.bar:8080
   * ```
   *
   * If you set this to an empty set, no traffic will be routed to Envoy.
   */
  envoyManagedHosts: new Set(["some-hostname:8080"]),

});

or:

export ENVOY_DIRECT_MODE=true # 1 works as well

Contributing

For developing or running the tests of this library, you will probably need to:

  1. have an envoy binary in your PATH, or:
    $ npm run download-envoy
    $ export PATH=./node_modules/.bin/:$PATH
    
  2. there is a bug in gRPC's typings; run the following to fix it:
    $ npm run fix-grpc-typing-bug
    
  3. to commit your code change:
    $ git add . # or the things you want to commit
    $ npm run commit # and answer the commit message accordingly
    
  4. for each commit, the CI will automatically release based on the commit messages; to keep the version aligned with Envoy, use fix instead of feature unless we want to bump the minor version.

License

MIT

Credits

Index

Type aliases

BidiStreamFunc

BidiStreamFunc: function

Type declaration

ClientStreamFunc

ClientStreamFunc: function

Type declaration

RequestFunc

RequestFunc: function

the API call

param: the request body

param: the options like timeout, retry, etc.

Type declaration

ServerStreamFunc

ServerStreamFunc: function

Type declaration

Variables

APPLICATION_JSON

APPLICATION_JSON: "application/json" = "application/json"

json header

ENVOY_DEFAULT_EGRESS_ADDR

ENVOY_DEFAULT_EGRESS_ADDR: "127.0.0.1" = "127.0.0.1"

ENVOY_DEFAULT_EGRESS_PORT

ENVOY_DEFAULT_EGRESS_PORT: 12345 = 12345

ENVOY_EGRESS_ADDR

ENVOY_EGRESS_ADDR: string = process.env.ENVOY_EGRESS_ADDR || ENVOY_DEFAULT_EGRESS_ADDR

ENVOY_EGRESS_PORT

ENVOY_EGRESS_PORT: number = parseInt(process.env.ENVOY_EGRESS_PORT || `${ENVOY_DEFAULT_EGRESS_PORT}`,10)

X_B3_FLAGS

X_B3_FLAGS: "x-b3-flags" = "x-b3-flags"

X_B3_PARENTSPANID

X_B3_PARENTSPANID: "x-b3-parentspanid" = "x-b3-parentspanid"

X_B3_SAMPLED

X_B3_SAMPLED: "x-b3-sampled" = "x-b3-sampled"

X_B3_SPANID

X_B3_SPANID: "x-b3-spanid" = "x-b3-spanid"

X_B3_TRACEID

X_B3_TRACEID: "x-b3-traceid" = "x-b3-traceid"

X_CLIENT_TRACE_ID

X_CLIENT_TRACE_ID: "x-client-trace-id" = "x-client-trace-id"

X_ENVOY_EXPECTED_RQ_TIMEOUT_MS

X_ENVOY_EXPECTED_RQ_TIMEOUT_MS: "x-envoy-expected-rq-timeout-ms" = "x-envoy-expected-rq-timeout-ms"

X_ENVOY_MAX_RETRIES

X_ENVOY_MAX_RETRIES: "x-envoy-max-retries" = "x-envoy-max-retries"

header of envoy max retries setting

X_ENVOY_OVERLOADED

X_ENVOY_OVERLOADED: "x-envoy-overloaded" = "x-envoy-overloaded"

the header returned by Envoy indicating that the upstream is overloaded

X_ENVOY_RETRY_GRPC_ON

X_ENVOY_RETRY_GRPC_ON: "x-envoy-retry-grpc-on" = "x-envoy-retry-grpc-on"

X_ENVOY_RETRY_ON

X_ENVOY_RETRY_ON: "x-envoy-retry-on" = "x-envoy-retry-on"

X_ENVOY_UPSTREAM_RQ_PER_TRY_TIMEOUT_MS

X_ENVOY_UPSTREAM_RQ_PER_TRY_TIMEOUT_MS: "x-envoy-upstream-rq-per-try-timeout-ms" = "x-envoy-upstream-rq-per-try-timeout-ms"

header of envoy timeout per try

X_ENVOY_UPSTREAM_RQ_TIMEOUT_MS

X_ENVOY_UPSTREAM_RQ_TIMEOUT_MS: "x-envoy-upstream-rq-timeout-ms" = "x-envoy-upstream-rq-timeout-ms"

header of envoy request timeout

X_ENVOY_UPSTREAM_SERVICE_TIME

X_ENVOY_UPSTREAM_SERVICE_TIME: "x-envoy-upstream-service-time" = "x-envoy-upstream-service-time"

the header returned by Envoy reporting the upstream service time

X_OT_SPAN_CONTEXT

X_OT_SPAN_CONTEXT: "x-ot-span-context" = "x-ot-span-context"

X_REQUEST_ID

X_REQUEST_ID: "x-request-id" = "x-request-id"

X_TUBI_ENVOY_EGRESS_ADDR

X_TUBI_ENVOY_EGRESS_ADDR: "x-tubi-envoy-egress-addr" = "x-tubi-envoy-egress-addr"

the header set in the Envoy config to tell this library the egress address

X_TUBI_ENVOY_EGRESS_PORT

X_TUBI_ENVOY_EGRESS_PORT: "x-tubi-envoy-egress-port" = "x-tubi-envoy-egress-port"

the header set in the Envoy config to tell this library the egress port

X_TUBI_ENVOY_MANAGED_HOST

X_TUBI_ENVOY_MANAGED_HOST: "x-tubi-envoy-managed-host" = "x-tubi-envoy-managed-host"

the optional header set in the Envoy config indicating that a host is managed by Envoy, so that this library can either route to Envoy or call the host directly

asyncHook

asyncHook: AsyncHook = asyncHooks.createHook({
  init(asyncId, type, triggerAsyncId, resource) {
    notDestroyedHooks++;
    /* istanbul ignore next */
    if (Date.now() - store.getLastEliminateTime() > eliminateInterval) {
      store.eliminate();
    }
    let triggerInfo = store.get(triggerAsyncId);
    if (!triggerInfo) {
      triggerInfo = new NodeInfo(-1);
      store.set(triggerAsyncId, triggerInfo);
    }
    triggerInfo.referenceCount++;
    const info = new NodeInfo(triggerAsyncId);
    store.set(asyncId, info);
  },
  destroy(asyncId) {
    notDestroyedHooks--;
    storeCleanUp(asyncId);
  }
})

eliminateInterval

eliminateInterval: number = 300 * 1000

enabled

enabled: boolean = false

notDestroyedHooks

notDestroyedHooks: number = 0

store

store: EliminateStore = new EliminateStore()

Functions

alwaysReturnArray

  • alwaysReturnArray(input: string | string[]): string[]

assignHeader

  • assignHeader(header: HttpHeader, key: string, value: string | number | undefined | null): void
  • assign a key/value pair to the header, skipping empty values

    Parameters

    • header: HttpHeader

      the http header

    • key: string

      the key

    • value: string | number | undefined | null

      the value

    Returns void

disable

  • disable(): void
  • Disable the context store; all data will be cleaned up as well. This function is not intended to be called in the application life cycle.

    Returns void

enable

  • enable(): void
  • Enable the context store. You should call this function as early as possible, i.e. at your application's start.

    Returns void

ensureItsEnvoyContextInit

envoyFetch

  • the fetch function shares most of the signature of the original node-fetch, but helps you set up the request being sent to the Envoy egress port

    Parameters

    • envoyParams: EnvoyHttpRequestParams

      the params of envoy context as well as request control params (timeout / retry, etc)

    • url: string

      the target url, the same as node-fetch's first param

    • Optional init: RequestInit

      the init, the same as node-fetch's second param

    Returns Promise<EnvoyResponse>

envoyProtoDecorator

envoyRequestParamsRefiner

  • to make it easier to migrate from HTTP requests made with the request library, you can use this function to refine the request params directly

    Parameters

    • params: string | Options

      request params, can be url string or params object

    • Optional ctx: EnvoyContext | Metadata | HttpHeader

      the context, can be EnvoyContext, grpc.Metadata or HttpHeader

    • Optional init: EnvoyHttpRequestInit

      the extra options for the request

    Returns Options

get

getContext

getDebugInfo

  • getDebugInfo(): string

getEliminateInterval

  • getEliminateInterval(): number

getStoreImpl

httpHeader2Metadata

  • httpHeader2Metadata(httpHeader: HttpHeader): Metadata

isEnabled

  • isEnabled(): boolean

makeAsyncFunc

  • this function assigns a new method to the decorated original client so that users can call the method more easily with an async signature

    Parameters

    • name: string

      the function name

    Returns RequestFunc

markContext

  • markContext(triggerAsyncId: number, context: EnvoyContext): void

readHeaderOrUndefined

  • readHeaderOrUndefined(header: HttpHeader, key: string): undefined | string
  • read the value of the key from the header; returns undefined if not found or empty; returns the first one if there are multiple values

    Parameters

    • header: HttpHeader

      the header

    • key: string

      the key

    Returns undefined | string

readMetaAsStringOrUndefined

  • readMetaAsStringOrUndefined(meta: Metadata, key: string): undefined | string
  • read the value of the key from the metadata; returns undefined if not found or empty; returns the first one if there are multiple values

    Parameters

    • meta: Metadata

      metadata

    • key: string

      key

    Returns undefined | string

refineManagedHostArray

  • refineManagedHostArray(hosts: string[]): string[]
  • some HTTP frameworks do a tricky thing: they merge the headers into one string; this function fixes that

    Parameters

    • hosts: string[]

      a list of hosts

    Returns string[]

set

  • Due to the context store design, this function must be called exactly once per request. Calling it multiple times will lead to context corruption.

    Parameters

    Returns void

setEliminateInterval

  • setEliminateInterval(interval: number): void
  • set the store's eliminate interval; context data older than this that has not been read will eventually be eliminated

    Parameters

    • interval: number

      time in milliseconds

    Returns void

storeCleanUp

  • storeCleanUp(asyncId: number): void
  • Cleaning up decreases the reference count. If the reference count reaches 0, the entry is removed from the store, its parent's reference count is decreased, and the parent is checked to see whether it needs to be cleaned up as well.

    Parameters

    • asyncId: number

      the asyncId of the execution needs to be cleaned up

    Returns void

wrapBidiStream

  • wrapBidiStream(name: string): (Anonymous function)
  • this wraps the original bidirectional stream function to insert the Envoy metadata

    Parameters

    • name: string

      the func name

    Returns (Anonymous function)

wrapClientStreamFunc

  • wrapClientStreamFunc(name: string): (Anonymous function)
  • this wraps the original client stream function to insert the Envoy metadata

    Parameters

    • name: string

      the name of the func

    Returns (Anonymous function)

wrapServerStream

  • wrapServerStream(name: string): (Anonymous function)
  • this wraps the original server stream function to insert the Envoy metadata

    Parameters

    • name: string

      the name of the func

    Returns (Anonymous function)

Generated using TypeDoc