Envoy Proxy on Windows Containers

Sotiris Nanopoulos
Published in Envoy Proxy · Sep 30, 2020

Recently the Envoy proxy announced its alpha release for the Windows platform! You can find the announcement here and the instructions for taking part in the Windows alpha here.

Envoy is an L7 proxy and communication bus designed for large, modern service-oriented architectures. In this blog post we will walk you through using Envoy and Windows containers to perform A/B testing on a website.

At the end of this post you should know:

  1. How to get started with Envoy proxy.
  2. How to use Envoy proxy as an HTTP proxy inside a Windows Server Container.
  3. How to use Envoy proxy as an edge proxy to split the traffic between different containers.

If you are new to Envoy, we would recommend the following material to onboard:

  1. Intro: Envoy — Matt Klein & Constance Caramanolis, Lyft
  2. The project documentation page at envoyproxy.io.

The customer scenario

In this demo we want to split the user traffic of our website between two different services. In practice, traffic splitting is useful for A/B testing and rolling deployments.

The architecture of the system is the following:

Envoy on Windows Containers demo architecture

To build the system we rely on two types of components. These components are:

  1. The Front-end Envoy container that sits at the edge of the network. This container balances the traffic between Service 1 and Service 2.
  2. The Service container. This container serves the front-end of our pets website. There are two different instances of the pets website. Each instance runs on a different service container. One instance of the website prints Dog images whereas the other instance prints Cat images.

All the containers are based on the Windows Server Core ltsc2019 image. For the application code we use Python 3 and Flask, although any other server technology would work as well.

Building and Running the Demo

In this section we will incrementally build the system. First we will build the two types of containers that we need. For each container we will configure the Envoy proxy and validate that it works in isolation. Finally, we will compose all the containers together in a single network and have the full architecture up and running.

The code is available at github.com/davinci26/windows-envoy-samples.

Requirements

To follow along the demo you will need the following:

  1. A Windows (Server) machine running version 2019 or later. Internally Envoy uses Unix domain sockets, and that feature is only available on Windows starting with Windows Server 2019. You might not run into issues with an older version, but you will be in uncharted waters. Also, if you are running this code on a virtual machine, make sure that you have nested virtualization enabled.
  2. An Envoy proxy static executable built from source. As Envoy for Windows moves from alpha → beta we will provide pre-built binaries, but for now you have to build Envoy from source. We have created this document to make the onboarding process a bit easier.
  3. Docker for Windows, with the Docker engine switched to Windows containers.

Setup the Service container

The service container is responsible for hosting the application code. For the service container we will use the following Dockerfile:

# Service Container Image
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Container Variables
ARG servicePath
# Setup Python
COPY ./setup_python.ps1 /
RUN powershell.exe .\\setup_python.ps1
RUN pip3 install -q Flask==0.11.1
# Copy local files for the flask server
RUN powershell -Command mkdir service/
ADD ${servicePath} service/
# Copy envoy and its configuration
RUN powershell -Command mkdir envoy-config/
ADD ./envoy-service-config.yaml ./envoy-config/envoy-service-config.yaml
ADD ./envoy-static.exe ./envoy-static.exe
# Set up the entrypoint
ADD ./service_entrypoint.ps1 ./
ENTRYPOINT powershell ./service_entrypoint.ps1

The setup we do on the container installs Python and Flask and copies Envoy together with its configuration. The entry point is a PowerShell script that spawns the Python Flask server and Envoy.

For the service container we use a minimal Envoy configuration that matches all the traffic coming to container port 8000 and forwards it to the Python Flask server running on port 8080. The Envoy configuration used is available here.
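The exact file lives in the sample repository, but a minimal sketch of this kind of sidecar configuration, assuming Envoy's v3 API and illustrative names such as local_service, looks roughly like this:

static_resources:
  listeners:
  - name: service_listener
    # Accept all traffic arriving on container port 8000
    address:
      socket_address: { address: 0.0.0.0, port_value: 8000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_service }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  # The Flask server listening on 127.0.0.1:8080 inside the same container
  - name: local_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }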

To build & run the container you need to use:

PS D:\envoy-test> docker build -f "./Dockerfile-service" -t "test:service" --build-arg servicePath=./service1 .
PS D:\envoy-test> docker run --publish 3000:8000 --detach --name bb test:service
PS D:\envoy-test> curl "http://localhost:3000" -UseBasicParsing
StatusCode        : 200
StatusDescription : OK

Now if you visit http://localhost:3000/ on your browser you should see a cute dog appearing on the screen.

Service 1 output on localhost: if you visit http://localhost:3000/ after running Service 1, you should see this output.

To run Service 2, build the Service 2 container by changing the build argument to --build-arg servicePath=./service2.

Setup the Front-end Envoy container

For the front-end Envoy container we will create a container that is similar to the Service container. The Dockerfile that we use is the following:

FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Copy envoy and its configuration
RUN powershell -Command mkdir envoy-config/
ADD ./envoy-frontend.yaml ./envoy-config/envoy-frontend.yaml
ADD ./envoy-static.exe ./envoy-static.exe
# Create a log folder to store the stats
RUN powershell -Command mkdir logs/
ENTRYPOINT ["envoy-static.exe", "-c", "./envoy-config/envoy-frontend.yaml", "--service-cluster", "front-envoy"

In this container we only run an Envoy proxy that handles splitting the traffic between Service 1 and Service 2.

To orchestrate the traffic split, we add the following snippet to Envoy’s configuration:

routes:
- match:
    prefix: "/"
    runtime_fraction:
      default_value:
        numerator: 50
        denominator: HUNDRED
      runtime_key: routing.traffic_shift.placeholder
  route:
    cluster: service1
- match:
    prefix: "/"
  route:
    cluster: service2

This configuration creates two matching rules for the traffic coming into the container. The first rule matches 50% of the traffic and routes it to Service 1. Envoy routes the remaining 50% of the traffic to Service 2. With this routing rule we perform A/B testing on the two services.
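The service1 and service2 clusters referenced by these routes also have to be defined in envoy-frontend.yaml. The real definitions are in the repository; a minimal sketch, assuming the clusters resolve the Docker network aliases service1 and service2 (created later in the compose file) on port 8000, could look like this:

clusters:
- name: service1
  connect_timeout: 0.25s
  # Resolve the Docker network alias "service1" via DNS
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: service1
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: service1, port_value: 8000 }
- name: service2
  connect_timeout: 0.25s
  # Resolve the Docker network alias "service2" via DNS
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: service2
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: service2, port_value: 8000 }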

To build & run the container you need to use:

PS D:\envoy-test> docker build -f "./Dockerfile-envoy" -t "test:envoy" .
PS D:\envoy-test> docker run --publish 3005:8081 --detach --name fe test:envoy
PS D:\envoy-test> curl "http://localhost:3005" -UseBasicParsing
StatusCode        : 200
StatusDescription : OK

Now if you visit http://localhost:3005/ in your browser you should see the Envoy admin dashboard.
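The admin dashboard shows up because the front-end configuration enables Envoy's admin interface on port 8081. Again, the repository has the actual file; a minimal sketch of such an admin block, with an illustrative log path under the logs/ folder created in the Dockerfile, is:

admin:
  # Write admin access logs into the logs/ folder created in the Dockerfile
  access_log_path: ./logs/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 8081 }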

Compose the network together

At this point every piece of the system is working. Now, we only need to assemble the network and deploy all the containers together. To achieve this we have created a docker compose file.

The docker-compose file is the following:

version: "3.7"
services:
front-envoy:
build:
context: .
dockerfile: Dockerfile-envoy
networks:
- envoymesh
expose:
- "8000"
- "8080"
- "8081"
ports:
- "3000:8080"
- "8081:8081"
depends_on:
- dog-service
- cat-service
dog-service:
build:
context: .
dockerfile: Dockerfile-service
args:
- servicePath=./service1/
expose:
- "8000"
networks:
envoymesh:
aliases:
- service1
environment:
- ServiceId=1
cat-service:
build:
context: .
dockerfile: Dockerfile-service
args:
- servicePath=./service2/
expose:
- "8000"
networks:
envoymesh:
aliases:
- service2
environment:
- ServiceId=2
networks:
envoymesh: {}

To build & run the composed multi-container application we run the following commands:

PS D:\envoy-test> docker-compose build --pull
PS D:\envoy-test> docker-compose up -d
PS D:\envoy-test> docker-compose ps
Name                       Command                          State   Ports
----------------------------------------------------------------------------------------------------------------------------
envoy-test_cat-service_1   cmd /S /C powershell ./ser ...   Up      8000/tcp
envoy-test_dog-service_1   cmd /S /C powershell ./ser ...   Up      8000/tcp
envoy-test_front-envoy_1   envoy-static.exe -c ./envo ...   Up      8000/tcp, 0.0.0.0:3000->8080/tcp, 0.0.0.0:8081->8081/tcp

Now if you visit http://localhost:3000/ you should see the output from either Service 1 (dog service) or Service 2 (cat service). Refreshing the page triggers another request that goes through the front-end Envoy proxy. Envoy forwards the request either to Service 1 or to Service 2.

Finally, we can visit the Envoy admin page hosted at http://localhost:8081 and see the stats that Envoy collects automatically.

Envoy admin page hosted on http://localhost:8081

For example, we can search for the upstream_rq_completed entry on the stats page. This entry tells us how many requests each service completed. You can learn more about how Envoy stats work in this blog post.

Recap

In this blog post we built a multi-container system to split the traffic of our website between two services. To achieve that we relied on Windows Server Core containers and Envoy Proxy. We incrementally built each type of container in our system and then composed them together.

Envoy on Windows is currently at an alpha stage and we look forward to hearing your feedback. If you encounter any issues running the code above, feel free to open an issue at github.com/envoyproxy/envoy. You can also reach out to the developers on the Envoy proxy Slack channel.
