networking/tutorials.md
### <a name="pets"></a>Tutorial Application: The Pets App
In the following example, we will use a fictional app called **[Pets](https://github.com/mark-church/pets)** to illustrate the __Network Deployment Models__. It serves up images of pets on a web page while counting the number of hits to the page in a backend database. It is configurable via two environment variables, `DB` and `ROLE`.
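As a minimal sketch of how these variables might be supplied (the image name `pets-web`, the `ROLE` value, and the Redis address are illustrative assumptions, not taken from the Pets repository):

```bash
# Hypothetical invocation: `pets-web` stands in for an image built from
# the Pets repository. DB points the web tier at its backend database,
# and ROLE labels this particular instance.
$ docker run -d -e DB=redis:6379 -e ROLE=web-1 --name web pets-web
```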
We will explore the following network deployment models in this section: the bridge driver, the overlay driver, and the MACVLAN driver.
The first of these models is the default behavior of the built-in Docker `bridge` network driver. The `bridge` driver creates a private network internal to the host and provides an external port mapping on a host interface for external connectivity.
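A minimal sketch of this model, assuming a hypothetical `pets-web` image that listens on container port `5000`:

```bash
# Create a user-defined bridge network, private to this host
$ docker network create -d bridge catnet

# Attach a container and map container port 5000 to host port 8000,
# exposing the service on the host's external interface
$ docker run -d --net catnet -p 8000:5000 --name web pets-web
```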
```bash
$ docker network inspect catnet
```
In the output of this command, we can see that our two containers have automatically been given IP addresses from the `172.19.0.0/16` subnet. This is the subnet of the local `catnet` bridge, and it will provide every connected container an address from this range unless one is statically configured.
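Static configuration might look like the following sketch, which reuses the `catnet` subnet from this example (the container name and address are illustrative):

```bash
# Create the bridge with an explicit subnet so addresses are predictable
$ docker network create -d bridge --subnet 172.19.0.0/16 catnet

# Pin a container to a specific address within that subnet
$ docker run -d --net catnet --ip 172.19.0.10 --name cat-db redis
```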
Deploying a multi-host application requires some additional configuration so that distributed components can connect with each other. In the following example, we explicitly tell the `web` container the location of `redis` with the environment variable `DB=hostB:8001`. Another change is that we port map port `6379` inside the `redis` container to port `8001` on `hostB`. Without the port mapping, `redis` would only be accessible on its connected networks (the default `bridge` in this case).
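A sketch of the commands on each host, again assuming a hypothetical `pets-web` image on container port `5000`:

```bash
# On hostB: publish Redis's port 6379 on host port 8001 so containers
# on other hosts can reach it through hostB's interface
hostB $ docker run -d -p 8001:6379 --name redis redis

# On hostA: point the web container at the database via hostB's mapping
hostA $ docker run -d -p 8000:5000 -e DB=hostB:8001 --name web pets-web
```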
In the overlay driver example, we will see that multi-host service discovery is provided out of the box.
#### Bridge Driver Benefits and Use-Cases
- Very simple architecture promotes easy understanding and troubleshooting
- Widely deployed in current production environments
- Simple to deploy in any environment, from developer laptops to production data centers
The next model utilizes the built-in `overlay` driver to provide multi-host connectivity out of the box. The default settings of the overlay driver provide external connectivity to the outside world as well as internal connectivity and service discovery within a container application. The [Overlay Driver Architecture](#overlayarch) section covers the internals of the overlay driver, which you may want to review before reading this section.
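A minimal sketch, assuming a Swarm cluster and the hypothetical `pets-web` image (the network and service names are illustrative):

```bash
# On a swarm manager: create an overlay network spanning the cluster
$ docker network create -d overlay petsnet

# Services attached to the same overlay discover each other by name,
# so the web tier reaches `redis` without any cross-host port mapping
$ docker service create --network petsnet --name redis redis
$ docker service create --network petsnet --name web \
    -p 8000:5000 -e DB=redis:6379 pets-web
```

Note how, unlike the bridge example above, no explicit `DB=hostB:8001`-style wiring is needed: built-in service discovery resolves the `redis` name from any node in the cluster.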
#### Overlay Benefits and Use Cases
- Very simple multi-host connectivity for small and large deployments
- Provides service discovery and load balancing with no extra configuration or components
There may be cases where the application or network environment requires containers to have routable IP addresses that are a part of the underlay subnets. The MACVLAN driver provides an implementation that makes this possible. As described in the [MACVLAN Architecture section](#macvlan), a MACVLAN network binds itself to a host interface. This can be a physical interface, a logical sub-interface, or a bonded logical interface. It acts as a virtual switch and provides communication between containers on the same MACVLAN network. Each container receives a unique MAC address and an IP address of the physical network that the node is attached to.
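Creating such a network might look like this sketch; the parent interface `eth0`, the addressing, and the network name are assumptions about the underlay:

```bash
# Bind a MACVLAN network to the host's eth0; the subnet and gateway
# must match the physical segment that interface is attached to
$ docker network create -d macvlan \
    --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
    -o parent=eth0 dognet
```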
When `dog-web` communicates with `dog-db`, the physical network routes or switches the packet using the source and destination addresses of the containers. This can simplify network visibility, as the packet headers can be linked directly to specific containers. At the same time, application portability is decreased because container IPAM is tied to the physical network. Container addressing must follow the physical location of container placement, and overlapping address assignments must be prevented. Because of this, care must be taken to manage IPAM externally to a MACVLAN network. Overlapping IP addresses or incorrect subnets can lead to loss of container connectivity.
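One way to exercise that care, continuing the sketch above, is to assign each container a known address explicitly (the addresses and the `pets-web` image are assumptions):

```bash
# Pin each container to an address coordinated with the physical
# network's IPAM so nothing else on the segment overlaps with it
$ docker run -d --net dognet --ip 192.168.1.10 --name dog-web pets-web
$ docker run -d --net dognet --ip 192.168.1.11 --name dog-db redis
```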
#### MACVLAN Benefits and Use Cases
- Very low latency applications can benefit from the `macvlan` driver because it does not utilize NAT.
- MACVLAN can provide an IP per container, which may be a requirement in some environments.