
Commit d2c2c08

Fix MD errors
1 parent 9b92c02 commit d2c2c08

11 files changed: +35 −37 lines
+6-6
@@ -1,23 +1,23 @@

-##<a name="drivers"></a><a name="linuxnetworking"></a>Linux Network Fundamentals
+## <a name="drivers"></a><a name="linuxnetworking"></a>Linux Network Fundamentals

The Linux kernel features an extremely mature and performant implementation of the TCP/IP stack (in addition to other native kernel features like DNS and VXLAN). Docker networking uses the kernel's networking stack as low-level primitives to create higher-level network drivers. Simply put, _Docker networking <b>is</b> Linux networking._

This implementation of existing Linux kernel features ensures high performance and robustness. Most importantly, it provides portability across many distributions and versions, which enhances application portability.

There are several Linux networking building blocks which Docker uses to implement its built-in CNM network drivers. This list includes **Linux bridges**, **network namespaces**, **veth pairs**, and **iptables**. The combination of these tools, implemented as network drivers, provides the forwarding rules, network segmentation, and management tools for complex network policy.

-###<a name="linuxbridge"></a>The Linux Bridge
+### <a name="linuxbridge"></a>The Linux Bridge
A **Linux bridge** is a Layer 2 device that is the virtual implementation of a physical switch inside the Linux kernel. It forwards traffic based on MAC addresses which it learns dynamically by inspecting traffic. Linux bridges are used extensively in many of the Docker network drivers. A Linux bridge is not to be confused with the `bridge` Docker network driver, which is a higher-level implementation of the Linux bridge.


-###Network Namespaces
+### Network Namespaces
A Linux **network namespace** is an isolated network stack in the kernel with its own interfaces, routes, and firewall rules. It is a security aspect of containers and Linux, used to isolate containers. In networking terminology they are akin to a VRF that segments the network control and data plane inside the host. Network namespaces ensure that two containers on the same host will not be able to communicate with each other, or even with the host itself, unless configured to do so via Docker networks. Typically, CNM network drivers implement separate namespaces for each container. However, containers can share the same network namespace or even be a part of the host's network namespace. The host network namespace contains the host interfaces and host routing table. This network namespace is called the global network namespace.

-###Virtual Ethernet Devices
+### Virtual Ethernet Devices
A **virtual ethernet device** or **veth** is a Linux networking interface that acts as a connecting wire between two network namespaces. A veth is a full-duplex link that has a single interface in each namespace. Traffic in one interface is directed out the other interface. Docker network drivers utilize veths to provide explicit connections between namespaces when Docker networks are created. When a container is attached to a Docker network, one end of the veth is placed inside the container (usually seen as the `ethX` interface) while the other is attached to the Docker network.

-###iptables
+### iptables
**`iptables`** is the native packet filtering system that has been a part of the Linux kernel since version 2.4. It's a feature-rich L3/L4 firewall that provides rule chains for packet marking, masquerading, and dropping. The built-in Docker network drivers utilize `iptables` extensively to segment network traffic, provide host port mapping, and mark traffic for load balancing decisions.

-Next: **[Docker Network Control Plane](04-docker-network-cp.md)**
+Next: **[Docker Network Control Plane](04-docker-network-cp.md)**
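
These building blocks can be exercised directly with `iproute2`, independent of Docker. A minimal sketch, assuming a Linux host with `iproute2` and `iptables` available; the bridge, namespace, and veth names are illustrative rather than anything Docker itself creates:

```bash
# Create a Linux bridge (a virtual L2 switch) and bring it up
sudo ip link add name br-demo type bridge
sudo ip link set br-demo up

# Create an isolated network namespace
sudo ip netns add ns1

# Create a veth pair and move one end into the namespace
sudo ip link add veth-host type veth peer name veth-ns
sudo ip link set veth-ns netns ns1

# Attach the host end to the bridge and bring both ends up
sudo ip link set veth-host master br-demo
sudo ip link set veth-host up
sudo ip netns exec ns1 ip link set veth-ns up
sudo ip netns exec ns1 ip addr add 10.99.0.2/24 dev veth-ns

# Illustrative iptables rule permitting forwarding across the bridge
sudo iptables -A FORWARD -i br-demo -o br-demo -j ACCEPT
```

This is roughly the plumbing that the `bridge` driver automates for every container and network.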

networking/concepts/04-docker-network-cp.md

+2-4
@@ -1,11 +1,9 @@
-##<a name="controlplane"></a>Docker Network Control Plane
+## <a name="controlplane"></a>Docker Network Control Plane

The Docker-distributed network control plane manages the state of Swarm-scoped Docker networks in addition to propagating control plane data. It is a built-in capability of Docker Swarm clusters and does not require any extra components such as an external KV store. The control plane uses a [Gossip](https://en.wikipedia.org/wiki/Gossip_protocol) protocol based on [SWIM](https://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf) to propagate network state information and topology across Docker container clusters. The Gossip protocol is highly efficient at reaching eventual consistency within the cluster while maintaining constant rates of message size, failure detection times, and convergence time across very large scale clusters. This ensures that the network is able to scale across many nodes without introducing scaling issues such as slow convergence or false positive node failures.

The control plane is highly secure, providing confidentiality, integrity, and authentication through encrypted channels. It is also scoped per network which greatly reduces the updates that any given host will receive.

-<span class="float-right">
![Docker Network Control Plane](./img/gossip.png)
-</span>

It is composed of several components that work together to achieve fast convergence across large scale networks. The distributed nature of the control plane ensures that cluster controller failures don't affect network performance.

@@ -19,4 +17,4 @@ The Docker network control plane components are as follows:

> The Docker Network Control Plane is a component of [Swarm](https://docs.docker.com/engine/swarm/) and requires a Swarm cluster to operate.

-Next: **[Docker Bridge Network Driver Architecture](05-bridge-networks.md)**
+Next: **[Docker Bridge Network Driver Architecture](05-bridge-networks.md)**
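
Because the control plane is Swarm-scoped, a hedged sketch of the commands that bring it into play (the network name `demo-net` is illustrative):

```bash
# Initializing a Swarm also starts the gossip-based network control plane
docker swarm init

# A Swarm-scoped overlay network whose state the control plane propagates
docker network create -d overlay demo-net

# Peers appear in the inspect output on nodes where tasks of the network run
docker network inspect demo-net
```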

networking/concepts/05-bridge-networks.md

+5-5
@@ -1,8 +1,8 @@
-##<a name="drivers"></a>Docker Bridge Network Driver Architecture
+## <a name="drivers"></a>Docker Bridge Network Driver Architecture

This section explains the default Docker bridge network as well as user-defined bridge networks.

-###Default Docker Bridge Network
+### Default Docker Bridge Network
On any host running Docker Engine, there will, by default, be a local Docker network named `bridge`. This network is created using a `bridge` network driver which instantiates a Linux bridge called `docker0`. This may sound confusing.

- `bridge` is the name of the Docker network
@@ -57,7 +57,7 @@ By default `bridge` will be assigned one subnet from the ranges 172.[17-31].0.0/


-###<a name="userdefined"></a>User-Defined Bridge Networks
+### <a name="userdefined"></a>User-Defined Bridge Networks
In addition to the default networks, users can create their own networks called **user-defined networks** of any network driver type. In the case of user-defined `bridge` networks, Docker will create a new Linux bridge on the host. Unlike the default `bridge` network, user-defined networks support manual IP address and subnet assignment. If an assignment isn't given, then Docker's default IPAM driver will assign the next subnet available in the private IP space.

![User-Defined Bridge Network](./img/bridge2.png)
@@ -101,7 +101,7 @@ $ ip link
...
```

-###External and Internal Connectivity
+### External and Internal Connectivity
By default all containers on the same `bridge` driver network will have connectivity with each other without extra configuration. This is an aspect of most types of Docker networks. By virtue of the Docker network the containers are able to communicate across their network namespaces and (for multi-host drivers) across external networks as well. **Communication between different Docker networks is firewalled by default.** This is a fundamental security aspect that allows us to provide network policy using Docker networks. For example, in the figure above containers `c2` and `c3` have reachability but they cannot reach `c1`.

Docker `bridge` networks are not exposed on the external (underlay) host network by default. Container interfaces are given IPs on the private subnets of the bridge network. Containers communicating with the external network are port mapped or masqueraded so that their traffic uses an IP address of the host. The example below shows outbound and inbound container traffic passing between the host interface and a user-defined `bridge` network.
@@ -117,4 +117,4 @@ This previous diagram shows how port mapping and masquerading takes place on a h
Exposed ports can be configured using `--publish` in the Docker CLI or UCP. The diagram shows an exposed port with the container port `80` mapped to the host interface on port `5000`. The exposed container would be advertised at `192.168.0.2:5000`, and all traffic going to this interface:port would be sent to the container at `10.0.0.2:80`.


-Next: **[Overlay Driver Network Architecture](06-overlay-networks.md)**
+Next: **[Overlay Driver Network Architecture](06-overlay-networks.md)**
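
A short sketch of the user-defined `bridge` and port-mapping behavior described above; the network name, container name, image, and `10.0.0.0/24` subnet are illustrative:

```bash
# Create a user-defined bridge network with a manually assigned subnet
docker network create -d bridge --subnet 10.0.0.0/24 my-bridge

# Publish container port 80 on host port 5000, as in the example above
docker run -d --name web --network my-bridge -p 5000:80 nginx

# The DOCKER chain in the nat table holds the resulting DNAT and masquerade rules
sudo iptables -t nat -L DOCKER -n
```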

networking/concepts/07-macvlan.md

+2-2
@@ -28,7 +28,7 @@ PING 127.0.0.1 (127.0.0.1): 56 data bytes

As you can see in this diagram, `c1` and `c2` are attached via the MACVLAN network called `macvlan`, which is attached to `eth0` on the host.

-###VLAN Trunking with MACVLAN
+### VLAN Trunking with MACVLAN

Trunking 802.1q to a Linux host is notoriously painful for many in operations. It requires configuration file changes in order to be persistent through a reboot. If a bridge is involved, a physical NIC needs to be moved into the bridge, and the bridge then gets the IP address. The `macvlan` driver completely manages sub-interfaces and other components of the MACVLAN network through creation, destruction, and host reboots.

@@ -55,4 +55,4 @@ In the preceding configuration we've created two separate networks using the `ma

> Because multiple MAC addresses are living behind a single host interface, you might need to enable promiscuous mode on the interface, depending on the NIC's support for MAC filtering.

-Next: **[Host (Native) Network Driver](08-host-networking.md)**
+Next: **[Host (Native) Network Driver](08-host-networking.md)**
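
A hedged example of the sub-interface trunking described above; the parent interface `eth0`, the VLAN IDs, and the subnets are assumptions for illustration:

```bash
# MACVLAN network bound to VLAN 10 via the eth0.10 sub-interface
docker network create -d macvlan \
  --subnet 192.168.10.0/24 --gateway 192.168.10.1 \
  -o parent=eth0.10 macvlan10

# A second MACVLAN network on VLAN 20
docker network create -d macvlan \
  --subnet 192.168.20.0/24 --gateway 192.168.20.1 \
  -o parent=eth0.20 macvlan20

# A container on macvlan10 gets an address routable on that VLAN
docker run -d --name c1 --network macvlan10 --ip 192.168.10.2 nginx
```

The driver creates the `eth0.10` and `eth0.20` sub-interfaces if they do not already exist and cleans them up when the networks are deleted.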

networking/concepts/08-host-networking.md

+3-3
@@ -1,5 +1,5 @@

-##<a name="hostdriver"></a>Host (Native) Network Driver
+## <a name="hostdriver"></a>Host (Native) Network Driver

The `host` network driver connects a container directly to the host networking stack. Containers using the `host` driver reside in the same network namespace as the host itself. Thus, containers will have native bare-metal network performance at the cost of namespace isolation.

@@ -53,7 +53,7 @@ Every container using the `host` network will all share the same host interfaces

Full host access and no automated policy management may make the `host` driver a difficult fit as a general network driver. However, `host` does have some interesting properties that may be applicable for use cases such as ultra high performance applications, troubleshooting, or monitoring.

-##<a name="nonedriver"></a>None (Isolated) Network Driver
+## <a name="nonedriver"></a>None (Isolated) Network Driver

Similar to the `host` network driver, the `none` network driver is essentially an unmanaged networking option. Docker Engine will not create interfaces inside the container, establish port mapping, or install routes for connectivity. A container using `--net=none` will be completely isolated from other containers and the host. The networking admin or external tools must be responsible for providing this plumbing. In the following example we see that a container using `none` only has a loopback interface and no other interfaces.

@@ -75,4 +75,4 @@ Unlike the `host` driver, the `none` driver will create a separate namespace for

> Containers using `--net=none` or `--net=host` cannot be connected to any other Docker networks.

-Next: **[Physical Network Design Requirements](09-physical-networking.md)**
+Next: **[Physical Network Design Requirements](09-physical-networking.md)**
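
A brief sketch contrasting the two drivers; the `alpine` image and `ip addr` command are illustrative choices:

```bash
# host driver: the container shares the host's network namespace and sees the host interfaces
docker run --rm --net=host alpine ip addr

# none driver: the container gets its own namespace containing only a loopback interface
docker run --rm --net=none alpine ip addr
```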

networking/concepts/09-physical-networking.md

+3-3
@@ -1,4 +1,4 @@
-##<a name="requirements"></a>Physical Network Design Requirements
+## <a name="requirements"></a>Physical Network Design Requirements
Docker Datacenter and Docker networking are designed to run over common data center network infrastructure and topologies. Its centralized controller and fault-tolerant cluster guarantee compatibility across a wide range of network environments. The components that provide networking functionality (network provisioning, MAC learning, overlay encryption) are either a part of Docker Engine, UCP, or the Linux kernel itself. No extra components or special networking features are required to run any of the built-in Docker networking drivers.

More specifically, the Docker built-in network drivers have NO requirements for:
@@ -11,7 +11,7 @@ More specifically, the Docker built-in network drivers have NO requirements for:

This is in line with the Container Networking Model which promotes application portability across all environments while still achieving the performance and policy required of applications.

-##<a name="sd"></a>Service Discovery Design Considerations
+## <a name="sd"></a>Service Discovery Design Considerations

Docker uses embedded DNS to provide service discovery for containers running on a single Docker Engine and `tasks` running in a Docker Swarm. Docker Engine has an internal DNS server that provides name resolution to all of the containers on the host in user-defined bridge, overlay, and MACVLAN networks. Each Docker container (or `task` in Swarm mode) has a DNS resolver that forwards DNS queries to Docker Engine, which acts as a DNS server. Docker Engine then checks if the DNS query belongs to a container or `service` on network(s) that the requesting container belongs to. If it does, then Docker Engine looks up the IP address that matches a container, `task`, or `service`'s **name** in its key-value store and returns that IP or `service` Virtual IP (VIP) back to the requester.

@@ -23,7 +23,7 @@ If the destination container or `service` does not belong on same network(s) as

In this example there is a service of two containers called `myservice`. A second service (`client`) exists on the same network. The `client` executes two `curl` operations for `docker.com` and `myservice`. These are the resulting actions:

-
+
- DNS queries are initiated by `client` for `docker.com` and `myservice`.
- The container's built-in resolver intercepts the DNS queries on `127.0.0.11:53` and sends them to Docker Engine's DNS server.
- `myservice` resolves to the Virtual IP (VIP) of that service, which is internally load balanced to the individual task IP addresses. Container names will be resolved as well, albeit directly to their IP address.
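
A hedged sketch of this resolution flow; the overlay network name and images are illustrative, while `myservice` follows the name used in the text:

```bash
# An attachable overlay network shared by the service and a client container
docker network create -d overlay --attachable demo-net

# "myservice": two tasks behind a single Virtual IP (VIP)
docker service create --name myservice --replicas 2 --network demo-net nginx

# The client's embedded resolver at 127.0.0.11 forwards the query to Docker Engine's DNS server
docker run --rm --network demo-net alpine nslookup myservice
```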

networking/concepts/10-load-balancing.md

+3-3
@@ -1,8 +1,8 @@
-##<a name="lb"></a>Load Balancing Design Considerations
+## <a name="lb"></a>Load Balancing Design Considerations

Load balancing is a major requirement in modern, distributed applications. Docker Swarm mode, introduced in 1.12, comes with native internal and external load balancing functionality that utilizes both `iptables` and `ipvs`, a transport-layer load balancer inside the Linux kernel.

-###Internal Load Balancing
+### Internal Load Balancing
When services are created in a Docker Swarm cluster, they are automatically assigned a Virtual IP (VIP) that is part of the service's network. The VIP is returned when resolving the service's name. Traffic to that VIP will be automatically sent to all healthy tasks of that service across the overlay network. This approach avoids any client-side load balancing because only a single IP is returned to the client. Docker takes care of routing and equally distributing the traffic across the healthy service tasks.


@@ -57,4 +57,4 @@ This diagram illustrates how the Routing Mesh works.
- Traffic destined for the `app` can enter on any host. In this case the external LB sends the traffic to a host without a service replica.
- The kernel's IPVS load balancer redirects traffic on the `ingress` overlay network to a healthy service replica.

-Next: **[Network Security and Encryption Design Considerations](11-security.md)**
+Next: **[Network Security and Encryption Design Considerations](11-security.md)**
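
A minimal example of the routing mesh behavior described above; the service name, replica count, ports, and image are illustrative:

```bash
# Publishing a port places the service in the ingress routing mesh on every node
docker service create --name app --replicas 3 --publish 8080:80 nginx

# Traffic arriving at any node on port 8080 is redirected by IPVS to a healthy replica,
# even if no replica runs on the node that received it
curl http://<any-node-ip>:8080
```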

networking/concepts/11-security.md

+5-5
@@ -1,16 +1,16 @@
-##<a name="security"></a>Network Security and Encryption Design Considerations
+## <a name="security"></a>Network Security and Encryption Design Considerations

Network security is a top-of-mind consideration when designing and implementing containerized workloads with Docker. In this section, we will go over three key design considerations that are typically raised around Docker network security and how you can utilize Docker features and best practices to address them.

-###Container Networking Segmentation
+### Container Networking Segmentation

Docker allows you to create an isolated network per application using the `overlay` driver. By default, different Docker networks are firewalled from each other. This approach provides true network isolation at Layer 3. No malicious container can communicate with your application's container unless it's on the same network or your applications' containers expose services on the host port. Therefore, creating networks for each application adds another layer of security. The principle of "Defense in Depth" still recommends application-level security to protect at L3 and L7.

-###Securing the Control Plane
+### Securing the Control Plane

Docker Swarm comes with integrated PKI. All managers and nodes in the Swarm have a cryptographically signed identity in the form of a signed certificate. All manager-to-manager and manager-to-node control communication is secured out of the box with TLS. There is no need to generate certs externally or set up any CAs manually to get end-to-end control plane traffic secured in Docker Swarm mode. Certificates are periodically and automatically rotated.

-###Securing the Data Plane
+### Securing the Data Plane

In Docker Swarm mode the data path (e.g. application traffic) can be encrypted out-of-the-box. This feature uses IPSec tunnels to encrypt network traffic as it leaves the source container and decrypts it as it enters the destination container. This ensures that your application traffic is highly secure when it's in transit regardless of the underlying networks. In a hybrid, multi-tenant, or multi-cloud environment, it is crucial to ensure data is secure as it traverses networks you might not have control over.

@@ -22,4 +22,4 @@ This feature works with the `overlay` driver in Swarm mode only and can be enabl

The Swarm leader periodically regenerates a symmetric key and distributes it securely to all cluster nodes. This key is used by IPSec to encrypt and decrypt data plane traffic. The encryption is implemented via IPSec in host-to-host transport mode using AES-GCM.

-Next: **[IP Address Management](12-ipaddress-management.md)**
+Next: **[IP Address Management](12-ipaddress-management.md)**
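
Data plane encryption is enabled per network at creation time; a short example with illustrative network and service names:

```bash
# Overlay network with IPSec encryption of the data plane
docker network create -d overlay --opt encrypted secure-net

# Traffic between tasks of this service is encrypted as it crosses hosts
docker service create --name web --network secure-net nginx
```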

networking/concepts/12-ipaddress-management.md

+2-2
@@ -1,4 +1,4 @@
-##<a name="ipam"></a>IP Address Management
+## <a name="ipam"></a>IP Address Management

The Container Networking Model (CNM) provides flexibility in how IP addresses are managed. There are two methods for IP address management.

@@ -9,4 +9,4 @@ Manual configuration of container IP addresses and network subnets can be done u

Subnet size and design are largely dependent on a given application and the specific network driver. IP address space design is covered in more depth for each [Network Deployment Model](#models) in the next section. The uses of port mapping, overlays, and MACVLAN all have implications for how IP addressing is arranged. In general, container addressing falls into two buckets. Internal container networks (bridge and overlay) address containers with IP addresses that are not routable on the physical network by default. MACVLAN networks provide IP addresses to containers that are on the subnet of the physical network. Thus, traffic from container interfaces can be routable on the physical network. It is important to note that subnets for internal networks (bridge, overlay) should not conflict with the IP space of the physical underlay network. Overlapping address space can cause traffic to not reach its destination.

-Next: **[Network Troubleshooting](13-troubleshooting.md)**
+Next: **[Network Troubleshooting](13-troubleshooting.md)**
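
A short example contrasting default IPAM with manual configuration; the driver choice, subnets, range, and names are illustrative:

```bash
# Default IPAM: Docker picks the next available private subnet
docker network create -d bridge auto-net

# Manual IPAM: explicit subnet, allocation range, and gateway
docker network create -d bridge \
  --subnet 10.10.0.0/16 \
  --ip-range 10.10.4.0/24 \
  --gateway 10.10.0.1 \
  custom-net
```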
