Backend service-based external passthrough Network Load Balancer overview
External passthrough Network Load Balancers are regional, Layer 4 load balancers that distribute
external traffic among backends (instance groups or network endpoint groups
(NEGs)) in the same region as the load balancer. These backends must be in the
same region and project but can be in different VPC networks.
These load balancers are built on
Maglev
and the Andromeda network virtualization
stack.
External passthrough Network Load Balancers can receive traffic from:
Any client on the internet
Google Cloud VMs with external IPs
Google Cloud VMs that have internet access through Cloud NAT or
instance-based NAT
External passthrough Network Load Balancers are not proxies. The load balancer itself doesn't
terminate user connections. Load-balanced packets are sent to the backend VMs
with their source and destination IP addresses, protocol, and, if applicable,
ports, unchanged. The backend VMs then terminate user connections. Responses
from the backend VMs go directly to the clients, not back through the load
balancer. This process is known as direct server return (DSR).
Backend service-based external passthrough Network Load Balancers support the following features:
Managed and unmanaged instance group backends. Backend service-based
external passthrough Network Load Balancers support both managed and unmanaged instance groups
as backends. Managed instance groups automate certain aspects of backend
management and provide better scalability and reliability as compared to
unmanaged instance groups.
Zonal NEG backends. Backend service-based
external passthrough Network Load Balancers support using zonal NEGs with GCE_VM_IP endpoints.
Zonal NEG GCE_VM_IP endpoints let you do the following:
Forward packets to any network interface, not just nic0.
Place the same GCE_VM_IP endpoint in two or more zonal NEGs connected to
different backend services.
Support for multiple protocols. Backend service-based external passthrough Network Load Balancers
can load-balance
TCP, UDP, ESP, GRE, ICMP, and ICMPv6
traffic.
Support for IPv6 connectivity. Backend service-based
external passthrough Network Load Balancers can handle both IPv4 and IPv6 traffic.
Fine-grained traffic distribution control. A backend service allows
traffic to be distributed according to the configured session affinity,
connection tracking policy, and weighted load balancing settings. The backend
service can also be configured to enable connection draining and designate
failover backends for the load balancer. Most of these settings have default
values that let you get started quickly. For more information, see Traffic
distribution for
external passthrough Network Load Balancers.
Support for non-legacy, regional health checks. Backend service-based
external passthrough Network Load Balancers support regional health checks, which can
use any supported health check protocol.
Google Cloud Armor integration. Cloud Armor supports advanced network
DDoS protection for external passthrough Network Load Balancers. For more information, see Configure
advanced network DDoS protection.
GKE integration. If you are building applications in
GKE, we recommend that you use the built-in GKE Service
controller, which deploys
Google Cloud load balancers on behalf of GKE users. This
is the same as the standalone load balancing architecture described on this
page, except that its lifecycle is fully automated and controlled by
GKE.
The load balancer is made up of several configuration components. A single
load balancer can have the following:
One or more regional external IP addresses
One or more regional external forwarding rules
One regional external backend service
One or more backends: either all instance groups or all zonal NEG backends
(GCE_VM_IP endpoints)
One health check associated with the backend service
Additionally, you must create firewall rules that allow your load balancing
traffic and health check probes to reach the backend VMs.
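To make the component list concrete, the following gcloud CLI sketch assembles a simple TCP load balancer. All resource names (nlb-ipv4, nlb-hc, nlb-backend-service, nlb-mig, nlb-fr) and the us-central1 region are illustrative assumptions, and the regional managed instance group nlb-mig is assumed to already exist:
```
# 1. Regional external IP address.
gcloud compute addresses create nlb-ipv4 \
    --region=us-central1

# 2. Regional health check (any supported protocol; HTTP shown here).
gcloud compute health-checks create http nlb-hc \
    --region=us-central1 \
    --port=80

# 3. Regional external backend service.
gcloud compute backend-services create nlb-backend-service \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --protocol=TCP \
    --health-checks=nlb-hc \
    --health-checks-region=us-central1

# 4. Attach the backends (a regional managed instance group here).
gcloud compute backend-services add-backend nlb-backend-service \
    --region=us-central1 \
    --instance-group=nlb-mig \
    --instance-group-region=us-central1

# 5. Regional external forwarding rule (the frontend).
gcloud compute forwarding-rules create nlb-fr \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=nlb-ipv4 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=nlb-backend-service
```
The firewall rules mentioned earlier are covered in the Firewall rules section later on this page.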
IP address
An external passthrough Network Load Balancer requires at least one forwarding rule. The forwarding rule
references a regional external IP address that is accessible anywhere on the
internet.
For IPv4 traffic, the forwarding rule references a single regional
external IPv4 address. Regional external IPv4
addresses come from a pool unique to each Google Cloud region.
The IPv4 address can be assigned either by specifying
a reserved external IP address
or by letting Google Cloud automatically assign an ephemeral IPv4
address.
For IPv6 traffic, the forwarding rule references a /96 range of IPv6
addresses from a dual-stack or IPv6-only
subnet. The subnet must have an
assigned external IPv6 subnet
range in the VPC
network. External IPv6 addresses are available only in Premium Tier.
The /96 IPv6 address range can be assigned by either specifying
a reserved external IPv6 address,
specifying a custom ephemeral IPv6 address, or letting Google Cloud
automatically assign an ephemeral IPv6 address.
To specify a custom ephemeral IPv6 address, you must use the
gcloud CLI or the API. The Google Cloud console doesn't support
specifying custom ephemeral IPv6 addresses for forwarding rules.
Use a reserved IP address for the forwarding rule if you need to keep the
address associated with your project for reuse after you delete a forwarding
rule or if you need multiple forwarding rules to reference the same IP address.
External passthrough Network Load Balancers support both Standard Tier and Premium Tier
for regional external IPv4 addresses. Both the IP address and the forwarding
rule must use the same network tier. Regional external IPv6 addresses are only
available in the Premium Tier.
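As a sketch, reserving addresses in each tier might look like the following. The names and region are illustrative, and the IPv6 reservation assumes a subnet (lb-subnet) that already has an assigned external IPv6 range:
```
# Reserve a Premium Tier regional external IPv4 address.
gcloud compute addresses create nlb-ipv4-premium \
    --region=us-central1 \
    --network-tier=PREMIUM

# Reserve a Standard Tier regional external IPv4 address.
gcloud compute addresses create nlb-ipv4-standard \
    --region=us-central1 \
    --network-tier=STANDARD

# Reserve a /96 external IPv6 range from a subnet that has an external
# IPv6 range assigned (Premium Tier only).
gcloud compute addresses create nlb-ipv6 \
    --region=us-central1 \
    --subnet=lb-subnet \
    --ip-version=IPV6 \
    --endpoint-type=NETLB \
    --network-tier=PREMIUM
```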
Forwarding rule
A regional external forwarding rule specifies the protocol and ports on which
the load balancer accepts traffic. Because external passthrough Network Load Balancers are not proxies,
they pass traffic to backends on the same protocol and ports, if the packet
carries port information. The forwarding rule in combination with the IP address
forms the frontend of the load balancer.
The load balancer preserves the source IP addresses of incoming packets. The
destination IP address for incoming packets is an IP address associated with
the load balancer's forwarding rule.
Incoming traffic is matched to a forwarding rule, which is a combination of a
particular IP address (either an IPv4 address or an IPv6 address range),
protocol, and, if the protocol is port-based, a single port, a range of ports,
or all ports. The forwarding rule then directs traffic to the load balancer's
backend service.
If the forwarding rule references an IPv4 address, the forwarding rule is not
associated with any subnet. That is, its IP address comes from outside of any
Google Cloud subnet range.
If the forwarding rule references a /96 IPv6 address range, the forwarding
rule must be associated with a subnet, and that subnet must be (a) dual-stack
and (b) have an external IPv6 subnet range (--ipv6-access-type set to
EXTERNAL). The subnet that the forwarding rule references can be the same
subnet used by the backend instances; however, backend instances can use a
separate subnet if you choose.
An external passthrough Network Load Balancer requires at least one forwarding rule.
Forwarding rules can be
configured to direct traffic coming from a specific range of source IP addresses
to a specific backend service (or target instance). For details, see traffic
steering. You can define
multiple forwarding rules for the same load balancer as described in Multiple
forwarding rules.
If you want the load balancer to handle both IPv4 and IPv6 traffic, create two
forwarding rules: one rule for IPv4 traffic that points to IPv4 (or dual-stack)
backends, and one rule for IPv6 traffic that points only to dual-stack backends.
It's possible to have an IPv4 and an IPv6 forwarding rule reference the same
backend service, but the backend service must reference dual-stack backends.
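As a sketch of this dual-stack frontend pattern, assuming a backend service nlb-backend-service with dual-stack backends, a reserved IPv4 address nlb-ipv4, and a dual-stack subnet lb-subnet with an external IPv6 range (all names illustrative):
```
# IPv4 forwarding rule.
gcloud compute forwarding-rules create nlb-fr-ipv4 \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=nlb-ipv4 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=nlb-backend-service

# IPv6 forwarding rule (requires a subnet with an external IPv6 range;
# Premium Tier only).
gcloud compute forwarding-rules create nlb-fr-ipv6 \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --ip-version=IPV6 \
    --subnet=lb-subnet \
    --network-tier=PREMIUM \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=nlb-backend-service
```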
Forwarding rule protocols
External passthrough Network Load Balancers support the following protocol options for each
forwarding rule: TCP, UDP, and L3_DEFAULT.
Use the TCP and UDP options to configure TCP or UDP load balancing.
The L3_DEFAULT protocol option enables an external passthrough Network Load Balancer to
load balance
TCP, UDP, ESP, GRE, ICMP, and ICMPv6
traffic.
In addition to supporting protocols other than TCP and UDP, L3_DEFAULT makes
it possible for a single forwarding rule to serve multiple protocols. For
example, IPsec services typically handle some combination of ESP and
UDP-based IKE and NAT-T traffic. The L3_DEFAULT option allows a single
forwarding rule to be configured to process all of those protocols.
Forwarding rules using the TCP or UDP protocols can reference a backend
service using either the same protocol as the forwarding rule or a backend
service whose protocol is UNSPECIFIED.
L3_DEFAULT forwarding rules can only
reference a backend service with protocol UNSPECIFIED.
If you're using the L3_DEFAULT protocol, you must configure the forwarding
rule to accept traffic on all ports. To configure all ports, either set
--ports=ALL by using
the Google Cloud CLI, or set allPorts to
True by using the API.
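A minimal sketch of an L3_DEFAULT frontend, assuming a regional health check nlb-hc and a reserved address nlb-ipv4 already exist (all names are illustrative):
```
# An L3_DEFAULT forwarding rule can only reference a backend service
# whose protocol is UNSPECIFIED.
gcloud compute backend-services create nlb-l3-backend-service \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --protocol=UNSPECIFIED \
    --health-checks=nlb-hc \
    --health-checks-region=us-central1

# L3_DEFAULT requires all ports.
gcloud compute forwarding-rules create nlb-fr-l3 \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=nlb-ipv4 \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=nlb-l3-backend-service
```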
The following table summarizes how to use these settings for different
protocols.
Traffic to be load balanced | Forwarding rule protocol | Backend service protocol
--- | --- | ---
TCP | TCP | TCP or UNSPECIFIED
TCP | L3_DEFAULT | UNSPECIFIED
UDP | UDP | UDP or UNSPECIFIED
UDP | L3_DEFAULT | UNSPECIFIED
ESP, GRE, ICMP/ICMPv6 (echo request only) | L3_DEFAULT | UNSPECIFIED
Multiple forwarding rules
You can configure multiple regional external forwarding rules for the same
external passthrough Network Load Balancer. Each forwarding rule can have a different regional external
IP address, or multiple forwarding rules can have the same regional external IP
address.
Configuring multiple regional external forwarding rules can be useful
for these use cases:
You need to configure more than one external IP address for the same backend
service.
You need to configure different protocols or non-overlapping ports or port
ranges for the same external IP address.
You need to steer traffic from certain source IP addresses to specific load
balancer backends.
Google Cloud requires that incoming packets match no more than one
forwarding rule. Except for steering forwarding rules, which are discussed in
the next section, two or more forwarding rules that use the same regional
external IP address must have unique protocol and port combinations according to
these constraints:
A forwarding rule configured for all ports of a protocol prevents the
creation of other forwarding rules using the same protocol and IP address.
Forwarding rules using TCP or UDP protocols can be configured to use all
ports, or they can be configured for specific ports. For example, if you
create a forwarding rule using IP address 198.51.100.1, the TCP protocol,
and all ports, you cannot create any other forwarding rule using IP address
198.51.100.1 and the TCP protocol.
You can create two forwarding rules, both using the IP address 198.51.100.1
and the TCP protocol, if each one has unique ports or non-overlapping port
ranges. For example, you can create two forwarding rules using IP address
198.51.100.1 and the TCP protocol, where one forwarding rule's ports are
80 and 443 and the other uses the port range 81-442.
Only one L3_DEFAULT forwarding rule can be created per IP address. This
is because the L3_DEFAULT protocol uses all ports by definition. In this
context, all ports also includes protocols that don't carry port information.
A single L3_DEFAULT forwarding rule can coexist with other forwarding
rules that use specific protocols (TCP or UDP). The L3_DEFAULT
forwarding rule can be used as a last resort when forwarding rules using the
same IP address but more specific protocols exist. An L3_DEFAULT forwarding
rule processes packets sent to its destination IP address if and only if the
packet's destination IP address, protocol, and destination port don't match a
protocol-specific forwarding rule.
To illustrate this, consider these two scenarios. Forwarding
rules in both scenarios use the same IP address 198.51.100.1.
Scenario 1. The first forwarding rule uses the L3_DEFAULT protocol.
The second forwarding rule uses the TCP protocol and all ports.
TCP packets sent to any destination port of 198.51.100.1 are processed by
the second forwarding rule. Packets using different protocols are processed
by the first forwarding rule.
Scenario 2. The first forwarding rule uses the L3_DEFAULT protocol.
The second forwarding rule uses the TCP protocol and port 8080. TCP
packets sent to 198.51.100.1:8080 are processed by the second
forwarding rule. All other packets, including TCP packets sent to
different destination ports, are processed by the first forwarding rule.
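A sketch of scenario 2 with the gcloud CLI, assuming backend services bs-tcp (protocol TCP) and bs-unspecified (protocol UNSPECIFIED) already exist (names are illustrative):
```
# TCP rule for port 8080 on 198.51.100.1.
gcloud compute forwarding-rules create fr-tcp-8080 \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=8080 \
    --backend-service=bs-tcp

# L3_DEFAULT catch-all on the same IP address.
gcloud compute forwarding-rules create fr-l3-default \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=bs-unspecified
```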
Forwarding rule selection
Google Cloud selects one or zero forwarding rules to process an incoming
packet by using this elimination process, starting with the set of forwarding
rule candidates which match the destination IP address of the packet:
Eliminate forwarding rules whose protocol doesn't match the packet's protocol,
except for L3_DEFAULT forwarding rules. Forwarding rules using the
L3_DEFAULT protocol are never eliminated by this step because L3_DEFAULT
matches all protocols. For example, if the packet's protocol is TCP, only
forwarding rules using the UDP protocol are eliminated.
Eliminate forwarding rules whose port doesn't match the packet's port.
Forwarding rules configured for all ports are never eliminated by this step
because an all ports forwarding rule matches any port.
If the remaining forwarding rule candidates include both L3_DEFAULT and
protocol-specific forwarding rules, eliminate the L3_DEFAULT forwarding
rules. If the remaining forwarding rule candidates are all L3_DEFAULT
forwarding rules, none are eliminated at this step.
At this point, the remaining forwarding rule candidates fall into one
of the following categories:
A single forwarding rule remains which matches the packet's destination IP
address, protocol, and port, and is used to route the packet.
Two or more forwarding rule candidates remain which match the packet's
destination IP address, protocol, and port. This means the remaining
forwarding rule candidates include steering forwarding rules (discussed
in the next section). Select the
steering forwarding rule whose source range includes the most
specific (longest prefix match) CIDR containing the packet's source IP
address. If no steering forwarding rules have a source range including the
packet's source IP address, select the parent forwarding rule.
Zero forwarding rule candidates remain and the packet is dropped.
When using multiple forwarding rules, make sure that you configure the software
running on your backend VMs so that it binds to all the external IP addresses
of the load balancer's forwarding rules.
Traffic steering
Forwarding rules for external passthrough Network Load Balancers can be configured to direct traffic coming
from a specific source IP address or a range of IP addresses to a specific
backend service (or target instance).
Traffic steering is useful for troubleshooting and for advanced configurations.
With traffic steering, you can direct certain clients to a different set of
backends, a different backend service configuration, or both. For example:
Traffic steering lets you create two forwarding rules which direct traffic to
the same backend (instance group or NEG) by way of two backend services. The two
backend services can be configured with different health checks, different
session affinities, or different traffic distribution control policies
(connection tracking, connection draining, and failover).
Traffic steering lets you create a forwarding rule to redirect traffic from a
low-bandwidth backend service to a high-bandwidth backend service. Both
backend services contain the same set of backend VMs or endpoints, but
load-balanced with different weights using weighted load
balancing.
Traffic steering lets you create two forwarding rules which direct traffic to
different backend services, with different backends (instance groups or NEGs).
For example, one backend could be configured using different machine types to
better process traffic from certain source IP addresses.
Traffic steering is configured with a forwarding rule API parameter called
sourceIPRanges. Forwarding rules that have at least one source IP range
configured are called steering forwarding rules.
A steering forwarding rule can use the sourceIPRanges parameter to specify a
comma-separated list of up to 64 source IP addresses or IP address ranges. You
can update this list of source IP address ranges at any time.
Each steering forwarding rule requires that you first create a parent
forwarding rule. The parent and steering forwarding rules share the
same regional external IP address, IP protocol, and port information; however,
the parent forwarding rule does not have any source IP address information.
For example:
Parent forwarding rule: IP address: 198.51.100.1, IP protocol: TCP,
ports: 80
Steering forwarding rule: IP address: 198.51.100.1, IP protocol: TCP,
ports: 80, sourceIPRanges: 203.0.113.0/24
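Expressed as a gcloud CLI sketch, assuming backend services bs-default and bs-steered already exist (names are illustrative):
```
# Parent forwarding rule: no source IP address information.
gcloud compute forwarding-rules create fr-parent \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=bs-default

# Steering forwarding rule: same IP, protocol, and port as the parent,
# plus a source IP range.
gcloud compute forwarding-rules create fr-steering \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=80 \
    --source-ip-ranges=203.0.113.0/24 \
    --backend-service=bs-steered
```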
A parent forwarding rule that points to a backend service can be associated with
a steering forwarding rule that points to a backend service or a target
instance.
For a given parent forwarding rule, two or more steering forwarding rules can
have overlapping, but not identical, source IP address ranges and IP
addresses. As an example, one steering forwarding rule can have the source IP
range 203.0.113.0/24 and another steering forwarding rule for the same parent
can have the source IP address 203.0.113.0.
You must delete all steering forwarding rules before you can delete the parent
forwarding rule upon which they depend.
To learn how incoming packets are processed when steering forwarding rules are
used, see Forwarding rule selection.
Session affinity behavior across steering changes
This section describes the conditions under which session affinity might break
when the source IP address ranges configured for a steering forwarding rule are
updated:
If an existing connection continues to match the same forwarding rule after
you change the source IP ranges for a steering forwarding rule, session
affinity doesn't break. If your change results in an existing connection
matching a different forwarding rule, then:
Session affinity always breaks under these circumstances:
The newly matched forwarding rule directs an established connection to a
backend service (or target instance) which doesn't reference the
previously selected backend VM.
The newly matched forwarding rule directs an established connection to a
backend service which does reference the previously selected backend VM,
but the backend service is not configured to persist connections when
backends are
unhealthy,
and the backend VM fails the backend service's health check.
Session affinity might break when the newly matched forwarding rule
directs an established connection to a backend service, and the
backend service does reference the previously selected VM, but the
backend service's combination of session affinity and connection
tracking mode results in a different connection tracking hash.
Preserving session affinity across steering changes
This section describes how to avoid breaking session affinity
when the source IP ranges for steering forwarding rules are updated:
Steering forwarding rules pointing to backend services. If both the parent
and the steering forwarding rule point to backend services, you must
manually make sure that the session affinity and connection tracking
policy settings are identical, as shown in the sketch after this list.
Google Cloud does not reject configurations where these settings differ.
Steering forwarding rules pointing to target instances. A parent
forwarding rule that points to a backend service can be associated with a
steering forwarding rule that points to a target instance. In this case, the
steering forwarding rule inherits session
affinity
and connection tracking
policy
settings from the parent forwarding rule.
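For the backend service case, keeping the relevant settings aligned might look like the following sketch. Here bs-default and bs-steered are the illustrative parent and steering backend services, and the specific affinity, tracking, and persistence values are assumptions:
```
# Apply identical session affinity and connection tracking settings to
# both backend services.
gcloud compute backend-services update bs-default \
    --region=us-central1 \
    --session-affinity=CLIENT_IP \
    --tracking-mode=PER_SESSION \
    --connection-persistence-on-unhealthy-backends=NEVER_PERSIST

gcloud compute backend-services update bs-steered \
    --region=us-central1 \
    --session-affinity=CLIENT_IP \
    --tracking-mode=PER_SESSION \
    --connection-persistence-on-unhealthy-backends=NEVER_PERSIST
```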
Each external passthrough Network Load Balancer has one regional backend service that defines
the behavior of the load balancer and how traffic is distributed to its
backends. The name of the backend service is the name of the external passthrough Network Load Balancer
shown in the Google Cloud console.
Each backend service defines the following backend parameters:
Protocol. A backend service accepts traffic on the IP address and ports
(if configured) specified by one or more regional external forwarding rules.
The backend service passes packets to backend VMs while preserving the
packet's source and destination IP addresses, protocol, and, if the protocol
is port-based, the source and destination ports.
Backend services used with external passthrough Network Load Balancers support the following protocol
options: TCP, UDP, or UNSPECIFIED.
Backend services with the UNSPECIFIED protocol can be used with any
forwarding rule regardless of the forwarding rule protocol. Backend services
with a specific protocol (TCP or UDP) can only be referenced by forwarding
rules with the same protocol (TCP or UDP). Forwarding rules with the
L3_DEFAULT protocol can only refer to backend services with the
UNSPECIFIED protocol.
Traffic distribution. A backend service allows
traffic to be distributed according to the configured session affinity,
connection tracking policy, and weighted load balancing settings. The backend
service can also be configured to enable connection draining and designate
failover backends for the load balancer. Most of these settings have default
values that let you get started quickly. For more information, see Traffic
distribution for
external passthrough Network Load Balancers.
Health check. A backend service must have an associated regional health
check.
Backends. Each backend service operates in a single region and distributes
traffic to either instance groups or zonal NEGs in the same region. You can
use either instance groups or zonal NEGs, but not a combination of both, as
backends for an external passthrough Network Load Balancer:
If you choose instance groups, you can use unmanaged
instance groups, zonal managed instance groups, regional managed instance
groups, or a combination of instance group types.
If you choose zonal NEGs, you must use GCE_VM_IP zonal NEGs.
An external passthrough Network Load Balancer distributes connections among backend VMs contained within
managed or unmanaged instance groups. Instance groups can be regional or zonal
in scope.
Each instance group has an associated VPC network, even if that
instance group hasn't been connected to a backend service yet. For more
information about how a network is associated with instance groups, see
Instance group backends and network interfaces.
The external passthrough Network Load Balancer is highly available by design. There are no special
steps needed to make the load balancer highly available because the mechanism
doesn't rely on a single device or VM instance. You only need to make sure that
your backend VM instances are deployed to multiple zones so that the load
balancer can work around potential issues in any given zone.
Regional managed instance groups. Use regional managed instance groups if
you can deploy your software by using instance templates. Regional managed
instance groups automatically distribute traffic among multiple zones, providing
the best option to avoid potential issues in any given zone.
An example deployment using a regional managed instance group is described here.
The instance group has an instance template that defines how instances should
be provisioned, and each group deploys instances within three zones of the
us-central1 region.
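As a sketch under the same illustrative naming, such a regional managed instance group might be created like this (nlb-template is an assumed, pre-existing instance template):
```
# Regional MIG that distributes instances across zones of us-central1.
gcloud compute instance-groups managed create nlb-mig \
    --region=us-central1 \
    --size=3 \
    --template=nlb-template
```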
Zonal managed or unmanaged instance groups. Use zonal instance groups in
different zones (in the same region) to protect against potential issues in any
given zone.
An example deployment using zonal instance groups is described here. This load
balancer provides availability across two zones.
An external passthrough Network Load Balancer distributes connections among GCE_VM_IP endpoints contained
within zonal network endpoint
groups. These endpoints must be
located in the same region as the load balancer. For some recommended zonal NEG
use cases, see Zonal network endpoint groups
overview.
Endpoints in the NEG must be primary internal IPv4 addresses of VM network
interfaces that are in the same subnet and zone as the zonal NEG. The primary
internal IPv4 address from any network interface of a multi-NIC VM instance can
be added to a NEG as long as it is in the NEG's subnet.
Zonal NEGs support both IPv4 and dual-stack (IPv4 and IPv6) VMs. For both IPv4
and dual-stack VMs, it is sufficient to specify only the VM instance when
attaching an endpoint to a NEG. You don't need to specify the endpoint's IP
address. The VM instance must always be in the same zone as the NEG.
Each zonal NEG has an associated VPC network and a subnet, even
if that zonal NEG hasn't been connected to a backend service yet. For more
information about how a network is associated with zonal NEGs, see Zonal NEG
backends and network interfaces.
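A sketch of creating a GCE_VM_IP zonal NEG and attaching it as a backend, with illustrative network, subnet, and resource names (the backend service nlb-neg-backend-service is assumed to use only zonal NEG backends):
```
# Create the zonal NEG; the network and subnet can't be changed later.
gcloud compute network-endpoint-groups create nlb-neg \
    --zone=us-central1-a \
    --network-endpoint-type=GCE_VM_IP \
    --network=lb-network \
    --subnet=lb-subnet

# Attach the NEG to the backend service.
gcloud compute backend-services add-backend nlb-neg-backend-service \
    --region=us-central1 \
    --network-endpoint-group=nlb-neg \
    --network-endpoint-group-zone=us-central1-a
```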
Instance group backends and network interfaces
Within a given (managed or unmanaged) instance group, all VM instances must have
their nic0 network interfaces in the same VPC network.
For managed instance groups (MIGs), the VPC network for the
instance group is defined in the instance template.
For unmanaged instance groups, the VPC network for the instance
group is defined as the VPC network used by the nic0
network interface of the first VM instance that you add to the unmanaged
instance group.
Backend services can't distribute traffic to instance group member VMs on
non-nic0 interfaces. If you want to receive traffic on a non-nic0 network
interface (vNICs or
Dynamic Network Interfaces), you
must use zonal NEGs with GCE_VM_IP endpoints.
Zonal NEG backends and network interfaces
When you create a new zonal NEG with GCE_VM_IP endpoints, you must explicitly
associate the NEG with a subnetwork of a VPC network before you
can add any endpoints to the NEG. Neither the subnet nor the VPC
network can be changed after the NEG is created.
Within a given NEG, each GCE_VM_IP endpoint actually represents a network
interface. The network interface must be in the subnetwork associated with the
NEG. From the perspective of a Compute Engine instance, the network
interface can use any
identifier. From the
perspective of being an endpoint in a NEG, the network interface is identified
by using its primary internal IPv4 address.
There are two ways to add a GCE_VM_IP endpoint to a NEG:
If you specify only a VM name (without any IP address) when adding an
endpoint, Google Cloud requires that the VM has a network interface in
the subnetwork associated with the NEG. The IP address that Google Cloud
chooses for the endpoint is the primary internal IPv4 address of the VM's
network interface in the subnetwork associated with the NEG.
If you specify both a VM name and an IP address when adding an endpoint, the
IP address that you provide must be a primary internal IPv4 address for one of
the VM's network interfaces. That network interface must be in the subnetwork
associated with the NEG. Note that specifying an IP address is redundant
because there can only be a single network interface that is in the subnetwork
associated with the NEG.
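Both approaches can be sketched with the gcloud CLI as follows; the VM names and IP address are illustrative:
```
# 1. VM name only: Google Cloud picks the primary internal IPv4 address
#    of the VM's interface in the NEG's subnet.
gcloud compute network-endpoint-groups update nlb-neg \
    --zone=us-central1-a \
    --add-endpoint='instance=vm-a1'

# 2. VM name plus IP address: the address must be the primary internal
#    IPv4 address of the VM's interface in the NEG's subnet.
gcloud compute network-endpoint-groups update nlb-neg \
    --zone=us-central1-a \
    --add-endpoint='instance=vm-a2,ip=10.1.2.3'
```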
A Dynamic NIC can't be deleted if the Dynamic NIC
is an endpoint of a load-balanced network endpoint group.
Backend services and VPC networks
The backend service isn't associated with any VPC network;
however, each backend instance group or zonal NEG is associated with a
VPC network, as noted previously. As long as all backends
are located in the same region and project, and as long as all backends are
of the same type (instance groups or zonal NEGs), you can add backends that use
either the same or different VPC networks.
To distribute packets to a non-nic0 interface, you must use zonal NEG backends
(with GCE_VM_IP endpoints).
Dual-stack backends (IPv4 and IPv6)
If you want the load balancer to use dual-stack backends that handle both IPv4
and IPv6 traffic, note the following requirements:
Backends must be configured in dual-stack
subnets that are in the
same region as the load balancer's IPv6 forwarding rule. For the backends, you
can use a subnet with the ipv6-access-type set to either EXTERNAL or
INTERNAL. If the backend subnet's ipv6-access-type is set to INTERNAL,
you must use a different IPv6-only subnet or dual-stack subnet with ipv6-access-type set to EXTERNAL for the load balancer's
external forwarding rule.
Backends must be configured to be dual-stack with stack-type set to
IPV4_IPV6. If the backend subnet's ipv6-access-type is set to EXTERNAL,
you must also set the --ipv6-network-tier to PREMIUM. For instructions,
see Create an instance template with IPv6
addresses.
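A sketch of this dual-stack backend setup, with illustrative names and ranges:
```
# Dual-stack subnet with an external IPv6 range.
gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --region=us-central1 \
    --range=10.1.2.0/24 \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=EXTERNAL

# Dual-stack instance template; EXTERNAL IPv6 access requires the
# Premium network tier.
gcloud compute instance-templates create dual-stack-template \
    --region=us-central1 \
    --subnet=lb-subnet \
    --stack-type=IPV4_IPV6 \
    --ipv6-network-tier=PREMIUM
```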
IPv6-only backends
If you want the load balancer to use IPv6-only backends, note the
following requirements:
IPv6-only instances are supported in managed and unmanaged instance groups.
No other backend type is supported.
Backends must be configured in either
dual-stack or
IPv6-only
subnets that are
in the same region as the load balancer's IPv6 forwarding rule. For the
backends, you can use a subnet with the ipv6-access-type set to either
INTERNAL or EXTERNAL. If the backend subnet's ipv6-access-type is set to
INTERNAL, you must use a different IPv6-only subnet with ipv6-access-type
set to EXTERNAL for the load balancer's external forwarding rule.
Backends must be configured to be IPv6-only with the VM stack-type set to
IPV6_ONLY. If the backend subnet's ipv6-access-type is set to EXTERNAL,
you must also set the --ipv6-network-tier to PREMIUM. For instructions,
see Create an instance template with IPv6
addresses.
Note that IPv6-only VMs can be created under both dual-stack
and IPv6-only subnets, but dual-stack VMs can't be created under IPv6-only
subnets.
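A sketch of an IPv6-only instance template in the same illustrative subnet:
```
# IPv6-only instance template; the subnet can be dual-stack or IPv6-only.
gcloud compute instance-templates create ipv6-only-template \
    --region=us-central1 \
    --subnet=lb-subnet \
    --stack-type=IPV6_ONLY \
    --ipv6-network-tier=PREMIUM
```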
Health checks
Health check information is used to determine eligible backends for new
connections, and you can control whether existing connections persist on
unhealthy backends. For more information about eligible backends, see Traffic
distribution for
external passthrough Network Load Balancers.
Health check type, protocol, and port
The load balancer's backend service must reference a regional health check,
using any supported health check protocol and port. The health check protocol
and port details don't have to match the load balancer backend service protocol
and forwarding rule IP port information.
Because all supported health check protocols rely on TCP, when you use an
external passthrough Network Load Balancer to balance connections and traffic for other protocols, backend
VMs must run a TCP-based server to answer health check probers. For example, you
can use an HTTP health check combined with running an HTTP server on each
backend VM. In this example, your scripts or software are responsible for
configuring the HTTP server so that it returns status 200 only when the
software listening to load-balanced connections is operational.
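For example, the regional HTTP health check for this pattern might be created as follows; the name, port, and /health request path are illustrative, and the HTTP server answering on that path is assumed to return 200 only when the load-balanced software is healthy:
```
# Regional HTTP health check probing each backend's HTTP server.
gcloud compute health-checks create http nlb-hc \
    --region=us-central1 \
    --port=80 \
    --request-path=/health
```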
For instance group backends, health check probers send packets to the nic0
network interface of each backend VM. For GCE_VM_IP zonal NEG backends, health
check probers send packets to the network interface in the VPC
network of the NEG. Health check packets have the
following characteristics:
Source IP address from the relevant health check probe IP
range.
Destination IP address that matches each IP address of a forwarding rule that
references the external passthrough Network Load Balancer backend service.
Destination port that matches the port number you specify in the health check.
Software running on the backend VMs must bind to and listen on relevant IP
address and port combinations. The simplest way to accomplish this is to
configure software to bind to and listen on the relevant ports of any of the
VM's IP addresses (0.0.0.0). For more information, see Destination for probe
packets.
Firewall rules
Because external passthrough Network Load Balancers are passthrough load balancers, you
control access to the load balancer's backends using Google Cloud firewall
rules. You must create ingress allow firewall rules or
an ingress allow hierarchical firewall policy to
permit health checks and the traffic that you're load balancing.
Forwarding rules and ingress allow firewall rules or hierarchical firewall policies
work together in the following way: a forwarding rule specifies the
protocol and, if defined, port requirements that a packet must meet to be
forwarded to a backend VM. Ingress allow firewall rules control whether the
forwarded packets are delivered to the VM or dropped. All VPC
networks have an implied deny ingress firewall
rule that blocks incoming
packets from any source. The Google Cloud default VPC
network includes a limited set of pre-populated ingress allow firewall
rules.
To accept traffic from any IP address on the internet, you must create
an ingress allow firewall rule with the 0.0.0.0/0 source range. To only
allow traffic from certain IP address ranges, use more restrictive source
ranges.
As a security best practice, your ingress allow firewall rules should only
permit the IP protocols and ports that you
need. Restricting the protocol
(and, if possible, port) configuration is especially important when using
forwarding rules whose protocol is set to
L3_DEFAULT. L3_DEFAULT forwarding rules
forward packets for all supported IP protocols (on all ports if the protocol
and packet have port information).
External passthrough Network Load Balancers use Google Cloud health checks.
Therefore, you must always allow traffic from the health check IP address
ranges. These ingress
allow firewall rules can be made specific to the protocol and ports of the
load balancer's health check.
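The following sketch shows both kinds of ingress allow rules for backends tagged lb-backends on a network lb-network (both names illustrative). It assumes TCP port 80 traffic and uses the documented IPv4 probe ranges for Google Cloud health checks; restrict the source ranges, protocols, and ports to what you actually need:
```
# Allow load-balanced traffic from any client on the internet.
gcloud compute firewall-rules create allow-lb-traffic \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=lb-backends

# Allow Google Cloud health check probes.
gcloud compute firewall-rules create allow-health-checks \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=35.191.0.0/16,209.85.152.0/22,209.85.204.0/22 \
    --target-tags=lb-backends
```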
IP addresses for request and return packets
When a backend VM receives a load-balanced packet from a client, the packet's
source and destination are as follows:
Source: the external IP address associated with a
Google Cloud VM or internet-routable IP address of a system
connecting to the load balancer.
Destination: the IP address of the load balancer's forwarding
rule.
Because the load balancer is a pass-through load balancer (not a proxy), packets
arrive bearing the destination IP address of the load balancer's forwarding
rule. Configure software running on backend VMs to do the following:
Listen on (bind to) the load balancer's forwarding rule IP address or any IP
address (0.0.0.0 or ::)
If the load balancer forwarding rule's protocol supports ports: Listen on
(bind to) a port that's included in the load balancer's forwarding rule
Return packets are sent directly from the load balancer's backend VMs to the
client. The return packet's source and destination IP addresses depend on the
protocol:
TCP is connection-oriented, so backend VMs must reply with packets whose
source IP addresses match the forwarding rule's IP address so that the client
can associate the response packets with the appropriate TCP connection.
UDP, ESP, GRE, and ICMP are connectionless. Backend VMs can send response
packets whose source IP addresses either match the forwarding rule's IP address or
match any assigned external IP address for the VM. Practically speaking, most
clients expect the response to come from the same IP address to which they
sent packets.
The following table summarizes sources and destinations for response packets:

Traffic type | Source | Destination
--- | --- | ---
TCP | The IP address of the load balancer's forwarding rule | The requesting packet's source
UDP, ESP, GRE, ICMP | For most use cases, the IP address of the load balancer's forwarding rule ¹ | The requesting packet's source

¹ When a VM has an external IP address or when you are using
Cloud NAT, it is also possible to set the response packet's source IP
address to the VM NIC's primary internal IPv4 address. Google Cloud or
Cloud NAT changes the response packet's source IP address to either the
NIC's external IPv4 address or a Cloud NAT external IPv4 address in
order to send the response packet to the client's external IP address. Not using
the forwarding rule's IP address as a source is an advanced scenario because the
client receives a response packet from an external IP address that doesn't
match the IP address to which it sent a request packet.
Return path
External passthrough Network Load Balancers use special
routes outside of
your VPC network to direct incoming requests and health check
probes to each backend VM.
The load balancer preserves the source IP addresses of packets. Responses from
the backend VMs go directly to the clients, not back through the load balancer.
The industry term for this is direct server return.
Outbound internet connectivity from backends
VM instances configured as an external passthrough Network Load Balancer's backend endpoints can initiate
connections to the internet using the load balancer's forwarding rule IP address
as the source IP address of the outbound connection.
Generally, a VM instance always uses its own external IP address or
Cloud NAT to initiate connections. You use the forwarding rule IP
address to initiate connections from backend endpoints only in special scenarios
such as when you need VM instances to originate and receive connections at the
same external IP address, and you also need the backend redundancy provided by
the external passthrough Network Load Balancer for inbound connections.
Outbound packets sent from backend VMs directly to the internet have no
restrictions on traffic protocols and ports. Even if an outbound packet
is using the forwarding rule's IP address as the source, the packet's
protocol and source port don't have to match the forwarding rule's protocol and
port specification. However, inbound response packets must match the forwarding
rule IP address, protocol, and destination port of the forwarding rule. For more
information, see Paths for external passthrough Network Load Balancers and external protocol
forwarding.
Additionally, any responses to the VM's outbound connections are subject to load
balancing, just like all the other incoming packets meant for the load balancer.
This means that responses might not arrive on the same backend VM that initiated
the connection to the internet. If the outbound connections and load balanced
inbound connections share common protocols and ports, then you can try one of
the following suggestions:
Synchronize outbound connection state across backend VMs, so that
connections can be served even if responses arrive at a backend VM other
than the one that has initiated the connection.
Use a failover
configuration,
with a single primary VM and a single backup VM. Then, the active
backend VM that initiates the outbound connections always receives the
response packets.
This path to internet connectivity from an external passthrough Network Load Balancer's backends is the
default intended behavior according to Google Cloud's implied firewall
rules. However, if you have
security concerns about leaving this path open, you can use targeted egress
firewall rules to block unsolicited outbound traffic to the internet.
Shared VPC architecture
Except for the IP address, all of the components of an external passthrough Network Load Balancer must
exist in the same project. The following table summarizes Shared VPC
components for external passthrough Network Load Balancers:
IP address: A regional external IP address must be defined in either the
same project as the load balancer or the Shared VPC host project.
Forwarding rule:
Backend components: The regional backend service must be defined in the
same project and same region where the backends (instance group or zonal
NEG) exist. Health checks associated with the backend service must be
defined in the same project and the same region as the backend service.
Traffic distribution
External passthrough Network Load Balancers support a variety of traffic distribution customization
options, including session affinity, connection tracking, weighted load
balancing, and failover. For details about how external passthrough Network Load Balancers distribute
traffic, and how these options interact with each other, see Traffic
distribution for
external passthrough Network Load Balancers.
Limitations
You cannot use the Google Cloud console to do the following tasks:
Create or modify an external passthrough Network Load Balancer whose forwarding rule uses the
L3_DEFAULT protocol.
Create or modify an external passthrough Network Load Balancer whose backend service protocol is set
to UNSPECIFIED.
Create or modify an external passthrough Network Load Balancer that configures a connection tracking
policy.
Create or modify source IP-based traffic steering for a forwarding rule.
Use either the Google Cloud CLI or the REST API instead.
External passthrough Network Load Balancers don't support VPC Network Peering.