public static final class Cluster.PrefetchPolicy.Builder extends com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder> implements Cluster.PrefetchPolicyOrBuilder
[#not-implemented-hide:] Protobuf type `envoy.config.cluster.v3.Cluster.PrefetchPolicy`

| Modifier and Type | Method and Description |
|---|---|
| `Cluster.PrefetchPolicy.Builder` | `addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)` |
| `Cluster.PrefetchPolicy` | `build()` |
| `Cluster.PrefetchPolicy` | `buildPartial()` |
| `Cluster.PrefetchPolicy.Builder` | `clear()` |
| `Cluster.PrefetchPolicy.Builder` | `clearField(com.google.protobuf.Descriptors.FieldDescriptor field)` |
| `Cluster.PrefetchPolicy.Builder` | `clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)` |
| `Cluster.PrefetchPolicy.Builder` | `clearPerUpstreamPrefetchRatio()`: Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. |
| `Cluster.PrefetchPolicy.Builder` | `clearPredictivePrefetchRatio()`: Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. |
| `Cluster.PrefetchPolicy.Builder` | `clone()` |
| `Cluster.PrefetchPolicy` | `getDefaultInstanceForType()` |
| `static com.google.protobuf.Descriptors.Descriptor` | `getDescriptor()` |
| `com.google.protobuf.Descriptors.Descriptor` | `getDescriptorForType()` |
| `com.google.protobuf.DoubleValue` | `getPerUpstreamPrefetchRatio()`: Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. |
| `com.google.protobuf.DoubleValue.Builder` | `getPerUpstreamPrefetchRatioBuilder()`: Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. |
| `com.google.protobuf.DoubleValueOrBuilder` | `getPerUpstreamPrefetchRatioOrBuilder()`: Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. |
| `com.google.protobuf.DoubleValue` | `getPredictivePrefetchRatio()`: Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. |
| `com.google.protobuf.DoubleValue.Builder` | `getPredictivePrefetchRatioBuilder()`: Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. |
| `com.google.protobuf.DoubleValueOrBuilder` | `getPredictivePrefetchRatioOrBuilder()`: Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. |
| `boolean` | `hasPerUpstreamPrefetchRatio()`: Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. |
| `boolean` | `hasPredictivePrefetchRatio()`: Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. |
| `protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable` | `internalGetFieldAccessorTable()` |
| `boolean` | `isInitialized()` |
| `Cluster.PrefetchPolicy.Builder` | `mergeFrom(Cluster.PrefetchPolicy other)` |
| `Cluster.PrefetchPolicy.Builder` | `mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)` |
| `Cluster.PrefetchPolicy.Builder` | `mergeFrom(com.google.protobuf.Message other)` |
| `Cluster.PrefetchPolicy.Builder` | `mergePerUpstreamPrefetchRatio(com.google.protobuf.DoubleValue value)`: Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. |
| `Cluster.PrefetchPolicy.Builder` | `mergePredictivePrefetchRatio(com.google.protobuf.DoubleValue value)`: Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. |
| `Cluster.PrefetchPolicy.Builder` | `mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)` |
| `Cluster.PrefetchPolicy.Builder` | `setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)` |
| `Cluster.PrefetchPolicy.Builder` | `setPerUpstreamPrefetchRatio(com.google.protobuf.DoubleValue.Builder builderForValue)`: Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. |
| `Cluster.PrefetchPolicy.Builder` | `setPerUpstreamPrefetchRatio(com.google.protobuf.DoubleValue value)`: Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. |
| `Cluster.PrefetchPolicy.Builder` | `setPredictivePrefetchRatio(com.google.protobuf.DoubleValue.Builder builderForValue)`: Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. |
| `Cluster.PrefetchPolicy.Builder` | `setPredictivePrefetchRatio(com.google.protobuf.DoubleValue value)`: Indicates how many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. |
| `Cluster.PrefetchPolicy.Builder` | `setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)` |
| `Cluster.PrefetchPolicy.Builder` | `setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)` |
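Like any generated protobuf message, a `Cluster.PrefetchPolicy` is assembled through this builder and finalized with `build()`. A minimal sketch of typical usage follows; the `io.envoyproxy.envoy.config.cluster.v3` package path is an assumption about how the generated classes are laid out on your classpath, and note that both ratio fields are `google.protobuf.DoubleValue` wrappers, not plain doubles:

```java
import com.google.protobuf.DoubleValue;
import io.envoyproxy.envoy.config.cluster.v3.Cluster;

public class PrefetchPolicyExample {
    public static void main(String[] args) {
        // Wrap each ratio in a DoubleValue; the wrapper type is what lets
        // hasPerUpstreamPrefetchRatio() distinguish "unset" from 0.0.
        Cluster.PrefetchPolicy policy = Cluster.PrefetchPolicy.newBuilder()
            .setPerUpstreamPrefetchRatio(
                DoubleValue.newBuilder().setValue(1.05).build())
            .setPredictivePrefetchRatio(
                DoubleValue.newBuilder().setValue(2.0).build())
            .build();

        System.out.println(policy.hasPerUpstreamPrefetchRatio()); // true
        System.out.println(policy.getPerUpstreamPrefetchRatio().getValue());
    }
}
```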
Methods inherited from class com.google.protobuf.GeneratedMessageV3.Builder:
`getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof, internalGetMapField, internalGetMutableMapField, isClean, markClean, newBuilderForField, onBuilt, onChanged, setUnknownFieldsProto3`

Methods inherited from class com.google.protobuf.AbstractMessage.Builder:
`findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeDelimitedFrom (2 overloads), mergeFrom (9 overloads), newUninitializedMessageException, toString`

Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder:
`addAll (2 overloads), mergeFrom, newUninitializedMessageException`

Methods inherited from class Object:
`equals, finalize, getClass, hashCode, notify, notifyAll, wait (3 overloads)`

public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()

Overrides: `internalGetFieldAccessorTable` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder clear()

Specified by: `clear` in interfaces `com.google.protobuf.Message.Builder` and `com.google.protobuf.MessageLite.Builder`
Overrides: `clear` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()

Specified by: `getDescriptorForType` in interfaces `com.google.protobuf.Message.Builder` and `com.google.protobuf.MessageOrBuilder`
Overrides: `getDescriptorForType` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy getDefaultInstanceForType()

Specified by: `getDefaultInstanceForType` in interfaces `com.google.protobuf.MessageLiteOrBuilder` and `com.google.protobuf.MessageOrBuilder`

public Cluster.PrefetchPolicy build()

Specified by: `build` in interfaces `com.google.protobuf.Message.Builder` and `com.google.protobuf.MessageLite.Builder`

public Cluster.PrefetchPolicy buildPartial()

Specified by: `buildPartial` in interfaces `com.google.protobuf.Message.Builder` and `com.google.protobuf.MessageLite.Builder`

public Cluster.PrefetchPolicy.Builder clone()

Specified by: `clone` in interfaces `com.google.protobuf.Message.Builder` and `com.google.protobuf.MessageLite.Builder`
Overrides: `clone` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

Specified by: `setField` in interface `com.google.protobuf.Message.Builder`
Overrides: `setField` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)

Specified by: `clearField` in interface `com.google.protobuf.Message.Builder`
Overrides: `clearField` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)

Specified by: `clearOneof` in interface `com.google.protobuf.Message.Builder`
Overrides: `clearOneof` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)

Specified by: `setRepeatedField` in interface `com.google.protobuf.Message.Builder`
Overrides: `setRepeatedField` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

Specified by: `addRepeatedField` in interface `com.google.protobuf.Message.Builder`
Overrides: `addRepeatedField` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder mergeFrom(com.google.protobuf.Message other)

Specified by: `mergeFrom` in interface `com.google.protobuf.Message.Builder`
Overrides: `mergeFrom` in class `com.google.protobuf.AbstractMessage.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder mergeFrom(Cluster.PrefetchPolicy other)

public final boolean isInitialized()

Specified by: `isInitialized` in interface `com.google.protobuf.MessageLiteOrBuilder`
Overrides: `isInitialized` in class `com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>`

public Cluster.PrefetchPolicy.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws IOException

Specified by: `mergeFrom` in interfaces `com.google.protobuf.Message.Builder` and `com.google.protobuf.MessageLite.Builder`
Overrides: `mergeFrom` in class `com.google.protobuf.AbstractMessage.Builder<Cluster.PrefetchPolicy.Builder>`
Throws: `IOException`

public boolean hasPerUpstreamPrefetchRatio()
Indicates how many streams (rounded up) can be anticipated per-upstream for each incoming stream. This is useful for high-QPS or latency-sensitive services. Prefetching will only be done if the upstream is healthy.

For example, if this is 2, then for an incoming HTTP/1.1 stream, 2 connections will be established: one for the new incoming stream, and one for a presumed follow-up stream. For HTTP/2, only one connection would be established by default, as one connection can serve both the original and presumed follow-up stream.

In steady state for non-multiplexed connections, a value of 1.5 would mean that if there were 100 active streams, there would be 100 connections in use and 50 connections prefetched. This might be a useful value for short-lived single-use connections, for example proxying HTTP/1.1 if keep-alive were false and each stream resulted in connection termination. It would likely be overkill for long-lived connections, such as TCP proxying SMTP or regular HTTP/1.1 with keep-alive. For long-lived traffic, a value of 1.05 would be more reasonable, where for every 100 connections, 5 prefetched connections would be in the queue in case of unexpected disconnects where the connection could not be reused.

If this value is not set, or set explicitly to one, Envoy will fetch as many connections as needed to serve streams in flight. This means that in steady state, if a connection is torn down, subsequent streams will pay an upstream-rtt latency penalty waiting for a connection to be prefetched. The ratio is limited somewhat arbitrarily to 3 because prefetching connections too aggressively can harm latency more than the prefetching helps.

`.google.protobuf.DoubleValue per_upstream_prefetch_ratio = 1 [(.validate.rules) = { ... }]`

Specified by: `hasPerUpstreamPrefetchRatio` in interface `Cluster.PrefetchPolicyOrBuilder`

public com.google.protobuf.DoubleValue getPerUpstreamPrefetchRatio()

Returns the per-upstream prefetch ratio; see `hasPerUpstreamPrefetchRatio()` for the field semantics.

Specified by: `getPerUpstreamPrefetchRatio` in interface `Cluster.PrefetchPolicyOrBuilder`

public Cluster.PrefetchPolicy.Builder setPerUpstreamPrefetchRatio(com.google.protobuf.DoubleValue value)

Sets the per-upstream prefetch ratio.

public Cluster.PrefetchPolicy.Builder setPerUpstreamPrefetchRatio(com.google.protobuf.DoubleValue.Builder builderForValue)

Sets the per-upstream prefetch ratio from a builder.

public Cluster.PrefetchPolicy.Builder mergePerUpstreamPrefetchRatio(com.google.protobuf.DoubleValue value)

Merges the given value into the per-upstream prefetch ratio.

public Cluster.PrefetchPolicy.Builder clearPerUpstreamPrefetchRatio()

Clears the per-upstream prefetch ratio.

public com.google.protobuf.DoubleValue.Builder getPerUpstreamPrefetchRatioBuilder()

Returns a builder for the per-upstream prefetch ratio.

public com.google.protobuf.DoubleValueOrBuilder getPerUpstreamPrefetchRatioOrBuilder()

Returns the per-upstream prefetch ratio, or its builder if one is in use.

Specified by: `getPerUpstreamPrefetchRatioOrBuilder` in interface `Cluster.PrefetchPolicyOrBuilder`

public boolean hasPredictivePrefetchRatio()
Indicates how many many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. This is currently supported for a subset of deterministic non-hash-based load-balancing algorithms (weighted round robin, random). Unlike per_upstream_prefetch_ratio this prefetches across the upstream instances in a cluster, doing best effort predictions of what upstream would be picked next and pre-establishing a connection. For example if prefetching is set to 2 for a round robin HTTP/2 cluster, on the first incoming stream, 2 connections will be prefetched - one to the first upstream for this cluster, one to the second on the assumption there will be a follow-up stream. Prefetching will be limited to one prefetch per configured upstream in the cluster. If this value is not set, or set explicitly to one, Envoy will fetch as many connections as needed to serve streams in flight, so during warm up and in steady state if a connection is closed (and per_upstream_prefetch_ratio is not set), there will be a latency hit for connection establishment. If both this and prefetch_ratio are set, Envoy will make sure both predicted needs are met, basically prefetching max(predictive-prefetch, per-upstream-prefetch), for each upstream. TODO(alyssawilk) per LB docs and LB overview docs when unhiding.
.google.protobuf.DoubleValue predictive_prefetch_ratio = 2 [(.validate.rules) = { ... }hasPredictivePrefetchRatio in interface Cluster.PrefetchPolicyOrBuilderpublic com.google.protobuf.DoubleValue getPredictivePrefetchRatio()
Indicates how many many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. This is currently supported for a subset of deterministic non-hash-based load-balancing algorithms (weighted round robin, random). Unlike per_upstream_prefetch_ratio this prefetches across the upstream instances in a cluster, doing best effort predictions of what upstream would be picked next and pre-establishing a connection. For example if prefetching is set to 2 for a round robin HTTP/2 cluster, on the first incoming stream, 2 connections will be prefetched - one to the first upstream for this cluster, one to the second on the assumption there will be a follow-up stream. Prefetching will be limited to one prefetch per configured upstream in the cluster. If this value is not set, or set explicitly to one, Envoy will fetch as many connections as needed to serve streams in flight, so during warm up and in steady state if a connection is closed (and per_upstream_prefetch_ratio is not set), there will be a latency hit for connection establishment. If both this and prefetch_ratio are set, Envoy will make sure both predicted needs are met, basically prefetching max(predictive-prefetch, per-upstream-prefetch), for each upstream. TODO(alyssawilk) per LB docs and LB overview docs when unhiding.
.google.protobuf.DoubleValue predictive_prefetch_ratio = 2 [(.validate.rules) = { ... }getPredictivePrefetchRatio in interface Cluster.PrefetchPolicyOrBuilderpublic Cluster.PrefetchPolicy.Builder setPredictivePrefetchRatio(com.google.protobuf.DoubleValue value)
Indicates how many many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. This is currently supported for a subset of deterministic non-hash-based load-balancing algorithms (weighted round robin, random). Unlike per_upstream_prefetch_ratio this prefetches across the upstream instances in a cluster, doing best effort predictions of what upstream would be picked next and pre-establishing a connection. For example if prefetching is set to 2 for a round robin HTTP/2 cluster, on the first incoming stream, 2 connections will be prefetched - one to the first upstream for this cluster, one to the second on the assumption there will be a follow-up stream. Prefetching will be limited to one prefetch per configured upstream in the cluster. If this value is not set, or set explicitly to one, Envoy will fetch as many connections as needed to serve streams in flight, so during warm up and in steady state if a connection is closed (and per_upstream_prefetch_ratio is not set), there will be a latency hit for connection establishment. If both this and prefetch_ratio are set, Envoy will make sure both predicted needs are met, basically prefetching max(predictive-prefetch, per-upstream-prefetch), for each upstream. TODO(alyssawilk) per LB docs and LB overview docs when unhiding.
.google.protobuf.DoubleValue predictive_prefetch_ratio = 2 [(.validate.rules) = { ... }public Cluster.PrefetchPolicy.Builder setPredictivePrefetchRatio(com.google.protobuf.DoubleValue.Builder builderForValue)
Indicates how many many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. This is currently supported for a subset of deterministic non-hash-based load-balancing algorithms (weighted round robin, random). Unlike per_upstream_prefetch_ratio this prefetches across the upstream instances in a cluster, doing best effort predictions of what upstream would be picked next and pre-establishing a connection. For example if prefetching is set to 2 for a round robin HTTP/2 cluster, on the first incoming stream, 2 connections will be prefetched - one to the first upstream for this cluster, one to the second on the assumption there will be a follow-up stream. Prefetching will be limited to one prefetch per configured upstream in the cluster. If this value is not set, or set explicitly to one, Envoy will fetch as many connections as needed to serve streams in flight, so during warm up and in steady state if a connection is closed (and per_upstream_prefetch_ratio is not set), there will be a latency hit for connection establishment. If both this and prefetch_ratio are set, Envoy will make sure both predicted needs are met, basically prefetching max(predictive-prefetch, per-upstream-prefetch), for each upstream. TODO(alyssawilk) per LB docs and LB overview docs when unhiding.
.google.protobuf.DoubleValue predictive_prefetch_ratio = 2 [(.validate.rules) = { ... }public Cluster.PrefetchPolicy.Builder mergePredictivePrefetchRatio(com.google.protobuf.DoubleValue value)
Indicates how many many streams (rounded up) can be anticipated across a cluster for each stream, useful for low QPS services. This is currently supported for a subset of deterministic non-hash-based load-balancing algorithms (weighted round robin, random). Unlike per_upstream_prefetch_ratio this prefetches across the upstream instances in a cluster, doing best effort predictions of what upstream would be picked next and pre-establishing a connection. For example if prefetching is set to 2 for a round robin HTTP/2 cluster, on the first incoming stream, 2 connections will be prefetched - one to the first upstream for this cluster, one to the second on the assumption there will be a follow-up stream. Prefetching will be limited to one prefetch per configured upstream in the cluster. If this value is not set, or set explicitly to one, Envoy will fetch as many connections as needed to serve streams in flight, so during warm up and in steady state if a connection is closed (and per_upstream_prefetch_ratio is not set), there will be a latency hit for connection establishment. If both this and prefetch_ratio are set, Envoy will make sure both predicted needs are met, basically prefetching max(predictive-prefetch, per-upstream-prefetch), for each upstream. TODO(alyssawilk) per LB docs and LB overview docs when unhiding.
.google.protobuf.DoubleValue predictive_prefetch_ratio = 2 [(.validate.rules) = { ... }]

public Cluster.PrefetchPolicy.Builder clearPredictivePrefetchRatio()
Indicates how many streams (rounded up) can be anticipated across a cluster for each stream; see the full predictive_prefetch_ratio field documentation under getPredictivePrefetchRatio() above.
.google.protobuf.DoubleValue predictive_prefetch_ratio = 2 [(.validate.rules) = { ... }]

public com.google.protobuf.DoubleValue.Builder getPredictivePrefetchRatioBuilder()
Indicates how many streams (rounded up) can be anticipated across a cluster for each stream; see the full predictive_prefetch_ratio field documentation under getPredictivePrefetchRatio() above.
.google.protobuf.DoubleValue predictive_prefetch_ratio = 2 [(.validate.rules) = { ... }]

public com.google.protobuf.DoubleValueOrBuilder getPredictivePrefetchRatioOrBuilder()
Indicates how many streams (rounded up) can be anticipated across a cluster for each stream; see the full predictive_prefetch_ratio field documentation under getPredictivePrefetchRatio() above.
.google.protobuf.DoubleValue predictive_prefetch_ratio = 2 [(.validate.rules) = { ... }]

Specified by:
getPredictivePrefetchRatioOrBuilder in interface Cluster.PrefetchPolicyOrBuilder

public final Cluster.PrefetchPolicy.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
Specified by:
setUnknownFields in interface com.google.protobuf.Message.Builder
Overrides:
setUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>

public final Cluster.PrefetchPolicy.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
Specified by:
mergeUnknownFields in interface com.google.protobuf.Message.Builder
Overrides:
mergeUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<Cluster.PrefetchPolicy.Builder>

Copyright © 2018–2020 The Envoy Project. All rights reserved.