/*
 * Copyright (c) 2012, 2017, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */
package java.util.stream;

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Collection;
import java.util.Comparator;
import java.util.Objects;
import java.util.Optional;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.BinaryOperator;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.IntFunction;
import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.function.ToDoubleFunction;
import java.util.function.ToIntFunction;
import java.util.function.ToLongFunction;
import java.util.function.UnaryOperator;

A sequence of elements supporting sequential and parallel aggregate operations. The following example illustrates an aggregate operation using Stream and IntStream:

    int sum = widgets.stream()
                     .filter(w -> w.getColor() == RED)
                     .mapToInt(w -> w.getWeight())
                     .sum();
In this example, widgets is a Collection<Widget>. We create a stream of Widget objects via Collection.stream(), filter it to produce a stream containing only the red widgets, and then transform it into a stream of int values representing the weight of each red widget. Then this stream is summed to produce a total weight.

In addition to Stream, which is a stream of object references, there are primitive specializations for IntStream, LongStream, and DoubleStream, all of which are referred to as "streams" and conform to the characteristics and restrictions described here.

To perform a computation, stream operations are composed into a stream pipeline. A stream pipeline consists of a source (which might be an array, a collection, a generator function, an I/O channel, etc), zero or more intermediate operations (which transform a stream into another stream, such as filter(Predicate)), and a terminal operation (which produces a result or side-effect, such as count() or forEach(Consumer)). Streams are lazy; computation on the source data is only performed when the terminal operation is initiated, and source elements are consumed only as needed.

A stream implementation is permitted significant latitude in optimizing the computation of the result. For example, a stream implementation is free to elide operations (or entire stages) from a stream pipeline -- and therefore elide invocation of behavioral parameters -- if it can prove that it would not affect the result of the computation. This means that side-effects of behavioral parameters may not always be executed and should not be relied upon, unless otherwise specified (such as by the terminal operations forEach and forEachOrdered). (For a specific example of such an optimization, see the API note documented on the count operation. For more detail, see the side-effects section of the stream package documentation.)

Collections and streams, while bearing some superficial similarities, have different goals. Collections are primarily concerned with the efficient management of, and access to, their elements. By contrast, streams do not provide a means to directly access or manipulate their elements, and are instead concerned with declaratively describing their source and the computational operations which will be performed in aggregate on that source. However, if the provided stream operations do not offer the desired functionality, the BaseStream.iterator() and BaseStream.spliterator() operations can be used to perform a controlled traversal.
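
For illustration (this sketch is not part of the original documentation), a controlled traversal over a hypothetical List<String> named names might look like:

    Iterator<String> it = names.stream()
                               .filter(s -> !s.isEmpty())
                               .iterator();
    while (it.hasNext()) {
        System.out.println(it.next());  // the caller advances the traversal at its own pace
    }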

A stream pipeline, like the "widgets" example above, can be viewed as a query on the stream source. Unless the source was explicitly designed for concurrent modification (such as a ConcurrentHashMap), unpredictable or erroneous behavior may result from modifying the stream source while it is being queried.

Most stream operations accept parameters that describe user-specified behavior, such as the lambda expression w -> w.getWeight() passed to mapToInt in the example above. To preserve correct behavior, these behavioral parameters:

  • must be non-interfering (they do not modify the stream source); and
  • in most cases must be stateless (their result should not depend on any state that might change during execution of the stream pipeline).

Such parameters are always instances of a functional interface such as Function, and are often lambda expressions or method references. Unless otherwise specified these parameters must be non-null.

A stream should be operated on (invoking an intermediate or terminal stream operation) only once. This rules out, for example, "forked" streams, where the same source feeds two or more pipelines, or multiple traversals of the same stream. A stream implementation may throw IllegalStateException if it detects that the stream is being reused. However, since some stream operations may return their receiver rather than a new stream object, it may not be possible to detect reuse in all cases.
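
As an illustrative sketch (not from the original text), a second traversal of the same stream instance fails:

    Stream<String> s = Stream.of("a", "b", "c");
    s.forEach(System.out::println);   // first use: fine
    s.forEach(System.out::println);   // second use: throws IllegalStateException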

Streams have a BaseStream.close() method and implement AutoCloseable. Operating on a stream after it has been closed will throw IllegalStateException. Most stream instances do not actually need to be closed after use, as they are backed by collections, arrays, or generating functions, which require no special resource management. Generally, only streams whose source is an IO channel, such as those returned by Files.lines(Path), will require closing. If a stream does require closing, it must be opened as a resource within a try-with-resources statement or similar control structure to ensure that it is closed promptly after its operations have completed.
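
For example, a stream over an I/O resource can be opened in a try-with-resources statement (a sketch assuming a java.nio.file.Path named path; Files.lines(Path) can also throw IOException, which a real caller must handle or declare):

    try (Stream<String> lines = Files.lines(path)) {
        long nonBlank = lines.filter(line -> !line.isEmpty())
                             .count();
    }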

Stream pipelines may execute either sequentially or in parallel. This execution mode is a property of the stream. Streams are created with an initial choice of sequential or parallel execution. (For example, Collection.stream() creates a sequential stream, and Collection.parallelStream() creates a parallel one.) This choice of execution mode may be modified by the BaseStream.sequential() or BaseStream.parallel() methods, and may be queried with the BaseStream.isParallel() method.
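
For instance, the widget computation shown earlier could be run in parallel simply by requesting a parallel stream from the source (a sketch reusing the hypothetical widgets collection from above):

    int sum = widgets.parallelStream()
                     .filter(w -> w.getColor() == RED)
                     .mapToInt(w -> w.getWeight())
                     .sum();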

Type parameters:
  • <T> – the type of the stream elements
See Also: IntStream, LongStream, DoubleStream, and the java.util.stream package documentation
Since: 1.8
public interface Stream<T> extends BaseStream<T, Stream<T>> {
Returns a stream consisting of the elements of this stream that match the given predicate.

This is an intermediate operation.

Params:
  • predicate – a non-interfering, stateless predicate to apply to each element to determine if it should be included
Returns: the new stream
Stream<T> filter(Predicate<? super T> predicate);
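
A minimal usage sketch (assuming a hypothetical List<String> named words):

    List<String> nonEmpty = words.stream()
                                 .filter(s -> !s.isEmpty())
                                 .collect(Collectors.toList());
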
Returns a stream consisting of the results of applying the given function to the elements of this stream.

This is an intermediate operation.

Params:
  • mapper – a non-interfering, stateless function to apply to each element
Type parameters:
  • <R> – The element type of the new stream
Returns: the new stream
<R> Stream<R> map(Function<? super T, ? extends R> mapper);
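
A minimal usage sketch (again assuming a hypothetical List<String> named words):

    List<Integer> lengths = words.stream()
                                 .map(String::length)
                                 .collect(Collectors.toList());
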
Returns an IntStream consisting of the results of applying the given function to the elements of this stream.

This is an intermediate operation.

Params:
  • mapper – a non-interfering, stateless function to apply to each element
Returns: the new stream
IntStream mapToInt(ToIntFunction<? super T> mapper);
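
For example (a sketch assuming a hypothetical List<String> named words), mapping to an IntStream avoids boxing before the sum:

    int totalLength = words.stream()
                           .mapToInt(String::length)
                           .sum();
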
Returns a LongStream consisting of the results of applying the given function to the elements of this stream.

This is an intermediate operation.

Params:
  • mapper – a non-interfering, stateless function to apply to each element
Returns: the new stream
LongStream mapToLong(ToLongFunction<? super T> mapper);
Returns a DoubleStream consisting of the results of applying the given function to the elements of this stream.

This is an intermediate operation.

Params:
  • mapper – a non-interfering, stateless function to apply to each element
Returns: the new stream
DoubleStream mapToDouble(ToDoubleFunction<? super T> mapper);
Returns a stream consisting of the results of replacing each element of this stream with the contents of a mapped stream produced by applying the provided mapping function to each element. Each mapped stream is closed after its contents have been placed into this stream. (If a mapped stream is null an empty stream is used, instead.)

This is an intermediate operation.

Params:
  • mapper – a non-interfering, stateless function to apply to each element which produces a stream of new values
Type parameters:
  • <R> – The element type of the new stream
API Note: The flatMap() operation has the effect of applying a one-to-many transformation to the elements of the stream, and then flattening the resulting elements into a new stream.

Examples.

If orders is a stream of purchase orders, and each purchase order contains a collection of line items, then the following produces a stream containing all the line items in all the orders:


    orders.flatMap(order -> order.getLineItems().stream())...

If path is the path to a file, then the following produces a stream of the words contained in that file:


    Stream<String> lines = Files.lines(path, StandardCharsets.UTF_8);
    Stream<String> words = lines.flatMap(line -> Stream.of(line.split(" +")));
The mapper function passed to flatMap splits a line, using a simple regular expression, into an array of words, and then creates a stream of words from that array.
Returns: the new stream
<R> Stream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper);
Returns an IntStream consisting of the results of replacing each element of this stream with the contents of a mapped stream produced by applying the provided mapping function to each element. Each mapped stream is closed after its contents have been placed into this stream. (If a mapped stream is null an empty stream is used, instead.)

This is an intermediate operation.

Params:
  • mapper – a non-interfering, stateless function to apply to each element which produces a stream of new values
See Also: flatMap(Function)
Returns: the new stream
IntStream flatMapToInt(Function<? super T, ? extends IntStream> mapper);
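
For example (a sketch assuming a hypothetical List<String> named words), String.chars() returns an IntStream of char values, so each word can be expanded into its characters:

    IntStream charValues = words.stream()
                                .flatMapToInt(String::chars);
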
Returns a LongStream consisting of the results of replacing each element of this stream with the contents of a mapped stream produced by applying the provided mapping function to each element. Each mapped stream is closed after its contents have been placed into this stream. (If a mapped stream is null an empty stream is used, instead.)

This is an intermediate operation.

Params:
  • mapper – a non-interfering, stateless function to apply to each element which produces a stream of new values
See Also: flatMap(Function)
Returns: the new stream
LongStream flatMapToLong(Function<? super T, ? extends LongStream> mapper);
Returns a DoubleStream consisting of the results of replacing each element of this stream with the contents of a mapped stream produced by applying the provided mapping function to each element. Each mapped stream is closed after its contents have been placed into this stream. (If a mapped stream is null an empty stream is used, instead.)

This is an intermediate operation.

Params:
  • mapper – a non-interfering, stateless function to apply to each element which produces a stream of new values
See Also: flatMap(Function)
Returns: the new stream
DoubleStream flatMapToDouble(Function<? super T, ? extends DoubleStream> mapper);
Returns a stream consisting of the distinct elements (according to Object.equals(Object)) of this stream.

For ordered streams, the selection of distinct elements is stable (for duplicated elements, the element appearing first in the encounter order is preserved.) For unordered streams, no stability guarantees are made.

This is a stateful intermediate operation.

API Note: Preserving stability for distinct() in parallel pipelines is relatively expensive (requires that the operation act as a full barrier, with substantial buffering overhead), and stability is often not needed. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significantly more efficient execution for distinct() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with distinct() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
Returns: the new stream
Stream<T> distinct();
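
A minimal usage sketch (assuming a hypothetical List<String> named words):

    List<String> unique = words.stream()
                               .distinct()
                               .collect(Collectors.toList());
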
Returns a stream consisting of the elements of this stream, sorted according to natural order. If the elements of this stream are not Comparable, a java.lang.ClassCastException may be thrown when the terminal operation is executed.

For ordered streams, the sort is stable. For unordered streams, no stability guarantees are made.

This is a stateful intermediate operation.

Returns: the new stream
Stream<T> sorted();
Returns a stream consisting of the elements of this stream, sorted according to the provided Comparator.

For ordered streams, the sort is stable. For unordered streams, no stability guarantees are made.

This is a stateful intermediate operation.

Params:
  • comparator – a non-interfering, stateless Comparator to be used to compare stream elements
Returns: the new stream
Stream<T> sorted(Comparator<? super T> comparator);
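
A usage sketch (assuming a hypothetical List<String> named words), sorting by length:

    List<String> byLength = words.stream()
                                 .sorted(Comparator.comparingInt(String::length))
                                 .collect(Collectors.toList());
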
Returns a stream consisting of the elements of this stream, additionally performing the provided action on each element as elements are consumed from the resulting stream.

This is an intermediate operation.

For parallel stream pipelines, the action may be called at whatever time and in whatever thread the element is made available by the upstream operation. If the action modifies shared state, it is responsible for providing the required synchronization.

Params:
  • action – a non-interfering action to perform on the elements as they are consumed from the stream
API Note: This method exists mainly to support debugging, where you want to see the elements as they flow past a certain point in a pipeline:

    Stream.of("one", "two", "three", "four")
        .filter(e -> e.length() > 3)
        .peek(e -> System.out.println("Filtered value: " + e))
        .map(String::toUpperCase)
        .peek(e -> System.out.println("Mapped value: " + e))
        .collect(Collectors.toList());

In cases where the stream implementation is able to optimize away the production of some or all the elements (such as with short-circuiting operations like findFirst, or in the example described in count), the action will not be invoked for those elements.

Returns: the new stream
Stream<T> peek(Consumer<? super T> action);
Returns a stream consisting of the elements of this stream, truncated to be no longer than maxSize in length.

This is a short-circuiting stateful intermediate operation.

Params:
  • maxSize – the number of elements the stream should be limited to
Throws:
  • IllegalArgumentException – if maxSize is negative
API Note: While limit() is generally a cheap operation on sequential stream pipelines, it can be quite expensive on ordered parallel pipelines, especially for large values of maxSize, since limit(n) is constrained to return not just any n elements, but the first n elements in the encounter order. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significant speedups of limit() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with limit() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
Returns: the new stream
Stream<T> limit(long maxSize);
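
A minimal usage sketch (assuming a hypothetical List<String> named words):

    List<String> firstTen = words.stream()
                                 .limit(10)
                                 .collect(Collectors.toList());
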
Returns a stream consisting of the remaining elements of this stream after discarding the first n elements of the stream. If this stream contains fewer than n elements then an empty stream will be returned.

This is a stateful intermediate operation.

Params:
  • n – the number of leading elements to skip
Throws:
  • IllegalArgumentException – if n is negative
API Note: While skip() is generally a cheap operation on sequential stream pipelines, it can be quite expensive on ordered parallel pipelines, especially for large values of n, since skip(n) is constrained to skip not just any n elements, but the first n elements in the encounter order. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significant speedups of skip() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with skip() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
Returns: the new stream
Stream<T> skip(long n);
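
For example (a sketch assuming a hypothetical List<String> named words), skip and limit can be combined to select a "page" of elements:

    List<String> secondPage = words.stream()
                                   .skip(10)
                                   .limit(10)
                                   .collect(Collectors.toList());
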
Returns, if this stream is ordered, a stream consisting of the longest prefix of elements taken from this stream that match the given predicate. Otherwise returns, if this stream is unordered, a stream consisting of a subset of elements taken from this stream that match the given predicate.

If this stream is ordered then the longest prefix is a contiguous sequence of elements of this stream that match the given predicate. The first element of the sequence is the first element of this stream, and the element immediately following the last element of the sequence does not match the given predicate.

If this stream is unordered, and some (but not all) elements of this stream match the given predicate, then the behavior of this operation is nondeterministic; it is free to take any subset of matching elements (which includes the empty set).

Independent of whether this stream is ordered or unordered if all elements of this stream match the given predicate then this operation takes all elements (the result is the same as the input), or if no elements of the stream match the given predicate then no elements are taken (the result is an empty stream).

This is a short-circuiting stateful intermediate operation.

Params:
  • predicate – a non-interfering, stateless predicate to apply to elements to determine the longest prefix of elements
Implementation Requirements: The default implementation obtains the spliterator of this stream, wraps that spliterator so as to support the semantics of this operation on traversal, and returns a new stream associated with the wrapped spliterator. The returned stream preserves the execution characteristics of this stream (namely parallel or sequential execution as per BaseStream.isParallel()) but the wrapped spliterator may choose to not support splitting. When the returned stream is closed, the close handlers for both the returned and this stream are invoked.
API Note: While takeWhile() is generally a cheap operation on sequential stream pipelines, it can be quite expensive on ordered parallel pipelines, since the operation is constrained to return not just any valid prefix, but the longest prefix of elements in the encounter order. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significant speedups of takeWhile() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with takeWhile() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
Returns: the new stream
Since: 9
default Stream<T> takeWhile(Predicate<? super T> predicate) {
    Objects.requireNonNull(predicate);
    // Reuses the unordered spliterator, which, when an encounter order is present,
    // is safe to use as long as it is configured not to split
    return StreamSupport.stream(
            new WhileOps.UnorderedWhileSpliterator.OfRef.Taking<>(spliterator(), true, predicate),
            isParallel()).onClose(this::close);
}
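
An illustrative sketch (not from the original documentation) on an ordered stream:

    // prints 1, 2, 3 -- stops at the first element that fails the predicate
    Stream.of(1, 2, 3, 4, 1, 2)
          .takeWhile(n -> n < 4)
          .forEach(System.out::println);
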
Returns, if this stream is ordered, a stream consisting of the remaining elements of this stream after dropping the longest prefix of elements that match the given predicate. Otherwise returns, if this stream is unordered, a stream consisting of the remaining elements of this stream after dropping a subset of elements that match the given predicate.

If this stream is ordered then the longest prefix is a contiguous sequence of elements of this stream that match the given predicate. The first element of the sequence is the first element of this stream, and the element immediately following the last element of the sequence does not match the given predicate.

If this stream is unordered, and some (but not all) elements of this stream match the given predicate, then the behavior of this operation is nondeterministic; it is free to drop any subset of matching elements (which includes the empty set).

Independent of whether this stream is ordered or unordered if all elements of this stream match the given predicate then this operation drops all elements (the result is an empty stream), or if no elements of the stream match the given predicate then no elements are dropped (the result is the same as the input).

This is a stateful intermediate operation.

Params:
  • predicate – a non-interfering, stateless predicate to apply to elements to determine the longest prefix of elements
Implementation Requirements: The default implementation obtains the spliterator of this stream, wraps that spliterator so as to support the semantics of this operation on traversal, and returns a new stream associated with the wrapped spliterator. The returned stream preserves the execution characteristics of this stream (namely parallel or sequential execution as per BaseStream.isParallel()) but the wrapped spliterator may choose to not support splitting. When the returned stream is closed, the close handlers for both the returned and this stream are invoked.
API Note: While dropWhile() is generally a cheap operation on sequential stream pipelines, it can be quite expensive on ordered parallel pipelines, since the operation is constrained to return not just any valid prefix, but the longest prefix of elements in the encounter order. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significant speedups of dropWhile() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with dropWhile() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
Returns: the new stream
Since: 9
default Stream<T> dropWhile(Predicate<? super T> predicate) {
    Objects.requireNonNull(predicate);
    // Reuses the unordered spliterator, which, when an encounter order is present,
    // is safe to use as long as it is configured not to split
    return StreamSupport.stream(
            new WhileOps.UnorderedWhileSpliterator.OfRef.Dropping<>(spliterator(), true, predicate),
            isParallel()).onClose(this::close);
}
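
An illustrative sketch (not from the original documentation) on an ordered stream:

    // prints 4, 1, 2 -- discards the longest matching prefix and keeps the rest
    Stream.of(1, 2, 3, 4, 1, 2)
          .dropWhile(n -> n < 4)
          .forEach(System.out::println);
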
Performs an action for each element of this stream.

This is a terminal operation.

The behavior of this operation is explicitly nondeterministic. For parallel stream pipelines, this operation does not guarantee to respect the encounter order of the stream, as doing so would sacrifice the benefit of parallelism. For any given element, the action may be performed at whatever time and in whatever thread the library chooses. If the action accesses shared state, it is responsible for providing the required synchronization.

Params:
  • action – a non-interfering action to perform on the elements
void forEach(Consumer<? super T> action);
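
For example (a sketch assuming a hypothetical List<String> named words), on a parallel stream forEach may print in any order, whereas forEachOrdered (described next) respects the encounter order:

    words.parallelStream().forEach(System.out::println);         // any order
    words.parallelStream().forEachOrdered(System.out::println);  // encounter order
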
Performs an action for each element of this stream, in the encounter order of the stream if the stream has a defined encounter order.

This is a terminal operation.

This operation processes the elements one at a time, in encounter order if one exists. Performing the action for one element happens-before performing the action for subsequent elements, but for any given element, the action may be performed in whatever thread the library chooses.

Params:
  • action – a non-interfering action to perform on the elements
See Also: forEach(Consumer)
void forEachOrdered(Consumer<? super T> action);
Returns an array containing the elements of this stream.

This is a terminal operation.

Returns: an array, whose runtime component type is Object, containing the elements of this stream
Object[] toArray();
Returns an array containing the elements of this stream, using the provided generator function to allocate the returned array, as well as any additional arrays that might be required for a partitioned execution or for resizing.

This is a terminal operation.

Params:
  • generator – a function which produces a new array of the desired type and the provided length
Type parameters:
  • <A> – the component type of the resulting array
Throws:
  • ArrayStoreException – if the runtime type of any element of this stream is not assignable to the runtime component type of the generated array
API Note: The generator function takes an integer, which is the size of the desired array, and produces an array of the desired size. This can be concisely expressed with an array constructor reference:

    Person[] men = people.stream()
                         .filter(p -> p.getGender() == MALE)
                         .toArray(Person[]::new);
Returns: an array containing the elements in this stream
<A> A[] toArray(IntFunction<A[]> generator);
Performs a reduction on the elements of this stream, using the provided identity value and an associative accumulation function, and returns the reduced value. This is equivalent to:

    T result = identity;
    for (T element : this stream)
        result = accumulator.apply(result, element)
    return result;
but is not constrained to execute sequentially.

The identity value must be an identity for the accumulator function. This means that for all t, accumulator.apply(identity, t) is equal to t. The accumulator function must be an associative function.

This is a terminal operation.

Params:
  • identity – the identity value for the accumulating function
  • accumulator – an associative, non-interfering, stateless function for combining two values
API Note: Sum, min, max, average, and string concatenation are all special cases of reduction. Summing a stream of numbers can be expressed as:

    Integer sum = integers.reduce(0, (a, b) -> a+b);
or:

    Integer sum = integers.reduce(0, Integer::sum);

While this may seem a more roundabout way to perform an aggregation compared to simply mutating a running total in a loop, reduction operations parallelize more gracefully, without needing additional synchronization and with greatly reduced risk of data races.

Returns: the result of the reduction
T reduce(T identity, BinaryOperator<T> accumulator);
Performs a reduction on the elements of this stream, using an associative accumulation function, and returns an Optional describing the reduced value, if any. This is equivalent to:

    boolean foundAny = false;
    T result = null;
    for (T element : this stream) {
        if (!foundAny) {
            foundAny = true;
            result = element;
        }
        else
            result = accumulator.apply(result, element);
    }
    return foundAny ? Optional.of(result) : Optional.empty();
but is not constrained to execute sequentially.

The accumulator function must be an associative function.

This is a terminal operation.

Params:
  • accumulator – an associative, non-interfering, stateless function for combining two values
Throws:
  • NullPointerException – if the result of the reduction is null
See Also:
  • reduce(Object, BinaryOperator)
  • min(Comparator)
  • max(Comparator)
Returns:an Optional describing the result of the reduction
/** * Performs a <a href="package-summary.html#Reduction">reduction</a> on the * elements of this stream, using an * <a href="package-summary.html#Associativity">associative</a> accumulation * function, and returns an {@code Optional} describing the reduced value, * if any. This is equivalent to: * <pre>{@code * boolean foundAny = false; * T result = null; * for (T element : this stream) { * if (!foundAny) { * foundAny = true; * result = element; * } * else * result = accumulator.apply(result, element); * } * return foundAny ? Optional.of(result) : Optional.empty(); * }</pre> * * but is not constrained to execute sequentially. * * <p>The {@code accumulator} function must be an * <a href="package-summary.html#Associativity">associative</a> function. * * <p>This is a <a href="package-summary.html#StreamOps">terminal * operation</a>. * * @param accumulator an <a href="package-summary.html#Associativity">associative</a>, * <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * function for combining two values * @return an {@link Optional} describing the result of the reduction * @throws NullPointerException if the result of the reduction is null * @see #reduce(Object, BinaryOperator) * @see #min(Comparator) * @see #max(Comparator) */
Optional<T> reduce(BinaryOperator<T> accumulator);
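For illustration, a minimal sketch (the words list below is hypothetical): selecting the longest string is a reduction whose accumulator keeps the longer of two elements, and the Optional result is empty only if the stream is empty:

    List<String> words = List.of("stream", "reduce", "optional");   // hypothetical sample data
    Optional<String> longest = words.stream()
                                    .reduce((a, b) -> b.length() > a.length() ? b : a);
    longest.ifPresent(System.out::println);   // prints "optional"
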
Performs a reduction on the elements of this stream, using the provided identity, accumulation and combining functions. This is equivalent to:

    U result = identity;
    for (T element : this stream)
        result = accumulator.apply(result, element)
    return result;
but is not constrained to execute sequentially.

The identity value must be an identity for the combiner function. This means that for all u, combiner(identity, u) is equal to u. Additionally, the combiner function must be compatible with the accumulator function; for all u and t, the following must hold:


    combiner.apply(u, accumulator.apply(identity, t)) == accumulator.apply(u, t)

This is a terminal operation.

Params:
  • identity – the identity value for the combiner function
  • accumulator – an associative, non-interfering, stateless function for incorporating an additional element into a result
  • combiner – an associative, non-interfering, stateless function for combining two values, which must be compatible with the accumulator function
Type parameters:
  • <U> – The type of the result
See Also:
  • reduce(BinaryOperator)
  • reduce(Object, BinaryOperator)
API Note: Many reductions using this form can be represented more simply by an explicit combination of map and reduce operations. The accumulator function acts as a fused mapper and accumulator, which can sometimes be more efficient than separate mapping and reduction, such as when knowing the previously reduced value allows you to avoid some computation.
Returns:the result of the reduction
/** * Performs a <a href="package-summary.html#Reduction">reduction</a> on the * elements of this stream, using the provided identity, accumulation and * combining functions. This is equivalent to: * <pre>{@code * U result = identity; * for (T element : this stream) * result = accumulator.apply(result, element) * return result; * }</pre> * * but is not constrained to execute sequentially. * * <p>The {@code identity} value must be an identity for the combiner * function. This means that for all {@code u}, {@code combiner(identity, u)} * is equal to {@code u}. Additionally, the {@code combiner} function * must be compatible with the {@code accumulator} function; for all * {@code u} and {@code t}, the following must hold: * <pre>{@code * combiner.apply(u, accumulator.apply(identity, t)) == accumulator.apply(u, t) * }</pre> * * <p>This is a <a href="package-summary.html#StreamOps">terminal * operation</a>. * * @apiNote Many reductions using this form can be represented more simply * by an explicit combination of {@code map} and {@code reduce} operations. * The {@code accumulator} function acts as a fused mapper and accumulator, * which can sometimes be more efficient than separate mapping and reduction, * such as when knowing the previously reduced value allows you to avoid * some computation. * * @param <U> The type of the result * @param identity the identity value for the combiner function * @param accumulator an <a href="package-summary.html#Associativity">associative</a>, * <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * function for incorporating an additional element into a result * @param combiner an <a href="package-summary.html#Associativity">associative</a>, * <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * function for combining two values, which must be * compatible with the accumulator function * @return the result of the reduction * @see #reduce(BinaryOperator) * @see #reduce(Object, BinaryOperator) */
<U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner);
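As a hedged sketch of the three-argument form (the words list is hypothetical), the accumulator below folds a String into a running int total while the combiner merges partial totals produced by a parallel execution:

    List<String> words = List.of("a", "bb", "ccc");                    // hypothetical sample data
    int totalLength = words.stream()
                           .reduce(0,
                                   (sum, s) -> sum + s.length(),       // accumulator: fold one element into the total
                                   Integer::sum);                      // combiner: merge partial totals
    // totalLength == 6; the same result as words.stream().mapToInt(String::length).sum()
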
Performs a mutable reduction operation on the elements of this stream. A mutable reduction is one in which the reduced value is a mutable result container, such as an ArrayList, and elements are incorporated by updating the state of the result rather than by replacing the result. This produces a result equivalent to:

    R result = supplier.get();
    for (T element : this stream)
        accumulator.accept(result, element);
    return result;

Like reduce(Object, BinaryOperator), collect operations can be parallelized without requiring additional synchronization.

This is a terminal operation.

Params:
  • supplier – a function that creates a new mutable result container. For a parallel execution, this function may be called multiple times and must return a fresh value each time.
  • accumulator – an associative, non-interfering, stateless function that must fold an element into a result container.
  • combiner – an associative, non-interfering, stateless function that accepts two partial result containers and merges them, which must be compatible with the accumulator function. The combiner function must fold the elements from the second result container into the first result container.
Type parameters:
  • <R> – the type of the mutable result container
API Note: There are many existing classes in the JDK whose signatures are well-suited for use with method references as arguments to collect(). For example, the following will accumulate strings into an ArrayList:

    List<String> asList = stringStream.collect(ArrayList::new, ArrayList::add,
                                               ArrayList::addAll);

The following takes a stream of strings and concatenates them into a single string:


    String concat = stringStream.collect(StringBuilder::new, StringBuilder::append,
                                         StringBuilder::append)
                                .toString();
Returns:the result of the reduction
/** * Performs a <a href="package-summary.html#MutableReduction">mutable * reduction</a> operation on the elements of this stream. A mutable * reduction is one in which the reduced value is a mutable result container, * such as an {@code ArrayList}, and elements are incorporated by updating * the state of the result rather than by replacing the result. This * produces a result equivalent to: * <pre>{@code * R result = supplier.get(); * for (T element : this stream) * accumulator.accept(result, element); * return result; * }</pre> * * <p>Like {@link #reduce(Object, BinaryOperator)}, {@code collect} operations * can be parallelized without requiring additional synchronization. * * <p>This is a <a href="package-summary.html#StreamOps">terminal * operation</a>. * * @apiNote There are many existing classes in the JDK whose signatures are * well-suited for use with method references as arguments to {@code collect()}. * For example, the following will accumulate strings into an {@code ArrayList}: * <pre>{@code * List<String> asList = stringStream.collect(ArrayList::new, ArrayList::add, * ArrayList::addAll); * }</pre> * * <p>The following will take a stream of strings and concatenates them into a * single string: * <pre>{@code * String concat = stringStream.collect(StringBuilder::new, StringBuilder::append, * StringBuilder::append) * .toString(); * }</pre> * * @param <R> the type of the mutable result container * @param supplier a function that creates a new mutable result container. * For a parallel execution, this function may be called * multiple times and must return a fresh value each time. * @param accumulator an <a href="package-summary.html#Associativity">associative</a>, * <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * function that must fold an element into a result * container. * @param combiner an <a href="package-summary.html#Associativity">associative</a>, * <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * function that accepts two partial result containers * and merges them, which must be compatible with the * accumulator function. The combiner function must fold * the elements from the second result container into the * first result container. * @return the result of the reduction */
<R> R collect(Supplier<R> supplier, BiConsumer<R, ? super T> accumulator, BiConsumer<R, R> combiner);
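A further sketch, assuming a small hypothetical input stream: a TreeSet works as the mutable result container, giving a sorted, de-duplicated result without extra synchronization even when run in parallel:

    Stream<String> fruit = Stream.of("pear", "apple", "pear");   // hypothetical sample data
    TreeSet<String> sorted = fruit.collect(TreeSet::new,         // supplier: fresh container per partition
                                           TreeSet::add,         // accumulator: fold an element into a container
                                           TreeSet::addAll);     // combiner: merge two partial containers
    // sorted contains [apple, pear]
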
Performs a mutable reduction operation on the elements of this stream using a Collector. A Collector encapsulates the functions used as arguments to collect(Supplier, BiConsumer, BiConsumer), allowing for reuse of collection strategies and composition of collect operations such as multiple-level grouping or partitioning.

If the stream is parallel, and the Collector is concurrent, and either the stream is unordered or the collector is unordered, then a concurrent reduction will be performed (see Collector for details on concurrent reduction.)

This is a terminal operation.

When executed in parallel, multiple intermediate results may be instantiated, populated, and merged so as to maintain isolation of mutable data structures. Therefore, even when executed in parallel with non-thread-safe data structures (such as ArrayList), no additional synchronization is needed for a parallel reduction.

Params:
  • collector – the Collector describing the reduction
Type parameters:
  • <R> – the type of the result
  • <A> – the intermediate accumulation type of the Collector
See Also:
  • collect(Supplier, BiConsumer, BiConsumer)
  • Collectors
API Note: The following will accumulate strings into an ArrayList:

    List<String> asList = stringStream.collect(Collectors.toList());

The following will classify Person objects by city:


    Map<String, List<Person>> peopleByCity
        = personStream.collect(Collectors.groupingBy(Person::getCity));

The following will classify Person objects by state and city, cascading two Collectors together:


    Map<String, Map<String, List<Person>>> peopleByStateAndCity
        = personStream.collect(Collectors.groupingBy(Person::getState,
                                                     Collectors.groupingBy(Person::getCity)));
Returns:the result of the reduction
/** * Performs a <a href="package-summary.html#MutableReduction">mutable * reduction</a> operation on the elements of this stream using a * {@code Collector}. A {@code Collector} * encapsulates the functions used as arguments to * {@link #collect(Supplier, BiConsumer, BiConsumer)}, allowing for reuse of * collection strategies and composition of collect operations such as * multiple-level grouping or partitioning. * * <p>If the stream is parallel, and the {@code Collector} * is {@link Collector.Characteristics#CONCURRENT concurrent}, and * either the stream is unordered or the collector is * {@link Collector.Characteristics#UNORDERED unordered}, * then a concurrent reduction will be performed (see {@link Collector} for * details on concurrent reduction.) * * <p>This is a <a href="package-summary.html#StreamOps">terminal * operation</a>. * * <p>When executed in parallel, multiple intermediate results may be * instantiated, populated, and merged so as to maintain isolation of * mutable data structures. Therefore, even when executed in parallel * with non-thread-safe data structures (such as {@code ArrayList}), no * additional synchronization is needed for a parallel reduction. * * @apiNote * The following will accumulate strings into an ArrayList: * <pre>{@code * List<String> asList = stringStream.collect(Collectors.toList()); * }</pre> * * <p>The following will classify {@code Person} objects by city: * <pre>{@code * Map<String, List<Person>> peopleByCity * = personStream.collect(Collectors.groupingBy(Person::getCity)); * }</pre> * * <p>The following will classify {@code Person} objects by state and city, * cascading two {@code Collector}s together: * <pre>{@code * Map<String, Map<String, List<Person>>> peopleByStateAndCity * = personStream.collect(Collectors.groupingBy(Person::getState, * Collectors.groupingBy(Person::getCity))); * }</pre> * * @param <R> the type of the result * @param <A> the intermediate accumulation type of the {@code Collector} * @param collector the {@code Collector} describing the reduction * @return the result of the reduction * @see #collect(Supplier, BiConsumer, BiConsumer) * @see Collectors */
<R, A> R collect(Collector<? super T, A, R> collector);
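As one more hedged example (the input stream is hypothetical), the StringBuilder concatenation shown for the three-argument collect can be expressed with a prepackaged Collector:

    Stream<String> parts = Stream.of("a", "b", "c");             // hypothetical sample data
    String joined = parts.collect(Collectors.joining(", "));     // "a, b, c"
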
Returns the minimum element of this stream according to the provided Comparator. This is a special case of a reduction.

This is a terminal operation.

Params:
  • comparator – a non-interfering, stateless Comparator to compare elements of this stream
Throws:
  • NullPointerException – if the minimum element is null
Returns:an Optional describing the minimum element of this stream, or an empty Optional if the stream is empty
/** * Returns the minimum element of this stream according to the provided * {@code Comparator}. This is a special case of a * <a href="package-summary.html#Reduction">reduction</a>. * * <p>This is a <a href="package-summary.html#StreamOps">terminal operation</a>. * * @param comparator a <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * {@code Comparator} to compare elements of this stream * @return an {@code Optional} describing the minimum element of this stream, * or an empty {@code Optional} if the stream is empty * @throws NullPointerException if the minimum element is null */
Optional<T> min(Comparator<? super T> comparator);
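A minimal sketch, assuming a hypothetical words list:

    List<String> words = List.of("stream", "api", "reduction");   // hypothetical sample data
    Optional<String> shortest = words.stream()
                                     .min(Comparator.comparingInt(String::length));   // Optional[api]
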
Returns the maximum element of this stream according to the provided Comparator. This is a special case of a reduction.

This is a terminal operation.

Params:
  • comparator – a non-interfering, stateless Comparator to compare elements of this stream
Throws:
  • NullPointerException – if the maximum element is null
Returns:an Optional describing the maximum element of this stream, or an empty Optional if the stream is empty
/** * Returns the maximum element of this stream according to the provided * {@code Comparator}. This is a special case of a * <a href="package-summary.html#Reduction">reduction</a>. * * <p>This is a <a href="package-summary.html#StreamOps">terminal * operation</a>. * * @param comparator a <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * {@code Comparator} to compare elements of this stream * @return an {@code Optional} describing the maximum element of this stream, * or an empty {@code Optional} if the stream is empty * @throws NullPointerException if the maximum element is null */
Optional<T> max(Comparator<? super T> comparator);
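Similarly, a minimal sketch with a hypothetical words list, here using natural ordering:

    List<String> words = List.of("stream", "api", "reduction");   // hypothetical sample data
    Optional<String> last = words.stream()
                                 .max(Comparator.naturalOrder());   // Optional[stream]
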
Returns the count of elements in this stream. This is a special case of a reduction and is equivalent to:

    return mapToLong(e -> 1L).sum();

This is a terminal operation.

API Note: An implementation may choose to not execute the stream pipeline (either sequentially or in parallel) if it is capable of computing the count directly from the stream source. In such cases no source elements will be traversed and no intermediate operations will be evaluated. Behavioral parameters with side-effects, which are strongly discouraged except for harmless cases such as debugging, may be affected. For example, consider the following stream:

    List<String> l = Arrays.asList("A", "B", "C", "D");
    long count = l.stream().peek(System.out::println).count();
The number of elements covered by the stream source, a List, is known and the intermediate operation, peek, does not inject into or remove elements from the stream (as may be the case for flatMap or filter operations). Thus the count is the size of the List and there is no need to execute the pipeline and, as a side-effect, print out the list elements.
Returns:the count of elements in this stream
/** * Returns the count of elements in this stream. This is a special case of * a <a href="package-summary.html#Reduction">reduction</a> and is * equivalent to: * <pre>{@code * return mapToLong(e -> 1L).sum(); * }</pre> * * <p>This is a <a href="package-summary.html#StreamOps">terminal operation</a>. * * @apiNote * An implementation may choose to not execute the stream pipeline (either * sequentially or in parallel) if it is capable of computing the count * directly from the stream source. In such cases no source elements will * be traversed and no intermediate operations will be evaluated. * Behavioral parameters with side-effects, which are strongly discouraged * except for harmless cases such as debugging, may be affected. For * example, consider the following stream: * <pre>{@code * List<String> l = Arrays.asList("A", "B", "C", "D"); * long count = l.stream().peek(System.out::println).count(); * }</pre> * The number of elements covered by the stream source, a {@code List}, is * known and the intermediate operation, {@code peek}, does not inject into * or remove elements from the stream (as may be the case for * {@code flatMap} or {@code filter} operations). Thus the count is the * size of the {@code List} and there is no need to execute the pipeline * and, as a side-effect, print out the list elements. * * @return the count of elements in this stream */
long count();
Returns whether any elements of this stream match the provided predicate. May not evaluate the predicate on all elements if not necessary for determining the result. If the stream is empty then false is returned and the predicate is not evaluated.

This is a short-circuiting terminal operation.

Params:
  • predicate – a non-interfering, stateless predicate to apply to elements of this stream
API Note: This method evaluates the existential quantification of the predicate over the elements of the stream (for some x P(x)).
Returns:true if any elements of the stream match the provided predicate, otherwise false
/** * Returns whether any elements of this stream match the provided * predicate. May not evaluate the predicate on all elements if not * necessary for determining the result. If the stream is empty then * {@code false} is returned and the predicate is not evaluated. * * <p>This is a <a href="package-summary.html#StreamOps">short-circuiting * terminal operation</a>. * * @apiNote * This method evaluates the <em>existential quantification</em> of the * predicate over the elements of the stream (for some x P(x)). * * @param predicate a <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * predicate to apply to elements of this stream * @return {@code true} if any elements of the stream match the provided * predicate, otherwise {@code false} */
boolean anyMatch(Predicate<? super T> predicate);
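For illustration (the data is hypothetical):

    List<String> words = List.of("alpha", "beta", "gamma");           // hypothetical sample data
    boolean anyLong = words.stream().anyMatch(s -> s.length() > 4);   // true: "alpha" and "gamma" qualify
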
Returns whether all elements of this stream match the provided predicate. May not evaluate the predicate on all elements if not necessary for determining the result. If the stream is empty then true is returned and the predicate is not evaluated.

This is a short-circuiting terminal operation.

Params:
  • predicate – a non-interfering, stateless predicate to apply to elements of this stream
API Note: This method evaluates the universal quantification of the predicate over the elements of the stream (for all x P(x)). If the stream is empty, the quantification is said to be vacuously satisfied and is always true (regardless of P(x)).
Returns:true if either all elements of the stream match the provided predicate or the stream is empty, otherwise false
/** * Returns whether all elements of this stream match the provided predicate. * May not evaluate the predicate on all elements if not necessary for * determining the result. If the stream is empty then {@code true} is * returned and the predicate is not evaluated. * * <p>This is a <a href="package-summary.html#StreamOps">short-circuiting * terminal operation</a>. * * @apiNote * This method evaluates the <em>universal quantification</em> of the * predicate over the elements of the stream (for all x P(x)). If the * stream is empty, the quantification is said to be <em>vacuously * satisfied</em> and is always {@code true} (regardless of P(x)). * * @param predicate a <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * predicate to apply to elements of this stream * @return {@code true} if either all elements of the stream match the * provided predicate or the stream is empty, otherwise {@code false} */
boolean allMatch(Predicate<? super T> predicate);
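A short sketch, including the vacuous-truth case described above (the data is hypothetical):

    List<String> words = List.of("alpha", "beta", "gamma");               // hypothetical sample data
    boolean allNonEmpty = words.stream().allMatch(s -> !s.isEmpty());     // true
    boolean vacuous = Stream.<String>empty().allMatch(s -> false);        // true: no elements to fail the predicate
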
Returns whether no elements of this stream match the provided predicate. May not evaluate the predicate on all elements if not necessary for determining the result. If the stream is empty then true is returned and the predicate is not evaluated.

This is a short-circuiting terminal operation.

Params:
  • predicate – a non-interfering, stateless predicate to apply to elements of this stream
API Note: This method evaluates the universal quantification of the negated predicate over the elements of the stream (for all x ~P(x)). If the stream is empty, the quantification is said to be vacuously satisfied and is always true, regardless of P(x).
Returns:true if either no elements of the stream match the provided predicate or the stream is empty, otherwise false
/** * Returns whether no elements of this stream match the provided predicate. * May not evaluate the predicate on all elements if not necessary for * determining the result. If the stream is empty then {@code true} is * returned and the predicate is not evaluated. * * <p>This is a <a href="package-summary.html#StreamOps">short-circuiting * terminal operation</a>. * * @apiNote * This method evaluates the <em>universal quantification</em> of the * negated predicate over the elements of the stream (for all x ~P(x)). If * the stream is empty, the quantification is said to be vacuously satisfied * and is always {@code true}, regardless of P(x). * * @param predicate a <a href="package-summary.html#NonInterference">non-interfering</a>, * <a href="package-summary.html#Statelessness">stateless</a> * predicate to apply to elements of this stream * @return {@code true} if either no elements of the stream match the * provided predicate or the stream is empty, otherwise {@code false} */
boolean noneMatch(Predicate<? super T> predicate);
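Again a minimal sketch with hypothetical data:

    List<String> words = List.of("alpha", "beta", "gamma");           // hypothetical sample data
    boolean noneEmpty = words.stream().noneMatch(String::isEmpty);    // true: no empty strings present
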
Returns an Optional describing the first element of this stream, or an empty Optional if the stream is empty. If the stream has no encounter order, then any element may be returned.

This is a short-circuiting terminal operation.

Throws:
  • NullPointerException – if the element selected is null
Returns:an Optional describing the first element of this stream, or an empty Optional if the stream is empty
/** * Returns an {@link Optional} describing the first element of this stream, * or an empty {@code Optional} if the stream is empty. If the stream has * no encounter order, then any element may be returned. * * <p>This is a <a href="package-summary.html#StreamOps">short-circuiting * terminal operation</a>. * * @return an {@code Optional} describing the first element of this stream, * or an empty {@code Optional} if the stream is empty * @throws NullPointerException if the element selected is null */
Optional<T> findFirst();
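For illustration (hypothetical data), combined with a filter so that only as many elements are examined as needed:

    List<String> words = List.of("alpha", "beta", "gamma");   // hypothetical sample data
    Optional<String> firstLong = words.stream()
                                      .filter(s -> s.length() > 4)
                                      .findFirst();            // Optional[alpha]
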
Returns an Optional describing some element of the stream, or an empty Optional if the stream is empty.

This is a short-circuiting terminal operation.

The behavior of this operation is explicitly nondeterministic; it is free to select any element in the stream. This is to allow for maximal performance in parallel operations; the cost is that multiple invocations on the same source may not return the same result. (If a stable result is desired, use findFirst() instead.)

Throws:
  • NullPointerException – if the element selected is null
See Also:
  • findFirst()
Returns:an Optional describing some element of this stream, or an empty Optional if the stream is empty
/** * Returns an {@link Optional} describing some element of the stream, or an * empty {@code Optional} if the stream is empty. * * <p>This is a <a href="package-summary.html#StreamOps">short-circuiting * terminal operation</a>. * * <p>The behavior of this operation is explicitly nondeterministic; it is * free to select any element in the stream. This is to allow for maximal * performance in parallel operations; the cost is that multiple invocations * on the same source may not return the same result. (If a stable result * is desired, use {@link #findFirst()} instead.) * * @return an {@code Optional} describing some element of this stream, or an * empty {@code Optional} if the stream is empty * @throws NullPointerException if the element selected is null * @see #findFirst() */
Optional<T> findAny(); // Static factories
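A hedged sketch of the nondeterministic variant (hypothetical data); in a parallel pipeline any qualifying element may be returned:

    List<String> words = List.of("alpha", "beta", "gamma");   // hypothetical sample data
    Optional<String> anyLong = words.parallelStream()
                                    .filter(s -> s.length() > 4)
                                    .findAny();                // "alpha" or "gamma", possibly differing between runs
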
Returns a builder for a Stream.
Type parameters:
  • <T> – type of elements
Returns:a stream builder
/** * Returns a builder for a {@code Stream}. * * @param <T> type of elements * @return a stream builder */
public static<T> Builder<T> builder() { return new Streams.StreamBuilderImpl<>(); }
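A minimal usage sketch of the builder lifecycle (the element values are arbitrary):

    Stream<String> built = Stream.<String>builder()
                                 .add("one")
                                 .add("two")
                                 .add("three")
                                 .build();     // further add() calls would now throw IllegalStateException
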
Returns an empty sequential Stream.
Type parameters:
  • <T> – the type of stream elements
Returns:an empty sequential stream
/** * Returns an empty sequential {@code Stream}. * * @param <T> the type of stream elements * @return an empty sequential stream */
public static<T> Stream<T> empty() { return StreamSupport.stream(Spliterators.<T>emptySpliterator(), false); }
Returns a sequential Stream containing a single element.
Params:
  • t – the single element
Type parameters:
  • <T> – the type of stream elements
Returns:a singleton sequential stream
/** * Returns a sequential {@code Stream} containing a single element. * * @param t the single element * @param <T> the type of stream elements * @return a singleton sequential stream */
public static<T> Stream<T> of(T t) { return StreamSupport.stream(new Streams.StreamBuilderImpl<>(t), false); }
Returns a sequential Stream containing a single element, if non-null, otherwise returns an empty Stream.
Params:
  • t – the single element
Type parameters:
  • <T> – the type of stream elements
Returns:a stream with a single element if the specified element is non-null, otherwise an empty stream
Since:9
/** * Returns a sequential {@code Stream} containing a single element, if * non-null, otherwise returns an empty {@code Stream}. * * @param t the single element * @param <T> the type of stream elements * @return a stream with a single element if the specified element * is non-null, otherwise an empty stream * @since 9 */
public static<T> Stream<T> ofNullable(T t) { return t == null ? Stream.empty() : StreamSupport.stream(new Streams.StreamBuilderImpl<>(t), false); }
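For illustration, with a hypothetical config map, ofNullable avoids an explicit null check when a lookup may fail:

    Map<String, String> config = Map.of("host", "localhost");         // hypothetical sample data
    Stream<String> present = Stream.ofNullable(config.get("host"));   // one element: "localhost"
    Stream<String> absent  = Stream.ofNullable(config.get("port"));   // empty stream
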
Returns a sequential ordered stream whose elements are the specified values.
Params:
  • values – the elements of the new stream
Type parameters:
  • <T> – the type of stream elements
Returns:the new stream
/** * Returns a sequential ordered stream whose elements are the specified values. * * @param <T> the type of stream elements * @param values the elements of the new stream * @return the new stream */
@SafeVarargs @SuppressWarnings("varargs") // Creating a stream from an array is safe public static<T> Stream<T> of(T... values) { return Arrays.stream(values); }
Returns an infinite sequential ordered Stream produced by iterative application of a function f to an initial element seed, producing a Stream consisting of seed, f(seed), f(f(seed)), etc.

The first element (position 0) in the Stream will be the provided seed. For n > 0, the element at position n will be the result of applying the function f to the element at position n - 1.

The action of applying f for one element happens-before the action of applying f for subsequent elements. For any given element the action may be performed in whatever thread the library chooses.

Params:
  • seed – the initial element
  • f – a function to be applied to the previous element to produce a new element
Type parameters:
  • <T> – the type of stream elements
Returns:a new sequential Stream
/** * Returns an infinite sequential ordered {@code Stream} produced by iterative * application of a function {@code f} to an initial element {@code seed}, * producing a {@code Stream} consisting of {@code seed}, {@code f(seed)}, * {@code f(f(seed))}, etc. * * <p>The first element (position {@code 0}) in the {@code Stream} will be * the provided {@code seed}. For {@code n > 0}, the element at position * {@code n}, will be the result of applying the function {@code f} to the * element at position {@code n - 1}. * * <p>The action of applying {@code f} for one element * <a href="../concurrent/package-summary.html#MemoryVisibility"><i>happens-before</i></a> * the action of applying {@code f} for subsequent elements. For any given * element the action may be performed in whatever thread the library * chooses. * * @param <T> the type of stream elements * @param seed the initial element * @param f a function to be applied to the previous element to produce * a new element * @return a new sequential {@code Stream} */
public static<T> Stream<T> iterate(final T seed, final UnaryOperator<T> f) { Objects.requireNonNull(f); Spliterator<T> spliterator = new Spliterators.AbstractSpliterator<>(Long.MAX_VALUE, Spliterator.ORDERED | Spliterator.IMMUTABLE) { T prev; boolean started; @Override public boolean tryAdvance(Consumer<? super T> action) { Objects.requireNonNull(action); T t; if (started) t = f.apply(prev); else { t = seed; started = true; } action.accept(prev = t); return true; } }; return StreamSupport.stream(spliterator, false); }
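A minimal sketch; because the resulting stream is infinite, it must be truncated with an operation such as limit:

    List<Integer> powersOfTwo = Stream.iterate(1, n -> n * 2)
                                      .limit(5)
                                      .collect(Collectors.toList());   // [1, 2, 4, 8, 16]
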
Returns a sequential ordered Stream produced by iterative application of the given next function to an initial element, conditioned on satisfying the given hasNext predicate. The stream terminates as soon as the hasNext predicate returns false.

Stream.iterate should produce the same sequence of elements as produced by the corresponding for-loop:


    for (T index=seed; hasNext.test(index); index = next.apply(index)) {
        ...
    }

The resulting sequence may be empty if the hasNext predicate does not hold on the seed value. Otherwise the first element will be the supplied seed value, the next element (if present) will be the result of applying the next function to the seed value, and so on iteratively until the hasNext predicate indicates that the stream should terminate.

The action of applying the hasNext predicate to an element happens-before the action of applying the next function to that element. The action of applying the next function for one element happens-before the action of applying the hasNext predicate for subsequent elements. For any given element an action may be performed in whatever thread the library chooses.

Params:
  • seed – the initial element
  • hasNext – a predicate to apply to elements to determine when the stream must terminate.
  • next – a function to be applied to the previous element to produce a new element
Type parameters:
  • <T> – the type of stream elements
Returns:a new sequential Stream
Since:9
/** * Returns a sequential ordered {@code Stream} produced by iterative * application of the given {@code next} function to an initial element, * conditioned on satisfying the given {@code hasNext} predicate. The * stream terminates as soon as the {@code hasNext} predicate returns false. * * <p>{@code Stream.iterate} should produce the same sequence of elements as * produced by the corresponding for-loop: * <pre>{@code * for (T index=seed; hasNext.test(index); index = next.apply(index)) { * ... * } * }</pre> * * <p>The resulting sequence may be empty if the {@code hasNext} predicate * does not hold on the seed value. Otherwise the first element will be the * supplied {@code seed} value, the next element (if present) will be the * result of applying the {@code next} function to the {@code seed} value, * and so on iteratively until the {@code hasNext} predicate indicates that * the stream should terminate. * * <p>The action of applying the {@code hasNext} predicate to an element * <a href="../concurrent/package-summary.html#MemoryVisibility"><i>happens-before</i></a> * the action of applying the {@code next} function to that element. The * action of applying the {@code next} function for one element * <i>happens-before</i> the action of applying the {@code hasNext} * predicate for subsequent elements. For any given element an action may * be performed in whatever thread the library chooses. * * @param <T> the type of stream elements * @param seed the initial element * @param hasNext a predicate to apply to elements to determine when the * stream must terminate. * @param next a function to be applied to the previous element to produce * a new element * @return a new sequential {@code Stream} * @since 9 */
public static<T> Stream<T> iterate(T seed, Predicate<? super T> hasNext, UnaryOperator<T> next) { Objects.requireNonNull(next); Objects.requireNonNull(hasNext); Spliterator<T> spliterator = new Spliterators.AbstractSpliterator<>(Long.MAX_VALUE, Spliterator.ORDERED | Spliterator.IMMUTABLE) { T prev; boolean started, finished; @Override public boolean tryAdvance(Consumer<? super T> action) { Objects.requireNonNull(action); if (finished) return false; T t; if (started) t = next.apply(prev); else { t = seed; started = true; } if (!hasNext.test(t)) { prev = null; finished = true; return false; } action.accept(prev = t); return true; } @Override public void forEachRemaining(Consumer<? super T> action) { Objects.requireNonNull(action); if (finished) return; finished = true; T t = started ? next.apply(prev) : seed; prev = null; while (hasNext.test(t)) { action.accept(t); t = next.apply(t); } } }; return StreamSupport.stream(spliterator, false); }
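A minimal sketch of the bounded form; no limit is needed because the hasNext predicate eventually fails:

    List<Integer> below100 = Stream.iterate(1, n -> n < 100, n -> n * 2)
                                   .collect(Collectors.toList());   // [1, 2, 4, 8, 16, 32, 64]
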
Returns an infinite sequential unordered stream where each element is generated by the provided Supplier. This is suitable for generating constant streams, streams of random elements, etc.
Params:
  • s – the Supplier of generated elements
Type parameters:
  • <T> – the type of stream elements
Returns:a new infinite sequential unordered Stream
/** * Returns an infinite sequential unordered stream where each element is * generated by the provided {@code Supplier}. This is suitable for * generating constant streams, streams of random elements, etc. * * @param <T> the type of stream elements * @param s the {@code Supplier} of generated elements * @return a new infinite sequential unordered {@code Stream} */
public static<T> Stream<T> generate(Supplier<? extends T> s) { Objects.requireNonNull(s); return StreamSupport.stream( new StreamSpliterators.InfiniteSupplyingSpliterator.OfRef<>(Long.MAX_VALUE, s), false); }
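For illustration; like iterate, the resulting stream is infinite and should be truncated:

    List<Double> samples = Stream.generate(Math::random)
                                 .limit(3)
                                 .collect(Collectors.toList());   // three pseudo-random values
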
Creates a lazily concatenated stream whose elements are all the elements of the first stream followed by all the elements of the second stream. The resulting stream is ordered if both of the input streams are ordered, and parallel if either of the input streams is parallel. When the resulting stream is closed, the close handlers for both input streams are invoked.

This method operates on the two input streams and binds each stream to its source. As a result subsequent modifications to an input stream source may not be reflected in the concatenated stream result.

Params:
  • a – the first stream
  • b – the second stream
Type parameters:
  • <T> – The type of stream elements
Implementation Note: Use caution when constructing streams from repeated concatenation. Accessing an element of a deeply concatenated stream can result in deep call chains, or even StackOverflowError.

Subsequent changes to the sequential/parallel execution mode of the returned stream are not guaranteed to be propagated to the input streams.

API Note: To preserve optimization opportunities this method binds each stream to its source and accepts only two streams as parameters. For example, the exact size of the concatenated stream source can be computed if the exact size of each input stream source is known. To concatenate more streams without binding, or without nested calls to this method, try creating a stream of streams and flat-mapping with the identity function, for example:

    Stream<T> concat = Stream.of(s1, s2, s3, s4).flatMap(s -> s);
Returns:the concatenation of the two input streams
/** * Creates a lazily concatenated stream whose elements are all the * elements of the first stream followed by all the elements of the * second stream. The resulting stream is ordered if both * of the input streams are ordered, and parallel if either of the input * streams is parallel. When the resulting stream is closed, the close * handlers for both input streams are invoked. * * <p>This method operates on the two input streams and binds each stream * to its source. As a result subsequent modifications to an input stream * source may not be reflected in the concatenated stream result. * * @implNote * Use caution when constructing streams from repeated concatenation. * Accessing an element of a deeply concatenated stream can result in deep * call chains, or even {@code StackOverflowError}. * * <p>Subsequent changes to the sequential/parallel execution mode of the * returned stream are not guaranteed to be propagated to the input streams. * * @apiNote * To preserve optimization opportunities this method binds each stream to * its source and accepts only two streams as parameters. For example, the * exact size of the concatenated stream source can be computed if the exact * size of each input stream source is known. * To concatenate more streams without binding, or without nested calls to * this method, try creating a stream of streams and flat-mapping with the * identity function, for example: * <pre>{@code * Stream<T> concat = Stream.of(s1, s2, s3, s4).flatMap(s -> s); * }</pre> * * @param <T> The type of stream elements * @param a the first stream * @param b the second stream * @return the concatenation of the two input streams */
public static <T> Stream<T> concat(Stream<? extends T> a, Stream<? extends T> b) { Objects.requireNonNull(a); Objects.requireNonNull(b); @SuppressWarnings("unchecked") Spliterator<T> split = new Streams.ConcatSpliterator.OfRef<>( (Spliterator<T>) a.spliterator(), (Spliterator<T>) b.spliterator()); Stream<T> stream = StreamSupport.stream(split, a.isParallel() || b.isParallel()); return stream.onClose(Streams.composedClose(a, b)); }
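A direct usage sketch, complementing the flat-mapping alternative shown above (element values are arbitrary):

    Stream<String> combined = Stream.concat(Stream.of("a", "b"), Stream.of("c"));
    // encounter order is preserved: "a", "b", "c"
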
A mutable builder for a Stream. This allows the creation of a Stream by generating elements individually and adding them to the Builder (without the copying overhead that comes from using an ArrayList as a temporary buffer.)

A stream builder has a lifecycle, which starts in a building phase, during which elements can be added, and then transitions to a built phase, after which elements may not be added. The built phase begins when the build() method is called, which creates an ordered Stream whose elements are the elements that were added to the stream builder, in the order they were added.

Type parameters:
  • <T> – the type of stream elements
See Also:
  • Stream.builder()
Since:1.8
/** * A mutable builder for a {@code Stream}. This allows the creation of a * {@code Stream} by generating elements individually and adding them to the * {@code Builder} (without the copying overhead that comes from using * an {@code ArrayList} as a temporary buffer.) * * <p>A stream builder has a lifecycle, which starts in a building * phase, during which elements can be added, and then transitions to a built * phase, after which elements may not be added. The built phase begins * when the {@link #build()} method is called, which creates an ordered * {@code Stream} whose elements are the elements that were added to the stream * builder, in the order they were added. * * @param <T> the type of stream elements * @see Stream#builder() * @since 1.8 */
public interface Builder<T> extends Consumer<T> {
Adds an element to the stream being built.
Throws:
  • IllegalStateException – if the builder has already transitioned to the built state
/** * Adds an element to the stream being built. * * @throws IllegalStateException if the builder has already transitioned to * the built state */
@Override void accept(T t);
Adds an element to the stream being built.
Params:
  • t – the element to add
Throws:
  • IllegalStateException – if the builder has already transitioned to the built state
Implementation Requirements: The default implementation behaves as if:

    accept(t)
    return this;
Returns:this builder
/** * Adds an element to the stream being built. * * @implSpec * The default implementation behaves as if: * <pre>{@code * accept(t) * return this; * }</pre> * * @param t the element to add * @return {@code this} builder * @throws IllegalStateException if the builder has already transitioned to * the built state */
default Builder<T> add(T t) { accept(t); return this; }
Builds the stream, transitioning this builder to the built state. An IllegalStateException is thrown if there are further attempts to operate on the builder after it has entered the built state.
Throws:
  • IllegalStateException – if the builder has already transitioned to the built state
Returns:the built stream
/** * Builds the stream, transitioning this builder to the built state. * An {@code IllegalStateException} is thrown if there are further attempts * to operate on the builder after it has entered the built state. * * @return the built stream * @throws IllegalStateException if the builder has already transitioned to * the built state */
Stream<T> build(); } }