See also

`numpy.broadcast`

The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation.

NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example:

    >>> a = np.array([1.0, 2.0, 3.0])
    >>> b = np.array([2.0, 2.0, 2.0])
    >>> a * b
    array([2., 4., 6.])

NumPy’s broadcasting rule relaxes this constraint when the arrays’ shapes meet certain conditions. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation:

    >>> a = np.array([1.0, 2.0, 3.0])
    >>> b = 2.0
    >>> a * b
    array([2., 4., 6.])

The result is equivalent to the previous example where `b` was an array. We can think of the scalar `b` being *stretched* during the arithmetic operation into an array with the same shape as `a`. The new elements in `b`, as shown in Figure 1, are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory- and computationally efficient as possible.

The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (`b` is a scalar rather than an array).

## General broadcasting rules

When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing (i.e. rightmost) dimension and works its way left. Two dimensions are compatible when

1. they are equal, or
2. one of them is 1.

If these conditions are not met, a `ValueError: operands could not be broadcast together` exception is thrown, indicating that the arrays have incompatible shapes.

Input arrays do not need to have the same *number* of dimensions. The resulting array will have the same number of dimensions as the input array with the greatest number of dimensions, where the *size* of each dimension is the largest size of the corresponding dimension among the input arrays. Note that missing dimensions are assumed to have size one.
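The rules above can be sketched directly in code. The helper below is an illustrative reimplementation (the name `broadcast_shape` is ours, not NumPy's); NumPy ships the equivalent `np.broadcast_shapes` function.

```python
import numpy as np

def broadcast_shape(*shapes):
    """Illustrative sketch of the broadcasting rules (see np.broadcast_shapes)."""
    ndim = max(len(s) for s in shapes)
    # Left-pad each shape with 1s: missing dimensions are assumed to have size one.
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    result = []
    # Compare dimensions position by position.
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise ValueError("operands could not be broadcast together")
        result.append(sizes.pop() if sizes else 1)
    return tuple(result)

print(broadcast_shape((256, 256, 3), (3,)))      # (256, 256, 3)
print(np.broadcast_shapes((256, 256, 3), (3,)))  # same result
```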

For example, if you have a `256x256x3` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules shows that they are compatible:

    Image  (3d array): 256 x 256 x 3
    Scale  (1d array):             3
    Result (3d array): 256 x 256 x 3
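A minimal sketch of this image-scaling case, using a hypothetical random image in place of real RGB data:

```python
import numpy as np

# A hypothetical 256x256 RGB image filled with random values, for illustration.
rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))

# One scale factor per color channel: red, green, blue.
scale = np.array([0.5, 1.0, 2.0])

# scale (shape (3,)) is broadcast across the two leading axes of image.
scaled = image * scale
print(scaled.shape)  # (256, 256, 3)
```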

When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or “copied” to match the other.

In the following example, both the `A` and `B` arrays have axes with length one that are expanded to a larger size during the broadcast operation:

    A      (4d array):  8 x 1 x 6 x 1
    B      (3d array):      7 x 1 x 5
    Result (4d array):  8 x 7 x 6 x 5
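This can be checked with a pair of throwaway arrays; every length-1 axis (and `B`'s missing leading axis) is stretched to match:

```python
import numpy as np

A = np.ones((8, 1, 6, 1))
B = np.ones((7, 1, 5))

# B is treated as (1, 7, 1, 5); each size-1 axis is stretched.
result = A * B
print(result.shape)  # (8, 7, 6, 5)
```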

## Broadcastable arrays

A set of arrays is called “broadcastable” to the same shape if the above rules produce a valid result.

For example, if `a.shape` is (5,1), `b.shape` is (1,6), `c.shape` is (6,) and `d.shape` is () so that *d* is a scalar, then *a*, *b*, *c*, and *d* are all broadcastable to dimension (5,6); and

- *a* acts like a (5,6) array where `a[:,0]` is broadcast to the other columns,
- *b* acts like a (5,6) array where `b[0,:]` is broadcast to the other rows,
- *c* acts like a (1,6) array and therefore like a (5,6) array where `c[:]` is broadcast to every row, and finally,
- *d* acts like a (5,6) array where the single value is repeated.
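These stretched views can be materialized with `np.broadcast_arrays`, which returns each input expanded to the common shape, as a quick check of the shapes above:

```python
import numpy as np

a = np.zeros((5, 1))
b = np.zeros((1, 6))
c = np.zeros((6,))
d = np.float64(0.0)  # scalar, shape ()

# Each input is returned as a (read-only) view stretched to (5, 6).
views = np.broadcast_arrays(a, b, c, d)
for v in views:
    print(v.shape)  # (5, 6) for each input

print((a + b + c + d).shape)  # (5, 6)
```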

Here are some more examples:

    A      (2d array):  5 x 4
    B      (1d array):      1
    Result (2d array):  5 x 4

    A      (2d array):  5 x 4
    B      (1d array):      4
    Result (2d array):  5 x 4

    A      (3d array):  15 x 3 x 5
    B      (3d array):  15 x 1 x 5
    Result (3d array):  15 x 3 x 5

    A      (3d array):  15 x 3 x 5
    B      (2d array):       3 x 5
    Result (3d array):  15 x 3 x 5

    A      (3d array):  15 x 3 x 5
    B      (2d array):       3 x 1
    Result (3d array):  15 x 3 x 5

Here are examples of shapes that do not broadcast:

    A      (1d array):  3
    B      (1d array):  4           # trailing dimensions do not match

    A      (2d array):      2 x 1
    B      (3d array):  8 x 4 x 3   # second from last dimensions mismatched
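Both failing cases can be reproduced directly; each raises the `ValueError` described under the general rules:

```python
import numpy as np

# Trailing dimensions 3 and 4 do not match:
try:
    np.ones(3) + np.ones(4)
except ValueError as e:
    print(e)

# Sizes 2 and 4 mismatch in the second-to-last axis:
try:
    np.ones((2, 1)) + np.ones((8, 4, 3))
except ValueError as e:
    print(e)
```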

An example of broadcasting when a 1-d array is added to a 2-d array:

    >>> a = np.array([[ 0.0,  0.0,  0.0],
    ...               [10.0, 10.0, 10.0],
    ...               [20.0, 20.0, 20.0],
    ...               [30.0, 30.0, 30.0]])
    >>> b = np.array([1.0, 2.0, 3.0])
    >>> a + b
    array([[ 1.,  2.,  3.],
           [11., 12., 13.],
           [21., 22., 23.],
           [31., 32., 33.]])
    >>> b = np.array([1.0, 2.0, 3.0, 4.0])
    >>> a + b
    Traceback (most recent call last):
    ValueError: operands could not be broadcast together with shapes (4,3) (4,)

As shown in Figure 2, `b` is added to each row of `a`. In Figure 3, an exception is raised because of the incompatible shapes.

Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays:

    >>> a = np.array([0.0, 10.0, 20.0, 30.0])
    >>> b = np.array([1.0, 2.0, 3.0])
    >>> a[:, np.newaxis] + b
    array([[ 1.,  2.,  3.],
           [11., 12., 13.],
           [21., 22., 23.],
           [31., 32., 33.]])

Here the `newaxis` index operator inserts a new axis into `a`, making it a two-dimensional `4x1` array. Combining the `4x1` array with `b`, which has shape `(3,)`, yields a `4x3` array.
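The same `newaxis` trick with multiplication instead of addition gives the classic outer product, which can be cross-checked against `np.outer`:

```python
import numpy as np

a = np.array([0.0, 10.0, 20.0, 30.0])
b = np.array([1.0, 2.0, 3.0])

# (4,1) array times (3,) array broadcasts to a (4,3) outer product.
outer = a[:, np.newaxis] * b
print(outer.shape)  # (4, 3)

# np.outer computes the same thing.
print(np.array_equal(outer, np.outer(a, b)))  # True
```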

## A practical example: vector quantization

Broadcasting comes up quite often in real world problems. A typical example occurs in the vector quantization (VQ) algorithm used in information theory, classification, and other related areas. The basic operation in VQ finds the closest point in a set of points, called `codes` in VQ jargon, to a given point, called the `observation`. In the very simple, two-dimensional case shown below, the values in `observation` describe the weight and height of an athlete to be classified. The `codes` represent different classes of athletes. [1] Finding the closest point requires calculating the distance between observation and each of the codes. The shortest distance provides the best match. In this example, `codes[0]` is the closest class, indicating that the athlete is likely a basketball player.

    >>> from numpy import array, argmin, sqrt, sum
    >>> observation = array([111.0, 188.0])
    >>> codes = array([[102.0, 203.0],
    ...                [132.0, 193.0],
    ...                [45.0, 155.0],
    ...                [57.0, 173.0]])
    >>> diff = codes - observation    # the broadcast happens here
    >>> dist = sqrt(sum(diff**2, axis=-1))
    >>> argmin(dist)
    0

In this example, the `observation` array is stretched to match the shape of the `codes` array:

    Observation (1d array):      2
    Codes       (2d array):  4 x 2
    Diff        (2d array):  4 x 2

Typically, a large number of `observations`, perhaps read from a database, are compared to a set of `codes`. Consider this scenario:

    Observation (2d array):      10 x 3
    Codes       (3d array):   5 x 1 x 3
    Diff        (3d array):  5 x 10 x 3

The three-dimensional array, `diff`, is a consequence of broadcasting, not a necessity for the calculation. Large data sets will generate a large intermediate array that is computationally inefficient. Instead, if each observation is calculated individually using a Python loop around the code in the two-dimensional example above, a much smaller array is used.

Broadcasting is a powerful tool for writing short and usually intuitive code that does its computations very efficiently in C. However, there are cases when broadcasting uses unnecessarily large amounts of memory for a particular algorithm. In these cases, it is better to write the algorithm’s outer loop in Python. This may also produce more readable code, as algorithms that use broadcasting tend to become more difficult to interpret as the number of dimensions in the broadcast increases.

Footnotes