Google JAX
Developer(s) | Google |
---|---|
Preview release | v0.3.13 / 16 May 2022 |
Repository | github |
Written in | Python, C++ |
Operating system | Linux, macOS, Windows |
Platform | Python, NumPy |
Size | 9.0 MB |
Type | Machine learning |
License | Apache 2.0 |
Website | jax |
Google JAX is a machine learning framework for transforming numerical functions.[1][2][3] It is described as bringing together a modified version of autograd (automatic differentiation of numerical functions to obtain their gradient functions) and TensorFlow's XLA (Accelerated Linear Algebra). It is designed to follow the structure and workflow of NumPy as closely as possible and works with various existing frameworks such as TensorFlow and PyTorch.[4][5] The primary functions of JAX are:[1]
- grad: automatic differentiation
- jit: compilation
- vmap: auto-vectorization
- pmap: SPMD (single-program, multiple-data) parallel programming
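These transformations compose; for example, the gradient function of a function can itself be just-in-time compiled. A minimal sketch (the function f below is illustrative, not from the article):
# imports
from jax import grad, jit
import jax.numpy as jnp
# an illustrative scalar-valued function
def f(x):
    return jnp.sum(x ** 2)
# compose the transformations: compile the gradient function
fast_grad_f = jit(grad(f))
print(fast_grad_f(jnp.arange(3.0)))  # prints [0. 2. 4.]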
grad
The below code demonstrates the grad function's automatic differentiation.
# imports
from jax import grad
import jax.numpy as jnp
# define the logistic function
def logistic(x):
    return jnp.exp(x) / (jnp.exp(x) + 1)
# obtain the gradient function of the logistic function
grad_logistic = grad(logistic)
# evaluate the gradient of the logistic function at x = 1
grad_log_out = grad_logistic(1.0)
print(grad_log_out)
The final line should output:
0.19661194
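The derivative of the logistic function σ satisfies σ′(x) = σ(x)(1 − σ(x)), so the result can be checked against the closed form, reusing the logistic function defined above:
# check against the closed form: d/dx σ(x) = σ(x) * (1 - σ(x))
sigma = logistic(1.0)
print(sigma * (1 - sigma))  # matches the gradient value above (up to float32 rounding)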
jit
The below code demonstrates the jit function's optimization through fusion.
# imports
from jax import jit
import jax.numpy as jnp
# define the cube function
def cube(x):
    return x * x * x
# generate data
x = jnp.ones((10000, 10000))
# create the jit version of the cube function
jit_cube = jit(cube)
# apply the cube and jit_cube functions to the same data for speed comparison
cube(x)
jit_cube(x)
The computation time for jit_cube should be noticeably shorter than that for cube. Increasing the size of the array passed to jnp.ones will increase the difference.
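Because JAX dispatches computations asynchronously, a fair comparison must wait for each result to be ready. A minimal timing sketch, reusing cube, jit_cube, and x from above (the use of timeit here is illustrative):
# block_until_ready() waits for the asynchronous computation to finish,
# so the measured time reflects the actual work
import timeit
t_cube = timeit.timeit(lambda: cube(x).block_until_ready(), number=10)
t_jit_cube = timeit.timeit(lambda: jit_cube(x).block_until_ready(), number=10)
print(t_cube, t_jit_cube)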
vmap
The below code demonstrates the vmap function's auto-vectorization, using it to map a per-example gradient function over a batch of inputs.
# imports
from functools import partial
from jax import grad, vmap
import jax.numpy as jnp
# define a scalar-valued loss of the weights and a single input vector
def loss(w, x):
    return jnp.dot(x, w) ** 2
# obtain the gradient of the loss with respect to the input x
grad_loss = grad(loss, argnums=1)
# fix the weights, then vectorize the gradient over a batch of inputs
w = jnp.array([1.0, 2.0])
inputs = jnp.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
per_example_grads = vmap(partial(grad_loss, w))(inputs)
assert per_example_grads.ndim == 2 and per_example_grads.shape[0] == inputs.shape[0]
The notion can be illustrated with vectorized addition: an operation defined on single values is applied across whole arrays in one step.
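A minimal sketch of vectorized addition with vmap (the example arrays are illustrative):
# imports
from jax import vmap
import jax.numpy as jnp
# add_scalars operates on a single pair of values
def add_scalars(a, b):
    return a + b
# vmap lifts add_scalars to operate elementwise on whole arrays
xs = jnp.array([1.0, 2.0, 3.0])
ys = jnp.array([10.0, 20.0, 30.0])
print(vmap(add_scalars)(xs, ys))  # prints [11. 22. 33.]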
pmap
The below code demonstrates the pmap function's parallelization for matrix multiplication.
# import pmap and random from JAX; import JAX NumPy
from jax import pmap, random
import jax.numpy as jnp
# generate 2 random matrices of dimensions 5000 x 6000, one per device
random_keys = random.split(random.PRNGKey(0), 2)
matrices = pmap(lambda key: random.normal(key, (5000, 6000)))(random_keys)
# without data transfer, in parallel, perform a local matrix multiplication on each CPU/GPU
outputs = pmap(lambda x: jnp.dot(x, x.T))(matrices)
# without data transfer, in parallel, obtain the mean for both matrices on each CPU/GPU separately
means = pmap(jnp.mean)(outputs)
print(means)
The final line should print the values:
[1.1566595 1.1805978]
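The example assumes at least two devices; which devices are available can be checked as follows (the printed output is illustrative):
# list the devices that pmap maps over
import jax
print(jax.devices())       # e.g. [CpuDevice(id=0), CpuDevice(id=1)]
print(jax.device_count())  # must be at least 2 for the example above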
Libraries using JAX
Several Python libraries use JAX as a backend, including:
- Flax, a high level neural network library initially developed by Google Brain.[6]
- Haiku, an object-oriented library for neural networks developed by DeepMind.[7]
- Equinox, a library that revolves around the idea of representing parameterised functions (including neural networks) as PyTrees. It was created by Patrick Kidger.[8]
- Optax, a library for gradient processing and optimisation developed by DeepMind (see the sketch after this list).[9]
- RLax, a library for developing reinforcement learning agents developed by DeepMind.[10]
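As an illustration of how such libraries build on JAX, a minimal sketch of one Optax update step follows; the function f, the parameters, and the input are illustrative, not taken from Optax's documentation:
# imports
import jax
import jax.numpy as jnp
import optax
# an illustrative loss over a pytree of parameters
def f(params, x):
    return jnp.sum((params["w"] * x) ** 2)
params = {"w": jnp.array([1.0, 2.0])}
x = jnp.array([3.0, 4.0])
grads = jax.grad(f)(params, x)
# one Adam update step, following the standard Optax pattern
optimizer = optax.adam(learning_rate=1e-3)
opt_state = optimizer.init(params)
updates, opt_state = optimizer.update(grads, opt_state)
params = optax.apply_updates(params, updates)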
See also
- NumPy
- TensorFlow
- PyTorch
- CUDA
- Automatic differentiation
- Just-in-time compilation
- Vectorization
- Automatic parallelization
External links
- Documentation: jax.readthedocs.io
- Colab (Jupyter/iPython) Quickstart Guide: colab.research.google.com/github/google/jax/blob/main/docs/notebooks/quickstart.ipynb
- TensorFlow's XLA (Accelerated Linear Algebra): www.tensorflow.org/xla
- Intro to JAX: Accelerating Machine Learning research on YouTube
- Original paper: mlsys.org/Conferences/doc/2018/146.pdf
References
- ^ a b Bradbury, James; Frostig, Roy; Hawkins, Peter; Johnson, Matthew James; Leary, Chris; MacLaurin, Dougal; Necula, George; Paszke, Adam; Vanderplas, Jake; Wanderman-Milne, Skye; Zhang, Qiao (2022-06-18), "JAX: Autograd and XLA", Astrophysics Source Code Library, Google, Bibcode:2021ascl.soft11002B, archived from the original on 2022-06-18, retrieved 2022-06-18
- ^ Frostig, Roy; Johnson, Matthew James; Leary, Chris (2018-02-02). "Compiling machine learning programs via high-level tracing" (PDF). MLsys: 1–3. Archived (PDF) from the original on 2022-06-21.
- ^ "Using JAX to accelerate our research". www.deepmind.com. Archived from the original on 2022-06-18. Retrieved 2022-06-18.
- ^ Lynley, Matthew. "Google is quietly replacing the backbone of its AI product strategy after its last big push for dominance got overshadowed by Meta". Business Insider. Archived from the original on 2022-06-21. Retrieved 2022-06-21.
- ^ "Why is Google's JAX so popular?". Analytics India Magazine. 2022-04-25. Archived from the original on 2022-06-18. Retrieved 2022-06-18.
- ^ Flax: A neural network library and ecosystem for JAX designed for flexibility, Google, 2022-07-29, retrieved 2022-07-29
- ^ Haiku: Sonnet for JAX, DeepMind, 2022-07-29, retrieved 2022-07-29
- ^ Kidger, Patrick (2022-07-29), Equinox, retrieved 2022-07-29
- ^ Optax, DeepMind, 2022-07-28, retrieved 2022-07-29
- ^ RLax, DeepMind, 2022-07-29, retrieved 2022-07-29