What are TPUs? Why do we use them?

29/3/2022

The development of Cedille was made possible by the generous support of Google's TRC programme, which gave us access to a cluster of 1,000 cloud-based TPUs (Tensor Processing Units), including one large instance and a few dozen smaller ones. Researchers accepted into the TRC programme get free access and can use a variety of frameworks and languages, such as TensorFlow, PyTorch, Julia and JAX, to accelerate the next wave of open research breakthroughs. Offering TPUs as a cloud service allows users to start building their models without an up-front capital investment; researchers, engineers, small businesses and even students can start machine learning projects with ease. We would like to thank Google once again for supporting our research and the launch of Cedille. Thanks to them, we have also been able to release our model as open source and publish our work so that other researchers and students can benefit from it.

A little background

For those who are wondering what TPUs are and are not experts in the field: TPUs are application-specific integrated circuits (ASICs) that accelerate the calculations at the heart of AI models. Google designed TPUs from the ground up, began using them internally in 2015, and made them publicly available in 2018. TPUs were built specifically for neural network machine learning with TensorFlow, Google's open-source ML platform, which provides tools, libraries and a community so that machine learning applications can be built and deployed quickly.


We chose TPUs because they operate more efficiently with large batch sizes. CPUs (Central Processing Units) and GPUs (Graphics Processing Units) can handle most ML workloads, but training on them takes a lot of time. With TPUs, deep learning models that previously took weeks to train on GPUs can be trained in hours. According to Google, TPUs deliver 15 to 30 times higher performance and 30 to 80 times higher performance-per-watt than contemporary CPUs and GPUs. For Cedille this meant much faster training: about 2 weeks for a 6-billion-parameter model instead of over a month!
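To make the batch-size point concrete, here is a minimal sketch, not Cedille's actual training code, of how JAX spreads a large batch across the cores of a single TPU host with pmap. The shapes and the forward function are illustrative assumptions.

    import jax
    import jax.numpy as jnp

    print(jax.devices())  # lists the available accelerator cores, e.g. 8 on a TPU v3-8

    n_devices = jax.local_device_count()

    @jax.pmap  # replicate the computation across every local core
    def forward(x, w):
        # each core multiplies its own slice of the global batch
        return jnp.dot(x, w)

    global_batch = 4096  # large batches keep the TPU matrix units busy
    x = jnp.ones((n_devices, global_batch // n_devices, 1024))  # batch split per core
    w = jnp.ones((n_devices, 1024, 1024))                       # weights replicated per core

    y = forward(x, w)  # runs on all cores in parallel
    print(y.shape)     # (n_devices, global_batch // n_devices, 1024)

The same pattern scales from a single TPU board to a pod slice, which is what makes the large effective batch sizes mentioned above practical.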

Unlimited possibilities for the future of Cedille


Because we use TPUs to train our models, we can perform matrix multiplications quickly and in large volumes. The combination of JAX (a recent Google alternative to TensorFlow) and TPUs can do wonders in fields such as medicine, image processing and machine learning. For Cedille, this essentially means that we can train the model on a new language in about two weeks, so that all the skills and functionality you can use with Cedille will then become available in other languages. Note that to run the models after training, for example on our platform, we use GPUs.
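As a small, hypothetical illustration of the kind of operation involved, here is a jit-compiled matrix multiplication in JAX; the matrix sizes are arbitrary, and on a TPU the compiled code targets its dedicated matrix units.

    import jax
    import jax.numpy as jnp

    @jax.jit  # compile once with XLA; subsequent calls reuse the compiled version
    def matmul(a, b):
        return a @ b

    key = jax.random.PRNGKey(0)
    k1, k2 = jax.random.split(key)
    a = jax.random.normal(k1, (8192, 8192))
    b = jax.random.normal(k2, (8192, 8192))

    c = matmul(a, b)       # the first call triggers compilation
    c.block_until_ready()  # TPU execution is asynchronous; wait for the result
    print(c.shape)         # (8192, 8192)

A language model's forward and backward passes are, at their core, long chains of exactly this kind of multiplication, which is why the hardware speed-up translates directly into shorter training times.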

We are currently working on adding new languages to our platform, so stay tuned for more updates!

Try it yourself

The model is available on a test platform, so you can generate your own texts!
Try Cedille now