AI and GPUs (a very simplified overview)
Posted by Anne Tarantino, VP Corporate Sales on 12th Feb 2020
When I first heard the words “Artificial Intelligence,” I thought it was referring to politicians. Then, when I understood that GPUs were needed for “deep learning,” I knew for sure it had nothing to do with politicians.
Deep learning is built on neural networks, models that loosely mimic the human brain.
GPUs for AI and not CPUs? Why?
GPUs have a large number of simple cores, which lets them run computations in parallel across thousands of threads at a time. CPUs have a few cores that process work sequentially, with only a few threads at a time. For deep learning, CUDA code runs on the GPU. GPUs are bandwidth optimized, while CPUs are latency optimized (memory access time). That bandwidth is what makes GPUs the winner in speed when crunching large datasets. To keep it simple: compute with GPUs, process with CPUs.
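To make that concrete, here is a minimal sketch of a GPU kernel written in Python. It assumes the numba and numpy packages plus an NVIDIA GPU with CUDA drivers (none of which are named in this post); the point is simply that one tiny function gets launched across roughly a million threads, each handling one element of the arrays.

```python
# A toy GPU kernel: add two big arrays, one element per thread.
# Assumes: pip install numba numpy, plus an NVIDIA GPU with CUDA drivers.
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)      # this thread's global index
    if i < x.size:        # ignore threads past the end of the array
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.ones(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block

# Launch about a million threads; the GPU schedules them in parallel.
add_kernel[blocks, threads_per_block](x, y, out)
print(out[:5])  # [2. 2. 2. 2. 2.]
```

On a CPU, the same job would typically be a loop that walks through the million elements a handful at a time, which is exactly the sequential picture described above.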
What’s next?
ASICs (Application-Specific Integrated Circuits). These are single-purpose chips customized for one type of function. The best example I can think of for ASICs is Bitcoin mining. Each miner is built to mine a specific digital currency.
In this example the ASIC’s job is to verify previous Bitcoin transactions and create a new block so that information can be added to the blockchain. Mining here means solving one huge math puzzle: repeatedly running a hash function over the block’s transaction data (plus a changing number called a nonce) until a winning value comes out. The name of the game is creating new blocks so new data can be added to the blockchain. It’s the need for speed in math.
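To give a feel for that puzzle, here is a toy proof-of-work sketch in Python. It is not real Bitcoin mining; the block data, the difficulty, and the hash-a-string scheme are made up for illustration (real mining double-hashes a binary block header at a vastly higher difficulty). The idea is just that the miner keeps re-hashing the block contents with a different nonce until the result starts with enough zeros.

```python
# Toy proof-of-work: keep changing the nonce until the SHA-256 hash
# of the block data starts with a required number of zeros.
import hashlib

def mine(block_data, difficulty=4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("previous block hash + this block's transactions")
print(f"found nonce {nonce} -> {digest}")
```

Finding the nonce takes many attempts on purpose, while checking it takes a single hash. That asymmetry is why miners turn to ASICs: a chip that does nothing but this one calculation, billions of times per second.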
As interesting as all of this is, we really have become a society of immediate gratification: our data, our money, our math, our food, anything we consume. As for me, I’ll keep my need for speed like this: