NVIDIA RTX 2080 Ti Set to Enable Faster Deep Learning
- NVIDIA has launched a new series of graphics cards, the GeForce RTX 2000 series
- The company claims the RTX 2070 is 40% faster than its previous release, the GTX 1070
- Check out the comparison table below, which shows how the new cards stack up against the previous generation
Nvidia unveiled its new GeForce RTX 2000 series of graphics cards at Gamescom earlier today. While there has been plenty of anticipation in the gaming community, it's the possibilities for Deep Learning that have my eyes gleaming as I write this post.
Nvidia announced the RTX 2070, which it claims is 40% faster than the GTX 1070.
The beast of the lineup, the RTX 2080 Ti, comes with 11 GB of GDDR6 memory and 4352 CUDA cores (yes, you read that right), which is 21% more CUDA cores than the GTX 1080 Ti. I expect this to translate into a 40%+ performance improvement over the GTX 1080 Ti, although only time will tell.
The cards are up for pre-order and will ship from 20th September 2018. Here is a brief summary of the new cards' specifications against the older ones:
| Spec   | RTX 2080 Ti | RTX 2080   | GTX 1080 Ti  | GTX 1080    | RTX 2070   | GTX 1070  |
|--------|-------------|------------|--------------|-------------|------------|-----------|
| Memory | 11 GB GDDR6 | 8 GB GDDR6 | 11 GB GDDR5X | 8 GB GDDR5X | 8 GB GDDR6 | 8 GB GDDR5 |
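If you get your hands on one of these cards, the reported specs are easy to sanity-check from code. Here is a minimal PyTorch sketch (assuming a CUDA-enabled PyTorch install) that prints the device name, total memory, and SM count. Note that the driver does not report CUDA core counts directly, so the core figure below is derived under the assumption of 64 cores per SM, which is the Turing layout:

```python
import torch

# Requires a CUDA-enabled PyTorch build and an NVIDIA GPU with drivers installed.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Device:       {props.name}")
    print(f"Total memory: {props.total_memory / 1024**3:.1f} GB")
    print(f"SM count:     {props.multi_processor_count}")
    # Assumption: Turing GPUs (RTX 20 series) have 64 CUDA cores per SM,
    # so e.g. the RTX 2080 Ti's 68 SMs give 68 * 64 = 4352 cores.
    print(f"CUDA cores:   {props.multi_processor_count * 64} (assuming 64 cores/SM)")
else:
    print("No CUDA device found.")
```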
Our take on this
We think NVIDIA is set to have a big hardware impact on Deep Learning. A 20% to 40% increase in hardware performance, combined with the advancements happening on the algorithms side, should accelerate Deep Learning innovation and have a huge impact on real-world applications in the coming 6 to 12 months. We can't wait to get our hands on this new beast.
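Until independent benchmarks land, the honest way to quantify that 20% to 40% figure is to time an identical workload on both generations of cards. Below is a rough, hypothetical timing sketch in PyTorch; the layer stack and batch size are arbitrary placeholders rather than an established benchmark, so treat the throughput it prints as a relative number for comparing cards, not an absolute one:

```python
import time
import torch
import torch.nn as nn

assert torch.cuda.is_available(), "This sketch assumes an NVIDIA GPU is present."
device = torch.device("cuda")

# A small stand-in workload: a stack of conv layers. The architecture and
# batch size are arbitrary choices for illustration, not a standard benchmark.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
).to(device)
x = torch.randn(32, 3, 224, 224, device=device)

# Warm-up iterations so one-time CUDA initialization doesn't skew the timing.
for _ in range(5):
    model(x).sum().backward()

torch.cuda.synchronize()  # GPU work is async; sync before reading the clock
start = time.time()
for _ in range(50):
    model(x).sum().backward()
torch.cuda.synchronize()
elapsed = time.time() - start

print(f"{50 * 32 / elapsed:.1f} images/sec on {torch.cuda.get_device_name(0)}")
```

Running the same script on a GTX 1080 Ti and an RTX 2080 Ti once both are in hand would give a direct images-per-second comparison for this particular workload.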
Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!