Kunal Jain — Published On August 21, 2018 and Last Modified On May 7th, 2019

Overview

  • NVIDIA has launched a new series of graphics cards called the GeForce RTX 2000
  • The company claims that the RTX 2070 is 40% faster than its predecessor, the GTX 1070
  • Check out the comparison table below, which illustrates the differences between the offerings in this space


Introduction

Nvidia unveiled its new GeForce RTX 2000 series of graphics cards at Gamescom earlier today. While there has been a lot of anticipation in the gaming community, as I write this post my eyes are gleaming at the possibilities for Deep Learning.

Nvidia announced the RTX 2070, which is claimed to be 40% faster than the GTX 1070.

The beast – the RTX 2080 Ti – comes with 11 GB of GDDR6 and 4352 CUDA cores (yes, you read that right), which is 21% more than the GTX 1080 Ti (see the quick calculation after the table below). I think this should translate into a 40%+ performance improvement over the GTX 1080 Ti – although only time will tell.

The cards are up for pre-order and will start shipping from 20th September 2018. Here is a brief summary of the specifications of the new cards against the older ones:


                  RTX 2080 Ti   RTX 2080    GTX 1080 Ti   GTX 1080     RTX 2070    GTX 1070
Memory            11GB GDDR6    8GB GDDR6   11GB GDDR5X   8GB GDDR5X   8GB GDDR6   8GB GDDR5
CUDA Cores        4352          2944        3584          2560         2304        1920
Memory Interface  352-bit       256-bit     352-bit       256-bit      256-bit     256-bit
TDP               285W          285W        250W          180W         180W        150W
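
To make the generational jump concrete, here is a minimal Python sketch (using the CUDA core counts straight from the table above; the new-vs-old card pairings are my own reading of the lineup) that computes the raw increase each Turing card offers over the Pascal card it replaces:

# CUDA core counts from the specification table above.
cuda_cores = {
    "RTX 2080 Ti": 4352, "GTX 1080 Ti": 3584,
    "RTX 2080": 2944, "GTX 1080": 2560,
    "RTX 2070": 2304, "GTX 1070": 1920,
}

# Each new Turing card paired with the Pascal card it replaces.
pairs = [
    ("RTX 2080 Ti", "GTX 1080 Ti"),
    ("RTX 2080", "GTX 1080"),
    ("RTX 2070", "GTX 1070"),
]

for new, old in pairs:
    increase = cuda_cores[new] / cuda_cores[old] - 1
    print(f"{new} vs {old}: {increase:.0%} more CUDA cores")

# Output:
# RTX 2080 Ti vs GTX 1080 Ti: 21% more CUDA cores
# RTX 2080 vs GTX 1080: 15% more CUDA cores
# RTX 2070 vs GTX 1070: 20% more CUDA cores

Keep in mind that raw core counts ignore the Tensor Cores and the faster GDDR6 memory, so real-world Deep Learning performance could diverge from these percentages in either direction.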


Our take on this

We think NVIDIA is set to have a big hardware impact on Deep Learning. A 20% – 40% increase in hardware performance, combined with the advancements happening in algorithms, should accelerate Deep Learning innovation and have a huge impact on real-world applications in the coming 6 – 12 months. We can’t wait to get our hands on this new beast.


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!


About the Author

Kunal Jain

Kunal is a postgraduate from IIT Bombay in Aerospace Engineering. He has spent more than 10 years in the field of Data Science. His work experience ranges from mature markets like the UK to a developing market like India. During this period, he has led teams of various sizes and has worked on various tools like SAS, SPSS, QlikView, R, Python and MATLAB.


4 thoughts on "NVIDIA RTX 2080 Ti set for Enabling Faster Deep Learning"

ChEd says: August 21, 2018 at 1:23 pm
And the 2080 Ti has almost as many Tensor Cores as the Titan V, which the Pascal-based cards don't have at all. For half the price of a Titan V... That's very exciting!
FRAN says: August 21, 2018 at 5:00 pm
I want confirmation about that. Their website says 100+ TFLOPS for AI, but I could not find the Tensor Core info, so I'm a little worried in case they are capped. If we have those Tensor Cores, it will be a fast buy for me.
Tony Holdroyd says: August 22, 2018 at 6:27 pm
Does anyone have any information about when the Linux and CUDA/cuDNN drivers will be available? Hopefully at launch?
Jeremy Poulain says: August 22, 2018 at 10:40 pm
On paper, the card sounds interesting for ML: more memory bandwidth, more CUDA cores, Tensor Cores, an NVLink bridge… The only drawback would be that the Founders Edition cards won’t come with a blower-style cooler (which could be problematic for use in a workstation). But given the price tag, I would wait for some ML benchmarks to see how the new cards behave in real-world conditions. Moreover, in the coming months there will certainly be some interesting deals on the GTX 1080 Ti/1080, which could turn those cards into real bombs in terms of performance/$. There were also some rumors about the release of a 16GB version of the RTX series (might this be the future Titan RTX?) – such a version would be REALLY great!!
