
NVIDIA RTX 2080 Ti set for Enabling Faster Deep Learning


  • NVIDIA has launched a new series of graphics cards called the GeForce RTX 2000
  • The company claims that the RTX 2070 is 40% faster than its predecessor, the GTX 1070
  • Check out the comparison chart below, which illustrates the differences between the cards in this series



NVIDIA unveiled its new GeForce RTX 2000 series of graphics cards at Gamescom earlier today. While there has been a lot of anticipation in the gaming community, my eyes are gleaming with the possibilities for Deep Learning as I write this post.

NVIDIA announced the RTX 2070, which it claims is 40% faster than the GTX 1070.

The beast of the lineup, the RTX 2080 Ti, comes with 11 GB of GDDR6 memory and 4,352 CUDA cores (yes, you read that right) – 21% more CUDA cores than the GTX 1080 Ti. I think this would result in a 40%+ performance improvement over the GTX 1080 Ti, although only time will tell.
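As a quick sanity check on the core-count claim, a few lines of Python (using the counts quoted above) reproduce the roughly 21% figure:

```python
# CUDA core counts as quoted above
rtx_2080_ti_cores = 4352
gtx_1080_ti_cores = 3584

# Relative increase in core count
increase = (rtx_2080_ti_cores - gtx_1080_ti_cores) / gtx_1080_ti_cores
print(f"RTX 2080 Ti has {increase:.1%} more CUDA cores than the GTX 1080 Ti")
# prints: RTX 2080 Ti has 21.4% more CUDA cores than the GTX 1080 Ti
```

Of course, core count alone is a rough proxy: clock speeds, memory bandwidth, and the new tensor cores all factor into real Deep Learning throughput.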

The cards are up for pre-orders and will be delivered from 20th September 2018. Here is a brief summary of the specifications of the new cards against the older ones:


                  RTX 2080 Ti   RTX 2080   GTX 1080 Ti   GTX 1080   RTX 2070   GTX 1070
CUDA Cores        4352          2944       3584          2560       2304       1920
Memory interface  352-bit       256-bit    352-bit       256-bit    256-bit    256-bit
TDP               285W          285W       250W          180W       180W       150W
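If you want to play with these numbers yourself, here is a minimal Python sketch that encodes the table above and ranks the cards by a crude cores-per-watt ratio (specs as quoted; real-world performance also depends on clocks, memory speed, and tensor cores):

```python
# Specs transcribed from the comparison table above
cards = {
    "RTX 2080 Ti": {"cuda_cores": 4352, "bus_width_bits": 352, "tdp_watts": 285},
    "RTX 2080":    {"cuda_cores": 2944, "bus_width_bits": 256, "tdp_watts": 285},
    "GTX 1080 Ti": {"cuda_cores": 3584, "bus_width_bits": 352, "tdp_watts": 250},
    "GTX 1080":    {"cuda_cores": 2560, "bus_width_bits": 256, "tdp_watts": 180},
    "RTX 2070":    {"cuda_cores": 2304, "bus_width_bits": 256, "tdp_watts": 180},
    "GTX 1070":    {"cuda_cores": 1920, "bus_width_bits": 256, "tdp_watts": 150},
}

# Rank the cards by CUDA cores per watt of TDP (a very rough efficiency proxy)
ranked = sorted(cards.items(),
                key=lambda item: item[1]["cuda_cores"] / item[1]["tdp_watts"],
                reverse=True)
for name, spec in ranked:
    print(f"{name:12s} {spec['cuda_cores'] / spec['tdp_watts']:5.1f} cores/W")
```

Interestingly, even by this naive metric the RTX 2080 Ti comes out on top of the listed cards.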



Our take on this

We think NVIDIA is set to have a big hardware impact on Deep Learning. A 20–40% increase in hardware performance, combined with the advancements happening in algorithms, should accelerate Deep Learning innovation and have a huge impact on real-world applications in the coming 6–12 months. We can’t wait to get our hands on this new beast.


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!




  • ChEd says:

    And the 2080 Ti has almost as many tensor cores as the Titan V, which the Pascal-based cards don’t have at all.
    For half the price of a Titan V… That’s very exciting!

    • FRAN says:

      I want confirmation of that. Their website says 100+ TFLOPS for AI, but I could not find the tensor core info, so I’m a little worried in case they are capped.

      If we get those tensor cores, it will be a fast buy for me.

  • Tony Holdroyd says:

    Does anyone have any information about when the Linux CUDA and cuDNN drivers will be available? Hopefully at launch?

  • Jeremy Poulain says:

    On paper, the card sounds interesting for ML: more memory bandwidth, more CUDA cores, tensor cores, an NVLink bridge… The only drawback is that the Founders Edition cards won’t come with a blower-style cooler (which could be problematic for use in a workstation). But given the price tag, I would wait for some ML benchmarks to see how the new cards behave in real-world conditions.
    Moreover, in the coming months there will certainly be some interesting deals on the GTX 1080 Ti/1080, which could turn those cards into real bombs in terms of performance/$.
    There were also some rumors about the release of a 16 GB version of the RTX series (might it be the future Titan RTX?) – such a version would be REALLY great!!