E2E Networks launches Nvidia’s Tesla® V100 GPU-based instances

E2E Networks, a leading Indian Cloud Provider, today announced the launch of its datacenter-grade Cloud GPU instances based on Nvidia’s Tesla V100 on its Public Cloud, aimed at companies and startups whose data scientists and engineers work on AI/ML workloads. The compute instances, built on Nvidia Tesla V100 GPUs with 32 GB of onboard graphics memory, are well suited to Machine Learning, Deep Learning for Natural Language Processing and structured data analytics, Convolutional Neural Networks for image recognition and generation, Deep Analytics, Computer Vision and Conversational Speech Recognition, among other uses.

E2E Networks GPU instances offer bare-metal performance: the GPUs are attached directly to the Virtual Compute Nodes in passthrough mode, whether your workloads require CUDA, TensorFlow, MXNet, Caffe2, OpenFOAM, Theano, PyTorch or other AI/ML/DL/CNN frameworks.
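For teams evaluating such an instance, a quick sanity check is to confirm that the passthrough GPU is visible from inside the virtual machine. The snippet below is a minimal sketch, assuming a standard PyTorch installation; the exact device string and memory figure reported may vary.

    import torch

    # Check whether the passthrough GPU is visible to the framework
    if torch.cuda.is_available():
        name = torch.cuda.get_device_name(0)
        mem_gb = torch.cuda.get_device_properties(0).total_memory / 1024 ** 3
        # On these instances one would expect a Tesla V100 with roughly 32 GB
        print(f"GPU detected: {name} ({mem_gb:.0f} GB memory)")
    else:
        print("No CUDA-capable GPU visible to PyTorch")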

E2E Networks GPU instances can cut operational costs by as much as 70% compared to other leading Public Cloud providers. The GPU instances are available as hourly billed instances or as pre-committed instances at deeply discounted pricing. They are hosted in Indian datacenters, ensuring data locality for your critical India-centric data.

Speaking about the GPU instances, Tarun Dua, CMD, E2E Networks, says, “What we have seen amongst Cloud Computing users is that the AI/ML/Deep Learning workloads are usually NOT a part of the HTTP-Request-Response cycle and it is usually more effective to use GPU instances at a Cloud Provider which provides them better value. Multi-Cloud is a reality today and there is a clearly established trend in moving significantly large AI/ML workloads away from the primary public cloud operators to derive better value from their deep learning training models leveraging the power of data.”
