AiThority Interview with Manuvir Das, VP, Enterprise Computing at NVIDIA

This article was originally published by AITHORITY
Manuvir Das, VP, Enterprise Computing at NVIDIA

What are the key focus areas in your partner news with Microsoft? How would AI developers benefit from this unique combination of AI initiatives?

Overall, our Microsoft Build announcements reveal the work NVIDIA is doing with Microsoft — NVIDIA AI Enterprise integration with Azure Machine Learning, NeMo availability on Windows, and the availability of NVIDIA AI Enterprise and Omniverse Cloud on Azure Marketplace — to advance AI and generative AI, and to give customers and developers the ability to do leading-edge AI wherever they are, on whatever device or infrastructure they use.

The NVIDIA AI Enterprise integration with Azure Machine Learning creates the first enterprise-grade, end-to-end platform for developers to build, deploy and manage AI applications based on custom large language models. This makes it easier for enterprises to adopt high-performance, enterprise-ready MLOps with security, reliability, and API stability — all provided with enterprise-grade support.

The availability of NeMo on Windows provides developers with a suite of advancements in Windows 11 PCs and workstations with NVIDIA RTX GPUs to meet the demands of generative AI. It brings the tools to develop AI on Windows PCs, frameworks to optimize and deploy AI, and driver performance and efficiency improvements to those who are building out the next generation of Windows apps with generative AI at their core.

Additionally, the availability of both NVIDIA AI Enterprise and Omniverse Cloud on Azure Marketplace is another way we’re making sure all these world-class development tools are easily accessible to all the developers in the global AI community who can benefit from their use.


Every AI company is looking to build its own customizable MLOps stack for faster development and deployment. How does NVIDIA AI Enterprise accelerate MLOps?

NVIDIA is making enterprise-grade MLOps accessible to all with the integration of NVIDIA AI Enterprise with Azure Machine Learning. It creates a reliable and secure platform where organizations can easily adopt high-performance, enterprise-ready software that includes frameworks and pretrained models for a wide variety of AI use cases, including generative AI, speech AI, cybersecurity, medical imaging, and more.

One exciting use case is pose estimation, a computer vision technique that predicts and tracks the location of a person or object by estimating the positions and orientation of key body points. Pose estimation is a core task in computer vision and AI and is used across multiple industries.

In healthcare, pose estimation is used to address fall detection, which helps to alert caregivers of a patient in need of assistance. In sports training, fitness, and dance, pose estimation offers more accurate insights on an athlete’s or dancer’s technique.
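As a rough illustration of how pose-estimation output might feed a fall-detection rule, here is a toy heuristic in plain Python. The keypoint format, names, and threshold are invented for this sketch, not taken from NVIDIA's models or any specific library:

```python
# Hypothetical fall-detection heuristic on 2D pose keypoints.
# Assumes keypoints are (x, y) pixel coordinates with y increasing downward.

def is_fallen(keypoints: dict, torso_ratio: float = 0.5) -> bool:
    """Flag a likely fall when the torso is closer to horizontal than
    vertical, i.e. the vertical span between the shoulder midpoint and
    hip midpoint is small relative to the horizontal span."""
    sx, sy = keypoints["shoulder_center"]
    hx, hy = keypoints["hip_center"]
    dx, dy = abs(hx - sx), abs(hy - sy)
    # Upright: torso mostly vertical (dy dominates).
    # Fallen: torso mostly horizontal (dx dominates).
    return dy < torso_ratio * dx

# Standing: shoulders well above hips.
standing = {"shoulder_center": (100, 50), "hip_center": (105, 150)}
# Lying down: shoulders and hips at similar height.
fallen = {"shoulder_center": (50, 200), "hip_center": (180, 210)}

print(is_fallen(standing))  # False
print(is_fallen(fallen))    # True
```

A production system would of course smooth over multiple frames and use confidence scores from the pose model rather than a single-frame geometric test.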

Tell us more about your pretrained models and how these integrate with the Microsoft Azure Machine Learning platform.

A pretrained AI model is one that’s trained on large datasets to accomplish a specific task. It can be used as is or customized to suit application requirements across multiple industries. Developers building on pretrained models can create AI applications faster.

NVIDIA AI Enterprise includes an extensive library of pretrained models that are unencrypted. This allows developers and enterprises looking to integrate NVIDIA pretrained models into their custom AI applications to view model weights and biases, improve explainability, and debug easily.

Integration with Azure Machine Learning will allow developers to more easily access and use the pretrained models that are included in NVIDIA AI Enterprise to build their own AI solutions.
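Schematically, the pretrain-then-customize workflow described above looks like the following toy sketch in plain Python. All names and numbers are illustrative stand-ins, not the NVIDIA or Azure Machine Learning API: the "backbone" plays the role of the frozen pretrained model, and only a small head is fit on task data:

```python
# Toy sketch of the pretrained-model workflow: start from published
# weights, then fine-tune only a small head on task-specific data.

# "Pretrained" weights: a fixed feature extractor learned elsewhere.
PRETRAINED = {"w": 2.0, "b": 1.0}

def backbone(x, params):
    # Frozen pretrained transform (stands in for a large network).
    return params["w"] * x + params["b"]

def fine_tune_head(examples, params, lr=0.01, epochs=50):
    """Fit a scalar head `h` on top of frozen backbone features
    with plain gradient descent on squared error."""
    h = 1.0  # head initialized from scratch
    for _ in range(epochs):
        for x, y in examples:
            feat = backbone(x, params)
            pred = h * feat
            grad = 2 * (pred - y) * feat
            h -= lr * grad
    return h

# Task data happens to follow y = 3 * backbone(x).
data = [(0.0, 3.0), (1.0, 9.0), (2.0, 15.0)]
h = fine_tune_head(data, PRETRAINED)
print(round(h, 2))  # prints 3.0
```

The point of the pattern is the division of labor: the expensive general-purpose weights are reused as-is, while only the small task-specific part is trained, which is why building on pretrained models is faster than training from scratch.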


Can you share one untapped area where LLMs and generative AI have yet to be tested and applied?

Since ChatGPT made its public debut only months ago, many enterprises are still in the early stages of developing plans for custom large language models and generative AI chatbots that are tuned to their business needs. NVIDIA is working with our partners to help create a technology foundation for enterprises everywhere to leverage their proprietary data for models and applications that are tailored to their unique industry and needs.

What is NVIDIA's focus in supporting the open source LLM development community?

The AI community by nature has been open source for many years, and NVIDIA fully expects that to continue. NVIDIA AI Enterprise is our commercial offering, built to bring value to enterprise customers by creating a straightforward and accessible way to consume much of the open source software to which NVIDIA is a big contributor. For those who aren’t aware, NVIDIA recently released NeMo Guardrails, open-source software that allows developers to add programmable guardrails to conversational systems, such as chatbots, that keep responses on track and relevant to the context of the conversation. We had a hugely positive response to this from the AI community.
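For context, NeMo Guardrails rails are typically declared in Colang configuration files. A minimal sketch in the style of the project's published examples follows; the canned utterances are invented here, and exact syntax may vary by version:

```colang
# A minimal "stay on topic" rail, in the style of NeMo Guardrails'
# Colang examples; utterances below are illustrative only.

define user ask off topic
  "What do you think about politics?"
  "Give me stock picks."

define bot refuse off topic
  "I can only help with questions about our products."

define flow off topic
  user ask off topic
  bot refuse off topic
```

At runtime, the library matches incoming user messages against the defined user intents and steers the bot's response along the declared flow, which is what keeps responses on track and relevant to the conversation.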

We fully expect open source to continue to be a significant part of generative AI and large language model-based applications, and open source offerings will likely evolve beyond software. As an example, we are currently seeing interest in pretrained models that, like open source software such as NeMo, are easily available to developers. All these things are helping researchers and developers to more easily experiment and advance some of the technology that ends up becoming a part of NVIDIA AI Enterprise, which enables enterprises to adopt the latest advancements with the security, stability, and support they require.


Thank you, Manuvir! That was fun and we hope to see you back on AiThority.com soon.


Since its founding in 1993, NVIDIA has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.
