NVIDIA Teams With Google DeepMind to Drive LLM Innovation

From NVIDIA: 2024-05-14 15:40:49

AI model innovation is soaring, but deployment challenges persist. NVIDIA and Google have teamed up to optimize two new Google models: Gemma 2, the next generation of the open Gemma language models, which offers breakthrough performance, and PaliGemma, an open vision-language model designed to be fine-tuned for tasks such as image captioning and visual question answering. Both models are now supported by NVIDIA NIM inference microservices for easy deployment at scale.
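
Because NIM microservices expose an OpenAI-compatible API, a deployed Gemma 2 endpoint can be queried with the standard openai Python client. This is a minimal sketch; the endpoint URL, API key placeholder, and model identifier are illustrative assumptions, not values from the announcement:

```python
# Minimal sketch: querying a Gemma 2 NIM endpoint through its
# OpenAI-compatible chat completions API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local NIM endpoint
    api_key="not-needed-for-local-nim",   # placeholder; hosted endpoints require a real key
)

response = client.chat.completions.create(
    model="google/gemma-2-9b-it",  # assumed model identifier for illustration
    messages=[{"role": "user", "content": "Summarize what a vision-language model does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```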

RAPIDS cuDF, a GPU DataFrame library, is now available by default on Google Colab, speeding up pandas-based Python workflows by up to 50x. With the cuDF pandas accelerator, developers can accelerate exploratory analysis and data pipelines without changing a line of code, even on large datasets that bog down CPU-only processing.
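
A minimal sketch of how the zero-code-change acceleration works: loading the cudf.pandas extension before importing pandas routes supported operations to the GPU and falls back to the CPU otherwise. The CSV path and column names below are hypothetical placeholders.

```python
# In a Colab notebook cell, load the cuDF pandas accelerator first:
#   %load_ext cudf.pandas
# In a plain Python script, the equivalent is:
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # unchanged pandas code now runs on the GPU where supported

# Hypothetical dataset and columns, for illustration only.
df = pd.read_csv("transactions.csv")
summary = df.groupby("category")["amount"].agg(["count", "mean", "sum"])
print(summary.head())
```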

Google and NVIDIA are also collaborating on Firebase Genkit, a framework for integrating AI models such as Gemma into web and mobile apps to power features like custom content generation and semantic search. Developers can start on local NVIDIA RTX GPUs and then move their workloads seamlessly to Google Cloud. This partnership expands what developers can build with NVIDIA technologies on Google Cloud.

Read more at NVIDIA: NVIDIA Teams With Google DeepMind to Drive LLM Innovation