Cloud Covered: What was new with Google Cloud in February

AI and Machine Learning: The Core of Innovation

February unleashed a wave of artificial intelligence advancements on Google Cloud, headlined by new model releases and deeper integrations that make AI more practical and powerful for developers. The debut of Gemini 3.1 Flash-Lite provided a fast, cost-optimized option for high-volume workloads, while Gemini 3.1 Pro raised the bar for complex reasoning tasks. Simultaneously, Nano Banana 2 democratized professional-grade image generation, offering Pro-tier quality at Flash-tier speeds.

Perhaps more transformative was the general availability of Vertex AI integration with Cloud SQL for MySQL, enabling SQL queries to directly invoke online predictions and generate vector embeddings. This move effectively erases the traditional boundary between databases and AI platforms, creating a more unified data intelligence layer.

Gemini Models: Architecting for Scale and Nuance

The dual release of Gemini 3.1 Flash-Lite and Pro showcases a strategic approach to model tiering. Flash-Lite is engineered for massive-scale operations where throughput and cost-per-task are paramount, such as bulk content moderation or translation. In contrast, 3.1 Pro serves as a smarter baseline for intricate problem-solving, from generating detailed simulations to constructing user interfaces from natural language prompts.
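The tiering strategy above can be sketched as a routing decision. This is a minimal illustration, not a Google Cloud API: the model identifiers mirror the names in this article, while the task categories and batch-size threshold are illustrative assumptions.

```python
# Sketch: routing workloads between model tiers by cost/complexity profile.
# The heuristic and threshold are illustrative assumptions, not a real API.

def pick_model(task: str, batch_size: int) -> str:
    """Choose a cost-appropriate tier for a workload."""
    bulk_tasks = {"moderation", "translation", "classification"}
    if task in bulk_tasks or batch_size > 1000:
        # High-volume work favors throughput and cost-per-task.
        return "gemini-3.1-flash-lite"
    # Intricate problem-solving goes to the smarter baseline.
    return "gemini-3.1-pro"

print(pick_model("translation", batch_size=50_000))  # bulk workload
print(pick_model("ui-generation", batch_size=1))     # complex reasoning
```

In practice such routing often also weighs latency budgets and output quality requirements, but the core idea is the same: reserve the Pro tier for tasks that need it.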

Vertex AI: Weaving Intelligence into Data Fabrics

The GA of Vertex AI integration for Cloud SQL is a milestone for operational AI. By allowing models hosted in Vertex AI to be called via simple SQL, it eliminates complex data movement and API orchestration. This tight coupling means data scientists and application developers can enrich transactional data with AI insights in real-time, significantly accelerating the path from data to decision.
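The "model call inside SQL" pattern can be demonstrated locally with SQLite, using a stub predictor registered as a SQL function. This is a sketch of the pattern only: in Cloud SQL for MySQL the prediction function is provided by the managed Vertex AI integration, whereas the `predict` function here is a stand-in with an assumed name.

```python
# Sketch of invoking a model from inside a SQL query, using SQLite and a
# stub predictor. The function name `predict` is an illustrative stand-in
# for the managed Vertex AI integration described in the article.
import sqlite3

def predict(text: str) -> str:
    """Stub standing in for an online Vertex AI prediction."""
    return "positive" if "great" in text else "neutral"

conn = sqlite3.connect(":memory:")
conn.create_function("predict", 1, predict)  # expose the model to SQL
conn.execute("CREATE TABLE reviews (body TEXT)")
conn.executemany("INSERT INTO reviews VALUES (?)",
                 [("great product",), ("arrived on time",)])

# Enrich transactional rows with an AI insight directly in the query,
# with no data export or separate API orchestration step.
rows = conn.execute("SELECT body, predict(body) FROM reviews").fetchall()
print(rows)
```

The appeal of the managed integration is exactly this shape: the prediction lives in the query plan, next to the data it enriches.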

Database and Analytics: Smarter, Faster, and More Connected

The data layer received substantial upgrades focused on scalability, intelligence, and seamless AI interaction. Cloud Spanner introduced powerful new autoscaling capabilities, while the preview of remote Model Context Protocol (MCP) servers for Cloud SQL, Spanner, and Firestore created a standardized bridge for AI agents to interact with live data.

BigQuery continued its evolution with new AI-powered features like dataset insights and multicategory classification models. A critical reminder for users was the impending deadline for Legacy SQL, pushing adoption towards modern, more performant query syntax. These updates collectively push databases from passive stores to active, intelligent participants in the application stack.

Cloud Spanner and the Autoscaling Revolution

Spanner's new autoscaling features represent a leap forward for managing unpredictable workloads. The system can now dynamically adjust compute and storage resources based on actual demand, ensuring performance without over-provisioning. This is especially crucial for global applications that experience variable traffic patterns across different regions and times.
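The core of demand-based scaling is a small calculation: size the fleet so projected utilization lands near a target. The sketch below is a toy model of that idea; the target utilization, bounds, and formula are illustrative assumptions, not Spanner's actual algorithm.

```python
# Toy autoscaler: compute a node count from observed CPU utilization.
# Targets and bounds are illustrative assumptions, not Spanner internals.
import math

def target_nodes(current: int, cpu_util: float,
                 target_util: float = 0.65,
                 min_nodes: int = 1, max_nodes: int = 10) -> int:
    """Scale so projected utilization lands near the target."""
    needed = math.ceil(current * cpu_util / target_util)
    return max(min_nodes, min(max_nodes, needed))

print(target_nodes(3, cpu_util=0.90))  # overloaded -> scale out
print(target_nodes(3, cpu_util=0.20))  # idle -> scale in toward the floor
```

The min/max bounds matter in practice: they cap cost during traffic spikes and guarantee a performance floor during lulls.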

The MCP Server: A Universal Adapter for AI

The introduction of remote MCP servers in preview for key database services is a foundational shift. These servers act as a universal translator, allowing AI applications and agents like Gemini or Claude to securely query, update, and reason over database content using a standardized protocol. It effectively turns any supported database into a conversant data source for the next generation of agentic applications.
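The "universal translator" idea can be sketched as a dispatcher with a uniform call shape: the agent names a tool and passes arguments, and the server maps that onto the data source. The tool names and dict-based request format below are illustrative assumptions, not the actual MCP wire protocol.

```python
# Minimal sketch of the MCP idea: a server exposes named tools behind one
# uniform call shape, so any agent can query data without a bespoke driver.
# Tool names and the dict request format are illustrative assumptions.

DATABASE = {"users": [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Lin"}]}

def handle_call(request: dict) -> dict:
    """Dispatch a standardized {tool, arguments} request to a handler."""
    tools = {
        "list_tables": lambda args: sorted(DATABASE),
        "read_rows":   lambda args: DATABASE[args["table"]],
    }
    handler = tools.get(request["tool"])
    if handler is None:
        return {"error": f"unknown tool {request['tool']!r}"}
    return {"result": handler(request.get("arguments", {}))}

print(handle_call({"tool": "list_tables"}))
print(handle_call({"tool": "read_rows", "arguments": {"table": "users"}}))
```

Because the call shape is standardized, the same agent code can talk to Cloud SQL, Spanner, or Firestore once each exposes an MCP endpoint.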

Serverless Computing: Building Resilience and Secure Connectivity

Serverless platforms like Cloud Run and Cloud Functions gained features that enhance robustness for mission-critical applications. Cloud Run's multi-region deployments with automated failover entered preview, a game-changer for building globally resilient services without managing infrastructure. Cloud Functions made direct VPC egress generally available for second-generation functions, enabling secure, private connections to internal resources.

The OSON24 runtime for Cloud Run also reached general availability, catering to developers who prefer deploying directly from source code. Furthermore, a new migration path from App Engine to Cloud Run simplifies modernizing legacy applications, offering a clearer trajectory towards a fully serverless future.

Cloud Run Achieves Global Resilience

The preview of multi-region failover for Cloud Run external traffic is a monumental step for serverless reliability. Services can now be deployed across multiple geographical regions, with Google's infrastructure automatically routing users to the healthy region in case of an outage. This brings enterprise-grade high availability to a fully managed, scale-to-zero platform.
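The failover behavior described above reduces to priority-ordered health-aware routing. The sketch below is an illustrative model of that logic, not Cloud Run's implementation; the region list and health map are assumed inputs.

```python
# Sketch: route traffic to the first healthy region in priority order.
# An illustrative model of automated failover, not Cloud Run internals.

def route(regions: list, healthy: dict) -> str:
    """Return the first healthy region in priority order."""
    for region in regions:
        if healthy.get(region):
            return region
    raise RuntimeError("no healthy region available")

priority = ["us-central1", "europe-west1", "asia-east1"]
print(route(priority, {"us-central1": True, "europe-west1": True}))
# After an outage in the primary region, traffic shifts automatically:
print(route(priority, {"us-central1": False, "europe-west1": True}))
```

The value of the managed feature is that this routing, plus the health checking behind it, happens in Google's infrastructure rather than in your application code.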

Securing the Serverless Perimeter with VPC Egress

The GA of direct VPC egress for Cloud Functions 2nd gen addresses a key security and connectivity concern. Functions can now access databases, private APIs, or other services within a Virtual Private Cloud without traversing the public internet. This maintains a strong security perimeter while enabling serverless functions to act as integral parts of a private, hybrid architecture.

Security and Operations: Unifying Control and Proactive Insights

Security and operational management saw significant consolidation and automation. Google SecOps reached a major milestone with the general availability of unified Role-Based Access Control (RBAC), allowing administrators to manage permissions for SIEM and SOAR features directly through Google Cloud IAM. The Security Command Center deepened its integration with AppHub for contextual risk analysis.

For database administrators, a preview feature brought Gemini-powered investigation capabilities to Cloud SQL and AlloyDB, using AI to help troubleshoot slow queries. These updates point towards a future where security is seamlessly embedded and operational burdens are alleviated through intelligent assistance.

Unified RBAC: Simplifying SecOps Governance

The GA of unified RBAC for Google SecOps eliminates the need for separate permission systems. By leveraging the robust IAM framework of Google Cloud, teams can now define precise, granular access controls for security analysts and responders across both SIEM and SOAR functionalities. This streamlines administration and enhances security posture through consistent policy enforcement.
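Conceptually, unification means one role-to-permission table governs both SIEM and SOAR actions, with a single check path. The role and permission strings below are illustrative assumptions, not actual Google Cloud IAM identifiers.

```python
# Sketch: one role-to-permission table spanning SIEM and SOAR actions,
# mirroring the "single IAM framework" idea. Names are illustrative, not
# real Google Cloud IAM role or permission identifiers.

ROLE_PERMISSIONS = {
    "secops.analyst":   {"siem.search", "siem.viewAlerts"},
    "secops.responder": {"siem.search", "siem.viewAlerts",
                         "soar.runPlaybook", "soar.closeCase"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Single check path for both SIEM and SOAR features."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("secops.analyst", "soar.runPlaybook"))    # analysts can't respond
print(is_allowed("secops.responder", "soar.runPlaybook"))  # responders can
```

Keeping both feature sets behind one table is what makes policy enforcement consistent: there is no second permission system to drift out of sync.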

AI-Powered Database Troubleshooting

The preview of Gemini Cloud Assist investigation for Cloud SQL and AlloyDB introduces an AI co-pilot for DBAs. When faced with performance issues like slow queries, the system can analyze metrics, logs, and configuration to suggest root causes and potential optimizations. This transforms troubleshooting from a manual detective game into a guided, intelligent workflow.
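A guided investigation can be pictured as rules applied to query statistics that emit candidate root causes. The real feature reasons over metrics, logs, and configuration with Gemini; the two heuristics below are illustrative assumptions meant only to show the shape of the workflow.

```python
# Toy sketch of guided troubleshooting: apply simple heuristics to query
# stats and emit candidate root causes. The real Gemini-powered feature is
# far richer; these rules and field names are illustrative assumptions.

def diagnose(stats: dict) -> list:
    findings = []
    if stats.get("rows_examined", 0) > 100 * stats.get("rows_returned", 1):
        findings.append("possible missing index: scans far exceed results")
    if stats.get("tmp_disk_tables", 0) > 0:
        findings.append("sort/group spills to disk: consider more memory")
    return findings or ["no obvious cause; inspect the execution plan"]

print(diagnose({"rows_examined": 500_000, "rows_returned": 20,
                "tmp_disk_tables": 1}))
```

The step change with an AI co-pilot is that the "rules" are not hand-written; the model synthesizes hypotheses from whatever signals the instance exposes.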

Developer Experience and Infrastructure: Polish and Power Under the Hood

The developer interface and core infrastructure received thoughtful enhancements aimed at productivity and flexibility. The Google Cloud console welcomed the general availability of Dark Mode, a long-requested feature for reduced eye strain during extended work sessions. API Gateway and Cloud Endpoints gained native OpenAPI v3 support, modernizing API governance.

On the infrastructure front, Google Kubernetes Engine (GKE) introduced Dynamic Default Storage Class to automatically match storage types with node hardware, and Cloud Build expanded its footprint to the Jakarta region. These updates, while sometimes subtle, collectively refine the day-to-day experience of building and running on Google Cloud.

Console Dark Mode and Modern API Specifications

The GA of Dark Mode for the Google Cloud console is more than an aesthetic update; it's a wellness and productivity feature for teams working long hours. Meanwhile, native OpenAPI v3 support in API Gateway means developers can use the latest specification standard without downgrading, ensuring better tooling compatibility and more expressive API contracts from the start.

Smarter Infrastructure with GKE and Cloud Build

GKE's Dynamic Default Storage Class automates a previously manual chore. By inspecting node capabilities, it automatically provisions the correct disk type (Persistent Disk or Hyperdisk), ensuring optimal performance and cost without requiring complex scheduling rules. Cloud Build's expansion into the Jakarta region offers lower latency for CI/CD pipelines in that growing market.
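The selection logic amounts to mapping node hardware to a disk type. The sketch below illustrates that decision; the machine-family set is an illustrative assumption, since GKE's actual capability detection happens in the control plane.

```python
# Sketch: choose a default disk type from node hardware. The set of
# Hyperdisk-capable machine families here is an illustrative assumption,
# not GKE's authoritative mapping.

def default_disk_type(machine_family: str) -> str:
    hyperdisk_families = {"c3", "c3d", "n4"}  # assumed capability set
    if machine_family in hyperdisk_families:
        return "hyperdisk-balanced"
    return "pd-balanced"  # fall back to Persistent Disk

print(default_disk_type("c3"))  # Hyperdisk-capable node
print(default_disk_type("n2"))  # Persistent Disk node
```

Pushing this decision into the platform removes a class of misconfiguration where a workload lands on hardware its storage class cannot use.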

The Integrating Thread: From Silos to a Cohesive Intelligent System

Looking across February's updates, a powerful narrative emerges: Google Cloud is aggressively dismantling silos between AI, data, compute, and operations. The launch of MCP servers isn't just a new feature; it's a philosophical commitment to making every service an accessible component for AI agents. Similarly, unifying SecOps RBAC with Cloud IAM or embedding Vertex AI into Cloud SQL reflects a drive towards a cohesive, intelligent platform.

This shift is foundational for the agentic future. It's no longer about isolated tools performing individual tasks, but about creating an interoperable ecosystem where intelligence flows freely. The innovations of February lay the groundwork for building applications where AI doesn't just analyze data in a vacuum but actively orchestrates and optimizes the entire cloud environment in real-time. The cloud is becoming less of a collection of services and more of a unified, thinking partner.
