# Google Cloud Storage Connector – Docker
The Google Cloud Storage connector monitors a GCS bucket and sends objects to DSX for scanning.
It supports:
- Full scans of an entire bucket or prefix
- Continuous monitoring of new objects
- Remediation actions such as delete, move, or tag after malicious verdicts
Monitoring can be triggered using:
- Google Cloud Pub/Sub notifications (recommended)
- Webhook events from Cloud Run, Cloud Functions, or other middleware
## Prerequisites
Before deploying the connector you must create a Google Cloud service account with access to the target bucket.
Required:
- A service account JSON credential
- Permission to list and read objects
Optional (for remediation actions):
- Permission to move or delete objects
See:
➡️ Google Cloud Credentials
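As a sketch, the service account and key could be provisioned with the gcloud CLI. The account name, bucket, and role choices below are illustrative assumptions, not values mandated by the connector:

```
# Illustrative gcloud sketch (names and roles are assumptions).
PROJECT=your-project            # assumed project ID
BUCKET=your-bucket              # assumed target bucket
SA=dsx-gcs-connector            # assumed service account name

gcloud iam service-accounts create "$SA" --project "$PROJECT"

# objectViewer covers list + read; use roles/storage.objectAdmin instead
# if the connector should also move/delete objects (remediation actions)
gcloud storage buckets add-iam-policy-binding "gs://$BUCKET" \
  --member "serviceAccount:$SA@$PROJECT.iam.gserviceaccount.com" \
  --role roles/storage.objectViewer

# download the JSON key the connector will mount
gcloud iam service-accounts keys create gcp-sa.json \
  --iam-account "$SA@$PROJECT.iam.gserviceaccount.com"
```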
## Minimal Deployment
The following steps install the connector with minimal configuration changes, supporting full scans only.
### Using the Docker bundle
All Docker connector deployments use the official DSX-Connect Docker bundle, which contains the compose files and sample environment files for each connector.
Download the DSX-Connect Docker bundle and navigate to the Google Cloud Storage connector directory:

```
dsx-connect-<core_version>/google-cloud-storage-connector-<connector_version>/
```
The easiest way to deploy the GCS connector is by editing the supplied `sample.gcs.env` file and using it with the supplied `docker-compose-google-cloud-storage-connector.yaml` compose file.
### Mount the service account JSON
Place the GCS service account JSON credential in the same directory as the compose file. By default, the compose file bind-mounts `./gcp-sa.json` from that directory to `/app/creds/gcp-sa.json` inside the container.
Excerpt from the compose file:
```yaml
volumes:
  - type: bind
    source: ./gcp-sa.json
    target: /app/creds/gcp-sa.json
    read_only: true
```
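Before starting the container, it can help to sanity-check that the credential file actually parses as a service account JSON. This is an optional helper, not part of the connector; it only assumes `python3` is available on the host:

```shell
# Optional helper (not part of the connector): verify that a credential
# file parses as JSON and carries the fields a service account
# credential always has. Assumes python3 on the host.
check_creds() {
  python3 - "$1" <<'EOF'
import json, sys

required = {"type", "project_id", "private_key", "client_email"}
try:
    with open(sys.argv[1]) as f:
        cred = json.load(f)
except (OSError, ValueError) as e:
    # missing file or malformed JSON
    print(f"invalid: {e}")
    sys.exit(1)
missing = required - cred.keys()
if missing:
    print(f"missing fields: {sorted(missing)}")
    sys.exit(1)
print("ok")
EOF
}

check_creds gcp-sa.json || echo "fix gcp-sa.json before deploying"
```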
### Set scan parameters
In this minimal deployment, the connector will scan the bucket `your-bucket` and perform no remediation actions.
```
# Google Cloud Storage connector env (sample)
# GCS_IMAGE=dsxconnect/google-cloud-storage-connector:0.5.43  # Optional: only needed if overriding what's in the compose file
# GOOGLE_APPLICATION_CREDENTIALS=/app/creds/gcp-sa.json       # Typically unchanged: container-mounted location of the GCS service account credential
DSXCONNECTOR_ASSET=your-bucket                                # Required: GCS bucket
DSXCONNECTOR_FILTER=                                          # Optional: bucket filter to apply
DSXCONNECTOR_ITEM_ACTION=nothing                              # nothing, move, move_tag, delete
DSXCONNECTOR_ITEM_ACTION_MOVE_METAINFO=""                     # If move or move_tag, where to move objects
```
### Deploy
```
docker compose --env-file sample.gcs.env -f docker-compose-google-cloud-storage-connector.yaml up -d
```
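Once up, the container's status and logs can be checked with standard compose commands (service name assumed to match the supplied compose file):

```
docker compose --env-file sample.gcs.env -f docker-compose-google-cloud-storage-connector.yaml ps
docker compose --env-file sample.gcs.env -f docker-compose-google-cloud-storage-connector.yaml logs -f
```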
## Required Settings
| Variable | Description |
|---|---|
| `DSXCONNECTOR_ASSET` | Root asset location (bucket, path, etc.). |
| `DSXCONNECTOR_FILTER` | Optional rsync-style include/exclude rules. |
| `DSXCONNECTOR_ITEM_ACTION` | Action taken when malicious files are detected (`nothing`, `delete`, `move`, `move_tag`). |
| `DSXCONNECTOR_ITEM_ACTION_MOVE_METAINFO` | If `move` or `move_tag`, destination for moved objects. |
### DSXCONNECTOR_ASSET
Defines the object store asset root to scan.
Example:

```
DSXCONNECTOR_ASSET=name_of_bucket_or_container
```
### DSXCONNECTOR_FILTER
Defines an rsync-like filter applied to files and folders, such as bucket prefixes or file-name patterns.
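The exact filter grammar isn't spelled out here; as a hedged illustration, a prefix-style value might look like the following (confirm the supported syntax for your connector version):

```
# Illustrative only: scan objects under the invoices/ prefix
DSXCONNECTOR_FILTER=invoices/
```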
### DSXCONNECTOR_ITEM_ACTION
Defines what happens to malicious files.
Common values:

- `nothing` – report only
- `move` – quarantine (moves the file)
- `move_tag` – quarantine and tag (moves the file and adds a metadata tag)
- `delete` – removes the file
If using `move` or `move_tag`, also set:
### DSXCONNECTOR_ITEM_ACTION_MOVE_METAINFO
Defines an object store resource and prefix to move quarantined files to.
Using our example above:

```
DSXCONNECTOR_ITEM_ACTION_MOVE_METAINFO=dsx-quarantine
```

Quarantined objects are moved to the `dsx-quarantine` prefix under the same bucket or container specified in `DSXCONNECTOR_ASSET`.
## Connector-specific Settings
### Google Cloud Authentication
| Variable | Description |
|---|---|
| `GOOGLE_APPLICATION_CREDENTIALS` | Path to the mounted service account JSON file, if somewhere other than the default. |
| `GOOGLE_CLOUD_PROJECT` | Optional project ID if not included in the credential file. |
## Advanced Settings
### DSX-Connect Authentication
| Variable | Description |
|---|---|
| `DSXCONNECTOR_AUTH__ENABLED` | `true` enables authenticated communication between the connector and DSX-Connect. |
| `DSXCONNECT_ENROLLMENT_TOKEN` | DSX-Connect's bootstrap enrollment token. |
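Enabling authenticated communication is then a matter of two env entries (the token value shown is a placeholder, not a real token):

```
DSXCONNECTOR_AUTH__ENABLED=true
DSXCONNECT_ENROLLMENT_TOKEN=<paste-enrollment-token-here>
```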
### TLS
If DSX-Connect Core is using TLS, set the `DSXCONNECTOR_DSX_CONNECT_URL` protocol to `https`:

```
DSXCONNECTOR_DSX_CONNECT_URL=https://dsx-connect-api:8586
```
## Monitor Settings
Monitoring enables on-access scanning when objects are created or modified.
### Google Notification via Pub/Sub
First, set up Pub/Sub notifications for the bucket you want to monitor (see Pub/Sub Setup).
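As a sketch of that setup (topic and subscription names here are illustrative assumptions), the gcloud CLI can wire bucket notifications to a subscription:

```
BUCKET=your-bucket              # bucket to monitor
TOPIC=gcs-events                # assumed topic name
SUB=dsx-gcs-events-sub          # assumed subscription name

gcloud pubsub topics create "$TOPIC"
gcloud storage buckets notifications create "gs://$BUCKET" --topic="$TOPIC"
gcloud pubsub subscriptions create "$SUB" --topic="$TOPIC"
```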
Next, set the project ID and subscription obtained from the Pub/Sub setup.
| Variable | Description |
|---|---|
| `DSXCONNECTOR_MONITOR` | Enable monitoring (`true` or `false`). |
| `GCS_PUBSUB_PROJECT_ID` | Project containing the Pub/Sub subscription receiving bucket notifications. |
| `GCS_PUBSUB_SUBSCRIPTION` | Pub/Sub subscription that receives bucket event notifications. |
| `GCS_PUBSUB_ENDPOINT` | Optional override for the Pub/Sub endpoint (useful for local emulators). |
Google's client SDK handles the Pub/Sub connection and message handling; under the covers, it calls the connector's `/webhook/event` endpoint.
### Webhook Alternative
You'd reach for the connector's `/webhook/event` path instead of native Pub/Sub in a few scenarios:

- Pub/Sub isn't an option (restricted project, org policy, private cloud, or you're already forwarding events through something else like Cloud Storage → Eventarc → Cloud Run).
- You already have middleware that enriches or filters events and can simply POST to the connector; switching to Pub/Sub would add new moving pieces.
- You want to keep control of retries/backoff or fan out to multiple systems before notifying dsx-connect.
- The connector runs where Pub/Sub access is awkward (air-gapped network segment, proxies, workload identity gaps), but you can still reach dsx-connect over HTTP/S.
- You plan to feed events from several sources beyond Cloud Storage (e.g., a centralized event hub), so hitting the webhook maintains a single integration pattern.
- You need custom authentication/validation in front of the connector; a small gateway/service can enforce that and call the webhook.

Pub/Sub remains the simplest path when it's available, but the webhook keeps things flexible if you've already standardized on HTTP callbacks or have compliance/runtime constraints around Pub/Sub.
For external callbacks into the connector, expose or tunnel the host port mapped to 8630 (compose default). Upstream systems should hit that public address. Internally, set `DSXCONNECTOR_CONNECTOR_URL` to the Docker-service URL (e.g., `http://google-cloud-storage-connector:8630`) so dsx-connect can reach the container.
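For middleware that forwards events itself, the call is a plain HTTP POST. The payload below mirrors fields a GCS object notification carries (`bucket`, `name`, `eventType`), but the exact schema the connector expects should be confirmed against its API documentation; host and fields here are illustrative:

```
curl -X POST "http://<public-host>:8630/webhook/event" \
  -H "Content-Type: application/json" \
  -d '{
        "bucket": "your-bucket",
        "name": "path/to/new-object.bin",
        "eventType": "OBJECT_FINALIZE"
      }'
```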