Configuration Troubleshooting
Common Coordimap Agent Configuration Errors
This page covers the most common configuration mistakes we see when setting up the Coordimap agent.
Error: scope_id must be provided
Several crawlers now require scope_id. If it is missing, the crawler will fail during configuration or startup.
Typical fix:
- Kubernetes: use the cluster UID
- GCP: use the project number
- AWS: use the account ID
- PostgreSQL: use system_identifier
- MySQL or MariaDB: use server_uuid
- MongoDB: use a replica set or cluster identity
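As an illustrative sketch (the surrounding field layout is an assumption, not the agent's documented schema), datasource entries might carry their scope_id values like this, with a standard CLI lookup command for each value shown as a comment:

```yaml
# Hypothetical layout -- only the scope_id values and the lookup
# commands in the comments are the point here.
datasources:
  - data_source_id: k8s-prod
    # kubectl get namespace kube-system -o jsonpath='{.metadata.uid}'
    scope_id: 6f5f56e3-0123-4567-89ab-6c8f1e2a0cde
  - data_source_id: gcp-prod
    # gcloud projects describe <project-id> --format='value(projectNumber)'
    scope_id: "123456789012"
  - data_source_id: aws-prod
    # aws sts get-caller-identity --query Account --output text
    scope_id: "210987654321"
  - data_source_id: pg-main
    # psql -c "SELECT system_identifier FROM pg_control_system();"
    scope_id: "7216341925584148324"
  - data_source_id: mysql-main
    # mysql -e "SELECT @@server_uuid;"
    scope_id: 3f1c9c2e-5d6a-11ee-8c99-0242ac120002
```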
For the canonical scope_id reference, see the Shared Configuration Options page listed under Related Reading below.
Error: Flow Data Does Not Attach To Kubernetes Assets
This is usually an identity mismatch.
Common causes:
- the GCP external_mappings points to the wrong value
- the flows datasource uses the wrong Kubernetes cluster UID
- the Kubernetes crawler scope_id changed when the cluster did not
The fix is to make sure every integration that maps back to Kubernetes uses the same cluster UID as the Kubernetes crawler scope_id.
Find the cluster UID with:
```shell
kubectl get namespace kube-system -o jsonpath='{.metadata.uid}'
```

Error: Invalid external_mappings
The agent expects external_mappings entries to use this shape:
<mapping-key>@<mapping-value>

Examples:
europe-west1-my-gke-cluster@6f5f56e3-0123-4567-89ab-6c8f1e2a0cde
*@6f5f56e3-0123-4567-89ab-6c8f1e2a0cde
gke-node-pool-a@gcp-production

Practical rules:
- keep the mapping key exactly aligned with the value emitted by the upstream integration
- keep the mapping value stable
- when mapping to Kubernetes scope, use the Kubernetes cluster UID
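To make these rules concrete, here is a small, hypothetical validation sketch (the function names are illustrative, not part of the agent): it splits an entry on the @ separator and checks whether the mapping value has the shape of a Kubernetes cluster UID.

```python
import re

# Illustrative only: parses external_mappings entries of the form
# <mapping-key>@<mapping-value>. These helpers are not part of Coordimap.
ENTRY_RE = re.compile(r"^(?P<key>[^@]+)@(?P<value>[^@]+)$")
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def parse_mapping(entry: str) -> tuple[str, str]:
    """Split one external_mappings entry into (key, value) or raise."""
    m = ENTRY_RE.match(entry)
    if not m:
        raise ValueError(f"not in <mapping-key>@<mapping-value> form: {entry!r}")
    return m.group("key"), m.group("value")

def looks_like_cluster_uid(value: str) -> bool:
    """True when the mapping value has the shape of a Kubernetes cluster UID."""
    return bool(UUID_RE.match(value))

key, value = parse_mapping("*@6f5f56e3-0123-4567-89ab-6c8f1e2a0cde")
print(key, looks_like_cluster_uid(value))  # * True
```

A check like this catches a mapping that silently points at a cluster name instead of the cluster UID, which is the most common cause of flows that never attach.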
Error: Duplicate Assets After Recreating A Data Source
This usually means the new configuration used a different scope_id for the same upstream system.
Remember:
- data_source_id identifies the Coordimap connector record
- scope_id identifies the real upstream ownership boundary
If you recreate the data source in the UI but the upstream system is still the same, keep the same scope_id.
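A sketch of the distinction (the field layout is assumed for illustration):

```yaml
# Before: the original connector record.
- data_source_id: gcp-prod       # Coordimap connector record; may change
  scope_id: "123456789012"       # upstream ownership boundary; keep stable

# After recreating the data source: new record, same upstream system.
- data_source_id: gcp-prod-v2    # a new connector record is fine
  scope_id: "123456789012"       # unchanged, so no duplicate assets
```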
Error: Repo Examples Show id But Docs Show data_source_id
Some current agent repository examples still show datasource entries written with id.
The runtime model and current validation paths use data_source_id, which is why these docs standardize on data_source_id.
If you work from an older example file, verify which field your deployed version expects before rolling it into production.
Error: GCP Flow Logs Are Enabled But Nothing Shows Up
Check all of the following:
- the GCP datasource uses gcp_flows: "true"
- VPC Flow Logs are enabled in GCP
- metadata needed for mapping is included in the logs
- any Kubernetes-related external_mappings point to the correct Kubernetes cluster UID
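Assuming a layout like the one below (the exact schema depends on your agent version), a GCP datasource that passes this checklist might look like:

```yaml
# Hypothetical GCP datasource with flow collection enabled.
- data_source_id: gcp-prod
  scope_id: "123456789012"       # GCP project number
  gcp_flows: "true"
  external_mappings:
    # maps GKE traffic back to the Kubernetes crawler's cluster UID
    - europe-west1-my-gke-cluster@6f5f56e3-0123-4567-89ab-6c8f1e2a0cde
```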
Error: eBPF flows Datasource Fails In Kubernetes Mode
Check all of the following:
- deployedAt is set to kubernetes
- external_mappings is present
- the mapping value points to the Kubernetes cluster UID
- the host environment supports the required eBPF tooling and kernel capabilities
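As a hedged sketch (the field layout is assumed), an eBPF flows datasource in Kubernetes mode that satisfies this checklist could look like:

```yaml
# Hypothetical eBPF flows datasource running in Kubernetes mode.
- data_source_id: ebpf-flows
  deployedAt: kubernetes
  external_mappings:
    # the value must be the Kubernetes cluster UID used as the
    # Kubernetes crawler's scope_id
    - "*@6f5f56e3-0123-4567-89ab-6c8f1e2a0cde"
```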
Related Reading
Shared Configuration Options
Learn how Coordimap shared configuration works, including data_source_id, scope_id, and crawl_interval for stable asset identity across cloud and database crawlers.
Configuring An AWS Data Source For Coordimap
Configure Coordimap AWS data sources with the correct scope_id, IAM credentials, crawl intervals, and optional VPC Flow Logs.