Kondado

Creating the data source

Kondado's connector allows you to access your objects and executions (quantities and dates) for your Kondado models and integrations — all completely free: the records integrated using this connector do not count toward your plan limit.

It's especially useful for tracking the number of integrated records directly in your database, helping you monitor your plan usage. It also enables the data integrated and modeled by Kondado to interact with the rest of your data ecosystem. For example, if you use our integrations to send data to a staging database before sending it to a production one, you can query the log tables to check when an integration or model was executed and trigger your internal processes accordingly.
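The trigger pattern described above can be sketched with a small query against the pipeline log table. The columns (pipeline_id, createdat) follow the schema documented in this article, but SQLite and the table name "pipeline_logs" are illustrative assumptions — point the connection at whatever destination your Kondado connector actually writes to.

```python
# Sketch: check whether a Kondado integration has completed a run since a
# given time, before triggering a downstream process. SQLite stands in for
# the real destination database; the table name "pipeline_logs" is an
# illustrative assumption -- use the name your destination actually has.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE pipeline_logs (pipeline_id REAL, row_count REAL, createdat TEXT)"
)
conn.executemany(
    "INSERT INTO pipeline_logs VALUES (?, ?, ?)",
    [
        (42, 1500, "2025-01-10 08:00:00"),
        (42, 120, "2025-01-11 08:00:00"),
    ],
)

def last_run(conn, pipeline_id):
    """Return the createdat (UTC) of the most recent execution of an integration."""
    row = conn.execute(
        "SELECT MAX(createdat) FROM pipeline_logs WHERE pipeline_id = ?",
        (pipeline_id,),
    ).fetchone()
    return row[0]

def ran_since(conn, pipeline_id, cutoff):
    """True if the integration finished a run at or after the cutoff timestamp."""
    latest = last_run(conn, pipeline_id)
    return latest is not None and latest >= cutoff

print(last_run(conn, 42))                          # 2025-01-11 08:00:00
print(ran_since(conn, 42, "2025-01-11 00:00:00"))  # True
```

A scheduler (cron, Airflow, etc.) could call ran_since before kicking off the internal process that depends on the staging data.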

Adding the connector

Requirements
The generated token inherits the same permissions as the user who creates it. Therefore, the user generating the token must have access to the accounts and workspaces from which they want to retrieve logs.

Instructions
To add the Kondado connector, you first need to generate a Kondado token — see how to do that here. Be sure to save both the token and the access key.
Then simply name your connector, paste in the previously obtained key and token, and click SAVE!

Pipelines

Summary

Relationship chart


Model Logs

id (float): Log record ID

model_id (float): Model ID. You can get it from the URL when accessing the model on the Kondado platform: app.kondado.com.br/models/[your_model_id]

client_id (float): Kondado account or workspace ID to which the model belongs

row_count (float): Number of records in the final table generated by the model

execution_time_seconds (float): Time in seconds the model took to execute, excluding internal Kondado operations (e.g., connection setup)

updatedat (timestamp): Log update timestamp (UTC)

createdat (timestamp): Log creation timestamp (UTC). The log is created after the model execution finishes

model_name (text): Model name at the time the log was generated

Pipeline Logs

Logs older than 90 days are not available

id (float): Log record ID

pipeline_id (float): Integration ID. You can get it from the URL when accessing the integration on the Kondado platform: app.kondado.com.br/pipelines/[your_integration_id]

pipeline_name (text): Integration name

client_id (float): Kondado account or workspace ID to which the integration belongs

row_count (float): Number of records inserted by the execution

has_deltas (boolean): Whether the integration wrote records to the deltas table

execution_time_seconds (float): Time in seconds the integration took to run, excluding internal Kondado operations (e.g., connection setup)

updatedat (timestamp): Log update timestamp (UTC)

createdat (timestamp): Log creation timestamp (UTC). The log is created after the integration finishes, so this field can be used similarly to the record count graph shown in the Kondado platform

initial_savepoint (text): For incremental integrations, stores the initial savepoint value (populated from September 2020 onward)

new_savepoint (text): For incremental integrations, stores the final savepoint value (populated from September 2020 onward)

external_ip (text): External IP of the server used to execute the integration

is_billable (boolean): Whether the integrated records counted against your plan (TRUE) or were free rows (FALSE)

raw_row_count (float): Number of rows inserted by the execution. Populated starting June 12, 2025; for earlier dates, row_count is equivalent

mb_estimate (float): Estimated size in MB of the data inserted by the execution
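Since is_billable marks which rows counted against your plan, plan usage for a period can be sketched by summing row_count over billable executions. As before, SQLite and the table name "pipeline_logs" are stand-ins for your actual destination.

```python
# Sketch: estimate plan usage for a month by summing row_count over billable
# executions. SQLite and the table name "pipeline_logs" are illustrative
# assumptions -- point this at the destination where the connector writes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE pipeline_logs "
    "(pipeline_id REAL, row_count REAL, is_billable INTEGER, createdat TEXT)"
)
conn.executemany(
    "INSERT INTO pipeline_logs VALUES (?, ?, ?, ?)",
    [
        (42, 1000, 1, "2025-01-10 08:00:00"),  # counted against the plan
        (42, 500, 0, "2025-01-11 08:00:00"),   # free rows (e.g. this connector)
        (43, 250, 1, "2025-01-12 09:30:00"),   # counted against the plan
    ],
)

def billable_rows(conn, month):
    """Total rows that counted against the plan in a YYYY-MM month (UTC)."""
    row = conn.execute(
        "SELECT COALESCE(SUM(row_count), 0) FROM pipeline_logs "
        "WHERE is_billable = 1 AND createdat LIKE ?",
        (month + "%",),
    ).fetchone()
    return row[0]

print(billable_rows(conn, "2025-01"))  # 1250.0
```

Comparing this total against your plan limit gives an early warning before an integration pushes you over it.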

Frequently asked questions

What is the Kondado data source?
The Kondado data source lets you replicate your own Kondado workspace data — such as accounts, integrations, executions, row consumption, and event logs — into a destination of your choice. It is useful for monitoring usage, building internal dashboards, and auditing pipeline runs.
What credentials do I need to set up the Kondado data source?
You need an API token and access key generated inside your Kondado account. Paste them into the Kondado source form, give the source a name, and save it.
What kind of tables are available?
The source exposes tables describing your account configuration and pipeline activity, including integrations metadata and execution history. With this you can build a dashboard to track success rates, row consumption, and execution durations.
Where can I send Kondado workspace data?
You can load it into any of the supported destinations — Power BI, Looker Studio, Google Sheets, Excel, BigQuery, PostgreSQL, MySQL, SQL Server, Redshift, or S3. See destinations for the full list.
How often does the data refresh?
Pipelines run at the frequency you choose. Each run reads the latest workspace activity from the Kondado API and updates the tables in your destination.
Is this the same as connecting an external app to Kondado?
No. This source replicates Kondado's own usage and metadata for analytics. To replicate data from external systems (e-commerce, CRM, ads), use the corresponding data source for that system from the catalog.

Written by · Published 2023-07-19 · Updated 2026-04-26