GA4

This data source is in beta

This data source refers to version 4 of Google Analytics (GA4). To use the previous version, Universal Analytics (UA), see this documentation.

Adding the data source

To add Google Analytics as a data source on the Kondado platform, follow the steps below:

1) On the Kondado platform, go to the add new sources page and select the GA4 data source;

2) Click AUTHORIZE

3) Select the account you will use

4) On the next screen, check ALL of the required permissions and click Continue

5) You will be redirected back to Kondado; just give your data source a name and click SAVE

Pipelines

Custom report (core)

The GA4 Custom Report pipeline gives you full control over the format of your pipeline data.

First, choose the Property (GA4) the report will refer to. This choice is required because the report lets you select custom metrics and dimensions from that property, which is also why it is not possible to get data from multiple accounts in the same pipeline.

Once the Property parameter is selected, you must select the metrics and dimensions. Dimensions can be understood as the level of detail of the information. For example, date is one dimension and page is another. If you select both of these dimensions (date and page), you will get information broken down by page and date: “on this day, on this page we have X”. Every report must have at least one dimension.

Metrics are the information reported about the dimensions. Going back to the previous example, for a given day and a given page you might want, for example, the number of users and the number of sessions; those are the metrics. Metrics are not mandatory: it is possible to create pipelines with dimensions only.
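For context, the sketch below shows how a choice of dimensions and metrics maps onto the underlying GA4 Data API that this pipeline reads from. The property ID, date range, dimensions (date, pagePath) and metrics (activeUsers, sessions) are placeholder examples only, and this is not Kondado's implementation; you do not need to write any code to use the pipeline, Kondado performs the query for you.

# Minimal sketch (not Kondado's implementation): a GA4 Data API report with
# dimensions "date" and "pagePath" and metrics "activeUsers" and "sessions".
# The property ID, date range and credentials setup are illustrative placeholders.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # assumes Google credentials are already configured

request = RunReportRequest(
    property="properties/123456789",  # the GA4 Property the report refers to
    dimensions=[Dimension(name="date"), Dimension(name="pagePath")],
    metrics=[Metric(name="activeUsers"), Metric(name="sessions")],
    date_ranges=[DateRange(start_date="2024-01-01", end_date="today")],
)

response = client.run_report(request)
for row in response.rows:
    # one row per date/page combination, with the metric values for that combination
    print([d.value for d in row.dimension_values],
          [m.value for m in row.metric_values])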

The dimensions you choose determine whether your pipeline is incremental or integral. If your pipeline is incremental, you will also need to choose an initial savepoint and an update window. The dimensions that make a pipeline incremental are date, dateHour, and dateHourMinute.

The savepoint defines the initial reading date. After a first read, the savepoint is updated to today's date. In this way, the data from the past (from the savepoint to today) remains in your destination, and in the next execution of your pipeline only new and updated data is fetched, avoiding re-reading.

The Update Window is a parameter that defines how historical data is updated. Depending on your metric's attribution model, data from a few days ago may change. For example, if a user saw a given page a week ago and only completes an action (a goal/conversion) today, the conversion can be attributed to a date one week in the past, changing data that was already read. The Update Window parameter solves this by always starting each read a few days before the current savepoint. On the first run of a pipeline, data is read from the savepoint up to today's date and stored at the destination. In the next execution, data is read from where the previous execution stopped, minus the number of days defined by the update window.
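As a rough illustration of the idea only (not Kondado's actual implementation), the sketch below derives the read window from a savepoint and an assumed 7-day update window:

from datetime import date, timedelta

def read_window(savepoint: date, update_window_days: int, today: date):
    # Re-read a few days before the savepoint so late attribution changes are picked up,
    # then read everything up to today.
    start = savepoint - timedelta(days=update_window_days)
    return start, today

# Example: the previous run left the savepoint at 2024-06-10 and the update window is 7 days
start, end = read_window(date(2024, 6, 10), 7, date(2024, 6, 15))
print(start, end)  # 2024-06-03 2024-06-15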

The table that will be created in your database will be defined by the combination of metrics and dimensions you choose and will have a format similar to this:

Field          Type
dimension_x    text
dimension_y    text
dimension_z    text
metric_x       float
metric_y       float
metric_z       float

To learn which metrics and dimensions you can use, see our dimensions and metrics reference.

Once all your parameters are configured, just click on NEXT to continue creating your pipeline.

1. Our dimensions and metrics reference is prepared based on the GA4 documentation and may vary slightly when implemented in the pipelines

2. While creating or editing pipelines, after each selection the list of available options is updated to reflect which combinations can or cannot be used, based on the unsupported fields. Even with this functionality, there can still be false positives, where the documentation indicates that fields are supported when they are not. You can also use this Google tool to check compatibility; however, even this tool can produce false positives, so the only way to know with 100% certainty whether the fields are compatible is to actually create the pipeline

3. Metrics that count distinct values (for example, “distinct users”) behave differently with a “day” dimension than with a “month” dimension. Say the same user visits your page every day for an entire month. In a pipeline that reports distinct users per day, each row will show a value of 1; adding these values up to get the number of unique users in that month would give 30, which is wrong, since it was the same single user who accessed your page every day. In this case, the right approach is to use the unique users metric with the month dimension. For the values obtained to be close to the tool, it is important that the report is created at the same granularity as the report in the tool (see the sketch after these notes).

4. For data to be reported for custom events and parameters sent (for example, event=purchase and parameter=purchase_value), the corresponding custom dimensions and metrics must already exist in the property

5. Segments are not supported

6. The quotas applied by the API limit the number of calls per hour and per day, which can make reading data in pipelines slower

7. When creating pipelines, if the quota has been consumed, custom fields will not be returned

8. Due to API limitations, a maximum of 10 metrics and 9 dimensions can be selected per pipeline

9. Due to API limitations, metrics that contain special characters cannot be selected
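To put concrete numbers on note 3 (a made-up illustration, not actual GA4 output):

# Made-up illustration of note 3: the same single user visits the page every day for 30 days.
daily_distinct_users = [1] * 30     # a per-day report shows 1 distinct user on each day
print(sum(daily_distinct_users))    # 30 -> summing daily distinct counts overstates the month
monthly_distinct_users = 1          # a per-month report correctly shows 1 distinct user
print(monthly_distinct_users)       # 1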