BigQuery (Service Account) (destination)

BigQuery is Google's big data solution for large-scale data storage and analysis. In this article, we show how to register a BigQuery database as a destination on Kondado's platform, so you can centralize data from all your company's tools.

See the tutorials on connecting BigQuery to your visualization tool:
Looker Studio
Power BI

 

This tutorial shows how to add BigQuery using a Service Account. If you prefer to authenticate with your email account, use this tutorial instead.

1) Log in to your Google Cloud account

2) Click on this link to access the Service Accounts section

3) Once in the Service Accounts section, click on “CREATE SERVICE ACCOUNT”

4) In the first step, fill in a name for your Service Account (for example “kondado bigquery”) and click on “CREATE”

5) In the second step of the creation process, select the Roles “BigQuery Job User”, “BigQuery User”, and “BigQuery Data Editor”, then click “CONTINUE”

6) Now just click on “DONE” to finish the creation

7) Once created, you will be directed to a list of all active service accounts. Locate the one you just created and, under the three vertical dots on the right, click on “Create key”

8) In the dialog, select the type “JSON” and then click on “CREATE”

9) After clicking create, the key will be downloaded to your computer. Open the downloaded file in a text editor to check its contents.
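The downloaded key is a JSON document; it will look roughly like this (every value below is a placeholder — yours will contain real credentials):

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "kondado-bigquery@your-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```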

10) In the Google Cloud console, access the BigQuery page and take note of the project ID and dataset ID of your destination

11) Log in to the Kondado platform

12) Navigate to the add destinations page, click “New Destination”, and select the BigQuery Service Account destination

13) Fill in your database information as shown below:

  • In Project ID and Dataset ID paste the values obtained in step (10)
  • In JSON Credential, copy and paste ALL the contents of the file downloaded in step (9), without any changes
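Before pasting the JSON Credential, you can sanity-check that the downloaded file is a complete service-account key. A minimal sketch in Python — the required field names are standard for Google service-account JSON keys, but the filename is hypothetical:

```python
import json

# Fields present in every Google service-account JSON key
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def missing_key_fields(key: dict) -> set:
    """Return the set of required fields absent from the key."""
    return REQUIRED_FIELDS - key.keys()

# Example usage with a hypothetical filename:
# with open("kondado-bigquery-key.json") as f:
#     key = json.load(f)
# assert not missing_key_fields(key), "incomplete key file"
```

If the set returned is non-empty, re-download the key (steps 7–9) rather than editing the file by hand.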

ATTENTION: Make sure that the Dataset ID DOES NOT contain the project ID separated by “:”. For example, “id_project:id_dataset” is wrong. Enter the Project ID and Dataset ID separately, each in its own field.
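If you copied a combined identifier such as “id_project:id_dataset” from somewhere, split it before filling in the fields. A small illustrative sketch (the function name is hypothetical):

```python
def split_dataset_ref(ref: str):
    """Split an 'id_project:id_dataset' reference into its two parts;
    a plain dataset ID is returned with no project."""
    if ":" in ref:
        project_id, dataset_id = ref.split(":", 1)
        return project_id, dataset_id
    return None, ref

# split_dataset_ref("id_project:id_dataset") -> ("id_project", "id_dataset")
```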

Now just save, and your destination will be ready to receive its first pipelines!