
This document covers uploading data, detecting changes, and running quality checks in Domo. It walks through the available update methods (full replace, append, and merge) so you can manage your data effectively, and shows how to monitor data quality and handle schema changes efficiently.
To upload data in Domo, open the specific dataset or connector and choose an update method: full replace, append, or merge. A full replace overwrites the entire table on every update, rewriting every row each time.

An append adds the selected data to the existing table on each update. For example, if your selection contains 50 rows, each run adds those 50 rows, so the total row count grows incrementally.

The merge option combines incoming data with the existing table based on a unique identifier, or merge key: if a record already exists it is updated, and if it is new it is appended to the dataset.
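To make the three update methods concrete, here is a minimal pandas sketch (a conceptual illustration, not Domo's internal implementation) showing how a full replace, an append, and a merge each treat an incoming batch; the table, columns, and the `id` merge key are hypothetical.

```python
import pandas as pd

# Existing table in the dataset and a new batch arriving on this update.
existing = pd.DataFrame({"id": [1, 2, 3], "amount": [10, 20, 30]})
incoming = pd.DataFrame({"id": [2, 4], "amount": [25, 40]})

# Full replace: the incoming batch becomes the entire table.
replaced = incoming.copy()

# Append: the incoming rows are added, so the row count grows on every run.
appended = pd.concat([existing, incoming], ignore_index=True)

# Merge (upsert): rows whose key already exists are updated, new keys are appended.
merged = (
    pd.concat([existing, incoming])
    .drop_duplicates(subset="id", keep="last")
    .sort_values("id")
    .reset_index(drop=True)
)

print(replaced)   # 2 rows: ids 2 and 4
print(appended)   # 5 rows: ids 1, 2, 3, 2, 4
print(merged)     # 4 rows: ids 1, 2 (updated), 3, 4
```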

Once data is loaded into Domo, use the statistical data profiler to identify potential data quality issues. The profiler lets you examine data distributions and the percentage of null or missing values, and capture snapshots for drift detection, enabling continuous monitoring of how the data changes over time.
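The profiler itself is built into Domo, but the checks it performs are easy to reason about. The pandas sketch below, with hypothetical column names and thresholds, shows the kind of computation involved: per-column null percentages and basic distribution statistics, plus a snapshot comparison that flags drift when null rates move too far between runs.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize each column: null percentage plus basic distribution stats."""
    return pd.DataFrame({
        "null_pct": df.isna().mean() * 100,
        "mean": df.mean(numeric_only=True),
        "std": df.std(numeric_only=True),
    })

def drift(baseline: pd.DataFrame, current: pd.DataFrame, threshold: float = 10.0) -> pd.Series:
    """Flag columns whose null percentage moved more than `threshold` points
    between a baseline snapshot and the current profile."""
    delta = (current["null_pct"] - baseline["null_pct"]).abs()
    return delta[delta > threshold]

# Hypothetical example: a snapshot taken yesterday vs. today's load.
yesterday = pd.DataFrame({"amount": [10, 20, 30, None]})
today = pd.DataFrame({"amount": [None, None, 15, None]})

print(drift(profile(yesterday), profile(today)))  # "amount" null rate jumped 25% -> 75%
```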

When you use a Domo connector, schema changes are managed automatically. If cloud integration is configured, data freshness checks update the schema to align with your cloud data warehouse's information schema. Workbench also automates change data capture, uploading only the data that has changed since the last run unless you specify otherwise.
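Workbench handles change data capture for you, but the underlying idea is the familiar watermark pattern. The sketch below (a simplified illustration, not Workbench's internal mechanism, with hypothetical column names) shows one way to select only the rows modified since the last run.

```python
from datetime import datetime, timezone

import pandas as pd

# Remember when the last upload ran and only extract rows modified after that point.
last_run = datetime(2024, 1, 1, tzinfo=timezone.utc)

source = pd.DataFrame({
    "id": [1, 2, 3],
    "modified_at": pd.to_datetime(["2023-12-30", "2024-01-05", "2024-01-07"], utc=True),
})

changed = source[source["modified_at"] > last_run]
print(changed)  # only ids 2 and 3 are uploaded on this run

# After a successful upload, advance the watermark for the next run.
last_run = datetime.now(timezone.utc)
```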

To have Workbench disregard certain metadata, specify it in the additional settings. Workbench also offers flexibility in how it handles null values and schema changes coming from on-premises systems.

To configure event-driven data flow updates, open the data flow's settings and choose which datasets should trigger a run. You can select individual inputs or use a pre-built trigger that waits until all input datasets have updated before running the full data flow.
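Domo configures these triggers in the data flow settings rather than in code, but the "all inputs updated" condition amounts to a simple check. The sketch below, with hypothetical dataset names and timestamps, illustrates the logic.

```python
from datetime import datetime, timezone

# Hypothetical last-update times for the data flow's inputs
# and the time the data flow itself last ran.
inputs_last_updated = {
    "sales": datetime(2024, 1, 10, 6, 0, tzinfo=timezone.utc),
    "customers": datetime(2024, 1, 10, 7, 30, tzinfo=timezone.utc),
}
flow_last_ran = datetime(2024, 1, 10, 5, 0, tzinfo=timezone.utc)

def all_inputs_updated() -> bool:
    """True only when every input dataset has refreshed since the last run,
    mirroring the pre-built 'all inputs updated' trigger."""
    return all(ts > flow_last_ran for ts in inputs_last_updated.values())

if all_inputs_updated():
    print("Run the data flow")  # both inputs are newer than the last run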

Orchestration can be customized to the specific needs of your pipeline: if certain datasets should update the data flow when they change, those runs can be triggered automatically. With cloud integration, Domo helps establish a staging location where data lands before it is loaded into a table, and that staging area is visible within Snowflake or other cloud data warehouses.
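Cloud integration provisions this staging location for you, but the underlying stage-then-load pattern looks roughly like the following when expressed directly against Snowflake with the snowflake-connector-python library; the connection details, stage, file, and table names are all hypothetical.

```python
import snowflake.connector

# Hypothetical connection details and object names.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="my_wh", database="my_db", schema="public",
)
cur = conn.cursor()

# 1. Land the file in a named stage (the staging location).
cur.execute("CREATE STAGE IF NOT EXISTS domo_stage")
cur.execute("PUT file:///tmp/orders.csv @domo_stage")

# 2. Load from the stage into the target table, where it becomes
#    visible as a normal Snowflake table. (PUT gzips the file by default.)
cur.execute("""
    COPY INTO orders
    FROM @domo_stage/orders.csv.gz
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")
```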

Domo's features for staging, change detection, and quality checks ensure your data pipelines are robust and maintain the desired quality.
