Data validation reviews your data content against one or more checks. In data management systems, data can be validated in a relatively straightforward way against data types, code lists or thresholds. Automatic data validation or data cleansing workflows typically cover these uncomplicated cases, enabling users to focus on the important part of data management – extracting value from the data and applying logic to the results of the data uploaded. For example, validation processes can be automated to upload and review data sets of different sizes from multiple locations.
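The three straightforward checks mentioned above – data type, code list and threshold – can be sketched in a few lines of Python. The field names, approved currency list and bounds below are illustrative assumptions, not rules from any particular system.

```python
def validate_record(record):
    """Return a list of validation errors for one record (empty list = valid)."""
    errors = []
    amount = record.get("amount")
    # Data type check: the amount must be numeric
    if not isinstance(amount, (int, float)):
        errors.append("amount: wrong data type")
    # Threshold check: only meaningful once the type check has passed
    elif not 0 <= amount <= 1_000_000:
        errors.append("amount: outside threshold")
    # Code list check: the currency must come from an approved list
    if record.get("currency") not in {"EUR", "USD", "GBP"}:
        errors.append("currency: not in code list")
    return errors
```

Returning a list of errors rather than stopping at the first failure lets a workflow report every problem with a record in one pass, which suits the automated, feedback-driven approach described here.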
Automation works best when it takes away time-consuming tasks so that staff can do the job they should be doing. Instead of chasing up the delivery of data daily or manually creating data feeds, staff can work on resolving data quality problems that have a much more important effect on the end data set. Automatic data validation completes time-consuming tasks and prepares data for use, providing staff with more detail and information more quickly. However, you cannot fit all data into the same automated validation checks: in the world of big data there is no one-size-fits-all data validation strategy.
Advanced Data Validation
Advanced validation in data management includes checks on data coverage, data consistency and data stability. Notifications or alerts can be set up to inform data operations of the validation results.
Data coverage validation examines the data itself: if a data set has many blanks or otherwise breaks the coverage rules, it can be stopped before it reaches the golden copy.
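A coverage check of this kind can be sketched as a blank-ratio test over the required fields. The 5% limit below is an assumed business rule, chosen for illustration.

```python
def passes_coverage(records, required_fields, max_blank_ratio=0.05):
    """Return True when the proportion of blank required fields is acceptable."""
    total = len(records) * len(required_fields)
    # Count required fields that are missing or empty across all records
    blanks = sum(
        1
        for record in records
        for field in required_fields
        if record.get(field) in (None, "")
    )
    return total > 0 and blanks / total <= max_blank_ratio
```

A data set that fails this check would be held back from the golden copy and routed to data operations instead.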
Data consistency validation is based on the interdependencies between fields within the data, and it is achieved by comparing the meaning of related data items. The underlying principle is that if a specified field is populated, then certain related fields should be populated too; if they are not, the record fails the validation rules.
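An interdependency rule of the form "if this field is populated, that field must be too" can be expressed as a table of field pairs. The pairs below are hypothetical examples of such dependencies, not rules from the source.

```python
# Each pair means: when the first field is populated,
# the second field must be populated as well.
DEPENDENCIES = [
    ("maturity_date", "issue_date"),      # hypothetical: a maturity implies an issue date
    ("discount_code", "discount_rate"),   # hypothetical: a code implies a rate
]

def consistency_errors(record):
    """Return a list of dependency violations for one record."""
    errors = []
    for present, required in DEPENDENCIES:
        if record.get(present) and not record.get(required):
            errors.append(f"{required} must be set when {present} is set")
    return errors
```

Keeping the dependencies in data rather than in code makes it easy to add rules as new business cases emerge, which fits the step-by-step, self-improving approach described later.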
If the data ingested by the system is considered stable – that is, stable volumes of data are being ingested – then any major change at an aggregated level or at the level of a single data item breaks the data stability rules. For example, if 100,000 data records are expected but the system instead receives 5,000,000 records, this would fail data stability validation.
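A volume-based stability check matching the example above can be sketched as a deviation test against the expected record count. The 50% tolerance is an assumed parameter; in practice it would come from the business case.

```python
def is_stable(expected_count, received_count, tolerance=0.5):
    """Return True when the received volume is within the allowed
    deviation (as a fraction of the expected volume)."""
    if expected_count <= 0:
        return False
    deviation = abs(received_count - expected_count) / expected_count
    return deviation <= tolerance
```

With these numbers, the text's example of receiving 5,000,000 records against an expectation of 100,000 would fail the check and trigger an alert to data operations.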
Data coverage, data consistency and data stability rules are created based on the business case. The business case often centres on the data changes or file edits that the customer is doing manually.
The Self-improving System
The goal of data validation is a golden copy of the data. The solution should ensure that all data is validated in the same way, protecting your data by producing consistent validation results. Transparency is essential so that users get real-time feedback and can react quickly to resolve data issues.
Data validation can be visualised as a circular process with consecutive rounds of validation and feedback. Validation is most useful when users can extract all the data quality errors and use them to perfect the system further. After resolving the highest-priority or most significant errors, additional validations can be added so that further errors can be addressed – a self-improving system.
Why is Data Validation Important?
Receiving multiple data feeds from multiple data sources can result in metadata changes and subsequent data loss. If there is a specifically defined list of systems or data formats and that definition changes between data sources, data validation prevents the loss of data.
Duplicates, missing data or contradictory data can result in an inconsistent picture of the data landscape. If the trustworthiness of the data is questioned, then any analysis of the data becomes less relevant or is flawed, which may have operational consequences. If you are missing data from a data source, or if the data source is incorrect, then anything you do afterwards with the data will be flawed.
Data validation is key to downstream data analysis and decision making. If you do not have good quality data, then it is very likely to affect the decisions you make based on that data. For example, in central banking, if transaction data is not validated then calculated rates can be wrong or derived statistics misleading. Good quality data allows analytics to produce results that can be relied on for decision making.
The Keys to Data Validation
There are three keys to data validation.
Timely feedback – getting information to users as soon as possible. Data validation processes should be transparent and give feedback on the results to allow for quick adjustments.
Focused validation – start with your major data quality problem first and put in additional validation checks once it is resolved. By eliminating your top data quality problem, the next most important data quality problem can be tackled according to your list of priorities.
Understanding your data and your end result – sometimes it is difficult to understand what your biggest data problem is, and it is not always possible to understand the scope of your data and what you want to achieve within a data management process. Data validation helps you understand your data by applying validations step by step and rerunning the processes. Knowing the end result will help output data that is stable, useful and relevant.
Finworks uses a combination of automatic workflow templates and advanced validation rules based on the business case and data types. We are adept at analysing, documenting and implementing data validation use cases. Finworks data validation helps prevent data loss because every single data item is traceable to the source data, and it is possible to track back to it during each stage of the workflow. Our data validation not only helps you understand when events happen that affect data integrity but also helps you plan around them using data quality messages. Finworks has found that good quality data helps our customers make good decisions.