Welcome to the ActivePrime Blog

In our blog, you’ll find helpful and informative posts dedicated to improving CRM performance and usability. Our topics include deep dives into technical aspects of CRMs such as searching and data quality, along with insightful posts about user productivity and adoption.

Four Ways Dirty Data Is Hurting Your Marketing Results

Like it or not, you’re spending marketing budget on programs that bring poor or no results.

Research shows that on average, 12% of an organization’s annual income is misspent due to bad contact data.

And it’s not just the money. Dirty data, such as duplicate records, also produces inaccurate lead scores: when engagement is split between duplicate leads, neither record accumulates a high enough score to move further down the sales pipeline.
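To make the mechanics concrete, here is a minimal sketch in Python of how that score splitting plays out. The field names, point values, and 100-point qualification threshold are illustrative assumptions, not taken from any particular CRM or scoring model:

```python
# Hypothetical illustration of score splitting between duplicate leads.
# Field names, point values, and the 100-point threshold are assumptions.

MQL_THRESHOLD = 100

# The same person exists twice in the CRM, so their activity is split
# across two records instead of accumulating on one.
duplicate_leads = [
    {"email": "jane.doe@example.com", "activities": ["webinar", "email_click"]},
    {"email": "jane.doe@examplecorp.com", "activities": ["whitepaper", "demo_request"]},
]

ACTIVITY_POINTS = {"webinar": 40, "email_click": 10, "whitepaper": 30, "demo_request": 50}

for lead in duplicate_leads:
    score = sum(ACTIVITY_POINTS[a] for a in lead["activities"])
    # Each record scores 50 or 80: neither reaches 100, so neither is routed to sales.
    print(lead["email"], score, "MQL" if score >= MQL_THRESHOLD else "not qualified")

merged_score = sum(ACTIVITY_POINTS[a] for lead in duplicate_leads for a in lead["activities"])
# The merged record scores 130 and would have qualified.
print("merged:", merged_score, "MQL" if merged_score >= MQL_THRESHOLD else "not qualified")
```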

Cleaning Your Data Manually – A Data Quality Initiative Doomed to Failure

Anyone whose company depends on a database will agree that data quality is critically important. Sadly, many of these same people, when allocating budgets, are not ready to invest in data quality by cleaning the database to correct or remove out-of-date, incomplete, and duplicate records. There almost always seems to be a higher-priority project to spend against. Even worse, dirty versus clean data may be a simple concept, but it is not a simple problem. In an earlier post, we presented the reasons why the “write an in-house script” quick, easy, cheap data fix is neither quick, nor easy, nor cheap. Today we’d like to talk about another perennial favorite data fix: cleaning your data manually.

Writing Custom Scripts to Clean Duplicates is a Bad Idea

Duplicate records in your CRM are among the most likely sources of data corruption, leading to inaccurate data and misleading reports. Duplicate data can be cleaned, and then prevented. But that doesn’t seem to be the reality for many companies. The breakdown starts when processes to prevent the introduction of duplicates into the database are flawed, inconsistently applied, or absent. Human error is all too frequently the main culprit. Most software lacks the sophistication to detect any but the most blatant cases of duplication, and can be too cumbersome or complex for any but the savviest of users to run. Companies deal with this problem in different ways. Unfortunately, in an effort to save time or money, and often because they underestimate the actual complexity of the problem, companies adopt data quality fixes that are not effective.
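As an illustration of why naive detection catches only the most blatant cases, the short Python sketch below compares exact matching with a light normalization step. The sample records and the normalization rules are assumptions for illustration, not a recommended matching algorithm:

```python
# Hypothetical sketch: why exact-match duplicate detection misses near-duplicates.
# The sample records and normalization rules are illustrative assumptions only.
import re

records = [
    {"name": "Jonathan Smith", "company": "ACME, Inc.", "phone": "(555) 010-1234"},
    {"name": "Jon Smith",      "company": "Acme Inc",   "phone": "555-010-1234"},
]

# An exact comparison sees two different records, so a naive script keeps both.
print("exact match:", records[0] == records[1])  # False

def normalize(rec):
    """Crude normalization: lowercase, strip punctuation and legal suffixes, digits-only phone."""
    company = re.sub(r"[^a-z ]", "", rec["company"].lower()).replace("inc", "").strip()
    phone = re.sub(r"\D", "", rec["phone"])
    last_name = rec["name"].split()[-1].lower()
    return (last_name, company, phone)

# After normalization the two records collide on the same key, flagging a likely duplicate,
# though even this cannot handle nicknames, typos, or reordered fields on its own.
print("normalized match:", normalize(records[0]) == normalize(records[1]))  # True
```

Rules like these have to be extended for every new variation that appears in real data, which is one reason one-off custom scripts tend to underestimate the problem.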