As you may remember from our most recent post in our series exploring the archive continuum, we examined the options and use cases available at the low-functionality end of the spectrum. Here, we shift our focus to archives with high functionality.
The very definition of “archive” varies greatly. So much so that in the first post in this series, we proposed that archives exist on a continuum based on data structure and the level of access required. At one end, you’ll find archives with low-to-no functionality. At the other, more sophisticated active archiving solutions provide considerably more access and much higher functionality.
After your organization implements a new patient accounting system, your staff still have to work down the accounts receivable (AR) from your legacy system. If you must continue to operate (and pay for) the legacy system you’ve just replaced in order to do so, you’re taking a significant hit to the return on your new investment. It’s a recurring hit, too, since it can take years to zero out your books. And by keeping legacy applications that have outlived vendor support, you increase your organization’s exposure to risk.
The terms data conversion and data migration are often used interchangeably. While this is incorrect, plenty of blogs and articles out there explain the difference between converting data (changing it from one format to another) and migrating data (permanently moving it from one location to another) and will set you straight.
Rather than add to the conversation about their definitions, we’re going to talk about why it’s not enough to talk about data conversion or data migration without also considering data archiving.
Every day, 2.5 quintillion bytes of data are created (yes, that number has 17 zeros in it!). More than 90% of all the data in the world was created in the last two years, and the pace of data creation only continues to accelerate. As good data stewards, we must take ownership of the data we create, harnessing it to solve problems and make systems more efficient.
It starts simply enough: Your organization meticulously matched platform capabilities to its needs and decided on a new EMR. You set the project schedule, took the system live and even optimized it to maximize user efficiency. Everything is perfect. Okay, as perfect as any large system implementation can be. But then it happens: you realize that throughout all the planning, capability matching and optimizing, you forgot to think about your legacy data.
While flipping through the “Show Daily” on the shuttle from the convention center late this afternoon, I read a piece by HIMSS President and CEO Hal Wolf calling for healthcare “champions to solve our global challenges.” It’s an obvious tie-in to this year’s HIMSS theme, Champions of Health Unite, but it’s also an uncannily good fit for the activity that’s been happening at MediQuant over the past two days.
With the vast amount of data your organization produces daily, properly caring for that data is critical to implementing information-based care. Data Stewardship is a subdomain, a discipline if you will, within information governance that is predicated on ensuring the accessibility of data assets. In short, successful Data Stewardship will keep your organization from being data-rich but information-poor.
It happens more often than you might think. A migration project is completed. Clinical legacy data is securely available in a new location. People begin to access the data, and bam, something doesn’t look right. Workflows are immediately interrupted, questions ensue, and tensions rise between users and IT.