Geospatial data management best practices
The single most important consideration in a data management strategy is a deep understanding of the use cases the data is intended to support. Data ingestion workflows need to eliminate bottlenecks in write performance. Geospatial transformation jobs need access to powerful computational resources and the ability to cache large amounts of data temporarily in memory. Analytics and visualization workloads require fast search and retrieval of geospatial data. These core disciplines of geospatial data management have benefited from decades of excellent work by the community, which has driven AWS to create pathways to implement these best practices in the cloud.
Data – it’s about both quantity and quality
A long-standing anti-pattern of data management is to rely primarily on folder structures or table names to infer meaning about datasets. Naming standards are a good thing, but they are not a substitute for a well-formed data management strategy. Naming conventions invariably change over time and can never fully account for the future evolution of data and the resulting taxonomy. Beyond the physical structure of the data, instrumenting your resources with predefined tags and metadata becomes crucial in cloud architectures. AWS natively provides capabilities to attach this additional information to your geospatial data, and many of its convenient tools and services are built to consume and act on these designations. Enriching your geospatial data with appropriate metadata is as much a best practice in the cloud as it is for any GIS.
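One lightweight way to put this into practice is to validate datasets against a set of predefined tag keys before they enter your catalog. The sketch below is illustrative only; the tag keys, values, and helper name are assumptions for this example, not an AWS API:

```python
# Minimal sketch: enforcing predefined metadata tags on a geospatial dataset.
# REQUIRED_TAGS and the example dataset values are hypothetical.

REQUIRED_TAGS = {"owner", "classification", "crs", "source"}

def missing_tags(tags: dict) -> list:
    """Return the required tag keys absent from a dataset's tags, sorted."""
    return sorted(REQUIRED_TAGS - tags.keys())

dataset_tags = {
    "owner": "gis-team",
    "classification": "public",
    "crs": "EPSG:4326",  # coordinate reference system
}

print(missing_tags(dataset_tags))  # ['source']
```

A check like this can run inside an ingestion workflow so that untagged data is rejected at the door rather than discovered later.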
Another best practice is to quantify your data quality. Simply having a hunch that your data is good or bad is not sufficient. Mature organizations not only quantitatively describe the quality of their data with continually assessed scores but also track those scores to ensure that the quality of critical data improves over time. For example, if you have a dataset of addresses, it is important to know what percentage of the addresses are invalid. Hopefully, that percentage is 0, but that is rarely the case. More important than having 100% accurate data is having confidence in what the quality of a given dataset is today. Neighborhoods are built every day. Buildings are torn down to make way for apartment complexes. Perfect data today may not be perfect data tomorrow, so the most important aspect of data quality is real-time transparency. A threshold for acceptable data quality should be set based on the criticality of the dataset. High-priority geospatial data should require a high bar for quality, while infrequently used, low-impact datasets do not require the same focus. Categorizing your data based on importance allows you to establish guidelines by category and direct finite resources toward the most pressing concerns to maximize value.
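The threshold-by-category idea can be sketched as a simple quality check. The criticality categories and threshold values below are illustrative assumptions, not prescribed numbers:

```python
# Hedged sketch: scoring a dataset's quality and comparing the score
# against a per-criticality threshold. Thresholds here are examples only.

QUALITY_THRESHOLDS = {"high": 0.99, "medium": 0.90, "low": 0.70}

def quality_score(records, is_valid) -> float:
    """Fraction of records that pass the supplied validation function."""
    if not records:
        return 1.0
    return sum(1 for record in records if is_valid(record)) / len(records)

def meets_threshold(score: float, criticality: str) -> bool:
    """True if the score clears the bar for the dataset's criticality."""
    return score >= QUALITY_THRESHOLDS[criticality]

# Toy address dataset: one record is empty, i.e. invalid.
addresses = ["123 Main St", "456 Oak Ave", "", "789 Pine Rd"]
score = quality_score(addresses, lambda addr: bool(addr.strip()))

print(score)                           # 0.75
print(meets_threshold(score, "high"))  # False
print(meets_threshold(score, "low"))   # True
```

Recomputing and recording such scores on every refresh, rather than taking a single snapshot, is what provides the real-time transparency described above.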
People, processes, and technology are equally important
Managing geospatial data successfully in the cloud relies on more than just the technology tools offered by AWS. Designating appropriate roles and responsibilities in your organization ensures that your cloud ecosystem will be sustainable. Avoid single points of failure with respect to skills or tribal knowledge of your environment. Having at least a primary and a secondary person to cover each area will add resiliency to your people operations. Not only does this allow more flexibility in coverage and task assignment, but it also creates training opportunities within your team and allows team members to continually learn and improve their skills.
Next, let’s move on to talk about how to stretch your geospatial dollars to do more with less.