Data governance is a complex endeavor, and scaling it to meet the needs of a large or globally distributed organization requires a well-considered and coherent strategy. In this episode Tim Ward describes an architecture that he has used successfully with multiple organizations to scale compliance. By treating it as a graph problem, where each hub in the network has localized control and inherits higher-level controls, this approach reduces overhead and provides greater flexibility. Tim provides useful examples for understanding how to adopt this approach in your own organization, including some technology recommendations for making it maintainable and scalable. If you are struggling to scale data quality controls and governance requirements, then this interview will provide some useful ideas to incorporate into your roadmap.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, a 40Gbit public network, fast object storage, and a brand new managed Kubernetes platform, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. And for your machine learning workloads, they’ve got dedicated CPU and GPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference in NYC, Strata Data in San Jose, and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
- Your host is Tobias Macey and today I’m interviewing Tim Ward about using an architectural pattern called data hub that allows for scaling data management across global businesses
- How did you get involved in the area of data management?
- Can you start by giving an overview of the goals of a data hub architecture?
- What are the elements of a data hub architecture and how do they contribute to the overall goals?
- What are some of the patterns or reference architectures that you drew on to develop this approach?
- What are some signs that an organization should implement a data hub architecture?
- What is the migration path for an organization that has an existing data platform but needs to scale its governance and localize storage and access?
- What are the features or attributes of an individual hub that allow for them to be interconnected?
- What is the interface presented between hubs to allow for accessing information across these localized repositories?
- What is the process for adding a new hub and making it discoverable across the organization?
- How is discoverability of data managed within and between hubs?
- If someone wishes to access information between hubs or across several of them, how do you prevent data proliferation?
- If data is copied between hubs, how are record updates accounted for to ensure that they are replicated to the hubs that hold a copy of that entity?
- How are access controls and data masking managed to ensure that various compliance regimes are honored?
- In addition to compliance issues, another challenge of distributed data repositories is the question of latency. How do you mitigate the performance impacts of querying across multiple hubs?
- Given that different hubs can have differing rules for quality, cleanliness, or structure of a given record how do you handle transformations of data as it traverses different hubs?
- How do you address issues of data loss or corruption within those transformations?
- How is the topology of a hub infrastructure arranged and how does that impact questions of data loss through multiple zone transformations, latency, etc.?
- How do you manage tracking and reporting of data lineage within and across hubs?
- For an organization that is interested in implementing their own instance of a data hub architecture, what are the necessary components of an individual hub?
- What are some of the considerations and useful technologies that would assist in creating and connecting hubs?
- Should the hubs be implemented in a homogeneous fashion, or is there room for heterogeneity in their infrastructure as long as they expose the appropriate interface?
- When is a data hub architecture the wrong approach?
- @jerrong on Twitter
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Podcast Episode
- Eventual Connectivity Episode
- Data Governance
- Data Lineage
- Data Sovereignty
- Graph Database
- Helm Chart
- Application Container
- Docker Compose
- LinkedIn DataHub
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA