The current trend in data management is to centralize the responsibilities for storing and curating the organization’s information in a data engineering team. This organizational pattern is reinforced by the architectural pattern of data lakes as a solution for managing storage and access. In this episode Zhamak Dehghani shares an alternative approach in the form of a data mesh. Rather than connecting all of your data flows to one destination, you empower your individual business units to create data products that can be consumed by other teams. This was an interesting exploration of a different way to think about the relationship between how your data is produced, how it is used, and how to build a technical platform that supports the organizational needs of your business.
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- To grow your professional network and find opportunities with the startups that are changing the world, Angel List is the place to go. Go to dataengineeringpodcast.com/angel to sign up today.
- You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen to, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Upcoming events include the O’Reilly AI Conference, the Strata Data Conference, and the combined events of the Data Architecture Summit and Graphorum. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
- Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
- Your host is Tobias Macey and today I’m interviewing Zhamak Dehghani about building a distributed data mesh for a domain oriented approach to data management
- How did you get involved in the area of data management?
- Can you start by providing your definition of a "data lake" and discussing some of the problems and challenges that they pose?
- What are some of the organizational and industry trends that tend to lead to this solution?
- You have written a detailed post outlining the concept of a "data mesh" as an alternative to data lakes. Can you give a summary of what you mean by that phrase?
- In a domain oriented data model, what are some useful methods for determining appropriate boundaries for the various data products?
- What are some of the challenges that arise in this data mesh approach and how do they compare to those of a data lake?
- One of the primary complications of any data platform, whether distributed or monolithic, is that of discoverability. How do you approach that in a data mesh scenario?
- A corollary to the issue of discovery is that of access and governance. What are some strategies to making that scalable and maintainable across different data products within an organization?
- Who is responsible for implementing and enforcing compliance regimes?
- One of the intended benefits of data lakes is the idea that data integration becomes easier by having everything in one place. What has been your experience in that regard?
- How do you approach the challenge of data integration in a domain oriented approach, particularly as it applies to aspects such as data freshness, semantic consistency, and schema evolution?
- Has latency of data retrieval proven to be an issue in your work?
- When it comes to the actual implementation of a data mesh, can you describe the technical and organizational approach that you recommend?
- How do team structures and dynamics shift in this scenario?
- What are the necessary skills for each team?
- Who is responsible for the overall lifecycle of the data in each domain, including modeling considerations and application design for how the source data is generated and captured?
- Is there a general scale of organization or problem domain where this approach would generate too much overhead and maintenance burden?
- For an organization that has an existing monolithic architecture, how do you suggest they approach decomposing their data into separately managed domains?
- Are there any other architectural considerations that data professionals should be considering that aren’t yet widespread?
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- @zhamakd on Twitter
- How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh
- Technology Radar
- Data Lake
- Data Warehouse
- James Dixon
- Azure Data Lake
- "Big Ball Of Mud" Anti-Pattern
- Event Sourcing
- Podcast.__init__ Episode
- Data Engineering Episode
- Data Catalog
- Master Data Management
- CNCF (Cloud Native Computing Foundation)
- Cloud Events Standard
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA