Data Engineering Podcast

This show goes behind the scenes of the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.

https://www.dataengineeringpodcast.com

Creating Shared Context For Your Data Warehouse With A Controlled Vocabulary


Summary

Communication and shared context are the hardest parts of any data system. In recent years the focus has been on data catalogs as the means for documenting data assets, but those introduce a secondary system of record that has to be consulted to find the necessary information. In this episode Emily Riederer shares her work to create a controlled vocabulary for managing the semantic elements of the data managed by her team and encoding it in the schema definitions in her data warehouse. She also explains how she created the dbtplyr package to simplify the work of creating and enforcing your own controlled vocabularies.
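
As a rough illustration of the pattern discussed in this episode, the sketch below shows how a prefix-based column contract might be checked inside a dbt project using dbtplyr-style select helpers. The model name (stg_accounts), the IND_/N_ prefixes, and the exact macro calls (dbtplyr.get_column_names, dbtplyr.starts_with) are illustrative assumptions rather than a verbatim excerpt from the package; consult the dbtplyr documentation for the actual API.

    -- Hypothetical singular dbt test (e.g. tests/assert_column_contracts.sql).
    -- Returns any rows that violate the naming contract: columns prefixed IND_
    -- must hold 0/1 indicators and columns prefixed N_ must hold non-negative
    -- counts. Macro names mirror dbtplyr's dplyr-style helpers; verify the
    -- exact signatures against the package documentation before use.

    {% set cols = dbtplyr.get_column_names(ref('stg_accounts')) %}
    {% set ind_cols = dbtplyr.starts_with('IND_', cols) %}
    {% set n_cols = dbtplyr.starts_with('N_', cols) %}

    {% set checks = [] %}
    {% for col in ind_cols %}
      {% do checks.append(col ~ ' not in (0, 1)') %}
    {% endfor %}
    {% for col in n_cols %}
      {% do checks.append(col ~ ' < 0') %}
    {% endfor %}

    select *
    from {{ ref('stg_accounts') }}
    where {{ (checks | join(' or ')) if checks else 'false' }}

A check written this way makes the naming convention itself the enforcement surface: a newly added IND_ or N_ column is picked up by the same test automatically, with no change to the test code.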

Announcements
  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often take hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold.
  • Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker, and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a Data Engineering Podcast listener, you get credits worth $3000 on an annual subscription.
  • Your host is Tobias Macey and today I’m interviewing Emily Riederer about defining and enforcing column contracts and controlled vocabularies for your data warehouse
Interview
  • Introduction
  • How did you get involved in the area of data management?
  • Can you start by discussing some of the anti-patterns that you have encountered in data warehouse naming conventions and how they relate to the modeling approach (e.g. star/snowflake schema, data vault, etc.)?
  • What are some of the types of contracts that can, and should, be defined and enforced in data workflows?
    • What are the boundaries where we should think about establishing those contracts?
  • What is the utility of column and table names for defining and enforcing contracts in analytical work?
  • What is the process for establishing contractual elements in a naming schema?
    • Who should be involved in that design process?
    • Who are the participants in the communication paths for column naming contracts?
  • What are some examples of context and details that can’t be captured in column names?
    • What are some options for managing that additional information and linking it to the naming contracts?
  • Can you describe the work that you have done with dbtplyr to make name contracts a supported construct in dbt projects?
    • How does dbtplyr help in the creation and enforcement of contracts in the development of dbt workflows?
    • How are you using dbtplyr in your own work?
  • How do you handle the work of building transformations to make data comply with contracts?
  • What supplemental systems, techniques, or documentation are needed to work with name contracts, and how are they leveraged by downstream consumers?
  • What are the most interesting, innovative, or unexpected ways that you have seen naming contracts and/or dbtplyr used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on dbtplyr?
  • When is dbtplyr the wrong choice?
  • What do you have planned for the future of dbtplyr?
Contact Info
  • Twitter
  • Website
Parting Question
  • From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
  • Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
  • dbtplyr
  • Great Expectations
    • Podcast Episode
  • Controlled Vocabularies Presentation
  • dplyr
  • Data Vault
    • Podcast Episode
  • OpenMetadata
    • Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast


January 2, 2022 · 1h0m