Discussion about this post

SJ

Thanks for sharing your thoughts on this topic, Madison. I agree (speaking from experience) that a strong, well-thought-out data model is the first step toward ensuring accurate reporting. With the right level of granularity and additive facts, metrics can be built from documentation and definitions.

One thing I think a semantic model has a place for is ensuring consistent access to metrics across departments in an organization. A clean data model allows metrics to be built from a single source of truth (i.e. a fact table) by a data analyst within a department. But if different departmental analysts build the same metric and apply different conditions due to departmental needs, there will be "logic drift" across departments: the data model is good, the data is granular, there is a single source of truth, yet different departments produce different 'flavors' of the same metric. For example, a sales department may consider 'total revenue' to be the sum of all revenue, while a finance department may only want to sum revenue with a non-null post-date.
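To make that "logic drift" concrete, here is a minimal sketch (the fact table, column names, and numbers are all hypothetical) of how two analysts can get different 'total revenue' figures from the same single source of truth, and how one shared metric definition makes the nuance explicit:

```python
import pandas as pd

# Hypothetical fact table: one row per order line, fully granular.
fact_revenue = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "revenue": [100.0, 250.0, 75.0, 50.0],
    "post_date": ["2024-01-05", "2024-01-06", None, "2024-01-07"],
})

# Sales analyst's version: sum everything.
sales_total_revenue = fact_revenue["revenue"].sum()

# Finance analyst's version: only revenue that has been posted.
finance_total_revenue = fact_revenue.loc[
    fact_revenue["post_date"].notna(), "revenue"
].sum()

print(sales_total_revenue)    # 475.0
print(finance_total_revenue)  # 400.0 -> same fact table, different 'total revenue'

# A centrally owned metric definition captures the nuance once,
# instead of leaving it to ad-hoc departmental filters.
def total_revenue(df: pd.DataFrame, posted_only: bool = False) -> float:
    """Canonical 'total revenue'; posted_only encodes the finance variant."""
    if posted_only:
        df = df[df["post_date"].notna()]
    return float(df["revenue"].sum())
```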

In such a case, wouldn't it be beneficial for the core data team to capture these nuances in a semantic model, potentially a central one, so that they are available to all users in the organization? Curious to read your thoughts on this!

Adrian Curry

A great read! I really like how your final example clearly demonstrates that starting with a more granular data model can lead to more consistency and usability when using the data to create metrics.

