
In a recent LinkedIn post, I pointed out the discord between data producers and consumers, a rift ignited largely by poor data modeling practices. The irony is that data modeling aims at quite the opposite: it is supposed to break down the high wall between the two counterparts.
Given the many nods of agreement on that post, it was validation that this is not a problem tormenting just me, my friends, or acquaintances; it is, in fact, a devil for the data industry at large.
Let's break down the problem into the trifecta of data personas.
Data producers are constantly stuck with rigid legacy models that reduce the flexibility of data production and hinder the creation of high-quality, composable data. Multiple iterations with data teams are often necessary to arrive at the required production schema and requirements.
Data consumers suffer from slow metrics and KPIs, especially as the data volume and query load of warehouses or data stores grow over time. Expensive, complex, and time-consuming joins leave the consumption process littered with stretched-out timelines, bugs, and unnecessary iterations with data teams. There is often no single source of truth that different consumers can reliably refer to, so discrepancies are rampant across the outputs of BI tools. Consumers are the largest victims of broken dependencies, and sometimes they are not even aware of it.
Data engineers are bogged down with countless requests from both producers and consumers. They are consistently stuck between the choices of creating a new data model or updating an old one. Every new model they generate for a unique request adds to the plethora of data models they must maintain for as long as the dependencies last, which can mean indefinitely. Grappling with data-modeling complexities like updating models often means falling back on complex queries that are buggy, lead to broken data pipelines, and then spawn a fresh batch of requests because of those broken pipelines. In short, while figuring out the right approach to data modeling, data engineers suffer tremendously in the current data stack, and it is not sustainable.
In short, data models are erecting a high wall between data producers and data consumers, while their sole objective is to eliminate the gap between the two ends. However, it is not data modeling's fault. Data modeling has been, and remains, one of the most effective ways to manage data. The problem lies in how models are implemented, constantly turning data engineers into bottlenecks. As with almost any other approach in the data space, identifying the best practices organisations should follow while building their data models is the real solution to the obstacles created.
You can learn more about the best practices of data modeling and how they can help you address the inevitable silos and bring back scalable and independent AI/ML.
Data Modeling: Resurrection Stone for Scalable and Independent AI/ML
The chaos goes back years, even decades, but it has only lately started impacting strategic conversations, especially given the growing importance and volume of data for organisations. Data was an afterthought before, used only for fundamental analysis work. But the narrative has changed, and how!
Today, a good grasp of data makes the difference between winning and losing the competitive edge. Many data-first organisations, the likes of Uber, Airbnb, and Google, understood this long ago and dedicated major projects to becoming data-first.
Contrary to popular belief, the modern data stack is a barrier to optimising the capabilities of data models. The primary cause of the silo between data producers and data consumers is the chaotic jumble of tools and processes clogging the system, each somehow trying to make use of a rigid data model defined by someone with possibly no idea of the lay of the business landscape.
Spending more capital on one more tool is not a solution; it is just an additional layer on a chaotic base. More tools bring in more cruft (debt) and make the problem more complex.
As one of my idols, Martin Fowler, would say:
"This situation is counter to our usual experience. We are used to something that is 'high quality' as something that costs more. But when it comes to the architecture and other aspects of internal quality, this relationship is reversed. High internal quality leads to faster delivery of new features because there is less cruft to get in the way."
Contrary to the widespread mindset that it takes years to build a data-first stack, the storage, compute, and other innovative technologies that have popped up in the last couple of years mean this is no longer true. It is entirely possible to build a data-first stack and reap value from it within weeks instead of months or years.
Referring again to Martin Fowlerโs architectural ideology:
"High internal quality leads to faster delivery of new features because there is less cruft to get in the way. While it is true that we can sacrifice quality for faster delivery in the short term, before the build-up of cruft has an impact, people underestimate how quickly the cruft leads to an overall slower delivery. While this isn't something that can be objectively measured, experienced developers reckon that attention to internal quality pays off in weeks, not months."
In the journey to this data-first stack, we need to be ruthless about trimming the countless moving parts that plug into a data model. Chop down redundant tools and, with them, eliminate the integration, maintenance, and expertise overheads, and the licensing costs that build up to millions with no tangible outcome.
A data-first stack is only truly data-first when built with the right ideology and in alignment with your internal infrastructure. Understanding a data developer platform helps answer how; read more here: https://datadeveloperplatform.org/
For a deep dive into the overheads of point solutions that make up the Modern Data Stack, refer to:
In the traditional approach, data teams are stuck defining data models despite having little exposure to the business side. The task still falls on them because data modeling is largely considered part of the engineering stack. This narrative needs to change.
The purpose of data modeling is to build the right roadmap for data to fall into. Who better to do this than the business folks who work day and night with the data and know exactly how and why they want it? This would return control of business logic to business teams and leave its management to data teams and technologies. But how should this be materialised? Through declarative and semantic layers of abstraction.
Business teams would give a hard pass to complex SQL, databases, or low-level data modeling techniques. It's a tragedy that they are forced to deal with them; given the opportunity, they would choose the more intuitive and quicker path that impacts the business at the right time and the right spot.
Moreover, such abstractions are not exclusively for business folks. To make life easier for everyone (producers, consumers, and data engineers), we need a seamless way for business personnel to inject their vast industry and domain knowledge into the data modeling pipeline: reduce the complexity of SQL through abstractions that analysts can easily understand and operate, and spare analysts the struggle of a double life as analytics engineers.
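To make the idea concrete, here is a minimal sketch of what a declarative metric abstraction could look like. All names and the structure are hypothetical, not the API of any particular semantic-layer tool: a business user declares *what* a metric is, and a small engine compiles it to SQL so nobody hand-writes joins or aggregations.

```python
from dataclasses import dataclass, field

# Hypothetical semantic-layer sketch: metrics are declared as data,
# and a compiler turns each declaration into SQL behind the scenes.

@dataclass
class Metric:
    name: str                 # business-facing metric name
    entity: str               # logical entity, e.g. "orders"
    aggregation: str          # e.g. "sum", "count", "avg"
    column: str               # column the aggregation applies to
    filters: dict = field(default_factory=dict)

def compile_sql(metric: Metric) -> str:
    """Compile a declarative metric into a simple SQL string."""
    where = ""
    if metric.filters:
        clauses = [f"{k} = '{v}'" for k, v in metric.filters.items()]
        where = " WHERE " + " AND ".join(clauses)
    return (f"SELECT {metric.aggregation.upper()}({metric.column}) "
            f"AS {metric.name} FROM {metric.entity}{where}")

# A business user only writes this declaration:
revenue = Metric(name="net_revenue", entity="orders",
                 aggregation="sum", column="amount",
                 filters={"status": "completed"})
print(compile_sql(revenue))
# -> SELECT SUM(amount) AS net_revenue FROM orders WHERE status = 'completed'
```

The point of the sketch is the division of labour: the declaration carries the business logic, while the compilation step (and everything behind it, such as join resolution) stays with the data team and the platform.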
A semantic source of truth is different from what is usually referred to as a single source of truth for data. A semantic source of truth refers to a single point that emits verified logic that the organisation could blindly rely on.
"Blindly relying" is a big step, so we need the right system to enable optimal reliability. Surely you've heard of data contracts? Contracts are your one-stop lever to declaratively manage schema, semantics, and governance and bring harmony between producers and consumers (keep watching Modern Data 101 for more about contracts).
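As a rough illustration of the idea (field names and the validation shape are invented for this sketch, not drawn from any specific contract framework), a data contract can be as simple as a declared schema that records are checked against at the producer-consumer boundary:

```python
# Hypothetical data-contract sketch: the producer publishes an expected
# schema, and records are validated at the boundary so downstream
# consumers never receive silently broken fields.

CONTRACT = {
    "order_id": str,
    "amount": float,
    "status": str,
}

def validate(record: dict, contract: dict = CONTRACT) -> list:
    """Return a list of violations; an empty list means the record honours the contract."""
    errors = []
    for field_name, expected_type in contract.items():
        if field_name not in record:
            errors.append(f"missing field: {field_name}")
        elif not isinstance(record[field_name], expected_type):
            errors.append(f"wrong type for {field_name}: "
                          f"expected {expected_type.__name__}")
    return errors

good = {"order_id": "A-17", "amount": 99.5, "status": "completed"}
bad = {"order_id": "A-18", "amount": "99.5"}   # wrong type, missing status
print(validate(good))   # -> []
print(validate(bad))
```

Real contract tooling layers much more on top (semantics, SLAs, governance policies), but the mechanism is the same: a declarative agreement checked automatically, rather than a verbal promise between teams.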
Since its inception, ModernData101 has garnered a select group of Data Leaders and Practitioners among its readership. We'd love to welcome more experts in the field to share their story here and connect with more folks building for better. If you have a story to tell, feel free to email your title and a brief synopsis to the Editor.