I literally said this in a meeting with our portfolio management team: it's a challenge to help the organization understand that, even within data engineering, not all of our labor is fungible (substitutable).
When you act as if engineering work can be converted into story points, labor capacity into point totals, and the rest is just a juggling game, you oversimplify engineering capacity estimation. It's not that simple at all.
Some data engineers are skilled in real-time, low-latency stacks on top of the fundamentals; only a few can take on these more demanding tasks.
Some are great at standard batch processing on the modern data stack: generalists who can build almost any standard pipeline, but maybe not the real-time work yet.
Some are specialists in data modeling and analytics engineering, which can be the bottleneck of the entire pipeline, so we don't want them spending as much time on data ingestion work.
Sometimes we have employees whose expertise is in legacy tech we are sunsetting (SSIS, Informatica) and who are mid-way through learning the modern data stack, so skill transformation adds friction to our velocity.
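To make the point concrete, here is a toy sketch (all names, skills, and point values are hypothetical, not from any real team) of why summed story points overstate capacity when labor isn't fungible: the team's total points can exceed total demand while some work still can't be staffed, because the tasks require specific skills.

```python
# Hypothetical team: each engineer has a skill set and a per-sprint
# point capacity. Skills roughly mirror the categories above.
engineers = [
    ("A", {"realtime", "batch"}, 8),   # rare real-time specialist
    ("B", {"batch"}, 8),               # modern-data-stack generalist
    ("C", {"modeling"}, 8),            # analytics-engineering specialist
    ("D", {"legacy"}, 8),              # mid-retraining from SSIS/Informatica
]

# Hypothetical backlog: each task needs one specific skill.
tasks = [
    ("rt-ingest", "realtime", 10),
    ("batch-pipeline", "batch", 6),
    ("dim-model", "modeling", 5),
]

naive_capacity = sum(pts for _, _, pts in engineers)  # what the spreadsheet says
demand = sum(pts for _, _, pts in tasks)              # what the backlog asks for

def feasible_points(engineers, tasks):
    """Greedy skill-constrained assignment: points that can actually be staffed."""
    remaining = {name: pts for name, _, pts in engineers}
    skills = {name: s for name, s, _ in engineers}
    done = 0
    # Staff the biggest tasks first; only matching skills may take work.
    for _, need, pts in sorted(tasks, key=lambda t: -t[2]):
        unstaffed = pts
        for ename in remaining:
            if need in skills[ename] and remaining[ename] > 0:
                take = min(unstaffed, remaining[ename])
                remaining[ename] -= take
                unstaffed -= take
                if unstaffed == 0:
                    break
        done += pts - unstaffed
    return done

# Here naive capacity (32) comfortably exceeds demand (21), yet only 19
# points are staffable: the real-time task is gated on one engineer.
print(naive_capacity, demand, feasible_points(engineers, tasks))
```

The gap between the naive sum and the feasible total is exactly what the juggling-game model hides: the spreadsheet says the team is underloaded while the real-time work slips.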
And when the portfolio/program management group and leadership treat it as if it can be simplified this way, it creates perverse incentives: padding timelines, refusing to story-point at all so the numbers can't be misused against the team, or simply not populating start and end dates because there's too much uncertainty.
It's not that technical teams are unwilling to be managed; it's that the framework you're asking them to align to feels impossible. And I don't know what the answer is.