The Importance of… Being Raised Well
By Ernst (Earnest 😉) Wolf
When we talk about “raising” things in a data platform, most people think about raising data quality, raising performance, or raising business value. But there’s something far more fundamental we need to raise consistently — exceptions.
In modern platforms like Microsoft Fabric, data pipelines and notebooks raise and propagate exceptions by default. They alert us when something goes wrong. They stop bad transformations from continuing. They help prevent silent data corruption.
Yet in custom logic, in notebooks as well as pipelines, this basic hygiene is often forgotten the moment special options come into play. And that's where things start to break.
When “try… except” becomes “try… accept all”
Across multiple projects and code bases, I’ve encountered the same pattern:
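The original snippet is not reproduced here, but a minimal Python sketch of the pattern looks like this (`load_file` is a hypothetical stand-in for whatever the real ingestion step is):

```python
def load_file(path):
    # Hypothetical loader; stands in for a real ingestion step
    # (reading a lakehouse file, calling an API, and so on).
    raise FileNotFoundError(f"{path} not found")

# Anti-pattern: "try... accept all". The except clause catches everything
# and discards it, so the cell looks green no matter what happened.
try:
    load_file("landing/orders.csv")
except Exception as e:
    print(f"Something went wrong: {e}")  # at best, a note in the cell output

print("Cell completed")  # executes as if nothing had failed
```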
This looks harmless. It feels polite. But in a production data platform, it is the equivalent of sweeping broken glass under the carpet. The cell is reported as successful. The notebook completes "successfully." The Fabric pipeline run shows a green check mark.
Meanwhile, inside the pipeline that called this notebook, a file failed to load, or an API returned incomplete or invalid data. And nobody knows. Not the engineer. Not the product owner. Not the business. Not even the downstream consumers. The error was caught, but the truth was not raised.
Why good exceptions make good platforms
Good data engineering isn’t about suppressing errors — it’s about surfacing them early, loudly, and clearly. When you let exceptions be raised well, you get:
- Clear pipeline failures
- Faster debugging
- Accurate operational monitoring
- Better data lineage and root cause analysis
- Stronger governance
- Higher data trust and integrity
This is especially important in Fabric, where pipelines and events depend on runtime status being correct — including failures.
The right pattern: try when needed, raise always
If you truly need to catch an exception — for specific retry, logging, cleanup, or custom messages — you should always re-raise it:
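A sketch of that shape, again with a hypothetical `load_file` and Python's standard `logging` module:

```python
import logging

logger = logging.getLogger("ingest")

def load_file(path):
    # Hypothetical loader; a real notebook would read from the lakehouse here.
    raise FileNotFoundError(f"{path} not found")

def ingest(path):
    try:
        load_file(path)
    except FileNotFoundError:
        # Log the context for whoever debugs the run...
        logger.exception("Failed to load %s", path)
        # ...then re-raise, so the notebook, and the pipeline that called it,
        # are reported as failed.
        raise
```

Note that the bare `raise` re-raises the original exception with its traceback intact, which is preferable to `raise e`.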
The log is helpful. But the raise is necessary. It ensures that your pipeline and any other upstream code reflect the reality of what your code did.
🔍 Notebooks vs Pipelines: a quick reality check
Fabric pipelines already do this correctly… until you start using special conditions between activities. Even notebooks do this correctly… until you start using try. In both cases, the risk of swallowing exceptions unintentionally gets very high.
Notebooks: things become tricky when using try. To be more specific: the tricky part is the except clause. Do not use except to swallow exceptions; use it to change something and retry, or to perform extra actions. Always make sure the error condition is reported as an error to upstream code by using raise as the last line of the except block. Only in very rare cases might you choose not to raise, and then only if there is no good alternative, and only for specific, expected exceptions.
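The retry-then-raise shape described above can be sketched as follows; the `FlakyApi` stub is invented purely for illustration (it times out twice, then succeeds):

```python
import time

class FlakyApi:
    """Hypothetical API stub: times out twice, then succeeds."""
    def __init__(self):
        self.calls = 0

    def fetch(self):
        self.calls += 1
        if self.calls < 3:
            raise TimeoutError("upstream API timed out")
        return {"status": "ok"}

def fetch_with_retry(api, attempts=3, delay_seconds=0):
    # The except clause is used to retry, not to swallow:
    # once retries are exhausted, the last statement is a bare raise.
    for attempt in range(1, attempts + 1):
        try:
            return api.fetch()
        except TimeoutError:
            if attempt == attempts:
                raise  # out of retries: report the error upstream
            time.sleep(delay_seconds)
```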
Data Pipelines: things become tricky when using precedence constraints other than on-success (the green lines). In a way, these alternative constraints are the pipeline equivalent of a try in code. Using on-completion (blue), on-fail (red), and even on-skipped (grey) precedence constraints opens up the risk of swallowing exceptions. This is because the rule that determines whether a pipeline run is reported as failed is based solely on the last executed activity of each branch (the leaf activities). This behaviour is clearly explained, with examples, in this Azure Data Factory article, and it applies equally to Fabric data pipelines.
A common pattern where this happens is sending an email on failure (see image below). The pipeline will send the email when the task fails, but the pipeline run itself will still be reported as Succeeded, because the leaf activity (the email task) completed successfully. The same risk exists when using on-completion or on-skipped constraints.
Therefore, use alternatives to on-success only when really needed. And if you do use them, prevent accidental suppression of pipeline failures by adding a Fail activity, so the pipeline run is explicitly marked as failed.
You can add multiple Fail activities to the same pipeline. Even if a Fail activity is executed, the pipeline will continue to process all activities that its precedence constraints (on-completion, on-skipped, etc.) allow. Once all activities have executed, the pipeline will still report a failed run.
“The importance…” — for code and pipelines
As the famous play reminds us, being Earnest early avoids a lot of trouble. The same holds for exceptions: raise them well, and raise them early.
If you want your data platform to behave predictably, you must let exceptions bubble up to the orchestration layer where they belong.
When errors surface clearly, life becomes simpler: pipelines behave predictably, teams troubleshoot faster, and the platform stays trustworthy. Raising exceptions isn’t about being strict or dramatic — it’s just about keeping things visible so the right systems can respond.
In the end, well‑raised exceptions make for well‑running data platforms.