Minimize Manual Work

It’s really quite simple. The quicker you get access to your data the way you need it, the quicker you can react to the real-world challenges it describes. D♯ is all about making that happen as effortlessly as possible, no matter where the data comes from or where it’s going. By eliminating entire categories of manual work, it cuts development times to a fraction of what they used to be. The same goes for maintenance, and for the routine tasks, big and small, that developers and data professionals perform several times a day.

Conceptual Modeling

What makes all of this possible is Conceptual model-based automation. The model describes what the solution should contain, not how it should be implemented. The how can be algorithmically deduced from the model and its accompanying metadata, then automatically turned into program code that is instantly ready to be installed into the production environment, or executed ad hoc when needed. This includes creating database tables, views and SQL procedures that hash and load the data, as well as materialized views for optimal performance. Even sets of orchestration procedures that process the data in the correct order can be generated, since all database object relationships are known from the original model. And because semantic elements are used in the modeling process, D♯ knows about the nature of the data itself, so data quality analysis can be automated as well.
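To make the idea concrete, here is a minimal sketch of what "deducing the how from the what" can look like. This is not D♯'s actual generator; the model shape, naming convention and hashing choice are illustrative assumptions only.

```python
import hashlib

# Hypothetical toy model: one conceptual entity with a business key and
# descriptive attributes. A real metadata model is far richer; this only
# illustrates deriving implementation code from a declarative model.
model = {
    "entity": "Customer",
    "business_key": "customer_number",
    "attributes": ["name", "email"],
}

def generate_hub_ddl(m):
    """Derive a Data Vault-style hub table from the conceptual entity."""
    e = m["entity"].lower()
    return (
        f"CREATE TABLE hub_{e} (\n"
        f"  {e}_hash CHAR(32) PRIMARY KEY,\n"
        f"  {m['business_key']} VARCHAR(100) NOT NULL,\n"
        f"  load_ts TIMESTAMP NOT NULL,\n"
        f"  record_source VARCHAR(50) NOT NULL\n"
        f");"
    )

def hash_key(business_key_value):
    """Hash a business key value, as a generated load procedure would."""
    return hashlib.md5(str(business_key_value).encode("utf-8")).hexdigest()

print(generate_hub_ddl(model))
print(hash_key("CUST-0042"))
```

The point is that nothing in the function bodies is specific to "Customer": feed in a different entity and the corresponding table and load logic fall out mechanically.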

The Driving Idea

As a guiding principle, if you can explain to someone how to do it, you can also automate it. And that’s what we do.

A true Low-Code solution for Business Data Platforms, D♯ reduces ETL/ELT coding time to zero, an indisputable time saver. True to the model-oriented approach, the data is published in the form of the original Conceptual model, completely abstracting away the underlying, often complicated, Data Vault 2.0 structure. Another indisputable time saver, but most importantly, it makes the model truly the center of the solution.
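The publishing step can be sketched the same way: a view that presents the conceptual entity while hiding the hub-and-satellite structure beneath it. Again, the table and column naming conventions here are illustrative assumptions, not D♯'s actual output.

```python
# Hypothetical sketch: publish a conceptual entity as a view that hides
# the underlying Data Vault structure (a hub joined to its satellite).
def generate_publish_view(entity, business_key, attributes):
    cols = ",\n  ".join(f"s.{a}" for a in attributes)
    e = entity.lower()
    return (
        f"CREATE VIEW {e} AS\n"
        f"SELECT\n"
        f"  h.{business_key},\n"
        f"  {cols}\n"
        f"FROM hub_{e} h\n"
        f"JOIN sat_{e} s ON s.{e}_hash = h.{e}_hash;"
    )

print(generate_publish_view("Customer", "customer_number", ["name", "email"]))
```

A consumer querying the generated view sees only the entity from the original model; the hash keys, hubs and satellites never surface.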

Handling, refining and enriching data is a complicated process, but implementing it does not need to be.