Thursday, February 19, 2026

Data Collection

One of the strongest abilities of any software is data collection. Computers are stupid, but they can remember things that are useful.

It’s not enough to just have some widgets display data on a screen. To collect data means that it has to be persisted for the long term. The data survives the program being run and rerun, over and over again.
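As a rough sketch of what persistence means here, in Python with the built-in sqlite3 module (the readings table and its fields are made-up examples, not from any real system):

    import sqlite3

    def save_reading(db_path, sensor_id, value):
        """Persist one reading so it survives the program being rerun."""
        conn = sqlite3.connect(db_path)
        try:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS readings (sensor_id TEXT, value REAL)")
            conn.execute(
                "INSERT INTO readings (sensor_id, value) VALUES (?, ?)",
                (sensor_id, value))
            conn.commit()
        finally:
            conn.close()

    def load_readings(db_path):
        """On the next run, the same rows come back out."""
        conn = sqlite3.connect(db_path)
        try:
            return conn.execute(
                "SELECT sensor_id, value FROM readings").fetchall()
        finally:
            conn.close()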

But it’s more than that. If you collect data that you don’t need, it is a waste of resources. If you don’t collect the data that you need, it is a bug. If you keep multiple copies of the same data, it is a mistake. The software is most useful when it always collects just what it needs to collect.

And it matters how you represent that data. Each individual piece of data needs to be properly decomposed. That is, if it is really two different pieces of information, it needs to be collected as two separate data fields. You don’t want to over-decompose, and you don’t want to clump a bunch of things together.
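A tiny hypothetical example of the difference, in Python:

    from dataclasses import dataclass

    # Under-decomposed: one field clumps two facts together, and parsing
    # "Springfield, IL" back apart is guesswork.
    location = "Springfield, IL"

    # Properly decomposed: two pieces of information, two fields.
    @dataclass
    class Location:
        city: str
        state: str

    loc = Location(city="Springfield", state="IL")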

Decomposition is key because it allows the data to be properly typed. You don’t want to collect an integer as a string; it could be misinterpreted. You don’t want a bunch of fields clumped together as unstructured text. Data in the wrong format opens up the door for it to be misinterpreted as information, causing bugs. You don’t want mystery data; each datum should have a self-describing label that is unambiguous. If you collect data that you cannot interpret correctly, then you have not collected that information.
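Continuing in the same vein, a hypothetical sketch of the difference between mystery data and typed, labeled data:

    from dataclasses import dataclass
    from datetime import date

    # Mystery data: clumped, untyped, unlabeled. What does "N" mean?
    row = "42,2026-02-19,N"

    # Typed and self-describing: each datum has an unambiguous label.
    @dataclass
    class Order:
        quantity: int      # an integer collected as an integer, not a string
        ordered_on: date   # a real date, not free text
        expedited: bool    # the "N" above meant False; now that is explicit

    order = Order(quantity=42, ordered_on=date(2026, 2, 19), expedited=False)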

If you have the data format correct, then you can discard invalid junk as you are collecting it. Filling a database with junk is collecting data you don’t need, and if that happened instead of collecting the data you did need, it is also a bug.
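For instance, a minimal sketch of rejecting junk at the point of collection (the rule that quantities must be non-negative is just an assumed example):

    def collect_quantity(raw):
        """Parse and validate before the value ever reaches persistence."""
        value = int(raw)  # raises ValueError on junk like "abc"
        if value < 0:
            raise ValueError(f"quantity must be non-negative, got {value}")
        return value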

Data are never independent. You need to collect the data along with the structure that correctly binds all of the underlying values together. If you downgrade that structure, you have lost the information about it. If you put the data into a broader structure, you have opened up the possibility of it getting filled with junk data. For example, if the relationship between the data is a hierarchical tree, then the data needs to be collected as a tree; neither a list nor a graph is a valid collection.
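A sketch of the tree example, with hypothetical category names. A flat list would lose the parent/child relationships, while a general graph would permit junk such as cycles, which a tree forbids by construction:

    from dataclasses import dataclass, field

    @dataclass
    class Category:
        name: str
        children: list["Category"] = field(default_factory=list)

    root = Category("products", children=[
        Category("hardware", children=[Category("fasteners")]),
        Category("software"),
    ])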

In most software, most of the data is intertwined with other values. If you start with one specific piece of data, you should be able to quickly navigate to any of the others. That means that you have collected all of the structures and interconnections properly, and you have not lost any of them. There should only be one way to navigate, or you have collected redundant connections.
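One way to picture that, using hypothetical customer and order records: store each connection exactly once and derive the reverse direction, rather than keeping a second, redundant copy of the link.

    from dataclasses import dataclass

    @dataclass
    class Customer:
        customer_id: int
        name: str

    @dataclass
    class Order:
        order_id: int
        customer_id: int  # the single stored connection

    def orders_for(customer, orders):
        """Navigate the other direction by derivation, not duplication."""
        return [o for o in orders if o.customer_id == customer.customer_id]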

As such, if you have collected all of the data you need, then you can validate it. There won’t be data that is missing, and there won’t be data that is junk. You can write simple validations that will ensure that the software is working properly, as expected. If the validations are difficult, then there is a problem with the data collection.
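Continuing the hypothetical customer/order example, with plain dicts so the sketch stands alone, a whole-dataset validation can stay this simple:

    def validate(customers, orders):
        """If collection is correct, integrity checks are short and cheap."""
        problems = []
        known_ids = {c["customer_id"] for c in customers}
        for o in orders:
            if o["customer_id"] not in known_ids:
                problems.append(
                    f"order {o['order_id']} points at a missing customer")
        return problems  # an empty list: nothing missing, no junk links

    assert validate([{"customer_id": 1}],
                    [{"order_id": 7, "customer_id": 1}]) == []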

If you collect all of the data you need for the software correctly, then writing the code on top of it is way simpler and far easier to properly structure. The core software gets the data from persistence, then passes it out to some form of display. It may come back with some edits, which need to be updated in the persistence. There may be some data that you did not collect, but the data you did collect is enough to be able to derive it with a computation. There may be tricky technical work necessary to support scaling, but that is independent of the collection and flow of data.
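That core flow is small enough to sketch end to end. Here a plain dict stands in for persistence and display() stands in for whatever UI layer exists; every name is a hypothetical placeholder:

    def display(data, total):
        """A stand-in for any UI layer: show the data, return any edits."""
        print(data, "derived total:", total)
        return {}

    def run_once(store):
        data = dict(store)            # get the data from persistence
        total = sum(data.values())    # derive what was not collected directly
        edits = display(data, total)  # pass it out to some form of display
        store.update(edits)           # edits come back and update persistence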

Collecting data is the foundation of almost all software. If you get it right, you will be able to grow the software to gradually cover larger parts of the problem domain. If you make a mess out of it, the code will get really ugly, and the software will be unreliable.
