There are lots of technologies that promise to spare companies the effort of organizing their data. They let them just dump it all together, then pick it apart later.
Mostly, that hasn’t worked very well. Either the data gets lost in the swampy mess, or the resources needed to pick it apart far outweigh any savings.
But it also isn’t necessary.
The data that a company acquires is data that it specifically intends to collect. It describes the company’s products and services, customers, internal processes, etc. It doesn’t just appear out of nowhere.
Mining that data for knowledge might, at very low probabilities, offer some surprises, but the front line of the business likely already knows about them, even if that knowledge isn’t being communicated well.
Companies also know the ‘structure’ of the data in the wild. It might change periodically, or be ignored for a while, but direct observation can usually describe it accurately. Strong up-front analysis saves time.
Companies collect data in order to run at larger scales. So, with a few exceptions, sifting through that data is not exploratory. It’s an attempt to get a reliable snapshot of the world at many different moments.
There are exploratory tasks in some industries too, but these are relatively small in scope, and they are generally about searching for unexpected patterns. That means you first need to know the set of expected patterns, a step that is often skipped.
Mostly, data isn’t exotic, it isn’t random, and it shouldn’t be a surprise. If it arrives in a dozen different representations when it is collected, that is a mistake. Too often we get obsessed with technology but forget its purpose, and that is always expensive.
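One way to avoid the dozen-representations problem is to normalize data at the point of collection rather than deferring it. A minimal sketch of that idea, using dates as the example; the format list and function name here are assumptions for illustration, not anything prescribed above:

```python
from datetime import datetime

# Hypothetical ingest helper. The set of accepted formats is an
# assumption for illustration; a real system would list the formats
# actually observed in its own data sources.
KNOWN_FORMATS = [
    "%Y-%m-%d",    # 2024-03-01
    "%d/%m/%Y",    # 01/03/2024
    "%b %d, %Y",   # Mar 1, 2024
]

def canonical_date(raw: str) -> str:
    """Convert any known representation into one canonical form (ISO 8601)."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    # Unknown representations are rejected at the door,
    # not dumped into the swamp to be sorted out later.
    raise ValueError(f"unrecognized date representation: {raw!r}")

print(canonical_date("Mar 1, 2024"))  # → 2024-03-01
```

The point isn’t the date parsing itself; it’s that the set of expected representations is written down explicitly, and anything outside it surfaces immediately instead of silently accumulating.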