Data specialists focus on a specific data source or category (e.g., social media, satellite imagery, radiology) and then seek to apply the derived insights to a diverse range of business problems. Data generalists, by contrast, start from a specific category of business problems or from a technology, and then seek access to the various data sources relevant to those problems or processable by that technology.
The objective of data specialists is to derive the maximum value from the data sources they analyze. The objective of data generalists is to cover and integrate a wide range of data sources, sacrificing data quality for the sake of uniformity. The two address complementary needs, but data analytics buyers tend to prefer generalists over specialists for reasons of simplicity and budget constraints.
At Bloom, once we list the data sources that we analyze, clients rarely ask any follow-up questions. And Bloom is not an exception. Yet it is widely known that how thoroughly a given data source is treated during analysis hugely affects the outcome: "garbage in, garbage out". The reason for this contradiction is the lack of internal expertise, and of independent third parties, that would enable customers to properly evaluate data quality and its impact on the analysis results (see "an illusion of success" for more details). As a result, customers are unable to weigh the quality of individual data sources against the quantity of diverse data sources analyzed by a solution.
Clearly, customers would like to combine quality and quantity within a single solution. Yet such solutions are not available in most market sectors, particularly those where data access and quality require significant financial investment. The market would therefore strongly benefit from open platforms flexible enough to host and integrate high-quality data specialist applications that address specific business challenges.