This week I've started planning the next version of our data collection system. The key realization for me is that I don't know all the questions we will need to answer in the future. Our current focus is on specific sequences of click events, but later we might want to look at browser versions or behavioral patterns tied to IP addresses. If we don't capture the user-agent, for example, we can't answer questions about browser versions; if we don't capture the IP, we can't look for patterns in IP addresses. We should store data in a way that maximizes the range of questions we can address later.
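To make that concrete, here is a minimal sketch of what a "capture everything" event record could look like. The field names and helper are hypothetical, not our actual schema; the point is simply that the user-agent and IP get recorded alongside the click data even though today's questions only need the click sequence.

```python
import json
import time
import uuid

def make_raw_event(headers: dict, remote_ip: str, click_target: str) -> dict:
    """Assemble a raw click-event record (hypothetical field names)."""
    return {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "click_target": click_target,             # all today's questions need
        "user_agent": headers.get("User-Agent"),  # future: browser versions
        "ip": remote_ip,                          # future: IP patterns
        "headers": dict(headers),                 # everything else, unaltered
    }

if __name__ == "__main__":
    event = make_raw_event(
        {"User-Agent": "Mozilla/5.0", "Accept-Language": "en-US"},
        "203.0.113.7",
        "checkout_button",
    )
    print(json.dumps(event, indent=2))
```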
In the past few years, the cost of storing data has continued to fall. We use AWS extensively; Amazon S3 is very reasonably priced and guarantees a high level of availability. Lower compute costs and open-source tools like Hadoop for processing large data volumes have also greatly increased our ability to extract valuable insights from data. So storing more data than we currently need causes minimal inconvenience.
Another consideration is that data ingestion is costlier and harder to modify than other parts of a data processing system. Downstream processing can always be re-run from stored data, but there is no external source of truth to reconcile captured data against: an event that is misplaced, malformed, or lost at capture time is gone for good. We need to take extra steps to prevent that, and I don't want to change those systems frequently.
All of these factors push me toward capturing as much information as I possibly can, in as unaltered a form as the components I am working with for data ingestion allow. This approach maximizes our ability to answer future questions, incurs negligible storage cost, and reduces the frequency of expensive changes to the ingestion flow.
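As a sketch of what the storage end of that could look like, here is a hypothetical batch writer that lands events in S3 as-is, assuming boto3 is installed with credentials configured; the bucket name, key layout, and file format are made up for illustration.

```python
import gzip
import json
import uuid
from datetime import datetime, timezone

import boto3

def land_raw_batch(events: list, bucket: str = "example-raw-events") -> str:
    """Write a batch of raw event dicts to S3 as gzipped JSON lines."""
    now = datetime.now(timezone.utc)
    # Date-partitioned keys keep the raw store cheap to scan later with
    # Hadoop or similar tools, without committing to today's schema.
    key = now.strftime(f"raw/clicks/%Y/%m/%d/%H%M%S-{uuid.uuid4().hex}.jsonl.gz")
    body = gzip.compress("\n".join(json.dumps(e) for e in events).encode("utf-8"))
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body)
    return key
```

Gzipped JSON lines keep the events human-readable and unaltered while still compressing well; that format choice is mine for the sketch, not a requirement of the approach.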