"Our experience has taught us that if your organization hasn’t created and thoroughly tested, repeatedly, a cyber incident response plan across all business areas and personnel, as well as performed simulations of cyber attacks, you won’t do a good job of responding when it occurs for real. We see over and over that it is very difficult to make good decisions when you’re responding to a real attack in the heat of the moment." *David Burg, Cyber Security & Privacy Leader, PwC*
Over the last decades, the vast majority of institutions, companies, and firms have had to confront the reality of Big Data, which forced them to build processing platforms capable of storing and analyzing these enormous volumes of information. This is why Hadoop, and later on [Spark](/spark-consulting/), came into the picture around 2008.
High-volume data streams and a large number of real estate market reports were what we confronted on one of our client's projects. More specifically, the client faced a tough scalability problem: generating property market reports from such a large data set took up to 3 hours for just 100 markets. Worse, this time kept growing, as a few million new records were added to the data set every day. To resolve the problem, the client decided to invest in a new system architecture.
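For illustration only, a distributed engine such as Spark (mentioned above) is one common way to parallelize this kind of per-market report aggregation instead of producing each report serially. The sketch below is a minimal PySpark example, assuming hypothetical column names and storage paths rather than the client's actual schema or architecture.

```python
# Illustrative sketch only: column names (market_id, sale_price) and the
# input/output paths are assumptions, not the client's real schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("market-reports").getOrCreate()

# Load the accumulated property records (format and location are hypothetical).
records = spark.read.parquet("s3://example-bucket/property-records/")

# Compute per-market report metrics in one distributed pass
# rather than generating each market's report one by one.
report = (
    records
    .groupBy("market_id")
    .agg(
        F.count("*").alias("listings"),
        F.avg("sale_price").alias("avg_price"),
        F.expr("percentile_approx(sale_price, 0.5)").alias("median_price"),
    )
)

# Persist the aggregated report data for downstream rendering.
report.write.mode("overwrite").parquet("s3://example-bucket/market-reports/")
```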
This post gives an account of a machine-learning-enabled software project in the domain of investment optimization and automation in blockchain-based cryptocurrency markets. It specifies the domain problem addressed, describes the solution development process, and summarizes the key project takeaways.