At a time when computer programming is becoming more and more accessible thanks to the growing number of coding schools, online resources, and bootcamps, one question keeps coming up: which language should I learn first, or which language should I choose for my use case? Ruby and Python are no exception.
This article aims to remind veterans of, and introduce beginners to, the basics of caching (in Ruby in particular). Starting from the basic question – what is caching? – we will move on to when particular caching techniques should be used, where cached resources can be stored, what types of cache are available, how to approach cache invalidation, and finally, what risks are involved.
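To give a taste of the topic before diving in, here is a minimal sketch of the simplest kind of cache: in-memory memoization in plain Ruby. The `ReportFetcher` class and its `build_report` method are hypothetical, used purely for illustration.

```ruby
# A minimal in-memory cache: the first call computes and stores the value,
# subsequent calls for the same key reuse the stored result.
class ReportFetcher
  def initialize
    @cache = {}
  end

  # Returns the cached report for a market, computing it only once.
  def report_for(market)
    @cache[market] ||= build_report(market)
  end

  private

  # Hypothetical expensive operation (stands in for a slow query or computation).
  def build_report(market)
    sleep 1
    "report for #{market}"
  end
end

fetcher = ReportFetcher.new
fetcher.report_for("Warsaw") # slow: computed and cached
fetcher.report_for("Warsaw") # fast: served from the cache
```

One caveat of the `||=` idiom: it re-runs the computation whenever the stored value is `nil` or `false`, which is one reason dedicated cache stores track key presence rather than truthiness.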
Nowadays, any project should be preceded by a detailed Customer Profiling process that provides accurate information about the customer's needs and expectations. A decision that is likely to succeed has to be a customer-centered one, meeting as many of those needs and expectations as possible.
Even a casual look at the brief (translated) Wikipedia definition proves the fallacy of such thinking: “Natural language processing (NLP) – an interdisciplinary scientific field joining artificial intelligence and linguistics, concerned with the automation of the analysis, understanding, translation, and generation of natural language by computers.”
Old as the question is, the debate continues: what kind of database should I use for my system? The most common answer is “it depends”, and it really does depend on many different factors. I would like to cover some of those factors to help you identify and select the right database based on the requirements of your project.
“Our experience has taught us that if your organization hasn’t created and thoroughly tested, repeatedly, a cyber incident response plan across all business areas and personnel, as well as performed simulations of cyber attacks, you won’t do a good job of responding when it occurs for real. We see over and over that it is very difficult to make good decisions when you’re responding to a real attack in the heat of the moment.” – David Burg, Cyber Security & Privacy Leader, PwC
It goes without saying that over the last decades the vast majority of institutions, companies, and firms have had to deal with the reality of Big Data, which created an urgent need for processing platforms capable of storing and analyzing such vast amounts of data. This is why Hadoop, and later [Spark](/spark-consulting/) around 2008, came into the picture.
High-volume data streams and a great number of real estate market reports were what we were confronted with on one of our client’s projects. More specifically, the client faced a tough scalability problem: the property market reports generated from such a big data set took up to 3 hours to produce (for just 100 markets). Worse, this time kept growing, as a few million new records were added to the data set each day. To resolve the problem, the client decided to invest in a new system architecture.