

Hadoop is a highly scalable, open-source platform that has become the go-to big data analytics engine. It is designed to aggregate vast amounts of structured and unstructured data from multiple sources and to run deep, computationally intensive analytics such as high-performance data mining, large-scale data exploration, clustering, and targeting. Because Hadoop is entirely schema-less, it excels as a staging area and data integration platform for the ETL of unstructured data into enterprise data warehouses, and it is used in applications across markets ranging from finance and e-commerce to fraud detection and healthcare.

Designed to run on a large number of machines that share no memory, Hadoop's key storage component, the Hadoop Distributed File System (HDFS), lets users spread data across multiple servers. Hadoop then relies on its processing framework, MapReduce, to map data operations onto individual servers and combine the results into a single data set. This yields greater potential power than a centralized database system and, because the processors work in parallel, the ability to ask complicated computational questions of virtually any amount of data. Additionally, HDFS enables rapid data transfer rates and allows the system to keep operating uninterrupted when a node fails.
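To make the map-then-reduce flow concrete, here is a rough sketch of the idea in plain Python, using the canonical word-count example. This is only an illustration of the programming model, not Hadoop's actual API (real Hadoop jobs are typically written against its Java MapReduce interfaces); the function names and the two-item `splits` list are invented for the example.

```python
from collections import defaultdict

def map_phase(document):
    """Mapper: emit a (word, 1) pair for every word in one input split."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(mapped_pairs):
    """Shuffle: group all emitted values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: combine each key's values into a single result."""
    return {word: sum(counts) for word, counts in groups.items()}

# In a cluster, each server would map its own split of the data in
# parallel; here we simulate two splits processed independently.
splits = ["the quick brown fox", "the lazy dog the end"]
mapped = [pair for split in splits for pair in map_phase(split)]
result = reduce_phase(shuffle(mapped))
print(result["the"])  # → 3
```

Because each mapper touches only its own split and each reducer touches only its own keys, the work distributes naturally across machines that share no memory, which is exactly the property the HDFS/MapReduce pairing exploits.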

Simple developer tools and interfaces allow many applications to run on the Hadoop framework. Still, it is best reserved for enterprises with truly vast quantities of structured and unstructured data that need heavy data mining, complex queries, and large-scale ETL. Notable users include Yahoo and IBM; Hadoop itself grew out of papers Google published on MapReduce and the Google File System.

Need some guidance implementing BI technologies to improve your business? Plaster Group offers business solutions for Hadoop and other BI platforms to help your business succeed. Contact us at info@plastergroup.com today to find out how we can help.
