Welcome to the Building Big Data Pipelines with SparkR & PowerBI & MongoDB course. In this course, we will build a big data analytics solution using big data technologies for R.
In our use case, we will work with raw earthquake data, applying big data processing techniques to extract, transform, and load it into usable datasets. Once the data has been processed and cleaned, we will use it as the data source for building predictive analytics and visualizations.
Power BI Desktop is a powerful data visualization tool that lets you build advanced queries, models, and reports. With Power BI Desktop, you can connect to multiple data sources and combine them into a single data model. This data model lets you build visuals and dashboards that you can share as reports with other people in your organization.
SparkR is an R package that provides a lightweight frontend to use Apache Spark from R. SparkR provides a distributed data frame implementation that supports operations like selection, filtering, aggregation, etc. (similar to R data frames, dplyr) but on large datasets. SparkR also supports distributed machine learning using MLlib.
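As a taste of what that looks like in practice, here is a minimal SparkR sketch; the file path and the `mag`/`region` columns are illustrative assumptions, not taken from the course dataset:

```r
library(SparkR)
sparkR.session(appName = "earthquakes-demo")

# Read a CSV into a distributed Spark DataFrame
# (hypothetical path and column names)
quakes <- read.df("data/earthquakes.csv", source = "csv",
                  header = "true", inferSchema = "true")

# Selection, filtering, and aggregation: dplyr-like, but distributed
strong   <- filter(quakes, quakes$mag >= 5.0)
byRegion <- agg(groupBy(strong, "region"),
                avg_mag = avg(strong$mag),
                events  = count(strong$mag))
head(byRegion)

sparkR.session.stop()
```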
MongoDB is a document-oriented NoSQL database used for high-volume data storage. It stores data in a JSON-like format called documents rather than in row/column tables. The document model maps to the objects in your application code, making the data easy to work with.
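For example, each earthquake event could be stored as a single document. Below is a minimal sketch using the mongolite R package; the course may connect differently, and the database, collection, and field names here are illustrative:

```r
library(mongolite)

# Connect to a local MongoDB instance
# (hypothetical database and collection names)
quakes <- mongo(collection = "events", db = "earthquakes",
                url = "mongodb://localhost:27017")

# Insert one JSON-like document; no fixed row/column schema is needed
quakes$insert('{"place": "Fiji region", "mag": 5.2,
                "coords": {"lat": -17.8, "long": 178.1}}')

# Query documents by field using a MongoDB query expression
quakes$find('{"mag": {"$gte": 5.0}}')
```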
- You will learn how to create big data processing pipelines using R and MongoDB
- You will learn machine learning with geospatial data using SparkR and the MLlib library (see the sketch after this list)
- You will learn data analysis using SparkR, R, and Power BI
- You will learn how to manipulate, clean, and transform data using Spark data frames
- You will learn how to create Geo Maps in Power BI Desktop
- You will also learn how to create dashboards in Power BI Desktop
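As a preview of the geospatial machine-learning outcome above, MLlib models are exposed directly in SparkR. Here is a minimal k-means sketch over event coordinates; the path and `latitude`/`longitude` column names are assumptions for illustration:

```r
library(SparkR)
sparkR.session()

# Hypothetical earthquake DataFrame with latitude/longitude columns
quakes <- read.df("data/earthquakes.csv", source = "csv",
                  header = "true", inferSchema = "true")

# Cluster event coordinates into geographic groups with MLlib k-means
model <- spark.kmeans(quakes, ~ latitude + longitude, k = 5)
summary(model)

# Attach a cluster label ("prediction" column) to every event
clustered <- predict(model, quakes)
showDF(select(clustered, "latitude", "longitude", "prediction"))
```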
The course consists of 3h 22min of content in total.
Edwin Bomela is a Big Data Engineer and Consultant, involved in multiple projects ranging from business intelligence and software engineering to IoT and big data analytics. His expertise is in building data processing pipelines in the Hadoop and cloud ecosystems, and in software development.
He is currently consulting at one of the top business intelligence consultancies, helping clients build data warehouses, data lakes, cloud data processing pipelines, and machine learning pipelines. The technologies he uses to meet client requirements include Hadoop, Amazon S3, Python, Django, Apache Spark, MSBI, Microsoft Azure, SQL Server Data Tools, Talend, and Elastic MapReduce.