O'Reilly - The Ultimate Hands-On Hadoop
by Frank Kane | Released June 2017 | ISBN: 9781788478489
Tame Your Big Data

In Detail

The world of Hadoop and "Big Data" can be intimidating - hundreds of different technologies with cryptic names form the Hadoop ecosystem. With this course, you'll not only understand what those systems are and how they fit together, but you'll go hands-on and learn how to use them to solve real business problems!

This course is comprehensive, covering over 25 different technologies in over 14 hours of video lectures. It's filled with hands-on activities and exercises, so you get real experience using Hadoop - it's not just theory.

You'll find a range of activities in this course for people at every level. If you're a project manager who just wants to learn the buzzwords, many of the activities in the course use web UIs that require no programming knowledge. If you're comfortable with command lines, we'll show you how to work with them too. And if you're a programmer, you'll write real scripts on a Hadoop system using Scala, Pig Latin, and Python.
- Chapter 1 : Learn all the buzzwords! And install Hadoop
- [Activity] Introduction, and install Hadoop on your desktop! 00:01:10
- Hadoop Overview and History 00:18:27
- Overview of Hadoop Ecosystem 00:03:02
- Tips for Using This Course 00:01:26
- Chapter 2 : Using Hadoop's Core: HDFS and MapReduce
- HDFS: What it is, and how it works 00:13:54
- [Activity] Install the MovieLens dataset into HDFS using the Ambari UI 00:06:20
- [Activity] Install the MovieLens dataset into HDFS using the command line 00:07:51
- MapReduce: What it is, and how it works 00:10:40
- How MapReduce distributes processing 00:12:57
- MapReduce example: Break down movie ratings by rating score 00:11:36
- [Activity] Installing Python, MRJob, and nano 00:07:43
- [Activity] Code up the ratings histogram MapReduce job and run it 00:07:36
- [Exercise] Rank Movies by their popularity 00:07:07
- [Activity] Check your results against mine! 00:08:24
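The ratings-histogram job in this chapter is written with the MRJob library; as a dependency-free illustration of the same map/shuffle/reduce flow, here is a pure-Python sketch (the tab-separated userID, movieID, rating, timestamp layout matches the MovieLens u.data file used in the course; the toy rows below are made up):

```python
from collections import defaultdict

def mapper(line):
    # MovieLens u.data lines: userID <tab> movieID <tab> rating <tab> timestamp
    user_id, movie_id, rating, timestamp = line.split("\t")
    yield rating, 1

def reducer(rating, counts):
    yield rating, sum(counts)

def run_job(lines):
    # Shuffle phase: group mapper output by key, as Hadoop does between map and reduce
    groups = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    return {key: next(reducer(key, values))[1] for key, values in groups.items()}

data = [
    "196\t242\t3\t881250949",
    "186\t302\t3\t891717742",
    "22\t377\t1\t878887116",
]
print(run_job(data))  # → {'3': 2, '1': 1}
```

In MRJob the mapper and reducer become methods on a job class and Hadoop handles the shuffle; the per-record logic is the same.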
- Chapter 3 : Programming Hadoop with Pig
- Introducing Ambari 00:09:50
- Introducing Pig 00:06:26
- Example: Find the oldest movie with 5-star rating using Pig 00:15:08
- [Activity] Find old 5-star movies with Pig 00:09:40
- More Pig Latin 00:07:34
- [Exercise] Find the most-rated one-star movie 00:01:56
- Pig Challenge: Compare Your Results to Mine! 00:05:37
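Pig Latin scripts like the ones in this chapter are dataflows of LOAD, FILTER, GROUP, and ORDER steps. As a rough sketch of the "most-rated one-star movie" exercise, here is the same dataflow in plain Python over made-up data (each comment names the Pig operation it mirrors):

```python
from collections import Counter

# Each tuple: (userID, movieID, rating) - toy stand-in for the MovieLens data
ratings = [
    (1, 101, 1), (2, 101, 1), (3, 202, 1),
    (4, 303, 5), (5, 101, 4),
]

# FILTER ratings BY rating == 1
one_star = [r for r in ratings if r[2] == 1]

# GROUP one_star BY movieID, then COUNT(*) per group
counts = Counter(movie_id for _, movie_id, _ in one_star)

# ORDER the counts DESC and take the top row
most_rated_one_star = counts.most_common(1)[0]
print(most_rated_one_star)  # → (101, 2)
```

In real Pig each intermediate relation is distributed across the cluster and the script compiles down to MapReduce (or Tez) jobs.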
- Chapter 4 : Programming Hadoop with Spark
- Why Spark? 00:10:07
- The Resilient Distributed Dataset (RDD) 00:10:14
- [Activity] Find the movie with the lowest average rating - with RDDs 00:15:34
- Datasets and Spark 2.0 00:06:28
- [Activity] Find the movie with the lowest average rating - with DataFrames 00:10:01
- [Activity] Movie recommendations with MLlib 00:12:16
- [Exercise] Filter the lowest-rated movies by number of ratings 00:02:51
- [Activity] Check your results against mine! 00:06:40
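The "lowest average rating" activities map each rating to a (movieID, (rating, 1)) pair, reduce by key, and then average. Here is a dependency-free sketch of that aggregation over made-up data, with comments naming the Spark RDD operations each step stands in for:

```python
from collections import defaultdict

# Toy (movieID, rating) pairs standing in for the ratings RDD
ratings = [(1, 5.0), (1, 4.0), (2, 1.0), (2, 2.0), (3, 3.0)]

# reduceByKey-style aggregation: sum ratings and counts per movie
totals = defaultdict(lambda: [0.0, 0])
for movie_id, rating in ratings:
    totals[movie_id][0] += rating
    totals[movie_id][1] += 1

# mapValues-style step: turn each (sum, count) into an average
averages = {movie_id: s / n for movie_id, (s, n) in totals.items()}

# The "lowest average rating" answer, as a min() over the averages
worst = min(averages, key=averages.get)
print(worst, averages[worst])  # → 2 1.5
```

With Spark DataFrames the same computation is a groupBy on movieID followed by an average aggregate, and the planner distributes the work for you.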
- Chapter 5 : Using relational data stores with Hadoop
- What is Hive? 00:06:32
- [Activity] Use Hive to find the most popular movie 00:10:46
- How Hive Works 00:09:11
- [Exercise] Use Hive to find the movie with the highest average rating 00:01:56
- Compare your solution to mine 00:04:11
- Integrating MySQL with Hadoop 00:08:00
- [Activity] Install MySQL and import our movie data 00:07:46
- [Activity] Use Sqoop to import data from MySQL to HDFS/Hive 00:07:31
- [Activity] Use Sqoop to export data from Hadoop to MySQL 00:07:17
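Hive's appeal is that it exposes plain SQL over data sitting in HDFS. The "most popular movie" query from this chapter can be sketched against Python's stdlib sqlite3 with hypothetical toy rows - the SQL is essentially what you would run in Hive:

```python
import sqlite3

# In-memory stand-in for a Hive "ratings" table (toy data, not the real dataset)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (user_id INT, movie_id INT, rating INT)")
conn.executemany(
    "INSERT INTO ratings VALUES (?, ?, ?)",
    [(1, 50, 5), (2, 50, 4), (3, 50, 5), (4, 172, 5), (5, 181, 3)],
)

# "Most popular movie" = the movie with the most ratings, as in the Hive activity
row = conn.execute(
    "SELECT movie_id, COUNT(*) AS cnt FROM ratings "
    "GROUP BY movie_id ORDER BY cnt DESC LIMIT 1"
).fetchone()
print(row)  # → (50, 3)
```

The difference is scale: Hive compiles this query into distributed jobs over files in HDFS rather than scanning a local table.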
- Chapter 6 : Using non-relational data stores with Hadoop
- Why NoSQL? 00:13:55
- What is HBase? 00:12:55
- [Activity] Import movie ratings into HBase 00:13:29
- [Activity] Use HBase with Pig to import data at scale 00:11:20
- Cassandra Overview 00:14:51
- [Activity] Installing Cassandra 00:11:44
- [Activity] Write Spark output into Cassandra 00:11:01
- MongoDB overview 00:16:54
- [Activity] Install MongoDB, and integrate Spark with MongoDB 00:12:45
- [Activity] Using the MongoDB shell 00:07:48
- Choosing a database technology 00:15:59
- [Exercise] Choose a database for a given problem 00:05:00
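HBase stores sparse rows under a row key, with cells addressed by column family and qualifier. A toy dict-based sketch of the data model used when importing movie ratings (one row per user, one column per rated movie - the row/column names here are illustrative, not the course's exact schema):

```python
# Toy stand-in for an HBase table: row key -> {"family:qualifier": value}
table = {}

def put(row_key, column, value):
    # HBase-style put: create the row if needed, then set one cell
    table.setdefault(row_key, {})[column] = value

# One row per userID; columns like "rating:<movieID>" hold that user's ratings
put("user:33", "rating:50", 5)
put("user:33", "rating:172", 4)
put("user:7", "rating:50", 3)

# A "get" fetches a whole row cheaply by its key - the access pattern HBase optimizes
print(table["user:33"])  # → {'rating:50': 5, 'rating:172': 4}
```

Rows can have wildly different column sets, which is why this layout suits sparse data like "which of thousands of movies did this user rate".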
- Chapter 7 : Querying Your Data Interactively
- Overview of Drill 00:07:56
- [Activity] Setting up Drill 00:10:58
- [Activity] Querying across multiple databases with Drill 00:07:07
- Overview of Phoenix 00:08:56
- [Activity] Install Phoenix and query HBase with it 00:07:08
- [Activity] Integrate Phoenix with Pig 00:11:46
- Overview of Presto 00:06:40
- [Activity] Install Presto, and query Hive with it 00:12:27
- [Activity] Query both Cassandra and Hive using Presto 00:09:01
- Chapter 8 : Managing your Cluster
- YARN Explained 00:10:02
- Tez explained 00:04:56
- [Activity] Use Hive on Tez and measure the performance benefit 00:08:36
- Mesos explained 00:07:14
- ZooKeeper explained 00:13:11
- [Activity] Simulating a failing master with ZooKeeper 00:06:48
- Oozie explained 00:11:56
- [Activity] Set up a simple Oozie workflow 00:16:39
- Zeppelin overview 00:05:02
- [Activity] Use Zeppelin to analyze movie ratings, part 1 00:12:28
- [Activity] Use Zeppelin to analyze movie ratings, part 2 00:09:46
- Hue Overview 00:08:08
- Other technologies worth mentioning 00:04:35
- Chapter 9 : Feeding Data to your Cluster
- Kafka explained 00:09:48
- [Activity] Setting up Kafka, and publishing some data 00:07:24
- [Activity] Publishing web logs with Kafka 00:10:21
- Flume explained 00:10:16
- [Activity] Set up Flume and publish logs with it 00:07:46
- [Activity] Set up Flume to monitor a directory and store its data in HDFS 00:09:12
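Kafka's core abstraction is an append-only log per topic that producers publish to and consumers read in order. A toy in-memory sketch of that publish/consume flow (real Kafka persists partitioned logs and tracks per-consumer offsets; none of that is modeled here, and the log lines are made up):

```python
from collections import defaultdict, deque

# A toy in-memory "broker": one append-only log (deque) per topic
topics = defaultdict(deque)

def publish(topic, message):
    # Producers append to the end of the topic's log
    topics[topic].append(message)

def consume(topic):
    # Consumers read from the front; returns None if nothing is waiting
    return topics[topic].popleft() if topics[topic] else None

publish("weblogs", "66.249.75.168 - GET /robots.txt")
publish("weblogs", "54.165.199.171 - GET /index.html")
print(consume("weblogs"))  # → 66.249.75.168 - GET /robots.txt
```

The key property the sketch preserves is decoupling: publishers and consumers only agree on a topic name, never on each other.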
- Chapter 10 : Analyzing Streams of Data
- Spark Streaming: Introduction 00:14:28
- [Activity] Analyze web logs published with Flume using Spark streaming 00:14:20
- [Exercise] Monitor Flume-published logs for errors in real time 00:02:02
- Exercise solution: Aggregating HTTP access codes with Spark Streaming 00:04:25
- Apache Storm: Introduction 00:09:28
- [Activity] Count words with Storm 00:14:35
- Flink: An Overview 00:06:53
- [Activity] Counting words with Flink 00:10:21
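Spark Streaming chops a live stream into micro-batches and can aggregate over a sliding window of recent batches. A minimal sketch of that windowed word count, assuming toy batches rather than a real DStream:

```python
from collections import Counter, deque

# Each batch is the list of lines that arrived in one interval;
# the window keeps only the last `window_size` batches (old ones fall off)
window_size = 2
window = deque(maxlen=window_size)

def process_batch(batch):
    window.append(batch)
    # Recount every word currently inside the window
    return Counter(
        word for lines in window for line in lines for word in line.split()
    )

process_batch(["error error ok"])
counts = process_batch(["ok warn"])
print(counts["error"], counts["ok"])  # → 2 2
```

Real Spark Streaming avoids full recounts by incrementally adding the newest batch and subtracting the one that slid out, but the windowed result is the same.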
- Chapter 11 : Designing Real-World Systems
- The Best of the Rest 00:09:25
- Review: How the pieces fit together 00:06:30
- Understanding your requirements 00:08:03
- Sample application: consume web server logs and keep track of top sellers 00:10:07
- Sample application: serving movie recommendations to a website 00:11:18
- [Exercise] Design a system to report web sessions per day 00:02:53
- Exercise solution: Design a system to count daily sessions 00:04:24
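Whatever ingestion path the sessions-per-day design uses (Kafka or Flume feeding Spark, say), the core computation is "distinct session IDs per calendar day". A sketch of that aggregation over made-up log events:

```python
from collections import defaultdict

# Toy web-server events: (ISO timestamp, session_id). In the exercise these
# would stream in from the cluster; here they are a hard-coded sample.
events = [
    ("2017-06-01T09:14:02", "s1"),
    ("2017-06-01T09:15:30", "s1"),  # repeat hit from the same session
    ("2017-06-01T11:02:11", "s2"),
    ("2017-06-02T08:00:00", "s3"),
]

sessions_per_day = defaultdict(set)
for timestamp, session_id in events:
    day = timestamp.split("T")[0]
    sessions_per_day[day].add(session_id)  # sets deduplicate repeat hits

daily_counts = {day: len(ids) for day, ids in sessions_per_day.items()}
print(daily_counts)  # → {'2017-06-01': 2, '2017-06-02': 1}
```

At scale the set-per-day becomes a distinct-count aggregate (exact, or approximate via something like HyperLogLog when memory matters).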
- Chapter 12 : Learning More
- Books and online resources 00:05:33
- Bonus lecture: Discounts on my other big data / data science courses! 00:02:26