GCP-ENG: Google Cloud Platform – Data Engineer Specialty

$2,750.00

  • Duration: 5 Days
  • Mode of Delivery: Online, instructor-led training
  • Level: Intermediate
  • Job role: Data Engineer
  • Preparation for exam: Google Cloud Certified Professional Data Engineer
  • Cost: USD $2,750.00

Data management, data analytics, machine learning, and artificial intelligence are all hot topics, and who does these better than Google? Our Google Certified Professional Data Engineer course prepares you for the GCP Professional Data Engineer certification exam so you can take the next step in your cloud career and demonstrate your proficiency in one of the most in-demand disciplines in the industry today. Along the way you will solidify your foundations in data engineering and machine learning, so that by the end of the course you can design and build data processing solutions, operationalize machine learning models, and work confidently with the relevant GCP data processing tools and technologies.


Audience

This course is intended for experienced developers who are responsible for managing big data transformations, including:
• Extracting, loading, transforming, cleaning, and validating data
• Designing pipelines and architectures for data processing
• Creating and maintaining machine learning and statistical models
• Querying datasets, visualizing query results and creating reports

Prerequisites

To get the most out of this course, you should have:
• Completed the Google Cloud Fundamentals: Big Data & Machine Learning course, or have equivalent experience
• Basic proficiency with a common query language such as SQL
• Experience with data modeling and extract, transform, load (ETL) activities
• Experience developing applications using a common programming language such as Python
• Familiarity with machine learning and/or statistics

Skills Gained

After completing this course, students will be able to:
• Design a data processing system
• Build and maintain data structures and databases
• Analyze data and enable machine learning
• Optimize data representations, data infrastructure performance, and cost
• Ensure reliability of data processing infrastructure
• Visualize data
• Design secure data processing systems

Course outline

Module 1: Introduction to Data Engineering
• Explore the role of a data engineer
• Analyze data engineering challenges
• Intro to BigQuery
• Data Lakes and Data Warehouses
• Demo: Federated Queries with BigQuery
• Transactional Databases vs Data Warehouses
• Website Demo: Finding PII in your dataset with DLP API
• Partner effectively with other data teams
• Manage data access and governance
• Build production-ready pipelines
• Review GCP customer case study
• Lab: Analyzing Data with BigQuery

Module 2: Building a Data Lake
• Introduction to Data Lakes
• Data Storage and ETL options on GCP
• Building a Data Lake using Cloud Storage
• Optional Demo: Optimizing cost with Google Cloud Storage classes and Cloud Functions
• Securing Cloud Storage
• Storing All Sorts of Data Types
• Video Demo: Running federated queries on Parquet and ORC files in BigQuery
• Cloud SQL as a relational Data Lake
• Lab: Loading Taxi Data into Cloud SQL

Module 3: Building a Data Warehouse
• The modern data warehouse
• Intro to BigQuery
• Demo: Query TB+ of data in seconds
• Getting Started
• Loading Data
• Video Demo: Querying Cloud SQL from BigQuery
• Lab: Loading Data into BigQuery
• Exploring Schemas
• Demo: Exploring BigQuery Public Datasets with SQL using INFORMATION_SCHEMA
• Schema Design
• Nested and Repeated Fields
• Demo: Nested and repeated fields in BigQuery
• Lab: Working with JSON and Array data in BigQuery
• Optimizing with Partitioning and Clustering
• Demo: Partitioned and Clustered Tables in BigQuery
• Preview: Transforming Batch and Streaming Data

Module 4: Introduction to Building Batch Data Pipelines
• EL, ELT, ETL
• Quality considerations
• How to carry out operations in BigQuery
• Demo: ELT to improve data quality in BigQuery
• Shortcomings
• ETL to solve data quality issues

Module 5: Executing Spark on Cloud Dataproc
• The Hadoop ecosystem
• Running Hadoop on Cloud Dataproc
• GCS instead of HDFS
• Optimizing Dataproc
• Lab: Running Apache Spark jobs on Cloud Dataproc

Module 6: Serverless Data Processing with Cloud Dataflow
• Cloud Dataflow
• Why customers value Dataflow
• Dataflow Pipelines
• Lab: A Simple Dataflow Pipeline (Python/Java)
• Lab: MapReduce in Dataflow (Python/Java)
• Lab: Side Inputs (Python/Java)
• Dataflow Templates
• Dataflow SQL

Module 7: Manage Data Pipelines with Cloud Data Fusion and Cloud Composer
• Building Batch Data Pipelines visually with Cloud Data Fusion
• Components
• UI Overview
• Building a Pipeline
• Exploring Data using Wrangler
• Lab: Building and executing a pipeline graph in Cloud Data Fusion
• Orchestrating work between GCP services with Cloud Composer
• Apache Airflow Environment
• DAGs and Operators
• Workflow Scheduling
• Optional Long Demo: Event-triggered Loading of data with Cloud Composer, Cloud Functions, Cloud Storage, and BigQuery
• Monitoring and Logging
• Lab: An Introduction to Cloud Composer

Module 8: Introduction to Processing Streaming Data
• Processing Streaming Data

Module 9: Serverless Messaging with Cloud Pub/Sub
• Cloud Pub/Sub
• Lab: Publish Streaming Data into Pub/Sub

Module 10: Cloud Dataflow Streaming Features
• Cloud Dataflow Streaming Features
• Lab: Streaming Data Pipelines

Module 11: High-Throughput BigQuery and Bigtable Streaming Features
• BigQuery Streaming Features
• Lab: Streaming Analytics and Dashboards
• Cloud Bigtable
• Lab: Streaming Data Pipelines into Bigtable

Module 12: Advanced BigQuery Functionality and Performance
• Analytic Window Functions
• Using WITH Clauses
• GIS Functions
• Demo: Mapping Fastest Growing Zip Codes with BigQuery GeoViz
• Performance Considerations
• Lab: Optimizing your BigQuery Queries for Performance
• Optional Lab: Creating Date-Partitioned Tables in BigQuery

Module 13: Introduction to Analytics and AI
• What is AI?
• From Ad Hoc Data Analysis to Data-Driven Decisions
• Options for ML models on GCP

Module 14: Prebuilt ML model APIs for Unstructured Data
• Unstructured Data is Hard
• ML APIs for Enriching Data
• Lab: Using the Natural Language API to Classify Unstructured Text

Module 15: Big Data Analytics with Cloud AI Platform Notebooks
• What is a Notebook
• BigQuery Magic and Ties to Pandas
• Lab: BigQuery in Jupyter Labs on AI Platform

Module 16: Production ML Pipelines with Kubeflow
• Ways to do ML on GCP
• Kubeflow
• AI Hub
• Lab: Running AI models on Kubeflow

Module 17: Custom Model building with SQL in BigQuery ML
• BigQuery ML for Quick Model Building
• Demo: Train a model with BigQuery ML to predict NYC taxi fares
• Supported Models
• Lab Option 1: Predict Bike Trip Duration with a Regression Model in BQML
• Lab Option 2: Movie Recommendations in BigQuery ML

Module 18: Custom Model building with Cloud AutoML
• Why AutoML?
• AutoML Vision
• AutoML NLP
• AutoML Tables

Schedule

Click on the following link to see the current Course Schedule.
Our minimum class size is 3 for this course.
If there are no scheduled dates for this course, it can be customized to suit the time and skill needs of clients, and it can be held online, at a rented location, or at your premises.
Click on the following link to arrange a custom course: Enquire about a course date

Product Information

If you have been looking into cloud computing, chances are you have come across the term big data, which describes the large volumes of data that modern businesses process. The challenge is not so much the sheer mass of data as how we can put it to use. Almost everything carries a digital footprint these days, and ingesting that data and extracting meaningful information from it is a real challenge. Google offers services that work together to let us first gather data, then process it, and finally analyze it; the faster we can analyze data, the faster business decisions can be made. This is why big data has become so important to many organizations.
We can ingest data from multiple sources, execute code to process this data, and then analyze it to maximize our business capabilities.

Of course, this is a very simplistic view, and there are complexities when architecting our solutions. Pub/Sub is a messaging and event ingestion service that acts as the glue between loosely coupled systems. It allows us to send and receive messages between independent applications while decoupling the publishers of events from the subscribers to those events, so publishers do not need to know anything about their subscribers. Pub/Sub is fully managed and scales with ease, making it a perfect fit for a modern stream analytics pipeline.
There are some core concepts that you should understand:
• A publisher is an application that will create and send messages to a topic.
• A topic is a resource to which messages are sent by publishers.
• A subscription represents the stream of messages from a single topic, to be delivered to the subscribing application. Subscribers receive messages through either push or pull delivery: with push, Pub/Sub delivers messages to an endpoint using a webhook; with pull, the application retrieves messages using HTTPS requests to the Google API.
• A message is the data that a publisher will send to a topic. Put simply, it is data in transit through the system.
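To make these concepts concrete, here is a minimal sketch using the google-cloud-pubsub Python client library that creates a topic and attaches a subscription to it. The project, topic, and subscription IDs are placeholder values, not part of the course material.

    from google.cloud import pubsub_v1

    # Placeholder identifiers -- substitute your own project and resource names.
    project_id = "my-project"
    topic_id = "taxi-rides"
    subscription_id = "taxi-rides-sub"

    publisher = pubsub_v1.PublisherClient()
    subscriber = pubsub_v1.SubscriberClient()

    # A topic is the named resource that publishers send messages to.
    topic_path = publisher.topic_path(project_id, topic_id)
    publisher.create_topic(request={"name": topic_path})

    # A subscription streams messages from a single topic to one
    # subscribing application.
    subscription_path = subscriber.subscription_path(project_id, subscription_id)
    subscriber.create_subscription(
        request={"name": subscription_path, "topic": topic_path}
    )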
Publishers can be any application that can make HTTPS requests to googleapis.com, whether existing GCP services, Internet of Things (IoT) devices, or end-user applications.
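As an illustration, a publisher built on the same Python client library might look like the following sketch (the project and topic IDs are again placeholders). Each publish() call returns a future that resolves to the server-assigned message ID once Pub/Sub has accepted the message.

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    # Placeholder project and topic IDs.
    topic_path = publisher.topic_path("my-project", "taxi-rides")

    # Payloads are raw bytes; optional attributes ride along as keyword args.
    future = publisher.publish(
        topic_path,
        b'{"ride_id": 42, "fare": 11.50}',
        origin="mobile-app",
    )
    print("Published message ID:", future.result())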
As we mentioned earlier, subscribers receive messages by either pull or push delivery. Like publishers, pull subscribers can be any application that can make HTTPS requests to googleapis.com; the subscriber application initiates requests to Pub/Sub to retrieve messages. Push subscribers, on the other hand, must be webhook endpoints that can accept POST requests over HTTPS; in this case, Cloud Pub/Sub initiates requests to the subscriber application to deliver messages. Consider which method best suits your requirements: if you expect a large volume of messages (for example, more than one message per second), the pull delivery method is advisable; if you have multiple topics that must be processed by the same webhook, the push delivery method is advisable.
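For completeness, here is a minimal pull subscriber, again a sketch built on the Python client library with placeholder IDs. It opens a streaming pull, processes each message in a callback, and acknowledges it so Pub/Sub will not redeliver it.

    from concurrent.futures import TimeoutError
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    # Placeholder project and subscription IDs.
    subscription_path = subscriber.subscription_path("my-project", "taxi-rides-sub")

    def callback(message):
        # Process the payload, then ack so the message is not redelivered.
        print("Received:", message.data)
        message.ack()

    # subscribe() starts a background streaming pull and returns a future.
    streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
    with subscriber:
        try:
            streaming_pull_future.result(timeout=30)  # listen for 30 seconds
        except TimeoutError:
            streaming_pull_future.cancel()
            streaming_pull_future.result()  # block until shutdown completes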

Additional Information and FAQs

CERTIFICATE OF COMPLETION: Participants will receive a certificate of completion at the end of a course. This is not an official certification for the product and/or software. Our course descriptions indicate the appropriate certification exam(s) that participants can sit. Data Vision Systems does not provide certification or deliver the certification exams. Participants are responsible for arranging and paying for certification exams with the appropriate certification body.

CANCELLATION POLICY: There is never a fee for cancelling at least seven business days before a class, for any reason. Data Vision Systems reserves the right to cancel any course due to insufficient registration or other extenuating circumstances; participants will be advised before any such cancellation.
