Your industry-informed, hands-on pathway into data engineering
An immersive programme that gives you the skills and experience you need to become a top-performing data engineer
Learn by building and deploying production-grade systems, with support from a thriving community of industry experts.
Gain experience building real industry systems through our industry projects
Our industry projects put you in the position of an engineer on the job. You are dropped into cloud infrastructure that mirrors what you’d find in the workplace, then follow step-by-step instructions to build data pipelines and models, learning by doing.
Build a solid foundation in software engineering
Software Engineering
Learn the core of writing production ready code, following industry best practices.
You will build two projects that teach you basic and advanced Python alongside other industry-relevant tools, including the command line and version control with git and GitHub. In the first project you will create a command-line assistant that processes multiple entries from IMDb. In the second you will implement the Hangman game using object-oriented programming in Python.
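For a flavour of the object-oriented style the Hangman project teaches, here is a minimal sketch; the class design and word list are illustrative, not the project's actual solution:

```python
import random

class Hangman:
    """A minimal command-line Hangman game built around a single class."""

    def __init__(self, words, lives=5):
        self.word = random.choice(words)   # secret word to guess
        self.lives = lives                 # wrong guesses remaining
        self.guessed = set()               # letters tried so far

    def masked_word(self):
        # Reveal guessed letters; hide the rest behind underscores
        return " ".join(c if c in self.guessed else "_" for c in self.word)

    def guess(self, letter):
        self.guessed.add(letter)
        if letter not in self.word:
            self.lives -= 1

    def play(self):
        while self.lives > 0 and set(self.word) - self.guessed:
            print(f"{self.masked_word()}  (lives: {self.lives})")
            self.guess(input("Guess a letter: ").strip().lower())
        print("You won!" if self.lives > 0 else f"Out of lives. The word was '{self.word}'.")

if __name__ == "__main__":
    Hangman(["python", "pipeline", "kafka"]).play()
```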
Module 1: Python Programming
- Arithmetic, Variable Assignment and Strings
- Lists and Sets
- Dictionaries, Tuples and Operators
- Control Flow
- Loops
- Functions
- Error Handling
- The Python environment
- Advanced Python
- Debugging
- Object Oriented Programming
Module 2: The Command Line
- File Navigation
- File Manipulations
- File Permissions
- Advanced Command Line Features
Module 3: Git and GitHub
- Version Control
- Commits and Branches
- Fetching and Merging
- Merge Conflicts
- Pull Requests
- README Files
- GitHub Security
Then specialise in data engineering
Data Engineering
Learn how to store, share and process various types of data at scale.
- Build a complete data solution for a multinational organisation, from data acquisition to analysis. Write Python code to extract large datasets from multiple data sources. Utilise the power of Pandas to clean and analyse the data (a cleaning sketch in this style follows the list). Build a star-schema database for optimised data storage and access. Perform complex SQL queries to extract valuable insights and make informed decisions for the organisation.
- Build an experiment analytics data pipeline on AWS that runs thousands of experiments per day and crunches billions of data points to provide valuable insights to improve the product.
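To give a flavour of the Pandas work in the first project, here is a minimal cleaning sketch; the column names and cleaning rules are illustrative:

```python
import pandas as pd

# Illustrative raw data with the kinds of problems real extracts contain
raw = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "amount": ["10.5", "N/A", "N/A", "7.0"],
    "country": ["GB", "gb", "gb", "US"],
})

clean = (
    raw.drop_duplicates(subset="order_id")   # remove repeated orders
       .assign(
           amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"),  # "N/A" -> NaN
           country=lambda df: df["country"].str.upper(),                    # normalise codes
       )
       .dropna(subset=["amount"])            # drop rows with no usable amount
)
print(clean)
```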
Module 1: Data Formats and Processing Libraries
- JSON, CSV, XLSX and YAML
- Tabular Data
- Pandas Dataframes
- Advanced Dataframe Operations
- Data Cleaning in Pandas
- NumPy
- Missing Data
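As a quick illustration of moving between the formats this module covers, here is a sketch that round-trips a small dataset between JSON, a Pandas DataFrame and CSV; the payload is made up:

```python
import io
import json
import pandas as pd

# A small JSON payload standing in for data read from a file or an API
payload = '[{"city": "London", "temp_c": 14.2}, {"city": "Paris", "temp_c": 16.8}]'

df = pd.DataFrame(json.loads(payload))           # JSON -> tabular DataFrame
csv_text = df.to_csv(index=False)                # DataFrame -> CSV text
round_trip = pd.read_csv(io.StringIO(csv_text))  # CSV text -> DataFrame again
print(round_trip)
```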
Module 2: Web APIs
- Basics of APIs and Communication Protocols
- Working with API Requests
- FastAPI
- Routing with FastAPI
- Sending Data to FastAPI
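To make the FastAPI material concrete, here is a minimal sketch of routing and receiving data; the endpoint names and model fields are illustrative:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Reading(BaseModel):
    sensor_id: str
    value: float

@app.get("/readings/{sensor_id}")
def get_reading(sensor_id: str):
    # Path-parameter routing: GET /readings/abc -> sensor_id == "abc"
    return {"sensor_id": sensor_id, "value": 0.0}

@app.post("/readings")
def create_reading(reading: Reading):
    # The JSON request body is validated against the Reading model
    return {"stored": reading.sensor_id}
```

If saved as main.py, this runs with `uvicorn main:app --reload`.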
Module 3: SQL
- What is SQL?
- SQL Setup
- SQL Tools Setup
- SQL Commands
- SQL Best Practices
- SELECT and Sorting
- The WHERE Clause
- CRUD Creating Tables
- CRUD Altering Tables
- SQL JOINs
- SQL JOIN Types
- SQL Common Aggregations
- SQL GROUP BY
- Creating Subqueries
- Types of Subqueries
- CRUD Subquery Operations
- Common Table Expressions (CTEs)
- psycopg2 and SQLAlchemy
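As a taste of querying a database from Python with SQLAlchemy and psycopg2, here is a minimal sketch; the connection string, table and columns are placeholders:

```python
from sqlalchemy import create_engine, text

# Placeholder connection string: swap in your own credentials and database
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/sales")

query = text("""
    SELECT country, SUM(amount) AS total
    FROM orders
    WHERE status = :status
    GROUP BY country
    ORDER BY total DESC
""")

with engine.connect() as conn:
    # Bound parameters (:status) keep the query safe from SQL injection
    for row in conn.execute(query, {"status": "shipped"}):
        print(row.country, row.total)
```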
Module 4: Essential Cloud Technology
- What is the Cloud?
- Essential Cloud Concepts
- AWS Identity and Access Management
- AWS CLI
- Introduction to Amazon S3
- S3 Objects and boto3
- Amazon EC2
- Virtual Private Cloud
- IAM Roles
- Amazon RDS
- Billing in AWS
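For a feel of working with S3 from Python, here is a minimal boto3 sketch; it assumes AWS credentials are already configured, and the bucket, file and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file to a bucket
s3.upload_file("report.csv", "my-data-bucket", "reports/report.csv")

# List the objects stored under a prefix
response = s3.list_objects_v2(Bucket="my-data-bucket", Prefix="reports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```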
Module 5: Big Data Engineering Foundations
- The Data Engineering Landscape
- Data Pipelines
- Data Ingestion and Data Storage
- Enterprise Data Warehouses
- Batch vs Real-Time Processing
- Structured, Unstructured and Complex Data
Module 6: Data Ingestion
- Principles of Data Ingestion
- Batch Processing
- Real-Time Data Processing
- Kafka Essentials
- Kafka-Python
- Streaming in Kafka
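To make the Kafka-Python material concrete, here is a minimal producer and consumer sketch; the broker address and topic name are placeholders for your own cluster:

```python
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("experiment-events", {"experiment_id": 42, "metric": 0.93})
producer.flush()  # make sure the message actually leaves the client

consumer = KafkaConsumer(
    "experiment-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # loops until interrupted
```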
Module 7: Data Wrangling and Transformation
- Data Transformations: ELT & ETL
- Apache Spark and PySpark
- Distributed Processing with Spark
- Integrating Spark & Kafka
- Integrating Spark & AWS S3
- Spark Streaming
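Here is a minimal PySpark sketch of the kind of distributed transformation this module covers; the in-memory data stands in for files on S3 or a Kafka stream:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

# Illustrative data: experiment scores that would normally come from storage
df = spark.createDataFrame(
    [("exp-1", 0.91), ("exp-1", 0.87), ("exp-2", 0.75)],
    ["experiment_id", "score"],
)

# Aggregate across the cluster: average score per experiment
summary = df.groupBy("experiment_id").agg(F.avg("score").alias("avg_score"))
summary.show()

spark.stop()
```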
Module 8: Data Orchestration
- Apache Airflow
- Integrating Airflow & Spark
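As an illustration of orchestration with Airflow, here is a minimal DAG sketch, assuming Airflow 2.4 or newer; the task bodies are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw data")   # placeholder task body

def transform():
    print("cleaning data")      # placeholder task body

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract runs before transform
```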
Module 9: Advanced Cloud Technologies and Databricks
- MSK and MSK Connect
- AWS API Gateway
- Integrating API Gateway with Kafka
- Databricks Essentials
- Integrating Databricks with Amazon S3
- AWS MWAA
- Orchestrating Databricks Workloads on MWAA
- AWS Kinesis
- Integrating Databricks with AWS Kinesis
- Integrating API Gateway with Kinesis
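To ground the Kinesis material, here is a minimal producer sketch using boto3; it assumes AWS credentials are configured, and the stream name is a placeholder:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

record = {"experiment_id": 42, "metric": 0.93}
kinesis.put_record(
    StreamName="experiment-stream",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=str(record["experiment_id"]),  # controls shard assignment
)
```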
Career support
Work with our outcomes team to launch your new career.
Programme Schedule
The programme is fully remote, with no traditional “classes” to attend. You can progress through your learning and projects on whatever schedule suits you, booking time with support engineers for guidance as you need it.
Drop-in live support sessions available throughout the day and evening
Online community meetups Monday to Thursday, 6:30PM to 9:30PM, where you can work alongside your peers, with support engineers on hand for instant support
Launch your career with AiCore support
Career playbook
Have your CV, LinkedIn and GitHub portfolio optimised. Learn how to source your ideal roles.
Get referred by alumni
Our alumni network hires directly from AiCore. Over 15% of AiCore graduates are hired this way.
Interview coaching
Feel 100% confident going into any hiring process. Our team will prepare you with general and technical mock interviews.
Curated job board
Access our internal job board of curated roles.
Success stories
Learning packages that work for you
Professional certification
Obtain professional certifications in essential skills for a successful career as a data analyst, data scientist, data engineer, cloud engineer or machine learning engineer.
Career launch
The end-to-end solution for launching your career as a data analyst, data scientist, data engineer, cloud engineer or machine learning engineer.
Frequently Asked Questions
Who are we?
AiCore is a specialist AI & Data career accelerator. We deliver an immersive programme that will help launch your career in AI & Data. To date, over 2,500 students have successfully graduated from our programme.
Where will I take classes?
Our programmes are 100% online and available on demand.
Do I need prior knowledge or an academic degree to join?
No, we don’t require any specific degrees or certifications to join the programme.
Do I receive a certification when I complete the programme?
Yes, we offer the following certification for each specialism:
- Data Engineering: Databricks Certified Data Engineer Associate
- Cloud Engineering: Microsoft Azure Fundamentals (AZ-900)
- Data Analytics: Microsoft Power BI Data Analyst Associate (PL-300)
- DevOps Engineering: HashiCorp Certified Terraform Associate (003)