198 Big Data Hadoop jobs in Canada
Engineer - Distributed Data Processing System
Posted 13 days ago
Job Description
Huawei Canada has an immediate 12-month contract opening for an Engineer.
About the team:
Established in 2014, the Distributed Scheduling and Data Engine Lab is Huawei Cloud's technical innovation center in Canada. The lab focuses on researching and developing advanced cloud technologies, supporting the productization and iterative optimization of its technical achievements. Current research areas include cloud native databases, infrastructure resource scheduling and prediction, cloud-native middleware, media engines, and user experience studies. The lab fosters a robust technical environment, allowing collaboration with industry experts to create a highly competitive cloud platform.
About the job:
Work with a team of architects and engineers to develop proof-of-concept distributed systems and product components
Investigate and design a data processing system for extra-large-volume, real-time streaming data, built as a distributed system on the latest state-of-the-art hardware technology
Continuously enhance the AI-native data analysis system to fulfill customer requirements, applying AI techniques to public cloud capacity management and scheduling
Principal Engineer - Distributed Data Processing System
Posted 13 days ago
Job Description
Huawei Canada has an immediate permanent opening for a Principal Engineer.
About the team:
Established in 2014, the Distributed Scheduling and Data Engine Lab is Huawei Cloud's technical innovation center in Canada. The lab focuses on researching and developing advanced cloud technologies, supporting the productization and iterative optimization of its technical achievements. Current research areas include cloud native databases, infrastructure resource scheduling and prediction, cloud-native middleware, media engines, and user experience studies. The lab fosters a robust technical environment, allowing collaboration with industry experts to create a highly competitive cloud platform. Our team has an immediate permanent opening for a Principal Software Engineer.
About the job:
Work with a team of architects and engineers to develop proof-of-concept distributed systems and product components.
Investigate and design a data processing system for extra-large-volume, real-time streaming data, built as a distributed system on the latest state-of-the-art hardware technology.
Continuously enhance the AI-native data analysis system to fulfill customer requirements, applying AI techniques to public cloud capacity management and scheduling.
Data Engineer
Posted 5 days ago
Job Description
The **Data Engineer** will work closely with senior developers to build new ETL pipelines in Airflow and maintain existing ones. They will be part of our Data Technology team, working with an all-new technology stack and collaborating closely with internal and external stakeholders.
Now, if you were to come on board as our **Data Engineer**, we’d ask you to do the following for us:
- Design, implement and maintain data pipelines for extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
- Build new Airflow DAGs and maintain existing ones (a minimal illustrative sketch follows this list)
- Maintain and add to our middle tier API
- Prototype new technology that supports our vision of making our consumer’s experiences better with our data
- Provide support and insights to the business analytics and data science teams
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
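As a rough illustration of the Airflow work described above (not this employer's actual pipeline), here is a minimal sketch of a daily extract-transform-load DAG. The DAG name, task logic, and data are hypothetical, and it assumes a recent Airflow 2.x environment.

```python
# A minimal, illustrative daily ETL DAG (Airflow 2.x style).
# The DAG name, task logic, and data below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Stand-in for pulling rows from a source system.
    return [{"id": 1, "amount": 42.0}]


def transform(**context):
    # Read the upstream task's return value via XCom and apply a simple rule.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [row for row in rows if row["amount"] > 0]


def load(**context):
    # A real pipeline would write to a warehouse table here.
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")


with DAG(
    dag_id="daily_sales_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",         # Airflow 2.4+ argument; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

The `>>` operator declares task dependencies; building and maintaining DAGs largely comes down to keeping these tasks and their dependencies well factored.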
Think you have what it takes to be our **Data Engineer**? We’re committed to hiring the best talent for the role. Here’s how we’ll know you’ll be successful in the role:
- At least 2-3 years of relevant experience in a similar role or function
- Bachelor’s degree or equivalent in Computer Science, Computer Engineering, Information Systems or similar
- Programming skills with Python and Spark
- Exposure to at least one cloud provider, AWS preferred
- Knowledge of SQL, relational database concepts and SQL scripting
- Experience with Airflow or similar orchestration systems highly preferred
- Web development skills with frameworks like Angular or Node.js would be a plus
- Experience with Docker, Kafka, or other data stream processing platforms is preferred
Data Engineer
Posted 5 days ago
Job Description
# **Job Summary**
The **Data Engineer** will work closely with senior developers to build new ETL pipelines in Airflow and maintain existing ones. They will be part of our Data Technology team, working with an all-new technology stack and collaborating closely with internal and external stakeholders.
Now, if you were to come on board as our **Data Engineer**, we’d ask you to do the following for us:
- Design, implement and maintain data pipelines for extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
- Build new Airflow DAGs and maintain existing ones
- Maintain and add to our middle tier API
- Prototype new technology that supports our vision of making our consumer’s experiences better with our data
- Provide support and insights to the business analytics and data science teams
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
Think you have what it takes to be our **Data Engineer**? We’re committed to hiring the best talent for the role. Here’s how we’ll know you’ll be successful in the role:
- At least 2-3 years of relevant experience in a similar role or function
- Bachelor’s degree or equivalent in Computer Science, Computer Engineering, Information Systems or similar
- Programming skills with Python and Spark
- Exposure to at least one cloud provider, AWS preferred
- Knowledge of SQL, relational database concepts and SQL scripting
- Experience with Airflow or similar orchestration systems highly preferred
- Web development skills with frameworks like Angular or Node.js would be a plus
- Experience with Docker, Kafka, or other data stream processing platforms is preferred
Compass Group Canada is committed to nurturing a diverse workforce representative of the communities within which we operate. We encourage and are pleased to consider all qualified candidates, without regard to race, colour, citizenship, religion, sex, marital / family status, sexual orientation, gender identity, aboriginal status, age, disability or persons who may require an accommodation, to apply.
For accommodation requests during the hiring process, please contact for further information.
Data Engineer
Posted today
Job Description
Hybrid Hadoop Engineer and Hadoop Infrastructure Administrator to build and maintain a scalable and resilient Big Data framework supporting our Data Scientists. As an administrator, you will deploy and maintain Hadoop clusters, add and remove nodes using cluster management and monitoring tools such as Cloudera Manager, and support performance and scalability requirements in support of our data scientists' needs. Some relational database administrator experience is also desirable to support the general administration of relational databases.
Design, build, and maintain Big Data workflows/pipelines that process a continuous stream of data, with experience in the end-to-end design and build of near-real-time and batch data pipelines (a brief illustrative sketch follows the list below).
Demonstrated work experience with the following Big Data and distributed programming models and technologies:
Knowledge of database structures, theories, principles, and practices (both SQL and NoSQL).
Active development of ETL processes using Spark or other highly parallel technologies, and implementing ETL/data pipelines
Experience with Data technologies and Big Data tools, like Spark, Kafka, Hive
Understanding of MapReduce and other data query, processing, and aggregation models
Understanding of the challenges of transforming data across a distributed, clustered environment
Experience with techniques for consuming, holding, and aging out continuous data streams
Ability to provide quick ingestion tools and corresponding access APIs for continuously changing data schemas, working closely with Data Engineers on specific transformation and access needs
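As a rough sketch of the near-real-time pipeline work described above (not a definitive implementation), the following PySpark Structured Streaming job consumes JSON events from a Kafka topic, parses and filters them, and appends the result to storage. The broker address, topic name, schema, and paths are hypothetical.

```python
# Minimal sketch of a near-real-time pipeline: read JSON events from a Kafka
# topic, parse and filter them, and append the result to storage.
# Requires the spark-sql-kafka package; broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka_stream_etl").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
)

parsed = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .where(col("value") > 0)
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/events")               # hypothetical output location
    .option("checkpointLocation", "/chk/events")  # required for fault tolerance
    .outputMode("append")
    .start()
)
query.awaitTermination()
```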
Preferred:
Experience as a database administrator (DBA) responsible for keeping critical tool databases up and running
Building and managing high-availability environments for databases and HDFS systems
Familiarity with transaction recovery techniques and DB Backup
Skills and Attributes:
Ability to have effective working relationships with all functional units of the organization
Excellent written, verbal, and presentation skills
Excellent interpersonal skills
Ability to work as part of a cross-cultural team
Self-starter and self-motivated
Ability to work with minimal supervision
Ability to work under pressure and manage competing priorities
Data Engineer
Posted today
Job Description
Salary:
Forum Asset Management Data Engineer
Location: 181 Bay Street, Toronto
In Office: 5 days per week
Overview of Forum:
Join us in delivering Extraordinary Outcomes through investment.
Forum is an investor, developer and asset manager operating across North America for over 28 years, focusing on real estate, private equity and infrastructure, with a strategic concentration in housing.
Our AUM exceeds $3 billion. We are committed to sustainability, responsible investing and creating value that benefits our stakeholders and the communities in which we invest, what we call our Extraordinary Outcomes.
In 2024, Forum completed the largest real estate transaction in Canada with the Alignvest acquisition, making us the largest owner of Purpose-Built Student Accommodation (PBSA) in Canada through our $.5B open-ended Real Estate Income and Impact Fund (REIIF). Our national development pipeline now exceeds 3.5 billion, positioning Forum as the largest developer of PBSA in Canada, operating from coast to coast.
The Forum team is adaptable, agile, and dynamic, committed to sustainable and responsible investing. Our people bring diverse cultural backgrounds and professional experiences, fostering innovation and thought leadership. We uphold integrity, trust, and transparency as core values, knowing that to achieve Extraordinary Outcomes, we need to support and develop an Extraordinary team.
Position Overview:
We're looking for a Data Engineer to own the architecture, build, and evolution of Forum's enterprise data platform using the Microsoft Fabric technology stack (or a suitable alternative). This is a rare greenfield opportunity to architect the full data lifecycle, from ingestion and transformation to analytics and AI enablement, while directly impacting Forum's investment, real estate, and operational strategies.
This is a hands-on, in-office role with high visibility. You'll act as a trusted advisor and technical authority, collaborating with investment professionals, business leaders, analysts, and software developers to design enterprise-grade systems and AI-powered workflows that drive measurable business value. Your work will lay the foundation for our long-term data strategy and the future of decision-making at Forum.
Key Duties and Responsibilities:
- Own the end-to-end design and evolution of Forum's Lakehouse architecture: Microsoft Fabric, Azure Data Factory, Synapse, ADLS, and related Azure services
- Define and enforce data engineering standards, governance frameworks, and lifecycle management practices
- Lead large-scale data initiatives, migrations, and integrations across diverse internal and external systems
- Design and optimize enterprise-grade ETL/ELT pipelines for high-volume, high-complexity data
- Implement structured data workflows (e.g., Medallion Architecture) to deliver reliable, business-ready data (a brief illustrative sketch follows this list)
- Develop AI/ML-powered workflows using tools like Azure OpenAI, Document Intelligence, and Azure ML
- Create internal APIs and tools to enable self-serve analytics and data access across departments
- Partner with business teams across real estate, private equity, and operations to identify data opportunities and implement tailored solutions
- Develop and evolve our Power BI dashboards and reporting layer, ensuring reliable data access and visualization
- Promote best practices in data governance, automation, and AI application across Forums technology ecosystem
- Partner with internal teams and external technology vendors to drive the rollout of the platform
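To make the Medallion Architecture item above concrete, here is a minimal bronze-to-silver sketch written in generic PySpark. In Microsoft Fabric this would typically run as a Lakehouse notebook or pipeline activity; the table names and cleansing rules are hypothetical, not Forum's actual model.

```python
# Illustrative bronze -> silver step of a Medallion-style workflow in generic
# PySpark. Table names and cleansing rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: raw, append-only records as ingested from source systems.
bronze = spark.read.table("bronze_leases")

# Silver: de-duplicated, typed, business-ready records.
silver = (
    bronze.dropDuplicates(["lease_id"])
    .filter(F.col("lease_id").isNotNull())
    .withColumn("monthly_rent", F.col("monthly_rent").cast("decimal(12,2)"))
)

silver.write.mode("overwrite").saveAsTable("silver_leases")
```

A gold layer would then aggregate the silver tables into reporting-ready views for Power BI, which is the usual final step of this layered pattern.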
Candidate Profile:
- 6-7 years of progressive experience in data engineering, with a proven track record of architecting and delivering enterprise-scale data solutions
- Expert-level experience with the Microsoft Azure ecosystem: Microsoft Fabric, Azure Data Factory (ADF), Synapse, ADLS Gen2, and Power BI
- Proficiency in Python for data engineering, automation, and API development
- Deep understanding of data modeling, ELT/ETL design, and data warehouse best practices
- Track record architecting enterprise-scale data platforms in high-growth or regulated industries
- Proven success deploying AI/ML solutions into production at scale
- Experience integrating Azure OpenAI, LLMs, and document intelligence into real business processes
- Ability to evaluate, pilot, and operationalize emerging AI technologies for measurable business impact
- Demonstrated ability to work directly with executives, investors, and analysts to shape data strategy
- Strong communication skills and an entrepreneurial mindset: capable of turning ambiguity into working solutions
- Experience working with financial data models, capital tables, investment reports, or property management systems
- Background in private equity, real estate, asset management, or related sectors preferred
At Forum, we encourage diversity. We are committed to an inclusive workplace that reflects our belief that diversity is central to building a high-performing team. Forum is an equal-opportunity employer. We are committed to providing accessible employment practices. Should you require an accommodation during any phase of the recruitment process, please let the recruitment team know at
Data Engineer
Posted today
Job Description
Salary: $90,000 - $110,000
We are seeking a motivated Data Engineer with 2-3 years of experience to join our innovative team. In this role, you will design, build, and optimize data pipelines and analytics workflows using modern cloud data platforms and tools. You will collaborate with cross-functional teams to transform raw data into actionable insights that power intelligent product features, ML models, and strategic decision-making.
Responsibilities
- Design and implement robust ETL pipelines using Apache Airflow
- Build and maintain data warehouses in Snowflake to support scalable data ingestion and transformation
- Develop ETL workflows for structured and semi-structured data from various sources using AWS services
- Collaborate with data scientists and ML engineers to prepare and transform data for AI/ML model training and inference
- Build interactive data applications using Streamlit (a minimal example follows this list)
- Design and implement data models to support machine learning workflows and analytics
- Integrate data from APIs, databases, and cloud storage using Python
- Implement data quality checks and monitoring systems to ensure data integrity
- Document data models and pipeline architectures
- Stay updated on advancements in Airflow, AWS, Snowflake, Streamlit, and AI/ML technologies
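As an illustration of how the Streamlit and Snowflake pieces above might fit together (a sketch under stated assumptions, not this team's actual app), here is a minimal dashboard page that queries Snowflake and charts the result. The account, credentials (read from Streamlit secrets), warehouse, and orders table are hypothetical.

```python
# Minimal Streamlit page that queries Snowflake and charts the result.
# Credentials come from Streamlit secrets; the account, warehouse, database,
# and orders table are hypothetical. Requires the pandas extra of
# snowflake-connector-python for fetch_pandas_all().
import snowflake.connector
import streamlit as st

st.title("Daily Orders")


@st.cache_data(ttl=600)
def load_orders():
    conn = snowflake.connector.connect(
        account=st.secrets["sf_account"],
        user=st.secrets["sf_user"],
        password=st.secrets["sf_password"],
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT order_date, COUNT(*) AS orders FROM orders GROUP BY 1 ORDER BY 1"
        )
        # Snowflake returns unquoted identifiers in upper case.
        return cur.fetch_pandas_all()
    finally:
        conn.close()


df = load_orders()
st.line_chart(df.set_index("ORDER_DATE")["ORDERS"])
```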
Requirements
- 2-3 years of professional experience in data engineering or a related field
- Hands-on experience with Apache Airflow for building and orchestrating ETL pipelines
- Practical experience with AWS services
- Experience working with Snowflake for data warehousing workloads
- Strong Python programming skills for data processing and automation
- Experience with Streamlit for building data applications
- Solid understanding of data modeling concepts
- Familiarity with AI/ML workflows, including data preparation for model training
- Bachelor's degree in Computer Science, Data Engineering, or a related technical field (or equivalent experience)
Preferred Qualifications
- Advanced experience with AWS data services
- Deep expertise in Snowflake optimization and performance tuning
- Experience building complex Streamlit applications
- Advanced Python skills for data wrangling and automation
- Experience with AI/ML model deployment and data modeling for machine learning
- Knowledge of Airflow best practices for ETL pipeline design
Data Engineer
Posted today
Job Description
Title: Data Engineer - P2
Location: Remote Canada
Reports To: Senior Principal, Data Engineer
The Role:
As part of the data engineering team, the Data Engineer is responsible for designing and optimizing data structure, developing ETL processes, preparing and maintaining related documents, and automating and implementing data processes for different data projects and products.
Responsibilities:
- Work closely with project teams on a variety of data-related projects; design and implement data structures/models and develop automated data programs with optimized performance.
- As part of the Enterprise Data Hub implementation team, design, develop and maintain related ETL processes and data objects.
- Perform and provide complete tests, validation and documentation for all development tasks as required.
- Perform business analytics and visualization in support of product teams in an iterative fashion during product development
Qualifications:
- Bachelor's degree or master's degree in a quantitative or engineering field such as Computer Science and Information Systems, Database Management, Big Data, Data Engineering, Data Science, Applied Math, or other related area.
- Proficient, advanced knowledge of SQL and Python programming is a must, with 1+ years of professional experience making heavy use of SQL and Python in daily development work.
- Familiar with ETL methodologies and Data Warehousing principles, approaches, technologies, and architectures including the concepts, designs, and usage of data warehouses and data marts.
- Familiar with Python environment management tools (e.g., conda); experienced in using Git and related tools (GitHub, GitLab, etc.) during the development process.
- Experience with cloud platforms and products (such as Google Cloud, BigQuery, etc.) is a plus.
- Experience with automotive data is helpful.
- Good communication skills.
The Career Opportunity:
You will have the opportunity to work with industry experts, learn the different auto industry data assets and how these data and solutions are helping our clients in their daily operations and strategic planning. You will also have the opportunity to strengthen your data design, data engineering and analytic skills by daily exposure to senior data science and data engineer staff at J.D. Power.
The Team / The Business:
Data Science has global responsibility for the research design, analysis planning, and execution of all advanced analytics for survey research at J.D. Power. Additionally, the Data Engineering team within Data Science is responsible for linking and optimizing datasets from all kinds of sources. You will report directly to the Data Engineering team's Senior Principal and join a team of highly educated and enthusiastic data scientists and data engineers who collaborate regularly with each other and with other teams across the firm.
Our Hiring Manager says:
I'm looking for someone who can work as part of a broader engagement team, design and implement the linking of our data assets, and also perform analyses for new product development and insights. If you're right for this role, you are a self-starter who develops creative solutions to problems, isn't afraid to ask questions while learning our company's culture, and does it all with enthusiasm.
J.D. Power is a global leader in consumer insights, advisory services and data and analytics. A pioneer in the use of big data, artificial intelligence (AI) and algorithmic modeling capabilities to understand consumer behavior, J.D. Power has been delivering incisive industry intelligence on customer interactions with brands and products for more than 50 years. The world's leading businesses across major industries rely on J.D. Power to guide their customer-facing strategies.
The Way We Work:
- Leader Led
- Remote First
- Foster Flexibility
- Reward Performance
- Time Off Matters
Company Mission
J.D. Power is clear about what we do to ensure our success into the future. We unite industry leading data and insights with world-class technology to solve our clients' toughest challenges.
Our Values
At J.D. Power, we strive to be Truth Finders, Change Makers and Team Driven - the distinct behaviors that, together, define our unique culture.
J.D. Power is committed to employing a diverse workforce. Qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, age, sexual orientation, gender identity, gender expression, veteran status, or disability.
J.D. Power is an equal-opportunity employer and compliant with AODA/ADA legislation. Should you require accommodations during the recruitment and selection process, please reach out to
To all recruitment agencies: J.D. Power does not accept unsolicited agency resumes and we are not responsible for any fees related to unsolicited resumes.
Data Engineer
Posted today
Job Description
Salary: $90,000-$110,000
We're Hiring!
We're looking for brilliant thinkers to join our #Rocketeers. If you've ever wondered what it's like to work in a place where people enjoy their work and where talent is more important than the title, then keep reading.
What is ScalePad?
ScalePad is a market-leading software-as-a-service (SaaS) company with headquarters in Vancouver, Toronto, Montreal and Phoenix, AZ. However, we are proud to say our employee reach is now global so we can best serve our partners all over the world.
Our success is no accident: ScalePad provides MSPs of every size with the knowledge, technology, and community they need to deliver increased client value while navigating the continuously changing terrain of the IT landscape. With a suite of integrated products that automate and standardize MSPs' operations, analyze and uncover new opportunities, and expand value to clients, ScalePad is equipping the MSP adventure.
ScalePad has received awards such as MSP Today's Product of the Year, G2's 2024 Fastest Growing Product, and 2024 Best IT Management Product. In 2023, it was named a Best Workplace in Canada by Great Place to Work. ScalePad is a privately held company serving over 12,000 MSPs across the globe.
You can contribute to our innovation and appreciate how your work is helping take this company to a higher level of operational maturity. More on that here.
Your mission, should you choose to accept it.
As a Data Engineer supporting our Revenue Operations, your mission is to drive the evolution of our business intelligence, data management, and operational solutions. Your active involvement in developing innovative approaches will ensure our scalability aligns seamlessly with our vision for growth. Collaborating closely with Data Scientists, Business Analysts, and Software Engineers, you will contribute to the creation of exceptional solutions. Our collaborative work environment values your input, as it plays a pivotal role in our continuous innovation process. This unique opportunity empowers you to engage in every aspect of the data engineering cycle, with the guidance and resources needed to learn, optimize, advance, and grow every quarter.
Responsibilities.
- Familiarize yourself with our current initiatives, teams, and operational workflows to seamlessly integrate into ongoing projects.
- Master our technology architecture, gaining a deep understanding of our systems and infrastructure.
- Gain insight into our business operations structure, needs, and cycles to align your work with organizational goals.
- Join a new development team and kick-off major projects.
- Collaborate closely with data scientists to enable their projects with robust data engineering solutions.
- Work on data lake and data warehouse solutions, optimizing data storage and retrieval processes.
- Develop solutions that seamlessly connect different departments, fostering efficient data flow and information sharing.
- Manage and prioritize your tasks autonomously, with an innate ability to shift gears when required.
- Be a sponge when it comes to learning, excelling even when outside your comfort zone.
- Maintain an unwavering commitment to the quality of your work and that of your teammates, upholding high standards.
- Embrace a learning mindset, excelling even outside your comfort zone and staying updated on industry trends.
- Possessing experience in OLAP databases, artificial intelligence, machine learning, and event sourcing is considered an asset.
Qualifications.
- 2+ years of hands-on Data Engineer experience with a strong track record of delivering data solutions.
- Proficiency in Python, R, C#, or equivalent programming languages for crafting data solutions.
- Strong knowledge of SQL and NoSQL solutions.
- Familiarity with Agile development methodologies.
- Experience in conceptual, business, and analytical data modeling.
- Proficiency in Databricks, PySpark, and DBT.
- Experience with data ingestion tools like Fivetran, Airbyte, and Airflow.
- Deep understanding of data pipelines, including ETL processes (a brief sketch follows this list).
- Commitment to best practices for efficient data pipelines, data quality, and integrity.
- Passion for Big Data, data mining, artificial intelligence, and machine learning.
- Curious mindset, love for learning, and strong problem-solving skills.
- Ability to collaborate with business stakeholders to clarify data requirements and develop models.
- Strong focus on practical and valuable solutions for end-users.
- Agile and iterative approach to work, with the ability to recognize and rectify mistakes.
- Nice qualifications to have: Experience with CRM, analytics, data visualization, and billing tools (e.g., Zendesk, HubSpot, Segment, GA, Tableau, Stripe, Chargebee).
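Since the stack above lists Databricks/PySpark and emphasizes data quality and integrity, here is a minimal PySpark sketch of an ETL step guarded by a simple data-quality gate. The table names, cleansing rule, and threshold are hypothetical, illustrative only.

```python
# Illustrative ETL step with a simple data-quality gate, in PySpark.
# Table names, the cleansing rule, and the 95% threshold are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("revops_etl").getOrCreate()

subscriptions = spark.read.table("raw_subscriptions")

cleaned = (
    subscriptions.filter(F.col("account_id").isNotNull())
    .withColumn("mrr", F.col("mrr").cast("decimal(12,2)"))
)

# Data-quality gate: abort the run if too many rows were dropped.
total = subscriptions.count()
kept = cleaned.count()
if total > 0 and kept / total < 0.95:
    raise ValueError(f"Data-quality check failed: only {kept}/{total} rows kept")

cleaned.write.mode("overwrite").saveAsTable("clean_subscriptions")
```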
What You'll Love About Working As A Rocketeer:
- Everyone's an Owner: Through our Employee Stock Option Plan (ESOP), each team member has a stake in our success. As we scale, your contributions directly shape our future and you share in the rewards.
- Growth, Longevity and Stability: Benefit from insights and training from our leadership and founder, whose extensive experience in funding and scaling successful software companies creates a stable environment for your long-term career growth. Their proven track record fosters a culture of lasting success.
- Annual Training & Development: Every employee receives an annual budget for professional development, empowering you to advance your skills and career on your terms.
- Hybrid Flexibility: Enjoy a world-class office at our headquarters in downtown Vancouver, Toronto, and Montreal.
- Cutting-Edge Gear: Whether in the office or at home, you'll be set up for success with top-of-the-line hardware.
- Wellness at Work: Our Vancouver office features a fitness facility and outdoor ping-pong tables.
- Comprehensive Benefits: We've got you covered with an extensive benefits package: 100% employer-paid medical and dental coverage, RRSP matching after one year of employment, and even a monthly stipend to help offset the costs of the hybrid experience.
- Flexible Time Off: Our unlimited flex-time policy, in addition to all accrued vacation, allows you to take the time you need to recharge and thrive.
Dream jobs don't knock on your door every day.
ScalePad is not your typical software company. When we hire you, we aren't just offering you a job; rather, we are committing to investing in both you and your long-term career. You'll help shape how this modern SaaS company operates and make a genuine impact on the future of our people, product, and partners.
At ScalePad, we believe in the power of Diversity, Equity, Inclusion, and Belonging (DEIB) to drive innovation, collaboration, and success. We are committed to fostering a workplace where every individual's unique experiences and perspectives are valued, and where employees from all backgrounds can thrive. Our dedication to DEIB is woven into the fabric of our culture, guiding our actions and decisions as we build a stronger and more inclusive future together.
Join us and be part of a team that celebrates differences, embraces fairness, and ensures that everyone has an equal opportunity to contribute and grow. Together, we're creating an environment where diverse voices are not only heard but also amplified, where everyone feels valued, and where we can all achieve our full potential.
Please no recruiters or phone calls.