216 Datastage Etl jobs in Canada


Data Engineer

Mississauga, Ontario Compass Group

Posted 5 days ago


Job Description

We are CDAI—the data and artificial intelligence engine of Compass Group North America. We design and deliver custom, in-house solutions tailored to the unique complexities of food service and hospitality. Our work is grounded in strong data foundations, layered with AI to enhance forecasting, streamline operations, and enable better, faster decision-making across Compass Group. With deep integration into the business and a commitment to white-glove service, CDAI empowers associates, clients, and customers through innovative, future-forward technologies.

# **Job Summary**

The **Data Engineer** will work closely with senior developers to build new ETL pipelines in Airflow and maintain existing ones. They will be part of our Data Technology team, working with an all-new technology stack and collaborating closely with internal and external stakeholders.

Now, if you were to come on board as our **Data Engineer**, we’d ask you to do the following for us:

- Design, implement and maintain data pipelines for extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
- Build and maintain existing Airflow DAGs
- Maintain and add to our middle tier API
- Prototype new technology that supports our vision of making our consumers’ experiences better with our data
- Provide support and insights to the business analytics and data science teams
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
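An Airflow DAG, as referenced in the duties above, is essentially a set of tasks plus dependency edges that the scheduler runs in topological order. A toy sketch of that idea using only Python's standard-library `graphlib` (task names are invented for illustration; this is not the actual Airflow API):

```python
from graphlib import TopologicalSorter

# Hypothetical ETL tasks mapped to their upstream dependencies,
# mirroring how an Airflow DAG wires extract -> transform -> load.
dag = {
    "extract_sales": set(),
    "extract_inventory": set(),
    "transform": {"extract_sales", "extract_inventory"},
    "load_warehouse": {"transform"},
}

# static_order() yields one valid execution order, as a scheduler would.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

In real Airflow the same shape is declared with operators and `>>` dependency syntax; the scheduling guarantee is identical: a task runs only after all of its upstream tasks succeed.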

Think you have what it takes to be our **Data Engineer**? We’re committed to hiring the best talent for the role. Here’s how we’ll know you’ll be successful in the role:

- At least 2-3 years of relevant experience in a similar role or function
- Bachelor’s degree or equivalent in Computer Science, Computer Engineering, Information Systems or similar
- Programming skills with Python and Spark
- Exposure to at least one cloud provider, AWS preferred
- Knowledge of SQL, relational database concepts and SQL scripting
- Experience with Airflow or similar systems is highly preferred
- Web development skills with languages and frameworks like Angular and Node.js would be a plus
- Experience with Docker, Kafka, or other data stream processing platforms is preferred

Compass Group Canada is committed to nurturing a diverse workforce representative of the communities within which we operate. We encourage and are pleased to consider all qualified candidates, without regard to race, colour, citizenship, religion, sex, marital / family status, sexual orientation, gender identity, aboriginal status, age, disability or persons who may require an accommodation, to apply.

For accommodation requests during the hiring process, please contact for further information.

Data Engineer

Toronto, Ontario I-cube Software Llc

Posted today

Job Description

We are seeking a hybrid Hadoop Engineer and Hadoop Infrastructure Administrator to build and maintain a scalable and resilient Big Data framework supporting our Data Scientists. As an administrator, you will deploy and maintain Hadoop clusters, add and remove nodes using cluster management and monitoring tools like Cloudera Manager, and support performance and scalability requirements. Some Relational Database administration experience is also desirable to support the general administration of relational databases.

Design, build, and maintain Big Data workflows/pipelines to process a continuous stream of data with experience in end-to-end design and build process of Near-Real-Time and Batch Data Pipelines.


Demonstrated work experience with Big Data and distributed programming models and technologies, including:
Knowledge of database structures, theories, principles, and practices (both SQL and NoSQL).
Active development of ETL processes using Spark or other highly parallel technologies, and implementing ETL/data pipelines
Experience with Data technologies and Big Data tools, like Spark, Kafka, Hive
Understanding of MapReduce and other data query, processing, and aggregation models
Understanding of the challenges of transforming data across a distributed clustered environment
Experience with techniques for consuming, holding, and aging out continuous data streams
Ability to provide quick ingestion tools and corresponding access APIs for continuously changing data schema, working closely with Data Engineers around specific transformation and access needs
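The MapReduce model mentioned above splits work into a map step that emits key-value pairs, a shuffle that groups pairs by key, and a reduce step that aggregates each group. A toy word-count sketch of the model in plain Python (illustrative only, not Hadoop's API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Map: emit a (word, 1) pair for every word in the record.
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: aggregate all values emitted for one key.
    return key, sum(values)

records = ["big data big pipelines", "data pipelines"]
pairs = chain.from_iterable(map_phase(r) for r in records)
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2}
```

In a distributed cluster, the map and reduce calls run in parallel across nodes and the shuffle moves data over the network, which is where most of the transformation challenges noted above arise.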

Preferred:

Experience as a Database Administrator (DBA), keeping critical tool databases up and running
Building and managing high-availability environments for databases and HDFS systems
Familiarity with transaction recovery techniques and DB Backup

Skills and Attributes:

Ability to have effective working relationships with all functional units of the organization
Excellent written, verbal, and presentation skills
Excellent interpersonal skills
Ability to work as part of a cross-cultural team
Self-starter and self-motivated
Ability to work with minimal supervision
Works well under pressure and manages competing priorities


Data Engineer

Toronto, Ontario Forum Asset Management

Posted today

Job Description


Forum Asset Management Data Engineer


Location: 181 Bay Street, Toronto


In Office: 5 days per week


Overview of Forum:

Join us in delivering Extraordinary Outcomes through investment.


Forum is an investor, developer and asset manager operating across North America for over 28 years, focusing on real estate, private equity and infrastructure, with a strategic concentration in housing.


Our AUM exceeds $3 billion. We are committed to sustainability, responsible investing and creating value that benefits our stakeholders and the communities in which we invest: what we call our Extraordinary Outcomes.


In 2024, Forum completed the largest real estate transaction in Canada with the Alignvest acquisition, making us the largest owner of Purpose-Built Student Accommodation (PBSA) in Canada through our $.5B open-ended Real Estate Income and Impact Fund (REIIF). Our national development pipeline now exceeds 3.5 billion, positioning Forum as the largest developer of PBSA in Canada, operating from coast to coast.


The Forum team is adaptable, agile, and dynamic, committed to sustainable and responsible investing. Our people bring diverse cultural backgrounds and professional experiences, fostering innovation and thought leadership. We uphold integrity, trust, and transparency as core values, knowing that to achieve Extraordinary Outcomes, we need to support and develop an Extraordinary team.

Position Overview:

We’re looking for a Data Engineer to own the architecture, build, and evolution of Forum’s enterprise data platform using the Microsoft Fabric technology stack (or a suitable alternative). This is a rare greenfield opportunity to architect the full data lifecycle, from ingestion and transformation to analytics and AI enablement, while directly impacting Forum’s investment, real estate, and operational strategies.

This is a hands-on, in-office role with high visibility. You’ll act as a trusted advisor and technical authority, collaborating with investment professionals, business leaders, analysts, and software developers to design enterprise-grade systems and AI-powered workflows that drive measurable business value. Your work will lay the foundation for our long-term data strategy and the future of decision-making at Forum.


Key Duties and Responsibilities:

  • Own the end-to-end design and evolution of Forum’s Lakehouse architecture using Microsoft Fabric, Azure Data Factory, Synapse, ADLS, and related Azure services
  • Define and enforce data engineering standards, governance frameworks, and lifecycle management practices
  • Lead large-scale data initiatives, migrations, and integrations across diverse internal and external systems
  • Design and optimize enterprise-grade ETL/ELT pipelines for high-volume, high-complexity data
  • Implement structured data workflows (e.g., Medallion Architecture) to deliver reliable, business-ready data
  • Develop AI/ML-powered workflows using tools like Azure OpenAI, Document Intelligence, and Azure ML
  • Create internal APIs and tools to enable self-serve analytics and data access across departments
  • Partner with business teams across real estate, private equity, and operations to identify data opportunities and implement tailored solutions
  • Develop and evolve our Power BI dashboards and reporting layer, ensuring reliable data access and visualization
  • Promote best practices in data governance, automation, and AI application across Forum’s technology ecosystem
  • Partner with internal teams and external technology vendors to drive the rollout of the platform
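The Medallion Architecture cited above stages data in bronze (raw as ingested), silver (validated and standardized), and gold (business-ready aggregates) layers. A minimal, tool-agnostic sketch of the pattern in plain Python; in practice these layers would be Fabric/Spark tables, and the records and field names here are invented:

```python
# Bronze: raw records exactly as ingested, warts and all.
bronze = [
    {"property": "Tower A", "rent": "1200", "city": " toronto "},
    {"property": "Tower A", "rent": "1300", "city": "Toronto"},
    {"property": None, "rent": "900", "city": "Ottawa"},  # bad record
]

# Silver: drop invalid rows, cast types, standardize strings.
silver = [
    {"property": r["property"], "rent": int(r["rent"]), "city": r["city"].strip().title()}
    for r in bronze
    if r["property"] is not None
]

# Gold: business-ready aggregate (average rent per city).
gold = {}
for row in silver:
    gold.setdefault(row["city"], []).append(row["rent"])
gold = {city: sum(rents) / len(rents) for city, rents in gold.items()}
print(gold)  # {'Toronto': 1250.0}
```

The value of the layering is that each stage is reproducible from the one below it, so a bad transformation can be rebuilt without re-ingesting source systems.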


Candidate Profile:

  • 6-7 years of progressive experience in data engineering, with a proven track record of architecting and delivering enterprise-scale data solutions
  • Expert-level experience with the Microsoft Azure ecosystem: Microsoft Fabric, Azure Data Factory (ADF), Synapse, ADLS Gen2, and Power BI
  • Proficiency in Python for data engineering, automation, and API development
  • Deep understanding of data modeling, ELT/ETL design, and data warehouse best practices
  • Track record architecting enterprise-scale data platforms in high-growth or regulated industries
  • Proven success deploying AI/ML solutions into production at scale
  • Experience integrating Azure OpenAI, LLMs, and document intelligence into real business processes
  • Ability to evaluate, pilot, and operationalize emerging AI technologies for measurable business impact
  • Demonstrated ability to work directly with executives, investors, and analysts to shape data strategy
  • Strong communication skills and an entrepreneurial mindset: capable of turning ambiguity into working solutions
  • Experience working with financial data models, capital tables, investment reports, or property management systems
  • Background in private equity, real estate, asset management, or related sectors preferred



At Forum, we encourage diversity. We are committed to an inclusive workplace that reflects our belief that diversity is central to building a high-performing team. Forum is an equal-opportunity employer. We are committed to providing accessible employment practices. Should you require an accommodation during any phase of the recruitment process, please let the recruitment team know at


Data Engineer

Surrey, British Columbia Monark

Posted today

Job Description

Salary: $90,000 - $110,000

We are seeking a motivated Data Engineer with 2-3 years of experience to join our innovative team. In this role, you will design, build, and optimize data pipelines and analytics workflows using modern cloud data platforms and tools. You will collaborate with cross-functional teams to transform raw data into actionable insights that power intelligent product features, ML models, and strategic decision-making.

Responsibilities

  • Design and implement robust ETL pipelines using Apache Airflow
  • Build and maintain data warehouses in Snowflake to support scalable data ingestion and transformation
  • Develop ETL workflows for structured and semi-structured data from various sources using AWS services
  • Collaborate with data scientists and ML engineers to prepare and transform data for AI/ML model training and inference
  • Build interactive data applications using Streamlit
  • Design and implement data models to support machine learning workflows and analytics
  • Integrate data from APIs, databases, and cloud storage using Python
  • Implement data quality checks and monitoring systems to ensure data integrity
  • Document data models and pipeline architectures
  • Stay updated on advancements in Airflow, AWS, Snowflake, Streamlit, and AI/ML technologies
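Data quality checks of the kind listed above usually boil down to asserting required fields and value ranges before load, then surfacing violations to a monitoring system. A minimal sketch, with hypothetical column names and rules:

```python
def run_quality_checks(rows, required, ranges):
    """Return a list of human-readable violations; an empty list means pass."""
    violations = []
    for i, row in enumerate(rows):
        # Nullability checks: every required column must be present and non-empty.
        for col in required:
            if row.get(col) in (None, ""):
                violations.append(f"row {i}: missing required column '{col}'")
        # Range checks: numeric columns must fall inside their allowed bounds.
        for col, (lo, hi) in ranges.items():
            value = row.get(col)
            if value is not None and not (lo <= value <= hi):
                violations.append(f"row {i}: '{col}'={value} outside [{lo}, {hi}]")
    return violations

rows = [
    {"order_id": "A1", "amount": 25.0},
    {"order_id": "", "amount": -5.0},  # fails both checks
]
issues = run_quality_checks(rows, required=["order_id"], ranges={"amount": (0, 10_000)})
print(issues)
```

In a pipeline, a non-empty result would typically fail the load task and alert the on-call channel rather than silently writing bad rows downstream.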

Requirements

  • 2-3 years of professional experience in data engineering or a related field
  • Hands-on experience with Apache Airflow for building and orchestrating ETL pipelines
  • Practical experience with AWS services
  • Experience working with Snowflake for data warehousing workloads
  • Strong Python programming skills for data processing and automation
  • Experience with Streamlit for building data applications
  • Solid understanding of data modeling concepts
  • Familiarity with AI/ML workflows, including data preparation for model training
  • Bachelor's degree in Computer Science, Data Engineering, or a related technical field (or equivalent experience)

Preferred Qualifications

  • Advanced experience with AWS data services
  • Deep expertise in Snowflake optimization and performance tuning
  • Experience building complex Streamlit applications
  • Advanced Python skills for data wrangling and automation
  • Experience with AI/ML model deployment and data modeling for machine learning
  • Knowledge of Airflow best practices for ETL pipeline design


Data Engineer

London, Ontario J.D. Power

Posted today

Job Description

Title: Data Engineer - P2

Location: Remote Canada

Reports To: Senior Principal, Data Engineer

The Role:

As part of the data engineering team, the Data Engineer is responsible for designing and optimizing data structure, developing ETL processes, preparing and maintaining related documents, and automating and implementing data processes for different data projects and products.

Responsibilities:

  • Work closely with project teams on a variety of data-related projects; design and implement data structures/models and develop automated data programs with optimized performance.
  • As part of the Enterprise Data Hub implementation team, design, develop and maintain related ETL processes and data objects.
  • Perform and provide complete tests, validation and documentation for all development tasks as required.
  • Perform business analytics and visualization in support of product teams in an iterative fashion during product development
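An ETL step like those described above can be sketched compactly with SQL: extract from a raw table, transform with filters and aggregates, and load the result into a mart table. The example below uses SQLite as a stand-in warehouse; the survey tables and columns are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_responses (model TEXT, score INTEGER);
    INSERT INTO raw_responses VALUES
        ('sedan', 8), ('sedan', 6), ('suv', 9), ('suv', NULL);

    -- Transform + load: aggregate into a mart table, dropping null scores.
    CREATE TABLE mart_scores AS
        SELECT model, AVG(score) AS avg_score, COUNT(score) AS n
        FROM raw_responses
        WHERE score IS NOT NULL
        GROUP BY model;
""")
rows = conn.execute("SELECT model, avg_score, n FROM mart_scores ORDER BY model").fetchall()
print(rows)  # [('sedan', 7.0, 2), ('suv', 9.0, 1)]
```

The same extract-transform-load shape applies whether the target is a data mart, BigQuery, or an Enterprise Data Hub; only the dialect and orchestration change.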

Qualifications:

  • Bachelor's degree or master's degree in a quantitative or engineering field such as Computer Science and Information Systems, Database Management, Big Data, Data Engineering, Data Science, Applied Math, or other related area.
  • Proficiency in SQL and Python programming is a must; 1+ years of professional experience with heavy use of SQL and Python in daily development work.
  • Familiar with ETL methodologies and Data Warehousing principles, approaches, technologies, and architectures, including the concepts, design, and usage of data warehouses and data marts.
  • Familiar with Python environment management tools (e.g., conda); experienced with Git and related tools (GitHub, GitLab, etc.).
  • Experience with cloud platforms and products (such as Google Cloud and BigQuery) is a plus.
  • Experience with automotive data helpful.
  • Good communication skills.

The Career Opportunity:

You will have the opportunity to work with industry experts, learn the different auto industry data assets and how these data and solutions are helping our clients in their daily operations and strategic planning. You will also have the opportunity to strengthen your data design, data engineering and analytic skills by daily exposure to senior data science and data engineer staff at J.D. Power.

The Team / The Business:

Data Science has global responsibility for the research design, analyses planning and execution of all advanced analytics for survey research at J.D. Power. Additionally, the Data Engineering team within Data Science is responsible for linking and optimizing datasets from all kinds of sources. You will report directly to the Data Engineering Team Sr. Principal and join a team of highly educated and enthusiastic data scientists and data engineers who collaborate regularly with each other and with other teams across the firm.

Our Hiring Manager says:

I'm looking for someone who can work as part of a broader engagement team, design and implement the linking of our data assets, and perform analyses for new product development and insights. If you're right for this role, you are a self-starter who develops creative solutions to problems, is not afraid to ask questions as you learn our company's culture, and does it all with enthusiasm.

J.D. Power is a global leader in consumer insights, advisory services and data and analytics. A pioneer in the use of big data, artificial intelligence (AI) and algorithmic modeling capabilities to understand consumer behavior, J.D. Power has been delivering incisive industry intelligence on customer interactions with brands and products for more than 50 years. The world's leading businesses across major industries rely on J.D. Power to guide their customer-facing strategies. 

The Way We Work:

  • Leader Led
  • Remote First
  • Foster Flexibility
  • Reward Performance
  • Time Off Matters

Company Mission

J.D. Power is clear about what we do to ensure our success into the future. We unite industry leading data and insights with world-class technology to solve our clients' toughest challenges.  

Our Values

At J.D. Power, we strive to be Truth Finders, Change Makers and Team Driven - the distinct behaviors that, together, define our unique culture. 

J.D. Power is committed to employing a diverse workforce. Qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, age, sexual orientation, gender identity, gender expression, veteran status, or disability.

J.D. Power is an equal-opportunity employer and compliant with AODA/ADA legislation. Should you require accommodations during the recruitment and selection process, please reach out to

To all recruitment agencies: J.D. Power does not accept unsolicited agency resumes and we are not responsible for any fees related to unsolicited resumes.


Data Engineer

Vancouver, British Columbia ScalePad

Posted today

Job Description

Salary: $90,000-$110,000

We're Hiring!

We're looking for brilliant thinkers to join our #Rocketeers. If you've ever wondered what it's like to work in a place where people enjoy their work and where talent is more important than the title, then keep reading.


What is ScalePad?

ScalePad is a market-leading software-as-a-service (SaaS) company with headquarters in Vancouver, Toronto, Montreal and Phoenix, AZ. However, we are proud to say our employee reach is now global so we can best serve our partners all over the world.


Our success is no accident: ScalePad provides MSPs of every size with the knowledge, technology, and community they need to deliver increased client value while navigating the continuously changing terrain of the IT landscape. With a suite of integrated products that automate and standardize MSPs' operations, analyze and uncover new opportunities, and expand value to clients, ScalePad is equipping the MSP adventure.

ScalePad has received awards such as MSP Today's Product of the Year, G2's 2024 Fastest Growing Product, and 2024 Best IT Management Product. In 2023, it was named a Best Workplace in Canada by Great Place to Work. ScalePad is a privately held company serving over 12,000 MSPs across the globe.

You can contribute to our innovation and appreciate how your work is helping take this company to a higher level of operational maturity. More on that here.


Your mission, should you choose to accept it.

As a Data Engineer supporting our Revenue Operations, your mission is to drive the evolution of our business intelligence, data management, and operational solutions. Your active involvement in developing innovative approaches will ensure our scalability aligns seamlessly with our vision for growth. Collaborating closely with Data Scientists, Business Analysts, and Software Engineers, you will contribute to the creation of exceptional solutions. Our collaborative work environment values your input, as it plays a pivotal role in our continuous innovation process. This unique opportunity empowers you to engage in every aspect of the data engineering cycle, with the guidance and resources needed to learn, optimize, advance, and grow every quarter.


Responsibilities.

  • Familiarize yourself with our current initiatives, teams, and operational workflows to seamlessly integrate into ongoing projects.
  • Master our technology architecture, gaining a deep understanding of our systems and infrastructure.
  • Gain insight into our business operations structure, needs, and cycles to align your work with organizational goals.
  • Join a new development team and kick-off major projects.
  • Collaborate closely with data scientists to enable their projects with robust data engineering solutions.
  • Work on data lake and data warehouse solutions, optimizing data storage and retrieval processes.
  • Develop solutions that seamlessly connect different departments, fostering efficient data flow and information sharing.
  • Manage and prioritize your own tasks autonomously, with an innate ability to shift gears when required.
  • Maintain an unwavering commitment to the quality of your work and that of your teammates, upholding high standards.
  • Embrace a learning mindset, excelling even outside your comfort zone and staying updated on industry trends.
  • Possessing experience in OLAP databases, artificial intelligence, machine learning, and event sourcing is considered an asset.

Qualifications.

  • 2+ years of hands-on Data Engineer experience with a strong track record of delivering data solutions.
  • Proficiency in Python, R, C#, or equivalent programming languages for crafting data solutions.
  • Strong knowledge of SQL and NoSQL solutions.
  • Familiarity with Agile development methodologies.
  • Experience in conceptual, business, and analytical data modeling.
  • Proficiency in Databricks, PySpark, and DBT.
  • Experience with data ingestion tools like Fivetran, Airbyte, and Airflow.
  • Deep understanding of data pipelines, including ETL processes.
  • Commitment to best practices for efficient data pipelines, data quality, and integrity.
  • Passion for Big Data, data mining, artificial intelligence, and machine learning.
  • Curious mindset, love for learning, and strong problem-solving skills.
  • Ability to collaborate with business stakeholders to clarify data requirements and develop models.
  • Strong focus on practical and valuable solutions for end-users.
  • Agile and iterative approach to work, with the ability to recognize and rectify mistakes.
  • Nice qualifications to have: Experience with CRM, analytics, data visualization, and billing tools (e.g., Zendesk, HubSpot, Segment, GA, Tableau, Stripe, Chargebee).


What You'll Love Working As A Rocketeer:

  • Everyone's an Owner: Through our Employee Stock Option Plan (ESOP), each team member has a stake in our success. As we scale, your contributions directly shape our future and you share in the rewards.
  • Growth, Longevity and Stability: Benefit from insights and training from our leadership and founder, whose extensive experience in funding and scaling successful software companies creates a stable environment for your long-term career growth. Their proven track record fosters a culture of lasting success.
  • Annual Training & Development: Every employee receives an annual budget for professional development, empowering you to advance your skills and career on your terms.
  • Hybrid Flexibility: Enjoy a world-class office at our headquarters in downtown Vancouver, Toronto, and Montreal.
  • Cutting-Edge Gear: Whether in the office or at home, you'll be set up for success with top-of-the-line hardware.
  • Wellness at Work: Our Vancouver office features a fitness facility and outdoor ping-pong tables.
  • Comprehensive Benefits: We've got you covered with an extensive benefits package: 100% employer-paid medical and dental coverage, RRSP matching after one year of employment, and a monthly stipend to help offset the costs of the hybrid experience.
  • Flexible Time Off: Our unlimited flex-time policy, in addition to accrued vacation, allows you to take the time you need to recharge and thrive.

Dream jobs don't knock on your door every day.

ScalePad is not your typical software company. When we hire you, we aren't just offering you a job; we are committing to investing in both you and your long-term career. You'll help shape how this modern SaaS company operates and make a genuine impact on the future of our people, product, and partners.

At ScalePad, we believe in the power of Diversity, Equity, Inclusion, and Belonging (DEIB) to drive innovation, collaboration, and success. We are committed to fostering a workplace where every individual's unique experiences and perspectives are valued, and where employees from all backgrounds can thrive. Our dedication to DEIB is woven into the fabric of our culture, guiding our actions and decisions as we build a stronger and more inclusive future together.

Join us and be part of a team that celebrates differences, embraces fairness, and ensures that everyone has an equal opportunity to contribute and grow. Together, we're creating an environment where diverse voices are not only heard but also amplified, where everyone feels valued, and where we can all achieve our full potential.



Please no recruiters or phone calls.


Data Engineer

Toronto, Ontario CI Financial Corp.

Posted today

Job Description

At CI, we see a great place to work as one that is a safe place for everyone to have a voice, where people are empowered to take ownership over meaningful work, where there is an opportunity to grow through stretching themselves, where they can work on innovative products and projects, and where employees are supported and engaged in doing so.

PRIMARY OBJECTIVE:

We’re seeking a versatile professional who can bridge the gap between software development and data engineering. The successful candidate will work closely with our IT Client Reporting team to support the development and maintenance of existing applications, while driving efforts toward modernization, cloud adoption, and the application of data engineering principles.

RESPONSIBILITIES:

  • Support, maintain, and optimize custom applications for tax reporting built on legacy platforms.
  • Analyze business requirements and translate them into scalable application features and efficient data workflows.
  • Collaborate with analysts, business users, and infrastructure teams to deliver end-to-end solutions.
  • Design and build solutions aligned with modernization goals and modern data engineering principles where applicable.
  • Develop and maintain clear, comprehensive documentation for data processes, system architecture, and procedures.
  • Ensure data integrity, application performance, and overall system efficiency.
  • Participate in code reviews and contribute to team knowledge sharing and best practices.

QUALIFICATION REQUIREMENTS:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

Experience

  • 5+ years of experience in developing and maintaining complex programs in a professional setting
  • Proficient in SQL for data extraction, transformation, and analysis
  • Skilled in shell scripting (KSH, SQR) within Unix/Linux environments
  • Familiar with job scheduler solutions such as Cron, AutoSys, or Rundeck
  • Hands-on experience with AWS services and modern data warehouse solutions such as Snowflake
  • Familiarity with data transformation and orchestration tools, including DBT and Apache Airflow
  • Experience in the Financial Services industry and tax reporting is an asset; cybersecurity knowledge is a plus
  • Strong analytical and problem-solving skills; adaptable in a fast-paced, collaborative environment
  • Clear and effective communicator, with the ability to translate complex technical concepts for non-technical stakeholders

Preferred Skills

  • Strong knowledge of agile methodologies and DevOps practices, including CI/CD pipelines and infrastructure as code (IaC).
  • Experienced in the use of Atlassian collaboration tools.
  • Experienced in shell-scripting languages like KSH and SQR.
  • Experience in Restful API is a plus
  • Experience in cyber-security measures is a plus
  • Good interpersonal and team skills.
  • Excellent oral and written communication skills.
  • Strong organizational, analytical skills.

WHAT WE OFFER 

  • Modern HQ location within walking distance from Union Station.
  • Equipment Purchase Program.
  • Training Reimbursement.
  • Paid Professional Designations.
  • Employee Share Purchase Program (ESPP).
  • Corporate Discount Program.
  • Enhanced group benefits.
  • Parental Leave Top-up program.
  • Fitness membership discounts.
  • Paid volunteer days.

We are focused on building a diverse and inclusive workforce. If you are excited about this role but are not confident you meet all the qualification requirements, we still encourage you to apply and explore the opportunity further.

Please submit your resume in confidence by clicking “Apply”. Only qualified candidates selected for an interview will be contacted. CI Financial Corp. and all of our affiliates (“CI”) are committed to fair and accessible employment practices and provide reasonable accommodations for persons with disabilities. If you require accommodations in order to apply for any job opportunities, require this posting in an additional format, or require accommodation at any stage of the recruitment process please contact us at , or call ext. 4747.

This advertiser has chosen not to accept applicants from your region.

Data Engineer

Toronto, Ontario TheAppLabb

Posted today


Job Description


Salary:

TAL Talent Pool:

Interested in joining our team? Send us your resume, and we'll include you in our Talent Pool!

When new positions become available at TAL, we'll consult our Talent Pool. If your qualifications match what we're looking for, we'll contact you for an interview.

Review the criteria listed below to see if you're a good match and submit your resume.

About Us

With a mission to humanize technology and create better human experiences, TheAppLabb is an innovation company focused on transforming businesses through intelligent and immersive business applications. We ideate and build fully custom solutions with emerging technologies like Artificial Intelligence, IoT, Virtual Reality, Augmented Reality and Blockchain.


TheAppLabb is a Toronto-based technology company that specializes in strategy, design, and development of mobile-first, AI-enabled apps. TheAppLabb has built over 750 apps over the past 16 years for leading enterprise firms in North America including Unilever, GE, Samsung, RBC, and Meridian, among others.
TheAppLabb was awarded Great Place to Work in Canada by Globe & Mail, Top Mobile App Developer and Most Innovative Company by Canadian Business.


The Opportunity


Summary:

We are looking for a Data Engineer with expertise in Snowflake to join our team. You will play a key role in designing, developing, and maintaining scalable data pipelines and cloud-based data solutions. This role requires strong experience with ETL processes and big data technologies to support analytics and business intelligence initiatives.



What you will do

  • Design, develop, and maintain efficient ETL/ELT pipelines for structured and unstructured data.
  • Build and optimize data models, ensuring high performance and scalability in Snowflake.
  • Implement best practices for Snowflake data warehousing, including schema design, partitioning, and performance tuning.
  • Develop and maintain data pipelines using Python, SQL, or Scala.
  • Integrate data from various sources, including APIs, databases, and third-party tools.
  • Ensure data quality, integrity, and governance across platforms.
  • Monitor and troubleshoot data pipeline issues, ensuring high availability and reliability.
  • Work closely with data scientists, analysts, and software engineers to support business intelligence and reporting needs.
  • Implement data security, compliance, and governance best practices.
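The responsibilities above follow the ELT pattern: land raw data in a staging table, then transform and quality-check it inside the warehouse itself. A minimal, self-contained sketch of that pattern, using SQLite in place of Snowflake (the table, columns, and values are all illustrative):

```python
import sqlite3

# Raw extracted records; the duplicate row simulates a messy source feed.
rows = [
    ("2024-01-01", "store_1", 125.50),
    ("2024-01-01", "store_1", 125.50),   # duplicate to be removed
    ("2024-01-02", "store_2", 98.00),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (sale_date TEXT, store TEXT, amount REAL)")
conn.executemany("INSERT INTO staging VALUES (?, ?, ?)", rows)

# Transform inside the warehouse (the "T" in ELT): deduplicate into a clean table.
conn.execute("""
    CREATE TABLE sales AS
    SELECT DISTINCT sale_date, store, amount FROM staging
""")

# Basic data-quality check: duplicates removed, no negative amounts.
count, bad = conn.execute(
    "SELECT COUNT(*), SUM(amount < 0) FROM sales"
).fetchone()
print(count, bad)  # 2 0
```

In Snowflake the same shape would use stages, `COPY INTO`, and SQL transforms (often managed by DBT), but the staging-then-validate structure is identical.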

What You'll need

  • 3+ years of experience in Data Engineering or a related field.
  • Hands-on experience with Snowflake (warehouse design, performance tuning, and data ingestion).
  • Strong SQL skills and experience with relational databases (PostgreSQL, MySQL, SQL Server, etc.).
  • Proficiency in Python, Java, or Scala for data processing and automation.
  • Experience with ETL tools such as Apache Airflow, DBT, or Talend.
  • Familiarity with cloud platforms (AWS, Azure, or Google Cloud) and services like AWS Glue, Lambda, S3, and Redshift.
  • Knowledge of big data technologies such as Apache Spark, Kafka, or Hadoop.
  • Understanding of data governance, security, and compliance best practices.
  • Excellent problem-solving and communication skills.

Nice to Have

  • Experience with real-time data processing (Apache Flink, Spark Streaming, etc.).
  • Familiarity with NoSQL databases (MongoDB, Cassandra, DynamoDB).
  • Exposure to CI/CD for data pipelines and containerization (Docker, Kubernetes).
  • Understanding of machine learning pipelines and MLOps.


Why Join Us?

  • Work on large-scale, cloud-based data platforms with cutting-edge technologies.
  • Collaborate with a dynamic, innovative, and data-driven team.
  • Competitive salary.
  • Opportunities for career growth and continuous learning.



Why you'll love TheAppLabb:

We are proud to be certified as a Great Place to Work in Canada. We're a motivated team with a laser-focused mission to create exceptional experiences. At TheAppLabb, our culture is geared towards creativity, collaboration, and flexibility.

Coaching and Learning and Development to ensure you have the training and education you need to thrive in your career

A diverse leadership team with an open-door policy to help you grow and succeed at your career goals

Town hall celebrations for all employees to meet and connect with each other, align the Company to the same goals, build our Company culture and celebrate awards

We acknowledge and recognize employee efforts with awards (Employee of the Month, Employee with Most Growth, Leadership Award, etc.)

We encourage a healthy lifestyle by incorporating fitness challenge incentives in our Company

TheAppLabb's strong values and principles guide our employees to work with honesty, integrity, teamwork, and empathy.

At TheAppLabb, we believe diversity and inclusion are a strength we cultivate. We are proud to be an equal opportunity employer and we do not discriminate based on race, gender, ethnicity, citizenship, national origin, religion, sexual orientation, age, marital status, disability, veteran status or any other legally protected status.

Our work environment welcomes equity, inclusiveness, and diversity by providing accommodations throughout the recruitment process and during your employment here. If you require accommodation, please let us know and we will work with you to meet your needs. Please contact us at

So, if you are looking for your next challenge, then this is your chance to join our team of exceptionally talented, creative, and innovative professionals working towards a unified goal.

This advertiser has chosen not to accept applicants from your region.

Data Engineer

Moncton, New Brunswick Four Eyes Financial

Posted today


Job Description


AI Engineer

Four Eyes Financial is a Canadian Regulatory Technology (RegTech) company based in Saint John, New Brunswick. We are at the forefront of change in how wealth management firms approach their data and compliance functions. Our proprietary Intelligent Compliance Platform streamlines interactions between Advisors, Clients, and Head Office.

We are expanding our data team to support the development of an AI-Powered Compliance Suite designed to transform financial supervision. This role is integral to building the foundational data architecture for our next-generation Generative AI and Agentic AI systems.

Candidate must be located in New Brunswick.


Focus Areas

Scalable Data Solutions: Ensure database infrastructure is structured and operates efficiently.

  • Architect, build, and maintain a scalable Data Lakehouse using AWS S3, Apache Iceberg, and AWS Glue.
  • Develop and operate robust, continuous ETL pipelines to ingest high-volume data from various sources, including Aurora databases using AWS Database Migration Service (DMS).
  • Collaborate with the AI team to prepare, structure, and contextualize data for training and fine-tuning Large Language Models (LLMs) and specialized AI agents.
  • Contribute to the implementation and maintenance of the MLOps environment, ensuring proper version control for datasets and models.
  • Utilize Amazon Neptune to model and manage the complex permissions graph required for secure data access.
  • Experience with large data sets and Enterprise-grade databases (Relational, NoSQL and Node Graph Databases).
  • Experience with many data types: JSON, XML, CSV, etc.

ETL: Build, operate, and support our ETL pipelines and data systems.

  • Deep understanding of how to build and maintain event-driven ETL processes with complex interdependencies.
  • Understanding of how to implement ETL processes using a combination of serverless and container-based infrastructure.
  • Experience architecting and building data pipelines.
  • Experience extracting data from multiple sources (APIs, SFTP, Web Scraping, etc.).
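Extraction from sources like the ones above usually hinges on incremental, idempotent pulls: each run fetches only records newer than a stored watermark, so a rerun after a failure does no duplicate work. A minimal sketch of that pattern, with a plain list standing in for an API or SFTP feed (all names and values are illustrative):

```python
# Stand-in for an external source (API response, SFTP drop, scraped page).
SOURCE = [
    {"id": 1, "updated_at": "2024-03-01T10:00:00"},
    {"id": 2, "updated_at": "2024-03-02T09:30:00"},
    {"id": 3, "updated_at": "2024-03-03T12:15:00"},
]

def extract_since(watermark):
    """Pull only records newer than the last successful run's watermark."""
    new = [r for r in SOURCE if r["updated_at"] > watermark]
    # Advance the watermark to the newest record seen, or keep it unchanged.
    next_mark = max((r["updated_at"] for r in new), default=watermark)
    return new, next_mark

batch, mark = extract_since("2024-03-01T23:59:59")
print(len(batch), mark)          # 2 2024-03-03T12:15:00
rerun, _ = extract_since(mark)   # rerun is empty: the pull is idempotent
```

In an event-driven pipeline the watermark would live in durable storage (e.g. a control table) and the extract would be triggered by a schedule or an upstream event rather than called directly.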

Leadership & Mentorship : Effective cross-functional collaboration.

  • Participate in sprint planning, peer code reviews, and architectural discussions.
  • Contribute to a fast-paced team solving complex business challenges with AI, ensuring alignment with product and project goals.
  • Document model development and decision-making processes clearly.
  • Support team members in learning best practices in AI development.
  • Contribute to a culture of learning and innovation within the team.

Qualifications

  • A graduate of Computer Science, Engineering, or equivalent knowledge/experience, with a minimum of 5 years of experience.
  • Strong programming skills in Python and advanced proficiency in SQL (Postgres experience is an asset).
  • Proven experience building and operating ETL pipelines using cloud-based tools such as AWS Glue, AWS DMS, Apache Airflow, or similar technologies.
  • Solid experience within the AWS data ecosystem (e.g., S3, Glue, Lambda).
  • An intuitive grasp of data modeling, data warehousing, and Data Lakehouse concepts.
  • Familiarity with Python libraries for data manipulation such as Pandas, NumPy, or Dask.
  • Experience with Amazon SageMaker, Amazon Bedrock, Amazon OpenSearch, or graph databases like Amazon Neptune.
  • Experience with Python coding frameworks such as Flask or FastAPI.
  • Bonus: Experience in fintech or regulatory/compliance-driven environments.

We have a diverse team of bright and high-performing people and we are in growth mode. You'd fit in well if you thrive in a fast-paced environment, have a thirst for learning, enjoy improving systems and processes, and focus on excellence in everything you do.

Benefits include:

  • Group health & dental benefits
  • RRSP matching program
  • Competitive salary & vacation days
  • Hybrid work options
  • And more!

Are you ready to accelerate your career? If you have a pioneering spirit, a passion for results, seek meaningful work and want to make an impact on a changing industry, come join us!

This advertiser has chosen not to accept applicants from your region.
 
