105 Servicenow Platform jobs in Canada
AI Platform Engineer
Posted 4 days ago
Job Description
We’re looking for an **AI Platform Engineer** with strong backend engineering skills and a passion for building systems that support AI-driven products. In this role, you’ll help design and maintain a robust, scalable foundation that connects backend services with AI systems, while also contributing to tools and workflows that empower our AI engineers. This position sits at the intersection of AI/ML engineering, data infrastructure, and platform development. You’ll collaborate closely with AI engineers, data scientists, and product teams to ensure backend services are reliable, performant, and aligned with product goals. At the same time, you’ll help shape internal tooling, development environments, and best practices to support technical workflows.
# **Job Summary**
If you were to come on board as our AI Platform Engineer, we’d ask you to do the following:
- Build and maintain robust backend services and APIs to support AI features and data applications (e.g., using FastAPI, SQLAlchemy, Snowflake)
- Design infrastructure and data flows to support inference, experimentation, feature engineering, and system observability.
- Maintain production-grade reliability, security, scalability, and monitoring for AI systems and services.
- Collaborate with AI engineers and data scientists to productionize models and integrate AI and ML systems into user-facing applications.
- Support deployment, versioning, and maintenance of APIs, shared libraries, and tooling.
- Contribute to CI/CD pipelines, platform automation, and developer environments to improve team velocity.
- Partner with Product Managers, AI Engineers and Data Scientists to prioritize and deliver high-impact work.
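The first bullet above names FastAPI, SQLAlchemy, and Snowflake. As a rough, framework-agnostic sketch of the data-backed API pattern such work involves (stdlib `sqlite3` stands in for Snowflake, and a plain function for a FastAPI route; the table and feature names are invented for illustration):

```python
import sqlite3

def get_connection() -> sqlite3.Connection:
    # In production this would be a pooled Snowflake/Postgres connection
    # (e.g., via SQLAlchemy); sqlite3 keeps the sketch self-contained.
    conn = sqlite3.connect(":memory:")
    conn.row_factory = sqlite3.Row
    return conn

def seed(conn: sqlite3.Connection) -> None:
    # Stand-in for a feature table an AI service might read from.
    conn.execute("CREATE TABLE features (name TEXT PRIMARY KEY, value REAL)")
    conn.executemany(
        "INSERT INTO features VALUES (?, ?)",
        [("ctr_7d", 0.042), ("avg_session_s", 311.5)],
    )

def get_feature(conn: sqlite3.Connection, name: str) -> dict:
    # The body a route like GET /features/{name} might delegate to.
    row = conn.execute(
        "SELECT name, value FROM features WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        return {"error": "not found", "name": name}
    return {"name": row["name"], "value": row["value"]}

if __name__ == "__main__":
    conn = get_connection()
    seed(conn)
    print(get_feature(conn, "ctr_7d"))
```

Keeping the query logic in a plain function like this makes it trivially unit-testable, independent of whichever web framework fronts it.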
Think you have what it takes to be our AI Platform Engineer? Here’s how we’ll know:
- Bachelor's or Master’s degree in a STEM field (or equivalent experience).
- 5+ years of experience in software engineering, with strong backend expertise.
- Proficiency in Python, SQL, and backend frameworks (e.g., FastAPI, Flask, or similar).
- Experience working with databases and data warehouses (e.g., Snowflake, PostgreSQL), and related ORMs or query frameworks (SQLAlchemy)
- Familiarity with ML pipelines, model serving, and data-intensive applications.
- Comfort working in cloud-native environments (e.g., AWS, GCP, Azure) and with containerization (Docker, Kubernetes).
- Exposure to MLOps tooling (e.g., MLflow, SageMaker, Vertex AI, BentoML)
- Strong communication and collaboration skills; able to work cross-functionally with technical and non-technical teams.
- Interest in AI observability, feature stores, or experiment tracking systems.
Compass Group Canada is committed to nurturing a diverse workforce representative of the communities within which we operate. We encourage and are pleased to consider all qualified candidates, without regard to race, colour, citizenship, religion, sex, marital / family status, sexual orientation, gender identity, aboriginal status, age, disability or persons who may require an accommodation, to apply.
For accommodation requests during the hiring process, please contact for further information.
Cloud Platform Engineer
Posted today
Job Description
At CI, we see a great place to work as one that is a safe place for everyone to have a voice, where people are empowered to take ownership over meaningful work, where there is an opportunity to grow through stretching themselves, where they can work on innovative products and projects, and where employees are supported and engaged in doing so.
RESPONSIBILITIES:
- Building end-to-end cloud solutions using low-level architecture documents.
- Works closely with other IT Infrastructure Specialists, Enterprise Architects and Security to design and integrate cloud and on-premises services, enabling secure operations in the cloud environment
- General administration and operational support of the Kong API Gateway data plane.
- Manage onboarding of new applications to the API gateway, including approvals for publishing APIs to the marketplace.
- Oversee and manage issuance, renewal, and revocation of digital certificates via the Venafi Trust Protection Platform.
- Maintain, upgrade, and troubleshoot the Venafi platform, ensuring system health, high availability, and compliance.
- Provides leadership in infrastructure migration methodologies and techniques to migrate large scale enterprise applications and services.
- Implement Infrastructure as Code delivery pipelines using CI/CD and DevOps tools
- Design and deployment of pipelines consisting of Docker, ECS, and/or EKS
- Provide expert hands-on experience with core AWS technologies including IAM, ECS, EKS, VPC, ELB, KMS, S3, CloudFront, RDS, and the AWS CLI; experience working with large data sets and data lakes
- Design and provide support on AWS data engineering technologies: Glue, EMR, AWS Lake Formation, S3 as a data lake, DMS, Aurora PostgreSQL
- Practice DevOps at an expert level: Terraform, Jenkins, AWS CodeBuild, Artifactory, AWS CloudFormation, Atlassian Jira, Confluence, Bitbucket
- Demonstrate knowledge of web APIs, UIs, infrastructure, data architecture, security, and new technologies, with experience in cloud APIs, streaming, and metrics
- Ability to script with modern languages including PowerShell, Bash, or Python
- Investigate and resolve any integration issues that arise during implementation.
QUALIFICATION REQUIREMENTS:
Experience
- 5-10 years of experience in a software development or IT operations role, with at least 5 years working directly in a cloud engineering/DevOps role
- Proven experience and strong knowledge of public cloud (AWS preferred) across IaaS/PaaS/serverless
- Experience deploying and managing containers and container orchestration (e.g., Docker and Kubernetes)
- Strong experience in Information technology, with infrastructure and platform services automation expertise.
- Extensive experience with engineering and supporting containerization technology.
- Proven experience with Kubernetes/OpenShift and Docker.
- Experience with industry-leading CI/CD and DevOps tools including Confluence, Jira, Rundeck, Bitbucket, Jenkins, Spinnaker, Ansible, Terraform, etc.
- Experience in developing cloud migration patterns
- Experience with multiple programming languages such as Java, Python and PHP
- Experience and knowledge of Infrastructure security environments including cloud security
- Experience and knowledge of networking concepts
- Prior experience with information technology in a financial institution preferred.
Education/Training
- Bachelor’s degree in business, computer science, or equivalent experience
- Public cloud related certifications (AWS)
Technical Skills
- AWS Cloud Resources (OU, VPC, S3, EC2, ECS, EKS, RDS, EMR, Lambda, API Gateway, Data Lake etc.)
- Infrastructure as Code (Terraform, CloudFormation)
- Security & Identity (IAM, Checkmarx, Sonarqube, CrowdStrike)
- OS (RHEL, AIX, Windows)
- CI/CD (Bitbucket, Git, Jenkins, Spinnaker, ArgoCD, AWS Code Suite, JFrog Artifactory)
- Cloudflare, OKTA, ADFS
- PHP, Node.js, React.js, Python
Knowledge, Skills, and Abilities
- Evangelist for sound cloud application architecture design and development
- Ability to operationalize architectural designs and solutions into reusable capabilities and patterns
WHAT WE OFFER
- Modern HQ location within walking distance from Union Station.
- Equipment Purchase Program.
- Training Reimbursement.
- Paid Professional Designations.
- Employee Share Purchase Program (ESPP).
- Corporate Discount Program.
- Enhanced group benefits.
- Parental Leave Top–up program.
- Fitness membership discounts.
- Volunteer paid Days.
We are focused on building a diverse and inclusive workforce. If you are excited about this role and are not confident you meet all the qualification requirements, we encourage you to apply to investigate the opportunity further.
Please submit your resume in confidence by clicking “Apply”. Only qualified candidates selected for an interview will be contacted. CI Financial Corp. and all of our affiliates (“CI”) are committed to fair and accessible employment practices and provide reasonable accommodations for persons with disabilities. If you require accommodations in order to apply for any job opportunities, require this posting in an additional format, or require accommodation at any stage of the recruitment process please contact us at , or call ext. 4747.
Kubernetes Platform Engineer
Posted today
Job Description
Experience: 7+ years
Location: Halifax (Hybrid - 3 days office)
- Strong understanding of platform engineering and Pods/Containers, with experience managing multi-cluster environments.
- Must have sound knowledge of Kubernetes internals, kubectl commands, image deployment, Zookeeper, PVs/PVCs, S3/NAS/SAN stores, device mounts, SSL/TLS/Ingress/load-balancer issue resolution, infra automation and deployment, Linux/Unix shell scripting, and Git.
- Hands-on experience in writing/creating/modifying Helm charts, adding overlays, and adding/modifying Helm templates.
- Dev/non-prod infra support: work with the CTB team in rolling out more Kubernetes clusters, optimize cluster performance, make required changes in YAML files, configs, PVs/PVCs, etc., and automate cluster provisioning through GitOps.
- Should be able to read and understand YAML config files and Helm charts, and carry out deployments in a multi-cluster environment.
- Should be able to investigate and resolve infrastructure-related issues.
Skills: Ansible, ArgoCD, AWS EKS, Azure AKS, Docker, Git, GKE, Helm Chart, Kubernetes, Python, Rancher
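Managing multi-cluster environments, as the bullets above describe, usually means running the same kubectl operation against several contexts. A minimal side-effect-free sketch (the context names and this helper are illustrative, not from the posting):

```python
from typing import List

def rollout_restart_cmds(
    contexts: List[str], deployment: str, namespace: str = "default"
) -> List[List[str]]:
    """Build the kubectl commands to restart a deployment on every cluster.

    Returned as argument lists suitable for subprocess.run(); nothing is
    executed here, so the sketch stays side-effect free.
    """
    return [
        ["kubectl", "--context", ctx, "-n", namespace,
         "rollout", "restart", f"deployment/{deployment}"]
        for ctx in contexts
    ]

if __name__ == "__main__":
    for cmd in rollout_restart_cmds(["dev-east", "dev-west"], "api-gateway"):
        print(" ".join(cmd))
```

In practice a GitOps tool like ArgoCD would drive such rollouts declaratively; this just shows the per-context fan-out pattern.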
Data Platform Engineer
Posted today
Job Description
About the role
We’re looking for an engineer who thrives on building scalable data platforms and enjoys tackling complex backend challenges. This isn’t just a data engineering role: you’ll be designing and optimizing the data platform that powers our API, managing everything from data streaming and storage to analytics features at petabyte scale.
You should be comfortable navigating both data and backend engineering, with a solid foundation in software development. You’ll work with advanced data architectures, including Iceberg, Flink, and Kafka, tackling large-scale challenges and contributing to core product development using Java and Python. If you’re excited by the opportunity to shape a high-impact platform and tackle diverse engineering problems, we’d love to hear from you.
What you will do:
- Own projects aimed at enhancing data replication, storage, enrichment, and reporting capabilities.
- Build and optimize efficient streaming and batch data pipelines that support our core product and API.
- Design scalable storage solutions for handling petabytes of IoT and time-series data.
- Develop and maintain real-time data systems to ingest growing data volumes.
- Implement distributed tracing, data lineage and observability patterns to improve monitoring and troubleshooting.
- Write clean, maintainable code in Java and Python for various platform components.
- Shape architectural decisions to ensure scalability and reliability throughout the data platform.
What we’re looking for:
- 3+ years of experience in platform engineering or data engineering.
- 2+ years of experience designing and optimizing data pipelines at TB to PB scale.
- Proficient in Java, with a focus on clean, maintainable code.
- Strong system design skills with a focus on big data and real-time workflows.
- Familiarity with lake-house architectures (e.g., Iceberg, Delta, Paimon).
- Experience with real-time data processing tools like Kafka, Flink and Spark.
- Knowledge of distributed systems and large-scale data challenges.
- Strong problem-solving skills and a collaborative mindset.
Nice-to-have:
- Experience working with orchestration / workflow engines (e.g. Step Functions, Temporal)
- Experience with serverless and/or event-driven architectures (e.g. AWS Lambda, SQS).
- Experience with JavaScript/TypeScript (for cross-team work)
Tech stack:
- Languages: Java, Python
- Framework: Spring Boot
- Storage: AWS S3, AWS DynamoDB, Apache Iceberg, Redis
- Streaming: AWS Kinesis, Apache Kafka, Apache Flink
- ETL: AWS Glue, Apache Spark
- Serverless: AWS SQS, AWS EventBridge, AWS Lambda, and Step Functions
- Infrastructure as Code: AWS CDK
- CI/CD: GitHub Actions
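The data lineage and observability bullet above can be made concrete with a toy lineage graph: record which upstream datasets each dataset was derived from, so that when a source is found to be corrupt you can find everything it touched. A minimal stdlib sketch (all dataset names are invented):

```python
from collections import defaultdict

class LineageGraph:
    """Track which upstream datasets each dataset was derived from."""

    def __init__(self) -> None:
        self._parents: dict[str, set[str]] = defaultdict(set)

    def record(self, output: str, inputs: list[str]) -> None:
        # Called by each pipeline step as it writes its output dataset.
        self._parents[output].update(inputs)

    def upstream(self, dataset: str) -> set[str]:
        """All transitive ancestors of a dataset (depth-first walk)."""
        seen: set[str] = set()
        stack = list(self._parents.get(dataset, ()))
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(self._parents.get(node, ()))
        return seen

if __name__ == "__main__":
    g = LineageGraph()
    g.record("trips_enriched", ["trips_raw", "devices"])
    g.record("daily_report", ["trips_enriched"])
    print(g.upstream("daily_report"))
```

Production systems typically capture this automatically (e.g., via OpenLineage-style integrations), but the underlying data model is this same ancestor graph.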
Atlassian Platform Engineer
Posted today
Job Description
Position: Atlassian Platform Engineer
Term: 6 months, with the option to extend by 6 months
Location: Victoria, BC - work may be done remotely or hybrid
Preference will be given to local candidates who can work at the Victoria office for 3-4 working days per week.
Expected working hours: 35 hours a week, from 8:30 AM to 4:30 PM
POSITION DESCRIPTION
Reporting to the Manager, DevSecOps, the Atlassian Platform Consultant is responsible for conducting a comprehensive evaluation of the client's Atlassian platform implementation and providing strategic recommendations for optimization. The consultant will assess current usage patterns, workflows, and configurations across JIRA, Confluence, Trello, Bitbucket, and Guardian to identify opportunities for improvement, cost savings, and enhanced operational efficiency. Working closely with technology teams and end users, they will develop actionable recommendations for workflow optimization, governance processes, and platform best practices. The position can be based in either Vancouver or Victoria with occasional travel between the two cities.
As a contract role designed to optimize the client's Atlassian investment, the position includes significant responsibilities for knowledge transfer, comprehensive documentation of findings and recommendations, and collaboration with internal teams to ensure sustainable improvements beyond the contract period.
Primary Responsibilities
Platform Assessment & Analysis
- Conduct comprehensive evaluation of current Atlassian platform usage across JIRA, Confluence, Trello, Bitbucket, and Guardian
- Analyze existing workflows, configurations, and integrations to identify inefficiencies and optimization opportunities
- Assess add-on and extension utilization to identify underutilized or redundant tools
- Review user adoption patterns and identify areas where training or process improvements could increase value
- Evaluate platform administration practices against industry best practices and Atlassian recommendations
Workflow Optimization & Best Practices
- Develop or modify JIRA workflows to optimize efficiency and ensure best practices are followed across the platform
- Create standardized workflow templates that can be implemented organization-wide
- Review and optimize issue type schemes, field configurations, and screen schemes
- Identify opportunities to consolidate or streamline workflows across similar project types
- Document workflow design principles and standards for future implementations
Cost Optimization & Licensing
- Identify opportunities for cost savings through license optimization and add-on rationalization
- Analyze user licensing to ensure appropriate access levels and identify potential cost reductions
- Evaluate add-on renewals and recommend retention, replacement, or elimination
- Assess storage usage and recommend optimization strategies
- Provide cost-benefit analysis for recommended changes
Governance & Compliance
- Develop governance framework for Atlassian platform administration and change management
- Create audit and compliance reporting mechanisms to track platform usage and adherence to standards
- Establish roles and responsibilities for ongoing platform management
- Design approval workflows for platform changes and new project creation
- Develop policies for data retention, archiving, and information lifecycle management
Documentation & Knowledge Transfer
- Create comprehensive documentation of current state assessment findings
- Develop detailed implementation roadmap for recommended improvements with prioritization
- Produce user guides and administrative documentation for optimized workflows and processes
- Deliver formal presentations of findings and recommendations to stakeholders
- Provide training materials for key platform administrators on new workflows and governance processes
Collaboration & Stakeholder Engagement
- Engage with platform administrators, project leads, and end users to understand pain points and requirements
- Facilitate workshops to gather feedback on current platform usage and desired improvements
- Present findings and recommendations to technology leadership and stakeholders
- Collaborate with internal teams to validate feasibility of recommendations
- Provide guidance during implementation of approved recommendations
Qualifications Required:
- 5+ years of hands-on experience with Atlassian platform administration and optimization
- Deep expertise in JIRA administration including workflow design, project configuration, and custom field management
- Extensive experience with Confluence administration, space management, and content organization
- Demonstrated experience evaluating enterprise Atlassian implementations and providing strategic recommendations
- Strong understanding of Atlassian licensing models and cost optimization strategies
- Experience developing governance frameworks and compliance reporting for collaboration platforms
- Proven track record of workflow optimization and implementation of best practices across large organizations
- Excellent analytical skills with ability to assess complex configurations and identify improvement opportunities
- Strong communication skills with ability to present technical findings to non-technical stakeholders
- Experience creating comprehensive documentation and implementation roadmaps
Preferred:
- Atlassian Certified Professional certifications (JIRA Administrator, Confluence Administrator)
- Experience with Bitbucket, Trello, and Guardian (formerly JIRA Align)
- Familiarity with Atlassian Marketplace apps and integration ecosystem
- Experience with Atlassian Cloud migrations or hybrid implementations
- Knowledge of DevOps practices and integration of Atlassian tools with CI/CD pipelines
- Background in organizational change management and user adoption strategies
- Experience with ServiceNow integration with JIRA (Exalate or similar)
- Understanding of financial services or investment management industry requirements
Competencies:
- Strategic thinking and systems analysis
- Attention to detail and pattern recognition
- Stakeholder engagement and facilitation
- Clear communication of technical concepts
- Problem-solving and critical thinking
- Documentation and knowledge transfer
- Change management and organizational influence
- Cost-benefit analysis and business case development
Fluid thanks you for the interest you have shown in this opportunity. However, only candidates possessing the required skills will be contacted.
Fluid adheres to the principles of equal employment opportunity and affirmative action. We welcome and encourage diversity in our workplace.
Candidates must be eligible to work in Canada.
Platform Engineer - Cloud, LUS
Posted 7 days ago
Job Description
As a leader in micromobility, Lyft powers millions of rides daily across over 200 cities with our cutting-edge ride-sharing, bike-sharing, and scooter-sharing technologies. Our Montreal office is the birthplace of North America's first automated bike-share system, Bixi, which has since revolutionized urban mobility. Today, our pioneering system is operational in more than 50 cities worldwide, including Barcelona, Bogota, Boston, Buenos Aires, Chicago, Dubai, London, Madrid, Mexico City, Montreal, New York, Rio de Janeiro, San Francisco, and Washington DC, to name just a few. Join us and be part of the team behind some of the world's largest and most successful bike-share systems!
Every engineering team at Lyft is responsible for running and operating the software that they build. The Infrastructure team works towards standardizing and supporting all the rapidly evolving teams throughout our organization, assessing their architecture, helping them design scalable services, and fostering excellent operational practices. It's a mission-critical role of ensuring that our systems are always healthy, monitored, automated, and designed to scale.
**Responsibilities:**
+ Contribute features to our infrastructure codebase in Terraform, Python and Golang
+ Deploy new features across a global infrastructure while respecting change management practices and maintaining high availability
+ Apply best practices in working with source code including branch management and code review
+ Conduct disaster recovery rehearsals and regular maintenance of the system
+ Respond promptly and constructively to outages and security incidents
+ Analyse system failures using modern log analysis tools, metrics and tracing
+ Resolve operational issues, communicating clearly and professionally with other stakeholders including development teams and city authorities
+ Document new features and create incident post-mortems as necessary
**Experience:**
+ Experience with deploying and creating new containers using Docker images
+ Kubernetes cluster management including cluster upgrades and helm chart management
+ AWS resource management in a cross-region context (VPC, EC2, RDS, SQS, DynamoDB)
+ Terraform 1.x / HCL2 including best practices for submodules and state management
+ Python 3.x including unit testing techniques and type hinting
+ Golang, ideally including experience with Kubernetes Operator SDK framework
+ Relational databases such as MySQL
+ Experience with Crossplane is a plus
+ Experience with compliance frameworks such as PCI-DSS and ISO27001 a plus
+ Must be fluent in spoken and written English and, at minimum, be willing to learn French if required
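The Python bullet above asks for type hinting and unit testing techniques. A small example of the style implied: a fully annotated helper plus assertion-based tests (the region names and this function are invented for illustration):

```python
from typing import Dict, Optional

def pick_healthy_region(
    latencies_ms: Dict[str, Optional[float]],
    threshold_ms: float = 250.0,
) -> Optional[str]:
    """Return the lowest-latency region under the threshold, or None.

    Regions with no measurement (None) are skipped; the Optional types
    make that contract explicit to both readers and type checkers.
    """
    healthy = {
        region: latency
        for region, latency in latencies_ms.items()
        if latency is not None and latency <= threshold_ms
    }
    if not healthy:
        return None
    return min(healthy, key=healthy.__getitem__)

if __name__ == "__main__":
    print(pick_healthy_region(
        {"us-east-1": 80.0, "eu-west-1": 120.0, "ap-south-1": None}
    ))
```

A type checker such as mypy would flag callers that forget the function can return `None`, which is exactly the kind of bug type hints are meant to catch early.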
**Benefits:**
+ Extended health and dental coverage options, along with life insurance and disability benefits
+ Mental health benefits
+ Family building benefits
+ Pet benefits
+ Access to a Lyft funded Health Care Savings Account
+ RRSP plan to help save for your future
+ In addition to provincial observed holidays, salaried team members are covered under Lyft's flexible paid time off policy. The policy allows team members to take off as much time as they need (with manager approval). Hourly team members get 15 days paid time off, with an additional day for each year of service
+ Lyft is proud to support new parents with 18 weeks of paid time off, designed as a top-up plan to complement provincial programs. Biological, adoptive, and foster parents are all eligible.
+ Subsidized commuter benefits
_Lyft is committed to creating an inclusive workforce that fosters belonging. Lyft believes that every person has a right to equal employment opportunities without discrimination because of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, marital status, family status, disability, pardoned record of offences, or any other basis protected by applicable law or by Company policy. Lyft also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. Accommodation for persons with disabilities will be provided upon request in accordance with applicable law during the application and hiring process. Please contact your recruiter if you wish to make such a request._
_Lyft highly values having employees working in-office to foster a collaborative work environment and company culture._ _This role will be in-office on a hybrid schedule - Team Members will be expected to work in the office at least 3 days per week, including on Mondays, Wednesdays, and Thursdays._ _Lyft considers working in the office at least 3 days per week to be an essential function of this hybrid role. Your recruiter can share more information about the various in-office perks Lyft offers._ _Additionally, hybrid roles have the flexibility to work from anywhere for up to 4 weeks per year._ _#Hybrid_
_The expected base pay range for this position in the Montreal area is $88,000-$110,000. Salary ranges are dependent on a variety of factors, including qualifications, experience and geographic location. Range is not inclusive of potential equity offering, bonus or benefits. Your recruiter can share more information about the salary range specific to your working location and other factors during the hiring process._
Principal Platform Engineer - Americas
Posted today
Job Description
Job Description: We’re looking for a curious, rigorous, problem-hungry platform engineer (who codes!) to carry the ball as we bring Ashby to the big leagues. Ashby builds software that lets talent teams build an efficient, delightful, respectful hiring process. Similarly, you’re an engineer who wants to build a “paved road” that excellent engineering teams can safely take to the moon and back. We have notable customers like Notion, Linear, Shopify, and Snowflake. Our growth and retention metrics are best-in-class among our peers: we have tens of millions in ARR, growing >100% year over year, over 2500 customers, very low churn, and many years of runway. We’ll share more details once we meet, but you now probably have a good idea as to why we’re hiring for this role.
Senior Data Platform Engineer
Posted today
Job Description
Who We Are:
Alpaca is a US-headquartered self-clearing broker-dealer and brokerage infrastructure for stocks, ETFs, options, crypto, fixed income, 24/5 trading, and more. Our recent Series C funding round brought our total investment to over $170 million, fueling our ambitious vision.
Amongst our subsidiaries, Alpaca is a licensed financial services company, serving hundreds of financial institutions across 40 countries with our institutional-grade APIs. This includes broker-dealers, investment advisors, wealth managers, hedge funds, and crypto exchanges, totalling over 6 million brokerage accounts.
Our global team is a diverse group of experienced engineers, traders, and brokerage professionals who are working to achieve our mission of opening financial services to everyone on the planet . We're deeply committed to open-source contributions and fostering a vibrant community, continuously enhancing our award-winning, developer-friendly API and the robust infrastructure behind it.
Alpaca is proudly backed by top-tier global investors, including Portage Ventures, Spark Capital, Tribe Capital, Social Leverage, Horizons Ventures, Unbound, SBI Group, Derayah Financial, Elefund, and Y Combinator.
Our Team Members:
We're a dynamic team of 230+ globally distributed members who thrive working from our favorite places around the world, with teammates spanning the USA, Canada, Japan, Hungary, Nigeria, Brazil, the UK, and beyond!
We're searching for passionate individuals eager to contribute to Alpaca's rapid growth. If you align with our core values—Stay Curious, Have Empathy, and Be Accountable—and are ready to make a significant impact, we encourage you to apply.
Your Role:
We are seeking a Senior Data Engineer to design and develop the data management layer for our platform to ensure its scalability as we expand to larger customers and new jurisdictions. At Alpaca, data engineering encompasses financial transactions, customer data, API logs, system metrics, augmented data, and third-party systems that impact decision-making for both internal and external users. We process hundreds of millions of events daily, with this number growing as we onboard new customers.
We prioritize open-source solutions in our data management approach, leveraging a Google Cloud Platform (GCP) foundation for our data infrastructure. This includes batch/stream ingestion, transformation, and consumption layers for BI, internal use, and external third-party sinks. Additionally, we oversee data experimentation, cataloging, and monitoring and alerting systems.
Our team is 100% distributed and remote.
Things You Get To Do:
- Design and oversee key forward and reverse ETL patterns to deliver data to relevant stakeholders.
- Develop scalable patterns in the transformation layer to ensure repeatable integrations with BI tools across various business verticals.
- Expand and maintain the constantly evolving elements of the Alpaca Data Lakehouse architecture.
- Collaborate closely with sales, marketing, product, and operations teams to address key data flow needs.
- Operate the system and manage production issues in a timely manner.
Who You Are (Must-Haves):
- 7+ years of experience in data engineering, including 2+ years of building scalable, low-latency data platforms capable of handling >100M events/day.
- Proficiency in at least one programming language, with strong working knowledge of Python and SQL.
- Experience with cloud-native technologies like Docker, Kubernetes, and Helm.
- Strong hands-on experience with relational database systems.
- Experience in building scalable transformation layers, preferably through formalized SQL models (e.g., dbt).
- Ability to work in a fast-paced environment and adapt solutions to changing business needs.
- Experience with ETL technologies like Airflow and Airbyte.
- Production experience with streaming systems like Kafka.
- Exposure to infrastructure, DevOps, and Infrastructure as Code (IaC).
- Deep knowledge of distributed systems, storage, transactions, and query processing.
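The "formalized SQL models (e.g., dbt)" bullet above is essentially about layering named SELECT statements so downstream models build on upstream ones. A minimal sketch of the idea using stdlib `sqlite3` (the table and model names are invented; dbt itself materializes such models from templated `.sql` files):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# "Raw" source data, as it might land from an ingestion pipeline.
conn.execute("CREATE TABLE raw_fills (account TEXT, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO raw_fills VALUES (?, ?, ?)",
    [("a1", "AAPL", 10), ("a1", "AAPL", -4), ("a2", "TSLA", 3)],
)

# A dbt-style "staging model" is just a named SELECT; later models
# (marts, reports) would reference stg_positions instead of raw_fills.
conn.execute("""
    CREATE VIEW stg_positions AS
    SELECT account, symbol, SUM(qty) AS position
    FROM raw_fills
    GROUP BY account, symbol
""")

rows = conn.execute(
    "SELECT account, symbol, position FROM stg_positions ORDER BY account"
).fetchall()
print(rows)  # [('a1', 'AAPL', 6), ('a2', 'TSLA', 3)]
```

The repeatability the posting asks for comes from keeping each transformation a pure SELECT over its upstream models, so the whole graph can be rebuilt from raw data at any time.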
How We Take Care of You:
- Competitive Salary & Stock Options
- Health Benefits
- New Hire Home-Office Setup: One-time USD $00
- Monthly Stipend: USD 150 per month via a Brex Card
Alpaca is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse workforce.
Recruitment Privacy Policy
Senior Data Platform Engineer
Posted today
Job Description
About the role
We’re looking for an engineer who thrives on building scalable data platforms and enjoys tackling complex backend challenges. This isn’t just a data engineering role: you’ll be designing and optimizing the data platform that powers Terminal’s API, managing everything from data streaming and storage to analytics features at petabyte scale.
You will lead building the data platform for the long-term vision. You’ll not only design and deliver complex systems, but also drive technical strategy, set architectural direction, and influence how the data platform evolves. You’ll partner closely with product, engineering teams, and leadership to ensure we’re building the right abstractions and reusable components that scale with our growth. You know when to slow down to build the right solution vs. work against time. This is a role with broader ownership, where your leadership and judgment will raise the technical bar across the team and directly impact how customers succeed with high-volume telematics data.
What you will do:
- Own projects aimed at enhancing data replication, storage, enrichment, and reporting capabilities.
- Build and optimize efficient streaming and batch data pipelines that support our core product and API.
- Design scalable storage solutions for handling petabytes of IoT and time-series data.
- Develop and maintain real-time data systems to ingest growing data volumes.
- Implement distributed tracing, data lineage and observability patterns to improve monitoring and troubleshooting.
- Manage infrastructure that powers the current and future services, ensuring scalability, reliability, and zero downtime for HA services.
- Write clean, maintainable code in Java and Python for various platform components.
- Shape architectural decisions to ensure scalability and reliability throughout the data platform.
What we’re looking for:
- 6+ years of experience in platform engineering or data engineering.
- 4+ years of experience designing and optimizing data pipelines at TB to PB scale.
- Proficient in Java, with a focus on clean, maintainable code.
- Strong system design skills with a focus on big data and real-time workflows.
- Experience with lake-house architectures (e.g., Iceberg, Delta, Paimon).
- Experience with real-time data processing tools like Kafka, Flink and Spark.
- Knowledge of distributed systems and large-scale data challenges.
- Strong problem-solving skills and a collaborative mindset.
Nice-to-have:
- Experience working with orchestration / workflow engines (e.g. Step Functions, Temporal)
- Experience with serverless and/or event-driven architectures (e.g. AWS Lambda, SQS).
- Experience with JavaScript/TypeScript (for cross-team work)
Tech stack:
- Languages: Java, Python
- Framework: Spring Boot
- Storage: AWS S3, AWS DynamoDB, Apache Iceberg, Redis
- Streaming: AWS Kinesis, Apache Kafka, Apache Flink
- ETL: AWS Glue, Apache Spark
- Serverless: AWS SQS, AWS EventBridge, AWS Lambda, and Step Functions
- Infrastructure as Code: AWS CDK
- CI/CD: GitHub Actions
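Designing storage for petabytes of IoT and time-series data, as this posting describes, typically starts with a partitioning scheme over an object store like S3: day buckets keep time-range scans cheap, and a stable hash shard spreads hot devices across prefixes. A minimal sketch (the key layout and names are illustrative, not Terminal's actual scheme):

```python
import zlib
from datetime import datetime, timezone

def hash_shard(device_id: str, shards: int) -> int:
    # CRC32 is stable across processes, unlike built-in hash()
    # with per-run PYTHONHASHSEED randomization.
    return zlib.crc32(device_id.encode()) % shards

def partition_key(device_id: str, ts: datetime, shards: int = 16) -> str:
    """Compose an object-store prefix for an IoT event.

    Hive-style dt=/shard=/device= prefixes let engines like Spark or
    Flink prune partitions when querying a time range.
    """
    shard = hash_shard(device_id, shards)
    return f"dt={ts:%Y-%m-%d}/shard={shard:02d}/device={device_id}"

if __name__ == "__main__":
    ts = datetime(2024, 6, 1, 12, 30, tzinfo=timezone.utc)
    print(partition_key("veh-0042", ts))
```

Table formats like Apache Iceberg (listed in the stack above) manage this partitioning as metadata rather than raw path conventions, but the underlying layout decision is the same.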