For over 16 years, WebLife has been a leader in the e-commerce industry, operating category-defining stores and serving as one of the largest distributors of mailboxes in the US. Our success comes from building data-driven operations and high-performing teams that thrive in fast-moving markets.
Now, we are taking that foundation into a bold new venture: building an AI-powered product that will redefine how small and mid-sized businesses operate. We are a small, high-energy team that moves fast, works hands-on, and embraces the creativity and grit of a startup as we shape what the future of business looks like.
By combining the agility of a startup with the backing of a proven e-commerce leader, we are creating an
environment where ideas turn into impact and where you can help write the next chapter of business with AI.
We are seeking a Data Engineer to join our new venture and help build the data ecosystem powering our AI-native SaaS product for small and mid-sized businesses. This is a rare opportunity to design and deliver modern data pipelines, platforms, and workflows from the ground up, directly influencing how our product scales and delivers value, free from the constraints of legacy systems.
You will work closely with our Data Architect and product engineering team to translate requirements into reliable, production-grade data solutions. In this role, you will gain hands-on experience with cutting-edge technologies in a fast-paced startup environment, supported by the stability and resources of a proven e-commerce leader.
This is a remote position that offers the opportunity to shape a global AI product from its earliest stages.
Responsibilities:
• Design and build scalable ELT/ETL pipelines to integrate and transform data from diverse sources, including e-commerce platforms, CRM systems, marketing tools, and APIs, using modern orchestration frameworks.
• Implement and manage cloud-based data solutions on AWS for both real-time and batch data processing.
• Optimize data pipelines and storage layers to ensure efficiency, reliability, and scalability in a multi-tenant
SaaS environment.
• Develop and maintain data models and warehouse schemas that support analytics, reporting, and
machine learning use cases, with a focus on performance and cost-effectiveness.
• Design, implement, and maintain Infrastructure as Code to provision, configure, and scale cloud-based
data resources in a repeatable and secure manner.
• Develop and manage CI/CD pipelines for data workflows, ensuring automated testing, version control, and
seamless deployments across environments.
• Implement advanced observability practices by building monitoring dashboards, automated alerts, and
logging strategies to improve pipeline reliability and fault tolerance.
• Partner with the Data Architect and product engineering teams to design and deliver robust, production-ready data solutions aligned with business requirements.
• Enable data-driven decision-making by delivering curated, well-documented datasets and APIs for
analysts, data scientists, and business stakeholders.
• Contribute to engineering excellence through active participation in code reviews, technical design
discussions, and cross-team collaboration.
• Stay current with emerging cloud, data engineering, and platform technologies and best practices, recommending improvements and adopting new solutions that drive innovation in the data stack.
• Expand technical ownership by taking on complex projects and positioning yourself for mid-to-senior level
advancement within the organization.
Requirements:
• Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or a related technical field.
• 3+ years of experience as a Data Engineer, with a focus on large-scale data systems.
• Experience working with international clients, particularly in the US, UK, European, or Australian markets, is preferred.
• Deep expertise in AWS and GCP cloud services, covering storage, compute, data processing, analytics, and
serverless platforms (e.g., AWS S3/Redshift/Glue/Lambda, GCP BigQuery/Cloud Storage/Dataflow/Cloud
Functions).
• Strong proficiency in SQL, Python, and data modeling for analytical and operational use cases.
• Hands-on experience with production-grade ETL/ELT frameworks and workflow orchestration (e.g., Airflow,
dbt, Talend, AWS Glue, GCP Dataform/Cloud Composer).
• Proven ability to design, deploy, and optimize data warehouses and lakehouse architectures using
technologies like BigQuery, Redshift, Snowflake, and Databricks.
• Experience with Infrastructure as Code tools (e.g., Terraform, AWS CloudFormation, GCP Deployment
Manager) for cloud resource provisioning and management.
• Proficiency with CI/CD pipelines and DevOps practices for data applications, including Git-based workflows
and containerization/orchestration using Docker, ECS, GKE, or Kubernetes.
• Excellent problem-solving abilities, with the capacity to translate requirements into production-grade,
maintainable systems.
• Strong communication and collaboration skills for cross-functional team environments.
• Ability to adapt quickly to new tools, frameworks, and emerging technologies.
• Experience building observable, scalable data systems with proper monitoring and alerting.
What We Offer:
• Competitive USD-based compensation packages.
• Flexible, work-from-home setup as part of a globally connected team.
• AI-driven, innovation-focused projects.
• A culture that encourages curiosity and lifelong learning.
• Clear career paths and personal development opportunities.
Join us to build a global product from the ground up and redefine how businesses run with AI.