Data Engineer

Payoneer

Gurugram, India · On-site · Competitive · Posted 6 days ago

Original posting

About Payoneer

Founded in 2005, Payoneer is the global financial platform that removes friction from doing business across borders, with a mission to connect the world's underserved businesses to a rising global economy. We're a community of over 2,500 colleagues all over the world, working to serve customers and partners in over 190 countries and territories.

By taking the complexity out of financial workflows, from global payments and compliance to multi-currency and workforce management, working capital, and business intelligence, we give businesses the tools they need to work efficiently worldwide and grow with confidence.

Role summary

We're looking for a Data Engineer who is a hands-on builder with a drive for excellence and a pragmatic, problem-solving mindset. You'll translate business and product needs into reliable batch and streaming data pipelines in a payments and fintech environment.

This role is best suited to an engineer with solid data engineering fundamentals who is excited to build, operate, and improve production data systems, while continuing to grow in streaming, platform reliability, and cloud-native data engineering practices.

AI-first mindset: We value engineers who can incorporate AI-enabled and agentic development practices into day-to-day delivery, using AI responsibly to accelerate development and testing, improve observability and data quality, and solve engineering use cases where it creates clear business value.

What You'll Do

  • Build, maintain, and optimize batch and streaming data pipelines that power product and business use cases, using distributed data processing frameworks such as Apache Beam, Spark, or Flink, with managed runners or engines such as Google Cloud Dataflow where relevant.
  • Develop curated datasets and dimensional models for analytics and reporting in cloud data warehouses.
  • Implement workflow orchestration and automation with an emphasis on reliability, repeatability, and clear failure handling.
  • Contribute to event-driven integrations using messaging platforms such as Kafka, building familiarity with core streaming concepts including windowing, late-data handling, replay and backfill strategies, and idempotency.
  • Work with operational data stores such as Bigtable, SQL Server, MongoDB, or equivalents where aligned to access patterns, scalability, and performance requirements.
  • Strengthen data quality and trust through validation frameworks, pipeline observability, monitoring, and governance-aligned practices.
  • Use AI-assisted development tools to improve throughput (for example, faster debugging, automated test scaffolding, and better documentation), and explore adjacent AI use cases such as anomaly detection on pipeline or business metrics.

Who You Are

  • You have a solid foundation in data engineering and are excited to build and operate reliable data pipelines in production.
  • You're comfortable working across core batch data engineering patterns, and you have some exposure to streaming concepts and distributed processing at scale.
  • You enjoy debugging and improving performance and data quality.
  • You collaborate well with product, analytics, and business stakeholders and can translate requirements into clear technical tasks.
  • You care about engineering hygiene, including testing, documentation, and operational ownership, and you're open to using AI responsibly to improve your throughput and the quality of what you ship.

Key skills and competencies

  • Hands-on experience building and maintaining production data pipelines, with strong SQL and data modelling fundamentals.
  • Experience with at least one distributed data processing framework such as Apache Beam, Spark, or Flink.
  • Experience with at least one cloud data warehouse such as BigQuery, Snowflake, Redshift, Databricks SQL, or Synapse.
  • Familiarity with pipeline orchestration using frameworks such as Airflow, Composer, Prefect, or equivalent.
  • Exposure to streaming platforms such as Kafka and an understanding of core streaming concepts including windowing, late data, replay, and idempotency.
  • Understanding of data quality and observability basics, including validation checks, monitoring, and lineage or metadata concepts.

Preferred

  • Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
  • Experience with at least one major cloud data platform such as Google Cloud, AWS, or Azure.
  • Prior exposure to fintech, payments, lending, or broader financial services domains.
  • Exposure to automation tools for reporting workflows.

Why this role

You'll work on high-impact data foundations that directly enable product outcomes, reporting, and downstream AI/ML use cases.

You'll ship in a collaborative environment that values clarity, ownership, and continuous improvement, with room to grow your technical depth across both batch and streaming systems.

The Payoneer Ways of Working

Act as our customer's partner on the inside
Learning what they need and creating what will help them go further.

Do it. Own it.
Being fearlessly accountable in everything we do.

Continuously improve
Always striving for a higher standard than our last.

Build each other up
Helping each other grow, as professionals and people.

If this sounds like a business, a community, and a mission you want to be part of, apply today.

We are committed to providing a diverse and inclusive workplace. Payoneer is an equal opportunity employer, and all qualified applicants will receive consideration for employment no matter your race, color, ancestry, religion, sex, sexual orientation, gender identity, national origin, age, disability status, protected veteran status, or any other characteristic protected by law. If you require reasonable accommodation at any stage of the hiring process, please speak to the recruiter managing the role for any adjustments. Decisions about requests for reasonable accommodation are made on a case-by-case basis.
