Requirements: English
Company: Addepto
Region: Wysokie Mazowieckie, Podlaskie Voivodeship
Addepto is a leading consulting and technology company specializing in AI and Big Data, helping clients deliver innovative data projects. We partner with top-tier global enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. Our exclusive focus on AI and Big Data has earned us recognition by Forbes as one of the top 10 AI companies.

As a Senior Data Engineer, you will have the exciting opportunity to work with a team of technology experts on challenging projects across various industries, leveraging cutting-edge technologies. Here are some of the projects we are seeking talented individuals to join:

- Development and maintenance of a large platform for processing automotive data. The technology stack includes Spark, Cloudera, Airflow, Iceberg, Python, and AWS.
- An Azure- and Databricks-powered data platform combining diverse enterprise and public data sources. The platform is at an early stage of development, covering the design of architecture and processes and offering freedom in technology selection.
- A centralized reporting platform for a growing US telecommunications company. This project involves implementing BigQuery and Looker as the central platform for data reporting. It focuses on centralizing data, integrating various CRMs, and building executive reporting solutions to support decision-making and business growth.

What we offer:

- Work in a supportive team of passionate enthusiasts of AI and Big Data.
- Engage with top-tier global enterprises and cutting-edge startups on international projects.
- Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces.
- Choose your form of cooperation: B2B, employment contract, or contract of mandate.
- Make use of 20 fully paid days off available for B2B contractors and individuals under contracts of mandate.
- Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching.

Your responsibilities:

- Develop and maintain a high-performance data processing platform for automotive data, ensuring scalability and reliability.
- Optimize data workflows to ensure efficient data ingestion, processing, and storage using technologies such as Spark, Cloudera, and Airflow.
- Work with data lake technologies (e.g., Iceberg).
- Leverage cloud services (AWS) for infrastructure management and scaling of processing workloads.
- Write and maintain high-quality Python (or Java/Scala) code for data processing tasks and automation.

What you need to succeed:

- At least 5 years of commercial experience implementing, developing, or maintaining Big Data systems and data governance and data management processes.
- Strong programming skills in Python (or Java/Scala): writing clean code, OOP design.
- Hands-on experience with Big Data technologies such as Spark, Cloudera Data Platform, Airflow, NiFi, Docker, Kubernetes, Iceberg, Hive, Trino, or Hudi.
- Ability to work independently and take ownership of project deliverables.
- Fluent English (at least C1 level).
- Bachelor's degree in technical or mathematical studies.

Visit our website (career page) and social media (Facebook, LinkedIn).