Description:
Randstad is the world's leading talent company. We work every day to find the best opportunities for our candidates, helping them reach their true potential. We want to create a positive impact on society by providing equitable opportunities for all people, regardless of their background, and by helping them stay relevant in a constantly changing world of work.
Randstad Digital is hiring a Data Engineer for direct integration, in a company located in Porto.
Hybrid work model (3 days on-site, 2 days remote per week).
job description
- Designs, builds, and maintains the infrastructure that enables the collection, storage, and processing of data within the HR function;
- Responsible for designing, developing, and implementing comprehensive analytics solutions that facilitate data-driven decision-making across an organization;
- Design, implement, and manage data pipelines that extract, transform, and load data from various sources to data warehouses or data lakes;
- Combine data from diverse sources (databases, APIs, file systems, and streaming services) to create a unified, organized data source;
- Optimize queries, data structures, and indexing strategies to ensure efficient and fast data retrieval, especially for large datasets;
- Resolve issues within data pipelines, monitor for performance bottlenecks, and implement solutions to improve reliability and efficiency;
- Structure and prepare data to be easily accessed and queried by data scientists, analysts, and machine learning models;
- In collaboration with the HR Reporting & Analytics team (and other data teams), create and process data features used in predictive models, ensuring data pipelines meet analytical requirements;
- Automate and schedule data workflows, ensuring that data pipelines run consistently and with minimal manual intervention;
- Use statistical techniques and tools (such as SQL, Python, R, or Excel) to analyze complex datasets and identify trends, patterns, and correlations;
- Conduct exploratory data analysis (EDA) to understand data distributions and relationships.
requirements
- Degree in Computer Science, Information Technology, Software Engineering, Statistics or Data Science;
- More than 3 years' experience working with ETL systems and big data tools;
- Skills in tuning queries and optimizing the data pipeline for faster, more efficient data flow;
- Experience designing efficient data models (star schema, snowflake schema) for relational and dimensional databases to improve data retrieval and analysis;
- Hands-on experience with data warehousing solutions like Amazon Redshift, Google BigQuery, Snowflake, or Azure Synapse for storing, processing, and querying large datasets;
- Practical experience with cloud platforms such as AWS, Google Cloud Platform, or Microsoft Azure (Microsoft Azure preferred);
- Experience working with data lakes (e.g. Azure Data Lake), particularly when handling unstructured or semi-structured data at scale;
- Proficiency in Python for data manipulation and scripting, as well as SQL for data querying and management;
- Practical experience in data visualization tools (Tableau, Looker, Power BI);
- Fluent in English, both spoken and written;
- Ability to manage complexity and timelines; detail-oriented and data-driven work style;
- Strong communication and presentation skills, with the ability to work in an international environment.
offer
Permanent direct contract (contrato sem termo), health and life insurance, meal allowance of €10.20 per working day, annual bonus of up to 8% (based on company results and individual performance), €40/month transportation allowance, and a pension plan.
to apply
location_DTS-2025-160841