Data Engineer
Olive Technologies Ltd
£45,000
Olive Technologies Ltd, Whitley, Reading
- Full time
- Permanent
- Remote working
Posted 14 May
Closing date: Not specified
Job Ref: e74cc50d4e9847348d60912dc2ad3799
Full Job Description
We are currently looking for a Data Engineer for our Cloud and Enterprise Information
Management Practice. You will develop architectural models for Cloud-based data
management solutions that leverage Microsoft Azure / AWS / GCP / Snowflake technologies
and operate at large scale and high performance.
MAIN DUTIES AND RESPONSIBILITIES
1. Responsible for the functional design requirements of a cloud-based Data Management solution; design conceptual, logical and physical data models that can meet current and future business needs.
2. Provide Cloud and Data Management environments; able to deep dive and identify the root cause of issues.
3. Evaluate and plan DWH migrations to the Cloud.
4. Manage the data engineering roadmap and help to bring the organization towards an automated, scalable and fault-tolerant infrastructure.
5. Write and build advanced data pipelines and data lakes, managing ETL processes that perform a number of transformations (see the pipeline sketch at the end of this description).
6. Understand relational and big data models to both store and access data from data visualization and other query tools.
7. Build data management platforms using Cloud technologies such as Data Factory, Data Lake, Databricks, Cosmos DB, Blob Storage, Redshift, Lambda, RDS, S3 and EC2.

SKILLS AND EXPERIENCE
- Experience in Data Architecture, Data Management and the Analytical Technologies of modern data platforms on Cloud.
- Experience with Extract, Transform & Load (ETL) and ELT development is required.
- Familiarity and a good understanding of data models (relational and dimensional) is required.
- Proven expertise in Data Modeling, Data Profiling, Data Analysis, Data Quality, Data Governance and Data Lineage.
- Experience in leveraging Snowflake for building data lakes / data warehouses is highly preferred (see the loading sketch at the end of this description).
- Excellent SQL skills along with ETL/data processing tools such as Informatica, Talend, Pentaho, Databricks, Spark, Alteryx.
- Experience with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing and stored procedures.
- Experience in programming languages such as Python.
- Experience in using Python libraries for data mining, data modeling, processing and data visualization.
- Experience in SQL / NoSQL databases such as Oracle, MS SQL Server, Cassandra, MongoDB, HBase, Zen, Elastic Stack, CouchDB, DynamoDB and others.
- Experience in data pipelines such as Kafka or Apache Airflow.
- Experience in the Big Data ecosystem (Apache Hadoop, Spark, Kafka) and/or the IaaS or PaaS ecosystem (Microsoft Azure, Google Cloud, AWS).
- Should be strong in Azure Data Factory, Data Lake, Databricks, Cosmos DB, Blob Storage, Redshift, Lambda, RDS, S3, EC2, Kinesis, AWS/Azure/Snowflake Data Warehouse and other services.
- Experience in BI tools such as Tableau and Power BI.
- Experience in Continuous Integration and Delivery.
- Experience with JIRA/Confluence and source control environments like Git, GitHub etc.
- Experience with software development in a Windows or Linux/Unix environment.
- Experience in Agile methodologies (Kanban and Scrum).
- Ability to work autonomously and with small cross-functional teams.
- Communication skills, including technical documentation and the development and delivery of demonstrations and presentations, with the ability to listen and communicate effectively with functional leaders.
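
To give a concrete picture of the pipeline work described in duty 5 and the Apache Airflow requirement above, here is a minimal sketch of a daily ETL DAG. It is illustrative only, not Olive Technologies code: the DAG name, data and transformation are hypothetical placeholders, and it assumes Apache Airflow 2.4+ (for the schedule argument of the TaskFlow API).

```python
# Hypothetical example only -- DAG name, data and logic are placeholders,
# not Olive Technologies code. Assumes Apache Airflow 2.4+.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_orders_etl():
    @task
    def extract() -> list[dict]:
        # A real pipeline would read from an API, a queue (e.g. Kafka),
        # or cloud storage (S3 / Blob Storage) here.
        return [{"order_id": 1, "amount": "19.99"},
                {"order_id": 2, "amount": "5.00"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # One simple transformation: store amounts as integer pence
        # so downstream aggregation avoids floating-point error.
        return [{**r, "amount_pence": round(float(r["amount"]) * 100)}
                for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # A real pipeline would bulk-load into a warehouse
        # (Snowflake, Redshift, Synapse) here.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


daily_orders_etl()
```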
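
Similarly, for the Snowflake bullets above, a minimal sketch of bulk-loading staged files into Snowflake using the Python connector (the snowflake-connector-python package). The account, credentials, stage and table names are all hypothetical placeholders.

```python
# Hypothetical example only -- account, credentials, stage and table names
# are placeholders. Requires the snowflake-connector-python package.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",  # placeholder account identifier
    user="ETL_USER",              # placeholder credentials
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # COPY INTO is Snowflake's bulk-load statement; @orders_stage is a
    # named stage (e.g. backed by an S3 bucket or a Blob container).
    cur.execute(
        """
        COPY INTO raw.orders
        FROM @orders_stage/2024-01-01/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        """
    )
    for row in cur:  # one result row per loaded file
        print(row)
finally:
    conn.close()
```

In practice a load like this would usually be automated via Snowpipe or scheduled from an orchestrator such as Airflow rather than run ad hoc.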