AWS Data Engineer

• Candidates should have experience with some (not necessarily all) of the following: AWS Step Functions, S3, Glue, EMR, Redshift, DynamoDB, Aurora, Athena, and big data on AWS.

• Primary Skills: AWS data services, Hadoop, PySpark

• Secondary Skills: Glue, AWS Data Pipeline, Databricks, Virtual Machines

• Role Description: Design and develop solutions on AWS Cloud for data lakes and data integration using PaaS services such as Glue and AWS Data Pipeline, as well as the Hadoop/PySpark platform.


Required Skills:

• 7+ years of work experience with ETL, business intelligence, and AWS data architectures.

• 3+ years of hands-on Spark/Scala/Python development experience.

• Experience developing and managing data warehouses on a terabyte or petabyte scale.

• Experience with core competencies in Data Structures, Rest/SOAP APIs, JSON, etc.

• Strong experience with massively parallel processing (MPP) and columnar databases.

• Expert-level SQL skills.

• Deep understanding of advanced data warehousing concepts and track record of applying these concepts on the job.

• Experience with common software engineering tools (e.g., Git, JIRA, Confluence, or similar)

• Ability to manage numerous requests concurrently and strategically, prioritizing when necessary.

• Good communication and presentation skills.

• Dynamic team player.

How to Apply:

Apply online at

Location: Lafayette, LA
Date Posted: February 12, 2021
Application Deadline: March 12, 2021