Amazon's eCommerce Foundation (eCF) organization is responsible for the core components that drive the Amazon website and customer experience. Serving millions of customer page views and orders per day, eCF builds for scale. As an organization within eCF, the Business Data Technologies (BDT) group is no exception. We collect petabytes of data from thousands of data sources inside and outside Amazon, including the Amazon catalog system, inventory system, customer order system, page views on the website, and Alexa systems. We also support Amazon subsidiaries such as IMDb and Audible. We provide interfaces for our internal customers to access and query the data hundreds of thousands of times per day, using Amazon Web Services (AWS) Redshift, Hive, Spark, Scala, and Flink. We build scalable solutions that grow with the Amazon business.

BDT is growing, and the data processing landscape is shifting. Our data is consumed by thousands of teams across Amazon, including Research Scientists, Machine Learning Specialists, Business Analysts, and Data Engineers. The BDT team is building an enterprise-wide Data Lake leveraging AWS technologies. We enable teams at Amazon to produce analytical data in any form of storage (S3, DynamoDB, Aurora, etc.) and process that data in any type of compute environment, such as EMR/Spark, Redshift, and Athena, via a common bus. We are developing innovative products, including the next generation of data catalog, data discovery engine, data transformation platform, and more, with a state-of-the-art user experience. We're looking for top engineers to build them from the ground up.

This is a hands-on position where you will do everything from designing and building extremely scalable components and cutting-edge features to formulating strategy and direction for data processing at Amazon. You will also mentor junior engineers and work with the most sophisticated customers in the business to help them get the best results. You need to be not only a top software developer with excellent programming skills, an understanding of data processing and parallelization, and a stellar record of delivery, but also someone who excels at leadership and customer obsession and has a real passion for massive-scale computing.

Your responsibilities will include:
- Keeping your finger on the pulse of the constantly evolving and growing data processing and data lake field
- Translating complex functional and technical requirements into detailed architecture and design
- Delivering systems and features with top-notch quality, on time
- Staying current on technical knowledge to keep pace with rapidly changing technology, and working with the team to bring new technologies on board

A day in the life

Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to lifelong happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.

Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship.
We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.

Come help us build for the future of the Data Lake!

BASIC QUALIFICATIONS
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience in the design or architecture of new and existing systems (design patterns, reliability, and scaling)
- Experience programming with at least one software programming language
...