Our Python Developer will have the opportunity to build scalable systems and software for commercial and government use, developing solutions that leverage modern architecture and best practices and enable significant processing at runtime. You will provide technical expertise and support in design, development, implementation, and testing.
You will also participate in and/or direct major deliverables of projects through all aspects of the software development lifecycle, including scope and work estimation, architecture and design, back-end coding, and unit testing. In addition, you will work with our Data Science team to help operationalize ML models for our v2 platform.
Your solutions should be designed for scalability, with optimized use of distributed computing frameworks such as Spark. You should also have strong familiarity and experience with leveraging the AWS ecosystem to bring in relevant tools, services, and resources that enable substantial pre-runtime processing of very large datasets, entity resolution between very large datasets, and real-time processing in a scalable, distributed computing environment.
Primary Responsibilities:
- Lead development efforts to create the v2 platform
- Participate in software programming initiatives to support innovation and enhancement, using Python and PySpark.
- Leverage the AWS ecosystem to bring in relevant tools, services, and resources to enable distributed computing and scalable processing of very large datasets.
- Problem-solve and think creatively about the big picture and solutions for our customers; proactively anticipate problems; be customer-centric in development and design; and stay open-minded about different approaches to achieving a development milestone.
- Clearly document processes, methodologies, and tools used.
Experience Required:
- B.S. in a relevant technical field
- Significant use of and experience (3-5 years) with Python, required
- Significant experience (3-5 years) with distributed computing and corresponding languages such as PySpark in the Spark ecosystem, required
- Significant experience (3-5 years) with the AWS ecosystem, including tools, services, and resources that enable scalable, distributed processing, required
- Significant experience writing complex SQL queries and analyzing data correlations, required
- Experience using software testing tools, such as JUnit
- Experience with and knowledge of Git, AWS CodeCommit, or similar version control repositories
- Knowledge of ML fundamentals and familiarity with popular ML libraries
- Ability to work independently and integrate with other team members
- Project management skills: the ability to scope out the timeline, methodology, and deliverables for development, testing, and integration into the platform
- Excellent communication skills (written and verbal)
- Well-versed in version control systems
The position is remote. The candidate must have the legal right to work in the United States.
Interview Process:
We will conduct 3 rounds of interviews.
- First Round: Culture fit and background interview with the company Founders
- Second Round: Technical Interview
- Third Round: In-person day with the Founders (we will fly the candidate to D.C. to meet the founders and team)
Job Type: Full-time
Salary: $95,000.00 - $125,000.00 per year
Benefits:
- Dental insurance
- Health insurance
- Paid time off
Application Question(s):
- Are you able to work full-time on a W2?
- If selected, are you willing/able to fly to Washington, DC for an in-person interview (all travel expenses paid)?
Experience:
- Python: 3 years (Required)
- Distributed Programming: 2 years (Required)
- AWS deployment: 2 years (Required)
- Spark: 2 years (Required)
- PySpark: 1 year (Required)
- Apache Spark: 1 year (Preferred)