No two career paths will ever look the same. At Leidos, we know the most talented and diverse IT and engineering professionals will always have a multitude of career choices; your time at Leidos will be a wise investment in your career and in yourself. We welcome your perspective and ideas, in order to foster collaboration and deliver world-class solutions.
We look for solutions that not only transform businesses, but change the world.
Our Civil business is helping to modernize and manage infrastructure, systems and controls, and cybersecurity for civilian agencies and commercial clients around the globe. With core competencies in information technology, energy and environment, complex logistics, and specialized engineering, we solve technical challenges and implement newfound efficiencies on a number of programs including those that:
- Power homes and businesses
- Guide air traffic
- Streamline tax returns
- Protect digital footprints
- Contain environmental incidents
- Heighten port security
- Enable scientific discovery
Protect yourself and your family with the benefits of working for a world-class employer.
When you join Leidos, you join a Fortune 500 company and one of Ethisphere Institute's "World's Most Ethical Companies."
Leidos... We strive to make the complex clear.
We are looking for a strong Big Data/Machine Learning Developer who can design and develop complex machine learning models, build Web APIs to consume those models, and deploy the solutions in distributed and/or cloud environments. This is a demanding, high-energy position requiring flexibility and innovative technical solutions to the challenges of processing, interpreting, and analyzing large volumes of data, including text. We are seeking individuals with a unique blend of research and operational experience who can apply machine learning models and deep learning approaches to recognition problems in complex environments given sparse or limited training data. The candidate should be able to quickly understand existing deployed machine learning models and big-data applications, and handle performance tuning, optimization, log analysis, issue resolution, and continuous improvement of current operations. The candidate should possess strong scripting skills in Java and Python in Linux and Windows environments, be able to adapt to client needs, work both independently and with other teams, guide the business team, and have strong written and oral communication and collaboration skills.
- BS or MS in Computer Science, or equivalent
- 5+ years of experience designing and building full-stack solutions using distributed computing in Python, Scala, or Java, including distributed file systems or multi-node databases
- Experience in one or more areas of machine learning / artificial intelligence, such as classification, pattern recognition, NLP, anomaly detection, recommender systems, sentiment analysis, clustering, decision trees, SVMs, topic models, and neural networks
- Deep knowledge of and experience with supervised and unsupervised learning algorithms to effectively address business problems
- Excellent understanding of common families of models, feature engineering, feature selection and other practical machine learning issues, such as overfitting
- Programming experience using Python (IPython notebooks), MATLAB, R, Scala, or Java
- Experience with distributed databases such as MongoDB, HBase, DynamoDB, Couchbase, etc., and good skills in traditional databases such as MS SQL Server with T-SQL, SSIS, and SSAS, or Oracle with PL/SQL
- Excellent communication skills for working with a wide range of technical and business users
- Demonstrated ability to build full-stack systems architected for speed and distributed computing
- Demonstrated ability to quickly learn new tools and paradigms to deploy cutting-edge solutions
- Adept at simultaneously working on multiple projects, meeting deadlines, and managing expectations
- Good knowledge of search engine technologies such as Apache Solr or Elasticsearch, with the ability to define schemas, create collections, ingest data into the search engine, and retrieve data using streaming APIs and graph queries
- Experience with or understanding of AWS SageMaker, Cloudera Data Science Workbench (CDSW), or a similar ML/DS platform
- Experience with Cloudera CDH or a similar platform, with application development skills in Morphlines, Flume, and Kafka
- Good exposure to cloud technologies such as AWS S3, Lambda functions, or Azure Data Lake
- Experience using deep learning frameworks such as TensorFlow, Torch, and Caffe
- Knowledge of parallel computing approaches, such as the use of GPU parallelization, is highly desirable
- Develop both deployment architectures and scripts for automated system deployment on AWS or on-premises systems
- Create compelling data visualizations using Tableau, Power BI, or Hue to communicate insights to a wide audience
- Application development experience with REST APIs, workflow tools (Oozie, crontabs), and data integration with Sqoop across various data formats (Parquet, Avro, JSON, etc.)