In this position, you will be responsible for analyzing and optimizing key deep learning (DL) and machine learning (ML) models, algorithms, and applications on current and next-generation Intel hardware and instruction sets. You will work as part of a team that conceives, researches, and prototypes new machine learning techniques and use cases with the goal of driving Intel growth in this space. This includes both ensuring that leading DL/ML frameworks (e.g., TensorFlow) take full advantage of features in our products and shaping next-generation products by driving technologies that ensure performance leadership on emerging DL/ML applications and use cases. Your primary role will be to resolve issues such as convergence and quantization, and to optimize new and existing models for new instruction sets such as VNNI. Ideal candidates will have a good understanding of state-of-the-art techniques in machine learning and deep learning, performance optimization, and benchmarking, along with a strong grasp of computer architecture. Candidates must also possess strong verbal and written communication skills and a demonstrated ability to work in a demanding, team-oriented environment. You are expected to maintain substantial knowledge of state-of-the-art principles and theories in machine learning, performance optimization, and computer architecture in general. You may also participate in the development of intellectual property.
- Experience with hyperparameter tuning
- PhD in CS or equivalent experience required
- Deep experience with at least one Machine/Deep Learning framework
- Experience with Linux programming and debugging
- Experience working in C/C++ and Python
- Experience with TensorFlow
- Experience with performance profiling, characterization, and optimization
- In-depth knowledge of computer micro-architecture
- Experience working with Linux threading architecture and OpenMP
- Experience with Intel(R) Math Kernel Library and BLAS
Intel Nervana, leveraging Intel's world-leading position in silicon innovation and proven history of creating the compute standards that power our world, is transforming Artificial Intelligence (AI). Harnessing silicon designed specifically for AI, end-to-end solutions spanning from the data center to the edge, and tools that enable customers to quickly deploy and scale up, Intel Nervana is inside AI and leading the next evolution of compute.
US, Arizona, Phoenix; US, Oregon, Hillsboro; US, California, San Diego