Bringing the power of deep learning to data in tables

Amazon’s TabTransformer model is now available through SageMaker JumpStart and the official release of the Keras open-source library.

In recent years, deep neural networks have been responsible for most top-performing AI systems. In particular, natural-language processing (NLP) applications are generally built atop Transformer-based language models such as BERT.

One exception to the deep-learning revolution has been applications that rely on data stored in tables, where machine learning approaches based on decision trees have tended to work better.

At Amazon Web Services, we have been working to extend Transformers from NLP to table data with TabTransformer, a novel deep tabular data modeling architecture for supervised and semi-supervised learning.


Starting today, TabTransformer is available through Amazon SageMaker JumpStart, where it can be used for both classification and regression tasks. TabTransformer can be accessed through the SageMaker JumpStart UI inside SageMaker Studio or through Python code using the SageMaker Python SDK. To get started with TabTransformer on SageMaker JumpStart, please refer to the program documentation.
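
As a rough illustration, launching a JumpStart training job from Python might look like the sketch below, which uses the SageMaker Python SDK's JumpStart estimator interface (available in recent SDK versions). The model ID, instance type, and S3 path are illustrative assumptions; look up the exact TabTransformer identifier and data format in the JumpStart documentation.

```python
# Minimal sketch of training and deploying a JumpStart tabular model with the
# SageMaker Python SDK. The model_id below is an assumed placeholder; verify
# the actual TabTransformer identifier in the JumpStart catalog.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="pytorch-tabtransformerclassification-model",  # assumption; check the catalog
    instance_type="ml.p3.2xlarge",
)

# The training channel points to tabular data in S3 (path and layout are
# illustrative; the JumpStart docs specify the expected CSV format).
estimator.fit({"training": "s3://my-bucket/tabular-data/"})

# Deploy an endpoint and send new rows to it for prediction.
predictor = estimator.deploy()
```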

We are also thrilled to see that TabTransformer has gained attention from people across industries: it has been incorporated into the official repository of Keras, a popular open-source software library for working with deep neural networks, and it has been featured in posts on Towards Data Science and Medium. We also presented a paper on the work at the ICLR 2021 Workshop on Weakly Supervised Learning.

The TabTransformer solution

TabTransformer uses Transformers to generate robust data representations — embeddings — for categorical variables, or variables that take on a finite set of discrete values, such as months of the year. Continuous variables (such as numerical values) are processed in a parallel stream.
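
To make the two streams concrete, here is a minimal PyTorch sketch of this design: each categorical column gets its own embedding table, the column embeddings are passed through a Transformer encoder to produce contextual embeddings, and the continuous features bypass the Transformer and are simply normalized before the final MLP. All names and dimensions are illustrative, not the reference implementation.

```python
# Minimal sketch of the TabTransformer forward pass (illustrative dimensions).
import torch
import torch.nn as nn

class TabTransformerSketch(nn.Module):
    def __init__(self, cat_cardinalities, num_continuous, d_model=32,
                 n_heads=8, n_layers=6, n_classes=2):
        super().__init__()
        # One embedding table per categorical column ("column embeddings").
        self.cat_embeddings = nn.ModuleList(
            [nn.Embedding(card, d_model) for card in cat_cardinalities]
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Continuous features skip the Transformer; they are only normalized.
        self.cont_norm = nn.LayerNorm(num_continuous)
        mlp_in = d_model * len(cat_cardinalities) + num_continuous
        self.mlp = nn.Sequential(
            nn.Linear(mlp_in, 4 * mlp_in), nn.ReLU(), nn.Linear(4 * mlp_in, n_classes)
        )

    def forward(self, x_cat, x_cont):
        # x_cat: (batch, n_categorical) integer codes; x_cont: (batch, n_continuous)
        tokens = torch.stack(
            [emb(x_cat[:, i]) for i, emb in enumerate(self.cat_embeddings)], dim=1
        )                                        # (batch, n_categorical, d_model)
        contextual = self.transformer(tokens)    # contextual embeddings
        flat = contextual.flatten(start_dim=1)
        return self.mlp(torch.cat([flat, self.cont_norm(x_cont)], dim=1))
```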

We exploit a successful methodology from NLP in which a model is pretrained on unlabeled data to learn a general embedding scheme and then fine-tuned on labeled data to learn a particular task. We find that this approach increases the accuracy of TabTransformer, too.

In experiments on 15 publicly available datasets, we show that TabTransformer outperforms the state-of-the-art deep-learning methods for tabular data by at least 1.0% on mean AUC, the area under the receiver-operating-characteristic curve, which plots true-positive rate against false-positive rate. We also show that it matches the performance of tree-based ensemble models.


In the semi-supervised setting, when labeled data is scarce, DNNs generally outperform decision-tree-based models, because they are better able to take advantage of unlabeled data. In our semi-supervised experiments, all of the DNNs outperformed decision trees, but with our novel unsupervised pre-training procedure, TabTransformer demonstrated an average 2.1% AUC lift over the strongest DNN benchmark.

Finally, we also demonstrate that the contextual embeddings learned from TabTransformer are highly robust against both missing and noisy data features and provide better interpretability.

Tabular data

To get a sense of the problem our method addresses, consider a table where the rows represent different samples and the columns represent both sample features (predictor variables) and the sample label (the target variable). TabTransformer takes the features of each sample as input and generates an output to best approximate the corresponding label.

In a practical industry setting, where the labels are partially available (i.e., semi-supervised learning scenarios), TabTransformer can be pre-trained on all the samples without any labels and fine-tuned on the labeled samples.

Additionally, companies usually have one large table (e.g., describing customers/products) that contains multiple target variables, and they are interested in analyzing this data in multiple ways. TabTransformer can be pre-trained on the large number of unlabeled samples once and fine-tuned multiple times for multiple target variables.
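
A minimal sketch of that pretrain-once, fine-tune-many-times pattern is below; the names, sizes, and the stand-in encoder are assumptions used only to illustrate the idea of one shared, pretrained body with a lightweight head per target variable.

```python
# Illustrative sketch: one pretrained encoder shared across several targets,
# each with its own small prediction head (all names/sizes are assumptions).
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # stand-in for a pretrained TabTransformer body

heads = nn.ModuleDict({
    "churn": nn.Linear(32, 2),            # hypothetical binary target
    "lifetime_value": nn.Linear(32, 1),   # hypothetical regression target
})

# Fine-tuning for one target updates that head (and, optionally, the shared
# encoder) on the rows labeled for that target; the expensive pretraining
# step is not repeated.
```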

The architecture of TabTransformer is shown below. In our experiments, we use standard feature-engineering techniques to transform data types such as text, zip codes, and IP addresses into either numeric or categorical features.

The architecture of TabTransformer.
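
The kind of standard feature engineering mentioned above might look like the following pandas sketch, which maps raw string columns to the integer category codes expected by embedding layers; the column names and values are illustrative.

```python
# Illustrative feature engineering: convert raw strings to categorical codes
# and keep numeric columns as continuous features (example data only).
import pandas as pd

df = pd.DataFrame({"zip": ["98109", "10001", "98109"], "age": [34, 51, 29]})
df["zip_cat"] = df["zip"].astype("category").cat.codes  # categorical feature
df["age"] = df["age"].astype(float)                     # continuous feature
```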

Pretraining procedures

We explore two different types of pre-training procedures: masked language modeling (MLM) and replaced-token detection (RTD). In MLM, for each sample, we randomly select a certain portion of features to be masked and use the embeddings of the other features to reconstruct the masked features. In RTD, for each sample, instead of masking features, we replace a random subset of them with values chosen from the same columns, and the model is trained to detect which features were replaced.
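
The two corruption schemes can be sketched as follows for a batch of integer-coded categorical features; the masking fraction, mask token, and the batch-shuffling trick used to sample replacement values from the same column are illustrative choices, not the exact training recipe.

```python
# Sketch of MLM- and RTD-style corruption on a batch of categorical codes.
# x_cat: LongTensor of shape (batch, n_columns). Details are illustrative.
import torch

def mask_features(x_cat, mask_frac=0.3, mask_token=0):
    # MLM-style: hide a random fraction of features; the model reconstructs
    # the original values from the embeddings of the remaining features.
    mask = torch.rand_like(x_cat, dtype=torch.float) < mask_frac
    corrupted = x_cat.masked_fill(mask, mask_token)
    return corrupted, mask

def replace_features(x_cat, replace_frac=0.3):
    # RTD-style: swap a random fraction of features for values drawn from the
    # same column (here, by shuffling rows within the batch); the model is
    # trained to detect which entries were replaced.
    mask = torch.rand_like(x_cat, dtype=torch.float) < replace_frac
    shuffled = x_cat[torch.randperm(x_cat.size(0))]
    corrupted = torch.where(mask, shuffled, x_cat)
    return corrupted, mask
```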

In addition to comparing TabTransformer to baseline models, we conducted a study to demonstrate the interpretability of the embeddings produced by our contextual-embedding component.

In that study, we took contextual embeddings from different layers of the Transformer and computed a t-distributed stochastic neighbor embedding (t-SNE) to visualize their similarity in function space. More precisely, after training TabTransformer, we pass the categorical features in the test data through our trained model and extract all contextual embeddings (across all columns) from a certain layer of the Transformer. The t-SNE algorithm is then used to reduce each embedding to a 2-D point in the t-SNE plot.
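
As a rough sketch of that last step, the dimensionality reduction can be done with scikit-learn's t-SNE; the array shape and variable names below are assumptions standing in for the embeddings extracted from a Transformer layer.

```python
# Sketch: reduce extracted contextual embeddings to 2-D points with t-SNE.
import numpy as np
from sklearn.manifold import TSNE

embeddings = np.random.rand(500, 32)  # placeholder for (n_points, d_model) embeddings
points_2d = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)
# points_2d[:, 0], points_2d[:, 1] are the coordinates shown in the t-SNE plot.
```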

T-SNE plots of learned embeddings for categorical features in the dataset BankMarketing. Left: The embeddings generated from the last layer of the Transformer. Center: The embeddings before being passed into the Transformer. Right: The embeddings learned by the model without the Transformer layers.

The figure above shows the 2-D visualization of embeddings from the last layer of the Transformer for the BankMarketing dataset. We can see that semantically similar classes are close to each other and form clusters (annotated by sets of labels) in the embedding space.

For example, all of the client-based features (colored markers), such as job, education level, and marital status, stay close to the center, and non-client-based features (gray markers), such as month (last contact month of the year) and day (last contact day of the week), lie outside the central area. In the bottom cluster, the embedding of having a housing loan stays close to that of having defaulted, while the embeddings of being a student, single marital status, not having a housing loan, and tertiary education level are close to each other.


The center figure is the t-SNE plot of embeddings before being passed through the Transformer (i.e., from layer 0). The right figure is the t-SNE plot of the embeddings the model produces when the Transformer layers are removed, converting it into an ordinary multilayer perceptron (MLP). In those plots, we do not observe the types of patterns seen in the left plot.

Finally, we conduct extensive experiments on 15 publicly available datasets, using both supervised and semi-supervised learning. In the supervised-learning experiment, TabTransformer matched the performance of the state-of-the-art gradient-boosted decision-tree (GBDT) model and significantly outperformed the prior DNN models TabNet and Deep VIB.

Model name                        Mean AUC (%)
TabTransformer                    82.8 ± 0.4
MLP                               81.8 ± 0.4
Gradient-boosted decision trees   82.9 ± 0.4
Sparse MLP                        81.4 ± 0.4
Logistic regression               80.4 ± 0.4
TabNet                            77.1 ± 0.5
Deep VIB                          80.5 ± 0.4

Model performance with supervised learning. The evaluation metric is mean ± standard deviation of AUC score over the 15 datasets for each model; larger is better. The top two results are gradient-boosted decision trees (82.9) and TabTransformer (82.8).

In the semi-supervised-learning experiment, we pretrain two TabTransformer models on the entire unlabeled set of training data, using the MLM and RTD methods respectively; then we fine-tune both models on labeled data.

As baselines, we use the semi-supervised-learning methods pseudo-labeling and entropy regularization to train both a TabTransformer network and an ordinary MLP. We also train a gradient-boosted-decision-tree model using pseudo-labeling and an MLP using a pretraining method called the swap-noise denoising autoencoder.
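
For readers unfamiliar with those two baselines, the sketch below shows generic forms of the pseudo-labeling and entropy-regularization objectives; the loss weighting and function names are illustrative assumptions, not the exact configurations used in our experiments.

```python
# Generic sketches of the two semi-supervised baseline losses (illustrative).
import torch
import torch.nn.functional as F

def entropy_regularization_loss(logits_labeled, labels, logits_unlabeled, lam=0.1):
    # Supervised cross-entropy plus a penalty on the entropy of predictions
    # for unlabeled rows, pushing the model toward confident decisions.
    p = F.softmax(logits_unlabeled, dim=1)
    entropy = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
    return F.cross_entropy(logits_labeled, labels) + lam * entropy

def pseudo_label_loss(logits_labeled, labels, logits_unlabeled, lam=0.1):
    # Treat the model's own most confident class on unlabeled rows as a
    # temporary ("pseudo") label and train on it alongside the true labels.
    pseudo = logits_unlabeled.argmax(dim=1).detach()
    return F.cross_entropy(logits_labeled, labels) + lam * F.cross_entropy(logits_unlabeled, pseudo)
```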

# Labeled data        50           200          500
TabTransformer-RTD    66.6 ± 0.6   70.9 ± 0.6   73.1 ± 0.6
TabTransformer-MLM    66.8 ± 0.6   71.0 ± 0.6   72.9 ± 0.6
ER-MLP                65.6 ± 0.6   69.0 ± 0.6   71.0 ± 0.6
PL-MLP                65.4 ± 0.6   68.8 ± 0.6   71.0 ± 0.6
ER-TabTransformer     62.7 ± 0.6   67.1 ± 0.6   69.3 ± 0.6
PL-TabTransformer     63.6 ± 0.6   67.3 ± 0.7   69.3 ± 0.6
DAE                   65.2 ± 0.5   68.5 ± 0.6   71.0 ± 0.6
PL-GBDT               56.5 ± 0.5   63.1 ± 0.6   66.5 ± 0.7

Semi-supervised-learning results on the six datasets with more than 30,000 unlabeled data points each, for different numbers of labeled data points. The evaluation metric is mean AUC in percentage.

# Labeled data        50           200          500
TabTransformer-RTD    78.6 ± 0.6   81.6 ± 0.5   83.4 ± 0.5
TabTransformer-MLM    78.5 ± 0.6   81.0 ± 0.6   82.4 ± 0.5
ER-MLP                79.4 ± 0.6   81.1 ± 0.6   82.3 ± 0.6
PL-MLP                79.1 ± 0.6   81.1 ± 0.6   82.0 ± 0.6
ER-TabTransformer     77.9 ± 0.6   81.2 ± 0.6   82.1 ± 0.6
PL-TabTransformer     77.8 ± 0.6   81.0 ± 0.6   82.1 ± 0.6
DAE                   78.5 ± 0.7   80.7 ± 0.6   82.2 ± 0.6
PL-GBDT               73.4 ± 0.7   78.8 ± 0.6   81.3 ± 0.6

Semi-supervised-learning results on the nine datasets with fewer than 30,000 data points each, for different numbers of labeled data points. The evaluation metric is mean AUC in percentage.

To gauge relative performance with different amounts of unlabeled data, we split the set of 15 datasets into two subsets. The first consists of the six datasets that contain more than 30,000 data points; the second includes the remaining nine datasets.

When the amount of unlabeled data is large, TabTransformer-RTD and TabTransformer-MLM significantly outperform all the other competitors. In particular, the TabTransformer-RTD/MLM improvements are at least 1.2%, 2.0%, and 2.1% on mean AUC for the scenarios of 50, 200, and 500 labeled data points, respectively. When the amount of unlabeled data is smaller, as shown in the second table, TabTransformer-RTD still outperforms most of its competitors, but by a smaller margin.

Acknowledgments: Ashish Khetan, Milan Cvitkovic, Zohar Karnin
