KDD 2023: Graph neural networks’ new frontiers

Conference general chair and Amazon Scholar Yizhou Sun on modeling long-range dependencies, improving efficiency, and new causal models.

In 2021 and 2022, when Amazon Science asked members of the program committees of the Knowledge Discovery and Data Mining Conference (KDD) to discuss the state of their field, the conversations revolved around graph neural networks.

Yizhou Sun, an associate professor of computer science at the University of California, Los Angeles; an Amazon Scholar; and general chair of the 2023 Knowledge Discovery and Data Mining Conference.

Graph learning remains the most popular topic at KDD 2023, but as Yizhou Sun, an associate professor of computer science at the University of California, Los Angeles; an Amazon Scholar; and the conference’s general chair, explains, that doesn’t mean that the field has stood still.

Graph neural networks (GNNs) are machine learning models that produce embeddings, or vector representations, of graph nodes that capture information about the nodes’ relationships to other nodes. They can be used for graph-related tasks, such as predicting edges or labeling nodes, but they can also be used for arbitrary downstream processing tasks, which simply take advantage of the information encoded in graph structure.

But within that general definition, “the implication of ‘graph neural network’ could be very different,” Sun says. “‘Graph neural network’ is a very broad term.”

For instance, Sun explains, traditional GNNs use message passing to produce embeddings. Each node in the graph is embedded, and then each node receives the embeddings of its neighboring nodes (the passed messages), which it integrates into an updated embedding. Typically, this process is performed two to three times, so that the embedding of each node captures information about its one- to three-hop neighborhood.
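For readers who want to make that message-passing loop concrete, here is a minimal NumPy sketch of the idea Sun describes. It is illustrative only: the adjacency matrix, feature matrix, and weight matrices are toy placeholders, and real GNN libraries add normalization schemes, learned parameters, and per-layer details that this omits.

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One round of message passing: each node averages its neighbors'
    embeddings (the passed messages), combines them with its own
    embedding, and applies a learned transform."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)   # node degrees
    messages = (A @ H) / deg                          # mean of neighbor embeddings
    return np.tanh((H + messages) @ W)

# Toy undirected 4-node graph with 8-dimensional node features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))

# Three rounds: each node's embedding now reflects its 3-hop neighborhood.
for W in [rng.normal(size=(8, 8)) for _ in range(3)]:
    H = message_passing_layer(A, H, W)
```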


“If I do message passing, I can only collect information from my immediate neighbors,” Sun explains. “I need to go through many, many layers to model long-range dependencies. For some specific applications, like software analysis or simulation of physical systems, long-range dependency becomes critical.

“So people asked how we can change this architecture. They were inspired by the transformer” — the attention-based neural architecture that underlies today’s large language models — “because the transformer can be considered a special case of a graph neural network, where in the input window, every token can be connected to every other token.

“If every node can communicate with every node in the graph, you can easily address this long-range-dependency issue. But there will be two limitations. One is efficiency. For some graphs, there are many millions or even billions of nodes. You cannot efficiently talk to everyone else in the graph.”

The second concern, Sun explains, is that too much long-range connectivity undermines the very point of graphical representation. Graphs are useful because they capture meaningful relationships between nodes — which means leaving out the meaningless ones. If every node in the graph communicates with every other node, the meaningful connections are diluted.
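To make the transformer-as-GNN connection concrete, the sketch below (a hedged illustration with invented names and dimensions) runs the same masked self-attention layer twice: once with an all-ones mask, the transformer's fully connected case, and once with an adjacency mask, which restricts each node to its actual neighbors.

```python
import numpy as np

def attention_layer(X, mask, Wq, Wk, Wv):
    """Self-attention restricted by a graph mask: node i attends only
    to nodes j with mask[i, j] == 1."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[1])
    scores = np.where(mask > 0, scores, -1e9)          # block non-edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # softmax over allowed neighbors
    return weights @ V

n, d = 5, 16
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

full = np.ones((n, n))                  # transformer: every node sees every node
A = (rng.random((n, n)) < 0.4) * 1.0    # GNN: only actual edges carry messages
np.fill_diagonal(A, 1)

out_transformer = attention_layer(X, full, Wq, Wk, Wv)
out_gnn = attention_layer(X, A, Wq, Wk, Wv)
```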


To combat this problem, “people try to find a way to mimic the position encoding in the text setting or the image setting,” Sun says. “In the text setting, we just turned the position into some encoding. And later, in the computer vision domain, people said, ‘Okay, let's also do that with image patches.’ So, for example, we can break each image into six-by-six patches, and the relative position of those patches can be turned into a position encoding.

“So the next question is, in the graph setting, how we can get that natural kind of relative position? There are different ways to do that, like random walk — a very simple one. And also people try to do eigendecomposition, where we utilize eigenvectors to encode the relative position of those nodes. But eigendecomposition is very time consuming, so again, it comes down to the efficiency problem.”
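As a rough sketch of the two positional-encoding ideas Sun mentions, the functions below compute random-walk return probabilities and Laplacian-eigenvector encodings for a dense adjacency matrix. The function names are invented, and production code would use sparse matrices and approximate eigensolvers, precisely because full eigendecomposition is the efficiency bottleneck she points to.

```python
import numpy as np

def random_walk_pe(A, k=4):
    """Positional encoding from random walks: for each node, the
    probability of returning to itself after 1..k steps."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    P = A / deg                                   # row-stochastic transition matrix
    pe, Pk = [], np.eye(A.shape[0])
    for _ in range(k):
        Pk = Pk @ P
        pe.append(np.diag(Pk))                    # return probabilities at this step
    return np.stack(pe, axis=1)                   # shape (num_nodes, k)

def laplacian_pe(A, k=4):
    """Positional encoding from the k smallest nontrivial eigenvectors
    of the normalized graph Laplacian."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.clip(deg, 1, None))
    L = np.eye(A.shape[0]) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
    eigvals, eigvecs = np.linalg.eigh(L)          # O(n^3): the efficiency problem Sun cites
    return eigvecs[:, 1:k + 1]                    # skip the trivial constant eigenvector
```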

Efficiency

Indeed, Sun explains, improving the efficiency of GNNs is itself an active area of research — from high-level algorithmic design down to the level of chip design.

“At the algorithm level, you might try to do some sort of sampling technique, just try to make the number of operations smaller,” she says. “Or maybe just design some more efficient algorithms to sparsify the graphs. For example, let's say we wanted to do some sort of similarity search, to keep the most similar nodes to each target node. Then people can design some smart index technology to make that part very fast.
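As an illustration of the sampling and sparsification ideas in that quote, here is a hedged sketch: the first function caps how many neighbors each node aggregates from (in the spirit of neighbor sampling, though Sun names no specific method), and the second keeps only each node's k most similar nodes, the role a fast similarity index would play at scale. All names are placeholders.

```python
import numpy as np

def sample_neighbors(A, max_neighbors=10, rng=None):
    """Keep at most `max_neighbors` randomly chosen neighbors per node, so one
    message-passing step costs O(n * max_neighbors) instead of O(edges)."""
    rng = rng or np.random.default_rng()
    A_sampled = np.zeros_like(A)
    for i in range(A.shape[0]):
        nbrs = np.flatnonzero(A[i])
        if len(nbrs) > max_neighbors:
            nbrs = rng.choice(nbrs, size=max_neighbors, replace=False)
        A_sampled[i, nbrs] = 1.0
    return A_sampled

def sparsify_by_similarity(X, k=10):
    """Build a sparse graph by keeping, for each node, only its k most similar
    nodes by cosine similarity; at scale, an approximate-nearest-neighbor
    index would make this search fast."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True).clip(min=1e-12)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)
    topk = np.argsort(-sim, axis=1)[:, :k]
    A = np.zeros_like(sim)
    rows = np.arange(sim.shape[0])[:, None]
    A[rows, topk] = 1.0
    return A
```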

“And in the inference stage, we can do knowledge distillation to distill a very complicated model, let's say a graph neural network, into a very simple graph neural network — or not necessarily a graph neural network, maybe just a very simple kind of structure, like an MLP [multilayer perceptron]. Then we can do the calculation much faster. Quantization can also be applied in the inference stage to make computation much faster.
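A rough sketch of the GNN-to-MLP distillation Sun describes appears below, using PyTorch and a soft-label (KL-divergence) objective. The teacher logits stand in for an already-trained graph neural network's predictions; nothing here corresponds to a specific published method.

```python
import torch
import torch.nn.functional as F

# Placeholder teacher outputs: logits from an already-trained GNN (assumed
# available) over N nodes and C classes.
N, C, D = 1000, 7, 64
node_features = torch.randn(N, D)
teacher_logits = torch.randn(N, C)          # stand-in for the trained GNN's predictions

# Student: a plain MLP that never sees the graph structure at inference time.
student = torch.nn.Sequential(
    torch.nn.Linear(D, 128), torch.nn.ReLU(), torch.nn.Linear(128, C)
)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

T = 2.0                                     # softening temperature
for step in range(200):
    logits = student(node_features)
    # KL divergence between softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
```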


“So that's at the algorithm level. But nowadays people go deeper. Sometimes, if you want to solve the problem, you need to go to the system level. So people say, let's see how we can design this distributed system to accelerate the training, accelerate the inference.

“For example, in some cases, memory becomes the main constraint. In this case, probably the only thing we can do is distribute the workload. Then the natural problem is how we can coordinate or synchronize the model parameters trained by each computational node. If we have to distribute the data to 10 machines, how can we coordinate those 10 machines to make sure we only have one final version?
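In the common data-parallel case, the coordination Sun describes reduces to agreeing on one set of parameters across machines, typically by averaging gradients or parameters. The snippet below only simulates that synchronization step in NumPy; a real system would use a communication library such as torch.distributed or Horovod to perform the all-reduce.

```python
import numpy as np

def synchronize(local_params):
    """Data-parallel synchronization: average the parameter copies held by
    each machine so every machine ends up with one agreed-upon model. In
    practice this averaging (an all-reduce) is applied to gradients at
    every training step."""
    return np.mean(local_params, axis=0)

num_machines, num_params = 10, 1_000
rng = np.random.default_rng(2)

# Stand-ins for the slightly different parameters each of the 10 machines
# ends up with after training on its own shard of the data.
local_params = rng.normal(size=(num_machines, num_params))

global_params = synchronize(local_params)   # the single final version
```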

“And people now go even deeper, to do acceleration on the hardware side. So software-hardware co-design is also becoming more and more popular. It requires people to really know so many different fields.

“By the way, at KDD, compared to many other machine learning conferences, real-world problems are always our top focus. In many cases, in order to solve the real-world problem, we have to talk to people with different backgrounds, because we cannot just wrap it up into the kind of ideal problems we solved when we were in high school.”

Applications

Beyond such general efforts to improve GNNs’ versatility and accuracy, however, there’s also new research on specific applications of GNN technology.

“There’s some work on how we can do causal analysis in the graph setting, meaning that the objects actually interfere with each other,” Sun explains. “This is quite different from the traditional setting: the patients in a drug study, for example, are independent from each other.


“There is also a new trend to combine deep representation learning with causal inference. For example, how can we represent the treatment you try as a continuous vector, instead of just a binary treatment? Can we make the treatment timewise continuous — meaning that it's not just a static kind of one-time treatment? If I apply the treatment 10 days later, how would the outcome compare to applying it 20 days later? Time is very important; how can we inject that time information in?

“Graphs can also be considered a good data structure to describe multiagent dynamical systems — how those objects interact with each other in a dynamic network setting. And then, how can we incorporate the generative idea into graphs? Graph generation is very useful for many fields, such as in the drug industry.

“And then there are so many applications where we can benefit from large language models [LLMs]. For example, knowledge graph reasoning. We know that LLMs hallucinate, and reasoning on KGs is very rigorous. What would be a good combination of these two?

“With GNNs, there’s always new stuff. Graphs are just a very useful data structure to model our interconnected world.”
