
A Flexible and Powerful Parameter Server for large-scale machine learning


Angel-ML/angel


(ZH-CN Version)

Angel is a high-performance distributed machine learning and graph computing platform based on the Parameter Server philosophy. Tuned for performance on Tencent-scale big data, it offers broad applicability and stability, with growing advantages in handling higher-dimensional models. Angel is jointly developed by Tencent and Peking University, balancing the high availability demanded by industry with innovation from academia.

With a model-centered core design, Angel partitions the parameters of complex models across multiple parameter-server nodes, and implements a variety of machine learning and graph algorithms using efficient model-updating interfaces and functions, together with flexible consistency models for synchronization.
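To make the partitioning idea concrete, here is a minimal conceptual sketch (not Angel's actual API; all names are hypothetical) of how a high-dimensional model vector can be split into contiguous ranges, one per parameter-server node, so each worker can route a parameter index to the node that owns it:

```java
// Conceptual sketch of parameter-server range partitioning.
// This illustrates the general technique, not Angel's real interfaces.
public class PartitionSketch {

    // Returns the PS node responsible for a given parameter index,
    // assuming equally sized contiguous partitions (ceiling division
    // so the last server may hold a slightly smaller range).
    static int serverFor(long paramIndex, long dim, int numServers) {
        long partSize = (dim + numServers - 1) / numServers;
        return (int) (paramIndex / partSize);
    }

    public static void main(String[] args) {
        long dim = 1_000_000_000L; // a high-dimensional model
        int numServers = 4;       // hypothetical PS cluster size
        long[] probes = {0L, 250_000_000L, 999_999_999L};
        for (long idx : probes) {
            System.out.println("param " + idx + " -> server "
                    + serverFor(idx, dim, numServers));
        }
    }
}
```

Range partitioning like this keeps adjacent parameters on the same node, which lets pull/push requests for a contiguous slice of the model hit a single server; hash partitioning is the usual alternative when access patterns are sparse and uniform.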

Angel is developed in Java and Scala and supports running on Yarn. Through its PS Service abstraction, it supports Spark on Angel. Support for graph computing and deep learning frameworks is under development and will be released in the future.

We welcome everyone interested in machine learning or graph computing to contribute code, create issues, or submit pull requests. Please refer to the Angel Contribution Guide for more details.

Introduction to Angel

Design

Quick Start

Deployment

Programming Guide

Algorithm

Community

FAQ

Papers

  1. PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm. WWW, 2022.
  2. Graph Attention Multi-Layer Perceptron. KDD, 2022.
  3. Node Dependent Local Smoothing for Scalable Graph Learning. NeurIPS, 2021.
  4. PSGraph: How Tencent trains extremely large-scale graphs with Spark? ICDE, 2020.
  5. DimBoost: Boosting Gradient Boosting Decision Tree to Higher Dimensions. SIGMOD, 2018.
  6. LDA*: A Robust and Large-scale Topic Modeling System. VLDB, 2017.
  7. Heterogeneity-aware Distributed Parameter Servers. SIGMOD, 2017.
  8. Angel: a new large-scale machine learning system. National Science Review (NSR), 2017.
  9. TencentBoost: A Gradient Boosting Tree System with Parameter Server. ICDE, 2017.