Worked at @XiaoMi, @unitedstack, and @4paradigm on storage (HBase), IaaS (OpenStack, Kubernetes), big data (Spark, Flink), and machine learning (TensorFlow).
Pinned
- apache/tvm (Public)
  Open deep learning compiler stack for CPU, GPU, and specialized accelerators
- 4paradigm/OpenMLDB (Public)
  OpenMLDB is an open-source database particularly designed to efficiently provide consistent data for machine-learning-driven applications.
- TensorFlow template application for deep learning
596 contributions in the last year
Contribution activity
November 2021
Created 51 commits in 1 repository
Created a pull request in 4paradigm/spark that received 3 comments
feat: upgrade openmldb to 0.3.0
Upgrade hybridse-native to 0.3.0; upgrade openmldb-batch to 0.3.0
+17 −5, 3 comments
Opened 14 other pull requests in 2 repositories
4paradigm/OpenMLDB: 12 merged
- feat: upgrade hybridse to 0.3.2 for openmldb-batch
- feat: add openmldb-native and OpenmldbCatalogService in openmldb-batch
- feat: add spark jobs in Batchjob for TaskManager
- feat: update and add more apis for TaskManager
- feat: sync job info with online system table
- feat: add job management apis in TaskManager
- feat: upgrade hybridse to 0.3.0 for openmldb-batch
- feat: refactor TaskManager and add unit tests
- feat: set hadoop version as 2.7.4 which is compatible with spark source code
- fix: add dummy class for openmldb-batchjob
- feat: add chinese docs for openmldb spark
- feat: add openmldb spark distribution docs
4paradigm/spark: 2 merged
Reviewed 32 pull requests in 2 repositories
4paradigm/OpenMLDB: 31 pull requests
- feat: support physical LoadDataPlan in spark planner
- feat: update and add more apis for TaskManager
- ci: fix hybridse workflow
- feat: add system table
- build(deps): bump guava from 27.0.1-jre to 29.0-jre in /test/integration-test/openmldb-test-java
- build(deps): bump snakeyaml from 1.17 to 1.26 in /java/openmldb-jmh
- build(deps-dev): bump junit from 4.12 to 4.13.1 in /java/openmldb-jmh
- build(deps): bump pyyaml from 5.3.1 to 5.4 in /test/integration-test/python-sdk-test
- build(deps): bump commons-io from 2.4 to 2.7 in /test/integration-test/openmldb-test-java
- feat: generate load data physical plan
- build: manage thirdparty & zetasql via cmake
- fix: multiple definition compile error on linux
- feat: sync job info with online system table
- feat: fetch latest OpenMLDB package when build docker
- feat: support spark conf in taskmanager
- feat: add unique job id for taskmanager
- feat: cherry pick from v0.3.2, add 'at' function
- feat: support having clause
- feat: add unsafe getasstring
- fix: fix slf4j pom
- feat: resolve conflict for support column default value
- style: update some cpp style from snake case to camel case
- feat: support string delimiter and quote
- fix: openmldb compilation
- refactor: define performance_sensitive in sql_cluster
- Some pull request reviews not shown.
4paradigm/zetasql: 1 pull request
Created an issue in 4paradigm/OpenMLDB that received 2 comments
Opened 28 other issues in 1 repository
4paradigm/OpenMLDB: 8 closed, 20 open
- Add Spark jobs in openmldb-batchjob for TaskManager
- Add job management APIs for TaskManager
- Support new SQL syntax for job management
- Fail to show tables which are created by Java SDK
- Fail to access all indexes when creating tables with multiple indexes
- Lack of openmldb-jdbc 0.3.0-allinone and 0.3.2-allinone
- RuntimeException when submitting local classes by SparkLauncher
- Add unit test for TaskManager by submitting local classes
- Support SELECT INTO syntax and export data with Spark
- Support online data import with Spark
- Support Spark connector to write data in online storage
- Support symbol-import without copying data
- Support physical plan of "LOAD DATA" in Spark planner
- Generate physical plan of "LOAD DATA" and "SELECT INTO" in SQL parser
- Support flexible Spark parameters for TaskManager
- Use system table to store job info
- Create job info system table when NameServer starts
- Generate internal unique job id
- Define the metadata of TaskManager job
- SparkBatchSql job register db and table from custom catalog
- Add java API for Spark driver to get custom catalog
- Add catalog API for NameServer to get table info
- Update command "CREATE TABLE" and "LOAD DATA" to set offline table meta
- Add offline table meta for table info
- Support set execute_mode in CLI
- Some issues not shown.