Highlights
- Arctic Code Vault Contributor
3,029 contributions in the last year
Contribution activity
October 2020
Created a pull request in apache/spark that received 17 comments
[SPARK-33082][SPARK-20202][BUILD][SQL][FOLLOW-UP] Remove Hive 1.2 workarounds and Hive 1.2 profile in Jenkins script
What changes were proposed in this pull request?
This PR removes the leftover of Hive 1.2 workarounds and Hive 1.2 profile in Jenkins script.
test…
- [SPARK-33091][SQL] Avoid using map instead of foreach to avoid potential side effect at callers of OrcUtils.readCatalystSchema
- [SPARK-33017][PYTHON][DOCS][FOLLOW-UP] Add getCheckpointDir into API documentation
- Debug Hadoop 3.2 profile
- [SPARK-33069][INFRA] Skip test result report if no JUnit XML files are found
- [SPARK-33051][INFRA][R] Uses setup-r to install R in GitHub Actions build
- [SPARK-33108][BUILD] Remove sbt-dependency-graph SBT plugin
- [SPARK-33105][INFRA] Change default R arch from i386 to x64 and parametrize BINPREF
- [SPARK-33102][SQL] Use stringToSeq on SQL list typed parameters
- [SPARK-33094][SQL][2.4] Make ORC format propagate Hadoop config from DS options to underlying HDFS file system
- [SPARK-33094][SQL][3.0] Make ORC format propagate Hadoop config from DS options to underlying HDFS file system
- [SPARK-33101][ML][3.0] Make LibSVM format propagate Hadoop config from DS options to underlying HDFS file system
- [SPARK-33099][K8S] Respect executor idle timeout conf in ExecutorPodsAllocator
- [SPARK-33079][TESTS] Replace the existing Maven job for Scala 2.13 in Github Actions with SBT job
- [SPARK-33101][ML] Make LibSVM format propagate Hadoop config from DS options to underlying HDFS file system
- [SPARK-32793][FOLLOW-UP] Minor corrections for PySpark annotations and SparkR
- [SPARK-32047][SQL] Add the ability to disable JDBC connection providers
- [SPARK-33094][SQL] Make ORC format propagate Hadoop config from DS options to underlying HDFS file system
- [SPARK-33089][SQL] Make Avro format propagate Hadoop config from DS options to underlying HDFS file system
- [SPARK-33082][SPARK-20202][BUILD][SQL][FOLLOW-UP] Remove Hive 1.2 workarounds and Hive 1.2 profile in Jenkins script
- [SPARK-32793][SQL] Add raise_error function and an error message parameter to assert_true
- [SPARK-32511][FOLLOW-UP][SQL][R][PYTHON] Add dropFields to SparkR and PySpark
- [SPARK-33086][PYTHON] Add static annotations for pyspark.resource
- [SPARK-21708][BUILD] Migrate build to sbt 1.x
- [SPARK-33002][PYTHON] Remove non-API annotations.
- [SPARK-32189][DOCS][PYTHON][FOLLOW-UP] Fixed broken link and typo in PySpark docs
- [SPARK-33073][PYTHON][3.0] Improve error handling on Pandas to Arrow conversion failures
- [SPARK-33067][SQL][TESTS][FOLLOWUP] Check error messages in JDBCTableCatalogSuite
- [SPARK-33073][PYTHON] Improve error handling on Pandas to Arrow conversion failures
- [SPARK-29250][test-maven][test-hadoop2.7] Upgrade to Hadoop 3.2.1 and move to shaded client
- [SPARK-33067][SQL][TESTS] Add negative checks to JDBC v2 Table Catalog tests
- Remove the schedule for the current version
- Fix the input check in groupby.
- Add python 3.9 to CI and setup.py
- Fix the input check in DataFrame.
- Fix the input checks in Index.
- Fix the input check in _Frame.to_csv.
- Fix the input check in namespace.
- Fix the input check in Series.
- Fix the input check in accessors.
- Implemented insert for Index & MultiIndex
- Fix Series.fillna with inplace=True on non-nullable column.
- Modify input check for indexers to support non-string column names.
- Implemented Series.compare
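One of the pull requests above, SPARK-33091, fixes a general pitfall worth sketching: using a lazily evaluated `map` for its side effects means those effects may never run. The same trap exists in Python, where `map()` returns a lazy iterator. A minimal sketch (the `record` callback is hypothetical, not Spark code):

```python
# Sketch of the map-vs-foreach pitfall: in Python, map() is lazy,
# so side effects inside it never run unless the result is consumed.
log = []

def record(x):
    # Hypothetical side-effecting callback (e.g. reading a schema, logging).
    log.append(x)

# BAD: map() only builds a lazy iterator; record() is never called.
map(record, [1, 2, 3])
assert log == []

# GOOD: an explicit loop (the "foreach" idiom) runs the effect eagerly.
for x in [1, 2, 3]:
    record(x)
assert log == [1, 2, 3]
```

The fix in the PR is the same idea in Scala: replace `map` with `foreach` at call sites that only want the side effect.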
Created an issue in ScaCap/action-surefire-report that received 2 comments
Make no-JUnit test file case as successful case with a warning
In my case, I dynamically skip and only run the relevant tests (and therefore don't generate JUnit XML files at all if all are skipped). In this ca…