# hdfs

Here are 746 public repositories matching this topic...
SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.
Topics: kubernetes, distributed-systems, fuse, replication, cloud-drive, s3, posix, s3-storage, hdfs, distributed-storage, distributed-file-system, erasure-coding, object-storage, blob-storage, seaweedfs, hadoop-hdfs, tiered-file-system
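Because the Filer exposes an S3-compatible API, ordinary S3 clients can talk to SeaweedFS. A minimal sketch that only constructs the request (the gateway address, bucket, and key below are hypothetical, and a real upload would also need S3 authentication):

```python
# Sketch: an S3-style PUT against a SeaweedFS S3 gateway. The endpoint,
# bucket, and object key are made up for illustration; nothing is sent.
import urllib.request

def build_put_request(endpoint: str, bucket: str, key: str, data: bytes):
    """Construct (but do not send) an S3-style PUT request."""
    url = f"{endpoint}/{bucket}/{key}"
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Content-Type", "application/octet-stream")
    return req

req = build_put_request("http://localhost:8333", "demo-bucket", "hello.txt", b"hi")
print(req.get_method(), req.full_url)
```

Sending the request (with `urllib.request.urlopen` or an S3 SDK such as boto3) is left out, since it requires a running gateway.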
JuiceFS is a distributed POSIX file system built on top of Redis and S3.
Updated May 30, 2022 - Go
Expressive analytics in Python at any scale.
Topics: mysql, python, bigquery, sqlalchemy, sql, database, spark, hadoop, arrow, clickhouse, sqlite, impala, postgresql, pandas, pyspark, hdfs, dask, pyarrow, datafusion, duckdb
Updated May 30, 2022 - Python
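The promise of "analytics at any scale" is writing one expression and running it against any backend (SQLite, BigQuery, Spark, and so on). A toy illustration of the kind of grouped-aggregation query such tools target, using only the stdlib `sqlite3` backend; the table and column names are made up:

```python
# Illustrative grouped aggregation against the stdlib sqlite3 backend;
# the same logical query could be pushed to any of the listed engines.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("a", 10.0), ("a", 5.0), ("b", 7.5)])
rows = conn.execute(
    "SELECT user, SUM(amount) AS total FROM events "
    "GROUP BY user ORDER BY total DESC").fetchall()
print(rows)  # → [('a', 15.0), ('b', 7.5)]
```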
The Universal Storage Engine
Topics: data-science, storage-engine, s3, sparse-data, scientific-computing, s3-storage, arrays, hdfs, data-analysis, dataframes, tiledb, dense-data, sparse-arrays
Updated May 30, 2022 - C++
bbaja42 commented Dec 18, 2018: Similar to how Unix `ls` works, the parameter could be `-t`.
80+ DevOps & Data CLI Tools - AWS, GCP, GCF Python Cloud Functions, Log Anonymizer, Spark, Hadoop, HBase, Hive, Impala, Linux, Docker, Spark Data Converters & Validators (Avro/Parquet/JSON/CSV/INI/XML/YAML), Travis CI, AWS CloudFormation, Elasticsearch, Solr etc.
Topics: python, linux, docker, aws, elasticsearch, devops, json, cloudformation, spark, hadoop, avro, travis-ci, solr, gcp, hbase, pyspark, hdfs, parquet, dockerhub, gcf
Updated May 26, 2022 - Python
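The data-converter tools in such a toolbox are essentially format-to-format pipelines. A toy CSV-to-JSON version using only the stdlib (the field names are illustrative, not taken from the repository):

```python
# Toy CSV -> JSON converter in the spirit of the listed data converters;
# reads CSV text, emits a JSON array of row objects.
import csv
import io
import json

def csv_to_json(text: str) -> str:
    rows = list(csv.DictReader(io.StringIO(text)))
    return json.dumps(rows)

print(csv_to_json("name,age\nalice,30\nbob,25"))
# → [{"name": "alice", "age": "30"}, {"name": "bob", "age": "25"}]
```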
Web tool for Kafka Connect
Updated Dec 14, 2021 - JavaScript
Kafka Connect HDFS connector
Updated May 26, 2022 - Java
Open issue: Migrate to goavro v2 (opened by efirs, Oct 9, 2019)
Divolte Collector
Updated Aug 16, 2021 - Java
Fundamentals of Spark with Python (using PySpark), code examples
Topics: python, machine-learning, sql, database, big-data, spark, apache-spark, hadoop, analytics, parallel-computing, distributed-computing, apache, map-reduce, pyspark, hdfs, dataframe, mlib
Updated Jul 7, 2020 - Jupyter Notebook
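The map-reduce pattern such PySpark courses build on can be sketched in plain Python, so it runs without a Spark cluster; this is the classic word count, not code from the repository:

```python
# Word count via the map-reduce pattern, in plain Python.
# "Map": turn each line into per-word counts; "reduce": merge the counts.
from collections import Counter
from functools import reduce

lines = ["to be or not to be", "to go"]
mapped = [Counter(line.split()) for line in lines]
counts = reduce(lambda a, b: a + b, mapped, Counter())
print(counts["to"])  # → 3
```

In PySpark the same shape appears as `rdd.flatMap(...).map(...).reduceByKey(...)`, distributed across partitions.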
lovechang1986 opened Mar 21, 2017
Topics: json, data-science, machine-learning, query, csv, spark, scale, avro, azure, text, svm, s3, root, hdfs, parquet, query-engine, nested, dataframes, schemaless, jsoniq
Updated May 5, 2022 - Java
DC/OS SDK is a collection of tools, libraries, and documentation for easy integration of technologies such as Kafka, Cassandra, HDFS, Spark, and TensorFlow with DC/OS.
Topics: kubernetes, elasticsearch, kafka, cassandra, tensorflow, declarative, mesos, dcos, hdfs, stateful-containers, dcos-data-services-guild
Updated May 25, 2022 - Java
ElasticCTR, the PaddlePaddle elastic-computing recommendation system, is an enterprise-grade open-source recommendation solution built on Kubernetes. It combines high-accuracy CTR models refined in Baidu's production scenarios, the large-scale distributed training capability of the open-source PaddlePaddle framework, and an industrial-grade elastic scheduling service for sparse parameters, letting users deploy a recommendation system in a Kubernetes environment with one click. It offers high performance, industrial-grade deployment, and an end-to-end experience, and as an open-source suite it supports further in-depth customization.
Updated Jul 11, 2020 - Python
HDFS compression utilities for Hadoop and Spark: tar, zip, snappy, and gzip codecs for compressing, uncompressing, and untarring files.
Updated Apr 24, 2018 - Scala
HDFS Shell is an HDFS manipulation tool built on the functions integrated in Hadoop DFS.
Updated Mar 7, 2022 - Java
jcrist commented Aug 16, 2018: Given the new key-value store event stream, it'd be nice to have something like:
$ skein kv events <application id> [options...]
where the process blocks, and logs the event stream to the console until interrupted. This would be useful for debugging, as well as demos.
Labels: good first issue (Good for newcomers)
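The proposed behaviour (block and log events until interrupted) can be sketched as follows. The event source here is a stub generator, since the real key-value event-stream API is only proposed in the issue:

```python
# Sketch of a "tail the event stream until interrupted" loop.
# `source` is any iterable of events; the real CLI would consume
# the application's key-value event stream instead of a stub.
def tail_events(source):
    seen = []
    try:
        for event in source:
            print(f"event: {event}")
            seen.append(event)
    except KeyboardInterrupt:
        pass  # a real CLI would exit cleanly on Ctrl-C here
    return seen

stub = iter([("PUT", "job/state", "running"), ("DELETE", "job/state", None)])
events = tail_events(stub)
```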
A tool for scale and performance testing of HDFS with a specific focus on the NameNode.
Topics: testing, hadoop, scale, performance-metrics, hdfs, testing-tools, performance-analysis, hdfs-dfs, performance-test, performance-testing, hadoop-filesystem, hadoop-framework, scale-up, hadoop-hdfs
Updated Nov 7, 2021 - Java
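At its core, NameNode scale testing means timing many metadata operations and summarizing the latencies. A toy harness against a stub client (the names and the stub operation are made up; the real tool drives an actual NameNode):

```python
# Toy latency harness: time repeated calls to a metadata operation
# and report median and worst-case latency. `stub_mkdir` stands in
# for a NameNode RPC such as mkdirs().
import statistics
import time

def benchmark(op, n=100):
    latencies = []
    for _ in range(n):
        t0 = time.perf_counter()
        op()
        latencies.append(time.perf_counter() - t0)
    return statistics.median(latencies), max(latencies)

fake_namespace = {}
def stub_mkdir():
    fake_namespace[len(fake_namespace)] = True

median, worst = benchmark(stub_mkdir, n=100)
print(f"median={median:.2e}s worst={worst:.2e}s")
```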
Problem description
I am getting the following error when reading a file from an S3 bucket: