streaming-data
Here are 293 public repositories matching this topic...
- Breaking change? (if so, please describe the impact and migration path for existing application instances)
What changes did you make? (Give an overview)
Added a button to the topic list which copies the topics.
Is there anything you'd like reviewers to focus on?
I had some problems with routing and testing.
Problem description
I am getting the following error when reading a file from an S3 bucket:
Invalid bucket name "xxxx:yyyy@bucket": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$" or be an ARN matching the regex "^arn:(aws).*:s3:[a-z\-0-9]+:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-]{1,63}$|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]acce
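The library producing this error isn't named above, but the message suggests that credentials embedded in the object URI ("xxxx:yyyy@bucket") are being parsed as part of the bucket name. As a general illustration of the workaround, and not a fix for any particular library, here is a sketch using the AWS SDK for Go v2 in which the credentials, bucket, and key (all placeholder values) are passed as separate fields instead of being packed into one string:

package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Supply the access key and secret explicitly instead of embedding them
	// in the object URI as "key:secret@bucket", where they end up being read
	// as part of the bucket name. The credential values are placeholders.
	cfg, err := config.LoadDefaultConfig(ctx,
		config.WithCredentialsProvider(
			credentials.NewStaticCredentialsProvider("xxxx", "yyyy", "")),
	)
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg)

	// Bucket and key are passed as separate, plain fields.
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String("bucket"),
		Key:    aws.String("path/to/file.csv"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()

	body, err := io.ReadAll(out.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(body), "bytes read")
}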
Implement progressive versions of hopping and tumbling windows (a sketch follows this list):
- Both window macro methods should gain added versions that take an additional parameter
- The parameter should represent the time interval at which intermediate results of aggregations are produced
- The parameter should be a clean divisor of the tumble size for tumbling windows and of the hop size for hopping windows
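As a rough illustration of the intent (none of these names are an existing API; the event type, function name, and parameters are invented for the sketch), here is a minimal tumbling-window aggregator in Go that emits intermediate sums every step time units in addition to the final result per window, with step required to divide the window size evenly:

package main

import "fmt"

type event struct {
	t   int
	val float64
}

// progressiveTumble sums values into tumbling windows of size window and,
// besides the final result of each window, emits intermediate sums every
// step time units. step must be a clean divisor of window.
// Illustrative only; not part of any existing library.
func progressiveTumble(events []event, window, step int) {
	if window%step != 0 {
		panic("step must be a clean divisor of the window size")
	}
	sum := 0.0
	windowStart, nextEmit := 0, step
	for _, e := range events {
		// Close every window that ends before this event arrives.
		for e.t >= windowStart+window {
			fmt.Printf("final   [%d,%d): %.1f\n", windowStart, windowStart+window, sum)
			sum = 0
			windowStart += window
			nextEmit = windowStart + step
		}
		// Emit an intermediate result for every step boundary already passed.
		for e.t >= nextEmit {
			fmt.Printf("partial [%d,%d): %.1f\n", windowStart, nextEmit, sum)
			nextEmit += step
		}
		sum += e.val
	}
	fmt.Printf("final   [%d,%d): %.1f\n", windowStart, windowStart+window, sum)
}

func main() {
	events := []event{{1, 1}, {3, 2}, {7, 4}, {12, 8}, {18, 16}}
	// Tumbling window of 10 time units with intermediate results every 5.
	progressiveTumble(events, 10, 5)
}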
Hello, I have a CSV file that has 9 features and 9 expected targets, and I want to test 2 regression models on this data (which should be generated as a stream).
When I test MultiTargetRegressionHoeffdingTree and RegressorChain on this data I get a bad R2 score, but when I normalize my data with scikit-learn first I get a pretty good R2 score. The problem is that I should not use scikit-learn.
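One streaming-friendly alternative to a batch scaler is to standardize each feature with running statistics that are updated one sample at a time. Below is a minimal sketch of the idea in Go, assuming a single feature and Welford's algorithm; it is not part of any particular streaming library:

package main

import (
	"fmt"
	"math"
)

// runningScaler standardizes one feature incrementally using Welford's
// algorithm, so no full pass over the data (as a batch scaler needs)
// is required. Illustrative sketch only.
type runningScaler struct {
	n    float64
	mean float64
	m2   float64 // sum of squared deviations from the current mean
}

// update folds a new observation into the running statistics and returns
// the value standardized with the statistics seen so far.
func (s *runningScaler) update(x float64) float64 {
	s.n++
	delta := x - s.mean
	s.mean += delta / s.n
	s.m2 += delta * (x - s.mean)
	if s.n < 2 {
		return 0 // not enough data to estimate a spread yet
	}
	std := math.Sqrt(s.m2 / (s.n - 1))
	if std == 0 {
		return 0
	}
	return (x - s.mean) / std
}

func main() {
	var s runningScaler
	for _, x := range []float64{10, 12, 9, 50, 11, 13} {
		fmt.Printf("raw=%5.1f scaled=%+.3f\n", x, s.update(x))
	}
}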
It is currently hard for users to track which versions of dependencies they are getting and which versions they should use when adding extra dependencies to their projects. This results in code like this in our own example projects:
libraryDependencies ++= Seq(
  "com.lightbend.akka" %% "akka-stream-alpakka-file" % "1.1.2",
  "com.typesafe.akka" %% "akka-http-spray-js
CASE doesn't work well with null. This works as expected and prints 'works':
WITH 2 AS name
RETURN CASE name
WHEN 2 THEN 'works'
WHEN null THEN "doesn't work"
ELSE 'something went wrong'
END
If we swap the first WHEN from 2 to 3, it should print 'something went wrong', but instead it prints "doesn't work":
WITH 2 AS name
RETURN CASE name
WHEN 3 THEN 'works'
WHEN null THEN "doesn't work"
ELSE 'something went wrong'
END
Is your feature request related to a problem? Please describe.
Today the user needs to deploy UDF jars and reference data CSVs manually to the blob location.
Describe the solution you'd like
Enable the user to choose a file on a local disk, which the web portal will then upload to the right location.
I totally forgot we have machinery for this in the "multi program" tests. We can likely reuse it for the "external ctrl-c" tests as well!
Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct. The current implementation of the csv input doesn't allow setting the LazyQuotes field. We have a use case where we need to set the LazyQuotes field in order to make things work correctly.
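For context, LazyQuotes is a boolean field on the standard library's csv.Reader that relaxes quote handling. A small standalone Go example of what the field changes (this is plain encoding/csv usage, not Benthos configuration):

package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func main() {
	// A record with a bare quote inside an unquoted field; the strict
	// reader rejects this with a "bare quote" parse error.
	data := `name,comment
alice,she said "hi" to everyone
`
	r := csv.NewReader(strings.NewReader(data))
	r.LazyQuotes = true // tolerate bare quotes inside unquoted fields
	records, err := r.ReadAll()
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	fmt.Println(records)
}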