stream-processing
Here are 763 public repositories matching this topic...
Is your feature request related to a problem? No.
It would be nice to ship a single binary, with the frontend build embedded in the query-service Go binary.
Describe the solution you'd like
We can use go.rice to embed the frontend build in query-service:
https://github.com/GeertJohan/go.rice
Additional context
It would help not only for getting started with SigNoz
Describe the bug
If you try to create a KAFKA-formatted source with a BYTES column, e.g.:
CREATE STREAM TEST (ID BYTES KEY, b BYTES) WITH (kafka_topic='test', format='DELIMITED');
the command returns:
The 'KAFKA' format does not support type 'BYTES'
This is because the BYTES type is missing [here](https://github.com/confluentinc/ksql/blob/a27e5e7501891e644196f8d164d078672e0feecd
Avoid controlling the endless loop with an exception in loadAnonymousClasses, e.g. by extracting the class loading into a method:

private boolean tryLoadClass(String innerClassName) {
    try {
        parent.loadClass(innerClassName);
    } catch (ClassNotFoundException ex) {
        return false;
    }
    return true;
}

Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct.
The current implementation of the csv input doesn't allow setting the LazyQuotes field. We have a use case where we need to set LazyQuotes for our data to parse correctly.
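For context, this is the behaviour LazyQuotes controls in the stdlib reader — a minimal sketch, not Benthos code, and the sample input is made up:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseLazy reads CSV data while tolerating bare quotes inside
// unquoted fields, which the default csv.Reader rejects with
// `bare " in non-quoted field`.
func parseLazy(data string) ([][]string, error) {
	r := csv.NewReader(strings.NewReader(data))
	r.LazyQuotes = true // the field the Benthos csv input does not currently expose
	return r.ReadAll()
}

func main() {
	// A bare quote in an unquoted field: fails with the default
	// reader settings, parses fine with LazyQuotes.
	records, err := parseLazy("id,name\n1,O\"Brien\n")
	fmt.Println(records, err)
}
```

Exposing the flag in the input's config would just forward a boolean through to this struct field.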
I have this implemented in a custom marshaler now, but I'm wondering if it makes sense to push it back upstream. When we are integrating with legacy services, we find it useful to use the correlation ID as the message ID when a message is first brought into Watermill; from there it gets sent on headers to subsequent services and works normally.
It's a simple change if it would make sense for other users.
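A sketch of the idea, using a minimal stand-in struct rather than Watermill's real message.Message type — the Message type and the fromLegacy helper here are hypothetical, for illustration only:

```go
package main

import "fmt"

// Message is a minimal stand-in for Watermill's message.Message
// (UUID plus metadata plus payload); the real type lives in
// github.com/ThreeDotsLabs/watermill/message.
type Message struct {
	UUID     string
	Metadata map[string]string
	Payload  []byte
}

// fromLegacy builds a message whose UUID is the legacy service's
// correlation ID, so downstream services see the same stable ID
// both as the message ID and in the forwarded metadata.
func fromLegacy(correlationID string, payload []byte) *Message {
	return &Message{
		UUID:     correlationID,
		Metadata: map[string]string{"correlation_id": correlationID},
		Payload:  payload,
	}
}

func main() {
	m := fromLegacy("legacy-42", []byte(`{"ok":true}`))
	fmt.Println(m.UUID) // legacy-42
}
```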
It would be really useful if there was a method that could insert a column into an existing DataFrame between two existing columns. I know about .addColumn, but that seems to place the new column at the end of the DataFrame.
For example:
df.print()
A | B
======
7 | 5
3 | 6
df.insert({ "afterColumn": "A", "newColumnName": "C", "data": [4, 1], "inplace": true })
df.print()
A | C | B
==========
7 | 4 | 5
3 | 1 | 6
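The column-order bookkeeping such a method needs can be sketched language-agnostically; here is a minimal Go version of just that part (insertAfter is a hypothetical helper, not part of any library's API):

```go
package main

import "fmt"

// insertAfter returns the column order with newCol placed
// immediately after the named column, mirroring the proposed
// insert-between-columns behaviour.
func insertAfter(cols []string, after, newCol string) []string {
	out := make([]string, 0, len(cols)+1)
	for _, c := range cols {
		out = append(out, c)
		if c == after {
			out = append(out, newCol)
		}
	}
	return out
}

func main() {
	fmt.Println(insertAfter([]string{"A", "B"}, "A", "C")) // [A C B]
}
```

A real implementation would reorder the backing column data the same way, and honor an inplace flag by either mutating the frame or returning a copy.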
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs, because none of them have timestamps.
So, to that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error prone) or c
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
.aggregate(aggregator)
.writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:
user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
I figured out a way to get the (x, y, z) data points for each frame from one hand previously, but I'm not sure how to do that for the new Holistic model they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the Holistic landmark data / print it to a text file? Or at least give me some directions as to h