DataOps
DataOps is an automated, process-oriented methodology used by analytics and data teams to improve the quality and reduce the cycle time of data analytics. While DataOps began as a set of best practices, it has matured into a distinct, independent approach to data analytics. DataOps applies to the entire data lifecycle, from data preparation to reporting, and recognizes the interconnected nature of the data analytics team and information technology operations.
Here are 108 public repositories matching this topic...
Description
The table headers in SummaryDriftReport are misaligned with the rest of the table content.
Suggestions
The table headers should be aligned with the table content.
The default RubrixLogHTTPMiddleware record mapper for token classification expects a structured input including a text field. This can make passing model inputs a bit cumbersome. The default mapper could also accept flat strings as inputs:
def token_classification_mapper(inputs, outputs):
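A minimal sketch of the flat-string handling, assuming the mapper is free to normalize its input before building a record (the returned record shape below is hypothetical, for illustration only, not the actual Rubrix API):

```python
def token_classification_mapper(inputs, outputs):
    # Accept either a flat string or the structured form with a "text" field.
    text = inputs if isinstance(inputs, str) else inputs["text"]
    # Hypothetical record shape; the real mapper would build a Rubrix
    # token classification record from the text and model outputs.
    return {"text": text, "predictions": outputs}

# Both call styles now map to the same record:
token_classification_mapper("Hello world", [])
token_classification_mapper({"text": "Hello world"}, [])
```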
We had two questions about this in #support on Slack recently.
We need to add it to the FAQ and see if there are other parts of the flow where we can better address this.
Currently, both the Kafka and Influx sinks log only the data (Row) being sent.
Add support for logging column names along with the data points, similar to the implementation in the log sink.
This will let users correlate the data points with their column names.
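A hedged sketch of what the enriched log line could look like, pairing column names with row values before logging (the helper name and data shapes here are illustrative, not the project's API):

```python
def format_row(columns, row):
    # Pair each value with its column name so readers of the sink log can
    # correlate them, e.g. "ts=..., temperature=..." instead of a bare tuple.
    return ", ".join(f"{name}={value!r}" for name, value in zip(columns, row))

columns = ["ts", "device_id", "temperature"]
row = ("2022-08-10T12:00:00Z", "sensor-7", 21.5)
print(format_row(columns, row))
```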
Zap configurations should be pushed to the gRPC middleware here: cmd/setup.go#L47
What is the feature request? What problem does it solve?
As employees leave the organization/company or users change email addresses, the notification list configured for a job eventually contains many invalid addresses. This causes issues with the SMTP relay (e.g. Postfix), which may buffer all the invalid requests until the queue is full, causing mail for all jobs to be blocked.
In the golang client, consumers get a dynamic message instance after parsing. Add an example to the docs showing how to use the dynamic message instance to get values of different types in consumer code.
List of protobuf types to cover
- timestamp
- duration
- bytes
- message type
- struct
- map
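The Go example itself belongs in the client docs; the type dispatch it needs to demonstrate can be sketched in Python, with a plain dict standing in for the parsed dynamic message (all names and the dict stand-in are illustrative assumptions, not the client's API):

```python
from datetime import datetime, timedelta

def classify_value(value):
    # In this simplified stand-in, well-known protobuf types surface as
    # host types; nested messages, structs, and maps all arrive as
    # mapping-like objects.
    if isinstance(value, datetime):
        return "timestamp"
    if isinstance(value, timedelta):
        return "duration"
    if isinstance(value, (bytes, bytearray)):
        return "bytes"
    if isinstance(value, dict):
        return "message/struct/map"
    return type(value).__name__

# Stand-in for a parsed dynamic message:
msg = {
    "created_at": datetime(2022, 8, 11, 12, 0),
    "ttl": timedelta(seconds=30),
    "payload": b"\x00\x01",
    "labels": {"env": "prod"},
}
for field, value in msg.items():
    print(f"{field}: {classify_value(value)}")
```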
Is your feature request related to a problem? Please describe.
1. Wrong log level on the sink retry notification: it should be warn or error instead of info.
2. It also does not show the unit of time, e.g.:
retrying sink in 5000000
Describe the solution you'd like
1. Change the log level on sink retry to warn or error.
2. The log should show the unit of time.
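A small sketch of both fixes, assuming the delay is tracked internally in nanoseconds (the unit is a guess based on the bare "5000000" above, and the function names are illustrative):

```python
import logging

log = logging.getLogger("sink")

def format_retry_message(delay_ns):
    # Make the unit explicit instead of logging a bare number like "5000000".
    return f"retrying sink in {delay_ns / 1e6:.1f} ms"

def notify_retry(delay_ns):
    # warn, not info: a retry signals something went wrong downstream.
    log.warning(format_retry_message(delay_ns))

notify_retry(5_000_000)
```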
Already searched before asking?
- I had searched in the issues and found no similar issues.
Scaleph Version or Branch
dev
What happened
An nginx config error causes HTTP 401 responses; see flowerfine/scaleph#147
When there are many connectors, it is inconvenient for users to locate a failed task because the list cannot be filtered by task status.
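A minimal sketch of the requested filter, assuming each task row carries a status field (the field names and status values are hypothetical):

```python
def filter_by_status(tasks, status):
    # Keep only the tasks in the given state, e.g. "FAILED",
    # so users can jump straight to the broken connectors.
    return [t for t in tasks if t.get("status") == status]

tasks = [
    {"connector": "mysql-cdc", "status": "RUNNING"},
    {"connector": "s3-sink", "status": "FAILED"},
    {"connector": "es-sink", "status": "FAILED"},
]
failed = filter_by_status(tasks, "FAILED")
print([t["connector"] for t in failed])  # → ['s3-sink', 'es-sink']
```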