
field type problems #18855

Open

@basking-in-the-sun2000

Description

I thought I had already reported part of these problems, but I couldn't find the post. Since the scope has grown, I'm opening a new thread. I've been having a lot of issues with how InfluxDB handles field types.

  1. (Inserting a number with or without a fractional part into a preexisting field)
    For instance, if the field is a float but the added record's value has no fractional part (just an integer), InfluxDB's solution is to create a new field with type integer. The same happens the other way around: adding a float to an integer field creates a float version of the field.
name          tags  fieldKey    fieldType
Huawei_daily        Adj         float
Huawei_daily        Adj         integer
Huawei_daily        Insulation  float
Huawei_daily        P_Exp       float
Huawei_daily        P_Grid      float
Huawei_daily        P_Grid      integer

This leads to a lot of issues, since future updates or inserts then fail because the field types don't match (error: "partial write: field type conflict").
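One way to avoid ever creating the duplicate fields in the first place (a sketch of my own, not something from this report): when building line protocol by hand, a bare number is parsed by InfluxDB as a float and a trailing "i" marks an integer, so emitting every numeric value in one consistent float form keeps the field type stable.

```python
def fmt_field(value):
    """Format one field value for line protocol, forcing numbers to float."""
    if isinstance(value, bool):                # bool is an int subclass; check it first
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return repr(float(value))              # 46 -> "46.0", never "46i"
    return '"' + str(value).replace('"', '\\"') + '"'

def to_line(measurement, fields):
    """Build a minimal line-protocol line (no tags, no timestamp)."""
    body = ",".join(f"{key}={fmt_field(val)}" for key, val in fields.items())
    return f"{measurement} {body}"

print(to_line("Huawei_daily", {"Adj": 46, "P_Grid": 46.3}))
# Huawei_daily Adj=46.0,P_Grid=46.3
```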

  2. (INTO queries create different type values)
    The issue also goes beyond this: I found that INTO queries create float values out of integer fields, even when the same query without the INTO clause doesn't display any float values. I added ROUND, CEIL and FLOOR to try to force the value to an integer, but it didn't work; it still created a decimal value. Even when the rounding function should have produced, say, 46, it returned 46.3.

To try to isolate the issue, I even copied each field independently. It seems to happen only when the fields were integers in the source, or when the field name had quotes around it (because of the dash). In my case both conditions were true for the same fields, so I don't know which one is the problem, if either. The same problem happened whether the query was run directly or through a continuous query.

SELECT ROUND(MEAN("M_A-P")) AS "M_A-P" INTO logger_ds.autogen.Huawei FROM logger.autogen.Huawei GROUP BY time(5m)

I was finally able to get around this with a kludge: adding a small decimal value to the result, MEAN("M_A-P") + 0.00001 AS "M_A-P"
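Since the server-side ROUND kept producing floats for me, another possible workaround (my own sketch, untested against this exact setup) is to aggregate client-side and write the result back with an explicit "i" suffix, which line protocol treats as an integer:

```python
def int_field(name, value):
    """Render one field as an explicit line-protocol integer (trailing "i")."""
    # Dashes in field keys (like "M_A-P") don't need escaping in line protocol;
    # only commas, equals signs and spaces do.
    return f"{name}={int(round(value))}i"

print(int_field("M_A-P", 46.3))
# M_A-P=46i
```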

Expected behavior:

  1. Have the integer-forming functions (ROUND, CEIL or FLOOR) truly return an integer value, always. This didn't happen all the time; I got a few dozen decimal values out of a set of 30-40 thousand records.

  2. If the receiving field is a float, it should accept an integer (a value with no fractional part) without creating a new field type; just convert 46 into 46.0. Probably the same with a decimal value going into an integer field, though I can see that being trickier, since people might have exceptions to that case.

  3. If multiple types do get created for a field, adding a value of either type shouldn't cause execution to fail. I don't think another type should be created for an existing field at all, but if it must be, it shouldn't be a show stopper.

Actual behavior:
I'm getting an "error":"partial write: field type conflict" response.

Environment info:

  • System info: Raspberry 4b 4gb, Linux 4.19.118-v7l+ armv7l
  • InfluxDB version: InfluxDB v1.8.0 (git: 1.8 781490d)

Config:

[meta]
dir = "/var/lib/influxdb/meta"
retention-autocreate = true
logging-enabled = true

[data]
dir = "/var/lib/influxdb/data"
index-version = "tsi1"
wal-dir = "/var/lib/influxdb/wal"
wal-fsync-delay = "0s"
validate-keys = false
query-log-enabled = false
cache-max-memory-size = 1073741824
cache-snapshot-memory-size = 67108864
cache-snapshot-write-cold-duration = "15m0s"
compact-full-write-cold-duration = "4h0m0s"
compact-throughput = 25165824
compact-throughput-burst = 50331648
max-series-per-database = 1000000
max-values-per-tag = 100000
max-concurrent-compactions = 1
max-index-log-file-size = 1048576
series-id-set-cache-size = 100
series-file-max-concurrent-snapshot-compactions = 0
trace-logging-enabled = false
tsm-use-madv-willneed = false

[coordinator]
write-timeout = "10s"
max-concurrent-queries = 0
query-timeout = "0s"
log-queries-after = "0s"
max-select-point = 0
max-select-series = 0
max-select-buckets = 0

[retention]
enabled = true
check-interval = "30m0s"

[shard-precreation]
enabled = true
check-interval = "10m0s"
advance-period = "30m0s"

[monitor]
store-enabled = true
store-database = "_internal"
store-interval = "10s"

[subscriber]
enabled = true
http-timeout = "30s"
insecure-skip-verify = false
ca-certs = ""
write-concurrency = 40
write-buffer-size = 1000

[http]
enabled = true
bind-address = ":8086"
auth-enabled = false
log-enabled = false
suppress-write-log = false
write-tracing = false
flux-enabled = true
flux-log-enabled = false
pprof-enabled = true
pprof-auth-enabled = false
debug-pprof-enabled = false
ping-auth-enabled = false
https-enabled = false
https-certificate = "/etc/ssl/influxdb.pem"
https-private-key = ""
max-row-limit = 0
max-connection-limit = 0
shared-secret = ""
realm = "InfluxDB"
unix-socket-enabled = false
unix-socket-permissions = "0777"
bind-socket = "/var/run/influxdb.sock"
max-body-size = 25000000
access-log-path = ""
max-concurrent-write-limit = 0
max-enqueued-write-limit = 0
enqueued-write-timeout = 30000000000

[logging]
format = "auto"
level = "info"
suppress-logo = false

[[graphite]]
enabled = false
bind-address = ":2003"
database = "graphite"
retention-policy = ""
protocol = "tcp"
batch-size = 5000
batch-pending = 10
batch-timeout = "1s"
consistency-level = "one"
separator = "."
udp-read-buffer = 0

[[collectd]]
enabled = false
bind-address = ":25826"
database = "collectd"
retention-policy = ""
batch-size = 5000
batch-pending = 10
batch-timeout = "10s"
read-buffer = 0
typesdb = "/usr/share/collectd/types.db"
security-level = "none"
auth-file = "/etc/collectd/auth_file"
parse-multivalue-plugin = "split"

[[opentsdb]]
enabled = false
bind-address = ":4242"
database = "opentsdb"
retention-policy = ""
consistency-level = "one"
tls-enabled = false
certificate = "/etc/ssl/influxdb.pem"
batch-size = 1000
batch-pending = 5
batch-timeout = "1s"
log-point-errors = true

[[udp]]
enabled = false
bind-address = ":8089"
database = "udp"
retention-policy = ""
batch-size = 5000
batch-pending = 10
read-buffer = 0
batch-timeout = "1s"
precision = ""

[continuous_queries]
log-enabled = true
enabled = true
query-stats-enabled = true
run-interval = "30s"

[tls]
min-version = ""
max-version = ""

Logs:

400: {"error":"partial write: field type conflict: input field "Temp" on measurement "Huawei_daily" is type float, already exists as type integer dropped=1"}
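For anyone hitting this from a writer script, the conflicting field can be pulled out of the 1.8 error message above; a rough sketch, where the regex pattern is my guess from this one sample and not a documented format:

```python
import re

# Parse the "field type conflict" text out of a partial-write error body.
CONFLICT = re.compile(
    r'input field "(?P<field>[^"]+)" on measurement "(?P<meas>[^"]+)" '
    r'is type (?P<new>\w+), already exists as type (?P<old>\w+)'
)

msg = ('partial write: field type conflict: input field "Temp" on measurement '
       '"Huawei_daily" is type float, already exists as type integer dropped=1')
m = CONFLICT.search(msg)
print(m.group("field"), m.group("new"), m.group("old"))
# Temp float integer
```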
