
cluster

Here are 1,497 public repositories matching this topic...

jradtilbrook
jradtilbrook commented Apr 8, 2020

Sorry in advance but the issue template really does not apply at all to my issue.

Abstract

When using the none driver in a Linux environment I get issues with PVCs after restarting. This is due to the /tmp directory being a tmpfs filesystem.

Details

Many mainstream Linux distributions adopted systemd as the init system quite a while ago, and under this system, the /tmp director
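For context, on systemd distributions /tmp is commonly mounted from a stock tmp.mount unit along these lines (options vary by distro; this is a sketch, not a verbatim copy of any one distribution's unit):

```ini
# Sketch of systemd's stock tmp.mount unit
[Unit]
Description=Temporary Directory (/tmp)

[Mount]
What=tmpfs
Where=/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev
```

Anything kept under /tmp, including hostPath-backed PVC data, therefore disappears on reboot.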

Qiao-Jin
Qiao-Jin commented Apr 21, 2020

I'm new to Akka and trying to define a custom dispatcher in my code, but there are so few Akka.NET examples available, and most of them use config files.
Could I create a dispatcher in code, i.e., like the following:

public Dispatcher myDispatcher = new Dispatcher(new DispatcherConfigurator(ConfigurationFactory.ParseString(@"akka.actor.default-dispatcher { type = ""Akka.Dispatch.TaskDisp
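Not an answer from the maintainers, but for comparison, the HOCON that the config-file examples use looks roughly like this (the dispatcher name and settings below are made up for illustration):

```hocon
# Illustrative custom dispatcher definition
my-dispatcher {
  type = Dispatcher        # the standard event-based dispatcher
  throughput = 100         # messages processed per actor before yielding
}
```

An actor can then be pointed at it with `Props.Create<MyActor>().WithDispatcher("my-dispatcher")`, if I'm reading the Akka.NET docs right.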

VictoriaMetrics
korjavin
korjavin commented Apr 16, 2020

Describe the bug
vmagent with sd_k8s

To Reproduce
Start vmagent with config:

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod

on a cluster with many pods.

Please add the pod name to the error log.

It's hard to find pods in a big cluster, and from a message like this it's not clear which pod it was.

2020-04-15T22:32:32.512Z	error	Victor
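Until the pod name is in the error message itself, one workaround sketch (using the standard Prometheus-style relabeling, which vmagent also understands) is to copy the discovered pod name into a target label, so it shows up wherever target labels are reported:

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # __meta_kubernetes_pod_name is set by the pod role during discovery;
  # copying it into a plain label keeps it attached to the target.
  - source_labels: [__meta_kubernetes_pod_name]
    target_label: pod
```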
abhishekjiitr
abhishekjiitr commented Jun 23, 2019

The AWS walkthrough states the only required policies are AmazonEC2FullAccess, AutoscalingFullAccess & AmazonVPCFullAccess.
However, the AWS Walkthrough fails with the following message:

2019-06-24T02:46:13+05:30 [✖]  Error during apply of atomic reconciler, attempting clawback: AccessDenied: User: arn:aws:iam::406569211866:user/k
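Not from the walkthrough, just an observation: eksctl drives cluster creation through CloudFormation stacks, so the IAM user typically also needs CloudFormation and EKS permissions. A broad-strokes sketch of an additional policy, to be tightened for real use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EksctlExtras",
      "Effect": "Allow",
      "Action": ["cloudformation:*", "eks:*"],
      "Resource": "*"
    }
  ]
}
```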
kafka-monitor

Xinfra Monitor monitors the availability of Kafka clusters by producing synthetic workloads using end-to-end pipelines to obtain derived vital statistics: E2E latency, service availability & message loss rate. It reassigns partitions & triggers preferred leader elections to ensure each broker acts as leader of at least 1 partition of the monitor topic.

  • Updated Jun 11, 2020
  • Java
ialidzhikov
ialidzhikov commented May 3, 2020

I cannot currently find any docs about dependency-watchdog. It seems to be:

  1. probing the kube-apiserver and scaling the kube-controller-manager down to 0 replicas when the kube-apiserver is reachable internally but unreachable externally
  2. restarting control plane components stuck in CrashLoopBackOff once etcd is available again
prupert
prupert commented Apr 7, 2020

Describe the bug

/usr/share/icinga2/include/command-plugins.conf provides the http CheckCommand. Since version 2.3.0 there is a new option in check_http which allows for checking the correctness of a certificate and hostname match: --verify-host. Please add this to the built-in template library.

https://icinga.com/docs/icinga2/latest/doc/10-icinga-template-library/

$ /usr/
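Until the ITL gains it, a workaround sketch is to wrap the shipped command in the Icinga 2 DSL (the command name and custom-variable name below are invented for illustration):

```
object CheckCommand "http_verify_host" {
  import "http"

  // pass --verify-host when the host/service sets http_verify_host = true
  arguments += {
    "--verify-host" = {
      set_if = "$http_verify_host$"
      description = "Verify the certificate's subject matches the host name"
    }
  }
}
```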
dustinmcbride
dustinmcbride commented Sep 16, 2019

Issue
When using SingleBrowserImplementation, if Chrome gets into a state in which it cannot be restarted, the error does not bubble up, which causes a JavaScript unhandledRejection. Since there is no way to catch this, it forces consuming code into a dead end. Using Node v8.11.1.

Reproduction:
I have not found a way to put chrome into such a state that it cannot be restarted so the rep
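Until the error bubbles properly, the only process-wide hook I know of is Node's unhandledRejection event; a minimal sketch (the simulated rejection below stands in for the real restart failure):

```javascript
// Last-resort guard for rejections nothing awaits. This is a sketch of
// the consuming-code workaround, not a fix for the library itself.
let lastRejection = null;

process.on('unhandledRejection', (reason) => {
  lastRejection = reason;
  console.error('unhandled rejection:', reason && reason.message);
  // e.g. tear down and relaunch the browser pool here
});

// Simulate the failure mode: a rejection with no .catch() attached.
Promise.reject(new Error('browser could not be restarted'));
```

On Node v8 an unhandled rejection only prints a warning by default, so without such a handler the error is effectively lost.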

gclough
gclough commented Apr 17, 2020

If repmgr standby promote is executed with a failed primary, then it doesn't run:

postgres@ip-10-11-12-13[iltestdb01:5432] ~$ repmgr standby promote --siblings-follow
WARNING: unable to connect to remote host "iltest_pg1" via SSH
WARNING: unable to connect to remote host "iltest_pg5" via SSH
ERROR: 2 of 3 sibling nodes unreachable via SSH:
DETAIL:   iltest_pg1 (ID: 1)
DETAIL:   iltes
brurend
brurend commented Oct 30, 2019

Description
I still haven't been able to reproduce this crash consistently, but it has been happening every now and then according to our Crashlytics. Also, we haven't changed anything significant with our cluster implementation, so I'm not sure what could be causing it.

Crashed: NSOperationQueue 0x1c482b240 (QOS: UNSPECIFIED)
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
$s7Cluster1

geerlingguy
geerlingguy commented Dec 29, 2018

I just bought four Raspberry Pi PoE HATs, and I'm trying to incorporate them into the cluster... but there are two issues I'm running into currently:

  1. They come with female-to-female 9mm M2.5 spacers that need to be precisely this height so that contact can be made with the GPIO port and the PoE header. So I can't physically screw the Pi into the bone-style clear case using its screws and h
alexlipa91
alexlipa91 commented Sep 27, 2019

Hi there, probably a stupid question, but is there any detailed doc of what kind of content the config JSON can contain? I see you can set up a username and password for each kernel: is this authentication against the Livy server?
Is there a way to specify the address of the server?
Also, is it possible to customize the location of the config.json file?

Thanks!
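If this is the sparkmagic-style config.json (an assumption on my part), the per-kernel credentials block looks roughly like this, with url pointing at the Livy server:

```json
{
  "kernel_python_credentials": {
    "username": "",
    "password": "",
    "url": "http://localhost:8998",
    "auth": "None"
  }
}
```

So the username/password there are what gets sent to Livy, and the server address goes in url. I believe the file location can be overridden via the SPARKMAGIC_CONF_DIR / SPARKMAGIC_CONF_FILE environment variables, but double-check against the project's README.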
