K8S Swiss Staking Pool
This pool is hosted on a Kubernetes cluster to be as available as possible. The cluster is not hosted by a cloud provider but in a private server room.
How to check that the node is active?
You can simply refer to the information reported by the node on the pooltool.io website! This site lets you consult the pool statistics, and as you can see in the Height column, the K8S pool is still active and synchronized!
If the pool ever stops synchronizing, another node automatically takes over!
See the status on pooltool.io!
Or see the status on adapools.org!
K8S Pool Dashboard
Staking With Us Is Staking Intelligently
All our servers are powered by solar panels.
Let’s talk about money…
Above all, we want to participate in the Cardano blockchain. That is why we decided on a fixed cost of 20 ada per epoch (which does not even cover the electricity costs of running this node). We set the margin at 4%, which admittedly does not make us the cheapest pool, BUT we cap our total fee at 1000 ada per epoch so that you can enjoy more of your rewards! We hope these decisions will encourage you to stake with us. In any case, thank you for participating in the Cardano blockchain!
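To make the fee structure concrete, here is a rough sketch of the operator's cut for one epoch, assuming the standard Cardano model (fixed cost taken first, margin applied to the remainder); the 1000-ada cap is this pool's own self-imposed limit, not a protocol rule, and the function name is ours:

```python
def pool_operator_take(epoch_rewards, fixed_cost=20.0, margin=0.04, cap=1000.0):
    """Estimate the operator's cut of one epoch's rewards, in ada.

    Fixed cost is deducted first, then the margin applies to what
    remains; the cap is this pool's self-imposed 1000-ada limit.
    """
    if epoch_rewards <= fixed_cost:
        # Rewards smaller than the fixed cost all go to the operator.
        return epoch_rewards
    take = fixed_cost + margin * (epoch_rewards - fixed_cost)
    return min(take, cap)

# Example: 10000 ada of epoch rewards
# -> 20 + 0.04 * 9980 = roughly 419.2 ada for the operator,
#    well under the 1000-ada cap; the rest is shared with delegators.
print(pool_operator_take(10000.0))
```

With very large epoch rewards the uncapped fee would exceed 1000 ada, and the cap kicks in; everything above it stays with the delegators.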
With a Kubernetes cluster, we do not need to stop the service to update the system. Everything is handled by a transparent deployment and update mechanism.
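As a sketch of what such a zero-downtime update looks like, a Deployment with a rolling-update strategy replaces pods one at a time, so at least two replicas keep serving while the third updates (names and image below are placeholders, not our actual manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jormungandr-node          # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # at most one replica down during an update
      maxSurge: 1                 # at most one extra replica started early
  selector:
    matchLabels:
      app: jormungandr
  template:
    metadata:
      labels:
        app: jormungandr
    spec:
      containers:
        - name: jormungandr
          image: example/jormungandr:latest   # placeholder image
          ports:
            - containerPort: 3000
```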
Running without interruptions!
Our node is currently running perfectly, and no outage is expected during future updates of jormungandr or of a configuration file. The setup consists of a single-node deployment with 3 replicas. This configuration exposes all 3 nodes behind the same IP and the same port. The deployment acts as a bootstrapping cluster, which allows a node to be restarted in less than one minute! The leader node is simply a node with a series of checks verifying that it is still perfectly in sync; it restarts when necessary, but this rarely happens (currently about once every 5 days).
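The "same IP and port" part maps naturally onto a Kubernetes Service in front of the replicas, and the automatic restart onto a liveness probe. The fragment below is only an illustration of that pattern: the Service selector, port, and the `check-sync.sh` script are hypothetical, not our actual configuration:

```yaml
# Service exposing all 3 replicas behind one stable IP and port.
apiVersion: v1
kind: Service
metadata:
  name: jormungandr
spec:
  selector:
    app: jormungandr
  ports:
    - port: 3000
      targetPort: 3000
---
# Container-level liveness probe sketch: the kubelet restarts the
# container when the sync check fails several times in a row, which
# matches the "restart in under a minute" behaviour described above.
# check-sync.sh is a hypothetical script comparing the node's tip
# height against the network.
livenessProbe:
  exec:
    command: ["/bin/sh", "-c", "/scripts/check-sync.sh"]
  initialDelaySeconds: 120
  periodSeconds: 60
  failureThreshold: 3       # restart after ~3 consecutive failed checks
```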
Our current configuration is a Kubernetes cluster composed of 3 servers.