ceph 505
How To Configure Single-Node Ceph Cluster To Run Properly
Ceph is designed to be a fault-tolerant, scalable storage system, so in a production environment a cluster is expected to have at least three Ceph nodes. If you can only afford a single node for now, or if you need only a single Ceph node for testing purposes, you will run into some problems. A single-node Ceph cluster considers itself to be in a degraded state, since by default it looks for another node to replicate data to, and you will not be able to use it. This How-To shows you how to reconfigure a single Ceph node so that it becomes usable. It will work if your Ceph node has at least two OSDs available. We covered an introduction to Ceph in a previous article to help you get started.
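For illustration only, a minimal sketch of the kind of ceph.conf settings such a reconfiguration usually comes down to (the exact values are assumptions, not taken from the article):

    [global]
    # Use OSDs, not hosts, as the CRUSH failure domain, so one host can satisfy placement
    osd crush chooseleaf type = 0
    # Keep two copies, spread across the node's two (or more) OSDs
    osd pool default size = 2
    # Allow I/O to continue with a single surviving copy
    osd pool default min size = 1

With the failure domain lowered to the OSD level, placement groups can map to two OSDs on the same host and the cluster can report HEALTH_OK instead of staying degraded.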
ceph
linux
23 days ago by bmdmc
GitHub - rook/rook: Storage Orchestration for Kubernetes
Storage Orchestration for Kubernetes. Contribute to rook/rook development by creating an account on GitHub.
kubernetes
k8s
ceph
storage
storage.claims
golang
persistence
5 weeks ago by po
Rook.io
Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.
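As a rough sketch of how Rook is typically driven, here is a hypothetical minimal CephCluster resource; the field names follow Rook's CephCluster CRD, but the image tag and values are illustrative, not taken from the bookmarked page:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        image: ceph/ceph:v14.2      # illustrative Ceph image tag
      dataDirHostPath: /var/lib/rook
      mon:
        count: 3                    # three monitors for quorum
      storage:
        useAllNodes: true           # let Rook provision OSDs on every node
        useAllDevices: true         # ...and on every unused device it finds

Applying a resource like this (kubectl apply -f cluster.yaml) is what tells the Rook operator to create and manage the underlying Ceph daemons.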
rook
kubernetes
file
storage
orchestration
ceph
devops
framework
5 weeks ago by vicchow
Bug #22102: BlueStore crashed on rocksdb checksum mismatch - bluestore - Ceph
Tracking this bug, as it looks similar to mine, but it was marked wontfix as a swap bug in the kernel. Interesting, as the host does use zram.
ceph
bug
osd
rocksdb
6 weeks ago by pjjw
Installing Debian on QEMU’s 64-bit ARM “virt” board | translatedcode
How to do whole-system emulation installs of arm64 Debian.
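A hedged sketch of the kind of invocation such an install involves (file names and sizes here are placeholders, not the article's exact command line):

    # Boot the Debian arm64 installer on QEMU's emulated "virt" board
    qemu-system-aarch64 -M virt -cpu cortex-a57 -m 1024 -nographic \
      -kernel installer-vmlinuz -initrd installer-initrd.gz \
      -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
      -device virtio-blk-device,drive=hd \
      -netdev user,id=mynet -device virtio-net-device,netdev=mynet

Typically the installed kernel and initrd then have to be pulled back out of the disk image and passed via the same -kernel/-initrd options to boot the installed system.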
arm
ceph
build
linux
qemu
8 weeks ago by pjjw
Tuning for All Flash Deployments - Ceph - Ceph
Ceph Tuning and Best Practices for All Flash Intel® Xeon® Servers
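Purely as an illustration of the style of change involved, a few ceph.conf knobs that flash-focused tuning commonly touches (these values are assumptions, not the recommendations from the guide):

    [osd]
    # More shards/threads per OSD to keep fast SSD/NVMe devices busy
    osd op num shards = 8
    osd op num threads per shard = 2

    [global]
    # Larger BlueStore cache for SSD-backed OSDs (bytes)
    bluestore cache size ssd = 4294967296

Always benchmark against your own workload; values that help on one all-flash configuration can hurt on another.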
ceph
tuning
filesystem
8 weeks ago by jrisch
Rook v0.9: New Storage Backends in town! – Rook Blog
It feels like Rook has completely pivoted.
rook
cassandra
ceph
9 weeks ago by summerwind
The Hive Think Tank: Ceph + RocksDB by Sage Weil, Red Hat.
ceph
11 weeks ago by bmdmc
related tags
5.1 an ansible api archive arm arm64 authorization automatic automation aws backport backup benchmark bug build cas cassandra cauchy cephfs cerner cloud clt_news cluster cockroachdb code connect containers ctrip data database devops distrib.fs distributed docker docs documentation downtime ebs ec ec2 erasure external fail-over file filesystem filesystems fix framework freebsd fstrim gentoo gluster golang high-availability hoarding how inoreader intro iscsi juju k8s kubernetes lang:en linux linux_applications linux_development luminous lustre mac-mini marvell meltdown memory minio nfs opennebula openpolicyagent openstack operations ops orchestration osd oss outage paper performance persistence pg postgres proxmox qemu raspberrypi rbd recovery redhat rocksdb rook s3 san server ses software storage.claims storage storpool swarm sync sysadmin talks test to tools troubleshoot tuning using utilization ve virtualization vm yahoo zfs