Cache Distribution #40
Conversation
Signed-off-by: Mike Lloyd <[email protected]>
This change would allow users to either flush the cache locally to disk or to buffer it over a network connection, where it can be sent downstream.
… wrong. So I started over with a different encoder.
It turns out that to use gob encoding here, you have to have fixed-width data types, so no slices. Given the sharding design, I won't be able to use gob encoding for flushing data to a writer without rewriting the entire sharding subsystem. I found this out once I started looking at where Coveralls said the test coverage was lower, and followed a crazy rabbit hole of good and bad ideas.

Once gob was determined not to be the right solution (I spent about two hours on it), JSON seemed to be the next appropriate encoder. In order to have the proper schema for exporting the cache, I had a couple of options: I could either keep the unexported fields on the primary struct types and implement a custom marshaler, which would copy the relevant data into a new struct with exported fields, or I could export the necessary fields on the current structs. The latter seemed less invasive and indirect.

Now that this aspect is done, I'm going to work on cache importing via

This is useful for distributed cache design across multiple machines, especially if you have a cache cluster with a master -> slaves relationship for HA/failover designs.
Thanks for the PR.
I don't understand this PR, and the title is misleading. What feature do you really want to add? Can you start with documentation? That will help us review whether we all understand what it should do.
JSON seemed to be the next appropriate encoder.
JSON is the worst data format you can use. It's designed for people, not machines.
maxShardSize uint32
Config       Config
ShardMask    uint64
MaxShardSize uint32
Why did you change the visibility of these fields? It makes the PR unreadable and obscures its main purpose.
And it breaks the API.
Closing due to lack of activity. Please reopen if you want to work on the reported issues.
Adds an io.Writer interface to flush the cache, on eviction and/or manually via a new bigcache.Flush() API, to an arbitrary destination.
Tests to match.